Sample records for analytical modeling techniques

  1. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  2. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954

  3. WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL

    EPA Science Inventory

    The Wellhead Analytic Element Model (WhAEM) demonstrates a new technique for the definition of time-of-travel capture zones in relatively simple geohydrologic settings. The WhAEM package includes an analytic element model that uses superposition of (many) analytic solutions to gen...

  4. Meta-Analytic Structural Equation Modeling (MASEM): Comparison of the Multivariate Methods

    ERIC Educational Resources Information Center

    Zhang, Ying

    2011-01-01

    Meta-analytic Structural Equation Modeling (MASEM) has drawn interest from many researchers recently. In doing MASEM, researchers usually first synthesize correlation matrices across studies using meta-analysis techniques and then analyze the pooled correlation matrix using structural equation modeling techniques. Several multivariate methods of…
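
    As a concrete illustration of the first MASEM stage, the sketch below pools the off-diagonal correlations of two toy matrices with a fixed-effects Fisher-z average. The data, weights, and two-study setup are illustrative assumptions; the multivariate methods this paper compares operate on whole matrices rather than element by element.

    import numpy as np

    def pool_correlation(r_values, n_values):
        """Fixed-effects pooling of one correlation across studies via
        Fisher's z transform, inverse-variance weighted by n - 3."""
        z = np.arctanh(np.asarray(r_values, dtype=float))
        w = np.asarray(n_values, dtype=float) - 3.0
        return np.tanh((w * z).sum() / w.sum())

    # two toy study correlation matrices and their sample sizes
    R1 = np.array([[1.00, 0.30, 0.20], [0.30, 1.00, 0.40], [0.20, 0.40, 1.00]])
    R2 = np.array([[1.00, 0.25, 0.35], [0.25, 1.00, 0.45], [0.35, 0.45, 1.00]])
    ns = [120, 200]

    pooled = np.eye(3)
    for i in range(3):
        for j in range(i + 1, 3):
            pooled[i, j] = pooled[j, i] = pool_correlation([R1[i, j], R2[i, j]], ns)
    print(pooled)   # this pooled matrix would then be passed to an SEM program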

  5. An Example of a Hakomi Technique Adapted for Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Collis, Peter

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is a model of therapy that lends itself to integration with other therapy models. This paper aims to provide an example to assist others in assimilating techniques from other forms of therapy into FAP. A technique from the Hakomi Method is outlined and modified for FAP. As, on the whole, psychotherapy…

  6. Analytical and numerical techniques for predicting the interfacial stresses of wavy carbon nanotube/polymer composites

    NASA Astrophysics Data System (ADS)

    Yazdchi, K.; Salehi, M.; Shokrieh, M. M.

    2009-03-01

    By introducing a new simplified 3D representative volume element for wavy carbon nanotubes, an analytical model is developed to study the stress transfer in single-walled carbon nanotube-reinforced polymer composites. Based on the pull-out modeling technique, the effects of waviness, aspect ratio, and Poisson ratio on the axial and interfacial shear stresses are analyzed in detail. The results of the present analytical model are in good agreement with corresponding results for straight nanotubes.

  7. Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.

    PubMed

    Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen

    2015-10-01

    Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
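
    The bivalent analyte mechanism (A + B <=> AB, then AB + B <=> AB2) is commonly written as the pair of coupled nonlinear ODEs sketched below. The rate constants, surface capacity, and response scaling are assumed illustrative values, not the paper's identification signature.

    import numpy as np
    from scipy.integrate import solve_ivp

    # illustrative constants (units schematic, not fitted values)
    ka1, kd1 = 1.0e5, 1.0e-2     # A + B -> AB: association (1/M/s), dissociation (1/s)
    ka2, kd2 = 1.0e-4, 1.0e-3    # AB + B -> AB2: second-site rates (per RU/s, 1/s)
    Bmax, conc = 100.0, 50.0e-9  # ligand capacity (RU), injected analyte (M)

    def rhs(t, y):
        AB, AB2 = y
        B = Bmax - AB - 2.0 * AB2                 # free ligand remaining
        dAB = ka1 * conc * B - kd1 * AB - ka2 * AB * B + kd2 * AB2
        dAB2 = ka2 * AB * B - kd2 * AB2
        return [dAB, dAB2]

    t_eval = np.linspace(0.0, 300.0, 301)         # association phase (s)
    sol = solve_ivp(rhs, (0.0, 300.0), [0.0, 0.0], t_eval=t_eval)
    response = sol.y[0] + sol.y[1]                # SPR signal ~ total bound analyte
    print(f"response at end of injection: {response[-1]:.2f} RU")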

  8. The application of emulation techniques in the analysis of highly reliable, guidance and control computer systems

    NASA Technical Reports Server (NTRS)

    Migneault, Gerard E.

    1987-01-01

    Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty: the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.

  9. Flexible aircraft dynamic modeling for dynamic analysis and control synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1989-01-01

    The linearization and simplification of a nonlinear, literal model for flexible aircraft is highlighted. Areas of model fidelity that are critical if the model is to be used for control system synthesis are developed and several simplification techniques that can deliver the necessary model fidelity are discussed. These techniques include both numerical and analytical approaches. An analytical approach, based on first-order sensitivity theory is shown to lead not only to excellent numerical results, but also to closed-form analytical expressions for key system dynamic properties such as the pole/zero factors of the vehicle transfer-function matrix. The analytical results are expressed in terms of vehicle mass properties, vibrational characteristics, and rigid-body and aeroelastic stability derivatives, thus leading to the underlying causes for critical dynamic characteristics.

  10. Analytical methods in multivariate highway safety exposure data estimation

    DOT National Transportation Integrated Search

    1984-01-01

    Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximizati...
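
    Of the three techniques, iterative proportional fitting is the most compact to illustrate: a seed table is alternately rescaled until its margins match known row and column totals. The counts and margins below are toy values.

    import numpy as np

    def ipf(seed, row_targets, col_targets, iters=100, tol=1e-9):
        """Iterative proportional fitting: scale a seed table until its
        margins match the target row and column totals."""
        t = seed.astype(float).copy()
        for _ in range(iters):
            t *= (row_targets / t.sum(axis=1))[:, None]   # match row margins
            t *= (col_targets / t.sum(axis=0))[None, :]   # match column margins
            if np.allclose(t.sum(axis=1), row_targets, atol=tol):
                break
        return t

    # toy exposure table: observed sample counts scaled to known margins
    seed = np.array([[10.0, 5.0], [3.0, 7.0]])
    print(ipf(seed, row_targets=np.array([60.0, 40.0]),
              col_targets=np.array([55.0, 45.0])))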

  11. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  12. Trace metal speciation in natural waters: Computational vs. analytical

    USGS Publications Warehouse

    Nordstrom, D. Kirk

    1996-01-01

    Improvements in the field sampling, preservation, and determination of trace metals in natural waters have made many analyses more reliable and less affected by contamination. The speciation of trace metals, however, remains controversial. Chemical model speciation calculations do not necessarily agree with voltammetric, ion exchange, potentiometric, or other analytical speciation techniques. When metal-organic complexes are important, model calculations are not usually helpful and on-site analytical separations are essential. Many analytical speciation techniques have serious interferences and only work well for a limited subset of water types and compositions. A combined approach to the evaluation of speciation could greatly reduce these uncertainties. The approach proposed would be to (1) compare and contrast different analytical techniques with each other and with computed speciation, (2) compare computed trace metal speciation with reliable measurements of solubility, potentiometry, and mean activity coefficients, and (3) compare different model calculations with each other for the same set of water analyses, especially where supplementary data on speciation already exist. A comparison and critique of analytical with chemical model speciation for a range of water samples would delineate the useful range and limitations of these different approaches to speciation. Both model calculations and analytical determinations have useful and different constraints on the range of possible speciation such that they can provide much better insight into speciation when used together. Major discrepancies in the thermodynamic databases of speciation models can be evaluated with the aid of analytical speciation, and when the thermodynamic models are highly consistent and reliable, the sources of error in the analytical speciation can be evaluated. Major thermodynamic discrepancies also can be evaluated by simulating solubility and activity coefficient data and testing various chemical models for their range of applicability. Until a comparative approach such as this is taken, trace metal speciation will remain highly uncertain and controversial.
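
    A minimal sketch of the computational side of speciation: for a single complexation equilibrium M + L <=> ML, the free-metal concentration follows from the two mass balances and a one-dimensional root find. The constants below are illustrative; real chemical models (e.g., WATEQ-style codes) solve many coupled equilibria simultaneously.

    from scipy.optimize import brentq

    # single equilibrium M + L <=> ML with formation constant K (illustrative)
    K = 1.0e7          # stability constant (L/mol)
    M_tot = 1.0e-6     # total metal (mol/L)
    L_tot = 5.0e-6     # total ligand (mol/L)

    def mass_balance(M_free):
        # free ligand from its own mass balance, then residual on total metal
        L_free = L_tot / (1.0 + K * M_free)
        return M_free + K * M_free * L_free - M_tot

    M_free = brentq(mass_balance, 1e-20, M_tot)
    ML = M_tot - M_free
    print(f"free metal: {M_free:.3e} M, complexed: {ML:.3e} M")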

  13. Geophysical technique for mineral exploration and discrimination based on electromagnetic methods and associated systems

    DOEpatents

    Zhdanov, Michael S. [Salt Lake City, UT]

    2008-01-29

    Mineral exploration needs a reliable method to distinguish between uneconomic mineral deposits and economic mineralization. A method and system includes a geophysical technique for subsurface material characterization, mineral exploration and mineral discrimination. The technique introduced in this invention detects induced polarization effects in electromagnetic data and uses remote geophysical observations to determine the parameters of an effective conductivity relaxation model using a composite analytical multi-phase model of the rock formations. The conductivity relaxation model and analytical model can be used to determine parameters related by analytical expressions to the physical characteristics of the microstructure of the rocks and minerals. These parameters are ultimately used for the discrimination of different components in underground formations, and in this way provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization using geophysical remote sensing technology.
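
    A widely used form of conductivity relaxation for induced-polarization effects is the Cole-Cole model; the sketch below evaluates it in Pelton's complex-resistivity form with assumed parameters (chargeability m, time constant tau, exponent c). It illustrates the kind of effective relaxation model the patent describes, not necessarily its exact parameterization.

    import numpy as np

    def cole_cole_resistivity(freq, rho0, m, tau, c):
        """Complex resistivity with induced-polarization relaxation
        (Cole-Cole form, after Pelton et al.)."""
        w = 2.0 * np.pi * freq
        return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

    freqs = np.logspace(-2, 4, 7)   # Hz
    rho = cole_cole_resistivity(freqs, rho0=100.0, m=0.3, tau=0.05, c=0.7)
    for f, r in zip(freqs, rho):
        print(f"{f:10.2f} Hz  rho = {r:.4f}")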

  14. Preliminary numerical analysis of improved gas chromatograph model

    NASA Technical Reports Server (NTRS)

    Woodrow, P. T.

    1973-01-01

    A mathematical model for the gas chromatograph was developed which incorporates the heretofore neglected transport mechanisms of intraparticle diffusion and rates of adsorption. Because a closed-form analytical solution to the model does not appear realizable, techniques for the numerical solution of the model equations are being investigated. Criteria were developed for using a finite terminal boundary condition in place of an infinite boundary condition used in analytical solution techniques. The class of weighted residual methods known as orthogonal collocation is presently being investigated and appears promising.

  15. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  16. Modeling of phonon scattering in n-type nanowire transistors using one-shot analytic continuation technique

    NASA Astrophysics Data System (ADS)

    Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel

    2013-10-01

    We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect-transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still well approximates the SCBA current characteristics, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.

  17. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  18. Advancing statistical analysis of ambulatory assessment data in the study of addictive behavior: A primer on three person-oriented techniques.

    PubMed

    Foster, Katherine T; Beltz, Adriene M

    2018-08-01

    Ambulatory assessment (AA) methodologies have the potential to increase understanding and treatment of addictive behavior in seemingly unprecedented ways, due in part, to their emphasis on intensive repeated assessments of an individual's addictive behavior in context. But, many analytic techniques traditionally applied to AA data - techniques that average across people and time - do not fully leverage this potential. In an effort to take advantage of the individualized, temporal nature of AA data on addictive behavior, the current paper considers three underutilized person-oriented analytic techniques: multilevel modeling, p-technique, and group iterative multiple model estimation. After reviewing prevailing analytic techniques, each person-oriented technique is presented, AA data specifications are mentioned, an example analysis using generated data is provided, and advantages and limitations are discussed; the paper closes with a brief comparison across techniques. Increasing use of person-oriented techniques will substantially enhance inferences that can be drawn from AA data on addictive behavior and has implications for the development of individualized interventions. Copyright © 2017. Published by Elsevier Ltd.
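
    A minimal sketch of the first technique, multilevel modeling, applied to simulated AA-style data (repeated ratings nested within persons). The variable names and random-intercept specification are illustrative, using the statsmodels MixedLM interface.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # simulated AA-style data: repeated craving ratings nested in persons
    rng = np.random.default_rng(0)
    n_people, n_obs = 30, 40
    person = np.repeat(np.arange(n_people), n_obs)
    stress = rng.normal(size=n_people * n_obs)
    intercepts = rng.normal(0.0, 1.0, n_people)[person]   # person-level variation
    craving = 2.0 + 0.5 * stress + intercepts + rng.normal(size=person.size)
    df = pd.DataFrame({"person": person, "stress": stress, "craving": craving})

    # random-intercept multilevel model: within-person stress -> craving
    result = smf.mixedlm("craving ~ stress", df, groups=df["person"]).fit()
    print(result.summary())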

  19. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational cost and shorter solution times while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  20. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
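
    The deterministic-adaptation idea, an analytic model augmented with a neural network that absorbs the unmodeled residual, can be sketched as below. This substitutes an off-the-shelf scikit-learn regressor and ordinary least-squares training for the patent's scaled-equation-error technique; the data and model forms are illustrative.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 2.0, size=(500, 1))

    def analytic_model(x):
        """Known first-principles part of the process (illustrative)."""
        return 3.0 * x[:, 0]

    # "true" process = analytic part + unknown nonlinearity + noise
    y = analytic_model(x) + 0.5 * np.sin(4.0 * x[:, 0]) + 0.05 * rng.normal(size=500)

    # the neural network is trained only on the residual the analytic model misses
    residual = y - analytic_model(x)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(x, residual)

    y_hat = analytic_model(x) + net.predict(x)   # hybrid neuro-analytic prediction
    print(f"hybrid RMSE: {np.sqrt(np.mean((y - y_hat) ** 2)):.4f}")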

  1. Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.

    PubMed

    Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L

    2013-01-01

    Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.

  2. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  3. The contribution of Raman spectroscopy to the analytical quality control of cytotoxic drugs in a hospital environment: eliminating the exposure risks for staff members and their work environment.

    PubMed

    Bourget, Philippe; Amin, Alexandre; Vidal, Fabrice; Merlette, Christophe; Troude, Pénélope; Baillet-Guffroy, Arlette

    2014-08-15

    The purpose of the study was to perform a comparative analysis of the technical performance, respective costs and environmental effect of two invasive analytical methods (HPLC and UV/visible-FTIR) as compared to a new non-invasive analytical technique (Raman spectroscopy). Three pharmacotherapeutic models were used to compare the analytical performances of the three analytical techniques. Statistical inter-method correlation analysis was performed using non-parametric correlation rank tests. The study's economic component combined calculations relative to the depreciation of the equipment and the estimated cost of an AQC unit of work. In all cases, the analytical validation parameters of the three techniques were satisfactory, and strong correlations with HPLC were found for both spectroscopic techniques. In addition, Raman spectroscopy was found to be superior to the other techniques on numerous key criteria, including complete safety for operators and their occupational environment, a non-invasive procedure, no need for consumables, and a low operating cost. Finally, Raman spectroscopy appears superior for technical, economic and environmental objectives, as compared with the other, invasive analytical methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. A Meta-Analytic Investigation of Fiedler's Contingency Model of Leadership Effectiveness.

    ERIC Educational Resources Information Center

    Strube, Michael J.; Garcia, Joseph E.

    According to Fiedler's Contingency Model of Leadership Effectiveness, group performance is a function of the leader-situation interaction. A review of past validations has found several problems associated with the model. Meta-analytic techniques were applied to the Contingency Model in order to assess the validation evidence quantitatively. The…

  5. Three-Dimensional Dynamic Deformation Measurements Using Stereoscopic Imaging and Digital Speckle Photography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prentice, H. J.; Proud, W. G.

    2006-07-28

    A technique has been developed to determine experimentally the three-dimensional displacement field on the rear surface of a dynamically deforming plate. The technique combines speckle analysis with stereoscopy, using a modified angular-lens method: this incorporates split-frame photography and a simple method by which the effective lens separation can be adjusted and calibrated in situ. Whilst several analytical models exist to predict deformation in extended or semi-infinite targets, the non-trivial nature of the wave interactions complicates the generation and development of analytical models for targets of finite depth. By interrogating specimens experimentally to acquire three-dimensional strain data points, both analytical and numerical model predictions can be verified more rigorously. The technique is applied to the quasi-static deformation of a rubber sheet and dynamically to mild steel sheets of various thicknesses.

  6. Emulation applied to reliability analysis of reconfigurable, highly reliable, fault-tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.

  7. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  8. Investigation of the feasibility of an analytical method of accounting for the effects of atmospheric drag on satellite motion

    NASA Technical Reports Server (NTRS)

    Bozeman, Robert E.

    1987-01-01

    An analytic technique for accounting for the joint effects of Earth oblateness and atmospheric drag on close-Earth satellites is investigated. The technique is analytic in the sense that explicit solutions to the Lagrange planetary equations are given; consequently, no numerical integrations are required in the solution process. The atmospheric density in the technique described is represented by a rotating spherical exponential model with superposed effects of the oblate atmosphere and the diurnal variations. A computer program implementing the process is discussed and sample output is compared with output from program NSEP (Numerical Satellite Ephemeris Program). NSEP uses a numerical integration technique to account for atmospheric drag effects.

  9. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, and first-order and second-order analytical theories, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
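
    A sketch of the prediction stage: the error of a low-order analytical propagator, sampled at a fixed rate, is modeled with an additive Holt-Winters method and forecast forward, and the forecast correction is added back to the analytical solution. The toy error series and period below are assumptions.

    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(2)
    t = np.arange(200)                         # epochs (e.g., one sample per orbit)
    # toy propagator error: slow secular drift + periodic term + noise
    error = 0.002 * t + 0.5 * np.sin(2.0 * np.pi * t / 14.0) \
            + 0.05 * rng.normal(size=t.size)

    fit = ExponentialSmoothing(error[:150], trend="add",
                               seasonal="add", seasonal_periods=14).fit()
    correction = fit.forecast(50)              # predicted missing dynamics

    # hybrid state = analytical propagation + forecast correction (schematic)
    print(correction[:5])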

  10. Multilevel Modeling and Policy Development: Guidelines and Applications to Medical Travel.

    PubMed

    Garcia-Garzon, Eduardo; Zhukovsky, Peter; Haller, Elisa; Plakolm, Sara; Fink, David; Petrova, Dafina; Mahalingam, Vaishali; Menezes, Igor G; Ruggeri, Kai

    2016-01-01

    Medical travel has expanded rapidly in recent years, resulting in new markets and increased access to medical care. Whereas several studies investigated the motives of individuals seeking healthcare abroad, the conventional analytical approach is limited by substantial caveats. Classical techniques as found in the literature cannot provide sufficient insight due to the nested nature of data generated. The application of adequate analytical techniques, specifically multilevel modeling, is scarce to non-existent in the context of medical travel. This study introduces the guidelines for application of multilevel techniques in public health research by presenting an application of multilevel modeling in analyzing the decision-making patterns of potential medical travelers. Benefits and potential limitations are discussed.

  11. Multilevel Modeling and Policy Development: Guidelines and Applications to Medical Travel

    PubMed Central

    Garcia-Garzon, Eduardo; Zhukovsky, Peter; Haller, Elisa; Plakolm, Sara; Fink, David; Petrova, Dafina; Mahalingam, Vaishali; Menezes, Igor G.; Ruggeri, Kai

    2016-01-01

    Medical travel has expanded rapidly in recent years, resulting in new markets and increased access to medical care. Whereas several studies investigated the motives of individuals seeking healthcare abroad, the conventional analytical approach is limited by substantial caveats. Classical techniques as found in the literature cannot provide sufficient insight due to the nested nature of data generated. The application of adequate analytical techniques, specifically multilevel modeling, is scarce to non-existent in the context of medical travel. This study introduces the guidelines for application of multilevel techniques in public health research by presenting an application of multilevel modeling in analyzing the decision-making patterns of potential medical travelers. Benefits and potential limitations are discussed. PMID:27252672

  12. Culture-Sensitive Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Vandenberghe, L.

    2008-01-01

    Functional analytic psychotherapy (FAP) is defined as behavior-analytically conceptualized talk therapy. In contrast to the technique-oriented educational format of cognitive behavior therapy and the use of structural mediational models, FAP depends on the functional analysis of the moment-to-moment stream of interactions between client and…

  13. New techniques for the analysis of manual control systems. [mathematical models of human operator behavior

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.

    1971-01-01

    Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.

  14. Trends in analytical techniques applied to particulate matter characterization: A critical review of fundaments and applications.

    PubMed

    Galvão, Elson Silva; Santos, Jane Meri; Lima, Ana Teresa; Reis, Neyval Costa; Orlando, Marcos Tadeu D'Azeredo; Stuetz, Richard Michael

    2018-05-01

    Epidemiological studies have shown the association of airborne particulate matter (PM) size and chemical composition with health problems affecting the cardiorespiratory and central nervous systems. PM also acts as cloud condensation nuclei (CCN) or ice nuclei (IN), taking part in the cloud formation process, and can therefore impact the climate. Several works have used different analytical techniques for PM chemical and physical characterization to supply information to source apportionment models that help environmental agencies assess accountability for damages. Despite the numerous analytical techniques described in the literature for PM characterization, laboratories are normally limited to the techniques available in-house, which raises the question of whether a given technique is suitable for the purpose of a specific experimental work. The aim of this work is to summarize the main available technologies for PM characterization, serving as a guide for readers to find the most appropriate technique(s) for their investigation. Elemental analysis techniques such as atomic spectrometry based and X-ray based techniques, organic and carbonaceous techniques, and surface analysis techniques are discussed, illustrating their main features as well as their advantages and drawbacks. We also discuss the trends in analytical techniques used over the last two decades. The choice among all techniques is a function of a number of parameters, such as the relevant particle physical properties, sampling and measuring time, access to available facilities, and the costs associated with equipment acquisition, among other considerations. An analytical guide map is presented as a guideline for choosing the most appropriate technique for the analytical information required. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
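
    A sketch of the report's central idea in modern terms: fit every Prony-series constant, including the exponential time constants tau_i, in one nonlinear optimization instead of fixing the tau_i and solving a linear least-squares problem. The two-term series and synthetic data are illustrative, and scipy stands in for the commercial tool mentioned.

    import numpy as np
    from scipy.optimize import least_squares

    def prony(t, params):
        """G(t) = g_inf + sum_i g_i * exp(-t / tau_i) (two-term example)."""
        g_inf, g1, tau1, g2, tau2 = params
        return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

    # synthetic relaxation-modulus data standing in for test measurements
    t = np.logspace(-2, 3, 60)
    true = (1.0, 4.0, 0.1, 2.0, 50.0)
    data = prony(t, true) * (1.0 + 0.01 * np.random.default_rng(3).normal(size=t.size))

    # optimize ALL constants, including the exponential time constants tau_i
    fit = least_squares(lambda p: prony(t, p) - data,
                        x0=[0.5, 1.0, 1.0, 1.0, 10.0],
                        bounds=(1e-6, np.inf))
    print(fit.x)   # [g_inf, g1, tau1, g2, tau2]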

  16. Numerical and analytical modeling of the end-loaded split (ELS) test specimens made of multi-directional coupled composite laminates

    NASA Astrophysics Data System (ADS)

    Samborski, Sylwester; Valvo, Paolo S.

    2018-01-01

    The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.

  17. An analytic performance model of disk arrays and its application

    NASA Technical Reports Server (NTRS)

    Lee, Edward K.; Katz, Randy H.

    1991-01-01

    As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are preferable to other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytic performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytic model. Finally, we apply the analytic model to derive an equation for the optimal unit of data striping in disk arrays.
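
    The flavor of such closed-form expressions can be seen in the generic single-disk open-queue approximation below (utilization rho = lambda * S, mean response time S / (1 - rho)), with a harmonic-number rule of thumb for the fork-join effect. These are textbook approximations, not the paper's derived equations.

    def disk_metrics(arrival_rate, service_time):
        """Single-disk open-queue (M/M/1) approximation."""
        rho = arrival_rate * service_time          # utilization
        if rho >= 1.0:
            raise ValueError("disk saturated (rho >= 1)")
        return rho, service_time / (1.0 - rho)     # utilization, mean response

    lam, s = 50.0, 0.012                           # req/s per disk, s per access
    rho, r_disk = disk_metrics(lam, s)
    for n in (4, 8, 16):
        # fork-join: a striped request completes when all n disks finish;
        # for exponential service, E[max of n] ~ H_n times the single-disk mean
        h_n = sum(1.0 / k for k in range(1, n + 1))
        print(f"{n:2d} disks: util={rho:.2f}, stripe response ~ {h_n * r_disk * 1e3:.1f} ms")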

  18. Pavement Performance : Approaches Using Predictive Analytics

    DOT National Transportation Integrated Search

    2018-03-23

    Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...

  19. Analytical transmissibility based transfer path analysis for multi-energy-domain systems using four-pole parameter theory

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Mohammad Jalali; Behdinan, Kamran

    2017-10-01

    The increasing demand to minimize undesired vibration and noise levels in several high-tech industries has generated a renewed interest in vibration transfer path analysis. Analyzing vibration transfer paths within a system is of crucial importance in designing an effective vibration isolation strategy. Most of the existing vibration transfer path analysis techniques are empirical which are suitable for diagnosis and troubleshooting purpose. The lack of an analytical transfer path analysis to be used in the design stage is the main motivation behind this research. In this paper an analytical transfer path analysis based on the four-pole theory is proposed for multi-energy-domain systems. Bond graph modeling technique which is an effective approach to model multi-energy-domain systems is used to develop the system model. In this paper an electro-mechanical system is used as a benchmark example to elucidate the effectiveness of the proposed technique. An algorithm to obtain the equivalent four-pole representation of a dynamical systems based on the corresponding bond graph model is also presented in this paper.
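
    Four-pole (transmission-matrix) analysis chains 2x2 element matrices so that (F1, v1) = T (F2, v2); the sketch below cascades a rigid mass and a massless spring and reads off the blocked-force transmissibility. The element values are illustrative, and the paper's bond-graph-derived four-poles for multi-energy-domain systems are more general.

    import numpy as np

    def mass(m, w):
        """Four-pole (transmission) matrix of a rigid mass, force-velocity variables."""
        return np.array([[1.0, 1j * w * m], [0.0, 1.0]])

    def spring(k, w):
        """Four-pole matrix of a massless spring of stiffness k."""
        return np.array([[1.0, 0.0], [1j * w / k, 1.0]])

    # cascade: source -> mass -> isolator spring -> receiver
    w = 2.0 * np.pi * 30.0                      # rad/s at 30 Hz
    T = mass(2.0, w) @ spring(5.0e4, w)         # overall four-pole matrix
    # for a blocked output (v2 = 0), force transmissibility F2/F1 = 1/T[0, 0]
    print(abs(1.0 / T[0, 0]))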

  20. Classifying Correlation Matrices into Relatively Homogeneous Subgroups: A Cluster Analytic Approach

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Chan, Wai

    2005-01-01

    Researchers are becoming interested in combining meta-analytic techniques and structural equation modeling to test theoretical models from a pool of studies. Most existing procedures are based on the assumption that all correlation matrices are homogeneous. Few studies have addressed what the next step should be when studies being analyzed are…

  1. Model of separation performance of bilinear gradients in scanning format counter-flow gradient electrofocusing techniques.

    PubMed

    Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L

    2015-03-01

    Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte that is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity is varying with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as a bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be realized by using a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical efforts, which can be reduced significantly with the help of analytical models for design optimization and guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  3. Active Control of Inlet Noise on the JT15D Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.

    1999-01-01

    This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.
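
    The control strategy named in the report, the feedforward Filtered-X LMS algorithm, is compact enough to sketch: the reference signal is filtered through a model of the secondary path before driving the weight update. The FIR paths, step size, and signals below are illustrative stand-ins for the engine test setup.

    import numpy as np

    rng = np.random.default_rng(4)
    N = 5000
    x = rng.normal(size=N)                 # reference signal (inlet probe)
    P = np.array([0.0, 0.6, 0.3])          # primary path FIR (assumed)
    S = np.array([0.5, 0.25])              # secondary path FIR (assumed)
    d = np.convolve(x, P)[:N]              # disturbance at the error sensor
    xf = np.convolve(x, S)[:N]             # reference filtered through the
                                           # (here, perfectly known) path model
    L, mu = 8, 0.01
    w = np.zeros(L)                        # adaptive control filter weights
    x_buf = np.zeros(L)
    xf_buf = np.zeros(L)
    y_hist = np.zeros(len(S))
    e = np.zeros(N)

    for n in range(N):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = x[n]
        xf_buf = np.roll(xf_buf, 1)
        xf_buf[0] = xf[n]
        y = w @ x_buf                      # anti-noise sent to the actuators
        y_hist = np.roll(y_hist, 1)
        y_hist[0] = y
        e[n] = d[n] + S @ y_hist           # residual at the error microphone
        w -= mu * e[n] * xf_buf            # Filtered-X LMS weight update

    print(np.mean(e[:500] ** 2), np.mean(e[-500:] ** 2))  # error power drops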

  4. Data management system performance modeling

    NASA Technical Reports Server (NTRS)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc. are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
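
    For context on the static side, the classic RMA schedulability check is a one-liner: compare total utilization against the Liu-Layland bound n * (2^(1/n) - 1). The task set below is hypothetical, not an actual DMS workload.

    def rma_schedulable(tasks):
        """Liu & Layland sufficient test: total utilization of n periodic
        tasks must not exceed n * (2**(1/n) - 1) under rate-monotonic
        priority assignment."""
        u = sum(c / t for c, t in tasks)           # (compute time, period) pairs
        n = len(tasks)
        bound = n * (2.0 ** (1.0 / n) - 1.0)
        return u, bound, u <= bound

    # hypothetical task set: (worst-case execution time, period) in ms
    tasks = [(10.0, 50.0), (15.0, 100.0), (20.0, 200.0)]
    u, bound, ok = rma_schedulable(tasks)
    print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {ok}")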

  5. The Role of Teamwork in the Analysis of Big Data: A Study of Visual Analytics and Box Office Prediction.

    PubMed

    Buchanan, Verica; Lu, Yafeng; McNeese, Nathan; Steptoe, Michael; Maciejewski, Ross; Cooke, Nancy

    2017-03-01

    Historically, domains such as business intelligence would require a single analyst to engage with data, develop a model, answer operational questions, and predict future behaviors. However, as the problems and domains become more complex, organizations are employing teams of analysts to explore and model data to generate knowledge. Furthermore, given the rapid increase in data collection, organizations are struggling to develop practices for intelligence analysis in the era of big data. Currently, a variety of machine learning and data mining techniques are available to model data and to generate insights and predictions, and developments in the field of visual analytics have focused on how to effectively link data mining algorithms with interactive visuals to enable analysts to explore, understand, and interact with data and data models. Although studies have explored the role of single analysts in the visual analytics pipeline, little work has explored the role of teamwork and visual analytics in the analysis of big data. In this article, we present an experiment integrating statistical models, visual analytics techniques, and user experiments to study the role of teamwork in predictive analytics. We frame our experiment around the analysis of social media data for box office prediction problems and compare the prediction performance of teams, groups, and individuals. Our results indicate that a team's performance is mediated by the team's characteristics such as openness of individual members to others' positions and the type of planning that goes into the team's analysis. These findings have important implications for how organizations should create teams in order to make effective use of information from their analytic models.

  6. An analytical technique for predicting the characteristics of a flexible wing equipped with an active flutter-suppression system and comparison with wind-tunnel data

    NASA Technical Reports Server (NTRS)

    Abel, I.

    1979-01-01

    An analytical technique for predicting the performance of an active flutter-suppression system is presented. This technique is based on the use of an interpolating function to approximate the unsteady aerodynamics. The resulting equations are formulated in terms of linear, ordinary differential equations with constant coefficients. This technique is then applied to an aeroelastic model wing equipped with an active flutter-suppression system. Comparisons between wind-tunnel data and analysis are presented for the wing both with and without active flutter suppression. Results indicate that the wing flutter characteristics without flutter suppression can be predicted very well but that a more adequate model of wind-tunnel turbulence is required when the active flutter-suppression system is used.

  7. Meta-Analytic Methods of Pooling Correlation Matrices for Structural Equation Modeling under Different Patterns of Missing Data

    ERIC Educational Resources Information Center

    Furlow, Carolyn F.; Beretvas, S. Natasha

    2005-01-01

    Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for…

  8. Cost and schedule analytical techniques development

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This contract provided technical services and products to the Marshall Space Flight Center's Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) for the period of 3 Aug. 1991 - 30 Nov. 1994. Accomplishments summarized cover the REDSTAR data base, NASCOM hard copy data base, NASCOM automated data base, NASCOM cost model, complexity generators, program planning, schedules, NASA computer connectivity, other analytical techniques, and special project support.

  9. International Natural Gas Model 2011, Model Documentation Report

    EIA Publications

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  10. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.; Anderson, M. R.

    1985-01-01

    Optimal-control-theoretic modeling combined with frequency-domain analysis is the methodology proposed to analytically evaluate the handling qualities of higher-order manually controlled dynamic systems. Fundamental to the methodology is evaluating the interplay between pilot workload and closed-loop pilot/vehicle performance and stability robustness. The model-based metric for pilot workload is the required pilot phase compensation. Pilot/vehicle performance and loop stability are then evaluated using frequency-domain techniques. When these techniques were applied to the flight-test data for thirty-two highly-augmented fighter configurations, strong correlation was obtained between the analytical and experimental results.

  11. Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.

    PubMed

    Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U

    2015-05-01

    The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English and assessed relevant clinical endpoints, and we summarized their methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making.

  12. A comparison of force sensing techniques for planetary manipulation

    NASA Technical Reports Server (NTRS)

    Helmick, Daniel; Okon, Avi; DiCicco, Matt

    2006-01-01

    Five techniques for sensing forces with a manipulator are compared analytically and experimentally. The techniques compared are: a six-axis wrist force/torque sensor, joint torque sensors, link strain gauges, motor current sensors, and flexibility modeling. The accuracy and repeatability of each technique are quantified and compared.

  13. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.

    PubMed

    Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark

    2011-01-01

    Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations, such as when a source of uncertainty is difficult to parameterize, when resources are limited for an ideal exploration of uncertainty, or when evidence to inform the model is not available or not reliable. Techniques for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.

  14. Industrial Demand Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.

  15. Analytical Model of Large Data Transactions in CoAP Networks

    PubMed Central

    Ludovici, Alessandro; Di Marco, Piergiuseppe; Calveras, Anna; Johansson, Karl H.

    2014-01-01

    We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes, and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge, this is the first study that evaluates and compares analytically the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the possibility to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability. PMID:25153143

  16. Transportation Sector Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.

  17. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.

  18. Evaluation of support loss in micro-beam resonators: A revisit

    NASA Astrophysics Data System (ADS)

    Chen, S. Y.; Liu, J. Z.; Guo, F. L.

    2017-12-01

    This paper presents an analytical study of the evaluation of support loss in micromechanical resonators undergoing in-plane flexural vibrations. Two-dimensional elastic wave theory is used to determine the energy transmission from the vibrating resonator to the support. A Fourier transform and Green's function technique is adopted to solve the problem of wave motions on the surface of the support excited by the forces transmitted by the resonator onto the support. Analytical expressions for support loss in terms of quality factor have been derived, taking into account the distributed normal stress and shear stress in the attachment region, the coupling between normal and shear stress, and the material disparity between the support and the resonator. Effects of the geometry of micro-beam resonators and of material dissimilarity between support and resonator on support loss are examined. Numerical results show that a 'harder resonator' and 'softer support' combination leads to larger support loss. In addition, the Perfectly Matched Layer (PML) numerical simulation technique is employed to validate the proposed analytical model. Comparing quality factors with those obtained by the PML technique, we find that the present model agrees well with the PML results, whereas the pure-shear model overestimates support loss noticeably, especially for resonators with small aspect ratio and large material dissimilarity between the support and resonator.

  19. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    PubMed Central

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697

  20. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    PubMed

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
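
    The hybrid scheme lends itself to a compact illustration. The sketch below is a minimal Python example assuming a synthetic one-link plant (not either robot from the paper): the feed-forward command is the analytical rigid-body torque plus a neural-network error model trained on the analytical model's residuals. The plant, its friction terms, and all parameter values are invented for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)

        def tau_analytic(q, qd, qdd, m=2.0, l=0.5, g=9.81):
            # Rigid single-link model: tau = m l^2 qdd + m g l cos(q)
            return m * l**2 * qdd + m * g * l * np.cos(q)

        def tau_plant(q, qd, qdd):
            # "True" plant adds unmodeled viscous + Coulomb-like friction
            return tau_analytic(q, qd, qdd) + 0.8 * qd + 0.5 * np.tanh(5 * qd)

        X = rng.uniform(-2, 2, size=(1000, 3))              # (q, qd, qdd) samples
        residual = tau_plant(*X.T) - tau_analytic(*X.T)     # what physics misses
        err_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                                 random_state=0).fit(X, residual)

        # Feed-forward command = analytical torque + learned correction
        Xt = rng.uniform(-2, 2, size=(200, 3))
        tau_ff = tau_analytic(*Xt.T) + err_model.predict(Xt)
        rms = lambda e: np.sqrt(np.mean(e**2))
        print("RMS error, analytic only:", rms(tau_plant(*Xt.T) - tau_analytic(*Xt.T)))
        print("RMS error, hybrid       :", rms(tau_plant(*Xt.T) - tau_ff))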

  1. The Analytical Limits of Modeling Short Diffusion Timescales

    NASA Astrophysics Data System (ADS)

    Bradshaw, R. W.; Kent, A. J.

    2016-12-01

    Chemical and isotopic zoning in minerals is widely used to constrain the timescales of magmatic processes such as magma mixing and crystal residence via diffusion modeling. Forward modeling of diffusion relies on fitting diffusion profiles to measured compositional gradients. However, an individual measurement is essentially an average composition for a segment of the gradient defined by the spatial resolution of the analysis. Thus there is the potential for the analytical spatial resolution to limit the timescales that can be determined for an element of given diffusivity, particularly where the scale of the gradient approaches that of the measurement. Here we use a probabilistic modeling approach to investigate the effect of analytical spatial resolution on timescales estimated from diffusion modeling. Our method investigates how accurately the age of a synthetic diffusion profile can be obtained by modeling an "unknown" profile derived from discrete sampling of the synthetic compositional gradient at a given spatial resolution. We also include the effects of analytical uncertainty and the position of measurements relative to the diffusion gradient. We apply this method to the spatial resolutions of common microanalytical techniques (LA-ICP-MS, SIMS, EMP, NanoSIMS). Our results confirm that for a given diffusivity, higher spatial resolution gives access to shorter timescales, and that each analytical spacing has a minimum timescale, below which it overestimates the timescale. For example, for Ba diffusion in plagioclase at 750 °C, timescales are accurate (within 20%) above 10, 100, 2,600, and 71,000 years at 0.3, 1, 5, and 25 μm spatial resolution, respectively. For Sr diffusion in plagioclase at 750 °C, timescales are accurate above 0.02, 0.2, 4, and 120 years at the same spatial resolutions. Our results highlight the importance of selecting appropriate analytical techniques to estimate accurate diffusion-based timescales.
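
    The resolution-averaging effect described above can be reproduced with a short sketch: a synthetic error-function diffusion profile is sampled with a finite analytical spot, and a timescale is then recovered by fitting the ideal point-sampling model to the spot-averaged data. The diffusivity, spot size, and noise level below are hypothetical stand-ins, not values from the study.

        import numpy as np
        from scipy.special import erf
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        YEAR = 3.156e7  # seconds

        def profile(x, t, D):
            # Concentration across an initial step at x = 0 after time t
            return 0.5 * (1.0 - erf(x / (2.0 * np.sqrt(D * t))))

        def measured(x_pts, t, D, spot, n_sub=25):
            # Each analysis averages the true profile over a spot of finite width
            off = np.linspace(-spot / 2.0, spot / 2.0, n_sub)
            return np.array([profile(xp + off, t, D).mean() for xp in x_pts])

        D = 1e-18                     # m^2/s, hypothetical diffusivity
        t_true = 10.0 * YEAR
        spot = 5e-6                   # 5 micrometre spot, one analysis per spot width
        x = np.arange(-40e-6, 40e-6, spot)
        obs = measured(x, t_true, D, spot) + rng.normal(0, 0.01, x.size)

        # Fit the point-sampling model to spot-averaged data; the recovered
        # time becomes biased as the diffusion length nears the spot size
        (t_fit,), _ = curve_fit(lambda xx, tt: profile(xx, tt, D), x, obs, p0=[t_true])
        print(f"true: {t_true/YEAR:.1f} yr   recovered: {t_fit/YEAR:.1f} yr")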

  2. Analytical techniques for the study of some parameters of multispectral scanner systems for remote sensing

    NASA Technical Reports Server (NTRS)

    Wiswell, E. R.; Cooper, G. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The concept of average mutual information in the received spectral random process about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required identification of models for the spectral response process of scenes. Stochastic modeling techniques were adapted for use. These techniques were demonstrated on empirical data from wheat and vegetation scenes.

  3. Regarding on the prototype solutions for the nonlinear fractional-order biological population model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskonus, Haci Mehmet, E-mail: hmbaskonus@gmail.com; Bulut, Hasan

    2016-06-08

    In this study, we present a newly extended method, the Improved Bernoulli sub-equation function method, based on the Bernoulli Sub-ODE method. The steps of the proposed analytical scheme are stated explicitly. Using this technique, we have obtained some new analytical solutions to the nonlinear fractional-order biological population model. Two- and three-dimensional surfaces of the analytical solutions have been drawn with Wolfram Mathematica 9. Finally, a conclusion summarizes the important findings of this study.

  4. Methods and Techniques for Clinical Text Modeling and Analytics

    ERIC Educational Resources Information Center

    Ling, Yuan

    2017-01-01

    This study focuses on developing and applying methods/techniques in different aspects of the system for clinical text understanding, at both corpus and document level. We deal with two major research questions: First, we explore the question of "How to model the underlying relationships from clinical notes at corpus level?" Documents…

  5. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some of the well-known methods. The resultant simulations clearly demonstrate the superiority and potentiality of the proposed technique in terms of the quality performance and accuracy of substructure preservation in the construct, as well as the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.

  6. Residential Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.

  7. ULTRASONIC STUDIES OF THE FUNDAMENTAL MECHANISMS OF RECRYSTALLIZATION AND SINTERING OF METALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TURNER, JOSEPH A.

    2005-11-30

    The purpose of this project was to develop a fundamental understanding of the interaction of an ultrasonic wave with complex media, with specific emphases on recrystallization and sintering of metals. A combined analytical, numerical, and experimental research program was implemented. Theoretical models of elastic wave propagation through these complex materials were developed using stochastic wave field techniques. The numerical simulations focused on finite element wave propagation solutions through complex media. The experimental efforts were focused on corroboration of the models developed and on the development of new experimental techniques. The analytical and numerical research allows the experimental results to be interpreted quantitatively.

  8. Blade loss transient dynamics analysis, volume 1. Task 1: Survey and perspective. [aircraft gas turbine engines

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    An analytical technique was developed to predict the behavior of a rotor system subjected to sudden unbalance. The technique is implemented in the Turbine Engine Transient Rotor Analysis (TETRA) computer program using the component element method. The analysis was particularly aimed toward blade-loss phenomena in gas turbine engines. A dual-rotor, casing, and pylon structure can be modeled by the computer program. Blade tip rubs, Coriolis forces, and mechanical clearances are included. The analytical system was verified by modeling and simulating actual test conditions for a rig test as well as a full-engine, blade-release demonstration.

  9. Residential Saudi load forecasting using analytical model and Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Al-Harbi, Ahmad Abdulaziz

    In recent years, load forecasting has become one of the main fields of study and research. Short Term Load Forecasting (STLF) is an important part of electrical power system operation and planning. This work investigates the applicability of two approaches, Artificial Neural Networks (ANNs) and hybrid analytical models, to forecasting residential load in the Kingdom of Saudi Arabia (KSA). Both techniques are based on formulating models of human behavior modes, which represent the impact of social, religious, and official occasions as well as environmental parameters. The analysis is carried out on residential areas for three regions in two countries exposed to distinct human activities and weather conditions. The collected data are for Al-Khubar and Yanbu industrial city in KSA, in addition to Seattle, USA, to show the validity of the proposed models when applied to residential load. For each region, two models are proposed: the first forecasts next-hour load, while the second forecasts next-day load. Both models are analyzed using the two techniques. The ANN next-hour models yield very accurate results for all areas, while relatively reasonable results are achieved with the hybrid analytical model. For next-day load forecasting, the two approaches yield satisfactory results. Comparative studies were conducted to prove the effectiveness of the proposed models.

  10. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management

    PubMed Central

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-01-01

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to infer analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713

  11. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management.

    PubMed

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-12-15

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to infer analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.

  12. The Wide-Field Imaging Interferometry Testbed: Enabling Techniques for High Angular Resolution Astronomy

    NASA Technical Reports Server (NTRS)

    Rinehart, S. A.; Armstrong, T.; Frey, Bradley J.; Jung, J.; Kirk, J.; Leisawitz, David T.; Leviton, Douglas B.; Lyon, R.; Maher, Stephen; Martino, Anthony J.

    2007-01-01

    The Wide-Field Imaging Interferometry Testbed (WIIT) was designed to develop techniques for wide-field-of-view imaging interferometry using "double-Fourier" methods. These techniques will be important for a wide range of future space-based interferometry missions. We have already provided simple demonstrations of the methodology, and continuing development of the testbed will lead to higher data rates, improved data quality, and refined algorithms for image reconstruction. At present, the testbed effort includes five lines of development: automation of the testbed, operation in an improved environment, acquisition of large high-quality datasets, development of image reconstruction algorithms, and analytical modeling of the testbed. We discuss the progress made towards the first four of these goals; the analytical modeling is discussed in a separate paper within this conference.

  13. An analytical-numerical approach for parameter determination of a five-parameter single-diode model of photovoltaic cells and modules

    NASA Astrophysics Data System (ADS)

    Hejri, Mohammad; Mokhtari, Hossein; Azizian, Mohammad Reza; Söder, Lennart

    2016-04-01

    Parameter extraction of the five-parameter single-diode model of solar cells and modules from experimental data is a challenging problem. These parameters are evaluated from a set of nonlinear equations that cannot be solved analytically. On the other hand, a numerical solution of such equations needs a suitable initial guess to converge to a solution. This paper presents a new set of approximate analytical solutions for the parameters of a five-parameter single-diode model of photovoltaic (PV) cells and modules. The proposed solutions provide a good initial point which guarantees numerical analysis convergence. The proposed technique needs only a few data points from the PV current-voltage characteristics, i.e. the open-circuit voltage Voc, the short-circuit current Isc, and the maximum-power-point current and voltage Im and Vm, making it a fast and low-cost parameter determination technique. The accuracy of the presented theoretical I-V curves is verified by experimental data.
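
    The flavor of such closed-form starting values can be sketched with generic textbook approximations; these are not the specific expressions derived in the paper, and the module numbers in the example are made up.

        import numpy as np

        def pv_initial_guess(voc, isc, vm, im, ns=60, T=298.15, n=1.3):
            # Starting values for the single-diode model
            #   I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh
            k, q = 1.380649e-23, 1.602176634e-19
            vt = k * T / q                 # thermal voltage per cell (~25.7 mV)
            a = n * ns * vt                # modified ideality factor
            iph = isc                      # photocurrent ~ short-circuit current
            i0 = isc * np.exp(-voc / a)    # saturation current from Voc
            rs = (voc - vm) / (2.0 * im)   # crude series-resistance estimate
            rsh = 10.0 * vm / (isc - im)   # crude shunt-resistance estimate
            return iph, i0, rs, rsh, a

        # Hypothetical datasheet values for a 60-cell module
        print(pv_initial_guess(voc=37.0, isc=8.5, vm=30.0, im=8.0))

    Values such as these would then seed a numerical solver for the five exact nonlinear equations.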

  14. Analytical model for real time, noninvasive estimation of blood glucose level.

    PubMed

    Adhyapak, Anoop; Sidley, Matthew; Venkataraman, Jayanti

    2014-01-01

    The paper presents an analytical model to estimate blood glucose level from measurements made noninvasively and in real time by an antenna strapped to a patient's wrist. The RIT ETA Lab research group has shown promising evidence that an antenna's resonant frequency can track, in real time, changes in glucose concentration. Based on an in-vitro study of blood samples from diabetic patients, the paper presents a modified Cole-Cole model that incorporates a factor to represent the change in glucose level. A calibration technique based on the input impedance is discussed, and the results show good agreement with glucose meter readings. An alternate calibration methodology has been developed that is based on the shift in the antenna's resonant frequency, using an equivalent circuit model containing a shunt capacitor to represent the shift in resonant frequency with changing glucose levels. Work in progress includes optimization of the technique with a larger sample of patients.
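
    For reference, the textbook Cole-Cole dispersion that the paper modifies has the form ε(ω) = ε∞ + Δε/(1 + (jωτ)^(1−α)) − jσ/(ωε0). The sketch below evaluates it with a multiplicative glucose factor added purely as a placeholder for the paper's modification; every parameter value is illustrative, not taken from the study.

        import numpy as np

        def cole_cole(f, eps_inf=4.0, d_eps=70.0, tau=8.1e-12, alpha=0.1,
                      sigma=1.4, glucose_factor=1.0):
            # Complex relative permittivity as a function of frequency (Hz)
            w = 2.0 * np.pi * f
            eps0 = 8.854e-12
            return (eps_inf
                    + glucose_factor * d_eps / (1 + (1j * w * tau) ** (1 - alpha))
                    - 1j * sigma / (w * eps0))

        f = np.logspace(8, 11, 4)        # 100 MHz .. 100 GHz
        for g in (1.00, 0.98):           # hypothetical glucose-induced change
            print(g, np.round(cole_cole(f, glucose_factor=g).real, 1))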

  15. An Analytical Technique to Elucidate Field Impurities From Manufacturing Uncertainties of a Double Pancake Type HTS Insert for High Field LTS/HTS NMR Magnets

    PubMed Central

    Hahn, Seung-yong; Ahn, Min Cheol; Bobrov, Emanuel Saul; Bascuñán, Juan; Iwasa, Yukikazu

    2010-01-01

    This paper addresses adverse effects of dimensional uncertainties of an HTS insert assembled with double-pancake coils on spatial field homogeneity. Each DP coil was wound with Bi2223 tapes having dimensional tolerances larger than one order of magnitude of those accepted for LTS wires used in conventional NMR magnets. The paper presents: 1) dimensional variations measured in two LTS/HTS NMR magnets, 350 MHz (LH350) and 700 MHz (LH700), both built and operated at the Francis Bitter Magnet Laboratory; and 2) an analytical technique and its application to elucidate the field impurities measured with the two LTS/HTS magnets. Field impurities computed with the analytical model and those measured with the two LTS/HTS magnets agree quite well, demonstrating that this analytical technique is applicable to design a DP-assembled HTS insert with an improved field homogeneity for a high-field LTS/HTS NMR magnet. PMID:20407595

  16. World Energy Projection System Plus Model Documentation: Coal Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  17. World Energy Projection System Plus Model Documentation: Transportation Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  18. World Energy Projection System Plus Model Documentation: Residential Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  19. World Energy Projection System Plus Model Documentation: Refinery Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  20. World Energy Projection System Plus Model Documentation: Main Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  1. World Energy Projection System Plus Model Documentation: Electricity Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  2. Aberration measurement technique based on an analytical linear model of a through-focus aerial image.

    PubMed

    Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas

    2014-03-10

    We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and then an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z(37). Experiments on a real lithography tool confirm that our method can monitor lens aberration offset with an accuracy of 0.7 nm.
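
    Once the fitting matrices are precomputed, retrieval reduces to one linear least-squares solve. The sketch below uses random synthetic matrices purely to show that algebra; in practice the sensitivity matrix would come from lithography simulation, and the dimensions here are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_zernike = 600, 33              # intensity samples; Z5..Z37

        A = rng.normal(size=(n_samples, n_zernike)) # precomputed fitting matrix
        I0 = rng.normal(size=n_samples)             # aberration-free image
        z_true = 1e-3 * rng.normal(size=n_zernike)  # small aberrations

        # "Measured" through-focus intensities: linear model plus noise
        I_meas = I0 + A @ z_true + 1e-5 * rng.normal(size=n_samples)

        # Retrieve the Zernike coefficients by least squares
        z_hat, *_ = np.linalg.lstsq(A, I_meas - I0, rcond=None)
        print("max retrieval error:", np.abs(z_hat - z_true).max())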

  3. Two Analyte Calibration From The Transient Response Of Potentiometric Sensors Employed With The SIA Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cartas, Raul; Mimendia, Aitor; Valle, Manel del

    2009-05-23

    Calibration models for multi-analyte electronic tongues have commonly been built using a set of sensors, at least one per analyte under study. Complex signals recorded with these systems are formed by the sensors' responses to the analytes of interest plus interferents, from which a multivariate response model is then developed. This work describes a data treatment method for the simultaneous quantification of two species in solution employing the signal from a single sensor. The approach takes advantage of the complex information recorded in one electrode's transient after insertion of the sample to build calibration models for both analytes. The electrode signal was first processed by discrete wavelet transform to extract useful information and reduce its length, and then by artificial neural networks to fit a model. Two different potentiometric sensors were used as a case study to corroborate the effectiveness of the approach.
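
    A minimal sketch of that pipeline on synthetic transients: a discrete wavelet transform compresses each single-sensor transient, and one neural network maps the coefficients to both analyte concentrations. The response kinetics, wavelet choice, and network size are assumptions, not the authors' settings.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        t = np.linspace(0, 60, 512)      # 60 s transient, 512 samples

        def transient(c1, c2):
            # Two analytes with different (hypothetical) response kinetics
            return (c1 * (1 - np.exp(-t / 5.0)) + c2 * (1 - np.exp(-t / 20.0))
                    + 0.01 * rng.normal(size=t.size))

        C = rng.uniform(0.1, 1.0, size=(200, 2))          # training concentrations
        X = np.array([transient(c1, c2) for c1, c2 in C])

        def dwt_features(sig, wavelet="db4", level=4):
            # Keep only the coarse approximation coefficients as features
            return pywt.wavedec(sig, wavelet, level=level)[0]

        F = np.array([dwt_features(x) for x in X])

        # One network predicts both concentrations from one sensor signal
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                           random_state=0).fit(F, C)
        print("R^2 on training data:", net.score(F, C))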

  4. Human Factors in Field Experimentation Design and Analysis of Analytical Suppression Model

    DTIC Science & Technology

    1978-09-01

    …men in man-machine systems supports the development of new doctrines, design of weapon systems, as well as training programs for troops. [Master's thesis; the remainder of the record is garbled documentation-page text.] … influences to suppression. Techniques are examined for including the suppressive effects of weapon systems in Lanchester-type combat models, which may be…

  5. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.

  6. World Energy Projection System Plus Model Documentation: Greenhouse Gases Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  7. World Energy Projection System Plus Model Documentation: Natural Gas Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  8. Assessing and reducing hydrogeologic model uncertainty

    USDA-ARS?s Scientific Manuscript database

    NRC is sponsoring research that couples model abstraction techniques with model uncertainty assessment methods. Insights and information from this program will be useful in decision making by NRC staff, licensees and stakeholders in their assessment of subsurface radionuclide transport. All analytic...

  9. World Energy Projection System Plus Model Documentation: District Heat Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  10. World Energy Projection System Plus Model Documentation: Industrial Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  11. Modelling a suitable location for Urban Solid Waste Management using the AHP method and GIS - A geospatial approach and MCDM model

    NASA Astrophysics Data System (ADS)

    Iqbal, M.; Islam, A.; Hossain, A.; Mustaque, S.

    2016-12-01

    Multi-Criteria Decision Making (MCDM) is an advanced analytical method for evaluating an appropriate result or decision in a multiple-criteria environment. In current research, MCDM is a progressive analytical process for reaching a logical decision among conflicting criteria. In addition, geospatial approaches (e.g., remote sensing and GIS) are advanced technical means of collecting, processing, and analyzing various kinds of spatial data. GIS and remote sensing, together with MCDM techniques, can form an effective platform for solving a complex decision-making process, and this combination is widely used in site selection for solid waste management in urban policy. The most popular MCDM technique is the Weighted Linear Combination (WLC) method, while the Analytic Hierarchy Process (AHP) is another popular and consistent technique used worldwide for dependable decision making. Consequently, the main objective of this study is to develop an AHP model as an MCDM technique with a Geographic Information System (GIS) to select a suitable landfill site for urban solid waste management. To protect the urban environment in a sustainable way, municipal waste needs an appropriate landfill site that takes into account environmental, geological, social and technical aspects of the region. An MCDM model was generated from five criteria classes related to environmental, geological, social and technical factors using the AHP method, and the results were combined in GIS to produce the final suitability map for urban solid waste management; a sketch of the AHP weighting step is given below. The final result shows that 12.2% of the total study area, corresponding to 22.89 km², is suitable. The study area is the Keraniganj sub-district of Dhaka district, Bangladesh, a densely populated area that currently has an unmanaged waste management system and, in particular, lacks suitable landfill sites for waste dumping.
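
    A minimal sketch of the AHP weighting step referenced above: priority weights are the normalized principal eigenvector of a pairwise comparison matrix, checked with Saaty's consistency ratio. The five criteria and every pairwise judgment below are hypothetical, not values from the study.

        import numpy as np

        # Pairwise comparisons of five criteria on Saaty's 1-9 scale
        A = np.array([[1,   3,   5,   3,   2],
                      [1/3, 1,   3,   2,   1],
                      [1/5, 1/3, 1,   1/2, 1/3],
                      [1/3, 1/2, 2,   1,   1],
                      [1/2, 1,   3,   1,   1]])

        # Priority weights = normalized principal right eigenvector
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()

        # Saaty's consistency ratio; CR < 0.1 is conventionally acceptable
        n = A.shape[0]
        CI = (eigvals.real[k] - n) / (n - 1)
        RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # random index
        print("weights:", np.round(w, 3), " CR:", round(CI / RI, 3))

    In a GIS workflow, these weights would then multiply the reclassified criterion rasters in a weighted overlay.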

  12. An Examination of Sampling Characteristics of Some Analytic Factor Transformation Techniques.

    ERIC Educational Resources Information Center

    Skakun, Ernest N.; Hakstian, A. Ralph

    Two population raw data matrices were constructed by computer simulation techniques. Each consisted of 10,000 subjects and 12 variables, and each was constructed according to an underlying factorial model consisting of four major common factors, eight minor common factors, and 12 unique factors. The computer simulation techniques were employed to…

  13. The space shuttle payload planning working groups. Volume 8: Earth and ocean physics

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The findings and recommendations of the Earth and Ocean Physics working group of the space shuttle payload planning activity are presented. The requirements for the space shuttle mission are defined as: (1) precision measurement for earth and ocean physics experiments, (2) development and demonstration of new and improved sensors and analytical techniques, (3) acquisition of surface truth data for evaluation of new measurement techniques, (4) conduct of critical experiments to validate geophysical phenomena and instrumental results, and (5) development and validation of analytical/experimental models for global ocean dynamics and solid earth dynamics/earthquake prediction. Tables of data are presented to show the flight schedule, estimated costs, and the mission model.

  14. Implementing Operational Analytics using Big Data Technologies to Detect and Predict Sensor Anomalies

    NASA Astrophysics Data System (ADS)

    Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.

    2016-09-01

    Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real-time and presenting it in a manner that facilitates decision making. It provides cost savings by being able to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and provide insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor is simulated, and we use big data technologies, predictive algorithms and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site, and builds on a big data architecture that has previously been proven valuable in detecting anomalies. This paper outlines our methodology of implementing an operational analytic solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determining practical analytic, visualization, and predictive technologies.

  15. Recent Methodology in Ginseng Analysis

    PubMed Central

    Baek, Seung-Hoon; Bae, Ok-Nam; Park, Jeong Hill

    2012-01-01

    As much as the popularity of ginseng in herbal prescriptions or remedies, ginseng has become the focus of research in many scientific fields. Analytical methodologies for ginseng, referred to as ginseng analysis hereafter, have been developed for bioactive component discovery, phytochemical profiling, quality control, and pharmacokinetic studies. This review summarizes the most recent advances in ginseng analysis in the past half-decade including emerging techniques and analytical trends. Ginseng analysis includes all of the leading analytical tools and serves as a representative model for the analytical research of herbal medicines. PMID:23717112

  16. Transient Thermal Model and Analysis of the Lunar Surface and Regolith for Cryogenic Fluid Storage

    NASA Technical Reports Server (NTRS)

    Christie, Robert J.; Plachta, David W.; Yasan, Mohammad M.

    2008-01-01

    A transient thermal model of the lunar surface and regolith was developed along with analytical techniques which will be used to evaluate the storage of cryogenic fluids at equatorial and polar landing sites. The model can provide lunar surface and subsurface temperatures as a function of latitude and time throughout the lunar cycle and season. It also accounts for the presence of or lack of the undisturbed fluff layer on the lunar surface. The model was validated with Apollo 15 and Clementine data and shows good agreement with other analytical models.

  17. A pilot modeling technique for handling-qualities research

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  18. Determination of aluminium in groundwater samples by GF-AAS, ICP-AES, ICP-MS and modelling of inorganic aluminium complexes.

    PubMed

    Frankowski, Marcin; Zioła-Frankowska, Anetta; Kurzyca, Iwona; Novotný, Karel; Vaculovič, Tomas; Kanický, Viktor; Siepak, Marcin; Siepak, Jerzy

    2011-11-01

    The paper presents the results of aluminium determinations in groundwater samples of the Miocene aquifer from the area of the city of Poznań (Poland). The determined aluminium content ranged from <0.0001 to 752.7 μg L(-1). The aluminium determinations were performed using three analytical techniques: graphite furnace atomic absorption spectrometry (GF-AAS), inductively coupled plasma atomic emission spectrometry (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS), and the results obtained with the particular techniques were compared. The results were used to identify the ascent of groundwater from the Mesozoic aquifer to the Miocene aquifer in the area of the fault graben. Using the Mineql+ program, the occurrence of aluminium and of its hydroxy, fluoride and sulphate complexes was modelled. This chemical modelling, performed here for the first time for these waters, enabled the identification of aluminium complexes in the investigated samples. The study confirms the occurrence of aluminium hydroxy complexes and aluminium fluoride complexes in the analysed groundwater samples. Despite the dominance of sulphates and organic matter in the samples, the modelling did not indicate major participation of complexes with these ligands.

  19. Advances in analytical chemistry

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.

    1991-01-01

    Implementation of computer programs based on multivariate statistical algorithms makes it possible to obtain reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.

  20. Transient well flow in vertically heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Hemker, C. J.

    1999-11-01

    A solution for the general problem of computing well flow in vertically heterogeneous aquifers is found by an integration of both analytical and numerical techniques. The radial component of flow is treated analytically; the drawdown is a continuous function of the distance to the well. The finite-difference technique is used for the vertical flow component only. The aquifer is discretized in the vertical dimension and the heterogeneous aquifer is considered to be a layered (stratified) formation with a finite number of homogeneous sublayers, where each sublayer may have different properties. The transient part of the differential equation is solved with Stehfest's algorithm, a numerical inversion technique of the Laplace transform. The well is of constant discharge and penetrates one or more of the sublayers. The effect of wellbore storage on early drawdown data is taken into account. In this way drawdowns are found for a finite number of sublayers as a continuous function of radial distance to the well and of time since the pumping started. The model is verified by comparing results with published analytical and numerical solutions for well flow in homogeneous and heterogeneous, confined and unconfined aquifers. Instantaneous and delayed drainage of water from above the water table are considered, combined with the effects of partially penetrating and finite-diameter wells. The model is applied to demonstrate that the transient effects of wellbore storage in unconfined aquifers are less pronounced than previous numerical experiments suggest. Other applications of the presented solution technique are given for partially penetrating wells in heterogeneous formations, including a demonstration of the effect of decreasing specific storage values with depth in an otherwise homogeneous aquifer. The presented solution can be a powerful tool for the analysis of drawdown from pumping tests, because hydraulic properties of layered heterogeneous aquifer systems with partially penetrating wells may be estimated without the need to construct transient numerical models. A computer program based on the hybrid analytical-numerical technique is available from the author.
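
    Stehfest's algorithm, which the paper uses to return the Laplace-domain drawdown to the time domain, is compact enough to sketch. The version below is the standard published algorithm with a sanity check on a transform whose inverse is known; the choice N = 12 is a common default, not a value taken from the paper.

        import math

        def stehfest_coefficients(N):
            # Stehfest weights V_k for an even number of terms N
            M = N // 2
            V = []
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, M) + 1):
                    s += (j**M * math.factorial(2 * j)
                          / (math.factorial(M - j) * math.factorial(j)
                             * math.factorial(j - 1) * math.factorial(k - j)
                             * math.factorial(2 * j - k)))
                V.append((-1) ** (k + M) * s)
            return V

        def stehfest_invert(F, t, N=12):
            # Approximate f(t) from its Laplace transform F(s)
            a = math.log(2.0) / t
            V = stehfest_coefficients(N)
            return a * sum(Vk * F(k * a) for k, Vk in enumerate(V, start=1))

        # Sanity check: F(s) = 1/s^2 has the known inverse f(t) = t
        print(stehfest_invert(lambda s: 1.0 / s**2, t=3.0))   # ~ 3.0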

  1. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1978-01-01

    The development of system models that can provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer are described. Specific topics covered include: system models; performability evaluation; capability and functional dependence; computation of trajectory set probabilities; and hierarchical modeling of an air transport mission.

  2. Characterization of Regular Wave, Irregular Wave, and Large-Amplitude Wave Group Kinematics in an Experimental Basin

    DTIC Science & Technology

    2011-02-01

    …seakeeping was the transient wave technique, developed analytically by Davis and Zarnick (1964). At the David Taylor Model Basin, Davis and Zarnick, and Gersten and Johnson (1969), applied the transient wave technique to regular wave model experiments for heave and pitch at zero forward speed. These tests demonstrated a potential reduction of the total necessary testing time by an order of magnitude. The transient wave technique was also applied to…

  3. Toward rigorous idiographic research in prevention science: comparison between three analytic strategies for testing preventive intervention in very small samples.

    PubMed

    Ridenour, Ty A; Pineo, Thomas Z; Maldonado Molina, Mildred M; Hassmiller Lich, Kristen

    2013-06-01

    Psychosocial prevention research lacks evidence from intensive within-person lines of research to understand idiographic processes related to development and response to intervention. Such data could be used to fill gaps in the literature and expand the study design options for prevention researchers, including lower-cost yet rigorous studies (e.g., for program evaluations), pilot studies, designs to test programs for low prevalence outcomes, selective/indicated/adaptive intervention research, and understanding of differential response to programs. This study compared three competing analytic strategies designed for this type of research: autoregressive moving average, mixed model trajectory analysis, and P-technique. Illustrative time series data were from a pilot study of an intervention for nursing home residents with diabetes (N = 4) designed to improve control of blood glucose. A within-person, intermittent baseline design was used. Intervention effects were detected using each strategy for the aggregated sample and for individual patients. The P-technique model most closely replicated observed glucose levels. ARIMA and P-technique models were most similar in terms of estimated intervention effects and modeled glucose levels. However, ARIMA and P-technique also were more sensitive to missing data, outliers and number of observations. Statistical testing suggested that results generalize both to other persons as well as to idiographic, longitudinal processes. This study demonstrated the potential contributions of idiographic research in prevention science as well as the need for simulation studies to delineate the research circumstances when each analytic approach is optimal for deriving the correct parameter estimates.

  4. Toward Rigorous Idiographic Research in Prevention Science: Comparison Between Three Analytic Strategies for Testing Preventive Intervention in Very Small Samples

    PubMed Central

    Pineo, Thomas Z.; Maldonado Molina, Mildred M.; Lich, Kristen Hassmiller

    2013-01-01

    Psychosocial prevention research lacks evidence from intensive within-person lines of research to understand idiographic processes related to development and response to intervention. Such data could be used to fill gaps in the literature and expand the study design options for prevention researchers, including lower-cost yet rigorous studies (e.g., for program evaluations), pilot studies, designs to test programs for low prevalence outcomes, selective/indicated/ adaptive intervention research, and understanding of differential response to programs. This study compared three competing analytic strategies designed for this type of research: autoregressive moving average, mixed model trajectory analysis, and P-technique. Illustrative time series data were from a pilot study of an intervention for nursing home residents with diabetes (N=4) designed to improve control of blood glucose. A within-person, intermittent baseline design was used. Intervention effects were detected using each strategy for the aggregated sample and for individual patients. The P-technique model most closely replicated observed glucose levels. ARIMA and P-technique models were most similar in terms of estimated intervention effects and modeled glucose levels. However, ARIMA and P-technique also were more sensitive to missing data, outliers and number of observations. Statistical testing suggested that results generalize both to other persons as well as to idiographic, longitudinal processes. This study demonstrated the potential contributions of idiographic research in prevention science as well as the need for simulation studies to delineate the research circumstances when each analytic approach is optimal for deriving the correct parameter estimates. PMID:23299558
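
    The interrupted-time-series idea behind the ARIMA strategy compared above can be illustrated in a few lines: an AR(1) model with a step regressor is fitted to a synthetic single-subject glucose series. The series, the -15 mg/dL intervention effect, and the model order are hypothetical; the statsmodels interface shown is standard.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(2)
        n, start = 120, 60
        y = np.empty(n)
        y[0] = 160.0
        for i in range(1, n):                      # AR(1) noise around a baseline
            y[i] = 160.0 + 0.6 * (y[i - 1] - 160.0) + rng.normal(0, 5)
        step = (np.arange(n) >= start).astype(float)
        y -= 15.0 * step                           # intervention lowers glucose

        # ARIMA errors plus a step regressor; the exog coefficient
        # estimates the intervention effect (should come out near -15)
        fit = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
        print(fit.params)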

  5. Analytical solutions for benchmarking cold regions subsurface water flow and energy transport models: one-dimensional soil thaw with conduction and advection

    USGS Publications Warehouse

    Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.

    2014-01-01

    Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
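
    As context for the benchmarks recommended above, the sketch below evaluates the classic one-phase Stefan (Neumann-type) conduction solution for thaw depth; the Lunardini solution extends this kind of similarity solution to include advection. The diffusivity and Stefan number are assumed illustrative values, not the paper's scenarios.

    ```python
    # Hedged sketch of the one-phase Stefan similarity solution for soil thaw:
    # solve sqrt(pi)*lam*exp(lam^2)*erf(lam) = St, then X(t) = 2*lam*sqrt(alpha*t).
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import erf

    alpha = 1.0e-6   # thermal diffusivity of thawed soil, m^2/s (assumed)
    St = 0.2         # Stefan number = c*(Ts - Tf)/L (assumed)

    f = lambda lam: np.sqrt(np.pi) * lam * np.exp(lam**2) * erf(lam) - St
    lam = brentq(f, 1e-8, 5.0)        # similarity constant

    t = 86400.0 * 30                  # 30 days in seconds
    thaw_depth = 2.0 * lam * np.sqrt(alpha * t)
    print(f"lambda = {lam:.4f}, thaw depth after 30 days = {thaw_depth:.3f} m")
    ```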

  6. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling ("SBM") was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or "QCP"), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model and the simulated biosignals are seen to disagree in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe that would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study, the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide for early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
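
    The following is a generic, assumption-laden reconstruction of the kernel similarity idea behind SBM: estimate each incoming multivariate sample as a similarity-weighted blend of reference exemplars and watch the residual grow as the state drifts. It is not the proprietary SBM implementation tested in the study; the Gaussian kernel, bandwidth, and data are all invented.

    ```python
    # Sketch of a kernel similarity-based estimator: reconstruct each sample
    # from "healthy" reference states; a drifting state yields a larger residual
    # even while each variable is still within its normal range.
    import numpy as np

    def similarity(a, b, h=1.0):
        return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * h ** 2))

    def sbm_estimate(x, memory):
        w = np.array([similarity(x, m) for m in memory])
        w /= w.sum()
        return w @ memory            # similarity-weighted reconstruction

    rng = np.random.default_rng(1)
    memory = rng.normal(size=(50, 4))   # reference states (e.g., 4 vital signs)
    x_normal = rng.normal(size=4)
    x_drift = x_normal + 0.8            # early, still in-range deterioration

    for x in (x_normal, x_drift):
        residual = np.linalg.norm(x - sbm_estimate(x, memory))
        print(f"residual = {residual:.3f}")  # drifted state scores higher
    ```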

  7. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  8. Mechanical Properties of Additively Manufactured Thick Honeycombs.

    PubMed

    Hedayati, Reza; Sadighi, Mojtaba; Mohammadi Aghdam, Mohammad; Zadpoor, Amir Abbas

    2016-07-23

    Honeycombs resemble the structure of a number of natural and biological materials such as cancellous bone, wood, and cork. Thick honeycombs could also be used for energy absorption applications. Moreover, studying the mechanical behavior of honeycombs under in-plane loading could help in understanding the mechanical behavior of more complex 3D tessellated structures such as porous biomaterials. In this paper, we study the mechanical behavior of thick honeycombs made using additive manufacturing techniques that allow for fabrication of honeycombs with arbitrary and precisely controlled thickness. Thick honeycombs with different wall thicknesses were produced from polylactic acid (PLA) using fused deposition modelling, i.e., an additive manufacturing technique. The samples were mechanically tested in-plane under compression to determine their mechanical properties. We also obtained exact analytical solutions for the stiffness matrix of thick hexagonal honeycombs using both Euler-Bernoulli and Timoshenko beam theories. The stiffness matrix was then used to derive analytical relationships that describe the elastic modulus, yield stress, and Poisson's ratio of thick honeycombs. Finite element models were also built for computational analysis of the mechanical behavior of thick honeycombs under compression. The mechanical properties obtained using our analytical relationships were compared with experimental observations and computational results as well as with analytical solutions available in the literature. It was found that the analytical solutions presented here are in good agreement with experimental and computational results even for very thick honeycombs, whereas the analytical solutions available in the literature show a large deviation from experimental observation, computational results, and our analytical solutions.
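
    For orientation, the snippet below evaluates the classic Gibson-Ashby thin-wall estimate for the in-plane elastic modulus of a hexagonal honeycomb, the Euler-Bernoulli bending-only limit that the paper's thick-wall Timoshenko analysis refines. It is not the paper's stiffness-matrix derivation, and the wall-thickness ratios are arbitrary.

    ```python
    # Gibson-Ashby thin-wall estimate: E*/Es = (t/l)^3 cos(th) /
    # ((h/l + sin(th)) sin^2(th)); regular hexagon has h/l = 1, th = 30 deg.
    import math

    def gibson_ashby_E(t_over_l, h_over_l=1.0, theta_deg=30.0, E_s=1.0):
        th = math.radians(theta_deg)
        return (E_s * t_over_l**3 * math.cos(th)
                / ((h_over_l + math.sin(th)) * math.sin(th) ** 2))

    for r in (0.02, 0.1, 0.2):  # thin to thick walls
        print(f"t/l = {r}: E*/Es = {gibson_ashby_E(r):.5f}")
    ```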

  9. EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES

    EPA Science Inventory

    An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. elationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...

  10. Grabbing the Air Force by the Tail: Applying Strategic Cost Analytics to Understand and Manage Indirect Cost Behavior

    DTIC Science & Technology

    2015-09-17

    impact influenced by its internal and external supply chain activities. This starts with understanding how we currently apply advanced analytic techniques... minimalistic model provides for sufficient degrees of freedom to guard against overfitting. Second, to guard against the possibility of an over-trained model...

  11. Modeling Lexical Borrowability.

    ERIC Educational Resources Information Center

    van Hout, Roeland; Muysken, Pieter

    1994-01-01

    Develops analytical techniques to determine "borrowability," the ease with which a lexical item or category of lexical items can be borrowed by one language from another. These techniques are then applied to Spanish borrowings in Bolivian Quechua on the basis of a set of bilingual texts. (29 references) (MDM)

  12. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u(-1) to 450 MeV u(-1) or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
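
    As a flavor of what a simple analytical range model looks like, the snippet below implements the well-known Bragg-Kleeman power-law fit for protons in water. It is in the same spirit as, but not identical to, the 6-parameter model of the abstract; the constants are the commonly quoted water values.

    ```python
    # Bragg-Kleeman rule: R(E) = alpha * E**p, a two-parameter analytical range
    # model. Values below are the standard fit for protons in water.
    def bragg_kleeman_range(E_MeV, alpha=0.0022, p=1.77):
        """Approximate range (cm of water) of a proton with kinetic energy E_MeV."""
        return alpha * E_MeV ** p

    for E in (50, 100, 200):
        print(f"{E} MeV proton: ~{bragg_kleeman_range(E):.1f} cm in water")
    ```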

  13. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u(-1) to 450 MeV u(-1) or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  14. Pulsed plane wave analytic solutions for generic shapes and the validation of Maxwell's equations solvers

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Vastano, John A.; Lomax, Harvard

    1992-01-01

    Generic shapes are subjected to pulsed plane waves of arbitrary shape. The resulting scattered electromagnetic fields are determined analytically. These fields are then computed efficiently at field locations for which numerically determined EM fields are required. Of particular interest are the pulsed waveform shapes typically utilized by radar systems. The results can be used to validate the accuracy of finite difference time domain Maxwell's equations solvers. A two-dimensional solver which is second- and fourth-order accurate in space and fourth-order accurate in time is examined. Dielectric media properties are modeled by a ramping technique which simplifies the associated gridding of body shapes. The attributes of the ramping technique are evaluated by comparison with the analytic solutions.

  15. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    A drone aircraft equipped with an active flutter suppression system is considered with emphasis on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are given for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. The mathematical models are included and existing analytical techniques are described as well as an alternative analytical technique for obtaining closed-loop results.

  16. Required, Practical, or Unnecessary? An Examination and Demonstration of Propensity Score Matching Using Longitudinal Secondary Data

    ERIC Educational Resources Information Center

    Padgett, Ryan D.; Salisbury, Mark H.; An, Brian P.; Pascarella, Ernest T.

    2010-01-01

    The sophisticated analytical techniques available to institutional researchers give them an array of procedures to estimate a causal effect using observational data. But as many quantitative researchers have discovered, access to a wider selection of statistical tools does not necessarily ensure construction of a better analytical model. Moreover,…

  17. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set during model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
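
    A toy version of a QSAR-style response model is sketched below: an ordinary least-squares fit of sensor response against a handful of analyte descriptors. The descriptor names and all numbers are fabricated for illustration, and the GFA descriptor selection used in the study is not reproduced.

    ```python
    # Toy QSAR regression: predict sensor response from molecular descriptors.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Columns (hypothetical): molar volume, polarizability, H-bond acidity.
    X = np.array([[18.0, 1.5, 0.8],
                  [58.0, 6.0, 0.0],
                  [97.0, 10.4, 0.4],
                  [40.6, 3.3, 0.4]])
    y = np.array([0.12, 0.45, 0.80, 0.30])  # fractional resistance change (made up)

    qsar = LinearRegression().fit(X, y)
    print(qsar.coef_, qsar.intercept_)
    print(qsar.predict([[75.0, 8.0, 0.2]]))  # response for a new, unseen analyte
    ```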

  18. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography

    PubMed Central

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.

    2016-01-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  19. Effect of different analyte diffusion/adsorption protocols on SERS signals

    NASA Astrophysics Data System (ADS)

    Li, Ruoping; Petschek, Rolfe G.; Han, Junhe; Huang, Mingju

    2018-07-01

    The effect of different analyte diffusion/adsorption protocols, which is often overlooked in the surface-enhanced Raman scattering (SERS) technique, was studied. Three protocols were examined: the highly concentrated dilution (HCD) protocol, the half-half dilution (HHD) protocol and the layered adsorption (LA) protocol; the SERS substrates were monolayer films of 80 nm Ag nanoparticles (NPs) modified by polyvinylpyrrolidone. The diffusion/adsorption mechanisms were modelled using the diffusion equation and the electromagnetic field distribution of two adjacent Ag NPs was simulated by the finite-difference time-domain method. All experimental data and theoretical analysis suggest that different diffusion/adsorption behaviour of analytes will cause different SERS signal enhancements. The HHD protocol produced the most uniform and reproducible samples, and the corresponding signal intensity of the analyte is the strongest. This study will help to understand and promote the use of the SERS technique in quantitative analysis.

  20. Theory and practical understanding of the migration behavior of proteins and peptides in CE and related techniques.

    PubMed

    Freitag, Ruth; Hilbrig, Frank

    2007-07-01

    CEC is defined as an analytical method, where the analytes are separated on a chromatographic column in the presence of an applied voltage. The separation of charged analytes in CEC is complex, since chromatographic interaction, electroosmosis and electrophoresis contribute to the experimentally observed behavior. The putative contribution of effects such as surface electrodiffusion has been suggested. A sound theoretical treatment incorporating all effects is currently not available. The question of whether the different effects contribute in an independent or an interdependent manner is still under discussion. In this contribution, the state-of-the-art in the theoretical description of the individual contributions as well as models for the retention behavior and in particular possible dimensionless 'retention factors' is discussed, together with the experimental database for the separation of charged analytes, in particular proteins and peptides, by CEC and related techniques.

  1. Tutorial in medical decision modeling incorporating waiting lines and queues using discrete event simulation.

    PubMed

    Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter

    2010-01-01

    In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources and their impact on treatment effects and costs often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques are used to evaluate systems that include queuing or waiting, for example, discrete event simulation. Including queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers get an understanding of queue characteristics, modeling features, and their strengths. Conceptual issues are covered, but the emphasis is on practical issues like modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities, to incorporate waiting lines and queues in the decision-analytic modeling example.
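
    For the closed-formula side of the tutorial's argument, the snippet below computes the standard M/M/1 queue metrics (Poisson arrivals, exponential service, one server); these are the quantities that discrete event simulation estimates when no closed form exists. The arrival and service rates are illustrative.

    ```python
    # Closed-form performance measures for a stable M/M/1 queue.
    def mm1_metrics(lam, mu):
        assert lam < mu, "queue must be stable (arrival rate < service rate)"
        rho = lam / mu                 # server utilization
        L = rho / (1 - rho)            # mean number in system
        W = 1 / (mu - lam)             # mean time in system
        Wq = rho / (mu - lam)          # mean wait in queue
        return rho, L, W, Wq

    # e.g., 4 patients/hour arriving, 5 patients/hour served
    print(mm1_metrics(lam=4.0, mu=5.0))
    ```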

  2. Concept for a Satellite-Based Advanced Air Traffic Management System : Volume 9. System and Subsystem Performance Models.

    DOT National Transportation Integrated Search

    1973-02-01

    The volume presents the models used to analyze basic features of the system, establish feasibility of techniques, and evaluate system performance. The models use analytical expressions and computer simulations to represent the relationship between sy...

  3. Commercial Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  4. Carrier recovery techniques on satellite mobile channels

    NASA Technical Reports Server (NTRS)

    Vucetic, B.; Du, J.

    1990-01-01

    An analytical method and a stored channel model were used to evaluate error performance of uncoded quadrature phase shift keying (QPSK) and M-ary phase shift keying (MPSK) trellis coded modulation (TCM) over shadowed satellite mobile channels in the presence of phase jitter for various carrier recovery techniques.

  5. Numerical modeling of pollutant transport using a Lagrangian marker particle technique

    NASA Technical Reports Server (NTRS)

    Spaulding, M.

    1976-01-01

    A derivation and code were developed for the three-dimensional mass transport equation, using a particle-in-cell solution technique, to solve coastal zone waste discharge problems where particles are a major component of the waste. Improvements in the particle movement techniques are suggested and typical examples illustrated. Preliminary model comparisons with analytic solutions for an instantaneous point release in a uniform flow show good results in resolving the waste motion. The findings to date indicate that this computational model will provide a useful technique to study the motion of sediment, dredged spoils, and other particulate waste commonly deposited in coastal waters.

  6. Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial.

    PubMed

    Oliveri, Paolo

    2017-08-22

    Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second strategy is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most of the issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are forcedly used for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Technique for experimental determination of radiation interchange factors in solar wavelengths

    NASA Technical Reports Server (NTRS)

    Bobco, R. P.; Nolte, L. J.; Wensley, J. R.

    1971-01-01

    Process obtains solar heating data which support analytical design. Process yields quantitative information on local solar exposure of models which are geometrically and reflectively similar to prototypes under study. Models are tested in a shirtsleeve environment.

  8. Torque Transient of Magnetically Drive Flow for Viscosity Measurement

    NASA Technical Reports Server (NTRS)

    Ban, Heng; Li, Chao; Su, Ching-Hua; Lin, Bochuan; Scripa, Rosalia N.; Lehoczky, Sandor L.

    2004-01-01

    Viscosity is a good indicator of structural changes for complex liquids, such as semiconductor melts with chain or ring structures. This paper discusses the theoretical and experimental results of the transient torque technique for non-intrusive viscosity measurement. Such a technique is essential for the high-temperature viscosity measurement of high-pressure and toxic semiconductor melts. In this paper, our previous work on the oscillating-cup technique was extended to the transient process of a magnetically driven melt flow in a damped oscillation system. Based on the analytical solution for the fluid flow and cup oscillation, a semi-empirical model was established to extract the fluid viscosity. The analytical and experimental results indicated that such a technique has the advantages of short measurement time and straightforward data analysis procedures.

  9. Ratio of sequential chromatograms for quantitative analysis and peak deconvolution: Application to standard addition method and process monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.

    1990-08-01

    This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been baseline corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
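
    The core ratio-chromatogram idea is easy to demonstrate: after baseline correction, the point-by-point ratio of two sequential chromatograms is flat across a pure analyte peak and equals the concentration ratio. The sketch below uses simulated Gaussian peaks and omits the variance weighting described above.

    ```python
    # Ratio of sequential chromatograms over a pure-elution window.
    import numpy as np

    t = np.linspace(0, 10, 1000)
    peak = lambda c, mu, s: c * np.exp(-(t - mu) ** 2 / (2 * s ** 2))

    run1 = peak(1.0, 4.0, 0.2)             # first injection
    run2 = peak(1.5, 4.0, 0.2)             # second injection, 50% more analyte
    window = (t > 3.5) & (t < 4.5)         # region of pure elution
    ratio = run2[window] / run1[window]
    print(ratio.mean())                    # ~1.5, the concentration ratio
    ```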

  10. Dual nozzle aerodynamic and cooling analysis study

    NASA Technical Reports Server (NTRS)

    Meagher, G. M.

    1981-01-01

    Analytical models to predict performance and operating characteristics of dual nozzle concepts were developed and improved. Aerodynamic models are available to define flow characteristics and bleed requirements for both the dual throat and dual expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow, boundary layer, and shock effects within dual nozzle engines. Thermal analyses were performed to define cooling requirements for baseline configurations, and special studies of unique dual nozzle cooling problems defined feasible means of achieving adequate cooling.

  11. Application of differential transformation method for solving dengue transmission mathematical model

    NASA Astrophysics Data System (ADS)

    Ndii, Meksianis Z.; Anggriani, Nursanti; Supriatna, Asep K.

    2018-03-01

    The differential transformation method (DTM) is a semi-analytical numerical technique which depends on Taylor series and has application in many areas including Biomathematics. The aim of this paper is to employ the differential transformation method (DTM) to solve system of non-linear differential equations for dengue transmission mathematical model. Analytical and numerical solutions are determined and the results are compared to that of Runge-Kutta method. We found a good agreement between DTM and Runge-Kutta method.
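
    A minimal demonstration of the DTM mechanics on a single ODE (rather than the dengue system) is given below: for y' = -2y with y(0) = 1, the transform yields the recurrence Y[k+1] = -2 Y[k] / (k+1), whose coefficients rebuild the Taylor series of the exact solution e^(-2t). The equation and truncation order are chosen only for illustration.

    ```python
    # DTM on y' = -2y, y(0) = 1: build Taylor coefficients by recurrence,
    # then sum the truncated series and compare with the exact solution.
    import math

    N = 12
    Y = [1.0]                              # Y[0] = y(0)
    for k in range(N):
        Y.append(-2.0 * Y[k] / (k + 1))    # (k+1) Y[k+1] = -2 Y[k]

    t = 0.5
    approx = sum(Y[k] * t**k for k in range(N + 1))
    print(approx, math.exp(-2 * t))        # DTM series vs exact value
    ```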

  12. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vartio, Eric; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott,Robert C.

    2007-01-01

    Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data was acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparison of measured with simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparison of simulated open-loop and closed-loop (GLA) time histories are presented.

  13. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.

    2006-01-01

    Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data was acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparison of measured with simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparison of simulated open-loop and closed-loop (GLA) time histories are presented.

  14. A survey of nested grid techniques and their potential for use within the MASS weather prediction model

    NASA Technical Reports Server (NTRS)

    Koch, Steven E.; Mcqueen, Jeffery T.

    1987-01-01

    A survey of various one- and two-way interactive nested grid techniques used in hydrostatic numerical weather prediction models is presented and the advantages and disadvantages of each method are discussed. The techniques for specifying the lateral boundary conditions for each nested grid scheme are described in detail. Averaging and interpolation techniques used when applying the coarse mesh grid (CMG) and fine mesh grid (FMG) interface conditions during two-way nesting are discussed separately. The survey shows that errors are commonly generated at the boundary between the CMG and FMG due to boundary formulation or specification discrepancies. Methods used to control this noise include application of smoothers, enhanced diffusion, or damping-type time integration schemes to model variables. The results from this survey provide the information needed to decide which one-way and two-way nested grid schemes merit future testing with the Mesoscale Atmospheric Simulation System (MASS) model. An analytically specified baroclinic wave will be used to conduct systematic tests of the chosen schemes since this will allow for objective determination of the interfacial noise in the kind of meteorological setting for which MASS is designed. Sample diagnostic plots from initial tests using the analytic wave are presented to illustrate how the model-generated noise is ascertained. These plots will be used to compare the accuracy of the various nesting schemes when incorporated into the MASS model.

  15. Determination of a Limited Scope Network's Lightning Detection Efficiency

    NASA Technical Reports Server (NTRS)

    Rompala, John T.; Blakeslee, R.

    2008-01-01

    This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding: site signal detection thresholds, type of solution algorithm used, and range attenuation; to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.

  16. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
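
    A bare-bones PSO loop of the kind used for such calibrations is sketched below; the objective is a stand-in quadratic rather than a galaxy-formation likelihood, and the swarm hyperparameters are conventional textbook values.

    ```python
    # Minimal particle swarm optimization: each particle is pulled toward its
    # personal best and the swarm's global best position.
    import numpy as np

    rng = np.random.default_rng(2)

    def objective(theta):                  # stand-in for a -log likelihood
        return np.sum((theta - np.array([0.3, -1.2])) ** 2)

    n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()

    print(gbest)   # converges near the optimum (0.3, -1.2)
    ```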

  17. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1991-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during the earlier cold compression, or 'coining', fabrication technique, an approach that nevertheless shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  18. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Bartos, Karen F.; Fite, E. B.; Sharp, G. R.

    1992-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during the earlier cold compression, or 'coining', fabrication technique, an approach that nevertheless shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  19. A dynamic mechanical analysis technique for porous media

    PubMed Central

    Pattison, Adam J; McGarry, Matthew; Weaver, John B; Paulsen, Keith D

    2015-01-01

    Dynamic mechanical analysis (DMA) is a common way to measure the mechanical properties of materials as functions of frequency. Traditionally, a viscoelastic mechanical model is applied and current DMA techniques fit an analytical approximation to measured dynamic motion data by neglecting inertial forces and adding empirical correction factors to account for transverse boundary displacements. Here, a finite element (FE) approach to processing DMA data was developed to estimate poroelastic material properties. Frequency-dependent inertial forces, which are significant in soft media and often neglected in DMA, were included in the FE model. The technique applies a constitutive relation to the DMA measurements and exploits a non-linear inversion to estimate the material properties in the model that best fit the model response to the DMA data. A viscoelastic version of this approach was developed to validate the approach by comparing complex modulus estimates to the direct DMA results. Both analytical and FE poroelastic models were also developed to explore their behavior in the DMA testing environment. All of the models were applied to tofu as a representative soft poroelastic material that is a common phantom in elastography imaging studies. Five samples of three different stiffnesses were tested from 1 – 14 Hz with rough platens placed on the top and bottom surfaces of the material specimen under test to restrict transverse displacements and promote fluid-solid interaction. The viscoelastic models were identical in the static case, and nearly the same at frequency with inertial forces accounting for some of the discrepancy. The poroelastic analytical method was not sufficient when the relevant physical boundary constraints were applied, whereas the poroelastic FE approach produced high quality estimates of shear modulus and hydraulic conductivity. These results illustrated appropriate shear modulus contrast between tofu samples and yielded a consistent contrast in hydraulic conductivity as well. PMID:25248170

  20. The comparative evaluation of expanded national immunization policies in Korea using an analytic hierarchy process.

    PubMed

    Shin, Taeksoo; Kim, Chun-Bae; Ahn, Yang-Heui; Kim, Hyo-Youl; Cha, Byung Ho; Uh, Young; Lee, Joo-Heon; Hyun, Sook-Jung; Lee, Dong-Han; Go, Un-Yeong

    2009-01-29

    The purpose of this paper is to propose new evaluation criteria and an analytic hierarchy process (AHP) model to assess the expanded national immunization programs (ENIPs) and to evaluate two alternative health care policies. One of the alternative policies is that private clinics and hospitals would offer free vaccination services to children; the other is that public health centers would offer these services. Our model to evaluate ENIPs was developed using brainstorming, Delphi techniques, and the AHP model. We first used the brainstorming and Delphi techniques, as well as literature reviews, to determine 25 criteria with which to evaluate the national immunization policy; we then proposed a hierarchical structure of the AHP model to assess ENIPs. By applying the proposed AHP model to the assessment of ENIPs for Korean immunization policies, we show that free vaccination services should be provided by private clinics and hospitals rather than by public health centers.
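
    The AHP priority computation at the heart of such a model is compact enough to sketch: the principal eigenvector of a pairwise-comparison matrix gives the criterion weights, and the consistency index flags incoherent judgments. The 3x3 matrix below is invented; the paper's hierarchy uses 25 criteria.

    ```python
    # AHP priorities from a pairwise-comparison matrix (Saaty 1-9 scale).
    import numpy as np

    A = np.array([[1,   3,   5],
                  [1/3, 1,   2],
                  [1/5, 1/2, 1]], dtype=float)

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(np.real(vals))
    w = np.real(vecs[:, k])
    w /= w.sum()
    print(w)                                  # priority weights per criterion

    n = A.shape[0]
    CI = (np.real(vals[k]) - n) / (n - 1)     # consistency index
    print(CI)                                 # compare to random index (~0.58 for n=3)
    ```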

  1. Terrain modeling for microwave landing system

    NASA Technical Reports Server (NTRS)

    Poulose, M. M.

    1991-01-01

    A powerful analytical approach for evaluating the terrain effects on a microwave landing system (MLS) is presented. The approach combines a multiplate model with a powerful and exhaustive ray tracing technique and an accurate formulation for estimating the electromagnetic fields due to the antenna array in the presence of terrain. Both uniform theory of diffraction (UTD) and impedance UTD techniques have been employed to evaluate these fields. Innovative techniques are introduced at each stage to make the model versatile to handle most general terrain contours and also to reduce the computational requirement to a minimum. The model is applied to several terrain geometries, and the results are discussed.

  2. Modeling of switching regulator power stages with and without zero-inductor-current dwell time

    NASA Technical Reports Server (NTRS)

    Lee, F. C. Y.; Yu, Y.

    1979-01-01

    State-space techniques are employed to derive accurate models for the three basic switching converter power stages: buck, boost, and buck/boost operating with and without zero-inductor-current dwell time. A generalized procedure is developed which treats the continuous-inductor-current mode without dwell time as a special case of the discontinuous-current mode when the dwell time vanishes. Abrupt changes of system behavior, including a reduction of the system order when the dwell time appears, are shown both analytically and experimentally. Merits resulting from the present modeling technique in comparison with existing modeling techniques are illustrated.

  3. Novel concept of washing for microfluidic paper-based analytical devices based on capillary force of paper substrates.

    PubMed

    Mohammadi, Saeed; Busa, Lori Shayne Alamo; Maeki, Masatoshi; Mohamadi, Reza M; Ishida, Akihiko; Tani, Hirofumi; Tokeshi, Manabu

    2016-11-01

    A novel washing technique for microfluidic paper-based analytical devices (μPADs) that is based on the spontaneous capillary action of paper and eliminates unbound antigen and antibody in a sandwich immunoassay is reported. Liquids can flow through a porous medium (such as paper) in the absence of external pressure as a result of capillary action. Uniform results were achieved when washing a paper substrate in a PDMS holder integrated with a cartridge absorber acting as a porous medium. Our study demonstrated that applying this washing technique would allow μPADs to become the least expensive microfluidic device platform with high reproducibility and sensitivity. In a model μPAD assay that utilized this novel washing technique, C-reactive protein (CRP) was detected with a limit of detection (LOD) of 5 μg mL(-1).

  4. Models for randomly distributed nanoscopic domains on spherical vesicles

    NASA Astrophysics Data System (ADS)

    Anghel, Vinicius N. P.; Bolmatov, Dima; Katsaras, John

    2018-06-01

    The existence of lipid domains in the plasma membrane of biological systems has proven controversial, primarily due to their nanoscopic size—a length scale difficult to interrogate with most commonly used experimental techniques. Scattering techniques have recently proven capable of studying nanoscopic lipid domains populating spherical vesicles. However, the development of analytical methods able of predicting and analyzing domain pair correlations from such experiments has not kept pace. Here, we developed models for the random distribution of monodisperse, circular nanoscopic domains averaged on the surface of a spherical vesicle. Specifically, the models take into account (i) intradomain correlations corresponding to form factors and interdomain correlations corresponding to pair distribution functions, and (ii) the analytical computation of interdomain correlations for cases of two and three domains on a spherical vesicle. In the case of more than three domains, these correlations are treated either by Monte Carlo simulations or by spherical analogs of the Ornstein-Zernike and Percus-Yevick (PY) equations. Importantly, the spherical analog of the PY equation works best in the case of nanoscopic size domains, a length scale that is mostly inaccessible by experimental approaches such as, for example, fluorescent techniques and optical microscopies. The analytical form factors and structure factors of nanoscopic domains populating a spherical vesicle provide a new and important framework for the quantitative analysis of experimental data from commonly studied phase-separated vesicles used in a wide range of biophysical studies.
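
    In the spirit of the paper's Monte Carlo route, the fragment below scatters domain centers uniformly on a sphere and histograms their pairwise angular separations, a raw ingredient of the interdomain pair correlations discussed above; finite domain size, exclusion effects, and the PY machinery are deliberately omitted.

    ```python
    # Monte Carlo: pairwise angular separations of points uniform on a sphere.
    import numpy as np

    rng = np.random.default_rng(3)
    n_domains, n_trials = 5, 2000
    angles = []
    for _ in range(n_trials):
        v = rng.normal(size=(n_domains, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform on the sphere
        dots = np.clip(v @ v.T, -1.0, 1.0)
        iu = np.triu_indices(n_domains, k=1)
        angles.extend(np.arccos(dots[iu]))

    hist, edges = np.histogram(angles, bins=18, range=(0, np.pi), density=True)
    print(hist)   # should track (1/2) sin(theta), the uniform-sphere expectation
    ```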

  5. Mechanical Properties of Additively Manufactured Thick Honeycombs

    PubMed Central

    Hedayati, Reza; Sadighi, Mojtaba; Mohammadi Aghdam, Mohammad; Zadpoor, Amir Abbas

    2016-01-01

    Honeycombs resemble the structure of a number of natural and biological materials such as cancellous bone, wood, and cork. Thick honeycombs could also be used for energy absorption applications. Moreover, studying the mechanical behavior of honeycombs under in-plane loading could help in understanding the mechanical behavior of more complex 3D tessellated structures such as porous biomaterials. In this paper, we study the mechanical behavior of thick honeycombs made using additive manufacturing techniques that allow for fabrication of honeycombs with arbitrary and precisely controlled thickness. Thick honeycombs with different wall thicknesses were produced from polylactic acid (PLA) using fused deposition modelling, i.e., an additive manufacturing technique. The samples were mechanically tested in-plane under compression to determine their mechanical properties. We also obtained exact analytical solutions for the stiffness matrix of thick hexagonal honeycombs using both Euler-Bernoulli and Timoshenko beam theories. The stiffness matrix was then used to derive analytical relationships that describe the elastic modulus, yield stress, and Poisson’s ratio of thick honeycombs. Finite element models were also built for computational analysis of the mechanical behavior of thick honeycombs under compression. The mechanical properties obtained using our analytical relationships were compared with experimental observations and computational results as well as with analytical solutions available in the literature. It was found that the analytical solutions presented here are in good agreement with experimental and computational results even for very thick honeycombs, whereas the analytical solutions available in the literature show a large deviation from experimental observation, computational results, and our analytical solutions. PMID:28773735

  6. Performance modeling of automated manufacturing systems

    NASA Astrophysics Data System (ADS)

    Viswanadham, N.; Narahari, Y.

    A unified and systematic treatment of modeling methodologies and analysis techniques for the performance evaluation of automated manufacturing systems is presented. The book is the first treatment of the mathematical modeling of manufacturing systems. Automated manufacturing systems are surveyed and three principal analytical modeling paradigms are discussed: Markov chains, queues and queueing networks, and Petri nets.

  7. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
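
    A small sketch of the NMF-based topic modeling that UTOPIAN builds on is given below, using scikit-learn; UTOPIAN's semi-supervised, interactive steering is not reproduced, and the four-document corpus is a toy.

    ```python
    # Plain NMF topic modeling on a toy corpus: factor the TF-IDF matrix into
    # document-topic weights (W) and topic-term components.
    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "visual analytics of document collections",
        "topic models summarize large text corpora",
        "interactive steering of topic models by users",
        "nonnegative matrix factorization for text analytics",
    ]
    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)

    nmf = NMF(n_components=2, init="nndsvd", random_state=0)
    W = nmf.fit_transform(X)               # document-topic weights
    terms = vec.get_feature_names_out()
    for k, comp in enumerate(nmf.components_):
        top = comp.argsort()[::-1][:3]     # top terms per topic
        print(f"topic {k}:", [terms[i] for i in top])
    ```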

  8. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.

  9. Investigation of crew restraint system biomechanics. Report for May 79-Mar 81

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, N.S.; Thomson, R.A.; Fiscus, I.B.

    1982-05-01

    Experimental data were collected and analyses were performed to study the influence of the dynamic mechanical properties of restraint system components on human response to impact and restraint system haulback. Tests were accomplished to isolate the characteristics of the restraint system and the human body. Three restraint webbing materials were studied at varied strain rates. A pyrotechnically powered inertia reel was tested, but could not be analytically modeled successfully. Analytical models of the human and restraint system were used to study the influence of restraint material property changes on human response parameters. An analytical model of a rhesus monkey was also used to study the efficacy of animal tests and scaling techniques to evaluate restraint systems for human use applications.

  10. Relativistic algorithm for time transfer in Mars missions under IAU Resolutions: an analytic approach

    NASA Astrophysics Data System (ADS)

    Pan, Jun-Yang; Xie, Yi

    2015-02-01

    With tremendous advances in modern techniques, Einstein's general relativity has become an inevitable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and the Geocentric Coordinate Time, which extends some previous works by including the effects of propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model is well suited to an onboard computer, whose capability to perform calculations is limited. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the relativistic corrections in the model.

  11. Accommodating subject and instrument variations in spectroscopic determinations

    DOEpatents

    Haas, Michael J [Albuquerque, NM; Rowe, Robert K [Corrales, NM; Thomas, Edward V [Albuquerque, NM

    2006-08-29

    A method and apparatus for measuring a biological attribute, such as the concentration of an analyte, particularly a blood analyte in tissue, such as glucose. The method utilizes spectrographic techniques in conjunction with an improved instrument-tailored or subject-tailored calibration model. In a calibration phase, calibration model data are modified to reduce or eliminate instrument-specific attributes, resulting in a calibration data set modeling intra-instrument or intra-subject variation. In a prediction phase, the prediction process is tailored for each target instrument separately using a minimal number of spectral measurements from each instrument or subject.

  12. The examination of headache activity using time-series research designs.

    PubMed

    Houle, Timothy T; Remble, Thomas A; Houle, Thomas A

    2005-05-01

    The majority of research conducted on headache has utilized cross-sectional designs, which preclude the examination of dynamic factors and rely principally on group-level effects. The present article describes the application of an individual-oriented process model using time-series analytical techniques. Blending a time-series approach with an interactive process model allows consideration of the relationships among intra-individual dynamic processes, while not precluding the researcher from examining inter-individual differences. The authors explore the nature of time-series data and present two necessary assumptions underlying the time-series approach. The concept of shock and its contribution to headache activity is also presented. The time-series approach is not without its problems, and two such problems are specifically addressed: autocorrelation and the distribution of daily observations. The article concludes with the presentation of several analytical techniques suited to examining the time-series interactive process model.
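
    To illustrate the first of the two problems named above, this sketch estimates the autocorrelation of a synthetic daily symptom series; an AR(1) process stands in for real headache diaries, and all values are invented.

      # Lag-1 dependence in a daily series violates the independence assumed
      # by conventional cross-sectional analyses.
      import numpy as np
      from statsmodels.tsa.stattools import acf

      rng = np.random.default_rng(0)
      n, phi = 120, 0.6                    # 120 daily ratings, AR(1) coefficient
      e = rng.normal(size=n)
      y = np.empty(n)
      y[0] = e[0]
      for t in range(1, n):
          y[t] = phi * y[t-1] + e[t]       # today's level depends on yesterday's

      r = acf(y, nlags=7)                  # autocorrelations for lags 0..7
      print(np.round(r, 2))                # a lag-1 value near phi flags dependence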

  13. Monitoring by forward scatter radar techniques: an improved second-order analytical model

    NASA Astrophysics Data System (ADS)

    Falconi, Marta Tecla; Comite, Davide; Galli, Alessandro; Marzano, Frank S.; Pastina, Debora; Lombardo, Pierfrancesco

    2017-10-01

    In this work, a second-order phase approximation is introduced to provide an improved analytical model of the signal received in forward scatter radar systems. A typical configuration with a rectangular metallic object illuminated while crossing the baseline, in far- or near-field conditions, is considered. An improved second-order model is compared with a simplified one already proposed by the authors and based on a paraxial approximation. A phase error analysis is carried out to investigate benefits and limitations of the second-order modeling. The results are validated by developing full-wave numerical simulations implementing the relevant scattering problem on a commercial tool.

  14. Numerical and analytical simulation of the production process of ZrO2 hollow particles

    NASA Astrophysics Data System (ADS)

    Safaei, Hadi; Emami, Mohsen Davazdah

    2017-12-01

    In this paper, the production process of hollow particles from agglomerated particles is addressed analytically and numerically. The important parameters affecting this process, in particular the initial porosity level of the particles and the plasma gun type, are investigated. The analytical model adopts a combination of quasi-steady thermal equilibrium and mechanical balance. In the analytical model, the possibility of a solid core existing in agglomerated particles is examined, and a range of particle diameters (50 μm ≤ D_{p0} ≤ 160 μm) and various initial porosities (0.2 ≤ p ≤ 0.7) are considered. The numerical model employs the VOF technique for two-phase compressible flows. The production process of hollow particles from the agglomerated particles is simulated, considering an initial diameter of D_{p0} = 60 μm and initial porosities of p = 0.3, p = 0.5, and p = 0.7. Simulation results of the analytical model indicate that the solid core diameter is independent of the initial porosity, whereas the thickness of the particle shell strongly depends on the initial porosity. In both models, a hollow particle may hardly develop at small initial porosity values (p < 0.3), while the particle disintegrates at high initial porosity values (p > 0.6).

  15. Accurate analytical modeling of junctionless DG-MOSFET by green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of the junctionless double gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using a Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of the Ids-Vgs curves between the long-channel and short-channel devices. It is observed that the Green's function approach of solving the 2-D Poisson's equation in both the oxide and silicon regions can accurately predict the channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and with higher as well as lower tsi/tox ratios. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results, and the model matches quite well with the TCAD device simulations.

  16. Extensions of the Johnson-Neyman Technique to Linear Models with Curvilinear Effects: Derivations and Analytical Tools

    ERIC Educational Resources Information Center

    Miller, Jason W.; Stromeyer, William R.; Schwieterman, Matthew A.

    2013-01-01

    The past decade has witnessed renewed interest in the use of the Johnson-Neyman (J-N) technique for calculating the regions of significance for the simple slope of a focal predictor on an outcome variable across the range of a second, continuous independent variable. Although tools have been developed to apply this technique to probe 2- and 3-way…
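
    For orientation, this is a sketch of the classic linear J-N computation that the article extends to curvilinear effects: the simple slope b1 + b3*z is significant wherever (b1 + b3*z)^2 exceeds t^2 * Var(b1 + b3*z), which reduces to a quadratic inequality in the moderator z. The coefficients, covariances, and degrees of freedom below are made-up illustrative numbers.

      # Johnson-Neyman boundaries for z in y = b0 + b1*x + b2*z + b3*x*z.
      import numpy as np
      from scipy import stats

      b1, b3 = 0.40, 0.25                      # focal slope and interaction term
      var_b1, var_b3, cov_13 = 0.02, 0.01, -0.005
      df = 96                                  # residual degrees of freedom
      t2 = stats.t.ppf(0.975, df) ** 2

      # (b1 + b3*z)^2 - t2*(var_b1 + 2*z*cov_13 + z^2*var_b3) = 0, quadratic in z
      a = b3**2 - t2 * var_b3
      b = 2 * (b1 * b3 - t2 * cov_13)
      c = b1**2 - t2 * var_b1
      print("J-N boundaries:", np.sort(np.roots([a, b, c]).real))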

  17. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    NASA Astrophysics Data System (ADS)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-05-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, along with detailed insights into a few of them.

  18. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    NASA Technical Reports Server (NTRS)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-01-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, along with detailed insights into a few of them.

  19. Analytical techniques for steroid estrogens in water samples - A review.

    PubMed

    Fang, Ting Yien; Praveena, Sarva Mangala; deBurbure, Claire; Aris, Ahmad Zaharin; Ismail, Sharifah Norkhadijah Syed; Rasdi, Irniza

    2016-12-01

    In recent years, environmental concerns over ultra-trace levels of steroid estrogen concentrations in water samples have increased because of their adverse effects on human and animal life. Special attention to the analytical techniques used to quantify steroid estrogens in water samples is therefore increasingly important. The objective of this review was to present an overview of both instrumental and non-instrumental analytical techniques available for the determination of steroid estrogens in water samples, evidencing their respective potential advantages and limitations using the Need, Approach, Benefit, and Competition (NABC) approach. The analytical techniques highlighted in this review were gas chromatography mass spectrometry (GC-MS), liquid chromatography mass spectrometry (LC-MS), enzyme-linked immunosorbent assay (ELISA), radioimmunoassay (RIA), the yeast estrogen screen (YES) assay, and the human breast cancer cell line proliferation (E-screen) assay. The complexity of water samples and their low estrogenic concentrations necessitate the use of highly sensitive instrumental analytical techniques (GC-MS and LC-MS) and non-instrumental analytical techniques (ELISA, RIA, YES assay and E-screen assay) to quantify steroid estrogens. Both instrumental and non-instrumental analytical techniques have their own advantages and limitations. However, the non-instrumental ELISA technique, thanks to its lower detection limit, simplicity, rapidity and cost-effectiveness, currently appears to be the most reliable for determining steroid estrogens in water samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Multilevel Latent Class Analysis: Parametric and Nonparametric Models

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2014-01-01

    Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…

  1. Bioinformatics Symposium of the Analytical Division of the American Chemical Society Meeting. Final Technical Report from 03/15/2000 to 03/14/2001 [sample pages of agenda, abstracts, index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, Robert T.

    Sparked by the Human Genome Project, biological and biomedical research has become an information science. Information tools are now being generated for proteins, cell modeling, and genomics. The opportunity for analytical chemistry in this new environment is profound. New analytical techniques that can provide the information on genes, SNPs, proteins, protein modifications, cells, and cell chemistry are required. In this symposium, we brought together both informatics experts and leading analytical chemists to discuss this interface. Over 200 people attended this highly successful symposium.

  2. A Lightweight I/O Scheme to Facilitate Spatial and Temporal Queries of Scientific Data Analytics

    NASA Technical Reports Server (NTRS)

    Tian, Yuan; Liu, Zhuo; Klasky, Scott; Wang, Bin; Abbasi, Hasan; Zhou, Shujia; Podhorszki, Norbert; Clune, Tom; Logan, Jeremy; Yu, Weikuan

    2013-01-01

    In the era of petascale computing, more scientific applications are being deployed on leadership-scale computing platforms to enhance scientific productivity. Many I/O techniques have been designed to address the growing I/O bottleneck on large-scale systems by handling massive scientific data in a holistic manner. While such techniques have been leveraged in a wide range of applications, they have not been shown to be adequate for many mission-critical applications, particularly in the data post-processing stage. For example, some scientific applications generate datasets composed of a vast number of small data elements that are organized along many spatial and temporal dimensions but require sophisticated data analytics on one or more dimensions. Incorporating such dimensional knowledge into the data organization can benefit the efficiency of data post-processing, but it is often missing from existing I/O techniques. In this study, we propose a novel I/O scheme named STAR (Spatial and Temporal AggRegation) to enable high-performance data queries for scientific analytics. STAR is able to dive into the massive data, identify the spatial and temporal relationships among data variables, and accordingly organize them into an optimized multi-dimensional data structure before storing them. This technique not only facilitates the common access patterns of data analytics, but also further reduces the application turnaround time. In particular, STAR enables efficient data queries along the time dimension, a practice common in scientific analytics but not yet supported by existing I/O techniques. In our case study with the critical climate modeling application GEOS-5, experimental results on the Jaguar supercomputer demonstrate an improvement of up to 73 times in read performance compared to the original I/O method.

  3. Interactive Management and Updating of Spatial Data Bases

    NASA Technical Reports Server (NTRS)

    French, P.; Taylor, M.

    1982-01-01

    The decision making process, whether for power plant siting, load forecasting or energy resource planning, invariably involves a blend of analytical methods and judgement. Management decisions can be improved by implementing techniques which permit an increased comprehension of results from analytical models. Even where analytical procedures are not required, decisions can be aided by improving the methods used to examine spatially and temporally variant data. How the use of computer aided planning (CAP) programs and the selection of a predominant data structure can improve the decision making process is discussed.

  4. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    PubMed

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-07

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
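
    A sketch of the recommended workflow, using synthetic data rather than any of the article's instruments: fit the power model of variance from replicate standards, then use inverse-variance weights in the calibration regression.

      # Power-model weighted calibration: variance = a * concentration^b.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      conc = np.repeat([1.0, 5.0, 10.0, 50.0, 100.0], 5)        # replicate standards
      signal = 2.0 * conc + rng.normal(scale=0.05 * conc**0.9)  # noise grows with signal

      levels = np.unique(conc)
      rep_var = np.array([signal[conc == c].var(ddof=1) for c in levels])
      b_hat, log_a = np.polyfit(np.log(levels), np.log(rep_var), 1)

      weights = 1.0 / (np.exp(log_a) * conc**b_hat)             # 1/variance weights
      wls = sm.WLS(signal, sm.add_constant(conc), weights=weights).fit()
      print(wls.params)                  # weighted calibration intercept and slope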

  5. Analytic Methods for Adjusting Subjective Rating Schemes.

    ERIC Educational Resources Information Center

    Cooper, Richard V. L.; Nelson, Gary R.

    Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…

  6. Analytical and numerical solutions for mass diffusion in a composite cylindrical body

    NASA Astrophysics Data System (ADS)

    Kumar, A.

    1980-12-01

    Analytical and numerical solution techniques were investigated for studying moisture diffusion problems in cylindrical bodies assumed to be composed of a finite number of layers of different materials. A generalized diffusion model for an n-layer cylindrical body with discontinuous moisture content at the interfaces was developed and formal solutions were obtained. The model is intended to describe mass transfer rates of any composite body, such as an ear of corn, which can be assumed to consist of two layers: an inner core representing the woody cob and an outer cylinder representing the kernel layer. Data describing the fully exposed drying characteristics of ear corn at high air velocity were obtained under different drying conditions. Ears of corn were modeled as homogeneous bodies, since the composite model did not improve the fit substantially. A computer program using a multidimensional optimization technique showed that diffusivity was an exponential function of moisture content and an Arrhenius function of the drying air temperature.
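
    The reported functional form lends itself to a one-line implementation; in the sketch below the constants D0, k, and Ea are illustrative placeholders, not fitted values from the thesis.

      # Diffusivity exponential in moisture M and Arrhenius in temperature T.
      import numpy as np

      R = 8.314  # J/(mol*K)

      def diffusivity(M, T, D0=1e-6, k=2.0, Ea=3.0e4):
          """D = D0 * exp(k*M) * exp(-Ea/(R*T)), in m^2/s."""
          return D0 * np.exp(k * M) * np.exp(-Ea / (R * T))

      print(diffusivity(M=0.25, T=330.0))    # e.g., 25% moisture, 330 K drying air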

  7. Mixed Initiative Visual Analytics Using Task-Driven Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Cramer, Nicholas O.; Israel, David

    2015-12-07

    Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.

  8. Off-diagonal series expansion for quantum partition functions

    NASA Astrophysics Data System (ADS)

    Hen, Itay

    2018-05-01

    We derive an integral-free thermodynamic perturbation series expansion for quantum partition functions which enables an analytical term-by-term calculation of the series. The expansion is carried out around the partition function of the classical component of the Hamiltonian, with the expansion parameter being the strength of the off-diagonal, or quantum, portion. To demonstrate the usefulness of the technique, we analytically compute, to third order, the partition functions of the 1D Ising model with longitudinal and transverse fields and of the quantum 1D Heisenberg model.
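
    The sketch below is not the paper's series expansion; it is the brute-force reference such an expansion can be checked against: the exact partition function of a short transverse-field Ising chain via dense diagonalization. Chain length, couplings, and inverse temperature are arbitrary choices.

      # Exact Z = Tr exp(-beta*H) for H = -J sum sz_i sz_{i+1} - h sum sx_i.
      import numpy as np

      sz = np.diag([1.0, -1.0])
      sx = np.array([[0.0, 1.0], [1.0, 0.0]])
      I2 = np.eye(2)

      def kron_site(op, site, n):
          out = np.array([[1.0]])
          for i in range(n):
              out = np.kron(out, op if i == site else I2)
          return out

      n, J, h, beta = 6, 1.0, 0.5, 1.0
      H = np.zeros((2**n, 2**n))
      for i in range(n - 1):
          H -= J * kron_site(sz, i, n) @ kron_site(sz, i + 1, n)  # classical part
      for i in range(n):
          H -= h * kron_site(sx, i, n)                            # off-diagonal part
      Z = np.sum(np.exp(-beta * np.linalg.eigvalsh(H)))
      print(f"Z = {Z:.6f}")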

  9. Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  10. Application of a voltammetric electronic tongue and near infrared spectroscopy for a rapid umami taste assessment.

    PubMed

    Bagnasco, Lucia; Cosulich, M Elisabetta; Speranza, Giovanna; Medini, Luca; Oliveri, Paolo; Lanteri, Silvia

    2014-08-15

    The relationships between sensory attributes and analytical measurements, performed by an electronic tongue (ET) and near-infrared spectroscopy (NIRS), were investigated in order to develop a rapid method for the assessment of umami taste. Commercially available umami products and some amino acids were submitted to sensory analysis. The results were analysed in comparison with the outcomes of the analytical measurements. Multivariate exploratory analysis was performed by principal component analysis (PCA). Calibration models for prediction of umami taste on the basis of ET and NIR signals were obtained using partial least squares (PLS) regression. Different approaches for merging data from the two analytical instruments were considered. Both techniques were shown to provide information related to umami taste; in particular, ET signals showed the higher correlation with the umami attribute. Data fusion was found to be only slightly beneficial - not significantly enough to justify the coupled use of the two analytical techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
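
    A sketch of the PLS calibration step, with random matrices standing in for the instrument signals and sensory scores; low-level fusion of the ET and NIR blocks would amount to concatenating them column-wise before the same fit.

      # PLS regression of a sensory score on many collinear sensor variables.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(40, 200))       # 40 samples x 200 signal variables
      y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=40)  # latent structure

      pls = PLSRegression(n_components=3)
      r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
      print("cross-validated R^2:", np.round(r2, 2))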

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coletti, Chiara, E-mail: chiara.coletti@studenti.u

    During the firing of bricks, mineralogical and textural transformations produce an artificial aggregate characterised by significant porosity. Porosity, particularly pore-size distribution and the interconnection model, is an important parameter for evaluating and predicting the durability of bricks. The pore system is in fact the main element linking building materials to their environment (especially in cases of aggressive weathering, e.g., salt crystallisation and freeze-thaw cycles) and determines their durability. Four industrial bricks with differing compositions and firing temperatures were analysed with “direct” and “indirect” techniques, traditional methods (mercury intrusion porosimetry, hydric tests, nitrogen adsorption) and new analytical approaches based on digital image reconstruction of 2D and 3D models (back-scattered electrons and computerised X-ray micro-Tomography, respectively). The comparison of results from different analytical methods in the “overlapping ranges” of porosity, together with the careful reconstruction of a cumulative curve, made it possible to overcome their specific limitations and achieve better knowledge of the pore system of bricks. - Highlights: •Pore-size distribution and structure of the pore system in four commercial bricks •A multi-analytical approach combining “direct” and “indirect” techniques •Traditional methods vs. new approaches based on 2D/3D digital image reconstruction •The use of “overlapping ranges” to overcome the limitations of various techniques.

  12. Elements of analytic style: Bion's clinical seminars.

    PubMed

    Ogden, Thomas H

    2007-10-01

    The author finds that the idea of analytic style better describes significant aspects of the way he practices psychoanalysis than does the notion of analytic technique. The latter consists to a large extent of principles of practice developed by previous generations of analysts. By contrast, the concept of analytic style, though it presupposes the analyst's thorough knowledge of analytic theory and technique, emphasizes (1) the analyst's use of his unique personality as reflected in his individual ways of thinking, listening, and speaking, his own particular use of metaphor, humor, irony, and so on; (2) the analyst's drawing on his personal experience, for example, as an analyst, an analysand, a parent, a child, a spouse, a teacher, and a student; (3) the analyst's capacity to think in a way that draws on, but is independent of, the ideas of his colleagues, his teachers, his analyst, and his analytic ancestors; and (4) the responsibility of the analyst to invent psychoanalysis freshly for each patient. Close readings of three of Bion's 'Clinical seminars' are presented in order to articulate some of the elements of Bion's analytic style. Bion's style is not presented as a model for others to emulate or, worse yet, imitate; rather, it is described in an effort to help the reader consider from a different vantage point (provided by the concept of analytic style) the way in which he, the reader, practices psychoanalysis.

  13. A General Multidimensional Model for the Measurement of Cultural Differences.

    ERIC Educational Resources Information Center

    Olmedo, Esteban L.; Martinez, Sergio R.

    A multidimensional model for measuring cultural differences (MCD) based on factor analytic theory and techniques is proposed. The model assumes that a cultural space may be defined by means of a relatively small number of orthogonal dimensions which are linear combinations of a much larger number of cultural variables. Once a suitable,…

  14. Bayes Nets in Educational Assessment: Where Do the Numbers Come from? CSE Technical Report.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Almond, Russell G.; Yan, Duanli; Steinberg, Linda S.

    Educational assessments that exploit advances in technology and cognitive psychology can produce observations and pose student models that outstrip familiar test-theoretic models and analytic methods. Bayesian inference networks (BINs), which include familiar models and techniques as special cases, can be used to manage belief about students'…

  15. Development and application of a three dimensional numerical model for predicting pollutant and sediment transport using an Eulerian-Lagrangian marker particle technique

    NASA Technical Reports Server (NTRS)

    Pavish, D. L.; Spaulding, M. L.

    1977-01-01

    A computer-coded Lagrangian marker particle in Eulerian finite difference cell solution to the three-dimensional incompressible mass transport equation, the Water Advective Particle in Cell Technique (WAPIC), was developed, verified against analytic solutions, and subsequently applied to predict the long-term transport of a suspended sediment cloud resulting from an instantaneous dredge spoil release. Numerical results from WAPIC were verified against analytic solutions to the three-dimensional incompressible mass transport equation for turbulent diffusion and advection of Gaussian dye releases in unbounded uniform and uniformly sheared uni-directional flow, and for steady uniform plug channel flow. WAPIC was also utilized to simulate an analytic solution for non-equilibrium sediment dropout from an initially vertically uniform particle distribution in one-dimensional turbulent channel flow.
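
    The kind of closed-form benchmark WAPIC was verified against is compact enough to state directly: a Gaussian puff from an instantaneous point release, advected by a uniform current and spread by constant turbulent diffusivities. Parameter values in this sketch are arbitrary illustrative choices.

      # Analytic solution of the 3D advection-diffusion equation for a point release.
      import numpy as np

      def gaussian_puff(x, y, z, t, M=1.0, u=0.5, Dx=0.1, Dy=0.1, Dz=0.01):
          """Concentration at (x, y, z, t) from mass M released at the origin."""
          denom = (4.0 * np.pi * t) ** 1.5 * np.sqrt(Dx * Dy * Dz)
          return (M / denom) * np.exp(-((x - u * t) ** 2) / (4 * Dx * t)
                                      - (y ** 2) / (4 * Dy * t)
                                      - (z ** 2) / (4 * Dz * t))

      print(gaussian_puff(x=1.0, y=0.0, z=0.0, t=2.0))   # the peak follows x = u*t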

  16. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    This paper presents a comparison of analysis and flight test data for a drone aircraft equipped with an active flutter suppression system. Emphasis is placed on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are presented for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. In addition to presenting the mathematical models and a brief description of existing analytical techniques, an alternative analytical technique for obtaining closed-loop results is presented.

  17. Model and Analytic Processes for Export License Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent, a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate, conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs, and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessments. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An approach to developing testable hypotheses for the macro-level assessment methodologies is provided. The outcome of this work suggests that we should develop a Bayes Net for micro-level analysis and continue to focus on Bayes Net, System Dynamics and Economic Input/Output models for assessing macro-level problems. Simultaneously, we need to develop metrics for assessing intent in export control, including the risks and consequences associated with all aspects of export control.

  18. Hierarchy of simulation models for a turbofan gas engine

    NASA Technical Reports Server (NTRS)

    Longenbaker, W. E.; Leake, R. J.

    1977-01-01

    Steady-state and transient performance of an F-100-like turbofan gas engine are modeled by a computer program, DYNGEN, developed by NASA. The model employs block data maps and includes about 25 states. Low-order nonlinear analytical and linear techniques are described in terms of their application to the model. Experimental comparisons illustrating the accuracy of each model are presented.

  19. Qualitative dynamical analysis of chaotic plasma perturbations model

    NASA Astrophysics Data System (ADS)

    Elsadany, A. A.; Elsonbaty, Amr; Agiza, H. N.

    2018-06-01

    In this work, an analytical framework for understanding the nonlinear dynamics of a plasma perturbations model is introduced. In particular, we analyze the model presented by Constantinescu et al. [20], which consists of three coupled ODEs and contains three parameters. The basic dynamical properties of the system are first investigated by way of bifurcation diagrams, phase portraits and Lyapunov exponents. Then, the normal form technique and perturbation methods are applied to investigate the different types of bifurcations that exist in the model. It is proved that pitchfork, Bogdanov-Takens, Andronov-Hopf, degenerate Hopf and homoclinic bifurcations can occur in the phase space of the model. The model can also exhibit quasiperiodicity and chaotic behavior. Numerical simulations confirm our analytical results.
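
    The Constantinescu et al. equations are not reproduced here; as a clearly hypothetical stand-in, the sketch below applies part of the same qualitative toolkit (trajectory integration plus a largest-Lyapunov estimate from two nearby trajectories) to the Lorenz system, another three-parameter set of three coupled ODEs.

      # Divergence rate of two nearby trajectories approximates the largest
      # Lyapunov exponent; a positive value is a signature of chaos.
      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0/3.0):
          x, y, z = s
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      t = np.linspace(0.0, 40.0, 4001)
      d0 = 1e-8
      a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t,
                    rtol=1e-10, atol=1e-12)
      b = solve_ivp(lorenz, (0, 40), [1.0 + d0, 1.0, 1.0], t_eval=t,
                    rtol=1e-10, atol=1e-12)

      sep = np.linalg.norm(a.y - b.y, axis=0)
      mask = (sep > 0) & (sep < 1e-2)        # fit before the separation saturates
      lam = np.polyfit(t[mask], np.log(sep[mask]), 1)[0]
      print(f"largest Lyapunov exponent estimate: {lam:.2f}")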

  20. Thermodynamic analysis and subscale modeling of space-based orbit transfer vehicle cryogenic propellant resupply

    NASA Technical Reports Server (NTRS)

    Defelice, David M.; Aydelott, John C.

    1987-01-01

    The resupply of cryogenic propellants is an enabling technology for space-based orbit transfer vehicles. As part of NASA Lewis' ongoing efforts in microgravity fluid management, thermodynamic analysis and subscale modeling techniques were developed to support an on-orbit test bed for cryogenic fluid management technologies. Analytical results have shown that subscale experimental modeling of liquid resupply can be used to validate analytical models when the appropriate target temperature is selected to relate the model to its prototype system. Further analyses were used to develop a thermodynamic model of the tank chilldown process, which is required prior to the no-vent fill operation. These efforts were incorporated into two FORTRAN programs which were used to generate preliminary analytical results.

  1. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    NASA Astrophysics Data System (ADS)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to assess the process improvement, quality management and analytical techniques taught to students in U.S. undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, quantitative management techniques and process performance modeling to identify and eliminate sources of variation, continually improve process performance, reduce cost and predict future outcomes. The research study identifies and discusses in detail the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis identifying the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
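
    A loosely hedged sketch of the Monte Carlo idea described above: propagate invented uncertainty distributions for process inputs through a simple, made-up index formula to obtain a baseline distribution of predicted satisfaction scores. Neither the weights nor the distributions come from the dissertation or the ACSI model.

      # Monte Carlo baseline for a customer-satisfaction-style index.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      quality = rng.triangular(70, 85, 95, size=n)       # perceived quality
      expectations = rng.normal(80, 5, size=n)           # customer expectations
      value = rng.triangular(60, 75, 90, size=n)         # perceived value

      # Invented scoring formula: reward quality and value, penalize gaps
      # between delivered quality and expectations.
      index = 0.5*quality + 0.2*value + 0.3*(100 - np.abs(quality - expectations))
      lo, hi = np.percentile(index, [5, 95])
      print(f"predicted index: {index.mean():.1f} (90% interval {lo:.1f}-{hi:.1f})")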

  2. Specialized data analysis of SSME and advanced propulsion system vibration measurements

    NASA Technical Reports Server (NTRS)

    Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi

    1993-01-01

    The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. The study was intended to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.

  3. Molecular modeling of polymer composite interactions with analytes in electronic nose sensors for environmental monitoring in International Space Station

    NASA Technical Reports Server (NTRS)

    Shevade, A. V.; Ryan, M. A.; Homer, M. L.; Manfreda, A. M.; Zhou, H.; Manatt, K.

    2002-01-01

    We report a molecular modeling study to investigate the polymer-carbon black (CB) composite-analyte interactions in resistive sensors. These sensors comprise the JPL Electronic Nose (ENose) sensing array developed for monitoring breathing air in human habitats. The polymer in the composite is modeled based on its stereoisomerism and sequence isomerism, while the CB is modeled as uncharged naphthalene rings (with no hydrogens). The Dreiding 2.21 force field is used for the polymer and solvent molecules, and graphite parameters are assigned to the carbon black atoms. A combination of molecular mechanics (MM) and molecular dynamics (NPT-MD and NVT-MD) techniques is used to obtain the equilibrium composite structure by inserting naphthalene rings into the polymer matrix. Polymers considered for this work include poly(4-vinylphenol), polyethylene oxide, and ethyl cellulose. Analytes studied are representative of both inorganic (ammonia) and organic (methanol, toluene, hydrazine) compounds. The results are analyzed for the composite microstructure by calculating radial distribution profiles, as well as for the sensor response by predicting the interaction energies of the analytes with the composites.

  4. Molecular modeling of polymer composite-analyte interactions in electronic nose sensors

    NASA Technical Reports Server (NTRS)

    Shevade, A. V.; Ryan, M. A.; Homer, M. L.; Manfreda, A. M.; Zhou, H.; Manatt, K. S.

    2003-01-01

    We report a molecular modeling study to investigate the polymer-carbon black (CB) composite-analyte interactions in resistive sensors. These sensors comprise the JPL electronic nose (ENose) sensing array developed for monitoring breathing air in human habitats. The polymer in the composite is modeled based on its stereoisomerism and sequence isomerism, while the CB is modeled as uncharged naphthalene rings with no hydrogens. The Dreiding 2.21 force field is used for the polymer and solvent molecules, and graphite parameters are assigned to the carbon black atoms. A combination of molecular mechanics (MM) and molecular dynamics (NPT-MD and NVT-MD) techniques is used to obtain the equilibrium composite structure by inserting naphthalene rings into the polymer matrix. Polymers considered for this work include poly(4-vinylphenol), polyethylene oxide, and ethyl cellulose. Analytes studied are representative of both inorganic and organic compounds. The results are analyzed for the composite microstructure by calculating radial distribution profiles, as well as for the sensor response by predicting the interaction energies of the analytes with the composites. Copyright © 2003 Elsevier Science B.V. All rights reserved.

  5. Analytical concepts for health management systems of liquid rocket engines

    NASA Technical Reports Server (NTRS)

    Williams, Richard; Tulpule, Sharayu; Hawman, Michael

    1990-01-01

    Substantial improvement in health management systems performance can be realized by implementing advanced analytical methods of processing existing liquid rocket engine sensor data. In this paper, such techniques ranging from time series analysis to multisensor pattern recognition to expert systems to fault isolation models are examined and contrasted. The performance of several of these methods is evaluated using data from test firings of the Space Shuttle main engines.

  6. Energy distributions and radiation transport in uranium plasmas

    NASA Technical Reports Server (NTRS)

    Miley, G. H.; Bathke, C.; Maceda, E.; Choi, C.

    1976-01-01

    An approximate analytic model, based on continuous electron slowing, has been used for survey calculations. Where more accuracy is required, a Monte Carlo technique is used which combines an analytic representation of Coulombic collisions with a random walk treatment of inelastic collisions. The calculated electron distributions have been incorporated into another code that evaluates both the excited atomic state densities within the plasma and the radiative flux emitted from the plasma.

  7. Estimation of discrimination errors in the technique for determining the geographic origin of onions by mineral composition: interlaboratory study.

    PubMed

    Ariyama, Kaoru; Kadokura, Masashi; Suzuki, Tadanao

    2008-01-01

    Techniques to determine the geographic origin of foods, based on various principles, have been developed for a range of agricultural and fishery products. Some of these techniques are already in use for checking the authenticity of labeling; many are based on multielement analysis and chemometrics. We have developed such a technique to determine the geographic origin of onions (Allium cepa L.). This technique, which determines whether an onion is from outside Japan, is designed for onions labeled as having a geographic origin of Hokkaido, Hyogo, or Saga, the main onion production areas in Japan. However, estimates of discrimination errors for this technique had not been fully established; they were limited to those for the discrimination models and did not include analytical errors. Interlaboratory studies were conducted to estimate the analytical errors of the technique. Four collaborators each determined 11 elements (Na, Mg, P, Mn, Zn, Rb, Sr, Mo, Cd, Cs, and Ba) in 4 test materials of fresh and dried onions. Discrimination errors in this technique were estimated by summing (1) individual differences within lots, (2) variations between lots from the same production area, and (3) analytical errors. The discrimination errors for onions from Hokkaido, Hyogo, and Saga were estimated to be 2.3, 9.5, and 8.0%, respectively. Those for onions from abroad in determinations targeting Hokkaido, Hyogo, and Saga were estimated to be 28.2, 21.6, and 21.9%, respectively.

  8. Field Telemetry of Blade-rotor Coupled Torsional Vibration at Matuura Power Station Number 1 Unit

    NASA Technical Reports Server (NTRS)

    Isii, Kuniyoshi; Murakami, Hideaki; Otawara, Yasuhiko; Okabe, Akira

    1991-01-01

    The quasi-modal reduction technique and a finite element model (FEM) were used to construct an analytical model for the blade-rotor coupled torsional vibration of a steam turbine generator at the Matuura Power Station. A single-rotor test was executed in order to evaluate umbrella vibration characteristics. Based on the single-rotor test results and the quasi-modal procedure, the total rotor system was analyzed to predict coupled torsional frequencies. Finally, field measurement of the vibration of the last-stage buckets was made, which confirmed that the double synchronous resonance was 124.2 Hz, meaning that the machine can be operated safely. The measured eigenvalues are very close to the predicted values. The single-rotor test and this analytical procedure thus proved to be a valid technique for estimating coupled torsional vibration.

  9. Creating Synthetic Coronal Observational Data From MHD Models: The Forward Technique

    NASA Technical Reports Server (NTRS)

    Rachmeler, Laurel A.; Gibson, Sarah E.; Dove, James; Kucera, Therese Ann

    2010-01-01

    We present a generalized forward code for creating simulated coronal observables off the limb from numerical and analytical MHD models. This generalized forward model is capable of creating emission maps in various wavelengths for instruments such as SXT, EIT, EIS, and coronagraphs, as well as spectropolarimetric images and line profiles. The inputs to our code can be analytic models (of which four come with the code) or 2.5D and 3D numerical datacubes. We present some examples of the observable data created with our code as well as its functional capabilities. This code is currently available for beta-testing (contact the authors), with the ultimate goal of release as a SolarSoft package.

  10. Hybrid-dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  11. Air Force and Army Corps of Engineers Improperly Managed the Award of Contracts for the Blue Devil Block 2 Persistent Surveillance System

    DTIC Science & Technology

    2013-09-19

    environments. This can include the development of new and/or improved analytical and numerical models, rapid data-processing techniques, and new subsurface imaging techniques that include active and passive sensor modalities in a variety of rural and urban terrains. Of particular interest is the broadband

  12. The role of analytical chemistry in Niger Delta petroleum exploration: a review.

    PubMed

    Akinlua, Akinsehinwa

    2012-06-12

    Petroleum and the organic matter from which it is derived are composed of organic compounds with some trace elements. These compounds give an insight into the origin, thermal maturity and paleoenvironmental history of petroleum, which are essential elements in petroleum exploration. Analytical techniques are the main tools for acquiring such geochemical data. Owing to progress in the development of new analytical techniques, many long-standing petroleum exploration problems have been resolved. Analytical chemistry has played a significant role in the development of the petroleum resources of the Niger Delta. The various analytical techniques that have aided the success of petroleum exploration in the Niger Delta are discussed, as are the analytical techniques that have helped in understanding the petroleum system of the basin. Recent and emerging analytical methodologies, including green analytical methods as applicable to petroleum exploration, particularly in the Niger Delta petroleum province, are also discussed in this paper. Analytical chemistry is an invaluable tool in finding Niger Delta oils. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Analytical techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.

  14. Computer modeling of the mechanical behavior of composites -- Interfacial cracks in fiber-reinforced materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmauder, S.; Haake, S.; Mueller, W.H.

    Computer modeling of materials, and especially modeling of the mechanical behavior of composites, has become increasingly popular in the past few years. Examples include micromechanical modeling of real structures as well as idealized model structures with linear elastic and elasto-plastic material response. In this paper, Erdogan's Integral Equation Method (IEM) is chosen as an example of a powerful method providing fundamental insight into elastic fracture mechanics problems. IEM or, alternatively, complex function techniques sometimes even allow analytical solutions to be derived, as in the case of a circumferential crack along a fiber/matrix interface. The analytical formulae for this interface crack will be analyzed numerically and typical results will be presented graphically.

  15. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several application examples are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for more complicated problems of solute transport in multi-dimensional media subject to sequential decay chain reactions, for which analytical solutions are not currently available.
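
    One classical special case of the kind the generalized model reproduces is a constant-concentration boundary input with no decay or production, whose solution in a semi-infinite 1D domain is the Ogata-Banks formula sketched below; parameter values are illustrative.

      # c(x,t) for c(0,t) = c0, c(x,0) = 0, uniform velocity v, dispersion D.
      import numpy as np
      from scipy.special import erfc

      def ogata_banks(x, t, v=1.0, D=0.5, c0=1.0):
          a = (x - v * t) / (2.0 * np.sqrt(D * t))
          b = (x + v * t) / (2.0 * np.sqrt(D * t))
          return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

      print(ogata_banks(x=2.0, t=1.5))       # breakthrough concentration at x = 2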

  16. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
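
    A bare-bones example of the iterative family the report contrasts with analytic inversion: a Kaczmarz/ART loop that sweeps the projection equations Ax = b and projects the current image estimate onto each row's hyperplane. The system below is a random toy, not a real scanner geometry.

      # Kaczmarz (ART) iteration for a consistent linear system Ax = b.
      import numpy as np

      rng = np.random.default_rng(4)
      A = rng.normal(size=(30, 16))          # 30 ray sums over a 4x4 "image"
      x_true = rng.random(16)
      b = A @ x_true                         # noiseless projections

      x = np.zeros(16)
      for sweep in range(50):
          for i in range(A.shape[0]):
              ai = A[i]
              x += (b[i] - ai @ x) / (ai @ ai) * ai   # project onto i-th hyperplane

      print("max reconstruction error:", np.abs(x - x_true).max())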

  17. Subjective education in analytic training: drawing on values from the art academy.

    PubMed

    Dougherty, Mary

    2008-11-01

    Kernberg and others have observed that psychoanalytic education has tended to promote the acquisition of theoretical knowledge and clinical technique within an atmosphere of indoctrination rather than of exploration. As a corrective, he proposed four models that correspond to values in psychoanalytic education: the art academy, the technical trade school, the religious seminary and the university. He commended models of the university and art academy to our collective attention because of their combined effectiveness in providing for the objective and subjective education of candidates: the university model for its capacity to provide a critical sense of a wide range of theories in an atmosphere tolerating debate and difference, and the art academy model for its capacity to facilitate the expression of individual creativity. In this paper, I will explore the art academy model for correspondences between artistic and analytic trainings that can enhance the development of the creative subjectivity of psychoanalytic candidates. I will draw additional correspondences between analytic and artistic learning that can enhance psychoanalytic education.

  18. Big Data Analytics for Prostate Radiotherapy.

    PubMed

    Coates, James; Souhami, Luis; El Naqa, Issam

    2016-01-01

    Radiation therapy is a first-line treatment option for localized prostate cancer, and radiation-induced normal tissue damage is often the main limiting factor for modern radiotherapy regimens. Conversely, under-dosing of target volumes in an attempt to spare adjacent healthy tissues limits the likelihood of achieving local, long-term control. Thus, the ability to generate personalized data-driven risk profiles for radiotherapy outcomes would provide valuable prognostic information to help guide both clinicians and patients alike. Big data applied to radiation oncology promises to deliver better understanding of outcomes by harvesting and integrating heterogeneous data types, including patient-specific clinical parameters, treatment-related dose-volume metrics, and biological risk factors. When taken together, such variables make up the basis for a multi-dimensional space (the "RadoncSpace") in which the presented modeling techniques search in order to identify significant predictors. Herein, we review outcome modeling and big data-mining techniques for both tumor control and radiotherapy-induced normal tissue effects. We apply many of the presented modeling approaches onto a cohort of hypofractionated prostate cancer patients, taking into account different data types and a large heterogeneous mix of physical and biological parameters. Cross-validation techniques are also reviewed for the refinement of the proposed framework architecture and checking individual model performance. We conclude by considering advanced modeling techniques that borrow concepts from big data analytics, such as machine learning and artificial intelligence, before discussing the potential future impact of systems radiobiology approaches.

  19. Big Data Analytics for Prostate Radiotherapy

    PubMed Central

    Coates, James; Souhami, Luis; El Naqa, Issam

    2016-01-01

    Radiation therapy is a first-line treatment option for localized prostate cancer, and radiation-induced normal tissue damage is often the main limiting factor for modern radiotherapy regimens. Conversely, under-dosing of target volumes in an attempt to spare adjacent healthy tissues limits the likelihood of achieving local, long-term control. Thus, the ability to generate personalized data-driven risk profiles for radiotherapy outcomes would provide valuable prognostic information to help guide both clinicians and patients alike. Big data applied to radiation oncology promises to deliver better understanding of outcomes by harvesting and integrating heterogeneous data types, including patient-specific clinical parameters, treatment-related dose–volume metrics, and biological risk factors. When taken together, such variables make up the basis for a multi-dimensional space (the “RadoncSpace”) in which the presented modeling techniques search in order to identify significant predictors. Herein, we review outcome modeling and big data-mining techniques for both tumor control and radiotherapy-induced normal tissue effects. We apply many of the presented modeling approaches onto a cohort of hypofractionated prostate cancer patients, taking into account different data types and a large heterogeneous mix of physical and biological parameters. Cross-validation techniques are also reviewed for the refinement of the proposed framework architecture and checking individual model performance. We conclude by considering advanced modeling techniques that borrow concepts from big data analytics, such as machine learning and artificial intelligence, before discussing the potential future impact of systems radiobiology approaches. PMID:27379211
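
    The cross-validation step mentioned in the abstract is easy to sketch generically. The snippet below runs 5-fold cross-validation on simulated stand-ins for dosimetric/clinical features and a binary toxicity outcome; the data, model choice, and scikit-learn availability are all assumptions, not details from the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins for dose-volume metrics and a binary outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)

# Score one candidate model on held-out folds to check generalization
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("fold accuracies:", np.round(scores, 3), "mean:", scores.mean().round(3))
```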

  20. Analytical Model of the Nonlinear Dynamics of Cantilever Tip-Sample Surface Interactions for Various Acoustic-Atomic Force Microscopies

    NASA Technical Reports Server (NTRS)

    Cantrell, John H., Jr.; Cantrell, Sean A.

    2008-01-01

    A comprehensive analytical model of the interaction of the cantilever tip of the atomic force microscope (AFM) with the sample surface is developed that accounts for the nonlinearity of the tip-surface interaction force. The interaction is modeled as a nonlinear spring coupled at opposite ends to linear springs representing cantilever and sample surface oscillators. The model leads to a pair of coupled nonlinear differential equations that are solved analytically using a standard iteration procedure. Solutions are obtained for the phase and amplitude signals generated by various acoustic-atomic force microscope (A-AFM) techniques including force modulation microscopy, atomic force acoustic microscopy, ultrasonic force microscopy, heterodyne force microscopy, resonant difference-frequency atomic force ultrasonic microscopy (RDF-AFUM), and the commonly used intermittent contact mode (TappingMode) generally available on AFMs. The solutions are used to obtain a quantitative measure of image contrast resulting from variations in the Young modulus of the sample for the amplitude and phase images generated by the A-AFM techniques. Application of the model to RDF-AFUM and intermittent soft contact phase images of LaRC-cp2 polyimide polymer is discussed. The model predicts variations in the Young modulus of the material of 24 percent from the RDF-AFUM image and 18 percent from the intermittent soft contact image. Both predictions are in good agreement with the literature value of 21 percent obtained from independent, macroscopic measurements of sheet polymer material.
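
    The paper solves the coupled nonlinear oscillator equations analytically by iteration; as a purely illustrative cross-check, a toy version of such a system can also be integrated numerically. The sketch below drives a cantilever oscillator coupled to a surface oscillator through a spring with a quadratic nonlinearity; every parameter value is invented for illustration and none is taken from the model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's): cantilever/surface stiffness,
# quadratic coupling, damping, and a harmonic drive on the cantilever.
kc, ks = 1.0, 5.0
g2 = 0.3
gc, gs = 0.05, 0.05
F, w = 0.1, 1.1

def rhs(t, u):
    xc, vc, xs, vs = u
    d = xc - xs
    f_int = -d - g2 * d**2          # nonlinear tip-surface spring force
    return [vc, -gc * vc - kc * xc + f_int + F * np.cos(w * t),
            vs, -gs * vs - ks * xs - f_int]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0, 0.0], max_step=0.05)
print("late-time cantilever amplitude ~", np.abs(sol.y[0, -2000:]).max())
```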

  1. A Grammar-based Approach for Modeling User Interactions and Generating Suggestions During the Data Exploration Process.

    PubMed

    Dabek, Filip; Caban, Jesus J

    2017-01-01

    Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users who use visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face are paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems understanding and modeling user interactions with the intent of guiding a user through a task, thereby enhancing visual data exploration. The challenges faced and the necessary future steps to take are discussed, and, to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed, showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.

  2. Dynamic N-occupancy models: estimating demographic rates and local abundance from detection-nondetection data

    Treesearch

    Sam Rossman; Charles B. Yackulic; Sarah P. Saunders; Janice Reid; Ray Davis; Elise F. Zipkin

    2016-01-01

    Occupancy modeling is a widely used analytical technique for assessing species distributions and range dynamics. However, occupancy analyses frequently ignore variation in abundance of occupied sites, even though site abundances affect many of the parameters being estimated (e.g., extinction, colonization, detection probability). We introduce a new model (“dynamic

  3. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  4. Declining Enrollment: Why? What Are the Trends and Implications?; and Enrollment Forecasting Techniques.

    ERIC Educational Resources Information Center

    Finch, Harold L.; Tatham, Elaine L.

    This document presents a modified cohort survival model which can be of use in making enrollment projections. The model begins by analytically profiling an area's residents. Each person's demographic characteristics--sex, age, place of residence--are recorded in the computer memory. Four major input variables are then incorporated into the model:…

  5. Employment of adaptive learning techniques for the discrimination of acoustic emissions

    NASA Astrophysics Data System (ADS)

    Erkes, J. W.; McDonald, J. F.; Scarton, H. A.; Tam, K. C.; Kraft, R. P.

    1983-11-01

    The following aspects of this study on the discrimination of acoustic emissions (AE) were examined: (1) The analytical development and assessment of digital signal processing techniques for AE signal dereverberation, noise reduction, and source characterization; (2) The modeling and verification of some aspects of key selected techniques through a computer-based simulation; and (3) The study of signal propagation physics and their effect on received signal characteristics for relevant physical situations.

  6. Local Modelling of Groundwater Flow Using the Analytic Element Method: Three-dimensional Transient Unconfined Groundwater Flow With Partially Penetrating Wells and Ellipsoidal Inhomogeneities

    NASA Astrophysics Data System (ADS)

    Jankovic, I.; Barnes, R. J.; Soule, R.

    2001-12-01

    The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and the discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well and (5) spheroidal harmonics to account for the influence of the inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: the analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series). The head values along the lateral boundary may also be assigned by specifying the water-table gradient and a head value at a single point (as time series). A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.
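
    At its core, the analytic element method superposes closed-form solutions that each satisfy the governing equation exactly. The two-dimensional sketch below combines uniform flow with the classical well solution via the complex discharge potential; the locations and rates are invented for illustration, and the paper's three-dimensional elements (fictitious sources, images, spheroidal harmonics) are considerably richer.

```python
import numpy as np

# Minimal 2-D analytic element sketch: uniform flow plus pumping wells,
# superposed through the complex potential (all values illustrative).
Q0 = 1.0                                          # uniform-flow discharge rate
wells = [(-50 + 0j, 200.0), (80 + 40j, 120.0)]    # (location z_w, rate Q_w)

def potential(z):
    omega = -Q0 * z                               # uniform flow in +x direction
    for zw, Qw in wells:
        omega += Qw / (2 * np.pi) * np.log(z - zw)  # classical well solution
    return omega

z = np.array([0 + 0j, 10 + 5j, -40 + 1j])
print("discharge potential Phi:", potential(z).real.round(2))
print("stream function    Psi:", potential(z).imag.round(2))
```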

  7. Summary of investigations of engine response to distorted inlet conditions

    NASA Technical Reports Server (NTRS)

    Biesiadny, T. J.; Braithwaite, W. M.; Soeder, R. H.; Abdelwahab, M.

    1986-01-01

    A survey is presented of experimental and analytical experience of the NASA Lewis Research Center in engine response to inlet temperature and pressure distortions. This includes a description of the hardware and techniques employed, and a summary of the highlights of experimental investigations and analytical modeling. Distortion devices successfully simulated inlet distortion, and knowledge was gained about compression system response to different types of distortion. A list of NASA research references is included.

  8. Analytical learning and term-rewriting systems

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Gamble, Evan

    1990-01-01

    Analytical learning is a set of machine learning techniques for revising the representation of a theory based on a small set of examples of that theory. When the representation of the theory is correct and complete but perhaps inefficient, an important objective of such analysis is to improve the computational efficiency of the representation. Several algorithms with this purpose have been suggested, most of which are closely tied to a first order logical language and are variants of goal regression, such as the familiar explanation based generalization (EBG) procedure. But because predicate calculus is a poor representation for some domains, these learning algorithms are extended to apply to other computational models. It is shown that the goal regression technique applies to a large family of programming languages, all based on a kind of term rewriting system. Included in this family are three language families of importance to artificial intelligence: logic programming, such as Prolog; lambda calculus, such as LISP; and combinator-based languages, such as FP. A new analytical learning algorithm, AL-2, is exhibited that learns from success but is otherwise quite different from EBG. These results suggest that term rewriting systems are a good framework for analytical learning research in general, and that further research should be directed toward developing new techniques.
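
    As a minimal illustration of the term-rewriting setting (not of EBG or the AL-2 algorithm), the toy sketch below normalizes Peano-arithmetic terms, represented as nested tuples, with the two standard rules add(0, y) -> y and add(s(x), y) -> s(add(x, y)), applying the first applicable rule anywhere in the term until a normal form is reached.

```python
# Toy term-rewriting system for Peano addition; terms are nested tuples.
def rewrite(t):
    """Apply a rule at the root if one matches, else recurse into subterms.
    Returns (term, changed)."""
    if isinstance(t, tuple) and t[0] == "add":
        _, a, b = t
        if a == "0":                                  # add(0, y) -> y
            return b, True
        if isinstance(a, tuple) and a[0] == "s":      # add(s(x), y) -> s(add(x, y))
            return ("s", ("add", a[1], b)), True
    if isinstance(t, tuple):
        for i, sub in enumerate(t[1:], start=1):
            new, changed = rewrite(sub)
            if changed:
                return t[:i] + (new,) + t[i + 1:], True
    return t, False

def normalize(t):
    changed = True
    while changed:
        t, changed = rewrite(t)
    return t

two = ("s", ("s", "0"))
print(normalize(("add", two, two)))   # -> ('s', ('s', ('s', ('s', '0'))))
```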

  9. Recent Developments in the Speciation and Determination of Mercury Using Various Analytical Techniques

    PubMed Central

    Suvarapu, Lakshmi Narayana; Baek, Sung-Ok

    2015-01-01

    This paper reviews the speciation and determination of mercury by various analytical techniques such as atomic absorption spectrometry, voltammetry, inductively coupled plasma techniques, spectrophotometry, spectrofluorometry, high performance liquid chromatography, and gas chromatography. Approximately 126 research papers on the speciation and determination of mercury by various analytical techniques published in international journals since 2013 are reviewed. PMID:26236539

  10. Computer modeling of lung cancer diagnosis-to-treatment process

    PubMed Central

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick

    2015-01-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care-delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the necessary data and procedures to develop a DES model for lung cancer diagnosis, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their applications in healthcare are introduced, and the approach to derive a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181
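
    A sketch of the Markov chain approach: treat care-pathway stages as states of an absorbing chain and read expected times to treatment off the fundamental matrix N = (I - Q)^(-1). The states and transition probabilities below are hypothetical, chosen only to make the computation concrete.

```python
import numpy as np

# Hypothetical absorbing Markov chain for a diagnosis-to-treatment pathway.
states = ["referral", "imaging", "biopsy", "staging", "treatment"]  # last absorbs
P = np.array([
    [0.00, 0.90, 0.05, 0.00, 0.05],
    [0.00, 0.10, 0.70, 0.10, 0.10],
    [0.00, 0.00, 0.05, 0.80, 0.15],
    [0.00, 0.00, 0.00, 0.10, 0.90],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

Q = P[:-1, :-1]                        # transitions among transient states
N = np.linalg.inv(np.eye(4) - Q)       # fundamental matrix
print("expected steps to treatment from each state:", N.sum(axis=1).round(2))
```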

  11. Analysing uncertainties of supply and demand in the future use of hydrogen as an energy vector

    NASA Astrophysics Data System (ADS)

    Lenel, U. R.; Davies, D. G. S.; Moore, M. A.

    An analytical technique (Analysis with Uncertain Quantities), developed at Fulmer, is being used to examine the sensitivity of the outcome to uncertainties in input quantities in order to highlight which input quantities critically affect the potential role of hydrogen. The work presented here includes an outline of the model and the analysis technique, along with basic considerations of the input quantities to the model (demand, supply and constraints). Some examples are given of probabilistic estimates of input quantities.
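
    A generic way to see how probabilistic input estimates drive output sensitivity is simple Monte Carlo propagation. The sketch below is a stand-in only: the distributions and the toy supply/demand relation are invented, not the Fulmer technique or model.

```python
import numpy as np

# Sample uncertain inputs, push them through a toy model, report percentiles.
rng = np.random.default_rng(1)
n = 100_000
demand = rng.triangular(50.0, 80.0, 130.0, n)   # uncertain demand (arbitrary units)
unit_cost = rng.normal(12.0, 2.5, n)            # uncertain supply cost per unit
deliverable = rng.uniform(0.6, 1.0, n)          # constraint: fraction deliverable

served = demand * deliverable                   # demand actually met
total_cost = served * unit_cost
print("total-cost percentiles (5/50/95):",
      np.percentile(total_cost, [5, 50, 95]).round(1))
```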

  12. Measurement of Plastic Stress and Strain for Analytical Method Verification (MSFC Center Director's Discretionary Fund Project No. 93-08)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Steeve, B. E.; Swanson, G. R.

    1999-01-01

    The analytical prediction of stress, strain, and fatigue life at locations experiencing local plasticity is full of uncertainties. Much of this uncertainty arises from the material models and their use in the numerical techniques used to solve plasticity problems. Experimental measurements of actual plastic strains would allow the validity of these models and solutions to be tested. This memorandum describes how experimental plastic residual strain measurements were used to verify the results of a thermally induced plastic fatigue failure analysis of a space shuttle main engine fuel pump component.

  13. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities

    PubMed Central

    Al-Kindi, Khalifa M; Kwan, Paul; R Andrew, Nigel; Welch, Mitchell

    2017-01-01

    In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human practice related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops. PMID:28875085

  14. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities.

    PubMed

    Al-Kindi, Khalifa M; Kwan, Paul; R Andrew, Nigel; Welch, Mitchell

    2017-01-01

    In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human practice related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.

  15. The acoustics of ducted propellers

    NASA Astrophysics Data System (ADS)

    Ali, Sherif F.

    The return of the propeller to long-haul commercial service may be rapidly approaching in the form of advanced "prop fans". It is believed that the advanced turboprop will considerably reduce operating costs. However, such aircraft will come into general use only if their noise levels meet the standards of community acceptability currently applied to existing aircraft. In this work a time-marching boundary-element technique is developed and used to study the acoustics of ducted propellers. The numerical technique developed in this work eliminates the inherent instability suffered by conventional approaches. The methodology is validated against other numerical and analytical results. The results show excellent agreement with the analytical solution and show no indication of unstable behavior. For the ducted propeller problem, the propeller is modeled by rotating source-sink pairs, and the duct is modeled by a rigid annular body of elliptical cross-section. Using the model and the developed technique, the effect of different parameters on the acoustic field is predicted and analyzed. This includes the effect of duct length, propeller axial location, and source Mach number. The results of this study show that installing a short duct around the propeller can reduce the noise that reaches an observer on a sideline.

  16. A Comparison of the Glass Meta-Analytic Technique with the Hunter-Schmidt Meta-Analytic Technique on Three Studies from the Education Literature.

    ERIC Educational Resources Information Center

    Hough, Susan L.; Hall, Bruce W.

    The meta-analytic techniques of G. V. Glass (1976) and J. E. Hunter and F. L. Schmidt (1977) were compared through their application to three meta-analytic studies from education literature. The following hypotheses were explored: (1) the overall mean effect size would be larger in a Hunter-Schmidt meta-analysis (HSMA) than in a Glass…
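
    One core difference between the two approaches is easy to demonstrate: a Glass-style analysis averages study effect sizes directly, while a Hunter-Schmidt analysis weights by sample size (and, in full form, corrects for measurement artifacts such as unreliability and range restriction). The numbers below are invented purely to show the contrast.

```python
import numpy as np

d = np.array([0.20, 0.45, 0.10, 0.60])   # hypothetical study effect sizes
n = np.array([30, 250, 40, 500])         # hypothetical study sample sizes

print("unweighted (Glass-style) mean effect size:", round(d.mean(), 3))
print("N-weighted (Hunter-Schmidt-style) mean:   ",
      round((n * d).sum() / n.sum(), 3))
```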

  17. Turbomachinery noise

    NASA Astrophysics Data System (ADS)

    Groeneweg, John F.; Sofrin, Thomas G.; Rice, Edward J.; Gliebe, Phillip R.

    1991-08-01

    Summarized here are key advances in experimental techniques and theoretical applications which point the way to a broad understanding and control of turbomachinery noise. On the experimental side, the development of effective inflow control techniques makes it possible to conduct, in ground based facilities, definitive experiments in internally controlled blade row interactions. Results can now be valid indicators of flight behavior and can provide a firm base for comparison with analytical results. Inflow control coupled with detailed diagnostic tools such as blade pressure measurements can be used to uncover the more subtle mechanisms such as rotor strut interaction, which can set tone levels for some engine configurations. Initial mappings of rotor wake-vortex flow fields have provided a data base for a first generation semiempirical flow disturbance model. Laser velocimetry offers a nonintrusive method for validating and improving the model. Digital data systems and signal processing algorithms are bringing mode measurement closer to a working tool that can be frequently applied to a real machine such as a turbofan engine. On the analytical side, models of most of the links in the chain from turbomachine blade source to far field observation point have been formulated. Three dimensional lifting surface theory for blade rows, including source noncompactness and cascade effects, blade row transmission models incorporating mode and frequency scattering, and modal radiation calculations, including hybrid numerical-analytical approaches, are tools which await further application.

  18. Modeling of ion acceleration through drift and diffusion at interplanetary shocks

    NASA Technical Reports Server (NTRS)

    Decker, R. B.; Vlahos, L.

    1986-01-01

    A test particle simulation designed to model ion acceleration through drift and diffusion at interplanetary shocks is described. The technique consists of integrating along exact particle orbits in a system where the angle between the shock normal and mean upstream magnetic field, the level of magnetic fluctuations, and the energy of injected particles can assume a range of values. The technique makes it possible to study time-dependent shock acceleration under conditions not amenable to analytical techniques. To illustrate the capability of the numerical model, proton acceleration was considered under conditions appropriate for interplanetary shocks at 1 AU, including large-amplitude transverse magnetic fluctuations derived from power spectra of both ambient and shock-associated MHD waves.
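
    Integrating along exact particle orbits is commonly done with a gyro-robust scheme such as the Boris pusher; the generic, non-relativistic sketch below uses uniform fields in invented units, whereas the paper's model evaluates fluctuating shock-associated fields along the orbit, and its actual integrator is not specified here.

```python
import numpy as np

def boris_push(x, v, E, B, qm, dt, steps):
    """Standard Boris scheme for a test particle in given E and B fields
    (uniform here for brevity)."""
    out = []
    for _ in range(steps):
        v_minus = v + 0.5 * qm * dt * E
        t = 0.5 * qm * dt * B                     # half-step rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)
        v = v_minus + np.cross(v_prime, s) + 0.5 * qm * dt * E
        x = x + v * dt
        out.append(x.copy())
    return np.array(out)

# Proton-like gyration in a uniform magnetic field (illustrative units)
traj = boris_push(np.zeros(3), np.array([1.0, 0.0, 0.1]),
                  E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                  qm=1.0, dt=0.05, steps=500)
print("gyro-orbit radius ~", np.hypot(traj[:, 0], traj[:, 1]).max() / 2)
```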

  19. Measurement and Modelling: Sequential Use of Analytical Techniques in a Study of Risk-Taking in Decision-Making by School Principals

    ERIC Educational Resources Information Center

    Trimmer, Karen

    2016-01-01

    This paper investigates reasoned risk-taking in decision-making by school principals using a methodology that combines sequential use of psychometric and traditional measurement techniques. Risk-taking is defined as when decisions are made that are not compliant with the regulatory framework, the primary governance mechanism for public schools in…

  20. Analysis and Experimental Investigation of Optimum Design of Thermoelectric Cooling/Heating System for Car Seat Climate Control (CSCC)

    NASA Astrophysics Data System (ADS)

    Elarusi, Abdulmunaem; Attar, Alaa; Lee, HoSung

    2018-02-01

    The optimum design of a thermoelectric system for application in car seat climate control has been modeled and its performance evaluated experimentally. The optimum design of the thermoelectric device combining two heat exchangers was obtained by using a newly developed optimization method based on dimensional analysis. Based on the analytical optimum design results, a commercial thermoelectric cooler and heat sinks were selected to design and construct the climate control heat pump. This work focuses on testing the system performance in both cooling and heating modes to ensure accurate analytical modeling. Although the analytical performance was calculated using the simple ideal thermoelectric equations with effective thermoelectric material properties, it showed very good agreement with experiment for most operating conditions.
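
    The "ideal thermoelectric equations" mentioned in the abstract are standard: cooling power Q_c = alpha*I*T_c - I^2*R/2 - K*dT and input power W = alpha*I*dT + I^2*R, with COP = Q_c/W. The sketch below evaluates them for a module with effective properties; the parameter values are illustrative, not the paper's.

```python
# Ideal thermoelectric-cooler equations with effective module properties.
alpha, R, K = 0.05, 2.0, 0.3    # Seebeck (V/K), resistance (ohm), conductance (W/K)
Tc, Th = 290.0, 320.0           # cold/hot junction temperatures (K)

def tec_performance(I):
    dT = Th - Tc
    Qc = alpha * I * Tc - 0.5 * I**2 * R - K * dT   # net cooling power (W)
    W = alpha * I * dT + I**2 * R                   # electrical input power (W)
    return Qc, Qc / W                               # (cooling power, COP)

for I in (0.5, 1.0, 1.5, 2.0):
    Qc, cop = tec_performance(I)
    print(f"I={I:.1f} A  Qc={Qc:6.2f} W  COP={cop:5.2f}")
```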

  1. Tidal analysis of Met rocket wind data

    NASA Technical Reports Server (NTRS)

    Bedinger, J. F.; Constantinides, E.

    1976-01-01

    A method of analyzing Met Rocket wind data is described. Modern tidal theory and specialized analytical techniques were used to resolve specific tidal modes and prevailing components in observed wind data. A representation of the wind which is continuous in both space and time was formulated. Such a representation allows direct comparison with theory, allows the derivation of other quantities such as temperature and pressure which in turn may be compared with observed values, and allows the formation of a wind model which extends over a broader range of space and time. Significant diurnal tidal modes with wavelengths of 10 and 7 km were present in the data and were resolved by the analytical technique.
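
    The heart of such an analysis is a least-squares fit of a prevailing component plus periodic tidal modes to the observed winds. A minimal sketch, with synthetic data standing in for Met Rocket observations: fit u(t) = u0 + A cos(wt) + B sin(wt) at the diurnal frequency and recover amplitude and phase.

```python
import numpy as np

# Synthetic wind samples: prevailing 5 m/s plus a diurnal tide of amplitude 8
t_hr = np.arange(0.0, 72.0, 3.0)
w = 2 * np.pi / 24.0                        # diurnal angular frequency (rad/hr)
rng = np.random.default_rng(2)
u = 5.0 + 8.0 * np.cos(w * t_hr - 1.0) + rng.normal(0.0, 1.0, t_hr.size)

# Design matrix [1, cos, sin]; amplitude/phase follow from the coefficients
G = np.column_stack([np.ones_like(t_hr), np.cos(w * t_hr), np.sin(w * t_hr)])
mean, a, b = np.linalg.lstsq(G, u, rcond=None)[0]
print(f"prevailing={mean:.2f}, amplitude={np.hypot(a, b):.2f}, "
      f"phase={np.arctan2(b, a):.2f} rad")
```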

  2. Analytical techniques and instrumentation: A compilation. [analytical instrumentation, materials performance, and systems analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical information is presented covering the areas of: (1) analytical instrumentation useful in the analysis of physical phenomena; (2) analytical techniques used to determine the performance of materials; and (3) systems and component analyses for design and quality control.

  3. Electrolyte system strategies for anionic ITP with ESI-MS detection. 3. The ITP spacer technique in moving-boundary systems and configurations with two self-maintained ITP subsystems.

    PubMed

    Gebauer, Petr; Malá, Zdena; Boček, Petr

    2014-03-01

    This contribution is the third part of the project on strategies used in the selection and tuning of electrolyte systems for anionic ITP with ESI-MS detection. The strategy presented here is based on the creation of self-maintained ITP subsystems in moving-boundary systems and describes two new principal approaches offering physical separation of analyte zones from their common ITP stack and/or simultaneous selective stacking of two different analyte groups. Both strategic directions are based on extending the number of components forming the electrolyte system by adding a third suitable anion. The first method is the application of the spacer technique to moving-boundary anionic ITP systems; the second method is a technique utilizing a moving-boundary ITP system in which two ITP subsystems exist and move with mutually different velocities. It is essential for ESI detection that both methods can be based on electrolyte systems containing only a few simple chemicals, such as simple volatile organic acids (formic and acetic) and their ammonium salts. The properties of both techniques are defined theoretically and discussed from the viewpoint of their applicability to trace analysis by ITP-ESI-MS. Examples of system design for selected model separations of preservatives and pharmaceuticals illustrate the validity of the theoretical model and the application potential of the proposed techniques by both computer simulations and experiments. Both new methods enhance the application range of ITP-MS and may be beneficial particularly for complex multicomponent samples or for analytes with identical molecular mass.

  4. Three-dimensional circulation dynamics of along-channel flow in stratified estuaries

    NASA Astrophysics Data System (ADS)

    Musiak, Jeffery Daniel

    Estuaries are vital because they are the major interface between humans and the oceans and provide valuable habitat for a wide range of organisms. Therefore it is important to model estuarine circulation to gain a better comprehension of the mechanics involved and how people affect estuaries. To this end, this dissertation combines analysis of data collected in the Columbia River estuary (CRE) with novel data processing and modeling techniques to further the understanding of estuaries that are strongly forced by riverflow and tides. The primary hypothesis tested in this work is that the three-dimensional (3-D) variability in along-channel currents in a strongly forced estuary can be largely accounted for by including the lateral variations in density and bathymetry but neglecting the secondary, or lateral, flow. Of course, the forcing must also include riverflow and oceanic tides. Incorporating this simplification and the modeling ideas put forth by others with new modeling techniques and new ideas on estuarine circulation will allow me to create a semi-analytical quasi 3-D profile model. This approach was chosen because it is intermediate in complexity between purely analytical models, which, if tractable, are often too simple to be useful, and 3-D numerical models, which can have excellent resolution but require large amounts of time, computer memory and computing power. Validation of the model will be accomplished using velocity and density data collected in the Columbia River Estuary and by comparison to analytical solutions. Components of the modeling developed here include: (1) development of a 1-D barotropic model for tidal wave propagation in frictionally dominated systems with strong topography. This model can have multiple tidal constituents and multiply connected channels. (2) Development and verification of a new quasi 3-D semi-analytical velocity profile model applicable to estuarine systems which are strongly forced by both oceanic tides and riverflow. This model includes diurnal and semi-diurnal tidal and non-linearly generated overtide circulation and residual circulation driven by riverflow, baroclinic forcing, surface wind stress and non-linear tidal forcing. (3) Demonstration that much of the lateral variation in along-channel currents is caused by variations in along-channel density forcing and bathymetry.

  5. 3D surface pressure measurement with single light-field camera and pressure-sensitive paint

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth

    2018-05-01

    A novel technique that simultaneously measures three-dimensional model geometry, as well as surface pressure distribution, with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a similar hardware setup to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for relatively large-curvature models, and the pressure results compare well with the Schlieren tests, analytical calculations, and numerical simulations.

  6. Elliptic-cylindrical analytical flux-rope model for ICMEs

    NASA Astrophysics Data System (ADS)

    Nieves-Chinchilla, T.; Linton, M.; Hidalgo, M. A. U.; Vourlidas, A.

    2016-12-01

    We present an analytical flux-rope model for realistic magnetic structures embedded in Interplanetary Coronal Mass Ejections. The framework of this model was established by Nieves-Chinchilla et al. (2016) with the circular-cylindrical analytical flux rope model, under the concept developed by Hidalgo et al. (2002). Elliptic-cylindrical geometry establishes the first grade of complexity in a series of models. The model attempts to describe the magnetic flux rope topology with a distorted cross-section as a possible consequence of the interaction with the solar wind. In this model, the flux rope is completely described in non-Euclidean geometry. The Maxwell equations are solved using tensor calculus, consistently with the chosen geometry, with invariance along the axial component, and with the only assumption of no radial current density. The model is generalized in terms of the radial dependence of the poloidal and axial current density components. The misalignment between current density and magnetic field is studied in detail for the individual cases of different pairs of indexes for the axial and poloidal current density components. This theoretical analysis provides a map of the force distribution inside the flux rope. The reconstruction technique has been adapted to the model and compared with a set of in situ ICME events with different in situ signatures. The successful results are limited to cases with clear in situ signatures of distortion. However, the model adds a piece to the puzzle of the physical-analytical representation of these magnetic structures. Other effects such as axial curvature, expansion and/or interaction could be incorporated in the future to fully understand the magnetic structure. Finally, the mathematical formulation of this model opens the door to the next model: a toroidal flux rope analytical model.

  7. Mobley et al. Turnover Model Reanalysis and Review of Existing Data.

    ERIC Educational Resources Information Center

    Dalessio, Anthony; And Others

    Job satisfaction has been identified as one of the most important antecedents of turnover, although it rarely accounts for more than 16% of the variance in employee withdrawal. Several data sets collected on the Mobley, Horner, and Hollingsworth (1978) model of turnover were reanalyzed with path analytic techniques. Data analyses revealed support…

  8. The Effects of Mathematical Modelling on Students' Achievement-Meta-Analysis of Research

    ERIC Educational Resources Information Center

    Sokolowski, Andrzej

    2015-01-01

    Using meta-analytic techniques this study examined the effects of applying mathematical modelling to support student math knowledge acquisition at the high school and college levels. The research encompassed experimental studies published in peer-reviewed journals between January 1, 2000, and February 27, 2013. Such formulated orientation called…

  9. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present research paper introduces an innovative method to compute the NLSE using principles in multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus. The linear pseudo-model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least-squares estimator) for the fitting of nonlinear regression functions in 2006. Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
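
    The standard iterative route to the NLSE is Gauss-Newton: linearize the model around the current parameter estimate and solve a linear least-squares subproblem for the update, which is exactly where the linear pseudo-model idea enters. The sketch below fits an illustrative model y = a*exp(b*x); the model form and data are invented stand-ins, not the paper's derivation.

```python
import numpy as np

# Synthetic data from y = 2 * exp(1.5 x) plus noise
x = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(3)
y = 2.0 * np.exp(1.5 * x) + rng.normal(0.0, 0.05, x.size)

theta = np.array([1.0, 1.0])                       # initial guess for (a, b)
for _ in range(20):
    a, b = theta
    r = y - a * np.exp(b * x)                      # residuals at current guess
    J = np.column_stack([np.exp(b * x),            # d f / d a
                         a * x * np.exp(b * x)])   # d f / d b
    theta += np.linalg.lstsq(J, r, rcond=None)[0]  # linearized LS update
print("estimated (a, b):", theta.round(3))
```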

  10. Modeling of oceanic vortices

    NASA Astrophysics Data System (ADS)

    Cushman-Roisin, B.

    Following on a tradition of biannual meetings, the 5th Colloquium on the Modeling of Oceanic Vortices was held May 21-23, 1990, at the Thayer School of Engineering at Dartmouth College, Hanover, N.H. The colloquium series, sponsored by the Office of Naval Research, is intended to gather oceanographers who contribute to our understanding of oceanic mesoscale vortices via analytical, numerical and experimental modeling techniques.

  11. High Technology Service Value Maximization through an MCDM-Based Innovative e-Business Model

    NASA Astrophysics Data System (ADS)

    Huang, Chi-Yo; Tzeng, Gwo-Hshiung; Ho, Wen-Rong; Chuang, Hsiu-Tyan; Lue, Yeou-Feng

    The emergence of the Internet has changed high technology marketing channels thoroughly in the past decade, and e-commerce has already become one of the most efficient channels through which high technology firms may skip intermediaries and reach end customers directly. However, defining appropriate e-business models for commercializing new high technology products or services through the Internet is not easy. To overcome the above-mentioned problems, a novel analytic framework, based on the concept of expanding high technology customers' competence sets by leveraging high technology service firms' capabilities and resources as well as novel multiple criteria decision making (MCDM) techniques, will be proposed in order to define an appropriate e-business model. An empirical case study of a silicon intellectual property (SIP) commercialization e-business model based on MCDM techniques will be provided to verify the effectiveness of this novel analytic framework. The analysis successfully assisted a Taiwanese IC design service firm in defining an e-business model for maximizing its customers' SIP transactions. In the future, the novel MCDM framework can be applied successfully to novel business model definitions in the high technology industry.
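
    For readers unfamiliar with MCDM mechanics, here is a generic TOPSIS sketch for ranking candidate e-business models against weighted criteria. TOPSIS is only one standard MCDM technique; the paper's own combination of methods may differ, and the decision matrix, weights, and alternatives below are entirely hypothetical.

```python
import numpy as np

# Rows: candidate e-business models; columns: benefit criteria
# (e.g., market reach, margin, customer competence-set fit) - all invented.
X = np.array([[7.0, 5.0, 8.0],
              [6.0, 8.0, 6.0],
              [9.0, 6.0, 5.0]])
w = np.array([0.5, 0.3, 0.2])                 # criterion weights

V = w * (X / np.linalg.norm(X, axis=0))       # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)    # all criteria treated as benefits
d_pos = np.linalg.norm(V - ideal, axis=1)     # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)      # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)
print("closeness to ideal:", closeness.round(3), "best:", closeness.argmax())
```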

  12. Automated Solid Phase Extraction (SPE) LC/NMR Applied to the Structural Analysis of Extractable Compounds from a Pharmaceutical Packaging Material of Construction.

    PubMed

    Norwood, Daniel L; Mullis, James O; Davis, Mark; Pennino, Scott; Egert, Thomas; Gonnella, Nina C

    2013-01-01

    The structural analysis (i.e., identification) of organic chemical entities leached into drug product formulations has traditionally been accomplished with techniques involving the combination of chromatography with mass spectrometry. These include gas chromatography/mass spectrometry (GC/MS) for volatile and semi-volatile compounds, and various forms of liquid chromatography/mass spectrometry (LC/MS or HPLC/MS) for semi-volatile and relatively non-volatile compounds. GC/MS and LC/MS techniques are complementary for structural analysis of leachables and potentially leachable organic compounds produced via laboratory extraction of pharmaceutical container closure/delivery system components and corresponding materials of construction. Both hyphenated analytical techniques possess the separating capability, compound specific detection attributes, and sensitivity required to effectively analyze complex mixtures of trace level organic compounds. However, hyphenated techniques based on mass spectrometry are limited by the inability to determine complete bond connectivity, the inability to distinguish between many types of structural isomers, and the inability to unambiguously determine aromatic substitution patterns. Nuclear magnetic resonance spectroscopy (NMR) does not have these limitations; hence it can serve as a complement to mass spectrometry. However, NMR technology is inherently insensitive and its ability to interface with chromatography has been historically challenging. This article describes the application of NMR coupled with liquid chromatography and automated solid phase extraction (SPE-LC/NMR) to the structural analysis of extractable organic compounds from a pharmaceutical packaging material of construction. The SPE-LC/NMR technology combined with micro-cryoprobe technology afforded the sensitivity and sample mass required for full structure elucidation. Optimization of the SPE-LC/NMR analytical method was achieved using a series of model compounds representing the chemical diversity of extractables. This study demonstrates the complementary nature of SPE-LC/NMR with LC/MS for this particular pharmaceutical application. The identification of impurities leached into drugs from the components and materials associated with pharmaceutical containers, packaging components, and materials has historically been done using laboratory techniques based on the combination of chromatography with mass spectrometry. Such analytical techniques are widely recognized as having the selectivity and sensitivity required to separate the complex mixtures of impurities often encountered in such identification studies, including both the identification of leachable impurities as well as potential leachable impurities produced by laboratory extraction of packaging components and materials. However, while mass spectrometry-based analytical techniques have limitations for this application, newer analytical techniques based on the combination of chromatography with nuclear magnetic resonance spectroscopy provide an added dimension of structural definition. This article describes the development, optimization, and application of an analytical technique based on the combination of chromatography and nuclear magnetic resonance spectroscopy to the identification of potential leachable impurities from a pharmaceutical packaging material. The complementary nature of the analytical techniques for this particular pharmaceutical application is demonstrated.

  13. Deriving Earth Science Data Analytics Tools/Techniques Requirements

    NASA Astrophysics Data System (ADS)

    Kempler, S. J.

    2015-12-01

    Data Analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science datasets has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, which are very different from those in business and so require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.

  14. Green analytical chemistry--theory and practice.

    PubMed

    Tobiszewski, Marek; Mechlińska, Agata; Namieśnik, Jacek

    2010-08-01

    This tutorial review summarises the current state of green analytical chemistry with special emphasis on environmentally friendly sample preparation techniques. Green analytical chemistry is a part of the sustainable development concept; its history and origins are described. Miniaturisation of analytical devices and shortening the time elapsing between performing analysis and obtaining reliable analytical results are important aspects of green analytical chemistry. Solventless extraction techniques, the application of alternative solvents and assisted extractions are considered to be the main approaches complying with green analytical chemistry principles.

  15. A microstructural lattice model for strain oriented problems: A combined Monte Carlo finite element technique

    NASA Technical Reports Server (NTRS)

    Gayda, J.; Srolovitz, D. J.

    1987-01-01

    A specialized, microstructural lattice model, termed MCFET for combined Monte Carlo Finite Element Technique, was developed which simulates microstructural evolution in material systems where modulated phases occur and the directionality of the modulation is influenced by internal and external stresses. In this approach, the microstructure is discretized onto a fine lattice. Each element in the lattice is labelled in accordance with its microstructural identity. Diffusion of material at elevated temperatures is simulated by allowing exchanges of neighboring elements if the exchange lowers the total energy of the system. A Monte Carlo approach is used to select the exchange site, while the change in energy associated with stress fields is computed using a finite element technique. The MCFET analysis was validated by comparing this approach with a closed-form, analytical method for stress-assisted shape changes of a single particle in an infinite matrix. Sample MCFET analyses for multiparticle problems were also run, and in general the resulting microstructural changes associated with the application of an external stress are similar to those observed in Ni-Al-Cr alloys at elevated temperature.
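
    The Monte Carlo half of the scheme is easy to sketch in isolation. The toy below swaps neighboring site labels on a 1-D two-phase lattice whenever the swap does not raise a simple unlike-neighbor bond energy; the real MCFET adds the elastic energy term evaluated by finite elements, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)
lattice = rng.integers(0, 2, size=64)      # random two-phase 1-D microstructure

def energy(lat):
    """Count unlike-neighbor bonds: a crude interfacial energy."""
    return int(np.sum(lat[:-1] != lat[1:]))

for _ in range(20_000):
    i = rng.integers(0, lattice.size - 1)  # pick a neighboring pair to exchange
    trial = lattice.copy()
    trial[i], trial[i + 1] = trial[i + 1], trial[i]
    if energy(trial) <= energy(lattice):   # accept energy-lowering (or neutral) swaps
        lattice = trial

print("unlike-neighbor bonds after annealing:", energy(lattice))
```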

  16. From Data to Knowledge – Promising Analytical Tools and Techniques for Capture and Reuse of Corporate Knowledge and to Aid in the State Evaluation Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danielson, Gary R.; Augustenborg, Elsa C.; Beck, Andrew E.

    2010-10-29

    The IAEA is challenged with limited availability of human resources for inspection and data analysis while proliferation threats increase. PNNL has a variety of IT solutions and techniques (at varying levels of maturity and development) that take raw data closer to useful knowledge, thereby assisting with and standardizing the analytical processes. This paper highlights some PNNL tools and techniques which are applicable to the international safeguards community, including:
    • Intelligent in-situ triage of data prior to reliable transmission to an analysis center, resulting in the transmission of smaller and more relevant data sets
    • Capture of expert knowledge in re-usable search strings tailored to specific mission outcomes
    • Image-based searching fused with text-based searching
    • Use of gaming to discover unexpected proliferation scenarios
    • Process modeling (e.g., the Physical Model) as the basis for an information integration portal, which links to data storage locations along with analyst annotations, categorizations, geographic data, search strings and visualization outputs

  17. Application of Semi-analytical Satellite Theory orbit propagator to orbit determination for space object catalog maintenance

    NASA Astrophysics Data System (ADS)

    Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke

    2016-05-01

    Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with fidelity force models for a comprehensive grid of low, medium, and high altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size and gravity field order to assess its performance for a full range of relevant use cases. For use in orbit determination, a robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.

  18. Value of Earth Observations: Key principles and techniques of socioeconomic benefits analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Friedl, L.; Macauley, M.; Bernknopf, R.

    2013-12-01

    Internationally, multiple organizations are placing greater emphasis on the societal benefits that governments, businesses, and NGOs can derive from applications of Earth-observing satellite observations, research, and models. A growing set of qualitative, anecdotal examples on the uses of Earth observations across a range of sectors can be complemented by the quantitative substantiation of the socioeconomic benefits. In turn, the expanding breadth of environmental data available and the awareness of their beneficial applications to inform decisions can support new products and services by companies, agencies, and civil society. There are, however, significant efforts needed to bridge the Earth sciences and social and economic sciences fields to build capacity, develop case studies, and refine analytic techniques in quantifying socioeconomic benefits from the use of Earth observations. Some government programs, such as the NASA Earth Science Division's Applied Sciences Program have initiated activities in recent years to quantify the socioeconomic benefits from applications of Earth observations research, and to develop multidisciplinary models for organizations' decision-making activities. A community of practice has conducted workshops, developed impact analysis reports, published a book, developed a primer, and pursued other activities to advance analytic methodologies and build capacity. This paper will present an overview of measuring socioeconomic impacts of Earth observations and how the measures can be translated into a value of Earth observation information. It will address key terms, techniques, principles and applications of socioeconomic impact analyses. It will also discuss activities to pursue a research agenda on analytic techniques, develop a body of knowledge, and promote broader skills and capabilities.

  19. Researching Mental Health Disorders in the Era of Social Media: Systematic Review

    PubMed Central

    Vadillo, Miguel A; Curcin, Vasa

    2017-01-01

    Background Mental illness is quickly becoming one of the most prevalent public health problems worldwide. Social network platforms, where users can express their emotions, feelings, and thoughts, are a valuable source of data for researching mental health, and techniques based on machine learning are increasingly used for this purpose. Objective The objective of this review was to explore the scope and limits of cutting-edge techniques that researchers are using for predictive analytics in mental health and to review associated issues, such as ethical concerns, in this area of research. Methods We performed a systematic literature review in March 2017, using keywords to search articles on data mining of social network data in the context of common mental health disorders, published between 2010 and March 8, 2017 in medical and computer science journals. Results The initial search returned a total of 5386 articles. Following a careful analysis of the titles, abstracts, and main texts, we selected 48 articles for review. We coded the articles according to key characteristics, techniques used for data collection, data preprocessing, feature extraction, feature selection, model construction, and model verification. The most common analytical method was text analysis, with several studies using different flavors of image analysis and social interaction graph analysis. Conclusions Despite an increasing number of studies investigating mental health issues using social network data, some common problems persist. Assembling large, high-quality datasets of social media users with mental disorder is problematic, not only due to biases associated with the collection methods, but also with regard to managing consent and selecting appropriate analytics techniques. PMID:28663166

  20. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    NASA Astrophysics Data System (ADS)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storages may vary along the stream due to stream hydraulic conditions and the characteristics of storage. Analytical solutions of transient storage models in literature didn't cover the spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously performs integral transforms to the concentrations in the stream and in storage zones by using the single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with the spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the multiple-zone transient storage model. The derived semi-analytical solution is validated against the field data in literature. Good agreement between the computed data and the field data is obtained. Some illustrative examples are formulated to demonstrate the applications of the present solution. It is shown that solute transport can be greatly affected by the variation of mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is big or the mass exchange coefficient is small, more reaches are recommended to calibrate the parameter.
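
    For reference, a standard single-zone transient storage model, of which the paper's multiple-zone, spatially non-uniform model is a generalization, reads in common notation:

        \[
        \frac{\partial C}{\partial t} = -\frac{Q}{A}\frac{\partial C}{\partial x}
          + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
          + \alpha\,(C_s - C), \qquad
        \frac{\partial C_s}{\partial t} = \alpha \frac{A}{A_s}\,(C - C_s),
        \]

    where C and C_s are the concentrations in the stream and in storage, Q is discharge, A and A_s are the stream and storage cross-sectional areas, D is the dispersion coefficient, and α is the mass exchange coefficient. In the multiple-zone, spatially non-uniform setting, the exchange term is summed over zones and α and A_s may vary with x; the exact equations solved are those of the paper.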

  1. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  2. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations is formulated. Finally, a model based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  3. BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.

    PubMed

    White, B J; Amrine, D E; Larson, R L

    2018-04-14

    Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
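
    A minimal sketch of the partition-train-compare workflow described above, using synthetic stand-in records (all features and the target rule are invented, not livestock data):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))                    # stand-ins for operational measurements
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # target outcome, e.g. morbidity yes/no

        # partition so that final accuracy is judged on naive (held-out) records
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        for model in (DecisionTreeClassifier(random_state=0),
                      RandomForestClassifier(random_state=0)):
            model.fit(X_tr, y_tr)
            acc = accuracy_score(y_te, model.predict(X_te))
            print(type(model).__name__, round(acc, 3))   # compare candidate classifiers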

  4. Fall Velocities of Hydrometeors in the Atmosphere: Refinements to a Continuous Analytical Power Law.

    NASA Astrophysics Data System (ADS)

    Khvorostyanov, Vitaly I.; Curry, Judith A.

    2005-12-01

    This paper extends the previous research of the authors on the unified representation of fall velocities for both liquid and crystalline particles as a power law over the entire size range of hydrometeors observed in the atmosphere. The power-law coefficients are determined as continuous analytical functions of the Best or Reynolds number or of the particle size. Here, analytical expressions are formulated for the turbulent corrections to the Reynolds number and to the power-law coefficients that describe the continuous transition from the laminar to the turbulent flow around a falling particle. A simple analytical expression is found for the correction of fall velocities for temperature and pressure. These expressions and the resulting fall velocities are compared with observations and other calculations for a range of ice crystal habits and sizes. This approach provides a continuous analytical power-law description of the terminal velocities of liquid and crystalline hydrometeors with sufficiently high accuracy and can be directly used in bin-resolving models or incorporated into parameterizations for cloud- and large-scale models and remote sensing techniques.
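
    Schematically, the power-law representation discussed above chains the Best (Davies) number X, the Reynolds number Re, and the terminal velocity; the expressions below show only this general structure, not the authors' specific continuous coefficient functions or corrections:

        \[
        X = \frac{2\,m\,g\,D^2}{A\,\rho_F\,\nu^2}, \qquad
        \mathrm{Re} = a_{Re}\,X^{b_{Re}}, \qquad
        V_t = \frac{\nu}{D}\,\mathrm{Re} = a_v\,D^{b_v},
        \]

    where m is the particle mass, A its projected area, D its maximum dimension, and ρ_F and ν the air density and kinematic viscosity. The coefficients a_Re and b_Re (hence a_v and b_v) are continuous functions of X, to which the paper adds turbulent corrections and a temperature-pressure correction.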

  5. A Modular GIS-Based Software Architecture for Model Parameter Estimation using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.

    2012-12-01

    The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.
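
    The driver pattern described above can be sketched as follows (in Python rather than the .NET/MEF stack the project actually uses; the class and method names here are hypothetical, chosen only to illustrate the uniform contract between the core MAD code and an external forward model):

        from abc import ABC, abstractmethod

        class ForwardModelDriver(ABC):
            """Hypothetical uniform contract between core MAD software and an external model."""

            @abstractmethod
            def write_inputs(self, parameter_field): ...

            @abstractmethod
            def run(self): ...

            @abstractmethod
            def read_outputs(self):
                """Return simulated values at the observation/anchor points."""

        class ModflowDriver(ForwardModelDriver):
            def write_inputs(self, parameter_field):
                print("writing MODFLOW input files for", len(parameter_field), "cells")

            def run(self):
                print("launching the MODFLOW executable")

            def read_outputs(self):
                return [0.0]  # placeholder heads at observation points

        driver = ModflowDriver()
        driver.write_inputs([1.0, 2.0])
        driver.run()
        print(driver.read_outputs())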

  6. 40 CFR 79.11 - Information and assurances to be provided by the fuel manufacturer.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... description (or identification, in the case of a generally accepted method) of a suitable analytical technique... sold, offered for sale, or introduced into commerce for use in motor vehicles manufactured after model...

  7. COMPUTER MODELS/EPANET

    EPA Science Inventory

    Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...

  8. Scale-model charge-transfer technique for measuring enhancement factors

    NASA Technical Reports Server (NTRS)

    Kositsky, J.; Nanevicz, J. E.

    1991-01-01

    Determination of aircraft electric field enhancement factors is crucial when using airborne field mill (ABFM) systems to accurately measure electric fields aloft. SRI used the scale-model charge-transfer technique to determine enhancement factors of several canonical shapes and a scale model Learjet 36A. The measured values for the canonical shapes agreed with known analytic solutions within about 6 percent. The laboratory-determined enhancement factors for the aircraft were compared with those derived from in-flight data gathered by a Learjet 36A outfitted with eight field mills. The values agreed to within experimental error (approx. 15 percent).

  9. Model Analytical Development for Physical, Chemical, and Biological Characterization of Momordica charantia Vegetable Drug

    PubMed Central

    Guimarães, Geovani Pereira; Santos, Ravely Lucena; Júnior, Fernando José de Lima Ramos; da Silva, Karla Monik Alves; de Souza, Fabio Santos

    2016-01-01

    Momordica charantia is a species cultivated throughout the world and widely used in folk medicine, and its medicinal benefits are well documented, especially its pharmacological properties, including antimicrobial activities. Analytical methods have been used to aid in the characterization of compounds derived from plant drug extracts and their products. This paper developed a methodological model to evaluate the integrity of the vegetable drug M. charantia in different particle sizes, using different analytical methods. M. charantia was collected in the semiarid region of Paraíba, Brazil. The herbal medicine raw material derived from the leaves and fruits in different particle sizes was analyzed using thermoanalytical techniques such as thermogravimetry (TG) and differential thermal analysis (DTA), pyrolysis coupled to gas chromatography/mass spectrometry (PYR-GC/MS), and nuclear magnetic resonance (1H NMR), in addition to the determination of antimicrobial activity. Differences in particle surface area among the samples were distinguished by these techniques. DTA and TG were used for assessing thermal and kinetic parameters, and PYR-GC/MS was used for chromatographic identification of degradation products through the pyrograms. The infusions obtained from the fruit and leaves of Momordica charantia presented antimicrobial activity. PMID:27579215

  10. Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

    PubMed

    Dinov, Ivo D

    2016-01-01

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.

  11. Applications Of Measurement Techniques To Develop Small-Diameter, Undersea Fiber Optic Cables

    NASA Astrophysics Data System (ADS)

    Kamikawa, Neil T.; Nakagawa, Arthur T.

    1984-12-01

    Attenuation, strain, and optical time domain reflectometer (OTDR) measurement techniques were applied successfully in the development of a minimum-diameter, electro-optic sea floor cable. Temperature and pressure models for excess attenuation in polymer coated, graded-index fibers were investigated analytically and experimentally using these techniques in the laboratory. The results were used to select a suitable fiber for the cable. Measurements also were performed on these cables during predeployment and sea-trial testing to verify laboratory results. Application of the measurement techniques and results are summarized in this paper.

  12. Analytical Electrochemistry: Methodology and Applications of Dynamic Techniques.

    ERIC Educational Resources Information Center

    Heineman, William R.; Kissinger, Peter T.

    1980-01-01

    Reports developments involving the experimental aspects of finite-current analytical electrochemistry, including electrode materials (97 cited references), hydrodynamic techniques (56), spectroelectrochemistry (62), stripping voltammetry (70), voltammetric techniques (27), polarographic techniques (59), and miscellany (12). (CS)

  13. IT vendor selection model by using structural equation model & analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for vendor selection process with better decision making. The new proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a Hybrid model based on Structural Equation Model (SEM) and Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five steps framework of the model has been designed after the thorough literature study. The proposed hybrid model will be applied using a real life case study to assess its effectiveness. In addition, What-if analysis technique will be used for model validation purpose.
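
    The AHP stage of such a hybrid model can be illustrated with the standard eigenvector calculation: derive criterion weights from a pairwise-comparison matrix and check judgment consistency. The 3x3 judgments below are invented for illustration; they are not from the paper's case study.

        import numpy as np

        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 0.5, 1.0]])    # pairwise comparisons of 3 criteria

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                          # priority weights of the criteria

        lam_max = eigvals.real[k]
        n = A.shape[0]
        ci = (lam_max - n) / (n - 1)          # consistency index
        cr = ci / 0.58                        # Saaty random index for n = 3 is 0.58
        print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))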

  14. Development of flexural vibration inspection techniques to rapidly assess the structural health of timber bridge systems

    Treesearch

    Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw; Robert Vatalaro

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  15. A workflow learning model to improve geovisual analytics utility

    PubMed Central

    Roth, Robert E; MacEachren, Alan M; McCabe, Craig A

    2011-01-01

    Introduction This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. Objectives The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. Methodology The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on the concept of scientific workflows. Second, we implemented an interface in the G-EX Portal Learn Module to demonstrate the workflow learning model. The workflow interface allows users to drag learning artifacts uploaded to the G-EX Portal onto a central whiteboard and then annotate the workflow using text and drawing tools. Once completed, users can visit the assembled workflow to get an idea of the kind, number, and scale of analysis steps, view individual learning artifacts associated with each node in the workflow, and ask questions about the overall workflow or individual learning artifacts through the associated forums. An example learning workflow in the domain of epidemiology is provided to demonstrate the effectiveness of the approach. Results/Conclusions In the context of geovisual analytics, GIScientists are not only responsible for developing software to facilitate visually-mediated reasoning about large and complex spatiotemporal information, but also for ensuring that this software works. The workflow learning model discussed in this paper and demonstrated in the G-EX Portal Learn Module is one approach to improving the utility of geovisual analytics software. While development of the G-EX Portal Learn Module is ongoing, we expect to release the G-EX Portal Learn Module by Summer 2009. PMID:21983545

  16. A workflow learning model to improve geovisual analytics utility.

    PubMed

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on the concept of scientific workflows. Second, we implemented an interface in the G-EX Portal Learn Module to demonstrate the workflow learning model. The workflow interface allows users to drag learning artifacts uploaded to the G-EX Portal onto a central whiteboard and then annotate the workflow using text and drawing tools. Once completed, users can visit the assembled workflow to get an idea of the kind, number, and scale of analysis steps, view individual learning artifacts associated with each node in the workflow, and ask questions about the overall workflow or individual learning artifacts through the associated forums. An example learning workflow in the domain of epidemiology is provided to demonstrate the effectiveness of the approach. RESULTS/CONCLUSIONS: In the context of geovisual analytics, GIScientists are not only responsible for developing software to facilitate visually-mediated reasoning about large and complex spatiotemporal information, but also for ensuring that this software works. The workflow learning model discussed in this paper and demonstrated in the G-EX Portal Learn Module is one approach to improving the utility of geovisual analytics software. While development of the G-EX Portal Learn Module is ongoing, we expect to release the G-EX Portal Learn Module by Summer 2009.

  17. An analytical approach for the calculation of stress-intensity factors in transformation-toughened ceramics

    NASA Astrophysics Data System (ADS)

    Müller, W. H.

    1990-12-01

    Stress-induced transformation toughening in Zirconia-containing ceramics is described analytically by means of a quantitative model: A Griffith crack which interacts with a transformed, circular Zirconia inclusion. Due to its volume expansion, a ZrO2 particle compresses the crack flanks, whereas a particle in front of the crack opens the flanks such that the crack will be attracted and finally absorbed. Erdogan's integral equation technique is applied to calculate the dislocation functions and the stress-intensity factors which correspond to these situations. In order to derive analytical expressions, the elastic constants of the inclusion and the matrix are assumed to be equal.

  18. Analytical and experimental studies of graphite-epoxy and boron-epoxy angle ply laminates in compression

    NASA Technical Reports Server (NTRS)

    Weller, T.

    1977-01-01

    The applicability and adequacy of several computer techniques in satisfactorily predicting the nonlinear/inelastic response of angle ply laminates were evaluated. The analytical predictions were correlated with the results of a test program on the inelastic response under axial compression of a large variety of graphite-epoxy and boron-epoxy angle ply laminates. These comparison studies indicate that neither of the abovementioned analyses can satisfactorily predict either the mode of response or the ultimate stress value corresponding to a particular angle ply laminate configuration. Consequently, the simple failure mechanisms assumed in the analytical models also could not be verified.

  19. Control system design for flexible structures using data models

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Frazier, W. Garth; Mitchell, Jerrel R.; Medina, Enrique A.; Bukley, Angelia P.

    1993-01-01

    The dynamics and control of flexible aerospace structures exercise many of the engineering disciplines. In recent years there has been considerable research into the development and tailoring of control system design techniques for these structures. This problem involves designing a control system for a multi-input, multi-output (MIMO) system that satisfies various performance criteria, such as vibration suppression, disturbance and noise rejection, attitude control, and slewing control. Considerable progress has been made and demonstrated in control system design techniques for these structures. The key to designing control systems for these structures that meet stringent performance requirements is an accurate model. It has become apparent that theoretically derived and finite-element-generated models do not provide the needed accuracy; almost all successful demonstrations of control system design techniques have involved using test results for fine-tuning a model or for extracting a model using system ID techniques. This paper describes past and ongoing efforts at Ohio University and NASA MSFC to design controllers using 'data models.' The basic philosophy of this approach is to start with a stabilizing controller and frequency response data that describes the plant; then, iteratively vary the free parameters of the controller so that performance measures become closer to satisfying design specifications. The frequency response data can be either experimentally derived or analytically derived. One 'design-with-data' algorithm presented in this paper is called the Compensator Improvement Program (CIP). The current CIP designs controllers for MIMO systems so that classical gain, phase, and attenuation margins are achieved. The centerpiece of the CIP algorithm is the constraint improvement technique, which is used to calculate a parameter change vector that guarantees an improvement in all unsatisfied, feasible performance metrics from iteration to iteration. The paper also presents a recently demonstrated CIP-type algorithm, called the Model and Data Oriented Computer-Aided Design System (MADCADS), developed for achieving H(sub infinity) type design specifications using data models. Control system designs for the NASA/MSFC Single Structure Control Facility are demonstrated for both CIP and MADCADS. Advantages of design-with-data algorithms over techniques that require analytical plant models are also presented.
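
    A toy "design-with-data" loop in the spirit of the approach above: the plant is known only as sampled frequency-response data, and a free controller gain is varied until a classical stability-margin specification is met. The plant samples, gain sweep, and 8 dB target are all invented; this is not the CIP or MADCADS algorithm itself.

        import numpy as np

        w = np.logspace(-1, 2, 400)                       # rad/s frequency grid
        jw = 1j * w
        P = 1.0 / (jw * (jw + 1.0) * (0.05 * jw + 1.0))   # "data model": sampled plant FRF

        def gain_margin_db(L):
            phase = np.unwrap(np.angle(L))
            idx = np.where(np.diff(np.sign(phase + np.pi)))[0]  # grid points nearest -180 deg
            if len(idx) == 0:
                return np.inf
            return -20.0 * np.log10(np.abs(L[idx]).max())       # margin at worst crossover

        best = None
        for kp in np.linspace(0.2, 3.0, 15):              # crude sweep of the free parameter
            gm = gain_margin_db(kp * P)                   # proportional controller FRF
            if gm >= 8.0 and (best is None or kp > best[0]):
                best = (kp, gm)                           # largest gain meeting the margin
        print("selected kp = %.2f with gain margin %.1f dB" % best)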

  20. In-orbit evaluation of the control system/structural mode interactions of the OSO-8 spacecraft

    NASA Technical Reports Server (NTRS)

    Slafer, L. I.

    1979-01-01

    The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during the spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests was conducted with the spacecraft in which a wide bandwidth, high resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. The paper describes the results of these tests, reviewing the basic design problem, analytical approach taken, ground test philosophy, and on-orbit testing. Data from the tests were used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments, and have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system.

  1. FDTD subcell graphene model beyond the thin-film approximation

    NASA Astrophysics Data System (ADS)

    Valuev, Ilya; Belousov, Sergei; Bogdanova, Maria; Kotov, Oleg; Lozovik, Yurii

    2017-01-01

    A subcell technique for calculation of optical properties of graphene with the finite-difference time-domain (FDTD) method is presented. The technique takes into account the surface conductivity of graphene which allows the correct calculation of its dispersive response for arbitrarily polarized incident waves interacting with the graphene. The developed technique is verified for a planar graphene sheet configuration against the exact analytical solution. Based on the same test case scenario, we also show that the subcell technique demonstrates a superior accuracy and numerical efficiency with respect to the widely used thin-film FDTD approach for modeling graphene. We further apply our technique to the simulations of a graphene metamaterial containing periodically spaced graphene strips (graphene strip-grating) and demonstrate good agreement with the available theoretical results.
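
    The surface-conductivity description referred to above requires a dispersive sheet conductivity σ(ω). A widely used intraband (Drude-like) form, given here for reference rather than as the paper's exact choice, is (in the e^{-iωt} convention)

        \[
        \sigma_{\mathrm{intra}}(\omega) = \frac{i e^2 k_B T}{\pi \hbar^2 (\omega + i\tau^{-1})}
        \left[ \frac{\mu_c}{k_B T} + 2\ln\!\left( e^{-\mu_c/k_B T} + 1 \right) \right],
        \]

    with chemical potential μ_c, relaxation time τ, and temperature T. A subcell scheme of this kind imposes the resulting surface current J_s = σE_t on the graphene plane instead of resolving graphene as a volumetric thin film.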

  2. Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield

    NASA Technical Reports Server (NTRS)

    Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)

    2001-01-01

    New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem due to the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses were performed to predict the behavior of the sunshield under the ground test conditions. Modal analyses were performed to predict the frequencies and mode shapes of the test article, and transient response analyses were completed to simulate impulse excitation tests. Comparison was made between analytical predictions and test measurements for the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequency and mode shapes for the significant structural modes.

  3. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
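
    The defect frequencies mentioned above follow from standard bearing kinematics. A quick helper for the outer-race case is sketched below; the geometry values in the example call are invented, not those of the modelled bearing.

        import math

        def outer_race_defect_freq(n_balls, shaft_hz, d_ball, d_pitch, contact_deg=0.0):
            """Ball-pass frequency of the outer race (BPFO), in Hz, from textbook kinematics."""
            c = math.cos(math.radians(contact_deg))
            return 0.5 * n_balls * shaft_hz * (1.0 - (d_ball / d_pitch) * c)

        # example: 9 rolling elements, 25 Hz shaft speed, 7.9 mm balls, 34.5 mm pitch diameter
        print(round(outer_race_defect_freq(9, 25.0, 7.9, 34.5), 2), "Hz")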

  4. An analytical approach for predicting pilot induced oscillations

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  5. Comparison of a two-dimensional adaptive-wall technique with analytical wall interference correction techniques

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.

    1992-01-01

    A two-dimensional airfoil model was tested in the adaptive wall test section of the NASA Langley 0.3-meter Transonic Cryogenic Tunnel (TCT) and in the ventilated test section of the National Aeronautical Establishment Two-Dimensional High Reynolds Number Facility (HRNF). The primary goal of the tests was to compare different techniques (adaptive test section walls and classical, analytical corrections) to account for wall interference. Tests were conducted over a Mach number range from 0.3 to 0.8 at chord Reynolds numbers of 10 x 10(exp 6), 15 x 10(exp 6), and 20 x 10(exp 6). The angle of attack was varied from about 12 degrees up to stall. Movement of the top and bottom test section walls was used to account for the wall interference in the HRNF tests. The results from the two techniques are in good agreement.

  6. Structural analyses for the modification and verification of the Viking aeroshell

    NASA Technical Reports Server (NTRS)

    Stephens, W. B.; Anderson, M. S.

    1976-01-01

    The Viking aeroshell is an extremely lightweight flexible shell structure that has undergone thorough buckling analyses in the course of its development. The analytical tools and modeling technique required to reveal the structural behavior are presented. Significant results are given which illustrate the complex failure modes not usually observed in simple models and analyses. Both shell-of-revolution analysis for the pressure loads and thermal loads during entry and a general shell analysis for concentrated tank loads during launch were used. In many cases fixes or alterations to the structure were required, and the role of the analytical results in determining these modifications is indicated.

  7. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    PubMed

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
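
    The calibration step described above amounts to fitting an assumed analytical relation between transit-time ratio and notch depth, then inverting it to estimate crack depth. The sketch below shows that pattern with scipy; the functional form and the data points are illustrative placeholders, not the article's actual model or measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def model(depth_ratio, a, b):
            # hypothetical monotone relation: transit-time ratio grows with crack depth
            return 1.0 + a * depth_ratio ** b

        depths = np.array([0.1, 0.2, 0.3, 0.4, 0.5])        # notch depth / layer thickness
        t_ratio = np.array([1.05, 1.12, 1.21, 1.33, 1.48])  # invented transit-time ratios

        (a, b), cov = curve_fit(model, depths, t_ratio, p0=(1.0, 1.0))
        err = np.sqrt(np.diag(cov)).round(3)                # dispersion of the parameters
        print("a = %.3f, b = %.3f, +/- %s" % (a, b, err))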

  8. Comparison of commercial analytical techniques for measuring chlorine dioxide in urban desalinated drinking water.

    PubMed

    Ammar, T A; Abid, K Y; El-Bindary, A A; El-Sonbati, A Z

    2015-12-01

    Most drinking water industries are closely examining options to maintain a certain level of disinfectant residual through the entire distribution system. Chlorine dioxide is one of the promising disinfectants that is usually used as a secondary disinfectant, whereas the selection of the proper monitoring analytical technique to ensure disinfection and regulatory compliance has been debated within the industry. This research endeavored to objectively compare the performance of commercially available analytical techniques used for chlorine dioxide measurements (namely, chronoamperometry, DPD (N,N-diethyl-p-phenylenediamine), Lissamine Green B (LGB WET) and amperometric titration), to determine the superior technique. The commonly available commercial analytical techniques were evaluated over a wide range of chlorine dioxide concentrations. In reference to pre-defined criteria, the superior analytical technique was determined. To discern the effectiveness of such superior technique, various factors, such as sample temperature, high ionic strength, and other interferences that might influence the performance were examined. Among the four techniques, chronoamperometry technique indicates a significant level of accuracy and precision. Furthermore, the various influencing factors studied did not diminish the technique's performance where it was fairly adequate in all matrices. This study is a step towards proper disinfection monitoring and it confidently assists engineers with chlorine dioxide disinfection system planning and management.

  9. Modelling the Penetration of Salicylates through Skin Using a Silicone Membrane

    ERIC Educational Resources Information Center

    Wilkins, Andrew; Parmenter, Emily

    2012-01-01

    A diffusion cell to model the permeation of salicylate drugs through the skin using low-cost materials and a sensitive colorimetric analytical technique is described. The diffusion apparatus has been used at a further education college by a student for her AS-level Extended Project to investigate the permeation rates of salicylic acid…
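
    Diffusion-cell data of this kind are usually interpreted with the textbook steady-state membrane flux, stated here as general background rather than as the article's specific analysis:

        \[
        J = \frac{D K}{h}\,(C_d - C_r),
        \]

    where J is the flux per unit area, D the diffusion coefficient in the membrane, K the membrane/vehicle partition coefficient, h the membrane thickness, and C_d and C_r the donor and receptor concentrations.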

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Vay, J. -L.

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications occur, for instance, when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with a domain decomposition technique, which may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.
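
    As background for this type of numerical-dispersion analysis, the classical second-order Yee scheme obeys the discrete dispersion relation below; analyses like the one above generalize this kind of relation to arbitrary-order and locally truncated stencils:

        \[
        \left[\frac{1}{c\,\Delta t}\sin\frac{\omega \Delta t}{2}\right]^2
        = \sum_{i\in\{x,y,z\}} \left[\frac{1}{\Delta_i}\sin\frac{k_i \Delta_i}{2}\right]^2 ,
        \]

    where Δt is the time step and Δ_i the grid spacings; the mismatch between this relation and the exact ω = c|k| is one source of the spurious signals discussed above.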

  11. Subsynchronous instability of a geared centrifugal compressor of overhung design

    NASA Technical Reports Server (NTRS)

    Hudson, J. H.; Wittman, L. J.

    1980-01-01

    The original design analysis and shop test data are presented for a three-stage (booster) air compressor with impellers mounted on the extensions of a twin pinion gear, and driven by an 8000 hp synchronous motor. Also included are field test data, subsequent rotor dynamics analysis, modifications, and final rotor behavior. A subsynchronous instability existed on a geared, overhung rotor. State-of-the-art rotor dynamics analysis techniques provided a reasonable analytical model of the rotor. A bearing modification, arrived at analytically, eliminated the instability.

  12. Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-01-01

    Thanks to recent developments in additive manufacturing techniques, it is now possible to fabricate porous biomaterials with arbitrarily complex micro-architectures. Micro-architectures of such biomaterials determine their physical and biological properties, meaning that one could potentially improve the performance of such biomaterials through rational design of micro-architecture. The relationship between the micro-architecture of porous biomaterials and their physical and biological properties has therefore received increasing attention recently. In this paper, we studied the mechanical properties of porous biomaterials made from a relatively unexplored unit cell, namely rhombicuboctahedron. We derived analytical relationships that relate the micro-architecture of such porous biomaterials, i.e. the dimensions of the rhombicuboctahedron unit cell, to their elastic modulus, Poisson's ratio, and yield stress. Finite element models were also developed to validate the analytical solutions. Analytical and numerical results were compared with experimental data from one of our recent studies. It was found that analytical solutions and numerical results show a very good agreement particularly for smaller values of apparent density. The elastic moduli predicted by analytical and numerical models were in very good agreement with experimental observations too. While in excellent agreement with each other, analytical and numerical models somewhat over-predicted the yield stress of the porous structures as compared to experimental data. As the ratio of the vertical struts to the inclined struts, α, approaches zero and infinity, the rhombicuboctahedron unit cell respectively approaches the octahedron (or truncated cube) and cube unit cells. For those limits, the analytical solutions presented here were found to approach the analytic solutions obtained for the octahedron, truncated cube, and cube unit cells, meaning that the presented solutions are generalizations of the analytical solutions obtained for several other types of porous biomaterials. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Simulation of a model nanopore sensor: Ion competition underlies device behavior.

    PubMed

    Mádai, Eszter; Valiskó, Mónika; Dallos, András; Boda, Dezső

    2017-12-28

    We study a model nanopore sensor with which a very low concentration of analyte molecules can be detected on the basis of the selective binding of the analyte molecules to the binding sites on the pore wall. The bound analyte ions partially replace the current-carrier cations in a thermodynamic competition. This competition depends both on the properties of the nanopore and the concentrations of the competing ions (through their chemical potentials). The output signal given by the device is the current reduction caused by the presence of the analyte ions. The concentration of the analyte ions can be determined through calibration curves. We model the binding site with the square-well potential and the electrolyte as charged hard spheres in an implicit background solvent. We study the system with a hybrid method in which we compute the ion flux with the Nernst-Planck (NP) equation coupled with the Local Equilibrium Monte Carlo (LEMC) simulation technique. The resulting NP+LEMC method is able to handle both strong ionic correlations inside the pore (including finite size of ions) and bulk concentrations as low as micromolar. We analyze the effect of bulk ion concentrations, pore parameters, binding site parameters, electrolyte properties, and voltage on the behavior of the device.

  14. Simulation of a model nanopore sensor: Ion competition underlies device behavior

    NASA Astrophysics Data System (ADS)

    Mádai, Eszter; Valiskó, Mónika; Dallos, András; Boda, Dezső

    2017-12-01

    We study a model nanopore sensor with which a very low concentration of analyte molecules can be detected on the basis of the selective binding of the analyte molecules to the binding sites on the pore wall. The bound analyte ions partially replace the current-carrier cations in a thermodynamic competition. This competition depends both on the properties of the nanopore and the concentrations of the competing ions (through their chemical potentials). The output signal given by the device is the current reduction caused by the presence of the analyte ions. The concentration of the analyte ions can be determined through calibration curves. We model the binding site with the square-well potential and the electrolyte as charged hard spheres in an implicit background solvent. We study the system with a hybrid method in which we compute the ion flux with the Nernst-Planck (NP) equation coupled with the Local Equilibrium Monte Carlo (LEMC) simulation technique. The resulting NP+LEMC method is able to handle both strong ionic correlations inside the pore (including finite size of ions) and bulk concentrations as low as micromolar. We analyze the effect of bulk ion concentrations, pore parameters, binding site parameters, electrolyte properties, and voltage on the behavior of the device.
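
    The thermodynamic competition at the heart of both records above can be caricatured by a competitive-Langmuir occupancy of the binding sites, given here only as a closed-form stand-in for the NP+LEMC treatment, which additionally resolves ionic correlations and transport:

        \[
        \theta_X = \frac{K_X\,c_X}{1 + K_X\,c_X + K_+\,c_+},
        \]

    where θ_X is the fraction of sites occupied by analyte ions, c_X and c_+ are the bulk concentrations of the analyte and the current-carrier cation, and K_X and K_+ are the corresponding binding constants; higher occupancy displaces more current carriers and deepens the measured current reduction.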

  15. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
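
    The numerical side of such a study can be sketched with an Euler-Maruyama integration of coupled terrace-width SDEs with nearest-neighbour coupling and additive white noise. The drift used below is a generic linear relaxation, not the paper's BCF-derived law; all parameter values are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n_steps, n_terraces = 20000, 64
        dt, kappa, noise = 1e-3, 1.0, 0.2
        w = np.ones(n_terraces)                   # terrace widths, mean fixed to 1

        for _ in range(n_steps):
            # nearest-neighbour coupling (discrete Laplacian, periodic step train)
            drift = kappa * (np.roll(w, 1) - 2.0 * w + np.roll(w, -1))
            w += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_terraces)
            w -= w.mean() - 1.0                   # conserve the total width

        print("steady-state terrace-width variance ~", round(w.var(), 4))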

  16. An iterative analytical technique for the design of interplanetary direct transfer trajectories including perturbations

    NASA Astrophysics Data System (ADS)

    Parvathi, S. P.; Ramanan, R. V.

    2018-06-01

    An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. The perturbations such as the non-spherical gravity of Earth and the third-body perturbations due to the Sun and Moon are included in the analytical design process. In the design process, first the design is obtained using the iterative patched conic technique without including the perturbations and then modified to include the perturbations. The modification is based on (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at the periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend upon numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.

  17. Model reduction by trimming for a class of semi-Markov reliability models and the corresponding error bound

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Daniel L.

    1991-01-01

    Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.

  18. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
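
    The Taylor series idea can be made concrete with a small symbolic check for a classical non-mixed-effects example (a one-compartment model dx/dt = -k x with observation y = x/V; this example is illustrative, not taken from the paper). The successive output derivatives at t = 0 form the exhaustive summary; here they depend only on k and the ratio x0/V, so x0 and V are not separately identifiable.

        import sympy as sp

        k, V, x0, t = sp.symbols("k V x0 t", positive=True)
        x = x0 * sp.exp(-k * t)          # closed-form solution of dx/dt = -k*x
        y = x / V                        # observation function

        # exhaustive summary: Taylor coefficients of the output at t = 0
        summary = [sp.simplify(sp.diff(y, t, n).subs(t, 0)) for n in range(3)]
        print(summary)                   # [x0/V, -k*x0/V, k**2*x0/V]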

  19. On the Use of Machine Learning Techniques for the Mechanical Characterization of Soft Biological Tissues.

    PubMed

    Cilla, M; Pérez-Rey, I; Martínez, M A; Peña, Estefania; Martínez, Javier

    2018-06-23

    Motivated by the search for new strategies for fitting a material model, a new approach is explored in the present work. The use of numerical and complex algorithms based on machine learning techniques such as support vector machines for regression, bagged decision trees and artificial neural networks is proposed for solving the parameter identification of constitutive laws for soft biological tissues. First, the mathematical tools were trained with analytical uniaxial data (circumferential and longitudinal directions) as inputs, and their corresponding material parameters of the Gasser, Ogden and Holzapfel strain energy function as outputs. The train and test errors show great efficiency during the training process in finding correlations between inputs and outputs; besides, the correlation coefficients were very close to 1. Second, the tool was validated with unseen observations of analytical circumferential and longitudinal uniaxial data. The results show an excellent agreement between the prediction of the material parameters of the SEF and the analytical curves. Finally, data from real circumferential and longitudinal uniaxial tests on different cardiovascular tissues were fitted, thus the material model of these tissues was predicted. We found that the method was able to consistently identify model parameters, and we believe that the use of these numerical tools could lead to an improvement in the characterization of soft biological tissues. This article is protected by copyright. All rights reserved.
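
    A minimal sketch of the fitting strategy described above: learn the inverse map from sampled stress-strain curves to the material parameter that generated them. A one-parameter toy law, sigma = c*(exp(strain) - 1), stands in for the anisotropic strain-energy functions used in the paper; all values are synthetic.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)
        strain = np.linspace(0.0, 0.3, 20)
        c_true = rng.uniform(5.0, 50.0, size=300)                   # training parameters
        X = np.stack([c * (np.exp(strain) - 1.0) for c in c_true])  # analytical curves
        model = SVR(C=100.0).fit(X, c_true)                         # curve -> parameter map

        c_test = 23.0                                               # unseen observation
        curve = c_test * (np.exp(strain) - 1.0)
        print("recovered c ~", round(float(model.predict(curve[None, :])[0]), 2))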

  20. Testing a path-analytic mediation model of how motivational enhancement physiotherapy improves physical functioning in pain patients.

    PubMed

    Cheing, Gladys; Vong, Sinfia; Chan, Fong; Ditchman, Nicole; Brooks, Jessica; Chan, Chetwyn

    2014-12-01

    Pain is a complex phenomenon not easily discerned from psychological, social, and environmental characteristics and is an oft-cited barrier to return to work for people experiencing low back pain (LBP). The purpose of this study was to evaluate a path-analytic mediation model to examine how motivational enhancement physiotherapy, which incorporates tenets of motivational interviewing, improves physical functioning of patients with chronic LBP. Seventy-six patients with chronic LBP were recruited from the outpatient physiotherapy department of a government hospital in Hong Kong. The re-specified path-analytic model fit the data very well, χ²(3, N = 76) = 3.86, p = .57; comparative fit index = 1.00; and the root mean square error of approximation = 0.00. Specifically, results indicated that (a) using motivational interviewing techniques in physiotherapy was associated with increased working alliance with patients, (b) working alliance increased patients' outcome expectancy, and (c) greater outcome expectancy resulted in a reduction of subjective pain intensity and improvement in physical functioning. Change in pain intensity also directly influenced improvement in physical functioning. The effect of motivational enhancement therapy on physical functioning can be explained by social-cognitive factors such as motivation, outcome expectancy, and working alliance. The use of motivational interviewing techniques to increase outcome expectancy of patients and improve working alliance could further strengthen the impact of physiotherapy on rehabilitation outcomes of patients with chronic LBP.

  1. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
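
    The final step, approximating a target power spectral density by a sum of first-order Markov (Gauss-Markov) processes, can be illustrated as a linear fit over a fixed grid of correlation times. The flicker-noise target, the five correlation times, and the one-sided Lorentzian normalization below are illustrative assumptions, not the paper's actual oscillator model.

        import numpy as np
        from scipy.optimize import nnls

        f = np.logspace(-4, 0, 200)           # Hz
        S_target = 1.0 / f                    # flicker-noise PSD, a typical Allan-variance-derived shape

        taus = np.logspace(0, 4, 5)           # five correlation times, one per Markov process
        # columns: unit-variance first-order Markov (Lorentzian) PSDs, one per tau
        basis = np.column_stack([4 * t / (1 + (2 * np.pi * f * t) ** 2) for t in taus])

        weights, resid = nnls(basis, S_target)   # nonnegative variance for each process
        S_fit = basis @ weights
        print("relative RMS misfit:", np.sqrt(np.mean((S_fit - S_target) ** 2)) / S_target.mean())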

  2. Dynamical analysis of the avian-human influenza epidemic model using the semi-analytical method

    NASA Astrophysics Data System (ADS)

    Jabbari, Azizeh; Kheiri, Hossein; Bekir, Ahmet

    2015-03-01

    In this work, we present the dynamic behavior of the avian-human influenza epidemic model obtained by using an efficient computational algorithm, namely the multistage differential transform method (MsDTM). The MsDTM is used here as an algorithm for approximating the solutions of the avian-human influenza epidemic model in a sequence of time intervals. In order to show the efficiency of the method, the obtained numerical results are compared with the fourth-order Runge-Kutta method (RK4M) and differential transform method (DTM) solutions. It is shown that the MsDTM has the advantage of giving an analytical form of the solution within each time interval, which is not possible in purely numerical techniques like RK4M.
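
    The multistage idea, applying the differential transform recurrence on short subintervals and restarting from each stage-end value, is easy to demonstrate on a scalar ODE. The sketch below uses the logistic equation y' = r y(1 - y) rather than the influenza model itself, since its DTM recurrence fits in one line.

        import numpy as np

        def msdtm_logistic(y0, r, t_end, n_stages=20, order=10):
            # Multistage DTM for y' = r*y*(1 - y). On each stage the recurrence
            # (k+1)*Y[k+1] = r*(Y[k] - conv(Y, Y)[k]) gives local Taylor
            # coefficients; the stage-end value seeds the next stage.
            h = t_end / n_stages
            y = y0
            ys = [y0]
            for _ in range(n_stages):
                Y = np.zeros(order + 1)
                Y[0] = y
                for k in range(order):
                    conv = np.dot(Y[:k + 1], Y[k::-1])   # Cauchy product (Y*Y)[k]
                    Y[k + 1] = r * (Y[k] - conv) / (k + 1)
                y = np.polyval(Y[::-1], h)               # evaluate local series at stage end
                ys.append(y)
            return np.array(ys)

        approx = msdtm_logistic(y0=0.1, r=1.0, t_end=5.0)
        exact = lambda t: 1 / (1 + (1 / 0.1 - 1) * np.exp(-t))
        print("max error vs exact:", np.abs(approx - exact(np.linspace(0, 5, 21))).max())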

  3. Gaussian closure technique applied to the hysteretic Bouc model with non-zero mean white noise excitation

    NASA Astrophysics Data System (ADS)

    Waubke, Holger; Kasess, Christian H.

    2016-11-01

    Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation of a multivariate conditional probability distribution. Up to now, no analytical solution of the Kolmogorov equation in conjunction with the Bouc model has existed. Therefore, a wide range of approximate solutions, most notably statistical linearization, has been developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively, with a small number of iterations and without instabilities, for specific parameter sets.
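
    The flavor of the Gaussian closure technique, replacing higher moments by their Gaussian (Isserlis) expressions so the moment equations close, can be shown on a simpler system. The sketch below applies it to a randomly excited Duffing oscillator instead of the Bouc model, whose auxiliary hysteretic variable makes the closed moment equations considerably lengthier; note that the transient second moments come out explicitly, as emphasized in the abstract.

        import numpy as np
        from scipy.integrate import solve_ivp

        k, c, eps, sigma2 = 1.0, 0.2, 0.5, 0.4   # stiffness, damping, cubic term, noise intensity

        def moment_odes(t, m):
            # m = [E[x^2], E[xv], E[v^2]]; Gaussian closure (zero mean):
            # E[x^4] = 3*m_xx^2 and E[x^3 v] = 3*m_xx*m_xv
            m_xx, m_xv, m_vv = m
            return [2 * m_xv,
                    m_vv - c * m_xv - k * m_xx - 3 * eps * m_xx ** 2,
                    -2 * c * m_vv - 2 * k * m_xv - 6 * eps * m_xx * m_xv + sigma2]

        sol = solve_ivp(moment_odes, (0.0, 100.0), [0.0, 0.0, 0.0], rtol=1e-8)
        print("transient -> stationary E[x^2]:", sol.y[0, -1])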

  4. Near-infrared spectroscopy for the detection and quantification of bacterial contaminations in pharmaceutical products.

    PubMed

    Quintelas, Cristina; Mesquita, Daniela P; Lopes, João A; Ferreira, Eugénio C; Sousa, Clara

    2015-08-15

    Accurate detection and quantification of microbiological contaminations remains an issue, mainly due to the lack of rapid and precise analytical techniques. Standard methods are expensive and time-consuming and are associated with high economic losses and public health threats. In the context of the pharmaceutical industry, the development of fast analytical techniques able to overcome these limitations is crucial, and spectroscopic techniques might constitute a reliable alternative. In this work we proved the ability of Fourier transform near infrared spectroscopy (FT-NIRS) to detect and quantify bacteria (Bacillus subtilis, Escherichia coli, Pseudomonas fluorescens, Salmonella enterica, Staphylococcus epidermidis) from 10 to 10⁸ CFU/mL in sterile saline solutions (NaCl 0.9%). Partial least squares discriminant analysis (PLS-DA) models showed that FT-NIRS was able to discriminate between sterile and contaminated solutions for all bacteria as well as to identify the contaminant bacteria. Partial least squares (PLS) models allowed bacterial quantification, with limits of detection ranging from 5.1 to 9 CFU/mL for E. coli and B. subtilis, respectively. This methodology was successfully validated in three pharmaceutical preparations (contact lens solution, cough syrup and topical anti-inflammatory solution), proving that this technique possesses high potential to be routinely used for the detection and quantification of bacterial contaminations. Copyright © 2015 Elsevier B.V. All rights reserved.
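
    A minimal sketch of the PLS quantification step is given below, assuming synthetic NIR-like spectra whose band intensity grows with the log contamination level; the band position, noise level, and number of latent variables are invented for illustration.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        wavenumbers = np.linspace(4000, 10000, 300)
        log_cfu = rng.uniform(1, 8, size=120)          # log10 contamination levels

        # synthetic NIR-like spectra: baseline + contamination-dependent band + noise
        band = np.exp(-0.5 * ((wavenumbers - 6000) / 150) ** 2)
        X = 0.1 + 0.02 * np.outer(log_cfu, band) + 0.002 * rng.normal(size=(120, 300))

        pls = PLSRegression(n_components=5)
        pls.fit(X[:90], log_cfu[:90])                  # calibrate on 90 samples
        print("predicted log10 CFU/mL:", pls.predict(X[90:95]).ravel())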

  5. Screening-level estimates of mass discharge uncertainty from point measurement methods

    EPA Science Inventory

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...

  6. Macroeconomic Activity Module - NEMS Documentation

    EIA Publications

    2016-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 2016 (AEO2016). The report catalogues and describes the module assumptions, computations, methodology, parameter estimation techniques, and mainframe source code.

  7. An explicit closed-form analytical solution for European options under the CGMY model

    NASA Astrophysics Data System (ADS)

    Chen, Wenting; Du, Meiyu; Xu, Xiang

    2017-01-01

    In this paper, we consider the analytical pricing of European path-independent options under the CGMY model, which is a particular type of pure-jump Lévy process, and agrees well with many observed properties of real market data by allowing the diffusions and jumps to have both finite and infinite activity and variation. It is shown that, under this model, the option price is governed by a fractional partial differential equation (FPDE) with both left-side and right-side spatial-fractional derivatives. In comparison to derivatives of integer order, fractional derivatives at a point not only involve properties of the function at that particular point, but also the information of the function in a certain subset of the entire domain of definition. This "globalness" of the fractional derivatives adds an additional degree of difficulty when either analytical methods or numerical solutions are attempted. Albeit difficult, we have managed to derive an explicit closed-form analytical solution for European options under the CGMY model. Based on our solution, the asymptotic behaviors of the option price and the put-call parity under the CGMY model are further discussed. Practically, a reliable numerical evaluation technique for the current formula is proposed. With the numerical results, some analyses of the impacts of the four key parameters of the CGMY model on European option prices are also provided.
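
    For readers who want to experiment numerically, the CGMY process is usually handled through its characteristic function, exp(t·ψ(u)) with ψ(u) = C·Γ(-Y)·[(M - iu)^Y - M^Y + (G + iu)^Y - G^Y] for Y ≠ 0, 1 (Carr, Geman, Madan and Yor). The snippet below only evaluates this function; it is the ingredient used by Fourier-inversion pricing methods and is not the paper's FPDE-based closed-form solution.

        import numpy as np
        from scipy.special import gamma

        def cgmy_cf(u, t, C, G, M, Y):
            # characteristic function E[exp(i*u*X_t)] of the CGMY process (Y != 0, 1)
            psi = C * gamma(-Y) * ((M - 1j * u) ** Y - M ** Y
                                   + (G + 1j * u) ** Y - G ** Y)
            return np.exp(t * psi)

        u = np.linspace(-20, 20, 5)
        print(cgmy_cf(u, t=1.0, C=1.0, G=5.0, M=5.0, Y=0.5))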

  8. Depth-resolved monitoring of analytes diffusion in ocular tissues

    NASA Astrophysics Data System (ADS)

    Larin, Kirill V.; Ghosn, Mohamad G.; Tuchin, Valery V.

    2007-02-01

    Optical coherence tomography (OCT) is a noninvasive imaging technique with high in-depth resolution. We employed the OCT technique for monitoring and quantification of analyte and drug diffusion in the cornea and sclera of rabbit eyes in vitro. Different analytes and drugs, such as metronidazole, dexamethasone, ciprofloxacin, mannitol, and glucose solution, were studied, and their permeability coefficients were calculated. Drug diffusion monitoring was performed as a function of time and as a function of depth. The obtained results suggest that the OCT technique might be used for analyte diffusion studies in connective and epithelial tissues.

  9. Experimental issues related to frequency response function measurements for frequency-based substructuring

    NASA Astrophysics Data System (ADS)

    Nicgorski, Dana; Avitabile, Peter

    2010-07-01

    Frequency-based substructuring is a very popular approach for the generation of system models from component measured data. Analytically, the approach has been shown to produce accurate results. However, implementation with actual test data can be difficult and can degrade the system response prediction. In order to produce good results, extreme care is needed in the measurement of the drive-point and transfer impedances of the structure, as well as in ensuring that the conditions for a linear time-invariant system are observed. Several studies have been conducted to show the sensitivity of the technique to small variations that often occur during typical testing of structures. These variations have been observed in actual tested configurations and have been substantiated with analytical models that replicate the problems typically encountered. The use of analytically simulated issues helps to clearly show the effects of typical measurement difficulties often observed in test data. This paper presents some of these common problems and provides guidance and recommendations for data to be used for this modeling approach.
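
    The coupling step that makes those measured FRFs so critical is the standard Lagrange-multiplier frequency-based substructuring identity, Hc = H - H Bᵀ(B H Bᵀ)⁻¹B H. A minimal sketch coupling two single-DOF subsystems is given below (illustrative parameters); since the measured drive-point FRFs pass through a matrix inverse, small measurement errors are amplified, which is the sensitivity the paper investigates.

        import numpy as np

        def receptance(m, k, c, w):
            return 1.0 / (k - m * w ** 2 + 1j * c * w)   # 1-DOF drive-point FRF

        w = 2 * np.pi * np.linspace(0.1, 10, 500)
        HA = receptance(1.0, 200.0, 0.5, w)
        HB = receptance(2.0, 800.0, 0.8, w)

        B = np.array([[1.0, -1.0]])                 # interface compatibility u_A - u_B = 0
        Hc = np.empty((len(w), 2, 2), dtype=complex)
        for i, (ha, hb) in enumerate(zip(HA, HB)):
            H = np.diag([ha, hb])                   # uncoupled block-diagonal FRF matrix
            lam = B @ H @ B.T                       # interface dynamic flexibility (1x1)
            Hc[i] = H - H @ B.T @ np.linalg.inv(lam) @ B @ H
        # Hc[:, 0, 0] is the coupled drive-point FRF at subsystem A's interface DOF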

  10. Rapid and automatic chemical identification of the medicinal flower buds of Lonicera plants by the benchtop and hand-held Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Jianbo; Guo, Baolin; Yan, Rui; Sun, Suqin; Zhou, Qun

    2017-07-01

    With the availability of hand-held equipment, Fourier transform infrared (FT-IR) spectroscopy is a promising analytical technique for minimizing the time cost of the chemical identification of herbal materials. This research examines the feasibility of the hand-held FT-IR spectrometer for the on-site testing of herbal materials, using Lonicerae Japonicae Flos (LJF) and Lonicerae Flos (LF) as examples. Correlation-based linear discriminant models for LJF and LF are established based on the benchtop and hand-held FT-IR instruments. The benchtop FT-IR models can exactly recognize all articles of LJF and LF. Although a few LF articles are misjudged at the sub-class level, the hand-held FT-IR models are able to exactly discriminate LJF and LF. As a direct and label-free analytical technique, FT-IR spectroscopy has great potential in the rapid and automatic chemical identification of herbal materials, either in laboratories or in the field. This helps prevent the spread and use of adulterated herbal materials in time.
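
    A correlation-based discriminant model of the kind mentioned can be as simple as correlating a query spectrum against class reference spectra and assigning the class with the highest coefficient. The sketch below uses invented Gaussian bands as stand-ins for LJF and LF reference spectra.

        import numpy as np

        rng = np.random.default_rng(3)
        wn = np.linspace(400, 4000, 500)
        ref_LJF = np.exp(-0.5 * ((wn - 1650) / 60) ** 2)   # stand-in reference spectra
        ref_LF  = np.exp(-0.5 * ((wn - 1740) / 60) ** 2)

        query = ref_LJF + 0.05 * rng.normal(size=wn.size)  # noisy unknown sample

        def corr(a, b):
            return np.corrcoef(a, b)[0, 1]

        scores = {"LJF": corr(query, ref_LJF), "LF": corr(query, ref_LF)}
        print("assigned class:", max(scores, key=scores.get), scores)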

  11. Technique for Calculating Solution Derivatives With Respect to Geometry Parameters in a CFD Code

    NASA Technical Reports Server (NTRS)

    Mathur, Sanjay

    2011-01-01

    A solution has been developed to the challenges of computing derivatives with respect to geometry, which is not straightforward because geometry parameters are not typically direct inputs to the computational fluid dynamics (CFD) solver. To overcome these issues, a procedure has been devised that can be used without having access to the mesh generator, while still being applicable to all types of meshes. The basic approach is inspired by the mesh motion algorithms used to deform the interior mesh nodes in a smooth manner when the surface nodes move, for example, in a fluid-structure interaction problem. The general idea is to model the mesh edges and nodes as constituting a spring-mass system. Changes to boundary node locations are propagated to interior nodes by allowing them to assume their new equilibrium positions, that is, positions where the forces on each node are in balance. The main advantage of the technique is that it is independent of the volumetric mesh generator, and can be applied to structured, unstructured, single- and multi-block meshes. It essentially reduces the problem to defining the surface mesh node derivatives with respect to the geometry parameters of interest. For analytical geometries, this is quite straightforward. In the more general case, one would need to be able to interrogate the underlying parametric CAD (computer aided design) model and to evaluate the derivatives either analytically, or by a finite difference technique. Because the technique is based on a partial differential equation (PDE), it is applicable not only to forward mode problems (where derivatives of all the output quantities are computed with respect to a single input), but it could also be extended to the adjoint problem, either by using an analytical adjoint of the PDE or a discrete analog.
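
    A toy version of the spring-analogy idea, with equal spring stiffnesses so that each interior node relaxes to the average of its neighbours (Laplacian smoothing), is sketched below; dividing the converged displacement field by the geometry-parameter step then approximates the propagation of surface derivatives into the volume that the paragraph describes.

        import numpy as np

        # nodes of a structured 2-D mesh; each interior node relaxes to the spring
        # equilibrium (equal stiffness -> average of its four neighbours)
        nx, ny = 21, 21
        x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))

        y[-1, :] += 0.1 * np.sin(np.pi * x[-1, :])   # perturb the top boundary nodes

        for _ in range(2000):                        # Jacobi relaxation of interior nodes
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                    + y[1:-1, 2:] + y[1:-1, :-2])

        # y now holds the smoothly propagated boundary motion; dividing the interior
        # displacements by the parameter step approximates the mesh-node derivatives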

  12. Researching Mental Health Disorders in the Era of Social Media: Systematic Review.

    PubMed

    Wongkoblap, Akkapon; Vadillo, Miguel A; Curcin, Vasa

    2017-06-29

    Mental illness is quickly becoming one of the most prevalent public health problems worldwide. Social network platforms, where users can express their emotions, feelings, and thoughts, are a valuable source of data for researching mental health, and techniques based on machine learning are increasingly used for this purpose. The objective of this review was to explore the scope and limits of cutting-edge techniques that researchers are using for predictive analytics in mental health and to review associated issues, such as ethical concerns, in this area of research. We performed a systematic literature review in March 2017, using keywords to search articles on data mining of social network data in the context of common mental health disorders, published between 2010 and March 8, 2017 in medical and computer science journals. The initial search returned a total of 5386 articles. Following a careful analysis of the titles, abstracts, and main texts, we selected 48 articles for review. We coded the articles according to key characteristics and the techniques used for data collection, data preprocessing, feature extraction, feature selection, model construction, and model verification. The most common analytical method was text analysis, with several studies using different flavors of image analysis and social interaction graph analysis. Despite an increasing number of studies investigating mental health issues using social network data, some common problems persist. Assembling large, high-quality datasets of social media users with mental disorders is problematic, not only due to biases associated with the collection methods, but also with regard to managing consent and selecting appropriate analytics techniques. ©Akkapon Wongkoblap, Miguel A Vadillo, Vasa Curcin. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.06.2017.

  13. Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages.

    PubMed

    Zhu, R; Zacharias, L; Wooding, K M; Peng, W; Mechref, Y

    2017-01-01

    Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in glycoprotein analysis. To overcome the challenges associated with glycoprotein analysis, many analytical techniques have been developed in recent years. Enrichment methods are used to improve the sensitivity of detection, while HPLC and mass spectrometry methods have been developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools have started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis are discussed. Multiple analytical techniques are compared, and the advantages and disadvantages of each technique are highlighted. © 2017 Elsevier Inc. All rights reserved.

  14. CHAPTER 7: Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages

    PubMed Central

    Zhu, Rui; Zacharias, Lauren; Wooding, Kerry M.; Peng, Wenjing; Mechref, Yehia

    2017-01-01

    Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in glycoprotein analysis. To overcome the challenges associated with glycoprotein analysis, many analytical techniques have been developed in recent years. Enrichment methods are used to improve the sensitivity of detection, while HPLC and mass spectrometry methods have been developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools have started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis are discussed. Multiple analytical techniques are compared, and the advantages and disadvantages of each technique are highlighted. PMID:28109440

  15. Model updating strategy for structures with localised nonlinearities using frequency response measurements

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.

    2018-02-01

    This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low- and high-amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism, and good agreement between the analytical predictions and measured responses is achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.

  16. Towards Understanding Soil Forming in Santa Clotilde Critical Zone Observatory: Modelling Soil Mixing Processes in a Hillslope using Luminescence Techniques

    NASA Astrophysics Data System (ADS)

    Sanchez, A. R.; Laguna, A.; Reimann, T.; Giráldez, J. V.; Peña, A.; Wallinga, J.; Vanwalleghem, T.

    2017-12-01

    Different geomorphological processes, such as bioturbation and erosion-deposition, intervene in soil formation and landscape evolution. The latter processes produce the alteration and degradation of the materials that compose the rocks. The degree to which the bedrock is weathered is estimated through the fraction of the bedrock that is mixed into the soil, either vertically or laterally. This study presents an analytical solution of the diffusion-advection equation to quantify bioturbation and erosion-deposition rates in profiles along a catena. The model is calibrated with age-depth data obtained from profiles using luminescence dating based on single-grain infrared stimulated luminescence (IRSL). Luminescence techniques contribute a direct measurement of the bioturbation and erosion-deposition processes. The single-grain IRSL technique is applied to feldspar minerals from fifteen samples collected from four soil profiles at different depths along a catena in the Santa Clotilde Critical Zone Observatory, Cordoba province, SE Spain. A sensitivity analysis is performed to determine the importance of the parameters in the analytical model. An uncertainty analysis is carried out to establish the best fit of the parameters to the measured age-depth data. The results indicate a diffusion constant at 20 cm depth of 47 mm²/year in the hill-base profile and 4.8 mm²/year in the hilltop profile. The model has high uncertainty in the estimation of erosion and deposition rates. This study reveals the potential of single-grain luminescence techniques to quantify pedoturbation processes.

  17. Geomagnetic field models for satellite angular motion studies

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, M. Yu.; Penkov, V. I.; Roldugin, D. S.; Pichuzhkina, A. V.

    2018-03-01

    Four geomagnetic field models are discussed: IGRF, inclined, direct and simplified dipoles. Geomagnetic induction vector expressions are provided in different reference frames. Induction vector behavior is compared for the different models. The models' applicability to the analysis of satellite motion is studied from theoretical and engineering perspectives. Relevant satellite dynamics analysis cases using analytical and numerical techniques are provided. These cases demonstrate the benefit of a certain model for a specific dynamics study. Recommendations for model usage are summarized at the end.
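
    As an illustration of the simplest of these models, the sketch below evaluates the induction vector of a dipole aligned with the Earth's spin axis along a circular polar orbit; the dipole-moment constant is approximate and the orbit parameters are invented for illustration.

        import numpy as np

        MU_E = 8.0e15                         # Earth dipole constant, T*m^3 (approximate)
        m_hat = np.array([0.0, 0.0, -1.0])    # simplified dipole: anti-parallel to spin axis

        def dipole_field(r_vec):
            # geomagnetic induction vector of a dipole at position r_vec (meters)
            r = np.linalg.norm(r_vec)
            r_hat = r_vec / r
            return MU_E / r ** 3 * (3.0 * np.dot(m_hat, r_hat) * r_hat - m_hat)

        # induction along one circular polar orbit at 600 km altitude
        Re, h = 6.371e6, 600e3
        for t in np.linspace(0, 2 * np.pi, 8):
            r_vec = (Re + h) * np.array([np.cos(t), 0.0, np.sin(t)])
            print(f"|B| = {np.linalg.norm(dipole_field(r_vec)) * 1e9:7.1f} nT")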

  18. Theoretical and software considerations for nonlinear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1983-01-01

    In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.
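
    The core of substructuring, condensing interior degrees of freedom onto boundary ones, can be illustrated with static (Guyan) condensation on a small spring-mass chain. The sketch below is a generic illustration, not the paper's multilevel dynamic formulation; Guyan reduction is exact for statics but only approximate for dynamics, which is part of what motivates more elaborate substructuring schemes.

        import numpy as np
        from scipy.linalg import eigh

        # 4-DOF spring chain; keep boundary "master" DOFs {0, 3}, condense {1, 2}
        k = 1000.0
        K = k * np.array([[ 2., -1.,  0.,  0.],
                          [-1.,  2., -1.,  0.],
                          [ 0., -1.,  2., -1.],
                          [ 0.,  0., -1.,  2.]])
        M = np.eye(4)

        m, s = [0, 3], [1, 2]
        S = -np.linalg.solve(K[np.ix_(s, s)], K[np.ix_(s, m)])  # slaves via masters
        K_red = K[np.ix_(m, m)] + K[np.ix_(m, s)] @ S
        M_red = (M[np.ix_(m, m)] + M[np.ix_(m, s)] @ S
                 + S.T @ M[np.ix_(s, m)] + S.T @ M[np.ix_(s, s)] @ S)

        print("full model lowest frequency:   ", np.sqrt(eigh(K, M, eigvals_only=True)[0]))
        print("reduced model lowest frequency:", np.sqrt(eigh(K_red, M_red, eigvals_only=True)[0]))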

  19. Simulation and statistics: Like rhythm and song

    NASA Astrophysics Data System (ADS)

    Othman, Abdul Rahman

    2013-04-01

    Simulation has been introduced to solve problems expressed in the form of systems. Two kinds of problems can be overcome by using this technique. First, a problem may have an analytical solution, but the cost of running an experiment to verify it may be high in terms of money and lives. Second, a problem may exist that has no analytical solution. In the field of statistical inference the second problem is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques, such as the bootstrap and permutations, to form a pseudo sampling distribution that will lead to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, being used to verify analytical solutions in inference. This paper also discusses resampling techniques as simulation techniques. Common misunderstandings about these two techniques are examined. The successful usage of both techniques is also explained.
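
    As a concrete illustration of the resampling idea, the following Python sketch forms a bootstrap pseudo sampling distribution for a sample mean and reads off a percentile confidence interval (synthetic data, illustrative parameters).

        import numpy as np

        rng = np.random.default_rng(4)
        sample = rng.exponential(scale=2.0, size=50)   # data whose mean we want a CI for

        boots = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                          for _ in range(10_000)])     # pseudo sampling distribution
        lo, hi = np.percentile(boots, [2.5, 97.5])
        print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")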

  20. Two-Dimensional Model for Reactive-Sorption Columns of Cylindrical Geometry: Analytical Solutions and Moment Analysis.

    PubMed

    Khan, Farman U; Qamar, Shamsul

    2017-05-01

    A set of analytical solutions is presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subjected to the first (Dirichlet) and third (Danckwerts) type inlet boundary conditions. A linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of a high-resolution finite-volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. Good agreement was observed between the analytical and numerical concentration profiles and moments. The developed solutions are efficient tools for analyzing numerical algorithms, sensitivity analysis and simultaneous determination of the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Analytical Techniques and Pharmacokinetics of Gastrodia elata Blume and Its Constituents.

    PubMed

    Wu, Jinyi; Wu, Bingchu; Tang, Chunlan; Zhao, Jinshun

    2017-07-08

    Gastrodia elata Blume (G. elata), commonly called Tianma in Chinese, is an important and notable traditional Chinese medicine (TCM), which has been used in China as an anticonvulsant, analgesic, sedative, anti-asthma, and anti-immune drug since ancient times. The aim of this review is to provide an overview of the abundant efforts of scientists in developing analytical techniques and performing pharmacokinetic studies of G. elata and its constituents, including sample pretreatment methods, analytical techniques, absorption, distribution, metabolism, excretion (ADME), and factors influencing its pharmacokinetics. Based on the reported pharmacokinetic property data of G. elata and its constituents, it is hoped that more studies will focus on the development of rapid and sensitive analytical techniques, discovering new therapeutic uses and understanding the specific in vivo mechanisms of action of G. elata and its constituents from the pharmacokinetic viewpoint in the near future. The present review discusses analytical techniques and pharmacokinetics of G. elata and its constituents reported from 1985 onwards.

  2. Exploring SMBH assembly with semi-analytic modelling

    NASA Astrophysics Data System (ADS)

    Ricarte, Angelo; Natarajan, Priyamvada

    2018-02-01

    We develop a semi-analytic model to explore different prescriptions of supermassive black hole (SMBH) fuelling. This model utilizes a merger-triggered burst mode in concert with two possible implementations of a long-lived steady mode for assembling the mass of the black hole in a galactic nucleus. We improve modelling of the galaxy-halo connection in order to more realistically determine the evolution of a halo's velocity dispersion. We use four model variants to explore a suite of observables: the M•-σ relation, mass functions of both the overall and broad-line quasar population, and luminosity functions as a function of redshift. We find that 'downsizing' is a natural consequence of our improved velocity dispersion mappings, and that high-mass SMBHs assemble earlier than low-mass SMBHs. The burst mode of fuelling is sufficient to explain the assembly of SMBHs to z = 2, but an additional steady mode is required both to assemble low-mass SMBHs and to reproduce the low-redshift luminosity function. We discuss in detail the trade-offs in matching various observables and the interconnected modelling components that govern them. As a result, we demonstrate the utility as well as the limitations of these semi-analytic techniques.

  3. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program that is essentially based on modal synthesis combined with a central finite-difference numerical integration scheme. The methodology leads to a modular, or building-block, technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.

  4. The analytical solution for drug delivery system with nonhomogeneous moving boundary condition

    NASA Astrophysics Data System (ADS)

    Saudi, Muhamad Hakimi; Mahali, Shalela Mohd; Harun, Fatimah Noor

    2017-08-01

    This paper discusses the development and the analytical solution of a mathematical model based on a drug release system with a swelling delivery device. The mathematical model is represented by a one-dimensional advection-diffusion equation with a nonhomogeneous moving boundary condition. The solution procedure consists of three major steps. Firstly, the application of the steady-state solution method, which is used to transform the nonhomogeneous moving boundary condition into a homogeneous boundary condition. Secondly, the application of the Landau transformation technique, which plays a significant role in removing the advection term from the system of equations and transforming the moving boundary condition into a fixed boundary condition. Thirdly, the use of the separation of variables method to find the analytical solution of the resulting initial boundary value problem. The results show that the swelling rate of the delivery device and the drug release rate are influenced by the value of the growth factor r.

  5. Analysis and testing of high entrainment single nozzle jet pumps with variable mixing tubes

    NASA Technical Reports Server (NTRS)

    Hickman, K. E.; Hill, P. G.; Gilbert, G. B.

    1972-01-01

    An analytical model was developed to predict the performance characteristics of axisymmetric single-nozzle jet pumps with variable area mixing tubes. The primary flow may be subsonic or supersonic. The computer program uses integral techniques to calculate the velocity profiles and the wall static pressures that result from the mixing of the supersonic primary jet and the subsonic secondary flow. An experimental program was conducted to measure mixing tube wall static pressure variations, velocity profiles, and temperature profiles in a variable area mixing tube with a supersonic primary jet. Static pressure variations were measured at four different secondary flow rates. These test results were used to evaluate the analytical model. The analytical results compared well to the experimental data. Therefore, the analysis is believed to be ready for use to relate jet pump performance characteristics to mixing tube design.

  6. Assessment of analytical techniques for predicting solid propellant exhaust plumes

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.

    1977-01-01

    The calculation of solid propellant exhaust plume flow fields is addressed. Two major areas are covered: (1) the applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size and particle size distributions, and (2) thermochemical modeling of the gaseous phase of the flow field. Comparisons of experimentally measured and analytically predicted data are made. The experimental data were obtained for subscale solid propellant motors with aluminum loadings of 2, 10 and 15%. Analytical predictions were made using a fully coupled two-phase numerical solution. Data comparisons are presented for radial distributions at plume axial stations of 5, 12, 16 and 20 diameters.

  7. Principles of operation and data reduction techniques for the loft drag disc turbine transducer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverman, S.

    An analysis of the single- and two-phase flow data applicable to the loss-of-fluid test (LOFT) is presented for the LOFT drag turbine transducer. Analytical models which were employed to correlate the experimental data are presented.

  8. Combustion Fundamentals Research

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The various physical processes that occur in the gas turbine combustor and the development of analytical models that accurately describe these processes are discussed. Aspects covered include fuel sprays, fluid mixing, combustion dynamics, radiation, chemistry, and numerical techniques that can be applied to highly turbulent, recirculating, reacting flow fields.

  9. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.

  10. Surface-enhanced Raman spectroscopy for the detection of pathogenic DNA and protein in foods

    NASA Astrophysics Data System (ADS)

    Chowdhury, Mustafa H.; Atkinson, Brad; Good, Theresa; Cote, Gerard L.

    2003-07-01

    Traditional Raman spectroscopy, while extremely sensitive to structure and conformation, is an ineffective tool for the detection of bioanalytes at the sub-millimolar level. Surface Enhanced Raman Spectroscopy (SERS) is a more recently developed technique that has been used with considerable success to enhance the Raman cross-section of a molecule by factors of 10⁶ to 10¹⁴. This technique can be exploited in a nanoscale biosensor for the detection of pathogenic proteins and DNA in foods by using a biorecognition molecule to bring a target analyte in close proximity to the metal surface. This is expected to produce a SERS signal of the target analyte, thus making it possible to easily discriminate between the target analyte and possible confounders. In order for the sensor to be effective, the Raman spectrum of the target analyte would have to be distinct from that of the biorecognition molecule, as both would be in close proximity to the metal surface and thus be subjected to the SERS effect. In our preliminary studies we have successfully used citrate-reduced silver colloidal particles to obtain unique SERS spectra of α-helical and β-sheet bovine serum albumin (BSA) that served as models of an α-helical antibody (biorecognition element) and a β-sheet target protein (pathogenic prion). In addition, the unique SERS spectra of double-stranded and single-stranded DNA were also obtained, where the single-stranded DNA served as the model for the biorecognition element and the double-stranded DNA served as the model for the DNA probe/target hybrid. This provides a confirmation of the feasibility of the method, which opens opportunities for potentially widespread applications in the detection of food pathogens, biowarfare agents, and other bio-analytes.

  11. Mechanical properties of additively manufactured octagonal honeycombs.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-12-01

    Honeycomb structures have found numerous applications as structural and biomedical materials due to their favourable properties such as low weight, high stiffness, and porosity. Application of additive manufacturing and 3D printing techniques allows for manufacturing of honeycombs with arbitrary shape and wall thickness, opening the way for optimizing the mechanical and physical properties for specific applications. In this study, the mechanical properties of honeycomb structures with a new geometry, called octagonal honeycomb, were investigated using analytical, numerical, and experimental approaches. An additive manufacturing technique, namely fused deposition modelling, was used to fabricate the honeycombs from polylactic acid (PLA). The honeycomb structures were then mechanically tested under compression and the mechanical properties of the structures were determined. In addition, the Euler-Bernoulli and Timoshenko beam theories were used for deriving analytical relationships for the elastic modulus, yield stress, Poisson's ratio, and buckling stress of this new design of honeycomb structures. Finite element models were also created to analyse the mechanical behaviour of the honeycombs computationally. The analytical solutions obtained using Timoshenko beam theory were close to the computational results in terms of elastic modulus, Poisson's ratio and yield stress, especially for relative densities smaller than 25%. The Timoshenko-based analytical solutions and the computational results were in good agreement with experimental observations. Finally, the elastic properties of the proposed honeycomb structure were compared to those of other honeycomb structures such as square, triangular, hexagonal, mixed, diamond, and Kagome. The octagonal honeycomb showed yield stress and elastic modulus values very close to those of regular hexagonal honeycombs and lower than those of the other considered honeycombs. Copyright © 2016 Elsevier B.V. All rights reserved.
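
    For comparison with the regular hexagonal honeycomb mentioned at the end, the classic Euler-Bernoulli (Gibson-Ashby) cell-wall bending result for the in-plane modulus of hexagonal cells can be evaluated directly; the sketch below implements that standard formula, not the paper's new octagonal relations.

        import numpy as np

        def hex_modulus_ratio(t_over_l, h_over_l=1.0, theta_deg=30.0):
            # in-plane E*/Es of a hexagonal honeycomb, cell-wall bending (Gibson & Ashby):
            # E*/Es = (t/l)^3 cos(theta) / ((h/l + sin(theta)) sin^2(theta))
            th = np.radians(theta_deg)
            return (t_over_l ** 3) * np.cos(th) / ((h_over_l + np.sin(th)) * np.sin(th) ** 2)

        for t_over_l in (0.05, 0.1, 0.2):
            print(f"t/l = {t_over_l}: E*/Es = {hex_modulus_ratio(t_over_l):.4f}")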

  12. Mixing of two co-directional Rayleigh surface waves in a nonlinear elastic material.

    PubMed

    Morlock, Merlin B; Kim, Jin-Yeon; Jacobs, Laurence J; Qu, Jianmin

    2015-01-01

    The mixing of two co-directional, initially monochromatic Rayleigh surface waves in an isotropic, homogeneous, and nonlinear elastic solid is investigated using analytical, finite element method, and experimental approaches. The analytical investigations show that while the horizontal velocity component can form a shock wave, the vertical velocity component can form a pulse independent of the specific ratios of the fundamental frequencies and amplitudes that are mixed. This analytical model is then used to simulate the development of the fundamentals, second harmonics, and the sum and difference frequency components over the propagation distance. The analytical model is further extended to include diffraction effects in the parabolic approximation. Finally, the frequency and amplitude ratios of the fundamentals are identified which provide maximum amplitudes of the second harmonics as well as of the sum and difference frequency components, to help guide effective material characterization; this approach should make it possible to measure the acoustic nonlinearity of a solid not only with the second harmonics, but also with the sum and difference frequency components. Results of the analytical investigations are then confirmed using the finite element method and the experimental feasibility of the proposed technique is validated for an aluminum specimen.

  13. Electrodynamic balance-mass spectrometry of single particles as a new platform for atmospheric chemistry research

    NASA Astrophysics Data System (ADS)

    Birdsall, Adam W.; Krieger, Ulrich K.; Keutsch, Frank N.

    2018-01-01

    New analytical techniques are needed to improve our understanding of the intertwined physical and chemical processes that affect the composition of aerosol particles in the Earth's atmosphere, such as gas-particle partitioning and homogeneous or heterogeneous chemistry, and their ultimate relation to air quality and climate. We describe a new laboratory setup that couples an electrodynamic balance (EDB) to a mass spectrometer (MS). The EDB stores a single laboratory-generated particle in an electric field under atmospheric conditions for an arbitrarily long time. The particle is then transferred via gas flow to an ionization region that vaporizes and ionizes the analyte molecules before MS measurement. We demonstrate the feasibility of the technique by tracking evaporation of polyethylene glycol molecules and finding agreement with a kinetic model. Fitting data to the kinetic model also allows determination of vapor pressures to within a factor of 2. This EDB-MS system can be used to study fundamental chemical and physical processes involving particles that are difficult to isolate and study with other techniques. The results of such measurements can be used to improve our understanding of atmospheric particles.

  14. Iontophoresis and Flame Photometry: A Hybrid Interdisciplinary Experiment

    ERIC Educational Resources Information Center

    Sharp, Duncan; Cottam, Linzi; Bradley, Sarah; Brannigan, Jeanie; Davis, James

    2010-01-01

    The combination of reverse iontophoresis and flame photometry provides an engaging analytical experiment that gives first-year undergraduate students a flavor of modern drug delivery and analyte extraction techniques while reinforcing core analytical concepts. The experiment provides a highly visual demonstration of the iontophoresis technique and…

  15. Development of an external ceramic insulation for the space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Tanzilli, R. A. (Editor)

    1972-01-01

    The development and evaluation of a family of reusable external insulation systems for use on the space shuttle orbiter is discussed. The material development and evaluation activities are described. Additional information is provided on the development of an analytical micromechanical model of the reusable insulation and the development of techniques for reducing the heat transfer. Design data on reusable insulation systems and test techniques used for design data generation are included.

  16. A Study on Predictive Analytics Application to Ship Machinery Maintenance

    DTIC Science & Technology

    2013-09-01

    Looking at the nature of the time series forecasting method, it would be better applied to offline analysis. The application for real-time online...other system attributes in future. Two techniques of statistical analysis, mainly time series models and cumulative sum control charts, are discussed in...statistical tool employed for the two techniques of statistical analysis. Both time series forecasting as well as CUSUM control charts are shown to be
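
    The cumulative sum control chart referred to above is straightforward to sketch; the allowance k and decision threshold h below are conventional illustrative choices (k = 0.5σ, h = 5σ), not values from the report.

        import numpy as np

        rng = np.random.default_rng(5)
        x = np.concatenate([rng.normal(0.0, 1.0, 200),    # healthy machinery readings
                            rng.normal(0.8, 1.0, 100)])   # small sustained shift (degradation)

        mu0, k, h = 0.0, 0.5, 5.0        # target mean, allowance, decision threshold
        c_plus = 0.0
        for i, xi in enumerate(x):
            c_plus = max(0.0, c_plus + (xi - mu0) - k)    # one-sided tabular CUSUM
            if c_plus > h:
                print("upward shift signalled at observation", i)
                break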

  17. On-orbit cryogenic fluid transfer

    NASA Technical Reports Server (NTRS)

    Aydelott, J. C.; Gille, J. P.; Eberhardt, R. N.

    1984-01-01

    A number of future NASA and DOD missions have been identified that will require, or could benefit from, resupply of cryogenic liquids in orbit. The most promising approach for accomplishing cryogenic fluid transfer in the weightlessness environment of space is to use the thermodynamic filling technique. This approach involves initially reducing the receiver tank temperature by using several charge-hold-vent cycles, followed by filling the tank without venting. Martin Marietta Denver Aerospace, under contract to the NASA Lewis Research Center, is currently developing analytical models to describe the on-orbit cryogenic fluid transfer process. A detailed design of a shuttle-attached experimental facility, which will provide the data necessary to verify the analytical models, is also being performed.

  18. Newcomer adjustment during organizational socialization: a meta-analytic review of antecedents, outcomes, and methods.

    PubMed

    Bauer, Talya N; Bodner, Todd; Erdogan, Berrin; Truxillo, Donald M; Tucker, Jennifer S

    2007-05-01

    The authors tested a model of antecedents and outcomes of newcomer adjustment using 70 unique samples of newcomers with meta-analytic and path modeling techniques. Specifically, they proposed and tested a model in which adjustment (role clarity, self-efficacy, and social acceptance) mediated the effects of organizational socialization tactics and information seeking on socialization outcomes (job satisfaction, organizational commitment, job performance, intentions to remain, and turnover). The results generally supported this model. In addition, the authors examined the moderating effects of methodology on these relationships by coding for 3 methodological issues: data collection type (longitudinal vs. cross-sectional), sample characteristics (school-to-work vs. work-to-work transitions), and measurement of the antecedents (facet vs. composite measurement). Discussion focuses on the implications of the findings and suggestions for future research. 2007 APA, all rights reserved

  19. Frequency Response of Pressure Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Winslow, Neal A.; Carroll, Bruce F.; Setzer, Fred M.

    1996-01-01

    An experimental method for measuring the frequency response of Pressure Sensitive Paints (PSP) is presented. These results lead to the development of a dynamic correction technique for PSP measurements, which is of great importance to the advancement of PSP as a measurement technique. Such a dynamic corrector is most easily designed from the frequency response of the given system. An example of this correction technique is shown. In addition to the experimental data, an analytical model for the frequency response is developed from the one-dimensional mass diffusion equation.

  20. A dual-mode generalized likelihood ratio approach to self-reorganizing digital flight control system design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The GLR technique monitors the output of the Kalman filter and searches for the time at which the failure occurred, thus allowing it to be sensitive to new data and consequently increasing the chances for fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests of the F-8 aircraft flight control system and computerized modelling of the technique are presented.
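
    A minimal sketch of the GLR idea for the simplest case, an abrupt bias (mean jump) in scalar, unit-variance Gaussian innovations, is given below; the statistic maximizes the likelihood ratio over candidate failure onset times, which is what makes the test sensitive to new data. The threshold and failure signature are illustrative, and the paper's dual-mode scheme is considerably richer.

        import numpy as np

        rng = np.random.default_rng(6)
        gamma = rng.normal(size=300)    # Kalman-filter innovations, nominally zero-mean
        gamma[180:] += 1.0              # abrupt bias failure injected at sample 180

        sigma2, threshold = 1.0, 25.0
        for k in range(1, len(gamma)):
            cum = np.cumsum(gamma[:k + 1][::-1])   # sums over candidate onset times
            n = np.arange(1, k + 2)
            glr = cum ** 2 / (sigma2 * n)          # likelihood-ratio statistic per onset
            if glr.max() > threshold:
                theta_hat = k - int(glr.argmax())  # estimated failure time
                print(f"failure detected at k={k}, estimated onset theta={theta_hat}")
                break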

  1. Sensor Data Qualification Technique Applied to Gas Turbine Engines

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Simon, Donald L.

    2013-01-01

    This paper applies a previously developed sensor data qualification technique to a commercial aircraft engine simulation known as the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k). The sensor data qualification technique is designed to detect, isolate, and accommodate faulty sensor measurements. It features sensor networks, which group various sensors together, and relies on an empirically derived analytical model to relate the sensor measurements. Relationships between all member sensors of a network are analyzed to detect and isolate any faulty sensor within the network.

  2. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1994-01-01

    The primary accomplishments of the project are as follows: (1) Using the transonic small perturbation equation as a flowfield model, the project demonstrated that the quasi-analytical method could be used to obtain aerodynamic sensitivity coefficients for airfoils at subsonic, transonic, and supersonic conditions for design variables such as Mach number, airfoil thickness, maximum camber, angle of attack, and location of maximum camber. It was established that the quasi-analytical approach was an accurate method for obtaining aerodynamic sensitivity derivatives for airfoils at transonic conditions and usually more efficient than the finite difference approach. (2) The use of symbolic manipulation software to determine the appropriate expressions and computer coding associated with the quasi-analytical method for sensitivity derivatives was investigated. Using the three-dimensional fully conservative full potential flowfield model, it was determined that symbolic manipulation along with a chain rule approach was extremely useful in developing a combined flowfield and quasi-analytical sensitivity derivative code capable of considering a large number of realistic design variables. (3) Using the three-dimensional fully conservative full potential flowfield model, the quasi-analytical method was applied to swept wings (i.e., three-dimensional) at transonic flow conditions. (4) The incremental iterative technique has been applied to the three-dimensional transonic nonlinear small perturbation flowfield formulation, an equivalent plate deflection model, and the associated aerodynamic and structural discipline sensitivity equations; and coupled aeroelastic results for an aspect ratio three wing in transonic flow have been obtained.

  3. Determining Kinetic Parameters for Isothermal Crystallization of Glasses

    NASA Technical Reports Server (NTRS)

    Ray, C. S.; Zhang, T.; Reis, S. T.; Brow, R. K.

    2006-01-01

    Non-isothermal crystallization techniques are frequently used to determine the kinetic parameters for crystallization in glasses. These techniques are experimentally simple and quick compared to isothermal techniques. However, the analytical models used for non-isothermal data analysis, originally developed for describing isothermal transformation kinetics, are fundamentally flawed. The present paper describes a technique for determining the kinetic parameters for isothermal crystallization in glasses which eliminates most of the common problems that generally make studies of isothermal crystallization laborious and time-consuming. In this technique, the volume fraction of glass that is crystallized as a function of time during an isothermal hold was determined using differential thermal analysis (DTA). The crystallization parameters for the lithium-disilicate (Li2O.2SiO2) model glass were first determined and compared to the same parameters determined by other techniques to establish the accuracy and usefulness of the present technique. This technique was then used to describe the crystallization kinetics of a complex Ca-Sr-Zn-silicate glass developed for sealing solid oxide fuel cells.
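
    Once the crystallized volume fraction versus time has been extracted from the DTA data, the kinetic parameters follow from a fit of an isothermal transformation law. The sketch below assumes the standard Johnson-Mehl-Avrami-Kolmogorov (JMAK) form with synthetic data; the actual fractions in the paper come from the DTA measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def jmak(t, k, n):
            # Johnson-Mehl-Avrami-Kolmogorov isothermal transformation kinetics
            return 1.0 - np.exp(-(k * t) ** n)

        t = np.linspace(1, 120, 40)                   # minutes at the hold temperature
        rng = np.random.default_rng(7)
        x_meas = jmak(t, k=0.03, n=2.5) + 0.01 * rng.normal(size=t.size)  # synthetic fractions

        (k_fit, n_fit), _ = curve_fit(jmak, t, x_meas, p0=[0.05, 2.0])
        print(f"rate constant k = {k_fit:.4f} 1/min, Avrami exponent n = {n_fit:.2f}")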

  4. Analytical Modeling of Herschel-Quincke Concept Applied to Inlet Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Hallez, Raphael F.; Burdisso, Ricardo A.; Gerhold, Carl H. (Technical Monitor)

    2002-01-01

    This report summarizes the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the period from January 1999 to December 2000 on the project 'Investigation of an Adaptive Herschel-Quincke Tube Concept for the Reduction of Tonal and Broadband Noise from Turbofan Engines', funded by NASA Langley Research Center. The Herschel-Quincke (HQ) tube concept is a developing technique that consists of circumferential arrays of tubes around the duct. The analytical model is developed to provide prediction and design guidelines for application of the HQ concept to turbofan engine inlets. An infinite duct model is developed and used to provide insight into attenuation mechanisms and design strategies. Based on this early model, the NASA-developed TBIEM3D code is modified for the HQ system. This model allows for investigation of the HQ system combined with a passive liner.

  5. Modeling and simulation of charged particle beam transport in the UTA 2 meter Time of Flight Positron Annihilation Induced Auger Spectrometer

    NASA Astrophysics Data System (ADS)

    Joglekar, Prasad; Lim, Lawrence; Kalaskar, Sushant; Shastry, Karthik; Satyal, Suman; Weiss, Alexander

    2010-10-01

    Time of Flight Positron Annihilation Induced Auger Electron Spectroscopy (TOF-PAES) is a surface analytical technique with high surface selectivity. Almost 95% of the PAES signal originates from the sample's topmost layer due to the trapping of positrons just above the surface in an image-potential well before annihilation. This talk presents a description of the TOF technique, as well as the results of modeling the charged-particle transport used in the design of the 2 meter TOF-PAES system currently under construction at UTA.

  6. Dynamic characterization, monitoring and control of rotating flexible beam-mass structures via piezo-embedded techniques

    NASA Technical Reports Server (NTRS)

    Lai, Steven H.-Y.

    1992-01-01

    A variational principle and a finite element discretization technique were used to derive the dynamic equations for a high speed rotating flexible beam-mass system embedded with piezo-electric materials. The dynamic equation thus obtained allows the development of finite element models which accommodate both the original structural element and the piezoelectric element. The solutions of finite element models provide system dynamics needed to design a sensing system. The characterization of gyroscopic effect and damping capacity of smart rotating devices are addressed. Several simulation examples are presented to validate the analytical solution.

  7. The GOSSIP on the MCV V347 Pavonis

    NASA Astrophysics Data System (ADS)

    Potter, S. B.; Cropper, Mark; Hakala, P. J.

    Modelling of the polarized cyclotron emission from magnetic cataclysmic variables (MCVs) has been a powerful technique for determining the structure of the accretion zones on the white dwarf. Until now, this has been achieved by constructing emission regions (for example arcs and spots) put in by hand, in order to recover the polarized emission. These models were all inferred indirectly from arguments based on polarization and X-ray light curves. Potter, Hakala & Cropper (1998) presented a technique (Stokes imaging) which objectively and analytically models the polarized emission to recover the structure of the cyclotron emission region(s) in MCVs. We demonstrate this technique with the aid of a test case, and then apply it to polarimetric observations of the AM Her system V347 Pav. As the system parameters of V347 Pav (for example its inclination) have not been well determined, we describe an extension to the Stokes imaging technique which also searches the system parameter space (GOSSIP).

  8. Synchrotron based mass spectrometry to investigate the molecular properties of mineral-organic associations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Suet Yi; Kleber, Markus; Takahashi, Lynelle K.

    2013-04-01

    Soil organic matter (OM) is important because its decay drives life processes in the biosphere. Analysis of organic compounds in geological systems is difficult because of their intimate association with mineral surfaces. To date there is no procedure capable of quantitatively separating organic from mineral phases without creating artifacts or mass loss. Therefore, analytical techniques that can (a) generate information about both organic and mineral phases simultaneously and (b) allow the examination of predetermined high-interest regions of the sample, as opposed to conventional bulk analytical techniques, are valuable. Laser Desorption Synchrotron Postionization (synchrotron-LDPI) mass spectrometry is introduced as a novel analytical tool to characterize the molecular properties of organic compounds in mineral-organic samples from terrestrial systems, and it is demonstrated that, when combined with Secondary Ion Mass Spectrometry (SIMS), it can provide complementary information on mineral composition. Mass spectrometry along a decomposition gradient in density fractions verifies the consistency of our results with bulk analytical techniques. We further demonstrate that by changing laser and photoionization energies, variations in the molecular stability of organic compounds associated with mineral surfaces can be determined. The combination of synchrotron-LDPI and SIMS shows that the energetic conditions involved in desorption and ionization of organic matter may be a greater determinant of mass spectral signatures than the inherent molecular structure of the organic compounds investigated. The latter has implications for molecular models of natural organic matter that are based on mass spectrometric information.

  9. Detailed analysis of the effects of stencil spatial variations with arbitrary high-order finite-difference Maxwell solver

    DOE PAGES

    Vincenti, H.; Vay, J. -L.

    2015-11-22

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications, for instance, occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with a domain decomposition technique, which may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.
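
    As a hedged illustration of the kind of dispersion analysis involved (generic notation, not necessarily the authors'), an order-2p centered staggered difference acting on a plane wave e^{ikx} replaces the exact wavenumber k by an effective one,

        [k] = \frac{2}{\Delta x} \sum_{j=1}^{p} C_j^{p}\, \sin\!\left( (2j-1)\, \frac{k \Delta x}{2} \right),

    where the C_j^p are the stencil coefficients (p = 1 with C_1 = 1 recovers the familiar second-order result). Truncating the stencil near a subdomain or PML boundary perturbs these coefficients locally, and an analytical model of the type described can propagate that perturbation to predict the resulting spurious signal at any frequency and angle.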

  10. SOLUTIONS APPROXIMATING SOLUTE TRANSPORT IN A LEAKY AQUIFER RECEIVING WASTEWATER INJECTION

    EPA Science Inventory

    A mathematical model amenable to analytical solution techniques is developed for the investigation of contaminant transport from an injection well into a leaky aquifer system, which comprises a pumped and an unpumped aquifer connected to each other by an aquitard. A steady state ...

  11. Analytical solutions of the space-time fractional Telegraph and advection-diffusion equations

    NASA Astrophysics Data System (ADS)

    Tawfik, Ashraf M.; Fichtner, Horst; Schlickeiser, Reinhard; Elhanbaly, A.

    2018-02-01

    The aim of this paper is to develop a fractional derivative model of energetic particle transport for both uniform and non-uniform large-scale magnetic fields by studying the fractional Telegraph equation and the fractional advection-diffusion equation. Analytical solutions of the space-time fractional Telegraph equation and the space-time fractional advection-diffusion equation are obtained by use of the Caputo fractional derivative and the Laplace-Fourier technique. The solutions are given in terms of Fox's H function. As an illustration, they are applied to the case of solar energetic particles.
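
    As a schematic illustration of the method (a representative form; the paper's exact equations and parameters may differ), consider a space-time fractional telegraph equation with Caputo time derivatives of orders 2*alpha and alpha and a Riesz space derivative of order 2*beta. Assuming an initial profile f(x,0) = f_0(x) with vanishing initial time derivative, a Laplace transform in t and a Fourier transform in x reduce the problem to algebra:

        \frac{\partial^{2\alpha} f}{\partial t^{2\alpha}} + \tau \frac{\partial^{\alpha} f}{\partial t^{\alpha}}
            = \kappa \frac{\partial^{2\beta} f}{\partial |x|^{2\beta}}
        \;\;\Longrightarrow\;\;
        \tilde{f}(k,s) = \frac{\left( s^{2\alpha-1} + \tau\, s^{\alpha-1} \right) \hat{f}_0(k)}
                              {s^{2\alpha} + \tau\, s^{\alpha} + \kappa\, |k|^{2\beta}}.

    Term-by-term inversion of expressions of this type is what produces the Fox H-function representations mentioned in the abstract.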

  12. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    NASA Astrophysics Data System (ADS)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, consumes a great deal of computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive-index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we represent cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). Changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
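
    To make the particle model concrete, the following minimal Python sketch draws an illustrative two-dimensional analogue of a Gaussian random particle: the log-radius is a random Fourier series whose coefficient variances decay with a correlation angle, so small angles give rough, complex shapes and large angles approach a sphere. The decay law and all parameter names are assumptions for illustration, not the published parameterization.

        import numpy as np

        def gaussian_random_contour(n_pts=360, sigma=0.2, gamma_deg=30.0,
                                    n_modes=20, seed=0):
            # log-radius as a truncated Fourier series; the correlation angle
            # gamma controls how quickly high-order shape modes are damped
            rng = np.random.default_rng(seed)
            theta = np.linspace(0.0, 2.0 * np.pi, n_pts)
            gamma = np.radians(gamma_deg)
            log_r = np.zeros(n_pts)
            for n in range(1, n_modes + 1):
                var = sigma**2 * np.exp(-0.5 * (n * gamma) ** 2)  # assumed decay law
                a, b = rng.normal(0.0, np.sqrt(var), 2)
                log_r += a * np.cos(n * theta) + b * np.sin(n * theta)
            return theta, np.exp(log_r)  # log-normally distributed radius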

  13. Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in Internet of Bio-Nano Things

    PubMed Central

    Kuscu, Murat; Akan, Ozgur B.

    2018-01-01

    We consider a microfluidic molecular communication (MC) system, where the concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for information and communication technology (ICT), as they enable an optimisation framework to develop advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of the bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by the laminar flow, which results in a parabolic velocity profile, and by the finite number of ligand receptors, which leads to receiver saturation. The model also captures the effects of the reactive surface depletion layer resulting from mass transport limitations and of the moving reaction boundary originating from the passage of the finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed-form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics. PMID:29415019

  14. Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in Internet of Bio-Nano Things.

    PubMed

    Kuscu, Murat; Akan, Ozgur B

    2018-01-01

    We consider a microfluidic molecular communication (MC) system, where the concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for information and communication technology (ICT), as they enable an optimisation framework to develop advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of the bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by the laminar flow, which results in a parabolic velocity profile, and by the finite number of ligand receptors, which leads to receiver saturation. The model also captures the effects of the reactive surface depletion layer resulting from mass transport limitations and of the moving reaction boundary originating from the passage of the finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed-form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics.
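
    A minimal sketch of the receiver-saturation nonlinearity described above: transport is idealized here as a rectangular concentration pulse at the receiver surface, and the bound receptor concentration follows a single ligand-receptor rate equation. The rate constants, pulse times and function names are hypothetical stand-ins, not values from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        k_on, k_off, R_T = 0.5, 0.05, 1.0   # hypothetical rates; R_T = total receptors

        def c_surface(t, t_on=5.0, t_off=30.0, c0=1.0):
            # idealized finite-duration molecular concentration pulse at the receiver
            return c0 if t_on <= t <= t_off else 0.0

        def rhs(t, y):
            B = y[0]  # bound receptor concentration (the received signal)
            return [k_on * c_surface(t) * (R_T - B) - k_off * B]

        sol = solve_ivp(rhs, (0.0, 100.0), [0.0], max_step=0.25)
        # sol.y[0] shows the expected shape: rise, saturation toward R_T, decay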

  15. On-orbit evaluation of the control system/structural mode interactions on OSO-8

    NASA Technical Reports Server (NTRS)

    Slafer, L. I.

    1980-01-01

    The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests was conducted with the spacecraft in which a wide-bandwidth, high-resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. This paper describes the results of these tests, reviewing the basic design problem, the analytical approach taken, the ground test philosophy, and the on-orbit testing. Data from the tests were used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments. The test results have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system, and have also verified the approach taken to vehicle and servo ground testing.

  16. Equivalent reduced model technique development for nonlinear system dynamic response

    NASA Astrophysics Data System (ADS)

    Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet

    2013-04-01

    The dynamic response of structural systems commonly involves nonlinear effects. Oftentimes, structural systems are made up of several components whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the system equivalent reduction expansion process (SEREP). Extremely reduced-order models that provide the dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for dynamic response evaluation using direct integration techniques. The full-space solution will be compared to the response obtained using drastically reduced models to demonstrate the usefulness of the technique for a variety of analytical cases.
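
    A minimal sketch of the SEREP reduction at the heart of ERMT, under the usual assumptions (mode shapes are available for the linear component; all names are illustrative):

        import numpy as np

        def serep_reduce(M, K, phi, active):
            """SEREP reduction: phi is the (n x m) mode-shape matrix of the linear
            component; active lists the retained (e.g. connection) DOF indices."""
            phi_a = phi[active, :]              # modes partitioned to active DOFs
            T = phi @ np.linalg.pinv(phi_a)     # (n x a) transformation matrix
            return T.T @ M @ T, T.T @ K @ T, T  # reduced mass, stiffness, and map

    The reduced matrices preserve the dynamics of the retained modes exactly, and T expands the reduced response back to full space after the nonlinear transient integration.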

  17. An analytical/numerical correlation study of the multiple concentric cylinder model for the thermoplastic response of metal matrix composites

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.

    1993-01-01

    The utility of a recently developed analytical micromechanics model for the response of metal matrix composites under thermal loading is illustrated by comparison with the results generated using the finite-element approach. The model is based on the concentric cylinder assemblage consisting of an arbitrary number of elastic or elastoplastic sublayers with isotropic or orthotropic, temperature-dependent properties. The elastoplastic boundary-value problem of an arbitrarily layered concentric cylinder is solved using the local/global stiffness matrix formulation (originally developed for elastic layered media) and Mendelson's iterative technique of successive elastic solutions. These features of the model facilitate efficient investigation of the effects of various microstructural details, such as functionally graded architectures of interfacial layers, on the evolution of residual stresses during cool down. The available closed-form expressions for the field variables can readily be incorporated into an optimization algorithm in order to efficiently identify optimal configurations of graded interfaces for given applications. Comparison of residual stress distributions after cool down generated using finite-element analysis and the present micromechanics model for four composite systems with substantially different temperature-dependent elastic, plastic, and thermal properties illustrates the efficacy of the developed analytical scheme.
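
    Schematically, Mendelson's method of successive elastic solutions iterates an elastic layered-cylinder solve with the current plastic strains treated as known eigenstrains (the notation here is generic, not the authors'):

        \sigma^{(i)} = C : \left( \varepsilon^{(i)} - \varepsilon^{p,(i-1)} - \alpha\, \Delta T \right),
        \qquad
        \varepsilon^{p,(i)} = \varepsilon^{p,(i-1)} + \Delta\varepsilon^{p}\!\left( \sigma^{(i)} \right),

    with the increment Delta epsilon^p taken from the flow rule, repeating until the plastic strain field stops changing to within a tolerance.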

  18. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes the smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image-space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate the acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
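
    Schematically (generic compressed-sensing notation, not necessarily the authors' exact formulation), the p-CS reconstruction solves

        \hat{x} = \arg\min_{x} \; \| E x - y \|_2^2 + \lambda \, \| \nabla_p x \|_1 ,

    where E is the undersampled Fourier/coil encoding operator, y the acquired k-space data, and nabla_p a sparsifying finite-difference (or similar) transform along the parametric dimension, so that smooth signal evolution across, e.g., flip angles is rewarded without invoking an explicit analytical signal model.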

  19. Recent developments and future trends in solid phase microextraction techniques towards green analytical chemistry.

    PubMed

    Spietelun, Agata; Marcinkowski, Łukasz; de la Guardia, Miguel; Namieśnik, Jacek

    2013-12-20

    Solid-phase microextraction techniques find increasing application in the sample preparation step before the chromatographic determination of analytes in samples with a complex composition. These techniques allow for integrating several operations, such as sample collection, extraction, analyte enrichment above the detection limit of a given measuring instrument, and the isolation of analytes from the sample matrix. In this work, information is presented about novel methodological and instrumental solutions for different variants of solid-phase extraction techniques: solid-phase microextraction (SPME), stir bar sorptive extraction (SBSE) and magnetic solid-phase extraction (MSPE), including practical applications of these techniques and a critical discussion of their advantages and disadvantages. The proposed solutions fulfill the requirements resulting from the concept of sustainable development, and specifically from the implementation of green chemistry principles in analytical laboratories. Therefore, particular attention was paid to the description of possible uses of novel, selective stationary phases in extraction techniques, inter alia, polymeric ionic liquids, carbon nanotubes, and silica- and carbon-based sorbents. The methodological solutions, together with properly matched sampling devices for collecting analytes from samples with varying matrix composition, enable us to reduce the number of errors during sample preparation prior to chromatographic analysis as well as to limit the negative impact of this analytical step on the natural environment and the health of laboratory employees. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Analytic derivation of the next-to-leading order proton structure function F2p(x, Q2) based on the Laplace transformation

    NASA Astrophysics Data System (ADS)

    Khanpour, Hamzeh; Mirjalili, Abolfazl; Tehrani, S. Atashbar

    2017-03-01

    An analytical solution based on the Laplace transformation technique for the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations is presented at next-to-leading order accuracy in perturbative QCD. This technique is also applied to extract the analytical solution for the proton structure function, F2p(x, Q2), in Laplace s-space. We present the results for the separate parton distributions of all parton species, including the valence quark densities, the antiquark and strange sea parton distribution functions (PDFs), and the gluon distribution. We successfully compare the obtained parton distribution functions and the proton structure function with the results from the GJR08 [Gluck, Jimenez-Delgado, and Reya, Eur. Phys. J. C 53, 355 (2008)], 10.1140/epjc/s10052-007-0462-9 and KKT12 [Khanpour, Khorramian, and Tehrani, J. Phys. G 40, 045002 (2013)], 10.1088/0954-3899/40/4/045002 parametrization models as well as the x-space results using the QCDnum code. Our calculations show very good agreement with the available theoretical models as well as with the deep inelastic scattering (DIS) experimental data throughout the small and large values of x. The use of our analytical solution to extract the parton densities and the proton structure function is discussed in detail to justify the analysis method, considering the accuracy and speed of the calculations. Overall, the accuracy we obtain from the analytical solution using the inverse Laplace transform technique is found to be better than 1 part in 10^4 to 10^5. We also present a detailed QCD analysis of nonsinglet structure functions using all available DIS data to perform global QCD fits. In this regard we employ the Jacobi polynomial approach to convert the results from Laplace s-space to Bjorken x-space. The extracted valence quark densities are also presented and compared to the JR14, MMHT14, NNPDF, and CJ15 PDF sets. We evaluate the numerical effects of target mass corrections (TMCs) and higher twist (HT) terms on various structure functions, and compare fits to data with and without these corrections.
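
    The key simplification, sketched here for the nonsinglet case in generic notation, is that with v = ln(1/x) the DGLAP convolution in x becomes a product under the Laplace transform in v:

        \frac{\partial \hat{q}(s, Q^2)}{\partial \ln Q^2}
            = \frac{\alpha_s(Q^2)}{2\pi}\, \hat{P}_{qq}(s)\, \hat{q}(s, Q^2)
        \;\;\Longrightarrow\;\;
        \hat{q}(s, Q^2) = \hat{q}(s, Q_0^2)\,
            \exp\!\left[ \int_{Q_0^2}^{Q^2} \frac{\alpha_s(\mu^2)}{2\pi}\, \hat{P}_{qq}(s)\, \frac{d\mu^2}{\mu^2} \right],

    after which the x-space densities are recovered by inverse Laplace transformation (in this paper via the Jacobi polynomial approach).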

  1. Prediction of pressure drop in fluid tuned mounts using analytical and computational techniques

    NASA Technical Reports Server (NTRS)

    Lasher, William C.; Khalilollahi, Amir; Mischler, John; Uhric, Tom

    1993-01-01

    A simplified model for predicting pressure drop in fluid tuned isolator mounts was developed. The model is based on an exact solution to the Navier-Stokes equations and was made more general through the use of empirical coefficients. The values of these coefficients were determined by numerical simulation of the flow using the commercial computational fluid dynamics (CFD) package FIDAP.

  2. Combustion stability with baffles, absorbers and velocity sensitive combustion. [liquid propellant rocket combustors

    NASA Technical Reports Server (NTRS)

    Mitchell, C. E.

    1980-01-01

    Analytical and computational techniques were developed to predict the stability behavior of liquid propellant rocket combustors using damping devices such as acoustic liners, slot absorbers, and injector face baffles. Models were developed to determine the frequency and decay rate of combustor oscillations, the spatial and temporal pressure waveforms, and the stability limits in terms of combustion response model parameters.

  3. One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.

    PubMed

    Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz

    2009-07-15

    The existing solid-phase microextraction (SPME) kinetic calibration technique, which uses the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic variations in the environment, such as temperature, turbulence, and the concentration of the analytes, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated for with the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
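
    The underlying symmetry of absorption and desorption (in-fiber standardization) can be sketched in generic notation as follows: if the preloaded standard desorbs as Q/q_0 = e^{-at} while the analyte is taken up as n/n_e = 1 - e^{-at} with the same rate constant a, then

        n_e = \frac{n}{1 - Q/q_0},

    so measuring the residual standard Q after sampling calibrates the equilibrium amount n_e. The one-calibrant variant described above extends this by relating the rate constants of all analytes to that of the single desorbed calibrant.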

  4. Comparison of extraction techniques and modeling of accelerated solvent extraction for the authentication of natural vanilla flavors.

    PubMed

    Cicchetti, Esmeralda; Chaintreau, Alain

    2009-06-01

    Accelerated solvent extraction (ASE) of vanilla beans has been optimized using ethanol as a solvent. A theoretical model is proposed to account for this multistep extraction. This allows the determination, for the first time, of the total amount of analytes initially present in the beans and thus the calculation of recoveries using ASE or any other extraction technique. As a result, ASE and Soxhlet extractions have been determined to be efficient methods, whereas recoveries are modest for maceration techniques and depend on the solvent used. Because industrial extracts are obtained by many different procedures, including maceration in various solvents, authenticating vanilla extracts using quantitative ratios between the amounts of vanilla flavor constituents appears to be unreliable. When authentication techniques based on isotopic ratios are used, ASE is a valid sample preparation technique because it does not induce isotopic fractionation.
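
    One simple way to realize such a multistep model (an illustrative assumption, not necessarily the authors' exact formulation) is to let each extraction cycle recover a constant fraction k of the analyte still in the beans, so the successive step yields m_n form a geometric series:

        m_n = M_\infty \, k \, (1-k)^{\,n-1},
        \qquad
        M_\infty = \frac{m_1^{\,2}}{m_1 - m_2}.

    Two consecutive step yields thus already determine the total initial amount M_infinity, from which the recovery of ASE, Soxhlet or maceration extractions can be computed.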

  5. Microextraction by packed sorbent: an emerging, selective and high-throughput extraction technique in bioanalysis.

    PubMed

    Pereira, Jorge; Câmara, José S; Colmsjö, Anders; Abdel-Rehim, Mohamed

    2014-06-01

    Sample preparation is an important analytical step for the isolation and concentration of desired components from complex matrices; it greatly influences their reliable and accurate analysis and the resulting data quality. It is the most labor-intensive and error-prone process in analytical methodology and, therefore, may influence the analytical performance of target analyte quantification. Many conventional sample preparation methods are relatively complicated, involving time-consuming procedures and requiring large volumes of organic solvents. Recent trends in sample preparation include miniaturization, automation, high-throughput performance, on-line coupling with analytical instruments, and low-cost operation through extremely low or no solvent consumption. Microextraction techniques, such as microextraction by packed sorbent (MEPS), have these advantages over the traditional techniques. This paper gives an overview of the MEPS technique, including the role of sample preparation in bioanalysis, a description of MEPS covering its formats (on- and off-line), sorbents, and experimental protocols, the factors that affect MEPS performance, and the major advantages and limitations of MEPS compared with other sample preparation techniques. We also summarize recent MEPS applications in bioanalysis. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Full Flight Envelope Direct Thrust Measurement on a Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.; Sims, Robert L.

    1998-01-01

    Direct thrust measurement using strain gages offers advantages over analytically-based thrust calculation methods. For flight test applications, the direct measurement method typically uses a simpler sensor arrangement and minimal data processing compared to analytical techniques, which normally require costly engine modeling and multisensor arrangements throughout the engine. Conversely, direct thrust measurement has historically produced less than desirable accuracy because of difficulty in mounting and calibrating the strain gages and the inability to account for secondary forces that influence the thrust reading at the engine mounts. Consequently, the strain-gage technique has normally been used for simple engine arrangements and primarily in the subsonic speed range. This paper presents the results of a strain gage-based direct thrust-measurement technique developed by the NASA Dryden Flight Research Center and successfully applied to the full flight envelope of an F-15 aircraft powered by two F100-PW-229 turbofan engines. Measurements have been obtained at quasi-steady-state operating conditions at maximum non-augmented and maximum augmented power throughout the altitude range of the vehicle and to a maximum speed of Mach 2.0 and are compared against results from two analytically-based thrust calculation methods. The strain-gage installation and calibration processes are also described.

  7. Technical Development and Application of Soft Computing in Agricultural and Biological Engineering

    USDA-ARS?s Scientific Manuscript database

    Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...

  8. Development of Soft Computing and Applications in Agricultural and Biological Engineering

    USDA-ARS?s Scientific Manuscript database

    Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...

  9. A Review of Meta-Analysis Packages in R

    ERIC Educational Resources Information Center

    Polanin, Joshua R.; Hennessy, Emily A.; Tanner-Smith, Emily E.

    2017-01-01

    Meta-analysis is a statistical technique that allows an analyst to synthesize effect sizes from multiple primary studies. To estimate meta-analysis models, the open-source statistical environment R is quickly becoming a popular choice. The meta-analytic community has contributed to this growth by developing numerous packages specific to…

  10. Human Modelling for Military Application (Applications militaires de la modelisation humaine)

    DTIC Science & Technology

    2010-10-01

    techniques (rooted in the mathematics-centered analytic methods arising from World War I analyses by Lanchester). Recent requirements for research and... [Cited works: "Dry Shooting for Airplane Gunners", Popular Science Monthly, January 1919, pp. 13-14; Lanchester, F.W., "Mathematics in Warfare", in The World of...]

  11. 10 CFR 503.34 - Inability to comply with applicable environmental requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...

  12. 10 CFR 503.34 - Inability to comply with applicable environmental requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...

  13. Noah Pflaum | NREL

    Science.gov Websites

    Noah joined NREL in 2017 after having worked as a consulting building energy analyst. His goal is to smooth the integration of building energy modeling into the building design process. Noah applies a variety of analytical techniques to solve problems associated with building performance as they...

  14. Verification and Validation of Autonomy Software at NASA

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles

    2000-01-01

    Autonomous software holds the promise of new operation possibilities, easier design and development and lower operating costs. However, as those systems close control loops and arbitrate resources on board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis towards model-based autonomy, model-checking techniques and concrete experiments at NASA.

  15. Verification and Validation of Autonomy Software at NASA

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles

    2000-01-01

    Autonomous software holds the promise of new operation possibilities, easier design and development, and lower operating costs. However, as those systems close control loops and arbitrate resources on-board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis towards model-based autonomy, model-checking techniques, and concrete experiments at NASA.

  16. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    NASA Technical Reports Server (NTRS)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of the fluid flow parameters (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the values of these parameters is important for comprehending and checking the generated data, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. The state-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied, and the results are presented.

  17. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation.

    PubMed

    Neufeld, E; Chavannes, N; Samaras, T; Kuster, N

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain with it more accurate solutions by increasing the grid resolution.
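
    For reference, the Pennes bioheat equation underlying such solvers reads

        \rho c \,\frac{\partial T}{\partial t}
            = \nabla \cdot \left( k \nabla T \right) + \rho_b c_b\, \omega \left( T_a - T \right) + Q,

    with tissue density rho, heat capacity c and conductivity k, blood parameters rho_b and c_b, perfusion rate omega, arterial temperature T_a, and metabolic/source term Q. The conformal scheme described above rescales the discretized flux k grad T across a material interface by factors derived from the local surface normal.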

  18. Standardization of chemical analytical techniques for pyrolysis bio-oil: history, challenges, and current status of methods

    DOE PAGES

    Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.; ...

    2016-07-05

    Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for the determination of hydroxyl groups, and a quantitative gas chromatography-mass spectrometry (GC-MS) method. Both the carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.

  19. Standardization of chemical analytical techniques for pyrolysis bio-oil: history, challenges, and current status of methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.

    Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for the determination of hydroxyl groups, and a quantitative gas chromatography-mass spectrometry (GC-MS) method. Both the carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.

  20. Experimental and analytical determination of stability parameters for a balloon tethered in a wind

    NASA Technical Reports Server (NTRS)

    Redd, L. T.; Bennett, R. M.; Bland, S. R.

    1973-01-01

    Experimental and analytical techniques for determining stability parameters for a balloon tethered in a steady wind are described. These techniques are applied to a particular 7.64-meter-long balloon, and the results are presented. The stability parameters of interest appear as coefficients in linearized stability equations and are derived from the various forces and moments acting on the balloon. In several cases the results from the experimental and analytical techniques are compared and suggestions are given as to which techniques are the most practical means of determining values for the stability parameters.

  1. A complete analytical solution of the Fokker-Planck and balance equations for nucleation and growth of crystals

    NASA Astrophysics Data System (ADS)

    Makoveeva, Eugenya V.; Alexandrov, Dmitri V.

    2018-01-01

    This article is concerned with a new analytical description of the nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of a phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is solved analytically by means of the saddle-point technique for the Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.
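
    For orientation, the saddle-point (Laplace) approximation invoked here evaluates integrals of the Laplace type by expanding about the interior maximum t_0 of the phase function:

        I(\lambda) = \int_a^b g(t)\, e^{\lambda S(t)}\, dt
        \;\sim\; g(t_0)\, e^{\lambda S(t_0)} \sqrt{\frac{2\pi}{\lambda\, |S''(t_0)|}},
        \qquad \lambda \to \infty,

    which is what allows the integro-differential balance system to be closed analytically for arbitrary nucleation kinetics.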

  2. Application of inorganic element ratios to chemometrics for determination of the geographic origin of welsh onions.

    PubMed

    Ariyama, Kaoru; Horita, Hiroshi; Yasui, Akemi

    2004-09-22

    The composition of concentration ratios of 19 inorganic elements to Mg (hereinafter referred to as the 19-element/Mg composition) was applied with chemometric techniques to determine the geographic origin (Japan or China) of Welsh onions (Allium fistulosum L.). Using a composition of element ratios has the advantage of simplified sample preparation, and it was possible to determine the geographic origin of a Welsh onion within 2 days. The classical technique based on 20 element concentrations was also used alongside the new, simpler one based on the 19-element/Mg composition in order to validate the new technique. Twenty elements, Na, P, K, Ca, Mg, Mn, Fe, Cu, Zn, Sr, Ba, Co, Ni, Rb, Mo, Cd, Cs, La, Ce, and Tl, in 244 Welsh onion samples were analyzed by flame atomic absorption spectroscopy, inductively coupled plasma atomic emission spectrometry, and inductively coupled plasma mass spectrometry. Linear discriminant analysis (LDA) was applied to the 20-element concentrations and to the 19-element/Mg composition, and soft independent modeling of class analogy (SIMCA) was applied to the 19-element/Mg composition. The results showed that the techniques based on the 19-element/Mg composition were effective. LDA based on the 19-element/Mg composition, classifying samples from Japan and from Shandong, Shanghai, and Fujian in China, classified 97% of the 101 samples used for modeling correctly and predicted 93% of another 119 samples (excluding 24 nonauthentic samples) correctly. In ten repetitions of SIMCA based on the 19-element/Mg composition, modeled using the 101 samples, 92% of the 220 samples from known production areas (including the samples used for modeling and excluding the 24 nonauthentic samples) were predicted correctly.
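
    A minimal, self-contained sketch of the LDA step (synthetic stand-in data; in the study the columns would be the 19 element/Mg ratios and the labels the production regions):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)
        X_jp = rng.normal(0.0, 1.0, (50, 19))   # stand-ins for 19 element/Mg ratios
        X_cn = rng.normal(0.7, 1.0, (50, 19))
        X = np.vstack([X_jp, X_cn])
        y = np.array(["Japan"] * 50 + ["China"] * 50)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print(lda.score(X, y))  # resubstitution accuracy; hold out samples in practice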

  3. Analytical Chemistry: A Literary Approach.

    ERIC Educational Resources Information Center

    Lucy, Charles A.

    2000-01-01

    Provides an anthology of references to descriptions of analytical chemistry techniques from history, popular fiction, and film which can be used to capture student interest and frame discussions of chemical techniques. (WRM)

  4. Mechanical behavior of regular open-cell porous biomaterials made of diamond lattice unit cells.

    PubMed

    Ahmadi, S M; Campoli, G; Amin Yavari, S; Sajadi, B; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A

    2014-06-01

    Cellular structures with highly controlled micro-architectures are promising materials for orthopedic applications that require bone-substituting biomaterials or implants. The availability of additive manufacturing techniques has enabled manufacturing of biomaterials made of one or multiple types of unit cells. The diamond lattice unit cell is one of the relatively new types of unit cells that are used in manufacturing of regular porous biomaterials. As opposed to many other types of unit cells, there is currently no analytical solution that could be used for prediction of the mechanical properties of cellular structures made of the diamond lattice unit cells. In this paper, we present new analytical solutions and closed-form relationships for predicting the elastic modulus, Poisson's ratio, critical buckling load, and yield (plateau) stress of cellular structures made of the diamond lattice unit cell. The mechanical properties predicted using the analytical solutions are compared with those obtained using finite element models. A number of solid and porous titanium (Ti6Al4V) specimens were manufactured using selective laser melting. A series of experiments were then performed to determine the mechanical properties of the matrix material and cellular structures. The experimentally measured mechanical properties were compared with those obtained using analytical solutions and finite element (FE) models. It has been shown that, for small apparent density values, the mechanical properties obtained using analytical and numerical solutions are in agreement with each other and with experimental observations. The properties estimated using an analytical solution based on the Euler-Bernoulli theory markedly deviated from experimental results for large apparent density values. The mechanical properties estimated using FE models and another analytical solution based on the Timoshenko beam theory better matched the experimental observations. Copyright © 2014 Elsevier Ltd. All rights reserved.
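
    Although the paper's diamond-lattice closed forms are specific to that unit cell, they refine the classical Gibson-Ashby scaling for bending-dominated open-cell structures, quoted here for orientation only:

        \frac{E^*}{E_s} \approx C_1 \left( \frac{\rho^*}{\rho_s} \right)^{2},
        \qquad
        \frac{\sigma^*_{pl}}{\sigma_{ys}} \approx C_2 \left( \frac{\rho^*}{\rho_s} \right)^{3/2},

    with E* and sigma*_pl the apparent modulus and plateau stress, rho*/rho_s the relative density, and C_1 of order 1 and C_2 of order 0.3 as typical fitted constants. The deviation of Euler-Bernoulli-based predictions at high relative density, noted above, is exactly where such slender-strut assumptions break down.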

  5. NASA standard: Trend analysis techniques

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
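
    A minimal sketch of the linear, quadratic, and exponential fitting the standard calls for (synthetic monthly data; all names and values are illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.arange(24, dtype=float)                     # months
        y = 5.0 + 0.3 * t + np.random.default_rng(0).normal(0.0, 0.5, t.size)

        lin = np.polyfit(t, y, 1)                          # linear trend
        quad = np.polyfit(t, y, 2)                         # quadratic trend
        (a, b), _ = curve_fit(lambda t, a, b: a * np.exp(b * t),
                              t, y, p0=(5.0, 0.01))        # exponential trend
        print(lin, quad, a, b)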

  6. Rethinking Visual Analytics for Streaming Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are necessary components of the analysis. This intuition is essential to building trust in the analytical products, as well as to supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead of dividing the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain, in scenarios requiring human judgment.

  7. An analytical and experimental study of sound propagation and attenuation in variable-area ducts. [reducing aircraft engine noise

    NASA Technical Reports Server (NTRS)

    Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, L. J.

    1978-01-01

    The performance of sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients was evaluated. A computer code based on the method of multiple scales was used to calculate the influence of axial variations due to slow changes in the cross-sectional area as well as transverse gradients due to the wall boundary layers. An attempt was made to verify the analytical model through direct comparison of experimental and computational results and the analytical determination of the influence of axial gradients on optimum liner properties. However, the analytical studies were unable to examine the influence of non-parallel ducts on the optimum liner conditions. For liner properties not close to optimum, the analytical predictions and the experimental measurements were compared. The circumferential variations of pressure amplitudes and phases at several axial positions were examined in straight and variable-area ducts, in hard-wall and lined sections, with and without a mean flow. Reasonable agreement between the theoretical and experimental results was obtained.

  8. Investigation of flow fields within large scale hypersonic inlet models

    NASA Technical Reports Server (NTRS)

    Gnos, A. V.; Watson, E. C.; Seebaugh, W. R.; Sanator, R. J.; Decarlo, J. P.

    1973-01-01

    Analytical and experimental investigations were conducted to determine the internal flow characteristics in model passages representative of hypersonic inlets for use at Mach numbers to about 12. The passages were large enough to permit measurements to be made in both the core flow and boundary layers. The analytical techniques for designing the internal contours and predicting the internal flow-field development accounted for coupling between the boundary layers and inviscid flow fields by means of a displacement-thickness correction. Three large-scale inlet models, each having a different internal compression ratio, were designed to provide high internal performance with an approximately uniform static-pressure distribution at the throat station. The models were tested in the Ames 3.5-Foot Hypersonic Wind Tunnel at a nominal free-stream Mach number of 7.4 and a unit free-stream Reynolds number of 8.86 X one million per meter.

  9. Image charge models for accurate construction of the electrostatic self-energy of 3D layered nanostructure devices.

    PubMed

    Barker, John R; Martinez, Antonio

    2018-04-04

    Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch, using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants of the individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved with the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high-precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures, we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap-round gate field effect transistor at cross-sections of a few nanometers. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.

  10. Image charge models for accurate construction of the electrostatic self-energy of 3D layered nanostructure devices

    NASA Astrophysics Data System (ADS)

    Barker, John R.; Martinez, Antonio

    2018-04-01

    Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch, using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants of the individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved with the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high-precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures, we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap-round gate field effect transistor at cross-sections of a few nanometers. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.
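
    The simplest instance of the construction, quoted for orientation (single planar interface; the layered-device case superposes an image series, which the renormalisation described above truncates): a point charge q in a medium of permittivity epsilon_1 at distance d from a half-space of permittivity epsilon_2 has image charge and self-energy

        q' = q\, \frac{\varepsilon_1 - \varepsilon_2}{\varepsilon_1 + \varepsilon_2},
        \qquad
        W(d) = \frac{q^2}{16 \pi \varepsilon_0\, \varepsilon_1\, d}\,
               \frac{\varepsilon_1 - \varepsilon_2}{\varepsilon_1 + \varepsilon_2},

    which is positive (a repulsive barrier) when the charge sits in the higher-permittivity medium, e.g. a semiconductor channel surrounded by oxide.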

  11. BiSet: Semantic Edge Bundling with Biclusters for Sensemaking.

    PubMed

    Sun, Maoyuan; Mi, Peng; North, Chris; Ramakrishnan, Naren

    2016-01-01

    Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as those between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We treat bundles as first-class objects and add a new layer, "in-between", to contain these bundle objects. On this basis, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.

  12. Visualization techniques for computer network defense

    NASA Astrophysics Data System (ADS)

    Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew

    2011-06-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  13. Rheoencephalographic and electroencephalographic measures of cognitive workload: analytical procedures.

    PubMed

    Montgomery, L D; Montgomery, R W; Guisado, R

    1995-05-01

    This investigation demonstrates the feasibility of mental workload assessment by rheoencephalographic (REG) and multichannel electroencephalographic (EEG) monitoring. During the performance of this research, unique testing, analytical and display procedures were developed for REG and EEG monitoring that extend the current state of the art and provide valuable tools for the study of cerebral circulatory and neural activity during cognition. REG records are analyzed to provide indices of the right and left hemisphere hemodynamic changes that take place during each test sequence. The EEG data are modeled using regression techniques and mathematically transformed to provide energy-density distributions of the scalp electrostatic field. These procedures permit concurrent REG/EEG cognitive testing not possible with current techniques. The introduction of a system for recording and analysis of cognitive REG/EEG test sequences facilitates the study of learning and memory disorders, dementia and other encephalopathies.

  14. Rheoencephalographic and electroencephalographic measures of cognitive workload: analytical procedures

    NASA Technical Reports Server (NTRS)

    Montgomery, L. D.; Montgomery, R. W.; Guisado, R.

    1995-01-01

    This investigation demonstrates the feasibility of mental workload assessment by rheoencephalographic (REG) and multichannel electroencephalographic (EEG) monitoring. During the performance of this research, unique testing, analytical and display procedures were developed for REG and EEG monitoring that extend the current state of the art and provide valuable tools for the study of cerebral circulatory and neural activity during cognition. REG records are analyzed to provide indices of the right and left hemisphere hemodynamic changes that take place during each test sequence. The EEG data are modeled using regression techniques and mathematically transformed to provide energy-density distributions of the scalp electrostatic field. These procedures permit concurrent REG/EEG cognitive testing not possible with current techniques. The introduction of a system for recording and analysis of cognitive REG/EEG test sequences facilitates the study of learning and memory disorders, dementia and other encephalopathies.

  15. Modelling by partial least squares the relationship between the HPLC mobile phases and analytes on phenyl column.

    PubMed

    Markopoulou, Catherine K; Kouskoura, Maria G; Koundourellis, John E

    2011-06-01

    Twenty-five descriptors and 61 structurally different analytes have been used in a partial least squares (PLS) projection to latent structures technique in order to study their chromatographic interaction mechanism on a phenyl column. According to the model, 240 retention times of the analytes, expressed as the Y variable (log k) at different % MeOH mobile-phase concentrations, have been correlated with their theoretically most important structural or molecular descriptors. The goodness of fit was estimated by the coefficient of multiple determination r² (0.919) and the root mean square error of estimation (RMSEE = 0.1283), with a predictive ability (Q²) of 0.901. The model was further validated using cross-validation (CV), 20 response permutations (r² (0.0, 0.0146), Q² (0.0, -0.136)), and external prediction. The contribution of certain interaction mechanisms between the analytes, the mobile phase and the column, whether proportional or counterbalancing, is also studied. To evaluate the influence on Y of every variable in the PLS model, a VIP (variable importance in the projection) plot provides evidence that lipophilicity (expressed as Log D, Log P), polarizability, refractivity and the eluting power of the mobile phase are dominant in the retention mechanism on a phenyl column. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
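
    The PLS workflow described above is easy to sketch with standard tooling. The following is a minimal illustration on synthetic data: the descriptor matrix, true coefficients, and component count are invented; only the routine of fitting, cross-validating, and reporting r², Q², and RMSEE mirrors the abstract.

```python
# A minimal sketch (not the authors' data) of the PLS-to-latent-structures idea:
# regress log k on molecular descriptors plus mobile-phase composition stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_obs, n_desc = 240, 25                        # 240 retention times, 25 descriptors
X = rng.normal(size=(n_obs, n_desc))           # stand-ins for logP, polarizability, %MeOH, ...
true_w = np.zeros(n_desc); true_w[:4] = [0.8, -0.5, 0.3, -0.6]
y = X @ true_w + 0.1 * rng.normal(size=n_obs)  # synthetic log k

pls = PLSRegression(n_components=3).fit(X, y)
r2 = pls.score(X, y)                           # analogous to the reported r2
q2 = cross_val_score(pls, X, y, cv=7).mean()   # cross-validated predictive ability
rmsee = np.sqrt(np.mean((y - pls.predict(X).ravel()) ** 2))
print(f"r2 = {r2:.3f}, Q2 = {q2:.3f}, RMSEE = {rmsee:.4f}")
```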

  16. A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.

    PubMed

    Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco

    2018-01-01

    One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine small amounts of DNA (around 100 pg) that may be in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities and supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool provides the possibility to find the steps in an analytical procedure with higher impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated method procedure critical control points, we conclude that the analytical methodology with the lowest risk for performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization. However, the technique is a challenging method to implement in a quality control laboratory: It is laborious, time consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and can assess and manage method performance risk using such tools. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool provides the possibility to find the method procedural steps with higher impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the better alternative analytical methodology in residual cellular DNA analysis. © PDA, Inc. 2018.

  17. Analytical Plan for Roman Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strachan, Denis M.; Buck, Edgar C.; Mueller, Karl T.

    Roman glasses that have been in the sea or underground for about 1800 years can serve as the independent “experiment” that is needed for validation of codes and models that are used in performance assessment. Two sets of Roman-era glasses have been obtained for this purpose. One set comes from the sunken vessel the Iulia Felix; the second from recently excavated glasses from a Roman villa in Aquileia, Italy. The specimens contain glass artifacts and attached sediment or soil. In the case of the Iulia Felix glasses quite a lot of analytical work has been completed at the University of Padova, but from an archaeological perspective. The glasses from Aquileia have not been so carefully analyzed, but they are similar to other Roman glasses. Both glass and sediment or soil need to be analyzed and are the subject of this analytical plan. The glasses need to be analyzed with the goal of validating the model used to describe glass dissolution. The sediment and soil need to be analyzed to determine the profile of elements released from the glass. This latter need represents a significant analytical challenge because of the trace quantities that need to be analyzed. Both pieces of information will yield important information useful in the validation of the glass dissolution model and the chemical transport code(s) used to determine the migration of elements once released from the glass. In this plan, we outline the analytical techniques that should be useful in obtaining the needed information and suggest a useful starting point for this analytical effort.

  18. Analytical model and error analysis of arbitrary phasing technique for bunch length measurement

    NASA Astrophysics Data System (ADS)

    Chen, Qushan; Qin, Bin; Chen, Wei; Fan, Kuanjun; Pei, Yuanji

    2018-05-01

    An analytical model of an RF phasing method using arbitrary phase scanning for bunch length measurement is reported. We set up a statistical model instead of a linear chirp approximation to analyze the energy modulation process. It is found that, assuming a short bunch (σφ/2π → 0) and a small relative energy spread (σγ/γr → 0), the energy spread (Y = σγ²) at the exit of the traveling-wave linac has a parabolic relationship with the cosine of the injection phase (X = cos φr|z=0), i.e., Y = AX² + BX + C. Analogous to quadrupole strength scanning for emittance measurement, this phase scanning method can be used to obtain the bunch length by measuring the energy spread at different injection phases. The injection phases can be randomly chosen, which is significantly different from the commonly used zero-phasing method. Further, the systematic error of the reported method, such as the influence of the space charge effect, is analyzed. This technique will be especially useful at low energies, when the beam quality is dramatically degraded and is hard to measure using the zero-phasing method.
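
    The scanning-and-fitting step lends itself to a short sketch. The numbers below are hypothetical; the point is only the parabolic fit in X = cos φr that the model prescribes.

```python
# Sketch of the scanning idea (synthetic numbers, not the paper's beam parameters):
# measure the energy spread Y = sigma_gamma^2 at randomly chosen injection phases,
# fit Y = A X^2 + B X + C in X = cos(phi_r), and read the bunch length off the fit.
import numpy as np

rng = np.random.default_rng(1)
A_true, B_true, C_true = 4.0e-6, 1.0e-6, 2.0e-7    # hypothetical coefficients
phi = rng.uniform(-np.pi / 3, np.pi / 3, size=12)  # arbitrary injection phases
X = np.cos(phi)
Y = A_true * X**2 + B_true * X + C_true
Y *= 1 + 0.02 * rng.normal(size=X.size)            # 2% measurement noise

A, B, C = np.polyfit(X, Y, 2)                      # parabolic fit, as in quad scans
print(f"fitted A = {A:.3e}  B = {B:.3e}  C = {C:.3e}")
# In the paper's model, A, B, C are functions of the bunch length sigma_phi,
# so sigma_phi follows from inverting those relations (omitted here).
```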

  19. Quantitative Thermochronology

    NASA Astrophysics Data System (ADS)

    Braun, Jean; van der Beek, Peter; Batt, Geoffrey

    2006-05-01

    Thermochronology, the study of the thermal history of rocks, enables us to quantify the nature and timing of tectonic processes. Quantitative Thermochronology is a robust review of isotopic ages, and presents a range of numerical modeling techniques to allow the physical implications of isotopic age data to be explored. The authors provide analytical, semi-analytical, and numerical solutions to the heat transfer equation in a range of tectonic settings and under varying boundary conditions. They then illustrate their modeling approach built around a large number of case studies. The benefits of different thermochronological techniques are also described. Computer programs on an accompanying website at www.cambridge.org/9780521830577 are introduced through the text and provide a means of solving the heat transport equation in the deforming Earth to predict the ages of rocks and compare them directly to geological and geochronological data. Several short tutorials, with hints and solutions, are also included. Numerous case studies help geologists to interpret age data and relate it to Earth processes Essential background material to aid understanding and using thermochronological data Provides a thorough treatise on numerical modeling of heat transport in the Earth's crust Supported by a website hosting relevant computer programs and colour slides of figures from the book for use in teaching

  20. A strategy to determine operating parameters in tissue engineering hollow fiber bioreactors

    PubMed Central

    Shipley, RJ; Davidson, AJ; Chan, K; Chaudhuri, JB; Waters, SL; Ellis, MJ

    2011-01-01

    The development of tissue engineering hollow fiber bioreactors (HFB) requires the optimal design of the geometry and operation parameters of the system. This article provides a strategy for specifying operating conditions for the system based on mathematical models of oxygen delivery to the cell population. Analytical and numerical solutions of these models are developed based on Michaelis–Menten kinetics. Depending on the minimum oxygen concentration required to culture a functional cell population, together with the oxygen uptake kinetics, the strategy dictates the model needed to describe mass transport so that the operating conditions can be defined. If cmin ≫ Km, we capture oxygen uptake using zero-order kinetics and proceed analytically. This enables operating equations to be developed that allow the user to choose the medium flow rate, lumen length, and ECS depth to provide a prescribed value of cmin. When cmin is comparable to Km, we use numerical techniques to solve the full Michaelis–Menten kinetics and present operating data for the bioreactor. The strategy presented utilizes both analytical and numerical approaches and can be applied to any cell type with known oxygen transport properties and uptake kinetics. PMID:21370228
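
    As a rough illustration of the two regimes, the sketch below solves a plug-flow oxygen balance with full Michaelis–Menten uptake numerically and compares it with the zero-order analytical limit. All parameter values are invented and the geometry is drastically simplified relative to the paper's HFB models.

```python
# Minimal sketch (hypothetical parameter values) contrasting the two regimes in
# the strategy: zero-order uptake solved analytically vs. full Michaelis-Menten
# uptake solved numerically along the flow direction.
import numpy as np
from scipy.integrate import solve_ivp

Vmax, Km = 2.0e-3, 5.0e-3     # uptake parameters (mol m^-3 s^-1, mol m^-3)
c0, u = 0.2, 1.0e-3           # inlet O2 concentration (mol m^-3), axial velocity (m/s)
L = 0.05                      # lumen length (m)

def mm(z, c):                 # full Michaelis-Menten sink along the flow path
    return [-(Vmax / u) * c[0] / (Km + c[0])]

sol = solve_ivp(mm, (0.0, L), [c0], dense_output=True)
z = np.linspace(0.0, L, 6)
c_zero_order = c0 - (Vmax / u) * z   # analytical limit, valid when c >> Km
for zi, cn, ca in zip(z, sol.sol(z)[0], c_zero_order):
    print(f"z = {zi:.3f} m   MM: {cn:.4f}   zero-order: {ca:.4f}  mol/m^3")
```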

  1. Determination of Mechanical Properties of Spatially Heterogeneous Breast Tissue Specimens Using Contact Mode Atomic Force Microscopy (AFM)

    PubMed Central

    Roy, Rajarshi; Desai, Jaydev P.

    2016-01-01

    This paper outlines a comprehensive parametric approach for quantifying mechanical properties of spatially heterogeneous thin biological specimens such as human breast tissue using contact-mode Atomic Force Microscopy. Using inverse finite element (FE) analysis of spherical nanoindentation, the force response from hyperelastic material models is compared with the predicted force response from existing analytical contact models, and a sensitivity study is carried out to assess uniqueness of the inverse FE solution. Furthermore, an automation strategy is proposed to analyze AFM force curves with varying levels of material nonlinearity with minimal user intervention. Implementation of our approach on an elastic map acquired from raster AFM indentation of breast tissue specimens indicates that a judicious combination of analytical and numerical techniques allows more accurate interpretation of AFM indentation data compared to relying on purely analytical contact models, while keeping the computational cost associated with an inverse FE solution within reasonable limits. The results reported in this study have several implications for performing unsupervised data analysis on AFM indentation measurements on a wide variety of heterogeneous biomaterials. PMID:25015130
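
    A common analytical baseline for such indentation data is the Hertz model for a spherical tip. The sketch below fits it to a synthetic force curve; the probe radius, modulus, and noise level are invented for illustration and are not the paper's values.

```python
# A sketch of the analytical baseline the paper compares against: fitting the
# Hertz model for spherical indentation, F = (4/3) * E_eff * sqrt(R) * d^1.5,
# to a synthetic force-indentation curve (values are illustrative only).
import numpy as np
from scipy.optimize import curve_fit

R = 5e-6                                   # probe radius, m
def hertz(d, E_eff):                       # E_eff = E / (1 - nu^2)
    return (4.0 / 3.0) * E_eff * np.sqrt(R) * d**1.5

rng = np.random.default_rng(2)
d = np.linspace(0, 2e-6, 50)               # indentation depth, m
F = hertz(d, E_eff=4.0e3)                  # "measured" forces for a ~kPa tissue
F_noisy = F + 2e-11 * rng.normal(size=d.size)

(E_fit,), _ = curve_fit(hertz, d, F_noisy, p0=[1.0e3])
print(f"fitted effective modulus: {E_fit:.0f} Pa")
```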

  2. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. In addition, a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  3. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. In addition, a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  4. A Literature Survey and Experimental Evaluation of the State-of-the-Art in Uplift Modeling: A Stepping Stone Toward the Development of Prescriptive Analytics.

    PubMed

    Devriendt, Floris; Moldovan, Darie; Verbeke, Wouter

    2018-03-01

    Prescriptive analytics extends predictive analytics by allowing an outcome to be estimated as a function of control variables, thus establishing the level of control variables required to realize a desired outcome. Uplift modeling is at the heart of prescriptive analytics and aims at estimating the net difference in an outcome resulting from a specific action or treatment that is applied. In this article, a structured and detailed literature survey on uplift modeling is provided by identifying and contrasting various groups of approaches. In addition, evaluation metrics for assessing the performance of uplift models are reviewed. An experimental evaluation on four real-world data sets provides further insight into their use. Uplift random forests are found to be consistently among the best performing techniques in terms of the Qini and Gini measures, although considerable variability in performance across the various data sets of the experiments is observed. In addition, uplift models are frequently observed to be unstable and display a strong variability in terms of performance across different folds in the cross-validation experimental setup. This potentially threatens their actual use for business applications. Moreover, it is found that the available evaluation metrics do not provide an intuitively understandable indication of the actual use and performance of a model. Specifically, existing evaluation metrics do not facilitate a comparison of uplift models and predictive models, and evaluate performance either at an arbitrary cutoff or over the full spectrum of potential cutoffs. In conclusion, we highlight the instability of uplift models and the need for an application-oriented approach to assess uplift models as prime topics for further research.
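
    A concrete sense of uplift estimation can be had from the simplest baseline in such taxonomies, the two-model approach; this is not the uplift random forest that performed best in the survey, but it fits in a few lines on synthetic data.

```python
# A sketch of the "two-model" uplift baseline: fit one classifier on the treated
# group and one on the control group, and score uplift as the difference of
# predicted response probabilities (synthetic data, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 4000
X = rng.normal(size=(n, 5))
treated = rng.integers(0, 2, size=n).astype(bool)
# response depends on X[:,0]; treatment helps only when X[:,1] > 0
p = 0.2 + 0.1 * (X[:, 0] > 0) + 0.15 * (treated & (X[:, 1] > 0))
y = rng.uniform(size=n) < p

m_t = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[treated], y[treated])
m_c = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[~treated], y[~treated])

uplift = m_t.predict_proba(X)[:, 1] - m_c.predict_proba(X)[:, 1]
print("mean predicted uplift where X1 > 0: ", uplift[X[:, 1] > 0].mean().round(3))
print("mean predicted uplift where X1 <= 0:", uplift[X[:, 1] <= 0].mean().round(3))
```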

  5. Line-source excited impulsive EM field response of thin plasmonic metal films

    NASA Astrophysics Data System (ADS)

    Štumpf, Martin; Vandenbosch, Guy A. E.

    2013-08-01

    In this paper, reflection against and transmission through thin plasmonic metal films, basic building blocks of many plasmonic devices, are analytically investigated directly in the time domain for an impulsive electric and magnetic line-source excitation. The electromagnetic properties of thin metallic films are modeled via the Drude model. The problem is formulated with the help of approximate thin-sheet boundary conditions and the analysis is carried out using the Cagniard-DeHoop technique. Closed-form space-time expressions are found and discussed. The obtained time-domain analytical expressions reveal the existence of the phenomenon of transient oscillatory surface effects along a plasmonic metal thin sheet. Illustrative numerical examples of transmitted/reflected pulsed fields are provided.

  6. Acoustic solitons in waveguides with Helmholtz resonators: transmission line approach.

    PubMed

    Achilleos, V; Richoux, O; Theocharis, G; Frantzeskakis, D J

    2015-02-01

    We report experimental results and study theoretically soliton formation and propagation in an air-filled acoustic waveguide side loaded with Helmholtz resonators. We propose a theoretical modeling of the system, which relies on a transmission-line approach, leading to a nonlinear dynamical lattice model. The latter allows for an analytical description of the various soliton solutions for the pressure, which are found by means of dynamical systems and multiscale expansion techniques. These solutions include Boussinesq-like and Korteweg-de Vries pulse-shaped solitons that are observed in the experiment, as well as nonlinear Schrödinger envelope solitons, that are predicted theoretically. The analytical predictions are in excellent agreement with direct numerical simulations and in qualitative agreement with the experimental observations.
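
    For readers unfamiliar with the soliton solutions involved, the sketch below evaluates the textbook single-soliton solution of the Korteweg-de Vries equation in normalized units and verifies it numerically. It is a generic illustration, not the paper's acoustic lattice model.

```python
# Single-soliton solution of the Korteweg-de Vries equation
# u_t + 6 u u_x + u_xxx = 0, checked here with finite differences.
import numpy as np

c = 1.5                                     # soliton speed; amplitude is c/2
u = lambda x, t: 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

x = np.linspace(-15, 15, 4001); dx = x[1] - x[0]; dt = 1e-5
u0, u1 = u(x, 0.0), u(x, dt)
u_t = (u1 - u0) / dt                        # forward-difference time derivative
u_x = np.gradient(u0, dx)
u_xxx = np.gradient(np.gradient(u_x, dx), dx)
residual = u_t + 6 * u0 * u_x + u_xxx
print("max |KdV residual|:", np.abs(residual[5:-5]).max())  # ~0 up to FD error
```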

  7. Mid-infrared and near-infrared spectroscopy for rapid detection of Gardeniae Fructus by a liquid-liquid extraction process.

    PubMed

    Tao, Lingyan; Lin, Zhonglin; Chen, Jiashan; Wu, Yongjiang; Liu, Xuesong

    2017-10-25

    Gardeniae Fructus is widely used in the pharmaceutical industry, and many studies have confirmed its medical and economic value. In this study, samples collected from different liquid-liquid extraction batches of Gardeniae Fructus were detected by mid-infrared (MIR) and near-infrared (NIR) spectroscopy. Seven analytes, neochlorogenic acid (5-CQA), cryptochlorogenic acid (4-CQA), chlorogenic acid (3-CQA), geniposidic acid (GEA), deacetyl-asperulosidic acid methyl ester (DAAME), genipin-gentiobioside (GGB), and gardenoside (GA), were chosen as quality property indexes of Gardeniae Fructus. The two kinds of spectra were each used to build models by single partial least squares (PLS). Additionally, both sets of spectral data were combined and modeled by multiblock PLS. For the single-spectroscopy modeling results, NIR gave better predictions for the high-concentration analytes (3-CQA, DAAME, GGB, and GA), whereas MIR performed better for the low-concentration analytes (5-CQA, 4-CQA, and GEA). The multiblock methodology was found to be better than the single-spectroscopy models for all seven analytes. Specifically, the coefficients of determination (R²) of the NIR, MIR, and multiblock PLS calibration models of all seven components were higher than 0.95. Relative standard errors of prediction (RSEP) were all less than 7%, except for the models of GGB, which were 10.36%, 13.24%, and 8.15% for the NIR-PLS, MIR-PLS, and multiblock models, respectively. These results indicate that MIR and NIR spectroscopic techniques could provide a new choice for quality control in industrial production of Gardeniae Fructus. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Improvement of sound insulation performance of double-glazed windows by using viscoelastic connectors

    NASA Astrophysics Data System (ADS)

    Takahashi, D.; Sawaki, S.; Mu, R.-L.

    2016-06-01

    A new method for improving the sound insulation performance of double-glazed windows is proposed. This technique uses viscoelastic materials as connectors between the two glass panels to ensure that the appropriate spacing is maintained. An analytical model that makes it possible to discuss the effects of spacing, contact area, and viscoelastic properties of the connectors on the performance in terms of sound insulation is developed. The validity of the model is verified by comparing its results with measured data. The numerical experiments using this analytical model showed the importance of the ability of the connectors to achieve the appropriate spacing and their viscoelastic properties, both of which are necessary for improving the sound insulation performance. In addition, it was shown that the most effective factor is damping: the stronger the damping, the more the insulation performance increases.

  9. Description of deformed nuclei in the sdg boson model

    NASA Astrophysics Data System (ADS)

    Li, S. C.; Kuyucak, S.

    1996-02-01

    We present a study of deformed nuclei in the framework of the sdg interacting boson model utilizing both numerical diagonalization and analytical 1/N expansion techniques. The focus is on the description of high-spin states which have recently become computationally accessible through the use of computer algebra in the 1/N expansion formalism. A systematic study is made of high-spin states in rare-earth and actinide nuclei.

  10. [Mobbing: a meta-analysis and integrative model of its antecedents and consequences].

    PubMed

    Topa Cantisano, Gabriela; Depolo, Marco; Morales Domínguez, J Francisco

    2007-02-01

    Although mobbing has been extensively studied, empirical research has not led to firm conclusions regarding its antecedents and consequences, both at personal and organizational levels. An extensive literature search yielded 86 empirical studies with 93 samples. The correlation matrix obtained through meta-analytic techniques was used to test a structural equation model. Results supported hypotheses regarding organizational environmental factors as main predictors of mobbing.
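
    The first step in such meta-analytic structural equation modeling, pooling each correlation across studies, can be sketched in a few lines. The study values below are invented; only the Fisher-z pooling logic is standard.

```python
# A minimal sketch of pooling one correlation across studies with Fisher's z
# transform, weighting each study by n - 3 (made-up study values).
import numpy as np

r = np.array([0.32, 0.41, 0.25, 0.38])   # study-level correlations
n = np.array([120, 310, 85, 150])        # study sample sizes

z = np.arctanh(r)                        # Fisher z transform
w = n - 3.0                              # inverse-variance weights, var(z) = 1/(n-3)
z_bar = np.sum(w * z) / np.sum(w)
se = 1.0 / np.sqrt(np.sum(w))
r_pooled = np.tanh(z_bar)
ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
print(f"pooled r = {r_pooled:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```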

  11. Numerical and Analytical Modeling of Laser Deposition with Preheating (Preprint)

    DTIC Science & Technology

    2007-03-01

    This preprint concerns numerical and analytical modeling of the laser cladding (laser deposition) process with preheating. This laser additive manufacturing technique allows quick fabrication of fully-dense metallic components directly from Computer-Aided Design (CAD) models. Laser deposition uses a focused laser beam as a heat source to create a melt pool on an underlying substrate; powder material is then injected into the melt pool.

  12. Combustion Fundamentals Research

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Increased emphasis is placed on fundamental and generic research at Lewis Research Center, with less effort on systems development. This is especially true in combustion research, where the study of combustion fundamentals has grown significantly in order to better address the perceived long term technical needs of the aerospace industry. The main thrusts for this combustion fundamentals program area are as follows: analytical models of combustion processes, model verification experiments, fundamental combustion experiments, and advanced numerical techniques.

  13. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
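
    A minimal example of the Monte Carlo integration discussed, with the standard-error estimate that motivates ensemble averaging (the integrand is an arbitrary textbook choice):

```python
# Plain Monte Carlo integration of f(x) = exp(-x^2) over [0, 1],
# with a standard-error estimate that shrinks as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
x = rng.uniform(0.0, 1.0, size=N)
fx = np.exp(-x**2)

estimate = fx.mean()                  # interval length is 1, so the mean is the integral
stderr = fx.std(ddof=1) / np.sqrt(N)  # Monte Carlo standard error
print(f"integral ~ {estimate:.5f} +/- {stderr:.5f}  (exact: 0.74682)")
```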

  14. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    PubMed

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
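
    The static building block that TopicLens makes interactive, NMF-based topic modeling, can be sketched as follows on a toy corpus; the paper's contributions (the lens interaction and the efficient incremental algorithms) are not reproduced here.

```python
# NMF topic modeling on a tiny toy corpus: factor the tf-idf matrix and print
# the top-weighted terms of each discovered topic.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stock market trading prices investors",
    "market economy inflation prices",
    "football match goal players league",
    "players injury football season team",
]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

nmf = NMF(n_components=2, random_state=0).fit(X)
terms = tfidf.get_feature_names_out()
for k, row in enumerate(nmf.components_):
    top = [terms[i] for i in row.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```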

  15. Thermoelectrically cooled water trap

    DOEpatents

    Micheels, Ronald H [Concord, MA

    2006-02-21

    A water trap system based on a thermoelectric cooling device is employed to remove a major fraction of the water from air samples, prior to analysis of these samples for chemical composition by a variety of analytical techniques in which water vapor interferes with the measurement process. These analytical techniques include infrared spectroscopy, mass spectrometry, ion mobility spectrometry and gas chromatography. The thermoelectric system for trapping water present in air samples can substantially improve detection sensitivity in these analytical techniques when it is necessary to measure trace analytes with concentrations in the ppm (parts per million) or ppb (parts per billion) partial pressure range. The thermoelectric trap design is compact and amenable to use in portable gas monitoring instrumentation.

  16. Low velocity impact analysis of composite laminated plates

    NASA Astrophysics Data System (ADS)

    Zheng, Daihua

    2007-12-01

    In the past few decades, polymer composites have been utilized more in structures where high strength and light weight are major concerns, e.g., aircraft, high-speed boats and sporting goods. It is well known that they are susceptible to damage resulting from lateral impact by foreign objects, such as dropped tools, hail and debris thrown up from the runway. The impact response of the structures depends not only on the material properties but also on the dynamic behavior of the impacted structure. Although commercial software is capable of analyzing such impact processes, it often requires extensive expertise and rigorous training for design and analysis. Analytical models are useful as they allow parametric studies and provide a foundation for validating the numerical results from large-scale commercial software. Therefore, it is necessary to develop analytical or semi-analytical models to better understand the behaviors of composite structures under impact and their associated failure process. In this study, several analytical models are proposed in order to analyze the impact response of composite laminated plates. Based on Meyer's Power Law, a semi-analytical model is obtained for small mass impact response of infinite composite laminates by the method of asymptotic expansion. The original nonlinear second-order ordinary differential equation is transformed into two linear ordinary differential equations. This is achieved by neglecting high-order terms in the asymptotic expansion. As a result, the semi-analytical solution of the overall impact response can be applied to contact laws with varying coefficients. Then an analytical model accounting for permanent deformation based on an elasto-plastic contact law is proposed to obtain the closed-form solutions of the wave-controlled impact responses of composite laminates. The analytical model is also used to predict the threshold velocity for delamination onset by combining it with an existing quasi-static delamination criterion. The predictions are compared with experimental data and explicit finite element LS-DYNA simulation. The comparisons show reasonable agreement. Furthermore, an analytical model is developed to evaluate the combined effects of prestresses and permanent deformation based on the linearized elasto-plastic contact law and the Laplace Transform technique. It is demonstrated that prestresses do not have noticeable effects on the time history of contact force and strains, but they have significant consequences on the plate central displacement. For an impacted composite laminate in the presence of prestresses, the contact force increases with increasing impactor mass, laminate thickness, and interlaminar shear strength. The combined analytical and numerical investigations provide validated models for elastic and elasto-plastic impact analysis of composite structures and shed light on the design of impact-resistant composite systems.

  17. Enabling Analytics on Sensitive Medical Data with Secure Multi-Party Computation.

    PubMed

    Veeningen, Meilof; Chatterjea, Supriyo; Horváth, Anna Zsófia; Spindler, Gerald; Boersma, Eric; van der Spek, Peter; van der Galiën, Onno; Gutteling, Job; Kraaij, Wessel; Veugen, Thijs

    2018-01-01

    While there is a clear need to apply data analytics in the healthcare sector, this is often difficult because it requires combining sensitive data from multiple data sources. In this paper, we show how the cryptographic technique of secure multi-party computation can enable such data analytics by performing analytics without the need to share the underlying data. We discuss the issue of compliance to European privacy legislation; report on three pilots bringing these techniques closer to practice; and discuss the main challenges ahead to make fully privacy-preserving data analytics in the medical sector commonplace.
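
    The core primitive can be illustrated with additive secret sharing over a prime field. This toy sketch runs in one process and omits communication, and real MPC frameworks add much more machinery (and malicious-security layers), but it shows why the published shares reveal only the sum.

```python
# Toy additive secret sharing: three hospitals learn the sum of their patient
# counts without revealing any individual count.
import secrets

P = 2**61 - 1                                   # a public prime modulus

def share(value, n_parties=3):
    """Split value into n additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

inputs = [1043, 2877, 530]                      # private inputs of three parties
all_shares = [share(v) for v in inputs]
# Each party i sums the i-th share of every input; only these sums are published.
partial_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]
print("joint sum:", sum(partial_sums) % P)      # 4450, with no input revealed
```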

  18. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
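
    The brute-force reference method used in the study, Monte Carlo integration of the likelihood over the prior, is easy to sketch for a conjugate Gaussian toy case where the exact evidence is known. All numbers below are illustrative, not from the paper.

```python
# Estimate BME as the prior-averaged likelihood by Monte Carlo, and compare
# against the exact evidence of a conjugate Gaussian model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
y_obs, sigma = 1.2, 0.5          # one observation, known noise std
mu0, tau = 0.0, 1.0              # Gaussian prior on the model parameter theta

theta = rng.normal(mu0, tau, size=200_000)       # draws from the prior
bme_mc = norm.pdf(y_obs, loc=theta, scale=sigma).mean()
bme_exact = norm.pdf(y_obs, loc=mu0, scale=np.sqrt(sigma**2 + tau**2))
print(f"Monte Carlo BME = {bme_mc:.5f}   exact = {bme_exact:.5f}")
```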

  19. Aircraft Dynamic Modeling in Turbulence

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunninham, Kevin

    2012-01-01

    A method for accurately identifying aircraft dynamic models in turbulence was developed and demonstrated. The method uses orthogonal optimized multisine excitation inputs and an analytic method for enhancing signal-to-noise ratio for dynamic modeling in turbulence. A turbulence metric was developed to accurately characterize the turbulence level using flight measurements. The modeling technique was demonstrated in simulation, then applied to a subscale twin-engine jet transport aircraft in flight. Comparisons of modeling results obtained in turbulent air to results obtained in smooth air were used to demonstrate the effectiveness of the approach.
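
    A generic sketch of the orthogonal multisine idea (not NASA's implementation): assign disjoint harmonic sets to each control input so the excitations are mutually orthogonal over the record length. The frequencies, record length, and phasing below are invented for illustration.

```python
# Orthogonal multisine excitation: each input gets its own subset of harmonics
# of the record length T, so the inputs have (near-)zero inner product.
import numpy as np

T, fs = 20.0, 100.0                         # record length (s), sample rate (Hz)
t = np.arange(0.0, T, 1.0 / fs)
harmonics = {"elevator": [2, 5, 8, 11], "aileron": [3, 6, 9, 12]}  # disjoint sets

rng = np.random.default_rng(6)
inputs = {}
for name, ks in harmonics.items():
    phases = rng.uniform(0, 2 * np.pi, len(ks))   # peak-reducing phasing could go here
    u = sum(np.sin(2 * np.pi * k / T * t + p) for k, p in zip(ks, phases))
    inputs[name] = u / np.abs(u).max()            # normalize to unit peak

dot = np.dot(inputs["elevator"], inputs["aileron"]) / len(t)
print(f"normalized inner product: {dot:.2e}")     # ~0: the inputs are orthogonal
```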

  20. Geodynamics for Everyone: Robust Finite-Difference Heat Transfer Models using MS Excel 2007 Spreadsheets

    NASA Astrophysics Data System (ADS)

    Grose, C. J.

    2008-05-01

    Numerical geodynamics models of heat transfer are typically thought of as specialized topics of research requiring knowledge of specialized modelling software, Linux platforms, and state-of-the-art finite-element codes. I have implemented analytical and numerical finite-difference techniques with Microsoft Excel 2007 spreadsheets to solve complex solid-earth heat transfer problems for use by students, teachers, and practicing scientists without specialty in geodynamics modelling techniques and applications. While implementation of equations for use in Excel spreadsheets is occasionally cumbersome, once case boundary structure and node equations are developed, spreadsheet manipulation becomes routine. Model experimentation by modifying parameter values, geometry, and grid resolution makes Excel a useful tool whether in the classroom at the undergraduate or graduate level or for more engaging student projects. Furthermore, the ability to incorporate complex geometries and heat-transfer characteristics makes it ideal for first and occasionally higher order geodynamics simulations to better understand and constrain the results of professional field research in a setting that does not require the constraints of state-of-the-art modelling codes. The straightforward expression and manipulation of model equations in Excel can also serve as a medium to better understand the confusing notations of advanced mathematical problems. To illustrate the power and robustness of computation and visualization in spreadsheet models I focus primarily on one-dimensional analytical and two-dimensional numerical solutions to two case problems: (i) the cooling of oceanic lithosphere and (ii) temperatures within subducting slabs. Excel source documents will be made available.
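
    The same explicit finite-difference scheme ports directly from spreadsheet cells to a few lines of Python. The sketch below applies it to the first case problem, cooling of oceanic lithosphere, and checks it against the analytical half-space solution; the grid, time step, and boundary treatment are chosen here for illustration, not taken from the abstract.

```python
# Explicit 1-D finite-difference conduction, checked against the analytical
# half-space cooling solution T = Ts + (Tm - Ts) * erf(z / (2 sqrt(kappa t))).
import numpy as np
from scipy.special import erf

kappa = 1e-6                       # thermal diffusivity, m^2/s
Ts, Tm = 0.0, 1300.0               # surface and mantle temperatures, deg C
dz, nz = 1000.0, 101               # 1 km grid over 100 km depth
dt = 0.4 * dz**2 / kappa           # satisfies the stability limit dt <= dz^2/(2 kappa)
t_end = 10e6 * 3.15e7              # 10 Myr in seconds

T = np.full(nz, Tm); T[0] = Ts     # hot half-space with a cold surface
t = 0.0
while t < t_end:
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = Tm                     # deep boundary held at mantle temperature
    t += dt

z = np.arange(nz) * dz
T_exact = Ts + (Tm - Ts) * erf(z / (2 * np.sqrt(kappa * t)))
print("max |numeric - analytic| ~", np.abs(T - T_exact).max().round(1), "deg C")
```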

  1. An analytical investigation of NO sub x control techniques for methanol fueled spark ignition engines

    NASA Technical Reports Server (NTRS)

    Browning, L. H.; Argenbright, L. A.

    1983-01-01

    A thermokinetic SI engine simulation was used to study the effects of simple nitrogen oxide control techniques on performance and emissions of a methanol fueled engine. As part of this simulation, a ring crevice storage model was formulated to predict UBF emissions. The study included spark retard, two methods of compression ratio increase and EGR. The study concludes that use of EGR in high turbulence, high compression engines will both maximize power and thermal efficiency while minimizing harmful exhaust pollutants.

  2. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements have been made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers have independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.

  3. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
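
    The Kreisselmeier-Steinhauser aggregation mentioned above has a compact form worth showing. The sketch below uses illustrative values and the standard max-shifted formulation for numerical stability; it is a generic implementation, not code from the report.

```python
# Kreisselmeier-Steinhauser (KS) aggregation: fold many objectives/constraints
# g_i into one smooth, conservative envelope that approaches max(g) as rho grows.
import numpy as np

def ks(g, rho=50.0):
    """Smooth envelope of the values g; KS(g) >= max(g) for any rho > 0."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()                                  # shift by the max for stability
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

g = [0.30, 0.95, 0.10, 0.88]                        # e.g. normalized constraint values
for rho in (5.0, 50.0, 500.0):
    print(f"rho = {rho:5.0f}:  KS = {ks(g, rho):.4f}  (max = {max(g)})")
```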

  4. Q-controlled amplitude modulation atomic force microscopy in liquids: An analysis

    NASA Astrophysics Data System (ADS)

    Hölscher, H.; Schwarz, U. D.

    2006-08-01

    An analysis of amplitude modulation atomic force microscopy in liquids is presented with respect to the application of the Q-Control technique. The equation of motion is solved by numerical and analytic methods with and without Q-Control in the presence of a simple model interaction force adequate for many liquid environments. In addition, the authors give an explicit analytical formula for the tip-sample indentation showing that higher Q factors reduce the tip-sample force. It is found that Q-Control suppresses unwanted deformations of the sample surface, leading to the enhanced image quality reported in several experimental studies.

  5. [Validation of an in-house method for the determination of zinc in serum: Meeting the requirements of ISO 17025].

    PubMed

    Llorente Ballesteros, M T; Navarro Serrano, I; López Colón, J L

    2015-01-01

    The aim of this report is to propose a scheme for validation of an analytical technique according to ISO 17025. According to ISO 17025, the fundamental parameters tested were: selectivity, calibration model, precision, accuracy, uncertainty of measurement, and analytical interference. A protocol has been developed that has been applied successfully to quantify zinc in serum by atomic absorption spectrometry. It is demonstrated that our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.

  6. A Semi-Analytical Orbit Propagator Program for Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Lara, M.; San-Juan, J. F.; Hautesserres, D.

    2016-05-01

    A semi-analytical orbit propagator to study the long-term evolution of spacecraft in Highly Elliptical Orbits is presented. The perturbation model taken into account includes the gravitational effects produced by the first nine zonal harmonics of Earth's gravitational potential and the main tesseral harmonics affecting the 2:1 resonance, which has an impact on Molniya-type orbits; the mass-point approximation for third-body perturbations, which only includes the second-order Legendre polynomial for the sun and the second- through sixth-order polynomials for the moon; solar radiation pressure; and atmospheric drag. Hamiltonian formalism is used to model the forces of gravitational nature; to avoid time-dependence issues, the problem is formulated in the extended phase space. The solar radiation pressure is modeled as a potential and included in the Hamiltonian, whereas the atmospheric drag is added as a generalized force. The semi-analytical theory is developed using perturbation techniques based on Lie transforms. Deprit's perturbation algorithm is applied up to second order in the second zonal harmonic, J2, including Kozai-type terms in the mean-elements Hamiltonian to get "centered" elements. The transformation is developed in closed form in the eccentricity, except for tesseral resonances, and the coupling between J2 and the moon's disturbing effects is neglected. This paper describes the semi-analytical theory, the semi-analytical orbit propagator program and some of the numerical validations.

  7. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves which were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA Camera Response Model was proven to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.

  8. Rotation of the Earth, Mars and asteroids: components, techniques and data quality

    NASA Astrophysics Data System (ADS)

    Souchay, Jean

    2004-12-01

    We explain in some detail the analytical formulations that make it possible to model both the free and the forced motion of any celestial body, taken as rigid or deformable, and we show how they have been applied (with the corresponding level of precision) to the Earth, Mars, and asteroids in general.

  9. Multimodal Teaching Analytics: Automated Extraction of Orchestration Graphs from Wearable Sensor Data

    ERIC Educational Resources Information Center

    Prieto, L. P.; Sharma, K.; Kidzinski, L.; Rodríguez-Triana, M. J.; Dillenbourg, P.

    2018-01-01

    The pedagogical modelling of everyday classroom practice is an interesting kind of evidence, both for educational research and teachers' own professional development. This paper explores the usage of wearable sensors and machine learning techniques to automatically extract orchestration graphs (teaching activities and their social plane over time)…

  10. Social Networks and Smoking: Exploring the Effects of Peer Influence and Smoker Popularity through Simulations

    ERIC Educational Resources Information Center

    Schaefer, David R.; adams, jimi; Haas, Steven A.

    2013-01-01

    Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw on recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention…

  11. SEASONAL AND REGIONAL VARIATIONS OF PRIMARY AND SECONDARY ORGANIC AEROSOLS OVER THE CONTINENTAL UNITED STATES: OBSERVATION-BASED ESTIMATES AND MODEL EVALUATION

    EPA Science Inventory

    Due to the lack of an analytical technique for directly quantifying the atmospheric concentrations of primary (OCpri) and secondary (OCsec) organic carbon aerosols, different indirect methods have been developed to estimate their concentrations. In this stu...

  12. Model transformations for state-space self-tuning control of multivariable stochastic systems

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Bao, Yuan L.; Coleman, Norman P.

    1988-01-01

    The design of self-tuning controllers for multivariable stochastic systems is considered analytically. A long-division technique for finding the similarity transformation matrix and transforming the estimated left MFD to the right MFD is developed; the derivation is given in detail, and the procedures involved are briefly characterized.

  13. An Application of Fuzzy Analytic Hierarchy Process (FAHP) for Evaluating Students' Project

    ERIC Educational Resources Information Center

    Çebi, Ayça; Karal, Hasan

    2017-01-01

    In recent years, artificial intelligence applications for understanding the human thinking process and transferring it to virtual environments have come into prominence. Fuzzy logic, which paves the way for modeling human behaviors and expressing even vague concepts mathematically, and which is also regarded as an artificial intelligence technique, has…

  14. Application of computer-assisted molecular modeling (CAMM) for immunoassay of low molecular weight food contaminants: A review

    USDA-ARS?s Scientific Manuscript database

    Immunoassay for low molecular weight food contaminants, such as pesticides, veterinary drugs, and mycotoxins, is now a well-established technique which meets the demands for a rapid, reliable, and cost-effective analytical method. However, due to limited understanding of the fundamental aspects of i...

  15. Techniques for Forecasting Air Passenger Traffic

    NASA Technical Reports Server (NTRS)

    Taneja, N.

    1972-01-01

    The basic techniques of forecasting air passenger traffic are outlined. These techniques can be broadly classified into four categories: judgmental, time-series analysis, market analysis and analytical. The differences between these methods exist, in part, due to the degree of formalization of the forecasting procedure. Emphasis is placed on describing the analytical method.

  16. Heat and mass transfer in combustion - Fundamental concepts and analytical techniques

    NASA Technical Reports Server (NTRS)

    Law, C. K.

    1984-01-01

    Fundamental combustion phenomena and the associated flame structures in laminar gaseous flows are discussed on physical bases within the framework of the three nondimensional parameters of interest to heat and mass transfer in chemically-reacting flows, namely the Damkoehler number, the Lewis number, and the Arrhenius number which is the ratio of the reaction activation energy to the characteristic thermal energy. The model problems selected for illustration are droplet combustion, boundary layer combustion, and the propagation, flammability, and stability of premixed flames. Fundamental concepts discussed include the flame structures for large activation energy reactions, S-curve interpretation of the ignition and extinction states, reaction-induced local-similarity and non-similarity in boundary layer flows, the origin and removal of the cold boundary difficulty in modeling flame propagation, and effects of flame stretch and preferential diffusion on flame extinction and stability. Analytical techniques introduced include the Shvab-Zeldovich formulation, the local Shvab-Zeldovich formulation, flame-sheet approximation and the associated jump formulation, and large activation energy matched asymptotic analysis. Potentially promising research areas are suggested.

  17. A reference web architecture and patterns for real-time visual analytics on large streaming data

    NASA Astrophysics Data System (ADS)

    Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer

    2013-12-01

    Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.

  18. Four-dimensional modeling of recent vertical movements in the area of the southern California uplift

    USGS Publications Warehouse

    Vanicek, Petr; Elliot, Michael R.; Castle, Robert O.

    1979-01-01

    This paper describes an analytical technique that utilizes scattered geodetic relevelings and tide-gauge records to portray Recent vertical crustal movements that may have been characterized by spasmodic changes in velocity. The technique is based on the fitting of a time-varying algebraic surface of prescribed degree to the geodetic data treated as tilt elements and to tide-gauge readings treated as point movements. Desired variations in time can be selected as any combination of powers of vertical movement velocity and episodic events. The state of the modeled vertical displacement can be shown for any number of dates for visual display. Statistical confidence limits of the modeled displacements, derived from the density of measurements in both space and time, line length, and accuracy of input data, are also provided. The capabilities of the technique are demonstrated on selected data from the region of the southern California uplift. 
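
    As a rough illustration of the fitting idea, the sketch below recovers a linear-in-time velocity surface from noisy synthetic point movements by linear least squares; the data and the surface form are assumptions for illustration, not the authors' exact formulation.

      import numpy as np

      rng = np.random.default_rng(0)
      x, y, t = rng.uniform(0.0, 100.0, (3, 200))     # station coordinates (km) and epochs (yr)
      true_disp = (2.0 + 0.05 * x - 0.03 * y) * t     # "true" displacement surface (mm)
      obs = true_disp + rng.normal(0.0, 1.0, 200)     # noisy relevelings / tide-gauge points

      # Design matrix: one column per coefficient of the velocity surface v(x, y).
      A = np.column_stack([t, x * t, y * t])
      coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
      print("recovered velocity-surface coefficients:", coef)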

  19. Blocking performance of the hose model and the pipe model for VPN service provisioning over WDM optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Haibo; Swee Poo, Gee

    2004-08-01

    We study the provisioning of virtual private network (VPN) service over WDM optical networks. For this purpose, we investigate the blocking performance of the hose model versus the pipe model for the provisioning. Two techniques are presented: an analytical queuing model and a discrete event simulation. The queuing model is developed from the multirate reduced-load approximation technique. The simulation is done with the OPNET simulator. Several experimental situations were used. The blocking probabilities calculated from the two approaches show a close match, indicating that the multirate reduced-load approximation technique is capable of predicting the blocking performance for the pipe model and the hose model in WDM networks. A comparison of the blocking behavior of the two models shows that the hose model has superior blocking performance compared with the pipe model. By and large, the blocking probability of the hose model is lower than that of the pipe model by a few orders of magnitude, particularly in low-load regions. The flexibility of the hose model, which allows the sharing of resources on a link among all connections, accounts for its superior performance.
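
    The reduced-load (Erlang fixed-point) approximation iterates a single-link blocking formula over the network until the thinned offered loads converge. A minimal sketch of that single-link building block (classic Erlang-B, not the paper's full multirate algorithm) follows.

      def erlang_b(load: float, servers: int) -> float:
          """Blocking probability for an offered load (in Erlangs) on `servers` circuits,
          via the standard stable recursion B(n) = a*B(n-1) / (n + a*B(n-1))."""
          b = 1.0
          for n in range(1, servers + 1):
              b = load * b / (n + load * b)
          return b

      print(erlang_b(load=8.0, servers=10))  # ~0.12 blocking at 8 Erlangs on 10 channels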

  20. Particle-tracking analysis of contributing areas of public-supply wells in simple and complex flow systems, Cape Cod, Massachusetts

    USGS Publications Warehouse

    Barlow, Paul M.

    1997-01-01

    Steady-state, two- and three-dimensional, ground-water-flow models coupled with particle tracking were evaluated to determine their effectiveness in delineating contributing areas of wells pumping from stratified-drift aquifers of Cape Cod, Massachusetts. Several contributing areas delineated by use of the three-dimensional models do not conform to the simple ellipsoidal shapes typically delineated by use of two-dimensional analytical and numerical modeling techniques, and they include discontinuous areas of the water table.

  1. A Monte Carlo-finite element model for strain energy controlled microstructural evolution - 'Rafting' in superalloys

    NASA Technical Reports Server (NTRS)

    Gayda, J.; Srolovitz, D. J.

    1989-01-01

    This paper presents a specialized microstructural lattice model, MCFET (Monte Carlo finite element technique), which simulates microstructural evolution in materials in which strain energy has an important role in determining morphology. The model is capable of accounting for externally applied stress, surface tension, misfit, elastic inhomogeneity, elastic anisotropy, and arbitrary temperatures. The MCFET analysis was found to compare well with the results of analytical calculations of the equilibrium morphologies of isolated particles in an infinite matrix.

  2. Disease management with ARIMA model in time series.

    PubMed

    Sato, Renato Cesar

    2013-01-01

    The evaluation of infectious and noninfectious disease management can be done through the use of time series analysis. In this study, we aim to measure outcomes and evaluate the effects of interventions on the disease. Clinical studies have benefited from the use of these techniques, particularly given the wide applicability of the ARIMA model. This study briefly presents the process of using the ARIMA model. This analytical tool offers a great contribution for researchers and healthcare managers in the evaluation of healthcare interventions in specific populations.
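
    A minimal sketch of an ARIMA workflow on synthetic surveillance counts; the model order (1, 1, 1) and the series itself are placeholder assumptions, not values from the study.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(1)
      cases = 100 + np.cumsum(rng.normal(0.0, 3.0, 120))   # 120 months of synthetic incidence

      fit = ARIMA(cases, order=(1, 1, 1)).fit()            # AR(1), first difference, MA(1)
      print(fit.forecast(steps=6))                         # six-month-ahead forecast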

  3. Optimal cooperative control synthesis of active displays

    NASA Technical Reports Server (NTRS)

    Garg, S.; Schmidt, D. K.

    1985-01-01

    A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s^2 plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.

  4. Advanced DPSM approach for modeling ultrasonic wave scattering in an arbitrary geometry

    NASA Astrophysics Data System (ADS)

    Yadav, Susheel K.; Banerjee, Sourav; Kundu, Tribikram

    2011-04-01

    Several techniques are used to diagnose structural damage. In the ultrasonic technique, structures are tested by analyzing ultrasonic signals scattered by damage. The interpretation of these signals requires a good understanding of the interaction between ultrasonic waves and structures. Therefore, researchers need analytical or numerical techniques to have a clear understanding of the interaction between ultrasonic waves and structural damage. However, modeling of the wave scattering phenomenon by conventional numerical techniques such as the finite element method requires a very fine mesh at high frequencies, necessitating heavy computational power. The distributed point source method (DPSM) is a newly developed, robust, mesh-free technique for simulating ultrasonic, electrostatic and electromagnetic fields. In most of the previous studies the DPSM technique has been applied to model two-dimensional surface geometries and simple three-dimensional scatterer geometries; it was difficult to perform the analysis for complex three-dimensional geometries. This technique has now been extended to model wave scattering in an arbitrary geometry. In this paper a channel section, idealized as a thin solid plate with several rivet holes, is formulated. The simulation has been carried out with and without cracks near the rivet holes. Further, a comparison study has also been carried out to characterize the crack. A computer code has been developed in C for modeling the ultrasonic field in a solid plate with and without cracks near the rivet holes.

  5. Optimal pacing for running 400- and 800-m track races

    NASA Astrophysics Data System (ADS)

    Reardon, James

    2013-06-01

    We present a toy model of anaerobic glycolysis that utilizes appropriate physiological and mathematical considerations while remaining useful to the athlete. The toy model produces an optimal pacing strategy for 400-m and 800-m races that is analytically calculated via the Euler-Lagrange equation. The calculation of the optimum v(t) is presented in detail, with an emphasis on intuitive arguments, in order to serve as a bridge between the basic techniques presented in undergraduate physics textbooks and the more advanced techniques of control theory. Observed pacing strategies in 400-m and 800-m world-record races are found to be well fit by the toy model, which allows us to draw a new physiological interpretation for the advantages of common weight-training practices.

  6. Tire Changes, Fresh Air, and Yellow Flags: Challenges in Predictive Analytics for Professional Racing.

    PubMed

    Tulabandhula, Theja; Rudin, Cynthia

    2014-06-01

    Our goal is to design a prediction and decision system for real-time use during a professional car race. In designing a knowledge discovery process for racing, we faced several challenges that were overcome only when domain knowledge of racing was carefully infused within statistical modeling techniques. In this article, we describe how we leveraged expert knowledge of the domain to produce a real-time decision system for tire changes within a race. Our forecasts have the potential to impact how racing teams can optimize strategy by making tire-change decisions to benefit their rank position. Our work significantly expands previous research on sports analytics, as it is the only work on analytical methods for within-race prediction and decision making for professional car racing.

  7. Least Square Regression Method for Estimating Gas Concentration in an Electronic Nose System

    PubMed Central

    Khalaf, Walaa; Pace, Calogero; Gaudioso, Manlio

    2009-01-01

    We describe an Electronic Nose (ENose) system which is able to identify the type of analyte and to estimate its concentration. The system consists of seven sensors, five of them being gas sensors (supplied with different heater voltage values), the remainder being a temperature and a humidity sensor, respectively. To identify a new analyte sample and then to estimate its concentration, we use machine learning techniques together with the least squares regression principle. In fact, we apply two different training models; the first one is based on the Support Vector Machine (SVM) approach and is aimed at teaching the system how to discriminate among different gases, while the second one uses the least squares regression approach to predict the concentration of each type of analyte. PMID:22573980
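
    A minimal scikit-learn sketch of the two-stage scheme (an SVM to identify the gas, then a per-class least-squares model for concentration); the sensor responses below are synthetic stand-ins for the seven-sensor array, not the paper's data.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      X = rng.normal(size=(300, 7))                        # 7 sensor readings per sample
      gas = rng.integers(0, 3, 300)                        # three analyte classes
      conc = X @ rng.normal(size=7) + 10.0 * gas           # synthetic concentrations

      clf = SVC(kernel="rbf").fit(X, gas)                  # stage 1: identify the analyte
      reg = {g: LinearRegression().fit(X[gas == g], conc[gas == g]) for g in range(3)}

      x_new = X[:1]                                        # pretend this is a new sample
      g_hat = int(clf.predict(x_new)[0])                   # stage 2: class-specific regression
      print(g_hat, reg[g_hat].predict(x_new))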

  8. Efficient computational nonlinear dynamic analysis using modal modification response technique

    NASA Astrophysics Data System (ADS)

    Marinone, Timothy; Avitabile, Peter; Foley, Jason; Wolfson, Janet

    2012-08-01

    Structural systems often contain nonlinear characteristics. These nonlinear systems require significant computational resources for solution of the equations of motion. Much of the model, however, is linear, with the nonlinearity resulting from discrete local elements connecting different components together. Using a component mode synthesis approach, a nonlinear model can be developed by interconnecting these linear components with highly nonlinear connection elements. The approach presented in this paper, the Modal Modification Response Technique (MMRT), is a very efficient technique created to address this specific class of nonlinear problem. By utilizing a Structural Dynamics Modification (SDM) approach in conjunction with mode superposition, a significantly smaller set of matrices is required for use in the direct integration of the equations of motion. The approach will be compared to traditional analytical approaches to make evident the usefulness of the technique for a variety of test cases.

  9. NASA standard: Trend analysis techniques

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
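
    A minimal sketch of the linear, quadratic, and exponential fits the Standard describes, using ordinary least squares (the exponential fit is done in log space, which assumes positive data); the series below is synthetic.

      import numpy as np

      t = np.arange(24, dtype=float)                       # e.g., months since program start
      y = 5.0 * np.exp(0.08 * t) + np.random.default_rng(3).normal(0.0, 0.2, 24)

      lin = np.polyfit(t, y, 1)                            # y ~ a*t + b
      quad = np.polyfit(t, y, 2)                           # y ~ a*t^2 + b*t + c
      slope, intercept = np.polyfit(t, np.log(y), 1)       # log y ~ slope*t + intercept
      print("linear:", lin)
      print("quadratic:", quad)
      print(f"exponential: y ~ {np.exp(intercept):.2f} * exp({slope:.3f} t)")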

  10. Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred

    2015-01-01

    To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable material property couple and a transient couple. Using an analytic approach, as opposed to commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as useful starting conditions for further numerical studies, or can serve as design rules for lab-built couples. The analytic modeling considers two cases and accounts for 1) material properties which vary with temperature and 2) transient operation of a couple. The variable material property case was handled by means of an asymptotic expansion, which allows for insight into the influence of temperature dependence on different material properties. The variable property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work assists in designing couples for optimal performance and in selecting materials.

  11. Graphene oxide assisted electromembrane extraction with gas chromatography for the determination of methamphetamine as a model analyte in hair and urine samples.

    PubMed

    Bagheri, Hasan; Zavareh, Alireza Fakhari; Koruni, Mohammad Hossein

    2016-03-01

    In the present study, graphene oxide reinforced two-phase electromembrane extraction (EME) coupled with gas chromatography was applied for the determination of methamphetamine as a model analyte in biological samples. The presence of graphene oxide in the hollow fiber wall can increase the effective surface area, the interactions with the analyte, and the polarity of the supported liquid membrane, which leads to an enhancement in analyte migration. To investigate the influence of graphene oxide in the supported liquid membrane on the extraction efficiency, a comparative study was performed between conventional EME and graphene oxide-assisted EME. The extraction parameters such as type of organic solvent, pH of the donor phase, stirring speed, time, voltage, salt addition and the concentration of graphene oxide were optimized. Under the optimum conditions, the proposed microextraction technique provided a low limit of detection (2.4 ng/mL), a high preconcentration factor (195-198) and high relative recovery (95-98.5%). Finally, the method was successfully employed for the determination of methamphetamine in urine and hair samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Investigation of finite element: ABC methods for electromagnetic field simulation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chatterjee, A.; Volakis, John L.; Nguyen, J.

    1994-01-01

    The mechanics of wave propagation in the presence of obstacles is of great interest in many branches of engineering and applied mathematics like electromagnetics, fluid dynamics, geophysics, seismology, etc. Such problems can be broadly classified into two categories: the bounded domain or the closed problem and the unbounded domain or the open problem. Analytical techniques have been derived for the simpler problems; however, the need to model complicated geometrical features, complex material coatings and fillings, and to adapt the model to changing design parameters have inevitably tilted the balance in favor of numerical techniques. The modeling of closed problems presents difficulties primarily in proper meshing of the interior region. However, problems in unbounded domains pose a unique challenge to computation, since the exterior region is inappropriate for direct implementation of numerical techniques. A large number of solutions have been proposed but only a few have stood the test of time and experiment. The goal of this thesis is to develop an efficient and reliable partial differential equation technique to model large three dimensional scattering problems in electromagnetics.

  13. Evaluation of protective shielding thickness for diagnostic radiology rooms: theory and computer simulation.

    PubMed

    Costa, Paulo R; Caldas, Linda V E

    2002-01-01

    This work presents the development and evaluation of modern techniques for calculating radiation protection barriers in clinical radiographic facilities. Our methodology uses realistic primary and scattered spectra. The primary spectra were computer simulated using a waveform generalization and a semiempirical model (the Tucker-Barnes-Chakraborty model). The scattered spectra were obtained from published data. An analytical function was used to produce attenuation curves for polychromatic radiation at specified kVp, waveform, and filtration. The results of this analytical function are given in ambient dose equivalent units. The attenuation curves were obtained by application of Archer's model to computer simulation data. The parameters for the best fit to the model using primary and secondary radiation data from different radiographic procedures were determined. They resulted in an optimized model for shielding calculation for any radiographic room. The shielding costs were about 50% lower than those calculated using the traditional method based on Report No. 49 of the National Council on Radiation Protection and Measurements.
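
    Archer's three-parameter model, B(x) = [(1 + b/a) exp(a g x) - b/a]^(-1/g), underlies this kind of shielding fit and inverts in closed form for the barrier thickness. The sketch below uses illustrative parameter values only, not the paper's fitted data.

      import math

      def transmission(x_mm: float, a: float, b: float, g: float) -> float:
          """Broad-beam transmission through thickness x (Archer's model)."""
          return ((1 + b / a) * math.exp(a * g * x_mm) - b / a) ** (-1 / g)

      def thickness(B: float, a: float, b: float, g: float) -> float:
          """Invert Archer's model: barrier thickness achieving transmission B."""
          return math.log((B ** (-g) + b / a) / (1 + b / a)) / (a * g)

      a, b, g = 2.346, 15.9, 0.547   # illustrative lead parameters, placeholders only
      print(thickness(1e-3, a, b, g), "mm of lead for B = 0.001")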

  14. A procedure to achieve fine control in MW processing of foods

    NASA Astrophysics Data System (ADS)

    Cuccurullo, G.; Cinquanta, L.; Sorrentino, G.

    2007-01-01

    A two-dimensional analytical model for predicting the unsteady temperature field in a cylindrical shaped body affected by spatially varying heat generation is presented. The dimensionless problem is solved analytically by using both partial solutions and the variation-of-parameters technique. Having in mind industrial microwave heating for food pasteurization, the easy-to-handle solution is used to confirm the intrinsic lack of spatial uniformity of such a treatment in comparison to the traditional one. From an experimental point of view, a batch pasteurization treatment was realized to compare the effect of two different control techniques, both based on IR thermography readout: the former implemented classical PID control, while the latter was based on a "shadowing" technique, consisting of covering portions of the sample that are hot enough with a mobile metallic screen. A measure of the effectiveness of the two control techniques was obtained by evaluating the thermal death curves of a strain of Lactobacillus plantarum submitted to pasteurization temperatures. Preliminary results showed meaningful increases in the microwave thermal inactivation of L. plantarum and similarly significant decreases in thermal inactivation time with respect to the traditional pasteurization thermal treatment.

  15. Perceived sexual harassment at work: meta-analysis and structural model of antecedents and consequences.

    PubMed

    Topa Cantisano, Gabriela; Morales Domínguez, J F; Depolo, Marco

    2008-05-01

    Although sexual harassment has been extensively studied, empirical research has not led to firm conclusions about its antecedents and consequences, both at the personal and organizational level. An extensive literature search yielded 42 empirical studies with 60 samples. The correlation matrix obtained through meta-analytic techniques was used to test a structural equation model. Results supported the hypotheses regarding organizational environmental factors as main predictors of harassment.

  16. Incorporation of RAM techniques into simulation modeling

    NASA Astrophysics Data System (ADS)

    Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.

    1995-01-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model to represent the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.

  17. Spectroscopic analysis of solar and cosmic X-ray spectra. 1: The nature of cosmic X-ray spectra and proposed analytical techniques

    NASA Technical Reports Server (NTRS)

    Walker, A. B. C., Jr.

    1975-01-01

    Techniques for the study of the solar corona are reviewed as an introduction to a discussion of modifications required for the study of cosmic sources. Spectroscopic analysis of individual sources and the interstellar medium is considered. The latter was studied via analysis of its effect on the spectra of selected individual sources. The effects of various characteristics of the ISM, including the presence of grains, molecules, and ionization, are first discussed, and the development of ISM models is described. The expected spectral structure of individual cosmic sources is then reviewed with emphasis on supernovae remnants and binary X-ray sources. The observational and analytical requirements imposed by the characteristics of these sources are identified, and prospects for the analysis of abundances and the study of physical parameters within them are assessed. Prospects for the spectroscopic study of other classes of X-ray sources are also discussed.

  18. Classification of smoke tainted wines using mid-infrared spectroscopy and chemometrics.

    PubMed

    Fudge, Anthea L; Wilkinson, Kerry L; Ristic, Renata; Cozzolino, Daniel

    2012-01-11

    In this study, the suitability of mid-infrared (MIR) spectroscopy, combined with principal component analysis (PCA) and linear discriminant analysis (LDA), was evaluated as a rapid analytical technique to identify smoke tainted wines. Control (i.e., unsmoked) and smoke-affected wines (260 in total) from experimental and commercial sources were analyzed by MIR spectroscopy and chemometrics. The concentrations of guaiacol and 4-methylguaiacol were also determined using gas chromatography-mass spectrometry (GC-MS), as markers of smoke taint. LDA models correctly classified 61% of control wines and 70% of smoke-affected wines. Classification rates were found to be influenced by the extent of smoke taint (based on GC-MS and informal sensory assessment), as well as qualitative differences in wine composition due to grape variety and oak maturation. Overall, the potential application of MIR spectroscopy combined with chemometrics as a rapid analytical technique for screening smoke-affected wines was demonstrated.

  19. Performance of Orbital Neutron Instruments for Spatially Resolved Hydrogen Measurements of Airless Planetary Bodies

    PubMed Central

    Elphic, Richard C.; Feldman, William C.; Funsten, Herbert O.; Prettyman, Thomas H.

    2010-01-01

    Orbital neutron spectroscopy has become a standard technique for measuring planetary surface compositions from orbit. While this technique has led to important discoveries, such as the deposits of hydrogen at the Moon and Mars, a limitation is its poor spatial resolution. For omni-directional neutron sensors, spatial resolutions are 1–1.5 times the spacecraft's altitude above the planetary surface (or 40–600 km for typical orbital altitudes). Neutron sensors with enhanced spatial resolution have been proposed, and one with a collimated field of view is scheduled to fly on a mission to measure lunar polar hydrogen. No quantitative studies or analyses have been published that evaluate in detail the detection and sensitivity limits of spatially resolved neutron measurements. Here, we describe two complementary techniques for evaluating the hydrogen sensitivity of spatially resolved neutron sensors: an analytic, closed-form expression that has been validated with Lunar Prospector neutron data, and a three-dimensional modeling technique. The analytic technique, called the Spatially resolved Neutron Analytic Sensitivity Approximation (SNASA), provides a straightforward method to evaluate spatially resolved neutron data from existing instruments as well as to plan for future mission scenarios. We conclude that the existing detector, the Lunar Exploration Neutron Detector (LEND), scheduled to launch on the Lunar Reconnaissance Orbiter will have hydrogen sensitivities that are over an order of magnitude poorer than previously estimated. We further conclude that a sensor with a geometric factor of ~100 cm2 sr (compared to the LEND geometric factor of ~10.9 cm2 sr) could make substantially improved measurements of the lunar polar hydrogen spatial distribution. PMID:20298147

  20. Metallic Rotor Sizing and Performance Model for Flywheel Systems

    NASA Technical Reports Server (NTRS)

    Moore, Camille J.; Kraft, Thomas G.

    2012-01-01

    The NASA Glenn Research Center (GRC) is developing flywheel system requirements and designs for terrestrial and spacecraft applications. Several generations of flywheels have been designed and tested at GRC using in-house expertise in motors, magnetic bearings, controls, materials and power electronics. The maturation of a flywheel system from the concept phase to the preliminary design phase is accompanied by maturation of the Integrated Systems Performance model, where estimating relationships are replaced by physics based analytical techniques. The modeling can incorporate results from engineering model testing and emerging detail from the design process.

  1. Environmental Assessment and Monitoring with ICAMS (Image Characterization and Modeling System) Using Multiscale Remote-Sensing Data

    NASA Technical Reports Server (NTRS)

    Lam, N.; Qiu, H.-I.; Quattrochi, Dale A.; Zhao, Wei

    1997-01-01

    With the rapid increase in spatial data, especially in the NASA-EOS (Earth Observing System) era, it is necessary to develop efficient and innovative tools to handle and analyze these data so that environmental conditions can be assessed and monitored. A main difficulty facing geographers and environmental scientists in environmental assessment and measurement is that spatial analytical tools are not easily accessible. We have recently developed a remote sensing/GIS software module called Image Characterization and Modeling System (ICAMS) to provide specialized spatial analytical tools for the measurement and characterization of satellite and other forms of spatial data. ICAMS runs on both the Intergraph-MGE and Arc/info UNIX and Windows-NT platforms. The main techniques in ICAMS include fractal measurement methods, variogram analysis, spatial autocorrelation statistics, textural measures, aggregation techniques, normalized difference vegetation index (NDVI), and delineation of land/water and vegetated/non-vegetated boundaries. In this paper, we demonstrate the main applications of ICAMS on the Intergraph-MGE platform using Landsat Thematic Mapper images from the city of Lake Charles, Louisiana. While the utilities of ICAMS' spatial measurement methods (e.g., fractal indices) in assessing environmental conditions remain to be researched, making the software available to a wider scientific community can permit the techniques in ICAMS to be evaluated and used for a diversity of applications. The findings from these various studies should lead to improved algorithms and more reliable models for environmental assessment and monitoring.

  2. A semi-analytical analysis of electro-thermo-hydrodynamic stability in dielectric nanofluids using Buongiorno's mathematical model together with more realistic boundary conditions

    NASA Astrophysics Data System (ADS)

    Wakif, Abderrahim; Boulahia, Zoubair; Sehaqui, Rachid

    2018-06-01

    The main aim of the present analysis is to examine the electroconvection phenomenon that takes place in a dielectric nanofluid under the influence of a perpendicularly applied alternating electric field. In this investigation, we assume that the nanofluid has a Newtonian rheological behavior and obeys Buongiorno's mathematical model, in which the effects of thermophoretic and Brownian diffusions are incorporated explicitly in the governing equations. Moreover, the nanofluid layer is taken to be confined horizontally between two parallel plate electrodes, heated from below and cooled from above. In a fast pulse electric field, the onset of electroconvection is due principally to the buoyancy forces and the dielectrophoretic forces. Within the framework of the Oberbeck-Boussinesq approximation and the linear stability theory, the governing stability equations are solved semi-analytically by means of the power series method for isothermal, no-slip and non-penetrability conditions. In addition, the computational implementation with the impermeability condition implies that there exists no nanoparticle mass flux on the electrodes. On the other hand, the obtained analytical solutions are validated by comparing them to those available in the literature for the limiting case of dielectric fluids. In order to check the accuracy of our semi-analytical results obtained for the case of dielectric nanofluids, we perform further numerical and semi-analytical computations by means of the Runge-Kutta-Fehlberg method, the Chebyshev-Gauss-Lobatto spectral method, the Galerkin weighted residuals technique, the polynomial collocation method and the Wakif-Galerkin weighted residuals technique. In this analysis, the electro-thermo-hydrodynamic stability of the studied nanofluid is controlled through the critical AC electric Rayleigh number Re_c, whose value depends on several physical parameters. Furthermore, the effects of various pertinent parameters on the electro-thermo-hydrodynamic stability of the nanofluidic system are discussed in more detail through graphical and tabular illustrations.

  3. Introduction to the IWA task group on biofilm modeling.

    PubMed

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.

  4. Effect of Microscopic Damage Events on Static and Ballistic Impact Strength of Triaxial Braid Composites

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.

    2010-01-01

    The reliability of impact simulations for aircraft components made with triaxial-braided carbon-fiber composites is currently limited by inadequate material property data and lack of validated material models for analysis. Methods to characterize the material properties used in the analytical models from a systematically obtained set of test data are also lacking. A macroscopic finite element based analytical model to analyze the impact response of these materials has been developed. The stiffness and strength properties utilized in the material model are obtained from a set of quasi-static in-plane tension, compression and shear coupon level tests. Full-field optical strain measurement techniques are applied in the testing, and the results are used to help in characterizing the model. The unit cell of the braided composite is modeled as a series of shell elements, where each element is modeled as a laminated composite. The braided architecture can thus be approximated within the analytical model. The transient dynamic finite element code LS-DYNA is utilized to conduct the finite element simulations, and an internal LS-DYNA constitutive model is utilized in the analysis. Methods to obtain the stiffness and strength properties required by the constitutive model from the available test data are developed. Simulations of quasi-static coupon tests and impact tests of a represented braided composite are conducted. Overall, the developed method shows promise, but improvements that are needed in test and analysis methods for better predictive capability are examined.

  5. Fast detection and visualization of minced lamb meat adulteration using NIR hyperspectral imaging and multivariate image analysis.

    PubMed

    Kamruzzaman, Mohammed; Sun, Da-Wen; ElMasry, Gamal; Allen, Paul

    2013-01-15

    Many studies have been carried out in developing non-destructive technologies for predicting meat adulteration, but there has been no previous effort at non-destructive detection and quantification of adulteration in minced lamb meat. The main goal of this study was to develop and optimize a rapid analytical technique based on near-infrared (NIR) hyperspectral imaging to detect the level of adulteration in minced lamb. An initial investigation was carried out using principal component analysis (PCA) to identify the most likely adulterant in minced lamb. Minced lamb meat samples were then adulterated with minced pork in the range 2-40% (w/w) at approximately 2% increments. Spectral data were used to develop a partial least squares regression (PLSR) model to predict the level of adulteration in minced lamb. A good prediction model was obtained using the whole spectral range (910-1700 nm), with a coefficient of determination (R²cv) of 0.99 and a root-mean-square error of cross-validation (RMSECV) of 1.37%. Four important wavelengths (940, 1067, 1144 and 1217 nm) were selected using weighted regression coefficients (Bw), and a multiple linear regression (MLR) model was then established using these important wavelengths to predict adulteration. The MLR model resulted in a coefficient of determination (R²cv) of 0.98 and an RMSECV of 1.45%. The developed MLR model was then applied to each pixel in the image to obtain prediction maps visualizing the distribution of adulteration in the tested samples. The results demonstrated that laborious and time-consuming traditional analytical techniques could be replaced by spectral data to provide a rapid, low-cost and non-destructive testing technique for adulterant detection in minced lamb meat. Copyright © 2012 Elsevier B.V. All rights reserved.
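
    A minimal scikit-learn sketch of the PLSR step, predicting adulteration level from mean spectra; the band count, component number, and data are illustrative assumptions, not the study's measurements.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      adult = rng.uniform(2.0, 40.0, 120)                       # adulteration level, % w/w
      spectra = rng.normal(size=(120, 237))                     # 120 samples x 237 NIR bands
      spectra += np.outer(adult, rng.normal(size=237)) * 0.01   # embed a weak linear signal

      pls = PLSRegression(n_components=8)
      print(cross_val_score(pls, spectra, adult, scoring="r2", cv=5))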

  6. Repeated applications of a transdermal patch: analytical solution and optimal control of the delivery rate.

    PubMed

    Simon, L

    2007-10-01

    The integral transform technique was implemented to solve a mathematical model developed for percutaneous drug absorption. The model included repeated application and removal of a patch from the skin. Fick's second law of diffusion was used to study the transport of a medicinal agent through the vehicle and subsequent penetration into the stratum corneum. Eigenmodes and eigenvalues were computed and introduced into an inversion formula to estimate the delivery rate and the amount of drug in the vehicle and the skin. A dynamic programming algorithm calculated the optimal doses necessary to achieve a desired transdermal flux. The analytical method predicted profiles that were in close agreement with published numerical solutions and provided an automated strategy to perform therapeutic drug monitoring and control.
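
    A minimal sketch of the eigenmode-series idea, using the classic single-layer membrane solution (constant donor concentration, perfect-sink receiver) as a stand-in for the paper's two-layer vehicle/stratum-corneum model; all parameter values are assumed.

      import numpy as np

      def flux(t, D=1e-9, L=1e-4, C0=1.0, n_modes=50):
          """Receiver-side flux J(t) = (D*C0/L) * (1 + 2*sum_n (-1)^n exp(-D n^2 pi^2 t / L^2))."""
          n = np.arange(1, n_modes + 1)
          series = ((-1) ** n * np.exp(-D * (n * np.pi / L) ** 2 * t[:, None])).sum(axis=1)
          return D * C0 / L * (1 + 2 * series)

      t = np.linspace(1.0, 3600.0, 100)      # seconds after patch application
      print(flux(t)[[0, -1]])                # early flux vs near-steady-state flux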

  7. Virial coefficients and demixing in the Asakura-Oosawa model.

    PubMed

    López de Haro, Mariano; Tejero, Carlos F; Santos, Andrés; Yuste, Santos B; Fiumara, Giacomo; Saija, Franz

    2015-01-07

    The problem of demixing in the Asakura-Oosawa colloid-polymer model is considered. The critical constants are computed using truncated virial expansions up to fifth order. While the exact analytical results for the second and third virial coefficients are known for any size ratio, analytical results for the fourth virial coefficient are provided here, and fifth virial coefficients are obtained numerically for particular size ratios using standard Monte Carlo techniques. We have computed the critical constants by successively considering the truncated virial series up to the second, third, fourth, and fifth virial coefficients. The results for the critical colloid and (reservoir) polymer packing fractions are compared with those that follow from available Monte Carlo simulations in the grand canonical ensemble. Limitations and perspectives of this approach are pointed out.

  8. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.

    2018-03-01

    In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  9. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
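
    The matrix measure itself is a one-liner; the sketch below pairs it with one published small-delay bound of this type for dx/dt = A x(t) + B x(t - tau). The exact inequality used here is an assumption for illustration and should be checked against the paper.

      import numpy as np

      def matrix_measure(A):
          """2-norm matrix measure: largest eigenvalue of the symmetric part (A + A^T)/2."""
          return np.linalg.eigvalsh((A + A.T) / 2).max()

      A = np.array([[-2.0, 1.0], [0.0, -3.0]])
      B = np.array([[0.5, 0.0], [0.2, 0.5]])

      mu = matrix_measure(A + B)     # must be negative for this bound to apply
      # One known sufficient estimate: tau < -mu(A+B) / (||B|| * (||A|| + ||B||)).
      tau_bound = -mu / (np.linalg.norm(B, 2) * (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)))
      print("mu(A+B) =", mu, "; delay margin estimate ~", tau_bound)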

  10. Dissociable meta-analytic brain networks contribute to coordinated emotional processing.

    PubMed

    Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R

    2018-06-01

    Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli are integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.

  11. Data mining to support simulation modeling of patient flow in hospitals.

    PubMed

    Isken, Mark W; Rajagopalan, Balaji

    2002-04-01

    Spiraling health care costs in the United States are driving institutions to continually address the challenge of optimizing the use of scarce resources. One of the first steps towards optimizing resources is to utilize capacity effectively. For hospital capacity planning problems such as allocation of inpatient beds, computer simulation is often the method of choice. One of the more difficult aspects of using simulation models for such studies is the creation of a manageable set of patient types to include in the model. The objective of this paper is to demonstrate the potential of using data mining techniques, specifically clustering techniques such as K-means, to help guide the development of patient type definitions for purposes of building computer simulation or analytical models of patient flow in hospitals. Using data from a hospital in the Midwest this study brings forth several important issues that researchers need to address when applying clustering techniques in general and specifically to hospital data.
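
    A minimal sketch of the clustering step with scikit-learn; the feature columns and the choice of k = 4 are invented for illustration, and in practice the cluster count would be settled with domain review.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      # columns: length of stay (days), age (yr), admission acuity score (assumed features)
      records = np.column_stack([rng.gamma(2.0, 2.0, 500),
                                 rng.uniform(18, 90, 500),
                                 rng.integers(1, 6, 500)])

      X = StandardScaler().fit_transform(records)      # scale before distance-based clustering
      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      print(np.bincount(km.labels_))                   # cluster sizes = candidate patient types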

  12. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    1997-01-01

    The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.

  13. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is considered on an example of hydrothermodynamics and atmospheric chemistry models [1,2]. Extending existing methods for constructing numerical schemes that possess the property of total approximation for operators of multiscale process models, we have developed a new variational technique, which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes one lower than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes for each coordinate direction successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of adjoint equations serve as integrating factors. The results are hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each direction on a time step. In each direction, they have a tridiagonal structure and are solved by the sweep method. An important advantage of the discrete-analytical schemes is that the values of derivatives at the boundaries of the finite volume are calculated together with the values of the unknown functions. This technique is particularly attractive for problems with dominant convection, as it does not require artificial monotonization and limiters. The same idea of integrating factors is applied in the temporal dimension to the stiff systems of equations describing chemical transformation models [2]. The proposed method is applicable for problems involving convection-diffusion-reaction operators. The work has been partially supported by the Presidium of RAS under Program 43, and by the RFBR grants 14-01-00125 and 14-01-31482. References: 1. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, (2014) V.67, Issue 12, P. 2240-2256. 2. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220.

  14. Bayesian meta-analytical methods to incorporate multiple surrogate endpoints in drug development process.

    PubMed

    Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R

    2016-03-30

    A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed, first, using an unstructured between-study covariance matrix by assuming the treatment effects on all outcomes are correlated and second, using a structured between-study covariance matrix by assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for the summary data on a study level, the individual-level association is taken into account by the use of the Prentice's criteria (obtained from individual patient data) to inform the within study correlations in the models. The modelling techniques are investigated using an example in relapsing remitting multiple sclerosis where the disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates to the disability progression. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  15. New analytical technique for carbon dioxide absorption solvents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pouryousefi, F.; Idem, R.O.

    2008-02-15

    The densities and refractive indices of two binary systems (water + MEA and water + MDEA) and three ternary systems (water + MEA + CO2, water + MDEA + CO2, and water + MEA + MDEA) used for carbon dioxide (CO2) capture were measured over the range of compositions of the aqueous alkanolamine(s) used for CO2 absorption at temperatures from 295 to 338 K. Experimental densities were modeled empirically, while the experimental refractive indices were modeled using well-established models from the known values of their pure-component densities and refractive indices. The density and Gladstone-Dale refractive index models were then used to obtain the compositions of unknown samples of the binary and ternary systems by simultaneous solution of the density and refractive index equations. The results from this technique have been compared with HPLC (high-performance liquid chromatography) results, while a third independent technique (acid-base titration) was used to verify the results. The results show that the systems' compositions obtained from the simple and easy-to-use refractive index/density technique were very comparable to the expensive and laborious HPLC/titration techniques, suggesting that the refractive index/density technique can be used to replace existing methods for analysis of fresh or nondegraded, CO2-loaded, single and mixed alkanolamine solutions.
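
    A minimal sketch of the simultaneous-solution step: given calibrated density and refractive-index models as functions of the two amine mass fractions, both compositions follow from solving the two equations at once. The linear calibrations below are invented placeholders, not the paper's fits.

      import numpy as np
      from scipy.optimize import fsolve

      def rho(w1, w2):                   # empirical density model (placeholder fit), g/cm^3
          return 0.997 + 0.021 * w1 + 0.043 * w2

      def n(w1, w2):                     # Gladstone-Dale-style mixing rule (placeholder fit)
          return 1.333 + 0.018 * w1 + 0.026 * w2

      rho_meas, n_meas = 1.0155, 1.3466  # measured properties of the unknown sample (assumed)

      def residuals(w):
          w1, w2 = w
          return [rho(w1, w2) - rho_meas, n(w1, w2) - n_meas]

      w1, w2 = fsolve(residuals, x0=[0.2, 0.2])
      print(f"MEA fraction ~ {w1:.3f}, MDEA fraction ~ {w2:.3f}")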

  16. Dark-ages Reionization and Galaxy Formation Simulation - XIV. Gas accretion, cooling, and star formation in dwarf galaxies at high redshift

    NASA Astrophysics Data System (ADS)

    Qin, Yuxiang; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Geil, Paul M.; Mesinger, Andrei; Wyithe, J. Stuart B.

    2018-06-01

    We study dwarf galaxy formation at high redshift (z ≥ 5) using a suite of high-resolution, cosmological hydrodynamic simulations and a semi-analytic model (SAM). We focus on gas accretion, cooling, and star formation in this work by isolating the relevant process from reionization and supernova feedback, which will be further discussed in a companion paper. We apply the SAM to halo merger trees constructed from a collisionless N-body simulation sharing identical initial conditions to the hydrodynamic suite, and calibrate the free parameters against the stellar mass function predicted by the hydrodynamic simulations at z = 5. By making comparisons of the star formation history and gas components calculated by the two modelling techniques, we find that semi-analytic prescriptions that are commonly adopted in the literature of low-redshift galaxy formation do not accurately represent dwarf galaxy properties in the hydrodynamic simulation at earlier times. We propose three modifications to SAMs that will provide more accurate high-redshift simulations. These include (1) the halo mass and baryon fraction which are overestimated by collisionless N-body simulations; (2) the star formation efficiency which follows a different cosmic evolutionary path from the hydrodynamic simulation; and (3) the cooling rate which is not well defined for dwarf galaxies at high redshift. Accurate semi-analytic modelling of dwarf galaxy formation informed by detailed hydrodynamical modelling will facilitate reliable semi-analytic predictions over the large volumes needed for the study of reionization.

  17. Does leaf chemistry differentially affect breakdown in tropical vs temperate streams? Importance of standardized analytical techniques to measure leaf chemistry

    Treesearch

    Marcelo Ardón; Catherine M. Pringle; Susan L. Eggert

    2009-01-01

    Comparisons of the effects of leaf litter chemistry on leaf breakdown rates in tropical vs temperate streams are hindered by incompatibility among studies and across sites of analytical methods used to measure leaf chemistry. We used standardized analytical techniques to measure chemistry and breakdown rate of leaves from common riparian tree species at 2 sites, 1...

  18. Propulsion Health Monitoring for Enhanced Safety

    NASA Technical Reports Server (NTRS)

    Butz, Mark G.; Rodriguez, Hector M.

    2003-01-01

    This report presents the results of the NASA contract Propulsion System Health Management for Enhanced Safety performed by General Electric Aircraft Engines (GE AE), General Electric Global Research (GE GR), and Pennsylvania State University Applied Research Laboratory (PSU ARL) under the NASA Aviation Safety Program. This activity supports the overall goal of enhanced civil aviation safety through a reduction in the occurrence of safety-significant propulsion system malfunctions. Specific objectives are to develop and demonstrate vibration diagnostics techniques for the on-line detection of turbine rotor disk cracks, and model-based fault tolerant control techniques for the prevention and mitigation of in-flight engine shutdown, surge/stall, and flameout events. The disk crack detection work was performed by GE GR which focused on a radial-mode vibration monitoring technique, and PSU ARL which focused on a torsional-mode vibration monitoring technique. GE AE performed the Model-Based Fault Tolerant Control work which focused on the development of analytical techniques for detecting, isolating, and accommodating gas-path faults.

  19. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    PubMed

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
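
    Generative topographic mapping is not available in common Python libraries, so the sketch below uses PCA as a simple linear stand-in for the 14-D to 2-D projection step, with synthetic data in place of the patient records; the anomaly-flagging logic is the part being illustrated:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in: rows = patient records, columns = 14 analytes
        rng = np.random.default_rng(0)
        records = rng.normal(size=(13670, 14))

        # Project the 14-dimensional records onto a 2-D map
        z = PCA(n_components=2).fit_transform(
            StandardScaler().fit_transform(records))

        # Flag records far from the bulk of the 2-D map as potentially anomalous
        dist = np.linalg.norm(z - z.mean(axis=0), axis=1)
        suspect = np.where(dist > dist.mean() + 3 * dist.std())[0]
        print(f"{len(suspect)} records flagged for review")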

  20. Synchrotron IR microspectroscopy for protein structure analysis: Potential and questions

    DOE PAGES

    Yu, Peiqiang

    2006-01-01

    Synchrotron radiation-based Fourier transform infrared microspectroscopy (S-FTIR) has been developed as a rapid, direct, non-destructive, bioanalytical technique. This technique takes advantage of synchrotron light brightness and small effective source size and is capable of exploring the molecular chemical make-up within microstructures of a biological tissue without destruction of inherent structures, at ultra-spatial resolutions within cellular dimensions. To date there has been very little application of this advanced technique to the study of pure protein inherent structure at a cellular level in biological tissues. In this review, a novel approach is introduced to show the potential of the newly developed synchrotron-based analytical technology, which can be used to localize relatively "pure" protein in plant tissues and to relatively reveal protein inherent structure and protein molecular chemical make-up within intact tissue at cellular and subcellular levels. Several complex protein IR spectral data analytical techniques (Gaussian and Lorentzian multi-component peak modeling, univariate and multivariate analysis, principal component analysis (PCA), and hierarchical cluster analysis (CLA)) are employed to reveal features of protein inherent structure and distinguish protein inherent structure differences between varieties/species and treatments in plant tissues. By using a multi-peak modeling procedure, relative estimates (but not exact determinations) of protein secondary structure can be made for comparison purposes. The arguments for and against multi-peak modeling/fitting procedures for relative estimation of protein structure are discussed. By using the PCA and CLA analyses, plant molecular structure can be qualitatively separated into groups, statistically, even though the spectral assignments are not known. The synchrotron-based technology provides a new approach for protein structure research in biological tissues at ultraspatial resolutions.
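
    A minimal sketch of the Gaussian multi-component peak-modeling step on a synthetic amide I band; the component centers are typical literature assignments, used here only as illustrative starting guesses, not values from the review:

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussians(x, *p):
            # Sum of Gaussians: p = [amp1, cen1, wid1, amp2, cen2, wid2, ...]
            y = np.zeros_like(x)
            for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
                y += a * np.exp(-((x - c) / w) ** 2)
            return y

        # Synthetic amide I band (cm^-1) with three overlapping components
        x = np.linspace(1600, 1700, 400)
        y = gaussians(x, 1.0, 1655, 10, 0.6, 1630, 9, 0.4, 1678, 8)
        y += np.random.default_rng(1).normal(0, 0.01, x.size)

        p0 = [1.0, 1655, 10,   # alpha-helix (assumed assignment)
              0.5, 1630, 10,   # beta-sheet
              0.5, 1678, 10]   # turns
        popt, _ = curve_fit(gaussians, x, y, p0=p0)
        areas = [a * w * np.sqrt(np.pi)
                 for a, w in zip(popt[0::3], popt[2::3])]
        print("relative structure estimates:",
              (np.array(areas) / sum(areas)).round(2))

    Consistent with the review's caution, the fitted area fractions are relative estimates for comparison, not exact secondary-structure determinations.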

  1. Analytical Chemistry of Surfaces: Part II. Electron Spectroscopy.

    ERIC Educational Resources Information Center

    Hercules, David M.; Hercules, Shirley H.

    1984-01-01

    Discusses two surface techniques: X-ray photoelectron spectroscopy (ESCA) and Auger electron spectroscopy (AES). Focuses on fundamental aspects of each technique, important features of instrumentation, and some examples of how ESCA and AES have been applied to analytical surface problems. (JN)

  2. Eliciting expert opinion for economic models: an applied example.

    PubMed

    Leal, José; Wordsworth, Sarah; Legood, Rosa; Blair, Edward

    2007-01-01

    Expert opinion is considered as a legitimate source of information for decision-analytic modeling where required data are unavailable. Our objective was to develop a practical computer-based tool for eliciting expert opinion about the shape of the uncertainty distribution around individual model parameters. We first developed a prepilot survey with departmental colleagues to test a number of alternative approaches to eliciting opinions on the shape of the uncertainty distribution around individual parameters. This information was used to develop a survey instrument for an applied clinical example. This involved eliciting opinions from experts to inform a number of parameters involving Bernoulli processes in an economic model evaluating DNA testing for families with a genetic disease, hypertrophic cardiomyopathy. The experts were cardiologists, clinical geneticists, and laboratory scientists working with cardiomyopathy patient populations and DNA testing. Our initial prepilot work suggested that the more complex elicitation techniques advocated in the literature were difficult to use in practice. In contrast, our approach achieved a reasonable response rate (50%), provided logical answers, and was generally rated as easy to use by respondents. The computer software user interface permitted graphical feedback throughout the elicitation process. The distributions obtained were incorporated into the model, enabling the use of probabilistic sensitivity analysis. There is clearly a gap in the literature between theoretical elicitation techniques and tools that can be used in applied decision-analytic models. The results of this methodological study are potentially valuable for other decision analysts deriving expert opinion.
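
    A common concrete realization of this kind of elicitation for Bernoulli-process parameters is to fit a Beta distribution to an expert's stated quantiles. A minimal sketch with invented judgments (not values from the study):

        from scipy import optimize, stats

        # Elicited: "median around 0.15, almost certainly below 0.30"
        target_median, target_p95 = 0.15, 0.30

        def gap(params):
            a, b = params
            d = stats.beta(a, b)
            return [d.ppf(0.50) - target_median,
                    d.ppf(0.95) - target_p95]

        a, b = optimize.fsolve(gap, x0=[2.0, 10.0])
        print(f"Beta({a:.2f}, {b:.2f}), mean = {a / (a + b):.3f}")

    The fitted Beta can then be fed directly into probabilistic sensitivity analysis, as the abstract describes.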

  3. Modeling the Soft Geometry of Biological Membranes

    NASA Astrophysics Data System (ADS)

    Daly, K.

    This dissertation presents work done applying the techniques of physics to biological systems. The difference in length scale between the thickness of the phospholipid bilayer and the overall size of a biological cell allows the bilayer to be modeled elastically as a thin sheet. The Helfrich free energy is extended and applied to models representing various biological systems, in order to find quasi-equilibrium states as well as transitions between states. Morphologies are approximated as axially symmetric. Stable morphologies are determined analytically and through the use of computer simulation. The simple morphologies examined analytically give a model for the pearling transition seen in growing biological cells. An analytic model of cellular bulging in gram-negative bacteria predicts a critical pore radius for bulging of 20 nanometers. This model is extended to the membrane dynamics of human red blood cells, predicting three morphologic phases which are seen in vivo. A computer simulation was developed to study more complex morphologies with models representing different bilayer compositions. Single and multi-component bilayer models reproduce morphologies previously predicted by Seifert. A mean field model representing the intrinsic curvature of proteins coupling to membrane curvature is used to explore the stability of the particular morphology of rod outer segment cells. The process of pore formation and expansion in cell-cell fusion is not well understood. Simulation of the pore created in cell-cell fusion led to the finding of a minimal pore radius required for pore expansion, suggesting pores formed in nature are formed with a minimum size.

  4. Rich Analysis and Rational Models: Inferring Individual Behavior from Infant Looking Data

    ERIC Educational Resources Information Center

    Piantadosi, Steven T.; Kidd, Celeste; Aslin, Richard

    2014-01-01

    Studies of infant looking times over the past 50 years have provided profound insights about cognitive development, but their dependent measures and analytic techniques are quite limited. In the context of infants' attention to discrete sequential events, we show how a Bayesian data analysis approach can be combined with a rational cognitive…

  5. Organic chemistry in space

    NASA Technical Reports Server (NTRS)

    Johnson, R. D.

    1977-01-01

    Organic cosmochemistry, organic materials in space exploration, and biochemistry of man in space are briefly surveyed. A model of Jupiter's atmosphere is considered, and the search for organic molecules in the solar system and in interstellar space is discussed. Materials and analytical techniques relevant to space exploration are indicated, and the blood and urine analyses performed on Skylab are described.

  6. Nondestructive assessment of timber bridges using a vibration-based method

    Treesearch

    Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  7. Towards a Context-Aware Proactive Decision Support Framework

    DTIC Science & Technology

    2013-11-15

    initiative that has developed text analytic technology that crosses the semantic gap into the area of event recognition and representation. The ... recognizing operational context, and techniques for recognizing context shift. Additional research areas include: • adequately capturing users ... • the Universal Interaction Context Ontology [12] might serve as a foundation • instantiating formal models of decision making based on information seeking

  8. Students' Commitment, Engagement and Locus of Control as Predictor of Academic Achievement at Higher Education Level

    ERIC Educational Resources Information Center

    Sarwar, Muhammad; Ashrafi, Ghulam Muhammad

    2014-01-01

    The purpose of this study was to analyze students' commitment, engagement and locus of control as predictors of academic achievement at the higher education level. We used an analytical model and a conclusive research approach to conduct the study, and the survey method for data collection. We selected 369 students using a multistage sampling technique from three…

  9. Method for improving accuracy in full evaporation headspace analysis.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-05-01

    We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy in conventional full evaporation headspace analysis. The results (using ethanol solution as the model sample) showed that the present technique is effective in minimizing this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
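
    The quantitative backbone of multiple headspace extraction is that successive extractions of the same vial give peak areas that decay geometrically, so the exhaustive (total) response can be recovered from a few extractions. In standard MHE notation (illustrative, not taken from the paper):

        A_i = A_1 \, e^{-q(i-1)}, \qquad
        \sum_{i=1}^{\infty} A_i = \frac{A_1}{1 - e^{-q}}

    where q is estimated as the slope of a linear fit of ln A_i against (i - 1); pairing this with full evaporation, which transfers nearly all of the analyte into the vapor phase, is what the proposed method exploits.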

  10. A modal parameter extraction procedure applicable to linear time-invariant dynamic systems

    NASA Technical Reports Server (NTRS)

    Kurdila, A. J.; Craig, R. R., Jr.

    1985-01-01

    Modal analysis has emerged as a valuable tool in many phases of the engineering design process. Complex vibration and acoustic problems in new designs can often be remedied through use of the method. Moreover, the technique has been used to enhance the conceptual understanding of structures by serving to verify analytical models. A new modal parameter estimation procedure is presented. The technique is applicable to linear, time-invariant systems and accommodates multiple input excitations. In order to provide a background for the derivation of the method, some modal parameter extraction procedures currently in use are described. Key features implemented in the new technique are elaborated upon.

  11. Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey

    State-of-the-art visual analytics models and frameworks mostly assume a static snapshot of the data, while in many cases it is a stream with constant updates and changes. Exploration of streaming data poses unique challenges as machine-level computations and abstractions need to be synchronized with the visual representation of the data and the temporally evolving human insights. In the visual analytics literature, we lack a thorough characterization of streaming data and analysis of the challenges associated with task abstraction, visualization design, and adaptation of the role of human-in-the-loop for exploration of data streams. We aim to fill this gap by conducting a survey of the state-of-the-art in visual analytics of streaming data for systematically describing the contributions and shortcomings of current techniques and analyzing the research gaps that need to be addressed in the future. Our contributions are: i) problem characterization for identifying challenges that are unique to streaming data analysis tasks, ii) a survey and analysis of the state-of-the-art in streaming data visualization research with a focus on the visualization design space for dynamic data and the role of the human-in-the-loop, and iii) reflections on the design trade-offs for streaming visual analytics techniques and their practical applicability in real-world application scenarios.

  12. Uncovering category specificity of genital sexual arousal in women: The critical role of analytic technique.

    PubMed

    Pulverman, Carey S; Hixon, J Gregory; Meston, Cindy M

    2015-10-01

    Based on analytic techniques that collapse data into a single average value, it has been reported that women lack category specificity and show genital sexual arousal to a large range of sexual stimuli including those that both match and do not match their self-reported sexual interests. These findings may be a methodological artifact of the way in which data are analyzed. This study examined whether using an analytic technique that models data over time would yield different results. Across two studies, heterosexual (N = 19) and lesbian (N = 14) women viewed erotic films featuring heterosexual, lesbian, and gay male couples, respectively, as their physiological sexual arousal was assessed with vaginal photoplethysmography. Data analysis with traditional methods comparing average genital arousal between films failed to detect specificity of genital arousal for either group. When data were analyzed with smoothing regression splines and a within-subjects approach, both heterosexual and lesbian women demonstrated different patterns of genital sexual arousal to the different types of erotic films, suggesting that sophisticated statistical techniques may be necessary to more fully understand women's genital sexual arousal response. Heterosexual women showed category-specific genital sexual arousal. Lesbian women showed higher arousal to the heterosexual film than the other films. However, within subjects, lesbian women showed significantly different arousal responses suggesting that lesbian women's genital arousal discriminates between different categories of stimuli at the individual level. Implications for the future use of vaginal photoplethysmography as a diagnostic tool of sexual preferences in clinical and forensic settings are discussed. © 2015 Society for Psychophysiological Research.
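
    A minimal sketch of why the analytic choice matters, using invented single-subject traces and scipy smoothing splines in place of the authors' regression-spline models:

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(2)
        t = np.linspace(0, 120, 240)          # seconds into the film

        # Two hypothetical responses with similar averages but different
        # time courses; a collapsed mean would treat them as equivalent.
        trace_a = 0.4 * np.sin(t / 25.0) + rng.normal(0, 0.2, t.size)
        trace_b = 0.4 * np.sin(t / 12.5) + rng.normal(0, 0.2, t.size)
        print(f"means: {trace_a.mean():.2f} vs {trace_b.mean():.2f}")

        # Smoothing splines recover the distinct within-subject patterns
        sa = UnivariateSpline(t, trace_a, s=t.size * 0.04)
        sb = UnivariateSpline(t, trace_b, s=t.size * 0.04)
        print(f"mean squared divergence of smoothed curves: "
              f"{np.mean((sa(t) - sb(t)) ** 2):.3f}")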

  13. Tannin quantification in red grapes and wine: comparison of polysaccharide- and protein-based tannin precipitation techniques and their ability to model wine astringency.

    PubMed

    Mercurio, Meagan D; Smith, Paul A

    2008-07-23

    Quantification of red grape tannin and red wine tannin using the methyl cellulose precipitable (MCP) tannin assay and the Adams-Harbertson (A-H) tannin assay were investigated. The study allowed for direct comparison between the repeatability of the assays and for the assessment of other practical considerations such as time efficiency, ease of practice, and throughput, and assessed the relationships between tannin quantification by both analytical techniques. A strong correlation between the two analytical techniques was observed when quantifying grape tannin (r² = 0.96), and a good correlation was observed for wine tannins (r² = 0.80). However, significant differences in the reported tannin values for the analytical techniques were observed (approximately 3-fold). To explore potential reasons for the difference, investigations were undertaken to determine how several variables influenced the final tannin quantification for both assays. These variables included differences in the amount of tannin precipitated (monitored by HPLC), assay matrix variables, and the monomers used to report the final values. The relationship between tannin quantification and wine astringency was assessed for the MCP and A-H tannin assays, and both showed strong correlations with perceived wine astringency (r² = 0.83 and r² = 0.90, respectively). The work described here gives guidance to those wanting to understand how the values between the two assays relate; however, a conclusive explanation for the differences in values between the MCP and A-H tannin assays remains unclear, and further work in this area is required.

  14. Crack Development in Cross-Ply Laminates Under Uniaxial Tension

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, Andrew L.

    1996-01-01

    This study addresses matrix-dominated failures in carbon fiber/polymer matrix composite laminates in a cross-ply lay-up. The events of interest are intralaminar fracture in the form of transverse cracks in the 90° plies and longitudinal splitting in the 0° plies, and interlaminar fracture in the form of 0/90 delamination. These events were observed using various nondestructive evaluation (NDE) techniques during static tensile tests. Acoustic emission (AE), X-radiography, and edge-view microscopy were the principal ones utilized in a real-time environment. A comparison of the NDE results with an analytical model based on the classical linear fracture mechanics concept of strain energy release rate as a criterion for crack growth was performed. The virtual crack closure theory was incorporated with a finite element model to generate strain energy release rate curves for the analytical case. Celion carbon fiber/polyimide matrix (G30-500/PMR-15) was the material tested, with cross-ply lay-ups of (0₂/90₆)s and (0₄/90₄)s. The test specimens contained thermally induced cracks caused by the high-temperature processing. The analytical model was updated to compensate for the initial damage and to study further accumulation by taking into account the crack interactions. By correlating the experimental and analytical data, the critical energy release rates were found for the observable events of interest.

  15. Evaluating sustainable energy harvesting systems for human implantable sensors

    NASA Astrophysics Data System (ADS)

    AL-Oqla, Faris M.; Omar, Amjad A.; Fares, Osama

    2018-03-01

    Achieving the most appropriate energy-harvesting technique for human implantable sensors is still challenging for the industry, where keen decisions have to be made. Moreover, the available polymeric-based composite materials are offering plentiful renewable applications that can help sustainable development, being useful for energy-harvesting systems such as photovoltaic, piezoelectric and thermoelectric devices as well as other energy storage systems. This work presents an expert-based model capable of better evaluating and examining the various available renewable energy-harvesting techniques in urban surroundings subject to various technical and economic, often conflicting, criteria. Wide evaluation criteria have been adopted in the proposed model after examining their suitability as well as ensuring the expediency and reliability of the model through worldwide experts' feedback. The model includes establishing an analytic hierarchy structure with 12 simultaneous conflicting factors to establish a systematic road map for designers to better assess such techniques for human implantable medical sensors. The energy-harvesting techniques considered were limited to Wireless, Thermoelectric, Infrared Radiator, Piezoelectric, Magnetic Induction and Electrostatic Energy Harvesters. Results demonstrated that the best decision was in favour of wireless-harvesting technology for the medical sensors, as it is preferred under most of the evaluation criteria considered in the model.
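
    The analytic hierarchy step described above reduces to extracting priority weights from a pairwise-comparison matrix via its principal eigenvector. A minimal sketch with an invented 3 x 3 matrix (the paper's 12-factor hierarchy works identically):

        import numpy as np

        # Pairwise comparisons on the Saaty 1-9 scale (invented judgments)
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                       # priority weights

        # Saaty consistency check: CI = (lambda_max - n)/(n - 1), CR = CI/RI
        n, RI = 3, 0.58                    # RI = random index for n = 3
        CI = (eigvals[k].real - n) / (n - 1)
        print("weights:", w.round(3), " CR =", round(CI / RI, 3))

    A consistency ratio (CR) below about 0.1 is the usual threshold for accepting the expert judgments as coherent.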

  16. Analytical Modelling of the Effects of Different Gas Turbine Cooling Techniques on Engine Performance

    NASA Astrophysics Data System (ADS)

    Uysal, Selcuk Can

    In this research, MATLAB Simulink® was used to develop a cooled engine model for industrial gas turbines and aero-engines. The model consists of uncooled on-design, mean-line turbomachinery design and a cooled off-design analysis in order to evaluate the engine performance parameters by using operating conditions, polytropic efficiencies, material information and cooling system details. The cooling analysis algorithm involves a 2nd law analysis to calculate losses from the cooling technique applied. The model is used in a sensitivity analysis that evaluates the impacts of variations in metal Biot number, thermal barrier coating Biot number, film cooling effectiveness, internal cooling effectiveness and maximum allowable blade temperature on main engine performance parameters of aero and industrial gas turbine engines. The model is subsequently used to analyze the relative performance impact of employing Anti-Vortex Film Cooling holes (AVH) by means of data obtained for these holes by Detached Eddy Simulation-CFD Techniques that are valid for engine-like turbulence intensity conditions. Cooled blade configurations with AVH and other different external cooling techniques were used in a performance comparison study. (Abstract shortened by ProQuest.)

  17. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.

  18. An Analysis Model for Water Cone Subsidence in Bottom Water Drive Reservoirs

    NASA Astrophysics Data System (ADS)

    Wang, Jianjun; Xu, Hui; Wu, Shucheng; Yang, Chao; Kong, lingxiao; Zeng, Baoquan; Xu, Haixia; Qu, Tailai

    2017-12-01

    Water coning in bottom water drive reservoirs, which results in earlier water breakthrough, rapid increase in water cut, and low recovery, has drawn tremendous attention in the petroleum engineering field. As a simple and effective method to inhibit bottom water coning, shut-in coning control is usually preferred in the oilfield to control the water cone and thereby enhance economic performance. However, most water coning research has investigated the behaviour of the cone as it grows; reported studies of water cone subsidence are very scarce. The goal of this work is to present an analytical model for water cone subsidence, analyzing how the cone subsides when the well is shut in. Based on the Dupuit critical oil production rate formula, an analytical model is developed to estimate the initial water cone shape at the point of critical drawdown. Then, with the initial cone shape equation, we propose an analysis model for water cone subsidence in bottom water drive reservoirs. Model analysis and several sensitivity studies are conducted. This work presents an accurate and fast analytical model of water cone subsidence in bottom water drive reservoirs. Given recent interest in the development of bottom water drive reservoirs, our approach provides a promising technique for better understanding the subsidence of the water cone.
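
    For orientation, Dupuit-type coning analysis balances the gravity head that holds the cone down against the drawdown that pulls it up. One commonly quoted form of the critical oil rate (shown only to indicate the general shape of such formulas, with Δρ the water-oil density difference, h the oil-column thickness, h_p the perforated interval, and r_e, r_w the drainage and wellbore radii) is:

        q_c \;=\; \frac{\pi \, k_o \, \Delta\rho \, g \, \left( h^2 - h_p^2 \right)}{\mu_o \, \ln\!\left( r_e / r_w \right)}

    Below this rate the cone reaches a stable height; the subsidence model above describes how that cone relaxes once the well is shut in.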

  19. Pre-concentration technique for reduction in "Analytical instrument requirement and analysis"

    NASA Astrophysics Data System (ADS)

    Pal, Sangita; Singha, Mousumi; Meena, Sher Singh

    2018-04-01

    The limited availability of analytical instruments for the methodical detection of known and unknown effluents imposes a serious hindrance on qualification and quantification. Analytical instruments such as elemental analyzers, ICP-MS, ICP-AES, EDXRF, ion chromatography, and electro-analytical instruments are not only expensive but also time consuming, require maintenance, and need replacement of damaged essential parts, all of which are of serious concern. Moreover, for field studies and instant detection, installation of these instruments is not convenient at every location. Therefore, a pre-concentration technique for metal ions, especially for lean streams, is elaborated and justified. Chelation/sequestration is the key to this immobilization technique, which is simple, user friendly, highly effective, inexpensive, and time efficient; easy transport (a 10 g - 20 g vial) to the experimental field/site has been demonstrated.

  20. Development of a carbon-nanoparticle-coated stirrer for stir bar sorptive extraction by a simple carbon deposition in flame.

    PubMed

    Feng, Juanjuan; Sun, Min; Bu, Yanan; Luo, Chuannan

    2016-03-01

    Stir bar sorptive extraction is an environmentally friendly microextraction technique based on a stir bar with various sorbents. A commercial stirrer is a good support, but it has not been used in stir bar sorptive extraction due to difficult modification. A stirrer was modified with carbon nanoparticles by a simple carbon deposition process in flame and characterized by scanning electron microscopy and energy-dispersive X-ray spectrometry. A three-dimensional porous coating was formed with carbon nanoparticles. In combination with high-performance liquid chromatography, the stir bar was evaluated using five polycyclic aromatic hydrocarbons as model analytes. Conditions including extraction time and temperature, ionic strength, and desorption solvent were investigated by a factor-by-factor optimization method. The established method exhibited good linearity (0.01-10 μg/L) and low limits of quantification (0.01 μg/L). It was applied to detect the model analytes in environmental water samples. No analyte was detected in river water, and five analytes were quantified in rain water. The recoveries of the five analytes in the two samples spiked at 2 μg/L were in the range of 92.2-106% and 93.4-108%, respectively. The results indicated that the carbon-nanoparticle-coated stirrer was an efficient stir bar for extraction analysis of some polycyclic aromatic hydrocarbons. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. The World Spatiotemporal Analytics and Mapping Project (WSTAMP): Discovering, Exploring, and Mapping Spatiotemporal Patterns Across Heterogenous Space-Time Data

    NASA Astrophysics Data System (ADS)

    Morton, A.; Stewart, R.; Held, E.; Piburn, J.; Allen, M. R.; McManamay, R.; Sanyal, J.; Sorokine, A.; Bhaduri, B. L.

    2017-12-01

    Spatiotemporal (ST) analytics applied to major spatio-temporal data sources such as USGS, NOAA, the World Bank and the World Health Organization has tremendous value in shedding light on the evolution of physical, cultural, and geopolitical landscapes on a local and global level. Especially powerful is the integration of these physical and cultural datasets across multiple and disparate formats, facilitating new interdisciplinary analytics and insights. Realizing this potential first requires an ST data model that addresses challenges in properly merging data from multiple authors, with evolving ontological perspectives, semantic differences, changing attributes, and content that is textual, numeric, categorical, and hierarchical. Equally challenging is the development of analytical and visualization approaches that provide a serious exploration of this integrated data while remaining accessible to practitioners with varied backgrounds. The WSTAMP project at the Oak Ridge National Laboratory has yielded two major results in addressing these challenges: 1) development of the WSTAMP database, a significant advance in ST data modeling that integrates 16000+ attributes covering 200+ countries for over 50 years from over 30 major sources, and 2) a novel online ST exploratory and analysis tool providing an array of modern statistical and visualization techniques for analyzing these data temporally, spatially, and spatiotemporally under a standard analytic workflow. We report on these advances, provide an illustrative case study, and describe how others may freely access the tool.

  2. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
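
    The underlying design machinery is the linear-quadratic regulator: closed-loop eigenvalues follow from the algebraic Riccati equation as the cost-function weights vary. A minimal sketch on an invented two-state system (not the dissertation's aeroelastic model), sweeping one state weight to show how it moves the poles:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0],
                      [-4.0, -0.8]])       # toy short-period-like dynamics
        B = np.array([[0.0], [1.0]])

        for q in (0.1, 1.0, 10.0):         # sweep a single state weight
            Q = np.diag([q, 0.1])
            R = np.array([[1.0]])
            P = solve_continuous_are(A, B, Q, R)
            K = np.linalg.solve(R, B.T @ P)          # optimal gain
            poles = np.linalg.eigvals(A - B @ K)
            print(f"q = {q:5.1f} -> closed-loop poles {np.round(poles, 3)}")

    The analytical expressions the dissertation derives aim to capture exactly this weight-to-eigenvalue mapping in closed form rather than numerically.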

  3. Visualization Techniques for Computer Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Justin M; Steed, Chad A; Patton, Robert M

    2011-01-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  4. A new method for the determination of short-chain fatty acids from the aliphatic series in wines by headspace solid-phase microextraction-gas chromatography-ion trap mass spectrometry.

    PubMed

    Olivero, Sergio J Pérez; Trujillo, Juan P Pérez

    2011-06-24

    A new analytical method for the determination of nine short-chain fatty acids (acetic, propionic, isobutyric, butyric, isovaleric, 2-methylbutyric, hexanoic, octanoic and decanoic acids) in wines using the automated HS/SPME-GC-ITMS technique was developed and optimised. Five different SPME fibers were tested, and the influence of different factors such as temperature and time of extraction, temperature and time of desorption, pH, ionic strength, tannins, anthocyanins, SO₂, sugar and ethanol content was studied and optimised using model solutions. Some analytes showed a matrix effect, so a study of recoveries was performed. The proposed HS/SPME-GC-ITMS method, which covers the concentration range of the different analytes in wines, showed wide linear ranges, repeatability and reproducibility values lower than 4.0% RSD, and detection limits between 3 and 257 μg L⁻¹, lower than the olfactory thresholds. The optimised method is a suitable technique for the quantitative analysis of short-chain fatty acids from the aliphatic series in real samples of white, rosé and red wines. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Comparative Characterization of Crofelemer Samples Using Data Mining and Machine Learning Approaches With Analytical Stability Data Sets.

    PubMed

    Nariya, Maulik K; Kim, Jae Hyun; Xiong, Jian; Kleindl, Peter A; Hewarathna, Asha; Fisher, Adam C; Joshi, Sangeeta B; Schöneich, Christian; Forrest, M Laird; Middaugh, C Russell; Volkin, David B; Deeds, Eric J

    2017-11-01

    There is growing interest in generating physicochemical and biological analytical data sets to compare complex mixture drugs, for example, products from different manufacturers. In this work, we compare various crofelemer samples prepared from a single lot by filtration with varying molecular weight cutoffs combined with incubation for different times at different temperatures. The 2 preceding articles describe experimental data sets generated from analytical characterization of fractionated and degraded crofelemer samples. In this work, we use data mining techniques such as principal component analysis and mutual information scores to help visualize the data and determine discriminatory regions within these large data sets. The mutual information score identifies chemical signatures that differentiate crofelemer samples. These signatures, in many cases, would likely be missed by traditional data analysis tools. We also found that supervised learning classifiers robustly discriminate samples with around 99% classification accuracy, indicating that mathematical models of these physicochemical data sets are capable of identifying even subtle differences in crofelemer samples. Data mining and machine learning techniques can thus identify fingerprint-type attributes of complex mixture drugs that may be used for comparative characterization of products. Copyright © 2017 American Pharmacists Association®. All rights reserved.
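
    A minimal sketch of the analysis pipeline described above, on synthetic stand-in data (sklearn is used here for illustration, not as the authors' tooling):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Stand-in data: rows = samples, columns = spectral features,
        # y = preparation group; a subtle discriminatory region is planted.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(120, 300))
        y = rng.integers(0, 2, size=120)
        X[y == 1, 40:45] += 0.8

        # Mutual information highlights which features separate the groups
        mi = mutual_info_classif(X, y, random_state=0)
        print("top discriminatory features:", np.argsort(mi)[-5:])

        # PCA for visualization; a supervised classifier for group assignment
        z = PCA(n_components=2).fit_transform(X)
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              X, y, cv=5).mean()
        print(f"cross-validated accuracy: {acc:.2f}")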

  6. Method development and qualification of capillary zone electrophoresis for investigation of therapeutic monoclonal antibody quality.

    PubMed

    Suba, Dávid; Urbányi, Zoltán; Salgó, András

    2016-10-01

    Capillary electrophoresis techniques are widely used in analytical biotechnology. Different electrophoretic techniques are very adequate tools to monitor size and charge heterogeneities of protein drugs. Method descriptions and development studies of capillary zone electrophoresis (CZE) have been described in the literature. Most of them are performed based on the classical one-factor-at-a-time (OFAT) approach. In this study a very simple method development approach is described for capillary zone electrophoresis: a "two-phase-four-step" approach is introduced which allows a rapid, iterative method development process and can be a good platform for CZE method development. In every step the current analytical target profile and an appropriate control strategy were established to monitor the current stage of development. A very good platform was established to investigate intact and digested protein samples. A commercially available monoclonal antibody was chosen as the model protein for the method development study. The CZE method was qualified after the development process and the results are presented. The analytical system stability was represented by the calculated RSD% values of area percentage and migration time of the selected peaks (<0.8% and <5%) during the intermediate precision investigation. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Detection and characterization of corrosion of bridge cables by time domain reflectometry

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Hunsperger, Robert G.; Folliard, Kevin; Chajes, Michael J.; Barot, Jignesh; Jhaveri, Darshan; Kunz, Eric

    1999-02-01

    In this paper, we develop and demonstrate a nondestructive evaluation technique for corrosion detection of embedded or encased steel cables. This technique utilizes time domain reflectometry (TDR), which has been traditionally used to detect electrical discontinuities in transmission lines. By applying a sensor wire along with the bridge cable, we can model the cable as an asymmetric, twin-conductor transmission line. Physical defects of the bridge cable will change the electromagnetic properties of the line and can be detected by TDR. Furthermore, different types of defects can be modeled analytically, and identified using TDR. TDR measurement results from several fabricated bridge cable sections with built-in defects are reported.
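
    The two basic TDR relations are the reflection coefficient, which indicates the kind of impedance change at a defect, and the round-trip delay, which locates it. A minimal sketch with invented cable numbers:

        # Reflection coefficient at an impedance discontinuity
        Z0 = 50.0             # characteristic impedance of intact line (ohms)
        ZL = 65.0             # local impedance at the defect (assumed)
        gamma = (ZL - Z0) / (ZL + Z0)

        # Locating the defect from the reflection's round-trip delay
        c = 3.0e8             # free-space speed of light (m/s)
        vf = 0.7              # velocity factor of the line (assumed)
        t_rt = 95e-9          # launch-to-reflection delay read off the trace

        distance = vf * c * t_rt / 2.0
        print(f"gamma = {gamma:+.3f}, defect at ~{distance:.1f} m")

    A positive gamma indicates a locally increased impedance, a negative one a decreased impedance; the analytically modeled defect signatures in the paper are matched against such features on the measured trace.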

  8. A radiative transfer model for remote sensing of laser induced fluorescence of phytoplankton in non-homogeneous turbid water

    NASA Technical Reports Server (NTRS)

    Venable, D. D.

    1983-01-01

    A semi-analytic Monte Carlo simulation methodology (SALMON) was discussed. This simulation technique is particularly well suited for addressing fundamental radiative transfer problems in oceanographic LIDAR (optical radar), and also provides a framework for investigating the effects of environmental factors on LIDAR system performance. The simulation model was extended for airborne laser fluorosensors to allow for inhomogeneities in the vertical distribution of constituents in clear sea water. Results of the simulations for linearly varying step concentrations of chlorophyll are presented. The SALMON technique was also employed to determine how the LIDAR signals from an inhomogeneous media differ from those from homogeneous media.

  9. Isotope-ratio-monitoring gas chromatography-mass spectrometry: methods for isotopic calibration

    NASA Technical Reports Server (NTRS)

    Merritt, D. A.; Brand, W. A.; Hayes, J. M.

    1994-01-01

    In trial analyses of a series of n-alkanes, precise determinations of ¹³C contents were based on isotopic standards introduced by five different techniques and results were compared. Specifically, organic-compound standards were coinjected with the analytes and carried through chromatography and combustion with them; or CO2 was supplied from a conventional inlet and mixed with the analyte in the ion source, or CO2 was supplied from an auxiliary mixing volume and transmitted to the source without interruption of the analyte stream. Additionally, two techniques were investigated in which the analyte stream was diverted and CO2 standards were placed on a near-zero background. All methods provided accurate results. Where applicable, methods not involving interruption of the analyte stream provided the highest performance (σ = 0.00006 at.% ¹³C or 0.06‰ for 250 pmol C as CO2 reaching the ion source), but great care was required. Techniques involving diversion of the analyte stream were immune to interference from coeluting sample components and still provided high precision (0.0001 ≤ σ ≤ 0.0002 at.% or 0.1 ≤ σ ≤ 0.2‰).
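
    All five calibration routes ultimately anchor the measured ion-current ratio to the international scale through delta notation. A minimal sketch of that conversion (the sample and standard ratios are invented; the VPDB ratio is the standard literature value):

        R_VPDB = 0.0111802          # 13C/12C of the VPDB reference

        def delta13C(R_sample, R_std_measured, delta_std_true=0.0):
            """Per-mil delta of a sample, anchored by a coinjected or
            inlet standard whose true delta is known."""
            R_std_true = R_VPDB * (1.0 + delta_std_true / 1000.0)
            # correct the sample ratio by the standard's measured/true ratio
            R_corr = R_sample * R_std_true / R_std_measured
            return (R_corr / R_VPDB - 1.0) * 1000.0

        print(f"{delta13C(0.011102, 0.011175, -0.5):+.2f} per mil")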

  10. Analytical technique characterizes all trace contaminants in water

    NASA Technical Reports Server (NTRS)

    Foster, J. N.; Lysyj, I.; Nelson, K. H.

    1967-01-01

    A properly programmed combination of advanced chemical and physical analytical techniques critically characterizes all trace contaminants in both the potable and waste water from the Apollo Command Module. This methodology can also be applied to the investigation of the source of water pollution.

  11. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing using modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
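
    The per-surface work in such a ray tracer is vector refraction by Snell's law, applied at each ocular interface. A minimal sketch (the indices are standard textbook values; a real run would evaluate this per mesh element, per ray, on the GPU):

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit direction d at a surface with unit normal n
            (pointing toward the incident side), going from index n1 to n2."""
            cos_i = -np.dot(n, d)
            eta = n1 / n2
            k = 1.0 - eta**2 * (1.0 - cos_i**2)
            if k < 0.0:
                return None                   # total internal reflection
            return eta * d + (eta * cos_i - np.sqrt(k)) * n

        # A ray hitting a flat stand-in for a corneal mesh element,
        # 15 degrees off the surface normal
        d = np.array([0.0, -0.2588, -0.9659])
        surf_n = np.array([0.0, 0.0, 1.0])
        print(refract(d, surf_n, 1.000, 1.376))   # air -> cornea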

  12. Effects of Vibrations on Metal Forming Process: Analytical Approach and Finite Element Simulations

    NASA Astrophysics Data System (ADS)

    Armaghan, Khan; Christophe, Giraud-Audine; Gabriel, Abba; Régis, Bigot

    2011-01-01

    Vibration-assisted forming is one of the most recent and beneficial techniques used to improve forming processes. The effects of vibration on metal forming processes can be attributed to two causes. First, the volume effect links the lowering of the yield stress with the influence of vibration on dislocation movement. Second, the surface effect explains the lowering of the effective coefficient of friction by periodic reduction of the contact area. This work is related to vibration-assisted forming processes in the viscoplastic domain. The impact of a change in vibration waveform has been analyzed. For this purpose, two analytical models have been developed for two different types of vibration waveforms (sinusoidal and triangular). These models were developed on the basis of the slice method, which is used to find the required forming force for the process. The final relationships show that applying a triangular waveform in the forming process is more beneficial than sinusoidal vibration in terms of reduced forming force. Finite Element Method (FEM) based simulations were performed using Forge2008® and these confirmed the results of the analytical models. The ratio of vibration speed to upper die speed is a critical factor in the reduction of the forming force.

  13. Development of a new semi-analytical model for cross-borehole flow experiments in fractured media

    USGS Publications Warehouse

    Roubinet, Delphine; Irving, James; Day-Lewis, Frederick D.

    2015-01-01

    Analysis of borehole flow logs is a valuable technique for identifying the presence of fractures in the subsurface and estimating properties such as fracture connectivity, transmissivity and storativity. However, such estimation requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. In this paper, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. In comparison with existing models, our approach presents major improvements in terms of computational expense and potential adaptation to a variety of fracture and experimental configurations. After derivation of the formulation, we demonstrate its application in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as for field-data analysis to investigate fracture connectivity and estimate fracture hydraulic properties. These applications provide important insights regarding (i) the strong sensitivity of fracture property estimates to the overall connectivity of the system; and (ii) the non-uniqueness of the corresponding inverse problem for realistic fracture configurations.

  14. Enhancement of PET Images

    NASA Astrophysics Data System (ADS)

    Davis, Paul B.; Abidi, Mongi A.

    1989-05-01

    PET is the only imaging modality that provides doctors with early analytic and quantitative biochemical assessment and precise localization of pathology. In PET images, boundary information as well as local pixel intensity are both crucial for manual and/or automated feature tracing, extraction, and identification. Unfortunately, the present PET technology does not provide the necessary image quality from which such precise analytic and quantitative measurements can be made. PET images suffer from significantly high levels of radial noise present in the form of streaks caused by the inexactness of the models used in image reconstruction. In this paper, our objective is to model PET noise and remove it without altering dominant features in the image. The ultimate goal here is to enhance these dominant features to allow for automatic computer interpretation and classification of PET images by developing techniques that take into consideration PET signal characteristics, data collection, and data reconstruction. We have modeled the noise streaks in PET images in both rectangular and polar representations and have shown both analytically and through computer simulation that they exhibit consistent mapping patterns. A class of filters was designed and applied successfully. Visual inspection of the filtered images shows clear enhancement over the original images.

  15. Quantitative Electron Probe Microanalysis: State of the Art

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvement of the electron column and X-ray spectrometer has resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by φ(ρz) algorithms that better model the depth distribution of primary X-ray production. Complex thin-film and particle geometries are treated using φ(ρz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using φ(ρz) algorithms, is used to critically evaluate accuracy issues; accuracy approaches 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are WDS detector design, characterization of calibration standards, and the need for a more complete treatment of the continuum X-ray fluorescence correction.

  16. Reactive silica transport in fractured porous media: Analytical solutions for a system of parallel fractures

    NASA Astrophysics Data System (ADS)

    Yang, Jianwen

    2012-04-01

    A general analytical solution is derived by using the Laplace transformation to describe transient reactive silica transport in a conceptualized 2-D system involving a set of parallel fractures embedded in an impermeable host rock matrix, taking into account hydrodynamic dispersion and advection of silica transport along the fractures, molecular diffusion from each fracture to the intervening rock matrix, and dissolution of quartz. A special analytical solution is also developed by ignoring the longitudinal hydrodynamic dispersion term while keeping the other conditions the same. The general and special solutions are in the form of a double infinite integral and a single infinite integral, respectively, and can be evaluated using the Gauss-Legendre quadrature technique. A simple criterion is developed to determine under what conditions the general analytical solution can be approximated by the special analytical solution. It is proved analytically that the general solution always lags behind the special solution, unless a dimensionless parameter is less than a critical value. Several illustrative calculations are undertaken to demonstrate the effect of fracture spacing, fracture aperture and fluid flow rate on silica transport. The analytical solutions developed here can serve as a benchmark to validate numerical models that simulate reactive mass transport in fractured porous media.
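
    Evaluating such solutions hinges on Gauss-Legendre quadrature over an unbounded domain. A minimal sketch of the standard approach, mapping [0, inf) onto a finite interval and checking against an integrand with a known value:

        import numpy as np

        def gl_semi_infinite(f, npts=64):
            """Integrate f over [0, inf) with Gauss-Legendre nodes via the
            substitution t = u/(1-u), which maps (0, 1) onto (0, inf)."""
            x, w = np.polynomial.legendre.leggauss(npts)   # nodes on [-1, 1]
            u = 0.5 * (x + 1.0)                            # map to (0, 1)
            wu = 0.5 * w
            t = u / (1.0 - u)
            jac = 1.0 / (1.0 - u) ** 2                     # dt/du
            return np.sum(wu * f(t) * jac)

        # integral of exp(-t) over [0, inf) equals 1
        print(gl_semi_infinite(lambda t: np.exp(-t)))      # ~1.0

    The double infinite integral of the general solution is handled the same way, with one such quadrature nested inside another.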

  17. Direct Analysis of Samples of Various Origin and Composition Using Specific Types of Mass Spectrometry.

    PubMed

    Byliński, Hubert; Gębicki, Jacek; Dymerski, Tomasz; Namieśnik, Jacek

    2017-07-04

    One of the major sources of error that occur during chemical analysis utilizing the more conventional and established analytical techniques is the possibility of losing part of the analytes during the sample preparation stage. Unfortunately, this sample preparation stage is required to improve analytical sensitivity and precision. Direct techniques have helped to shorten or even bypass the sample preparation stage; in this review, we comment on some of the new direct techniques that are mass-spectrometry based. The study presents information about measurement techniques using mass spectrometry which allow direct sample analysis, without sample preparation or with only limited pre-concentration steps. The MALDI-MS, PTR-MS, SIFT-MS and DESI-MS techniques are discussed. These solutions have numerous applications in different fields of human activity due to their interesting properties. The advantages and disadvantages of these techniques are presented. Trends in the development of direct analysis using the aforementioned techniques are also presented.

  18. High-Fidelity Generalization Method of Cells for Inelastic Periodic Multiphase Materials

    NASA Technical Reports Server (NTRS)

    Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.

    2002-01-01

    An extension of a recently-developed linear thermoelastic theory for multiphase periodic materials is presented which admits inelastic behavior of the constituent phases. The extended theory is capable of accurately estimating both the effective inelastic response of a periodic multiphase composite and the local stress and strain fields in the individual phases. The model is presently limited to materials characterized by constituent phases that are continuous in one direction, but arbitrarily distributed within the repeating unit cell which characterizes the material's periodic microstructure. The model's analytical framework is based on the homogenization technique for periodic media, but the method of solution for the local displacement and stress fields borrows concepts previously employed by the authors in constructing the higher-order theory for functionally graded materials, in contrast with the standard finite-element solution method typically used in conjunction with the homogenization technique. The present approach produces a closed-form macroscopic constitutive equation for a periodic multiphase material valid for both uniaxial and multiaxial loading. The model's predictive accuracy in generating both the effective inelastic stress-strain response and the local stress and inelastic strain fields is demonstrated by comparison with the results of an analytical inelastic solution for the axisymmetric and axial shear response of a unidirectional composite based on the concentric cylinder model, and with finite-element results for transverse loading.

  19. Exact solutions to the fermion propagator Schwinger-Dyson equation in Minkowski space with on-shell renormalization for quenched QED

    DOE PAGES

    Jia, Shaoyang; Pennington, M. R.

    2017-08-01

    With the introduction of a spectral representation, the Schwinger-Dyson equation (SDE) for the fermion propagator is formulated in Minkowski space in QED. After imposing the on-shell renormalization conditions, analytic solutions for the fermion propagator spectral functions are obtained in four dimensions with a renormalizable version of the Gauge Technique ansatz for the fermion-photon vertex in the quenched approximation in the Landau gauge. Despite the limitations of this model, having an explicit solution provides a guiding example of the fermion propagator with the correct analytic structure. The Padé approximation for the spectral functions is also investigated.

  20. Exact solutions to the fermion propagator Schwinger-Dyson equation in Minkowski space with on-shell renormalization for quenched QED

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, Shaoyang; Pennington, M. R.

    With the introduction of a spectral representation, the Schwinger-Dyson equation (SDE) for the fermion propagator is formulated in Minkowski space in QED. After imposing the on-shell renormalization conditions, analytic solutions for the fermion propagator spectral functions are obtained in four dimensions with a renormalizable version of the Gauge Technique ansatz for the fermion-photon vertex in the quenched approximation in the Landau gauge. Despite the limitations of this model, having an explicit solution provides a guiding example of the fermion propagator with the correct analytic structure. The Padé approximation for the spectral functions is also investigated.
