Sample records for based analytical model

  1. Multi-gas interaction modeling on decorated semiconductor interfaces: A novel Fermi distribution-based response isotherm and the inverse hard/soft acid/base concept

    NASA Astrophysics Data System (ADS)

    Laminack, William; Gole, James

    2015-12-01

    A unique MEMS/NEMS approach is presented for the modeling of a detection platform for mixed gas interactions. Mixed gas analytes interact with nanostructured decorating metal oxide island sites supported on a microporous silicon substrate. The Inverse Hard/Soft acid/base (IHSAB) concept is used to assess a diversity of conductometric responses for mixed gas interactions as a function of these nanostructured metal oxides. The analyte conductometric responses are well represented using a combination diffusion/absorption-based model for multi-gas interactions in which a newly developed response absorption isotherm, based on the Fermi distribution function, is applied. A further coupling of this model with the IHSAB concept describes the considerations involved in modeling multi-gas mixed analyte-interface and analyte-analyte interactions. Taking into account the molecular electronic interaction of the analytes both with each other and with an extrinsic semiconductor interface, we demonstrate how the presence of one gas can enhance or diminish the reversible interaction of a second gas with the extrinsic semiconductor interface. These concepts demonstrate important considerations for array-based formats for multi-gas sensing and their applications.
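
    A minimal numeric sketch of a Fermi-function-shaped response isotherm of the kind the abstract describes; the functional form, parameter names, and values below are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def fermi_response(pressure, r_max=1.0, p_half=1e-3, width=0.5):
        """Hypothetical Fermi-distribution-shaped response isotherm.

        pressure : analyte partial pressure (arbitrary units)
        r_max    : saturation conductometric response
        p_half   : pressure at half-saturation
        width    : broadening of the transition on a log-pressure scale
        """
        x = np.log(pressure / p_half) / width
        return r_max / (1.0 + np.exp(-x))

    print(fermi_response(np.logspace(-5, -1, 5)).round(3))
    ```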

  2. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes considerable energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
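
    A toy sketch of the genetic-algorithm idea for job ordering: a permutation of analytics jobs is evolved to minimize an estimated makespan on a small cluster. The job runtimes, fitness function, and GA operators below are illustrative placeholders, not the paper's scheduling model or estimation module.

    ```python
    import random

    # Hypothetical per-job runtime estimates (seconds) that a performance
    # estimation module might produce for a given cluster configuration.
    est_runtime = [120, 45, 300, 80, 200, 60]

    def makespan(order, slots=2):
        """Greedy assignment of jobs (in the given order) to parallel slots."""
        loads = [0.0] * slots
        for job in order:
            i = loads.index(min(loads))
            loads[i] += est_runtime[job]
        return max(loads)

    def evolve(pop_size=30, generations=50, mutation_rate=0.2):
        n = len(est_runtime)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=makespan)                 # lower makespan = fitter
            survivors = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n)
                child = a[:cut] + [j for j in b if j not in a[:cut]]  # order crossover
                if random.random() < mutation_rate:
                    i, j = random.sample(range(n), 2)
                    child[i], child[j] = child[j], child[i]           # swap mutation
                children.append(child)
            pop = survivors + children
        return min(pop, key=makespan)

    best = evolve()
    print(best, makespan(best))
    ```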

  3. Generalized analytical solutions to multispecies transport equations with scale-dependent dispersion coefficients subject to time-dependent boundary conditions

    NASA Astrophysics Data System (ADS)

    Chen, J. S.; Chiang, S. Y.; Liang, C. P.

    2017-12-01

    It is essential to develop multispecies transport analytical models based on a set of advection-dispersion equations (ADEs) coupled with sequential first-order decay reactions for the synchronous prediction of plume migrations of both parent and daughter species of decaying contaminants such as radionuclides, dissolved chlorinated organic compounds, pesticides and nitrogen. Although several analytical models for multispecies transport have already been reported, those currently available in the literature have primarily been derived based on ADEs with constant dispersion coefficients. However, there have been a number of studies demonstrating that the dispersion coefficients increase with the solute travel distance as a consequence of variation in the hydraulic properties of the porous media. This study presents novel analytical models for multispecies transport with distance-dependent dispersion coefficients. The correctness of the derived analytical models is confirmed by comparing them against the numerical models. Results show perfect agreement between the analytical and numerical models. Comparison of our new analytical model for multispecies transport with scale-dependent dispersion to an analytical model with constant dispersion is made to illustrate the effects of the dispersion coefficients on the multispecies transport of decaying contaminants.
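
    A minimal finite-difference sketch of the coupling the abstract describes: two species linked by sequential first-order decay in a 1-D advection-dispersion problem. It uses a constant dispersion coefficient and illustrative parameters, so it is a numerical stand-in for checking ideas, not the authors' scale-dependent analytical solution.

    ```python
    import numpy as np

    # Illustrative parameters (not from the paper): constant dispersion,
    # uniform velocity, and a parent -> daughter first-order decay chain.
    L, nx, dt, nt = 100.0, 200, 0.01, 5000
    dx = L / nx
    v, D = 1.0, 0.5          # velocity (m/d), dispersion coefficient (m^2/d)
    k1, k2 = 0.05, 0.01      # decay rates of parent and daughter (1/d)

    c1 = np.zeros(nx); c2 = np.zeros(nx)
    for _ in range(nt):
        c1[0], c2[0] = 1.0, 0.0                         # constant-concentration inlet
        adv1 = -v * np.gradient(c1, dx)
        dis1 = D * np.gradient(np.gradient(c1, dx), dx)
        adv2 = -v * np.gradient(c2, dx)
        dis2 = D * np.gradient(np.gradient(c2, dx), dx)
        c1 += dt * (adv1 + dis1 - k1 * c1)
        c2 += dt * (adv2 + dis2 + k1 * c1 - k2 * c2)    # daughter fed by parent decay

    print(c1[::40].round(3), c2[::40].round(3))
    ```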

  4. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    PubMed Central

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties, and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is a suitable alternative technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well-known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697
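
    A small sketch of the hybrid idea on a hypothetical one-degree-of-freedom plant: a known analytical feed-forward term plus a residual error model fitted to data. The plant, the features, and the ridge regressor are stand-ins chosen for illustration, not the robots or learning method used in the paper.

    ```python
    import numpy as np

    # Hypothetical 1-DOF plant: true torque = analytic rigid-body term + unmodeled friction.
    def true_torque(q, dq, ddq):
        return 2.0 * ddq + 9.81 * np.sin(q) + 0.8 * np.tanh(5.0 * dq)   # last term unmodeled

    def analytic_torque(q, dq, ddq):
        return 2.0 * ddq + 9.81 * np.sin(q)                             # known rigid-body model

    rng = np.random.default_rng(0)
    q, dq, ddq = (rng.uniform(-1, 1, 500) for _ in range(3))
    residual = true_torque(q, dq, ddq) - analytic_torque(q, dq, ddq)

    # Learn the error model on simple features (ridge regression on a small basis).
    X = np.column_stack([np.tanh(5.0 * dq), dq, np.ones_like(dq)])
    w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(3), X.T @ residual)

    def hybrid_torque(q, dq, ddq):
        feats = np.column_stack([np.tanh(5.0 * dq), dq, np.ones_like(dq)])
        return analytic_torque(q, dq, ddq) + feats @ w

    test = rng.uniform(-1, 1, (3, 100))
    err_analytic = np.abs(true_torque(*test) - analytic_torque(*test)).mean()
    err_hybrid = np.abs(true_torque(*test) - hybrid_torque(*test)).mean()
    print(f"analytic-only error: {err_analytic:.3f}, hybrid error: {err_hybrid:.3f}")
    ```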

  5. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    PubMed

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties, and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is a suitable alternative technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well-known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.

  6. S-2 stage 1/25 scale model base region thermal environment test. Volume 1: Test results, comparison with theory and flight data

    NASA Technical Reports Server (NTRS)

    Sadunas, J. A.; French, E. P.; Sexton, H.

    1973-01-01

    A 1/25 scale model S-2 stage base region thermal environment test is presented. Analytical results are included which reflect the effects of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full scale flight data, model test data, and analytical results. The report is prepared in two volumes. The description of analytical predictions and comparisons with flight data is presented. Tabulation of the test data is provided.

  7. An Active Learning Exercise for Introducing Agent-Based Modeling

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  8. Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.

    PubMed

    Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen

    2015-10-01

    Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.

  9. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

    PubMed

    Huhn, Carolin; Pyell, Ute

    2008-07-11

    It is investigated whether those relationships derived within an optimization scheme developed previously to optimize separations in micellar electrokinetic chromatography can be used to model effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.

  10. Let's Go Off the Grid: Subsurface Flow Modeling With Analytic Elements

    NASA Astrophysics Data System (ADS)

    Bakker, M.

    2017-12-01

    Subsurface flow modeling with analytic elements has the major advantage that no grid or time stepping is needed. Analytic element formulations exist for steady state and transient flow in layered aquifers and unsaturated flow in the vadose zone. Analytic element models are vector-based and consist of points, lines and curves that represent specific features in the subsurface. Recent advances allow for the simulation of partially penetrating wells and multi-aquifer wells, including skin effect and wellbore storage, horizontal wells of poly-line shape including skin effect, sharp changes in subsurface properties, and surface water features with leaky beds. Input files for analytic element models are simple, short and readable, and can easily be generated from, for example, GIS databases. Future plans include the incorporation of analytic elements in parts of grid-based models where additional detail is needed. This presentation will give an overview of advanced flow features that can be modeled, many of which are implemented in free and open-source software.
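
    A minimal grid-free sketch in the analytic element spirit: steady confined flow represented by superposing a uniform-flow element and two well elements, evaluated at arbitrary points. The transmissivity, well rates, and reference head are invented for illustration, and only the simplest elements are included.

    ```python
    import numpy as np

    # Steady confined flow: discharge potential from uniform flow plus pumping wells,
    # evaluated at arbitrary points without any grid (illustrative parameters).
    T = 100.0                      # transmissivity (m^2/d)
    wells = [((0.0, 0.0), 500.0),  # ((x, y), pumping rate Q in m^3/d)
             ((200.0, 50.0), 300.0)]
    U = 0.2                        # uniform-flow discharge per unit width (m^2/d), in +x

    def head(x, y, h_ref=20.0, ref_point=(-500.0, 0.0)):
        """Head via superposition of analytic elements, referenced to a known head."""
        def potential(px, py):
            phi = -U * px
            for (wx, wy), Q in wells:
                r = np.hypot(px - wx, py - wy)
                phi += Q / (2.0 * np.pi) * np.log(r)   # extraction well: head drops toward it
            return phi
        return h_ref + (potential(x, y) - potential(*ref_point)) / T

    print(round(head(100.0, 100.0), 3))
    ```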

  11. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  12. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  13. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.

  14. Predictive Analytical Model for Isolator Shock-Train Location in a Mach 2.2 Direct-Connect Supersonic Combustion Tunnel

    NASA Astrophysics Data System (ADS)

    Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel

    2016-11-01

    This study develops an analytical model for predicting the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D equations relating to isentropic or normal shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, and both the actual and predicted shock locations can be calculated. Three methods of finding the shock-train location are examined, one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytic model is less accurate than the pressure threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement. This makes this method potentially useful for unstart control applications.
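
    A quasi-1D sketch of the model's logic, assuming scipy is available: invert the normal-shock pressure ratio to get the required pre-shock Mach number from a single measured pressure rise, then locate the station in a hypothetical weakly converging effective duct where the isentropically decelerated flow reaches that Mach number. The area distribution and measured ratio are illustrative values, not the tunnel's.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    g = 1.4                           # ratio of specific heats
    M_in = 2.2                        # isolator inlet Mach number
    p_ratio_measured = 5.2            # measured static pressure ratio across the leading shock (illustrative)

    def area_ratio(M):                # isentropic A/A* as a function of Mach number
        return (1.0 / M) * ((2.0 / (g + 1)) * (1 + 0.5 * (g - 1) * M**2)) ** ((g + 1) / (2 * (g - 1)))

    def shock_pr(M):                  # static pressure ratio across a normal shock at Mach M
        return 1.0 + 2.0 * g / (g + 1) * (M**2 - 1.0)

    # 1) Invert the normal-shock relation: which pre-shock Mach gives the measured rise?
    M_pre = brentq(lambda M: shock_pr(M) - p_ratio_measured, 1.0001, M_in)

    # 2) Hypothetical weakly converging effective area A(x)/A_in from boundary-layer growth.
    x = np.linspace(0.0, 1.0, 200)
    A = 1.0 - 0.08 * x

    # 3) Local Mach number along the duct from the isentropic area relation (supersonic branch).
    A_star = 1.0 / area_ratio(M_in)
    M_local = np.array([brentq(lambda M: area_ratio(M) - Ai / A_star, 1.0001, 6.0) for Ai in A])

    # 4) The leading shock sits where the local Mach equals the required pre-shock Mach.
    x_shock = x[np.argmin(np.abs(M_local - M_pre))]
    print(f"pre-shock Mach {M_pre:.2f}, predicted shock location x/L = {x_shock:.2f}")
    ```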

  15. A structurally based analytic model of growth and biomass dynamics in single species stands of conifers

    Treesearch

    Robin J. Tausch

    2015-01-01

    A theoretically based analytic model of plant growth in single species conifer communities, based on the species fully occupying a site and fully using the site resources, is introduced. Model derivations result in a single equation that simultaneously describes changes both over different site conditions (or resources available) and over time for each variable for each...

  16. A physically based analytical spatial air temperature and humidity model

    Treesearch

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  17. Analytical performance specifications for changes in assay bias (Δbias) for data with logarithmic distributions as assessed by effects on reference change values.

    PubMed

    Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György

    2016-11-01

    Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficients of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, with the combination of Δbias and CVA based on log-Gaussian distributions of CVI expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion: The traditional and simple to apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
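
    A worked sketch of the two reference change value calculations being compared, using the usual two-sided z = 1.96 limit: the conventional Gaussian combination of CVA and CVI versus the asymmetric limits obtained when the variation is treated as log-Gaussian. The CV values are illustrative, not the plasma prolactin or glucose data of the paper.

    ```python
    import math

    def rcv_gaussian(cv_a, cv_i, z=1.96):
        """Conventional reference change value (%) assuming Gaussian CVs."""
        return math.sqrt(2) * z * math.sqrt(cv_a**2 + cv_i**2)

    def rcv_lognormal(cv_a, cv_i, z=1.96):
        """Asymmetric RCV limits (%) when the variation is treated as log-Gaussian."""
        sigma = math.sqrt(math.log(1 + (cv_a / 100) ** 2) + math.log(1 + (cv_i / 100) ** 2))
        up = (math.exp(math.sqrt(2) * z * sigma) - 1) * 100
        down = (1 - math.exp(-math.sqrt(2) * z * sigma)) * 100
        return up, down

    # Illustrative CVs in percent (analytical and within-subject), not taken from the paper.
    cv_a, cv_i = 2.5, 5.5
    print(rcv_gaussian(cv_a, cv_i))     # ~16.7 %
    print(rcv_lognormal(cv_a, cv_i))    # ~ (+18.2 %, -15.4 %)
    ```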

  18. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates including nonlinear circuits has not been studied. In this study, the equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. Analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
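
    A minimal single-mode sketch of the equivalent-circuit view: the patch acts as a current source proportional to the modal velocity, feeding its clamped capacitance in parallel with a resistive AC load. The parameter values are invented, backward coupling is neglected, and only the AC (resistive load) case is shown, so this is a simplified stand-in for the distributed-parameter model, not a reproduction of it.

    ```python
    import numpy as np

    # Single-mode equivalent-circuit sketch (illustrative values): the patch is a
    # current source i_p = theta * dw/dt feeding its clamped capacitance C_p in
    # parallel with a resistive AC load R; backward coupling is neglected.
    theta, C_p, R = 1e-4, 50e-9, 1e5            # coupling (C/m), capacitance (F), load (ohm)
    wn, zeta = 2 * np.pi * 80, 0.02             # modal natural frequency (rad/s), damping ratio

    def voltage_frf(omega, a_base=1.0):
        """|V| per unit base acceleration for the single-mode sketch."""
        w_rel = -a_base / (wn**2 - omega**2 + 2j * zeta * wn * omega)  # relative modal displacement
        i_p = 1j * omega * theta * w_rel                               # piezo current source
        return np.abs(i_p / (1j * omega * C_p + 1.0 / R))

    freqs = 2 * np.pi * np.linspace(10, 200, 5)
    print(voltage_frf(freqs).round(4))
    ```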

  19. Assessment of Matrix Multiplication Learning with a Rule-Based Analytical Model--"A Bayesian Network Representation"

    ERIC Educational Resources Information Center

    Zhang, Zhidong

    2016-01-01

    This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…

  20. Simplified analytical model and balanced design approach for light-weight wood-based structural panel in bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2016-01-01

    This paper presents a simplified analytical model and balanced design approach for modeling lightweight wood-based structural panels in bending. Because many design parameters are required as input for finite element analysis (FEA) models during the preliminary design process and optimization, an equivalent method was developed to analyze the mechanical...

  1. Quantitative evaluation of analyte transport on microfluidic paper-based analytical devices (μPADs).

    PubMed

    Ota, Riki; Yamada, Kentaro; Suzuki, Koji; Citterio, Daniel

    2018-02-07

    The transport efficiency during capillary flow-driven sample transport on microfluidic paper-based analytical devices (μPADs) made from filter paper has been investigated for a selection of model analytes (Ni2+, Zn2+, Cu2+, PO4(3-), bovine serum albumin, sulforhodamine B, amaranth) representing metal cations, complex anions, proteins and anionic molecules. For the first time, the transport of the analytical target compounds, rather than the sample liquid, has been quantitatively evaluated by means of colorimetry and absorption spectrometry-based methods. The experiments have revealed that small paperfluidic channel dimensions, additional user operation steps (e.g. control of sample volume, sample dilution, washing step) as well as the introduction of sample liquid wicking areas make it possible to increase analyte transport efficiency. It is also shown that the interaction of analytes with the negatively charged cellulosic paper substrate surface is strongly influenced by the physico-chemical properties of the model analyte and can in some cases (Cu2+) result in nearly complete analyte depletion during sample transport. The quantitative information gained through these experiments is expected to contribute to the development of more sensitive μPADs.

  2. New York State energy-analytic information system: first-stage implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allentuck, J.; Carroll, O.; Fiore, L.

    1979-09-01

    So that energy policy by state government may be formulated within the constraints imposed by policy determined at the national level - yet reflect the diverse interests of its citizens - large quantities of data and sophisticated analytic capabilities are required. This report presents the design of an energy-information/analytic system for New York State, the data for a base year, 1976, and projections of these data. At the county level, 1976 energy-supply demand data and electric generating plant data are provided as well. Data-base management is based on System 2000. Three computerized models provide the system's basic analytic capacity. The Brookhaven Energy System Network Simulator provides an integrating framework, while a price-response model and a weather-sensitive energy demand model furnish a short-term energy response estimation capability. The operation of these computerized models is described. 62 references, 25 figures, 39 tables.

  3. Combining Model-Based and Feature-Driven Diagnosis Approaches - A Case Study on Electromechanical Actuators

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav

    2010-01-01

    Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However, this approach does not work very well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data, which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults and then features are chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.

  4. Comparison of thermal analytic model with experimental test results for 30-centimeter-diameter engineering model mercury ion thruster

    NASA Technical Reports Server (NTRS)

    Oglebay, J. C.

    1977-01-01

    A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described and analytic and experimental results compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.

  5. Improved partition equilibrium model for predicting analyte response in electrospray ionization mass spectrometry.

    PubMed

    Du, Lihong; White, Robert L

    2009-02-01

    A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.

  6. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage, and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.

  7. Analyzing chromatographic data using multilevel modeling.

    PubMed

    Wiczling, Paweł

    2018-06-01

    It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract: Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P. CI, credible interval; PSA, polar surface area.

  8. Source-term development for a contaminant plume for use by multimedia risk assessment models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments for use at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  9. An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application

    ERIC Educational Resources Information Center

    Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth

    2016-01-01

    Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…

  10. A carrier-based analytical theory for negative capacitance symmetric double-gate field effect transistors and its simulation verification

    NASA Astrophysics Data System (ADS)

    Jiang, Chunsheng; Liang, Renrong; Wang, Jing; Xu, Jun

    2015-09-01

    A carrier-based analytical drain current model for negative capacitance symmetric double-gate field effect transistors (NC-SDG FETs) is proposed by solving the differential equation of the carrier, the Pao-Sah current formulation, and the Landau-Khalatnikov equation. The carrier equation is derived from Poisson’s equation and the Boltzmann distribution law. According to the model, an amplified semiconductor surface potential and a steeper subthreshold slope could be obtained with suitable thicknesses of the ferroelectric film and insulator layer at room temperature. Results predicted by the analytical model agree well with those of the numerical simulation from a 2D simulator without any fitting parameters. The analytical model is valid for all operation regions and captures the transitions between them without any auxiliary variables or functions. This model can be used to explore the operating mechanisms of NC-SDG FETs and to optimize device performance.
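
    A brief numeric sketch of the static Landau-Khalatnikov relation that this class of model builds on, V_FE = t_FE(2*alpha*Q + 4*beta*Q^3), showing the negative dV/dQ region around Q = 0 that provides the surface-potential amplification. The Landau coefficients and film thickness are illustrative order-of-magnitude values, not fitted parameters from the paper.

    ```python
    import numpy as np

    # Static Landau-Khalatnikov sketch with illustrative coefficients (not fitted to a material).
    alpha = -1.8e9      # m/F, negative in the ferroelectric phase
    beta = 6.0e10       # m^5/(F C^2)
    t_fe = 10e-9        # ferroelectric film thickness (m)

    Q = np.linspace(-0.15, 0.15, 7)                    # charge density (C/m^2)
    V_fe = t_fe * (2 * alpha * Q + 4 * beta * Q**3)    # voltage across the ferroelectric
    dV_dQ = t_fe * (2 * alpha + 12 * beta * Q**2)      # slope of the V-Q curve

    print(np.round(V_fe, 3))
    print(dV_dQ[len(Q) // 2] < 0)   # True: negative slope around Q = 0 (negative capacitance)
    ```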

  11. Distributed parameter modeling to prevent charge cancellation for discrete thickness piezoelectric energy harvester

    NASA Astrophysics Data System (ADS)

    Krishnasamy, M.; Qian, Feng; Zuo, Lei; Lenka, T. R.

    2018-03-01

    The charge cancellation due to the change of strain along a single continuous piezoelectric layer can remarkably affect the performance of a cantilever-based harvester. In this paper, analytical models using distributed parameters are developed that partly avert the charge cancellation in a cantilever piezoelectric transducer by segmenting the piezoelectric layers at the strain nodes of the vibration mode of concern. The electrodes of the piezoelectric segments are connected in parallel with a single external resistive load in the first model (Model 1), while in the second model (Model 2) each bimorph piezoelectric layer is connected in parallel to a resistor to form an independent circuit. The analytical expressions of the closed-form electromechanical coupling responses in the frequency domain under harmonic base excitation are derived based on the Euler-Bernoulli beam assumption for both models. The developed analytical models are validated by COMSOL and experimental results. The results demonstrate that the energy harvesting performance of the developed segmented piezoelectric layer models is better than that of the traditional model with a continuous piezoelectric layer.

  12. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to choice on modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics is used as a testbed for evaluating the use of semantic technology.

  13. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

    Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to choice on modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics is used as a testbed for evaluating the use of semantic technology. PMID:29657954

  14. A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.

    1994-01-01

    A simple three DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. This analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.

  15. Numerical and analytical modeling of the end-loaded split (ELS) test specimens made of multi-directional coupled composite laminates

    NASA Astrophysics Data System (ADS)

    Samborski, Sylwester; Valvo, Paolo S.

    2018-01-01

    The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.

  16. Laser backscattering analytical model of Doppler power spectra about rotating convex quadric bodies of revolution

    NASA Astrophysics Data System (ADS)

    Gong, YanJun; Wu, ZhenSen; Wang, MingJun; Cao, YunHua

    2010-01-01

    We propose an analytical model of Doppler power spectra in backscatter from arbitrary rough convex quadric bodies of revolution (whose lateral surface is a quadric) rotating around their axes. In the global Cartesian coordinate system, the analytical model deduced is suitable for a general convex quadric body of revolution. Based on this analytical model, the Doppler power spectra of cones, cylinders, paraboloids of revolution, and sphere-cone combinations are proposed. We analyze numerically the influence of geometric parameters, aspect angle, wavelength and reflectance of the rough surface of the objects on the spectra broadened by the Doppler effect. This analytical solution may contribute to laser Doppler velocimetry and remote sensing of ballistic missiles that spin.

  17. LARC-1: a Los Alamos release calculation program for fission product transport in HTGRs during the LOFC accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carruthers, L.M.; Lee, C.E.

    1976-10-01

    The theoretical and numerical data base development of the LARC-1 code is described. Four analytical models of fission product release from an HTGR core during the loss of forced circulation accident are developed. Effects of diffusion, adsorption and evaporation of the metallics and precursors are neglected in this first LARC model. Comparison of the analytic models indicates that the constant release-renormalized model is adequate to describe the processes involved. The numerical data base for release constants, temperature modeling, fission product release rates, coated fuel particle failure fraction and aged coated fuel particle failure fractions is discussed. Analytic fits and graphic displays for these data are given for the Ft. St. Vrain and GASSAR models.

  18. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs account for more than 95% of the South Sea, and most of them are scattered over disputed, politically sensitive areas. Methods for obtaining reef bathymetry accurately are therefore urgently needed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, via the relationship between spectral information and water depth. Tailored to the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depth. First, a semi-analytical optimization model based on the theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. An OpenMP parallel computing algorithm is also introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm thus solves the problem of bathymetry estimation without water depth measurements. Overall, the paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  19. Propagation of Bayesian Belief for Near-Real Time Statistical Assessment of Geosynchronous Satellite Status Based on Non-Resolved Photometry Data

    DTIC Science & Technology

    2014-09-01

    of the BRDF for the Body and Panel. In order to provide a continuously updated baseline, the Photometry Model application is performed using a...brightness to its predicted brightness. The brightness predictions can be obtained using any analytical model chosen by the user. The inference for a...the analytical model as possible; and to mitigate the effect of bias that could be introduced by the choice of analytical model. It considers that a

  20. Challenges and Opportunities in Analysing Students Modelling

    ERIC Educational Resources Information Center

    Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín

    2017-01-01

    Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them--the model of modelling diagram (MMD)--as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used as an analytical tool to characterise the…

  1. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  2. Factors Affecting Higher Order Thinking Skills of Students: A Meta-Analytic Structural Equation Modeling Study

    ERIC Educational Resources Information Center

    Budsankom, Prayoonsri; Sawangboon, Tatsirin; Damrongpanit, Suntorapot; Chuensirimongkol, Jariya

    2015-01-01

    The purpose of the research is to develop and identify the validity of factors affecting higher order thinking skills (HOTS) of students. The thinking skills can be divided into three types: analytical, critical, and creative thinking. This analysis is done by applying the meta-analytic structural equation modeling (MASEM) based on a database of…

  3. Model for Atmospheric Propagation of Spatially Combined Laser Beams

    DTIC Science & Technology

    2016-09-01

    thesis modeling tools is discussed. In Chapter 6, the thesis validated the model with analytical computations and simulations result from...using propagation model. Based on both the analytical computation and WaveTrain results, the diffraction effects simulated in the propagation model are...NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS MODEL FOR ATMOSPHERIC PROPAGATION OF SPATIALLY COMBINED LASER BEAMS by Kum Leong Lee

  4. Analytical modeling and experimental validation of a magnetorheological mount

    NASA Astrophysics Data System (ADS)

    Nguyen, The; Ciocanel, Constantin; Elahinia, Mohammad

    2009-03-01

    Magnetorheological (MR) fluid has been increasingly researched and applied in vibration isolation devices. To date, the suspension systems of several high-performance vehicles have been equipped with MR fluid-based dampers, and research is ongoing to develop MR fluid-based mounts for engine and powertrain isolation. MR fluid-based devices have received attention due to the MR fluid's capability to change its properties in the presence of a magnetic field. This characteristic places MR mounts in the class of semiactive isolators, making them a desirable substitute for passive hydraulic mounts. In this research, an analytical model of a mixed-mode MR mount was constructed. The magnetorheological mount employs flow (valve) mode and squeeze mode. Each mode is powered by an independent electromagnet, so one mode does not affect the operation of the other. The analytical model was used to predict the performance of the MR mount with different sets of parameters. Furthermore, in order to produce the actual prototype, the analytical model was used to identify the optimal geometry of the mount. The experimental phase of this research was carried out by fabricating and testing the actual MR mount. The manufactured mount was tested to evaluate the effectiveness of each mode individually and in combination. The experimental results were also used to validate the ability of the analytical model to predict the response of the MR mount. Based on the observed response of the mount, a suitable controller can be designed for it. However, the control scheme is not addressed in this study.

  5. A structurally based analytic model for estimation of biomass and fuel loads of woodland trees

    Treesearch

    Robin J. Tausch

    2009-01-01

    Allometric/structural relationships in tree crowns are a consequence of the physical, physiological, and fluid conduction processes of trees, which control the distribution, efficient support, and growth of foliage in the crown. The structural consequences of these processes are used to develop an analytic model based on the concept of branch orders. A set of...

  6. Analytical investigation of the faster-is-slower effect with a simplified phenomenological model

    NASA Astrophysics Data System (ADS)

    Suzuno, K.; Tomoeda, A.; Ueyama, D.

    2013-11-01

    We investigate analytically, with a simplified phenomenological model, the mechanism of the phenomenon called the “faster-is-slower” effect in pedestrian flow studies. It is well known that the flow rate is maximized at a certain strength of the driving force in simulations using the social force model when we consider the discharge of self-driven particles through a bottleneck. In this study, we propose a phenomenological and analytical model based on mechanics-based modeling to reveal the mechanism of the phenomenon. We show that our reduced system, with only a few degrees of freedom, still has properties similar to the original many-particle system and that the effect comes from the competition between the driving force and the nonlinear friction in the model. Moreover, we qualitatively predict the parameter dependences of the effect from our model, and they are confirmed numerically by using the social force model.
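
    A toy illustration of the competition named in the abstract, not the authors' reduced system: if the friction a particle feels at the bottleneck grows superlinearly with the driving force (here quadratically, as a crude proxy for crowd pressure), the steady egress speed peaks at an intermediate drive and decreases beyond it.

    ```python
    import numpy as np

    mu0, c = 1.0, 0.05          # illustrative baseline and pressure-induced friction parameters

    def exit_speed(F):
        """Steady speed from the force balance F = (mu0 + c*F**2) * v."""
        return F / (mu0 + c * F**2)

    F = np.linspace(0.5, 20.0, 40)
    v = exit_speed(F)
    print(f"egress speed peaks at F ~ {F[np.argmax(v)]:.1f}; stronger driving is slower beyond that")
    ```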

  7. Dual metal gate tunneling field effect transistors based on MOSFETs: A 2-D analytical approach

    NASA Astrophysics Data System (ADS)

    Ramezani, Zeinab; Orouji, Ali A.

    2018-01-01

    A 2-D analytical drain current model of novel Dual Metal Gate Tunneling Field Effect Transistors based on MOSFETs (DMG-TFET) is presented in this paper. The proposed Tunneling FET is extracted from a MOSFET structure by employing an additional electrode in the source region with an appropriate work function to induce holes in the N+ source region and hence convert it into a P+ source region. The electric field is derived and utilized to extract the expression for the drain current by analytically integrating the band-to-band tunneling generation rate in the tunneling region, based on the potential profile obtained by solving Poisson's equation. Through this model, the effects of the thin-film thickness and gate voltage on the potential and the electric field, and the effect of the thin-film thickness on the tunneling current, can be studied. To validate the present model, the analytical results have been compared with the SILVACO ATLAS device simulator, and good agreement was found.

  8. Study on bending behaviour of nickel–titanium rotary endodontic instruments by analytical and numerical analyses

    PubMed Central

    Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S

    2013-01-01

    Aim: To develop analytical models and analyse the stress distribution and flexibility of nickel–titanium (NiTi) instruments subject to bending forces. Methodology: The analytical method was used to analyse the behaviours of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived using Euler–Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, nonlinear deformation analyses based on the analytical model and on nonlinear finite element analysis were carried out; the numerical results were obtained with the finite element method. Results: According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. Results of the finite element analysis revealed that the position of maximum von Mises stress was near the instrument tip. Therefore, the proposed analytical model can be used to predict the position of maximum curvature in the instrument where fracture may occur. Finally, the results of the analytical and numerical models were compatible. Conclusion: The proposed analytical model was validated by numerical results in analysing bending deformation of NiTi instruments. The analytical model is useful in the design and analysis of instruments. The proposed theoretical model is effective in studying the flexibility of NiTi instruments. Compared with the finite element method, the analytical model can deal conveniently and effectively with the subject of bending behaviour of rotary NiTi endodontic instruments. PMID:23173762

  9. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components concerning the spatial variables and the neglection of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length and the adjustment length (Lc), in additional LES. Even if the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently the adjustment region model is combined with the one-dimensional model of Massman, which is applicable for the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. As a result, the model is tested against an analytical model based on a linearization approach.

  10. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Although the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable for the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.

  11. Efficient Band-to-Trap Tunneling Model Including Heterojunction Band Offset

    DOE PAGES

    Gao, Xujiao; Huang, Andy; Kerr, Bert

    2017-10-25

    In this paper, we present an efficient band-to-trap tunneling model based on the Schenk approach, in which an analytic density-of-states (DOS) model is developed based on the open boundary scattering method. The new model explicitly includes the effect of heterojunction band offset, in addition to the well-known field effect. Its analytic form enables straightforward implementation into TCAD device simulators. It is applicable to all one-dimensional potentials, which can be approximated to a good degree such that the approximated potentials lead to piecewise analytic wave functions with open boundary conditions. The model allows for simulating both the electric-field-enhanced and band-offset-enhanced carrier recombination due to the band-to-trap tunneling near the heterojunction in a heterojunction bipolar transistor (HBT). Simulation results of an InGaP/GaAs/GaAs NPN HBT show that the proposed model predicts significantly increased base currents, due to the hole-to-trap tunneling enhanced by the emitter-base junction band offset. Finally, the results compare favorably with experimental observation.
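
    For context, the general form of field-enhanced trap-assisted (Shockley–Read–Hall) recombination used in TCAD is sketched below; the paper's contribution is an analytic, DOS-based enhancement factor that also captures the heterojunction band offset, which is not reproduced here.

```latex
% Generic trap-assisted recombination with tunneling-enhanced lifetimes
% (illustrative form; Gamma_n and Gamma_p denote band-to-trap tunneling
% enhancement factors, which the paper derives analytically).
\[
  R \;=\; \frac{p\,n - n_i^{2}}
               {\dfrac{\tau_p}{1+\Gamma_p}\,(n + n_1) \;+\; \dfrac{\tau_n}{1+\Gamma_n}\,(p + p_1)}.
\]
```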

  12. Efficient Band-to-Trap Tunneling Model Including Heterojunction Band Offset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Xujiao; Huang, Andy; Kerr, Bert

    In this paper, we present an efficient band-to-trap tunneling model based on the Schenk approach, in which an analytic density-of-states (DOS) model is developed based on the open boundary scattering method. The new model explicitly includes the effect of heterojunction band offset, in addition to the well-known field effect. Its analytic form enables straightforward implementation into TCAD device simulators. It is applicable to all one-dimensional potentials, which can be approximated to a good degree such that the approximated potentials lead to piecewise analytic wave functions with open boundary conditions. The model allows for simulating both the electric-field-enhanced and band-offset-enhanced carrier recombination due to the band-to-trap tunneling near the heterojunction in a heterojunction bipolar transistor (HBT). Simulation results of an InGaP/GaAs/GaAs NPN HBT show that the proposed model predicts significantly increased base currents, due to the hole-to-trap tunneling enhanced by the emitter-base junction band offset. Finally, the results compare favorably with experimental observation.

  13. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
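
    The practical benefit of analytic derivatives over finite differencing can be illustrated with a toy example; the sketch below uses SciPy on a made-up smooth objective and is not the Pycycle/OpenMDAO API.

```python
# Illustrative only: why analytic derivatives help gradient-based optimization.
# This is NOT the Pycycle/OpenMDAO API; it contrasts an analytic gradient with
# finite-difference approximation on a toy stand-in objective.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # toy stand-in for a smooth engine-cycle figure of merit
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2

def gradient(x):
    # analytic derivatives of the same objective
    return np.array([
        2.0 * (x[0] - 1.0) - 40.0 * x[0] * (x[1] - x[0] ** 2),
        20.0 * (x[1] - x[0] ** 2),
    ])

x0 = np.array([-1.2, 1.0])
with_jac = minimize(objective, x0, jac=gradient, method="BFGS")
finite_diff = minimize(objective, x0, method="BFGS")  # SciPy falls back to finite differences

# the analytic-derivative run typically needs far fewer function evaluations
print(with_jac.nfev, finite_diff.nfev)
```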

  14. A Cyclic-Plasticity-Based Mechanistic Approach for Fatigue Evaluation of 316 Stainless Steel Under Arbitrary Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.

    In this paper, a cyclic-plasticity-based, fully mechanistic fatigue modeling approach is presented. This approach is based on the time-dependent stress-strain evolution of the material over the entire fatigue life, rather than only on the end-of-life information typically used for empirical S-N curve based fatigue evaluation approaches. Previously, we presented constant-amplitude fatigue-test-based material models for 316 SS base, 508 LAS base, and 316 SS-316 SS weld metal, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge line pipes. However, we found that models based on constant-amplitude fatigue data have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address this limitation, in this paper we present a more advanced approach that can be used for modeling the cyclic stress-strain evolution and fatigue life not only under constant-amplitude but also under any arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies (based either on time/cycle or on accumulated plastic strain energy) to track the material parameters at a given time/cycle are discussed, and the associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.

  15. A Cyclic-Plasticity-Based Mechanistic Approach for Fatigue Evaluation of 316 Stainless Steel Under Arbitrary Loading

    DOE PAGES

    Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.; ...

    2017-12-05

    In this paper, a cyclic-plasticity-based, fully mechanistic fatigue modeling approach is presented. This approach is based on the time-dependent stress-strain evolution of the material over the entire fatigue life, rather than only on the end-of-life information typically used for empirical S-N curve based fatigue evaluation approaches. Previously, we presented constant-amplitude fatigue-test-based material models for 316 SS base, 508 LAS base, and 316 SS-316 SS weld metal, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge line pipes. However, we found that models based on constant-amplitude fatigue data have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address this limitation, in this paper we present a more advanced approach that can be used for modeling the cyclic stress-strain evolution and fatigue life not only under constant-amplitude but also under any arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies (based either on time/cycle or on accumulated plastic strain energy) to track the material parameters at a given time/cycle are discussed, and the associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.

  16. Analytical modeling and numerical simulation of the short-wave infrared electron-injection detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Movassaghi, Yashar; Fathipour, Morteza; Fathipour, Vala

    2016-03-21

    This paper describes comprehensive analytical and simulation models for the design and optimization of electron-injection-based detectors. The electron-injection detectors evaluated here operate in the short-wave infrared range and utilize a type-II band alignment in the InP/GaAsSb/InGaAs material system. The unique geometry of the detectors, along with an inherent negative-feedback mechanism in the device, allows high internal avalanche-free amplification to be achieved without any excess noise. Physics-based closed-form analytical models are derived for the detector rise time and dark current. Our optical gain model takes into account the drop in the optical gain at high optical power levels. Furthermore, numerical simulation studies of the electrical characteristics of the device show good agreement with our analytical models as well as with experimental data. Performance comparison between devices with different injector sizes shows that enhancement in gain and speed is anticipated by reducing the injector size. Sensitivity analysis for the key detector parameters shows the relative importance of each parameter. The results of this study may provide useful information and guidelines for the development of future electron-injection-based detectors as well as other heterojunction photodetectors.

  17. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when applying the test-based approach to published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
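
    The resampling idea described above can be sketched in a few lines; the data, model, and effect size below are entirely hypothetical and stand in for a fuller decision-analytic model.

```python
# Minimal bootstrap sketch for propagating uncertainty into a decision-model
# output (incremental life expectancy); all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
surv_control = rng.exponential(scale=2.0, size=200)      # placeholder survival times (years)
surv_switch_adj = rng.exponential(scale=2.6, size=200)   # e.g. an RPSFT-adjusted arm

def incremental_life_expectancy(a, b):
    # stand-in for a fuller decision-analytic model
    return b.mean() - a.mean()

point = incremental_life_expectancy(surv_control, surv_switch_adj)
boot = np.array([
    incremental_life_expectancy(
        rng.choice(surv_control, surv_control.size, replace=True),
        rng.choice(surv_switch_adj, surv_switch_adj.size, replace=True),
    )
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"incremental LE = {point:.2f} y (95% CI {lo:.2f} to {hi:.2f})")
```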

  18. Towards an Analytical Age-Dependent Model of Contrast Sensitivity Functions for an Ageing Society

    PubMed Central

    Joulan, Karine; Brémond, Roland

    2015-01-01

    The Contrast Sensitivity Function (CSF) describes how the visibility of a grating depends on the stimulus spatial frequency. Many published CSF data have demonstrated that contrast sensitivity declines with age. However, an age-dependent analytical model of the CSF is not available to date. In this paper, we propose such an analytical CSF model based on visual mechanisms, taking into account the age factor. To this end, we have extended an existing model from Barten (1999), taking into account the dependencies of this model's optical and physiological parameters on age. Age-dependent models of the cones and ganglion cells densities, the optical and neural MTF, and optical and neural noise are proposed, based on published data. The proposed age-dependent CSF is finally tested against available experimental data, with fair results. Such an age-dependent model may be beneficial when designing real-time age-dependent image coding and display applications. PMID:26078994

  19. Measurement-based reliability prediction methodology. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Linn, Linda Shen

    1991-01-01

    In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.

  20. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.

  1. Railroads and the Environment : Estimation of Fuel Consumption in Rail Transportation : Volume 1. Analytical Model

    DOT National Transportation Integrated Search

    1975-05-01

    The report describes an analytical approach to estimation of fuel consumption in rail transportation, and provides sample computer calculations suggesting the sensitivity of fuel usage to various parameters. The model used is based upon careful delin...

  2. High Resolution Electro-Optical Aerosol Phase Function Database PFNDAT2006

    DTIC Science & Technology

    2006-08-01

    snow models use the gamma distribution (equation 12) with m = 0. 3.4.1 Rain Model The most widely used analytical parameterization for raindrop size ...Uijlenhoet and Stricker (22), as the result of an analytical derivation based on a theoretical parameterization for the raindrop size distribution ...6 2.2 Particle Size Distribution Models

  3. A Bayesian Multi-Level Factor Analytic Model of Consumer Price Sensitivities across Categories

    ERIC Educational Resources Information Center

    Duvvuri, Sri Devi; Gruca, Thomas S.

    2010-01-01

    Identifying price sensitive consumers is an important problem in marketing. We develop a Bayesian multi-level factor analytic model of the covariation among household-level price sensitivities across product categories that are substitutes. Based on a multivariate probit model of category incidence, this framework also allows the researcher to…

  4. Analytical and numerical techniques for predicting the interfacial stresses of wavy carbon nanotube/polymer composites

    NASA Astrophysics Data System (ADS)

    Yazdchi, K.; Salehi, M.; Shokrieh, M. M.

    2009-03-01

    By introducing a new simplified 3D representative volume element for wavy carbon nanotubes, an analytical model is developed to study the stress transfer in single-walled carbon nanotube-reinforced polymer composites. Based on the pull-out modeling technique, the effects of waviness, aspect ratio, and Poisson ratio on the axial and interfacial shear stresses are analyzed in detail. The results of the present analytical model are in a good agreement with corresponding results for straight nanotubes.

  5. Evaluating child welfare policies with decision-analytic simulation models.

    PubMed

    Goldhaber-Fiebert, Jeremy D; Bailey, Stephanie L; Hurlburt, Michael S; Zhang, Jinjin; Snowden, Lonnie R; Wulczyn, Fred; Landsverk, John; Horwitz, Sarah M

    2012-11-01

    The objective was to demonstrate decision-analytic modeling in support of Child Welfare policymakers considering implementing evidence-based interventions. Outcomes included permanency (e.g., adoptions) and stability (e.g., foster placement changes). Analyses of a randomized trial of KEEP (a foster parenting intervention) and NSCAW-1 estimated placement change rates and KEEP's effects. A microsimulation model generalized these findings to other Child Welfare systems. The model projected that KEEP could increase permanency and stability, identifying strategies targeting higher-risk children and geographical regions that achieve benefits efficiently. Decision-analytic models enable planners to gauge the value of potential implementations.

  6. Modelling vortex-induced fluid-structure interaction.

    PubMed

    Benaroya, Haym; Gabbai, Rene D

    2008-04-13

    The principal goal of this research is developing physics-based, reduced-order, analytical models of nonlinear fluid-structure interactions associated with offshore structures. Our primary focus is to generalize the Hamilton's variational framework so that systems of flow-oscillator equations can be derived from first principles. This is an extension of earlier work that led to a single energy equation describing the fluid-structure interaction. It is demonstrated here that flow-oscillator models are a subclass of the general, physical-based framework. A flow-oscillator model is a reduced-order mechanical model, generally comprising two mechanical oscillators, one modelling the structural oscillation and the other a nonlinear oscillator representing the fluid behaviour coupled to the structural motion. Reduced-order analytical model development continues to be carried out using a Hamilton's principle-based variational approach. This provides flexibility in the long run for generalizing the modelling paradigm to complex, three-dimensional problems with multiple degrees of freedom, although such extension is very difficult. As both experimental and analytical capabilities advance, the critical research path to developing and implementing fluid-structure interaction models entails (i) formulating generalized equations of motion, as a superset of the flow-oscillator models; and (ii) developing experimentally derived, semi-analytical functions to describe key terms in the governing equations of motion. The developed variational approach yields a system of governing equations. This will allow modelling of multiple d.f. systems. The extensions derived generalize the Hamilton's variational formulation for such problems. The Navier-Stokes equations are derived and coupled to the structural oscillator. This general model has been shown to be a superset of the flow-oscillator model. Based on different assumptions, one can derive a variety of flow-oscillator models.
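
    For readers unfamiliar with the model class, one widely used flow-oscillator form couples a linear structural oscillator to a van der Pol wake oscillator; the sketch below is that generic form with standard symbols, not the specific equations derived in the paper.

```latex
% Generic flow-oscillator (wake-oscillator) model: a structural oscillator
% forced by a van der Pol wake variable q, which is in turn driven by the
% structural acceleration (illustrative form only).
\[
  m\,\ddot{x} + c\,\dot{x} + k\,x \;=\; \tfrac{1}{4}\,\rho\,U^{2} D\, C_{L0}\, q,
  \qquad
  \ddot{q} + \varepsilon\,\omega_s\,\bigl(q^{2}-1\bigr)\,\dot{q} + \omega_s^{2}\,q \;=\; \frac{A}{D}\,\ddot{x},
\]
% x: structural displacement, q: dimensionless wake variable, omega_s: vortex
% shedding frequency, A and epsilon: empirical coupling/tuning parameters.
```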

  7. Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.

    PubMed

    Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U

    2015-05-01

    The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English, and assessed relevant clinical endpoints, and summarized methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
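
    The two correction strategies can be contrasted in a short sketch; the synthetic data, variable names, and coefficients below are illustrative assumptions, not values from the study.

```python
# Minimal contrast of ratio-based vs. model-based creatinine correction on
# synthetic data (all values and names are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 70, n)
male = rng.integers(0, 2, n)
log_ucr = 0.5 + 0.004 * age + 0.3 * male + rng.normal(0, 0.3, n)   # urinary creatinine (log)
log_analyte = 1.0 + 0.8 * log_ucr + rng.normal(0, 0.4, n)          # observed analyte (log)

# 1) Ratio-based correction: analyte / creatinine (a difference on the log scale)
ratio_corrected = log_analyte - log_ucr

# 2) Model-based correction: creatinine and other factors enter as regressors
X = np.column_stack([np.ones(n), log_ucr, age, male])
beta, *_ = np.linalg.lstsq(X, log_analyte, rcond=None)
model_corrected = log_analyte - X[:, 1:] @ beta[1:]   # residualised analyte level

print(beta)  # fitted intercept and coefficients for creatinine, age, and sex
```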

  9. A Fuzzy-Based Decision Support Model for Selecting the Best Dialyser Flux in Haemodialysis.

    PubMed

    Oztürk, Necla; Tozan, Hakan

    2015-01-01

    Decision making is an important procedure for every organization. The procedure is particularly challenging for complicated multi-criteria problems. Selection of dialyser flux is one of the decisions routinely made for haemodialysis treatment provided for chronic kidney failure patients. This study provides a decision support model for selecting the best dialyser flux between high-flux and low-flux dialyser alternatives. The preferences of decision makers were collected via a questionnaire. A total of 45 questionnaires filled in by dialysis physicians and nephrologists were assessed. A hybrid fuzzy-based decision support software that enables the use of the Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), Analytic Network Process (ANP), and Fuzzy Analytic Network Process (FANP) was used to evaluate the flux selection model. In conclusion, the results showed that a high-flux dialyser is the best option for haemodialysis treatment.
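
    As background, the weight-derivation step at the core of AHP can be sketched with a crisp pairwise comparison matrix; the judgments below are hypothetical, and the study itself used fuzzy AHP/ANP variants within a dedicated software tool.

```python
# Crisp AHP sketch: criteria weights from the principal eigenvector of a
# pairwise comparison matrix, plus a consistency check (hypothetical judgments).
import numpy as np

# Pairwise comparisons of three illustrative criteria on Saaty's 1-9 scale
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # random index for n = 3
print(weights, cr)                     # weights sum to 1; CR < 0.1 is conventionally acceptable
```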

  10. Design Recommendations for Concrete Tunnel Linings : Volume I. Results of Model Tests and Analytical Parameter Studies.

    DOT National Transportation Integrated Search

    1983-11-01

    Volume 1 of this report describes model tests and analytical studies based on experience, interviews with design engineers, and literature reviews, carried out to develop design recommendations for concrete tunnel linings. Volume 2 contains the propo...

  11. Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Malsbury, T.; Atencio, A., Jr.

    1992-01-01

    A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.

  12. Barycentric parameterizations for isotropic BRDFs.

    PubMed

    Stark, Michael M; Arvo, James; Smits, Brian

    2005-01-01

    A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this paper, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.

  13. Simulation and modeling of the temporal performance of path-based restoration schemes in planar mesh networks

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.

    2006-12-01

    We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.

  14. Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in Internet of Bio-Nano Things

    PubMed Central

    Akan, Ozgur B.

    2018-01-01

    We consider a microfluidic molecular communication (MC) system, where the concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for the information and communication technology (ICT), as they enable an optimisation framework to develop advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by laminar flow resulting in parabolic velocity profile, and finite number of ligand receptors leading to receiver saturation. The model also captures the effects of reactive surface depletion layer resulting from the mass transport limitations and moving reaction boundary originated from the passage of finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics. PMID:29415019
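
    The coupled transport and surface-reaction system described above can be written in a standard generic form; the symbols below are illustrative and not necessarily those used in the paper.

```latex
% Convection-diffusion in a microfluidic channel with a reactive (receptor-
% covered) bottom surface; generic sketch with standard notation.
\[
  \frac{\partial C}{\partial t} + u(y)\,\frac{\partial C}{\partial x}
  \;=\; D\left(\frac{\partial^{2} C}{\partial x^{2}} + \frac{\partial^{2} C}{\partial y^{2}}\right),
  \qquad
  u(y) \;=\; 6\,\bar{u}\,\frac{y}{H}\Bigl(1-\frac{y}{H}\Bigr),
\]
\[
  \left. D\,\frac{\partial C}{\partial y}\right|_{y=0}
  \;=\; \frac{\partial B}{\partial t}
  \;=\; k_{\mathrm{on}}\,C\big|_{y=0}\,\bigl(R_{\mathrm{tot}} - B\bigr) \;-\; k_{\mathrm{off}}\,B,
\]
% C: ligand concentration, u(y): parabolic (laminar) velocity profile across a
% channel of height H, B: surface density of bound receptors, R_tot: total
% receptor density, k_on / k_off: binding and unbinding rate constants.
```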

  15. Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in Internet of Bio-Nano Things.

    PubMed

    Kuscu, Murat; Akan, Ozgur B

    2018-01-01

    We consider a microfluidic molecular communication (MC) system, where the concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for the information and communication technology (ICT), as they enable an optimisation framework to develop advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by laminar flow resulting in parabolic velocity profile, and finite number of ligand receptors leading to receiver saturation. The model also captures the effects of reactive surface depletion layer resulting from the mass transport limitations and moving reaction boundary originated from the passage of finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics.

  16. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  17. Description and application of capture zone delineation for a wellfield at Hilton Head Island, South Carolina

    USGS Publications Warehouse

    Landmeyer, J.E.

    1994-01-01

    Ground-water capture zone boundaries for individual pumped wells in a confined aquifer were delineated by using groundwater models. Both analytical and numerical (semi-analytical) models that more accurately represent the ground-water-flow system were used. All models delineated 2-dimensional boundaries (capture zones) that represent the areal extent of groundwater contribution to a pumped well. The resultant capture zones were evaluated on the basis of the ability of each model to realistically represent the part of the ground-water-flow system that contributed water to the pumped wells. Analytical models used were based on a fixed radius approach, and included an arbitrary radius model, a calculated fixed radius model based on the volumetric-flow equation with a time-of-travel criterion, and a calculated fixed radius model derived from modification of the Theis model with a drawdown criterion. Numerical models used included the 2-dimensional, finite-difference models RESSQC and MWCAP. The arbitrary radius and Theis analytical models delineated capture zone boundaries that compared least favorably with capture zones delineated using the volumetric-flow analytical model and both numerical models. The numerical models produced more hydrologically reasonable capture zones (that were oriented parallel to the regional flow direction) than the volumetric-flow equation. The RESSQC numerical model computed more hydrologically realistic capture zones than the MWCAP numerical model by accounting for changes in the shape of capture zones caused by multiple-well interference. The capture zone boundaries generated by using both analytical and numerical models indicated that the currently used 100-foot radius of protection around a wellhead in South Carolina is an underestimate of the extent of ground-water capture for pumped wells in this particular wellfield in the Upper Floridan aquifer. The arbitrary fixed radius of 100 feet was shown to underestimate the upgradient contribution of ground-water flow to a pumped well.
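
    The calculated fixed radius from the volumetric-flow equation with a time-of-travel criterion can be sketched as follows; all input values are hypothetical and serve only to illustrate the calculation.

```python
# Calculated-fixed-radius sketch: the volume pumped over the time-of-travel
# criterion equals the cylindrical volume of aquifer contributing water.
# All values below are hypothetical.
import math

Q = 500.0 * 192.5    # pumping rate: 500 gal/min converted to ft^3/day
t = 5.0 * 365.0      # time-of-travel criterion: 5 years, in days
n = 0.25             # effective porosity of the confined aquifer (-)
b = 80.0             # open interval / aquifer thickness (ft)

# Q * t = pi * r^2 * n * b  ->  solve for r
r = math.sqrt(Q * t / (math.pi * n * b))
print(f"calculated fixed radius ~ {r:.0f} ft")
```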

  18. Flexible aircraft dynamic modeling for dynamic analysis and control synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1989-01-01

    The linearization and simplification of a nonlinear, literal model for flexible aircraft is highlighted. Areas of model fidelity that are critical if the model is to be used for control system synthesis are developed and several simplification techniques that can deliver the necessary model fidelity are discussed. These techniques include both numerical and analytical approaches. An analytical approach, based on first-order sensitivity theory is shown to lead not only to excellent numerical results, but also to closed-form analytical expressions for key system dynamic properties such as the pole/zero factors of the vehicle transfer-function matrix. The analytical results are expressed in terms of vehicle mass properties, vibrational characteristics, and rigid-body and aeroelastic stability derivatives, thus leading to the underlying causes for critical dynamic characteristics.

  19. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    proton PARMA PHITS -based Analytical Radiation Model in the Atmosphere PCAIRE Predictive Code for Aircrew Radiation Exposure PHITS Particle and...radiation transport code utilized is called PARMA ( PHITS based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the...same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS ( PHITS ) and CARI-6

  20. Monitoring Cosmic Radiation Risk: Comparisons Between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-07-05

    proton PARMA PHITS -based Analytical Radiation Model in the Atmosphere PCAIRE Predictive Code for Aircrew Radiation Exposure PHITS Particle and Heavy...transport code utilized is called PARMA ( PHITS based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input...dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS ( PHITS ) and CARI-6 (PARMA

  1. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems accounting for biochemical reactions coupled to contaminant transport is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
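
    As an example of the screening-level analytical solutions referred to above, a steady-state one-dimensional advection-dispersion equation with first-order biodegradation has a simple closed form; the parameter values below are illustrative assumptions.

```python
# Screening-level analytical solution: steady-state 1-D advection-dispersion
# with first-order biodegradation, D C'' - v C' - lam C = 0 (illustrative values).
import numpy as np

C0 = 10.0      # source concentration (mg/L)
v = 0.1        # groundwater seepage velocity (m/day)
D = 0.5        # longitudinal dispersion coefficient (m^2/day)
lam = 0.002    # first-order biodegradation rate (1/day)

x = np.linspace(0.0, 500.0, 6)
m = (v - np.sqrt(v**2 + 4.0 * lam * D)) / (2.0 * D)   # decaying root of D m^2 - v m - lam = 0
C = C0 * np.exp(m * x)

for xi, ci in zip(x, C):
    print(f"x = {xi:6.1f} m   C = {ci:7.3f} mg/L")
```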

  2. Local Spatial Obesity Analysis and Estimation Using Online Social Network Sensors.

    PubMed

    Sun, Qindong; Wang, Nan; Li, Shancang; Zhou, Hongyi

    2018-03-15

    Recently, online social networks (OSNs) have received considerable attention as a revolutionary platform that offers massive social interaction among users and enables them to be more involved in their own healthcare. OSNs have also promoted increasing interest in the generation of analytical data models in health informatics. This paper aims at developing an obesity identification, analysis, and estimation model in which each individual user is regarded as an online social network 'sensor' that can provide valuable health information. The OSN-based obesity analytic model requires each sensor node in an OSN to provide associated features, including dietary habits, physical activity, integral/incidental emotions, and self-consciousness. Based on detailed measurements of the correlation between obesity and the proposed features, the OSN obesity analytic model is able to estimate the obesity rate in certain urban areas, and the experimental results demonstrate a high estimation success rate. The measurement and estimation findings produced by the proposed obesity analytic model show that online social networks can be used to analyze local spatial obesity problems effectively. Copyright © 2018. Published by Elsevier Inc.

  3. L-shaped piezoelectric motor--part II: analytical modeling.

    PubMed

    Avirovik, Dragan; Karami, M Amin; Inman, Daniel; Priya, Shashank

    2012-01-01

    This paper develops an analytical model for an L-shaped piezoelectric motor. The motor structure has been described in detail in Part I of this study. The coupling of the bending vibration mode of the bimorphs results in an elliptical motion at the tip. The emphasis of this paper is on the development of a precise analytical model which can predict the dynamic behavior of the motor based on its geometry. The motor was first modeled mechanically to identify the natural frequencies and mode shapes of the structure. Next, an electromechanical model of the motor was developed to take into account the piezoelectric effect, and dynamics of L-shaped piezoelectric motor were obtained as a function of voltage and frequency. Finally, the analytical model was validated by comparing it to experiment results and the finite element method (FEM). © 2012 IEEE

  4. CZAEM USER'S GUIDE: MODELING CAPTURE ZONES OF GROUND-WATER WELLS USING ANALYTIC ELEMENTS

    EPA Science Inventory

    The computer program CZAEM is designed for elementary capture zone analysis, and is based on the analytic element method. CZAEM is applicable to confined and/or unconfined flow in shallow aquifers; the Dupuit-Forchheimer assumption is adopted. CZAEM supports the following analyt...

  5. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
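
    A minimal sketch of the hybrid idea is shown below, assuming a coarse analytical propagation and a precise reference series are already available over a control interval; an additive Holt-Winters model (here via statsmodels) then forecasts the residual error. The placeholder signals are purely illustrative and do not reproduce the authors' propagators.

```python
# Hybrid propagation sketch: analytical theory + Holt-Winters forecast of its
# residual error (all signals below are synthetic placeholders).
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

period = 90                                    # samples per orbital revolution (assumed)
t = np.arange(10 * period)
reference = 7000.0 + 5.0 * np.sin(2 * np.pi * t / period)           # precise ephemeris (placeholder)
analytical = 7000.0 + 4.2 * np.sin(2 * np.pi * t / period + 0.05)   # coarse analytical propagation

residual = reference - analytical              # error left by the analytical theory
fit = ExponentialSmoothing(
    residual, trend="add", seasonal="add", seasonal_periods=period
).fit()

horizon = 3 * period
t_fut = np.arange(10 * period, 10 * period + horizon)
analytical_fut = 7000.0 + 4.2 * np.sin(2 * np.pi * t_fut / period + 0.05)
hybrid = analytical_fut + fit.forecast(horizon)   # analytical part + predicted residual
print(hybrid[:5])
```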

  6. An Analysis of Machine- and Human-Analytics in Classification.

    PubMed

    Tam, Gary K L; Kothari, Vivek; Chen, Min

    2017-01-01

    In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.

  7. An Analytical Framework for Evaluating E-Commerce Business Models and Strategies.

    ERIC Educational Resources Information Center

    Lee, Chung-Shing

    2001-01-01

    Considers electronic commerce as a paradigm shift, or a disruptive innovation, and presents an analytical framework based on the theories of transaction costs and switching costs. Topics include business transformation process; scale effect; scope effect; new sources of revenue; and e-commerce value creation model and strategy. (LRW)

  8. An Examination of Advisor Concerns in the Era of Academic Analytics

    ERIC Educational Resources Information Center

    Daughtry, Jeremy J.

    2017-01-01

    Performance-based funding models are increasingly becoming the norm for many institutions of higher learning. Such models place greater emphasis on student retention and success metrics, for example, as requirements for receiving state appropriations. To stay competitive, universities have adopted academic analytics technologies capable of…

  9. Teaching Complex Concepts in the Geosciences by Integrating Analytical Reasoning with GIS

    ERIC Educational Resources Information Center

    Houser, Chris; Bishop, Michael P.; Lemmons, Kelly

    2017-01-01

    Conceptual models have long served as a means for physical geographers to organize their understanding of feedback mechanisms and complex systems. Analytical reasoning provides undergraduate students with an opportunity to develop conceptual models based upon their understanding of surface processes and environmental conditions. This study…

  10. The role of decision analytic modeling in the health economic assessment of spinal intervention.

    PubMed

    Edwards, Natalie C; Skelly, Andrea C; Ziewacz, John E; Cahill, Kevin; McGirt, Matthew J

    2014-10-15

    Narrative review. To review the common tenets, strengths, and weaknesses of decision modeling for health economic assessment and to review the use of decision modeling in the spine literature to date. For the majority of spinal interventions, well-designed prospective, randomized, pragmatic cost-effectiveness studies that address the specific decision-in-need are lacking. Decision analytic modeling allows for the estimation of cost-effectiveness based on data available to date. Given the rising demands for proven value in spine care, the use of decision analytic modeling is rapidly increasing by clinicians and policy makers. This narrative review discusses the general components of decision analytic models, how decision analytic models are populated and the trade-offs entailed, makes recommendations for how users of spine intervention decision models might go about appraising the models, and presents an overview of published spine economic models. A proper, integrated, clinical, and economic critical appraisal is necessary in the evaluation of the strength of evidence provided by a modeling evaluation. As is the case with clinical research, all options for collecting health economic or value data are not without their limitations and flaws. There is substantial heterogeneity across the 20 spine intervention health economic modeling studies summarized with respect to study design, models used, reporting, and general quality. There is sparse evidence for populating spine intervention models. Results mostly showed that interventions were cost-effective based on $100,000/quality-adjusted life-year threshold. Spine care providers, as partners with their health economic colleagues, have unique clinical expertise and perspectives that are critical to interpret the strengths and weaknesses of health economic models. Health economic models must be critically appraised for both clinical validity and economic quality before altering health care policy, payment strategies, or patient care decisions. 4.

  11. Incorporating photon recycling into the analytical drift-diffusion model of high efficiency solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumb, Matthew P.; Steiner, Myles A.

    The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by the photon recycling, with consequences for the solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.

  12. Model of separation performance of bilinear gradients in scanning format counter-flow gradient electrofocusing techniques.

    PubMed

    Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L

    2015-03-01

    Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte that is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity is varying with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as a bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be realized by using a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical efforts, which can be reduced significantly with the help of analytical models for design optimization and guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
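
    The focusing condition underlying the technique can be stated compactly; the notation below is generic, and the paper's specific contribution (the bilinear gradient shape and the scanning-rate optimisation) is not reproduced here.

```latex
% Counter-flow gradient focusing in generic form: each analyte accumulates
% where its net velocity vanishes, and ramping the bulk flow scans that point
% past the detector (illustrative notation only).
\[
  u_{\mathrm{tot}}(x,t) \;=\; u_{\mathrm{bulk}}(t) \;+\; \mu_{\mathrm{ep}}\,E(x),
  \qquad
  u_{\mathrm{tot}}(x_f,t) = 0,
  \quad
  \frac{\partial u_{\mathrm{tot}}}{\partial x}\bigg|_{x_f} < 0,
\]
% mu_ep: electrophoretic mobility of the analyte, E(x): axial field gradient,
% x_f: focusing position swept along the channel by varying u_bulk(t).
```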

  13. Low velocity impact analysis of composite laminated plates

    NASA Astrophysics Data System (ADS)

    Zheng, Daihua

    2007-12-01

    In the past few decades polymer composites have been utilized more in structures where high strength and light weight are major concerns, e.g., aircraft, high-speed boats and sports supplies. It is well known that they are susceptible to damage resulting from lateral impact by foreign objects, such as dropped tools, hail and debris thrown up from the runway. The impact response of the structures depends not only on the material properties but also on the dynamic behavior of the impacted structure. Although commercial software is capable of analyzing such impact processes, it often requires extensive expertise and rigorous training for design and analysis. Analytical models are useful as they allow parametric studies and provide a foundation for validating the numerical results from large-scale commercial software. Therefore, it is necessary to develop analytical or semi-analytical models to better understand the behaviors of composite structures under impact and their associated failure process. In this study, several analytical models are proposed in order to analyze the impact response of composite laminated plates. Based on Meyer's Power Law, a semi-analytical model is obtained for the small-mass impact response of infinite composite laminates by the method of asymptotic expansion. The original nonlinear second-order ordinary differential equation is transformed into two linear ordinary differential equations. This is achieved by neglecting high-order terms in the asymptotic expansion. As a result, the semi-analytical solution of the overall impact response can be applied to contact laws with varying coefficients. Then an analytical model accounting for permanent deformation based on an elasto-plastic contact law is proposed to obtain the closed-form solutions of the wave-controlled impact responses of composite laminates. The analytical model is also used to predict the threshold velocity for delamination onset by combining it with an existing quasi-static delamination criterion. The predictions are compared with experimental data and explicit finite element LS-DYNA simulation. The comparisons show reasonable agreement. Furthermore, an analytical model is developed to evaluate the combined effects of prestresses and permanent deformation based on the linearized elasto-plastic contact law and the Laplace Transform technique. It is demonstrated that prestresses do not have noticeable effects on the time history of contact force and strains, but they have significant consequences on the plate central displacement. For an impacted composite laminate in the presence of prestresses, the contact force increases with increasing impactor mass, laminate thickness, and interlaminar shear strength. The combined analytical and numerical investigations provide validated models for elastic and elasto-plastic impact analysis of composite structures and shed light on the design of impact-resistant composite systems.
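
    The contact laws referenced above take standard forms, sketched below with generic symbols; the thesis-specific constants and the linearisation details are not reproduced here.

```latex
% Meyer's power-law contact relation between contact force F and indentation
% depth alpha (n = 3/2 recovers the Hertzian case), and a linearised
% elasto-plastic variant about a critical indentation alpha_cr (generic sketch).
\[
  F \;=\; k\,\alpha^{\,n},
  \qquad
  F_{\mathrm{ep}} \;\approx\; F_{cr} + S\,\bigl(\alpha - \alpha_{cr}\bigr)
  \quad \text{for } \alpha \ge \alpha_{cr},
\]
% k: contact stiffness, S: linearised stiffness in the elasto-plastic regime.
```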

  14. Classifying Correlation Matrices into Relatively Homogeneous Subgroups: A Cluster Analytic Approach

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Chan, Wai

    2005-01-01

    Researchers are becoming interested in combining meta-analytic techniques and structural equation modeling to test theoretical models from a pool of studies. Most existing procedures are based on the assumption that all correlation matrices are homogeneous. Few studies have addressed what the next step should be when studies being analyzed are…

  15. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.

  16. Pavement Performance : Approaches Using Predictive Analytics

    DOT National Transportation Integrated Search

    2018-03-23

    Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...

  17. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    NASA Astrophysics Data System (ADS)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main approaches to gear fault diagnosis: model-based gear dynamic analysis and signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response is obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method which combines the LOD with the analytical-FE approach is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD using gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for tooth crack stiffness calculation and gear tooth crack fault diagnosis.

  18. Analytical modeling of fire growth on fire-resistive wood-based materials with changing conditions

    Treesearch

    Mark A. Dietenberger

    2006-01-01

    Our analytical model of fire growth for the ASTM E 84 tunnel, which simultaneously predicts heat release rate, flame-over area, and pyrolysis area as functions of time for constant conditions, was documented in the 2001 BCC Symposium for different treated wood materials. The model was extended to predict ignition and fire growth on exterior fire-resistive structures...

  19. An analytical solution for predicting the transient seepage from a subsurface drainage system

    NASA Astrophysics Data System (ADS)

    Xin, Pei; Dan, Han-Cheng; Zhou, Tingzhang; Lu, Chunhui; Kong, Jun; Li, Ling

    2016-05-01

    Subsurface drainage systems have been widely used to deal with soil salinization and waterlogging problems around the world. In this paper, a mathematical model was introduced to quantify the transient behavior of the groundwater table and the seepage from a subsurface drainage system. Based on the assumption of a hydrostatic pressure distribution, the model considered the pore-water flow in both the phreatic and vadose soil zones. An approximate analytical solution for the model was derived to quantify the drainage of soils which were initially water-saturated. The analytical solution was validated against laboratory experiments and a 2-D Richards equation-based model, and found to predict well the transient water seepage from the subsurface drainage system. A saturated flow-based model was also tested and found to over-predict the time required for drainage and the total water seepage by nearly one order of magnitude, in comparison with the experimental results and the present analytical solution. During drainage, a vadose zone with a significant water storage capacity developed above the phreatic surface. A considerable amount of water still remained in the vadose zone at the steady state with the water table situated at the drain bottom. Sensitivity analyses demonstrated that effects of the vadose zone were intensified with an increased thickness of capillary fringe, capillary rise and/or burying depth of drains, in terms of the required drainage time and total water seepage. The analytical solution provides guidance for assessing the capillary effects on the effectiveness and efficiency of subsurface drainage systems for combating soil salinization and waterlogging problems.

  20. A practical model of thin disk regenerative amplifier based on analytical expression of ASE lifetime

    NASA Astrophysics Data System (ADS)

    Zhou, Huang; Chyla, Michal; Nagisetty, Siva Sankar; Chen, Liyuan; Endo, Akira; Smrz, Martin; Mocek, Tomas

    2017-12-01

    In this paper, a practical model of a thin disk regenerative amplifier is developed based on an analytical approach in which Copeland [1] evaluated the loss rate of the upper laser level due to ASE and derived an analytical expression for the effective lifetime of the upper laser level by taking the Lorentzian stimulated-emission line shape and total internal reflection into account. By adopting the analytical expression for the effective lifetime in the rate equations, we have developed a less numerically intensive model for predicting and analyzing the performance of a thin disk regenerative amplifier. Using the model, an optimized combination of various parameters can be obtained to avoid saturation, period-doubling bifurcation or first-pulse suppression prior to experiments. The effective lifetime due to ASE is also analyzed against various parameters. The simulated results fit well with experimental data. By fitting more experimental results with the numerical model, we can improve the parameters of the model, such as the reflection factor that determines the weight of boundary reflection within the influence of ASE. This practical model will be used to explore the scaling limits imposed by ASE on the thin disk regenerative amplifier being developed at the HiLASE Centre.
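
    A minimal sketch of the rate-equation idea described above: the upper-state population is pumped and decays with an effective lifetime that lumps in the ASE loss, so replacing the intrinsic lifetime with the ASE-reduced one directly lowers the stored inversion. All symbols and values below are illustrative assumptions, not the parameters or the analytical lifetime expression of the paper.

        import numpy as np

        # Illustrative parameters (assumed, not taken from the paper)
        tau_fluo = 950e-6          # intrinsic fluorescence lifetime, s
        tau_eff  = 400e-6          # effective lifetime reduced by ASE (would come from the analytical expression)
        R_pump   = 4e25            # pump excitation rate, ions per m^3 per s
        dt, t_end = 1e-6, 3e-3     # time step and pump duration, s

        def pump_inversion(tau):
            """Integrate dN/dt = R_pump - N / tau over the pump window."""
            N = 0.0
            for _ in range(int(t_end / dt)):
                N += (R_pump - N / tau) * dt
            return N

        print(f"stored inversion without ASE: {pump_inversion(tau_fluo):.3e} m^-3")
        print(f"stored inversion with ASE   : {pump_inversion(tau_eff):.3e} m^-3")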

  1. A New Model for Temperature Jump at a Fluid-Solid Interface

    PubMed Central

    Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong

    2016-01-01

    The problem presented involves the development of a new analytical model for the general fluid-solid temperature jump. To the best of our knowledge, there are no analytical models that provide accurate predictions of the temperature jump for both gas and liquid systems. In this paper, a unified model for the fluid-solid temperature jump has been developed based on our adsorption model of the interfacial interactions. Results obtained from this model are validated against available results from the literature. PMID:27764230

  2. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    NASA Astrophysics Data System (ADS)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model that addresses the flexibility of the bearing outer race structure, and presents the application of this model in a bearing load condition monitoring approach. The model is developed because current low-computational-cost bearing models, owing to their rigidity assumptions, fail to provide an accurate description of the increasingly common size- and weight-optimized flexible bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally inexpensive and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  3. A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education.

    PubMed

    Hervatis, Vasilis; Loe, Alan; Barman, Linda; O'Donoghue, John; Zary, Nabil

    2015-10-06

    Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve the quality of education. Analytics, the use of data to generate insights and support decisions, has been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research on Academic and Learning Analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. The aim of this study is to create a conceptual model for a deeper understanding of the process of synthesizing and transforming data into information to support educators' decision making. A deductive case study approach was applied to develop the conceptual model. The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. The model illustrates how a theory can be applied to a traditional data-driven analytics approach, and alongside the context- or need-driven analytics approach.

  4. A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education

    PubMed Central

    Loe, Alan; Barman, Linda; O'Donoghue, John; Zary, Nabil

    2015-01-01

    Background Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve the quality of education. Analytics, the use of data to generate insights and support decisions, has been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research on Academic and Learning Analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. Objective The aim of this study is to create a conceptual model for a deeper understanding of the process of synthesizing and transforming data into information to support educators' decision making. Methods A deductive case study approach was applied to develop the conceptual model. Results The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. Conclusions The model illustrates how a theory can be applied to a traditional data-driven analytics approach, and alongside the context- or need-driven analytics approach. PMID:27731840

  5. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
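
    For illustration, the sketch below evaluates a toy magnetic equivalent circuit in the spirit described above: flux tubes are represented by reluctances, the permanent magnet by an MMF source, and series/parallel combinations give the air-gap flux. It is a linear toy example without the saturation handling of the paper, and every dimension and material number is an invented placeholder, not the TFM design.

        from math import pi

        MU0 = 4e-7 * pi

        def reluctance(length, area, mu_r=1.0):
            """Reluctance of a prismatic flux tube: R = l / (mu0 * mu_r * A)."""
            return length / (MU0 * mu_r * area)

        def series(*rels):
            return sum(rels)

        def parallel(*rels):
            return 1.0 / sum(1.0 / r for r in rels)

        # Hypothetical flux-tube network (placeholder dimensions in metres)
        R_gap    = reluctance(0.001, 4e-4)             # air gap
        R_stator = reluctance(0.05, 4e-4, mu_r=2000)   # stator core path
        R_rotor  = reluctance(0.03, 4e-4, mu_r=2000)   # rotor core path
        R_leak   = reluctance(0.004, 1e-4)             # leakage path in air

        # Magnet modelled as an MMF source with an internal reluctance
        B_r, l_m, A_m = 1.2, 0.004, 4e-4
        F_magnet = B_r * l_m / MU0                      # MMF = H_c * l_m with H_c = B_r / mu0
        R_magnet = reluctance(l_m, A_m, mu_r=1.05)

        # Useful path (gap + cores) in parallel with the leakage path
        R_useful = series(R_gap, R_stator, R_rotor)
        R_total  = series(R_magnet, parallel(R_useful, R_leak))

        phi_total = F_magnet / R_total
        phi_gap   = phi_total * R_leak / (R_leak + R_useful)   # flux-divider rule
        print(f"air-gap flux: {phi_gap*1e3:.3f} mWb, gap flux density: {phi_gap/4e-4:.2f} T")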

  6. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1 kW, 400 rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF and torque are verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.

  7. On the performance of piezoelectric harvesters loaded by finite width impulses

    NASA Astrophysics Data System (ADS)

    Doria, A.; Medè, C.; Desideri, D.; Maschio, A.; Codecasa, L.; Moro, F.

    2018-02-01

    The response of cantilevered piezoelectric harvesters loaded by finite width impulses of base acceleration is studied analytically in the frequency domain in order to identify the parameters that influence the generated voltage. Experimental tests are then performed on harvesters loaded by hammer impacts. The latter are used to confirm analytical results and to validate a linear finite element (FE) model of a unimorph harvester. The FE model is, in turn, used to extend analytical results to more general harvesters (tapered, inverse tapered, triangular) and to more general impulses (heel strike in human gait). From analytical and numerical results design criteria for improving harvester performance are obtained.

  8. Analysis of high-speed rotating flow inside gas centrifuge casing

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev

    2017-10-01

    The generalized analytical model for the radial boundary layer inside the gas centrifuge casing, in which the inner cylinder rotates at a constant angular velocity Ω_i while the outer one is stationary, is formulated for studying the secondary gas flow field due to wall thermal forcing, inflow/outflow of light gas along the boundaries, as well as the combination of these two external forcings. The analytical model includes the sixth-order differential equation for the radial boundary layer at the cylindrical curved surface in terms of the master potential (χ), which is derived from the equations of motion in an axisymmetric (r - z) plane. The linearization approximation is used, where the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations in the analytical model include constant temperature in the base state (isothermal compressible Couette flow), high aspect ratio (length large compared to the annular gap) and high Reynolds number, but there is no limitation on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operators (sixth-order in the radial direction for the generalized analytical equation) are obtained. The solution for the secondary flow is determined in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations, and excellent agreement (with a difference of less than 15%) is found between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.

  9. Analysis of high-speed rotating flow inside gas centrifuge casing

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev

    2017-09-01

    The generalized analytical model for the radial boundary layer inside the gas centrifuge casing, in which the inner cylinder rotates at a constant angular velocity Ωi while the outer one is stationary, is formulated for studying the secondary gas flow field due to wall thermal forcing, inflow/outflow of light gas along the boundaries, as well as the combination of these two external forcings. The analytical model includes the sixth-order differential equation for the radial boundary layer at the cylindrical curved surface in terms of the master potential (χ), which is derived from the equations of motion in an axisymmetric (r - z) plane. The linearization approximation is used, where the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations in the analytical model include constant temperature in the base state (isothermal compressible Couette flow), high aspect ratio (length large compared to the annular gap) and high Reynolds number, but there is no limitation on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operators (sixth-order in the radial direction for the generalized analytical equation) are obtained. The solution for the secondary flow is determined in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations, and excellent agreement (with a difference of less than 15%) is found between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.

  10. Analysis of high-speed rotating flow inside gas centrifuge casing

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev

    2017-11-01

    The generalized analytical model for the radial boundary layer inside the gas centrifuge casing, in which the inner cylinder rotates at a constant angular velocity Ωi while the outer one is stationary, is formulated for studying the secondary gas flow field due to wall thermal forcing, inflow/outflow of light gas along the boundaries, as well as the combination of these two external forcings. The analytical model includes the sixth-order differential equation for the radial boundary layer at the cylindrical curved surface in terms of the master potential (χ), which is derived from the equations of motion in an axisymmetric (r - z) plane. The linearization approximation is used, where the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations in the analytical model include constant temperature in the base state (isothermal compressible Couette flow), high aspect ratio (length large compared to the annular gap) and high Reynolds number, but there is no limitation on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operators (sixth-order in the radial direction for the generalized analytical equation) are obtained. The solution for the secondary flow is determined in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations, and excellent agreement (with a difference of less than 15%) is found between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.

  11. Analytical model for screening potential CO2 repositories

    USGS Publications Warehouse

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.

  12. Analytical stability and simulation response study for a coupled two-body system

    NASA Technical Reports Server (NTRS)

    Tao, K. M.; Roberts, J. R.

    1975-01-01

    An analytical stability study and a digital simulation response study of two connected rigid bodies are documented. Relative rotation of the bodies at the connection is allowed, thereby providing a model suitable for studying system stability and response during a soft-dock regime. Provisions are made for a docking port axis alignment torque and a despin torque capability for encountering spinning payloads. Although the stability analysis is based on linearized equations, the digital simulation is based on nonlinear models.

  13. Health Informatics for Neonatal Intensive Care Units: An Analytical Modeling Perspective

    PubMed Central

    Mench-Bressan, Nadja; McGregor, Carolyn; Pugh, James Edward

    2015-01-01

    The effective use of data within intensive care units (ICUs) has great potential to create new cloud-based health analytics solutions for disease prevention or earlier detection of condition onset. The Artemis project aims to achieve these goals in the area of neonatal ICUs (NICU). In this paper, we propose an analytical model for the Artemis cloud project, which will be deployed at McMaster Children's Hospital in Hamilton. We collect not only physiological data but also data from the infusion pumps attached to NICU beds. Using the proposed analytical model, we predict the amount of storage, memory, and computation power required for the system. Capacity planning and tradeoff analysis become more accurate and systematic when the proposed analytical model is applied. Numerical results are obtained using real inputs acquired from McMaster Children's Hospital and a pilot deployment of the system at The Hospital for Sick Children (SickKids) in Toronto. PMID:27170907
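
    As a back-of-the-envelope illustration of the kind of capacity estimate such a model supports, the sketch below computes ingest rate and storage from bed count, signal sampling rates, and retention period. All numbers are hypothetical assumptions, not figures from the Artemis deployment.

        # Hypothetical workload parameters (illustrative only)
        beds              = 40        # monitored NICU beds
        waveform_hz       = 500       # samples per second per waveform channel
        waveform_channels = 3         # e.g. ECG, respiration, SpO2 plethysmograph
        numeric_hz        = 1         # slow vital-sign and infusion-pump parameters
        numeric_channels  = 20
        bytes_per_sample  = 8         # value plus timestamp overhead, rough estimate
        retention_days    = 365

        samples_per_sec = beds * (waveform_hz * waveform_channels + numeric_hz * numeric_channels)
        ingest_mb_per_s = samples_per_sec * bytes_per_sample / 1e6
        storage_tb      = ingest_mb_per_s * 86_400 * retention_days / 1e6

        print(f"ingest rate : {ingest_mb_per_s:.2f} MB/s")
        print(f"storage/year: {storage_tb:.1f} TB")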

  14. Analytical method for determining rill detachment rate of purple soil as compared with that of loess soil

    USDA-ARS?s Scientific Manuscript database

    Rill detachment is an important process in rill erosion. The rill detachment rate is the fundamental basis for determination of the parameters of a rill erosion model. In this paper, an analytical method was proposed to estimate the rill detachment rate. The method is based on the exact analytical s...

  15. Numerical investigation of band gaps in 3D printed cantilever-in-mass metamaterials

    NASA Astrophysics Data System (ADS)

    Qureshi, Awais; Li, Bing; Tan, K. T.

    2016-06-01

    In this research, the negative effective mass behavior of elastic/mechanical metamaterials is exhibited by a cantilever-in-mass structure as a proposed design for creating frequency stopping band gaps, based on local resonance of the internal structure. The mass-in-mass unit cell model is transformed into a cantilever-in-mass model using the Bernoulli-Euler beam theory. An analytical model of the cantilever-in-mass structure is derived, and the effects of geometrical dimensions and material parameters on the creation of frequency band gaps are examined. A two-dimensional finite element model is created to validate the analytical results, and excellent agreement is achieved. The analytical model establishes an easily tunable metamaterial design to realize wave attenuation based on locally resonant frequency. To demonstrate feasibility for 3D printing, the analytical model is employed to design and fabricate a 3D printable mechanical metamaterial. A three-dimensional numerical experiment is performed using COMSOL Multiphysics to validate the wave attenuation performance. Results show that the cantilever-in-mass metamaterial is capable of mitigating stress waves at the desired resonance frequency. Our study successfully presents the use of one constituent material to create a 3D printed cantilever-in-mass metamaterial with negative effective mass density for stress wave mitigation purposes.
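
    A quick numerical illustration of the local-resonance mechanism behind such band gaps: for the classical mass-in-mass unit cell the dynamic effective mass is m_eff(ω) = m1 + m2·ω0²/(ω0² − ω²), which turns negative just above the internal resonance ω0; the cantilever-in-mass design maps onto this picture through the cantilever's bending stiffness. The parameter values below are arbitrary placeholders, not the paper's geometry.

        import numpy as np

        # Hypothetical unit-cell parameters (not from the paper)
        m1 = 2.0e-3        # outer (host) mass, kg
        m2 = 1.0e-3        # internal resonator mass, kg
        k2 = 4.0e3         # internal stiffness (cantilever bending stiffness analogue), N/m
        w0 = np.sqrt(k2 / m2)                     # internal resonance, rad/s

        freq = np.linspace(100.0, 800.0, 2000)    # Hz
        w = 2.0 * np.pi * freq
        m_eff = m1 + m2 * w0**2 / (w0**2 - w**2)  # dynamic effective mass of the unit cell

        negative = freq[m_eff < 0.0]
        if negative.size:
            print(f"negative effective mass (attenuation band) roughly {negative[0]:.0f}-{negative[-1]:.0f} Hz")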

  16. Proactive Supply Chain Performance Management with Predictive Analytics

    PubMed Central

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605

  17. Proactive supply chain performance management with predictive analytics.

    PubMed

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.
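
    As a schematic illustration of the kind of KPI predictive data-mining model described above, the sketch below trains a regression model on historical supply chain features to forecast a KPI. The feature names, data, and choice of a gradient-boosting regressor are invented placeholders, not the paper's metamodel, algorithms, or data set.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(0)

        # Synthetic history: [order_volume, supplier_lead_time, inventory_level, season_index]
        X = rng.normal(size=(500, 4))
        # Hypothetical KPI (e.g. on-time delivery rate) driven by the features plus noise
        y = 0.9 - 0.05 * X[:, 1] + 0.03 * X[:, 2] - 0.02 * X[:, 0] + rng.normal(0, 0.01, 500)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        model = GradientBoostingRegressor().fit(X_train, y_train)

        print(f"MAE on held-out KPI history: {mean_absolute_error(y_test, model.predict(X_test)):.4f}")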

  18. An analytical model of SAGD process considering the effect of threshold pressure gradient

    NASA Astrophysics Data System (ADS)

    Morozov, P.; Abdullin, A.; Khairullin, M.

    2018-05-01

    An analytical model is proposed for the development of super-viscous oil deposits by the method of steam-assisted gravity drainage, taking into account the nonlinear filtration law with the limiting gradient. The influence of non-Newtonian properties of oil on the productivity of a horizontal well and the cumulative steam-oil ratio are studied. Verification of the proposed model based on the results of physical modeling of the SAGD process was carried out.

  19. Argumentation in Science Education: A Model-based Framework

    NASA Astrophysics Data System (ADS)

    Böttcher, Florian; Meisert, Anke

    2011-02-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model if necessary in relation to alternative models. Secondly, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach in comparison with other analytical models will be presented to demonstrate the explicatory power and depth of the model-based perspective. Primarily, the framework of Toulmin to structurally analyse arguments is contrasted with the approach presented here. It will be demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second more complex argumentative sequence will also be analysed according to the invented analytical scheme to give a broader impression of its potential in practical use.

  20. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    DTIC Science & Technology

    2017-09-19

    Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this... Subject terms: Finite Element Analysis, Structural Acoustics, Fiber-Reinforced Composites, Physics-Based Modeling.

  1. SU-E-T-378: Evaluation of An Analytical Model for the Inter-Seed Attenuation Effect in 103-Pd Multi-Seed Implant Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safigholi, H; Soliman, A; Song, W

    Purpose: Brachytherapy treatment planning systems based on the TG-43 protocol calculate the dose in water and neglect the heterogeneity effect of seeds in multi-seed implant brachytherapy. In this research, the accuracy of a novel analytical model that we propose for the inter-seed attenuation (ISA) effect for the 103-Pd seed model is evaluated. Methods: In the analytical model, the dose perturbation due to the ISA effect for each seed in an LDR multi-seed implant for 103-Pd is calculated by assuming that the seed of interest is active and the other surrounding seeds are inactive. The cumulative dosimetric effect of all seeds is then summed using the superposition principle. The model is based on pre-computed Monte Carlo (MC) simulated 3D kernels of the dose perturbations caused by the ISA effect. The cumulative ISA effect due to multiple surrounding seeds is obtained by a simple multiplication of the individual ISA effect of each seed, which is determined by the distance from the seed of interest. This novel algorithm is then compared with full MC water-based simulations (FMCW). Results: The results show that the dose perturbation model we propose is in excellent agreement with the FMCW values for a case with three seeds separated by 1 cm. The average difference between the model and the FMCW simulations was less than 8%±2%. Conclusion: Using the proposed novel analytical ISA effect model, one could expedite the corrections due to the ISA dose perturbation effects during permanent seed 103-Pd brachytherapy planning with minimal increase in time, since the model is based on multiplications and superposition. This model can be applied, in principle, to any other brachytherapy seeds. Further work is necessary to validate this model on more complicated geometries.
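
    A very schematic sketch of the multiplication-and-superposition bookkeeping described above: each active seed contributes a water-kernel dose scaled by the product of single-seed perturbation factors from the surrounding inactive seeds. The dose function, the form of the perturbation factor, and all geometry values are hypothetical placeholders; a real implementation would look the factors up from the pre-computed Monte Carlo kernels, which are not reproduced here.

        import numpy as np

        def tg43_dose(active_pos, point):
            """Toy TG-43-like water dose: inverse-square fall-off only (placeholder)."""
            r = np.linalg.norm(point - active_pos)
            return 1.0 / max(r, 0.05) ** 2

        def isa_factor(perturber_pos, point, active_pos):
            """Hypothetical single-seed perturbation factor; a real kernel would be
            looked up from pre-computed Monte Carlo tables."""
            d = np.linalg.norm(perturber_pos - active_pos)
            return 1.0 - 0.05 * np.exp(-d / 1.0)     # weaker effect for more distant seeds

        seeds = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])  # cm
        point = np.array([0.5, 0.5, 0.0])

        dose = 0.0
        for i, active in enumerate(seeds):
            unperturbed = tg43_dose(active, point)
            # cumulative ISA correction: product of individual factors from the other seeds
            correction = np.prod([isa_factor(s, point, active)
                                  for j, s in enumerate(seeds) if j != i])
            dose += unperturbed * correction          # superposition over active seeds

        print(f"ISA-corrected dose at the point (arbitrary units): {dose:.3f}")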

  2. Analytical model of cracking due to rebar corrosion expansion in concrete considering the structure internal force

    NASA Astrophysics Data System (ADS)

    Lin, Xiangyue; Peng, Minli; Lei, Fengming; Tan, Jiangxian; Shi, Huacheng

    2017-12-01

    Based on the assumptions of uniform corrosion and linear elastic expansion, an analytical model of cracking due to rebar corrosion expansion in concrete was established, which is able to take the structure's internal force into account. Then, by means of the complex variable function theory and series expansion techniques established by Muskhelishvili, the corresponding stress component functions of the concrete around the reinforcement were obtained. A comparative analysis was also conducted between a numerical simulation model and the present model. The results show that the calculation results of both methods are consistent with each other, and the numerical deviation is less than 10%, proving that the analytical model established in this paper is reliable.

  3. Experimental investigation and numerical simulation of 3He gas diffusion in simple geometries: implications for analytical models of 3He MR lung morphometry.

    PubMed

    Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M

    2010-06-01

    Models of lung acinar geometry have been proposed to analytically describe the diffusion of (3)He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the (3)He ADC. In this work, major limitations in these analytical models are highlighted in simple diffusion weighted experiments with (3)He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced and the basic reliance of the cylinder model and other ADC-based approaches on a Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for (3)He ADC measurements in human lungs. (c) 2010 Elsevier Inc. All rights reserved.

  4. Analytical transmissibility based transfer path analysis for multi-energy-domain systems using four-pole parameter theory

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Mohammad Jalali; Behdinan, Kamran

    2017-10-01

    The increasing demand to minimize undesired vibration and noise levels in several high-tech industries has generated a renewed interest in vibration transfer path analysis. Analyzing vibration transfer paths within a system is of crucial importance in designing an effective vibration isolation strategy. Most of the existing vibration transfer path analysis techniques are empirical, which makes them suitable for diagnosis and troubleshooting purposes. The lack of an analytical transfer path analysis that can be used in the design stage is the main motivation behind this research. In this paper, an analytical transfer path analysis based on the four-pole theory is proposed for multi-energy-domain systems. The bond graph modeling technique, which is an effective approach to modeling multi-energy-domain systems, is used to develop the system model. An electro-mechanical system is used as a benchmark example to elucidate the effectiveness of the proposed technique. An algorithm to obtain the equivalent four-pole representation of a dynamical system based on the corresponding bond graph model is also presented.
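
    A small sketch of the four-pole idea underlying such a transfer path analysis: each subsystem is represented by a 2x2 four-pole (transmission) matrix relating force and velocity at its input to those at its output, and a chain of subsystems is simply the matrix product. The mass, spring-damper, and termination values below are arbitrary illustrations at a single frequency, not elements of the paper's bond-graph benchmark.

        import numpy as np

        w = 2.0 * np.pi * 50.0          # analysis frequency, rad/s (assumed)

        def mass(m):
            """Four-pole of a lumped mass: both terminals move together, force difference accelerates it."""
            return np.array([[1.0, 1j * w * m],
                             [0.0, 1.0]])

        def spring_damper(k, c):
            """Four-pole of a massless spring-damper: force is transmitted, velocity drops across it."""
            impedance = k / (1j * w) + c
            return np.array([[1.0, 0.0],
                             [1.0 / impedance, 1.0]])

        # Chain: isolator -> intermediate mass -> second isolator
        T = spring_damper(2.0e5, 50.0) @ mass(1.5) @ spring_damper(1.0e5, 20.0)

        Z_term = 1.0e4                  # assumed termination impedance of the receiver, N*s/m
        A, B = T[0, 0], T[0, 1]
        force_transmissibility = 1.0 / (A + B / Z_term)
        print(f"force transmissibility at 50 Hz: {abs(force_transmissibility):.3e}")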

  5. Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive.

    PubMed

    Richardson, Magnus J E

    2007-08-01

    Integrate-and-fire models are mainstays of the study of single-neuron response properties and emergent states of recurrent networks of spiking neurons. They also provide an analytical base for perturbative approaches that treat important biological details, such as synaptic filtering, synaptic conductance increase, and voltage-activated currents. Steady-state firing rates of both linear and nonlinear integrate-and-fire models, receiving fluctuating synaptic drive, can be calculated from the time-independent Fokker-Planck equation. The dynamic firing-rate response is less easy to extract, even at the first-order level of a weak modulation of the model parameters, but is an important determinant of neuronal response and network stability. For the linear integrate-and-fire model the response to modulations of current-based synaptic drive can be written in terms of hypergeometric functions. For the nonlinear exponential and quadratic models no such analytical forms for the response are available. Here it is demonstrated that a rather simple numerical method can be used to obtain the steady-state and dynamic response for both linear and nonlinear models to parameter modulation in the presence of current-based or conductance-based synaptic fluctuations. To complement the full numerical solution, generalized analytical forms for the high-frequency response are provided. A special case is also identified--time-constant modulation--for which the response to an arbitrarily strong modulation can be calculated exactly.
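
    A minimal sketch in the spirit of the simple numerical treatment mentioned above: integrate leaky integrate-and-fire dynamics with fluctuating current-based drive and estimate the steady-state firing rate from the spike count. The parameter values are generic textbook numbers, assumed here, not those of the paper, and this Monte Carlo estimate stands in for the Fokker-Planck calculation the abstract refers to.

        import numpy as np

        rng = np.random.default_rng(1)

        # Generic LIF parameters (assumed)
        tau, E_L, V_th, V_re = 20e-3, -70e-3, -50e-3, -60e-3     # s, V
        mu, sigma = 15e-3, 5e-3                                   # mean and fluctuation of drive, V
        dt, T = 0.05e-3, 20.0                                     # time step and duration, s

        V, spikes = E_L, 0
        for _ in range(int(T / dt)):
            V += dt / tau * (E_L - V + mu) + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
            if V >= V_th:
                V = V_re                                           # reset after a spike
                spikes += 1

        print(f"steady-state firing rate: {spikes / T:.1f} Hz")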

  6. Satellite attitude motion models for capture and retrieval investigations

    NASA Technical Reports Server (NTRS)

    Cochran, John E., Jr.; Lahr, Brian S.

    1986-01-01

    The primary purpose of this research is to provide mathematical models which may be used in the investigation of various aspects of the remote capture and retrieval of uncontrolled satellites. Emphasis has been placed on analytical models; however, to verify analytical solutions, numerical integration must be used. Also, for satellites of certain types, numerical integration may be the only practical or perhaps the only possible method of solution. First, to provide a basis for analytical and numerical work, uncontrolled satellites were categorized using criteria based on: (1) orbital motions, (2) external angular momenta, (3) internal angular momenta, (4) physical characteristics, and (5) the stability of their equilibrium states. Several analytical solutions for the attitude motions of satellite models were compiled, checked, corrected in some minor respects and their short-term prediction capabilities were investigated. Single-rigid-body, dual-spin and multi-rotor configurations are treated. To verify the analytical models and to see how the true motion of a satellite which is acted upon by environmental torques differs from its corresponding torque-free motion, a numerical simulation code was developed. This code contains a relatively general satellite model and models for gravity-gradient and aerodynamic torques. The spacecraft physical model for the code and the equations of motion are given. The two environmental torque models are described.

  7. Regarding on the prototype solutions for the nonlinear fractional-order biological population model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskonus, Haci Mehmet, E-mail: hmbaskonus@gmail.com; Bulut, Hasan

    2016-06-08

    In this study, we introduce a newly extended method, called the Improved Bernoulli sub-equation function method, based on the Bernoulli sub-ODE method. The proposed analytical scheme is described step by step. We have obtained some new analytical solutions to the nonlinear fractional-order biological population model by using this technique. Two- and three-dimensional surfaces of the analytical solutions have been drawn using Wolfram Mathematica 9. Finally, a conclusion is presented summarizing the important findings of this study.

  8. Experimental, Numerical and Analytical Characterization of Slosh Dynamics Applied to In-Space Propellant Storage, Management and Transfer

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah M.; Kirk, Daniel; Gutierrez, Hector; Marsell, Brandon; Schallhorn, Paul; Lapilli, Gabriel D.

    2015-01-01

    Experimental and numerical results are presented from a new cryogenic fluid slosh program at the Florida Institute of Technology (FIT). Water and cryogenic liquid nitrogen are used in various ground-based tests with an approximately 30 cm diameter spherical tank to characterize damping, slosh mode frequencies, and slosh forces. The experimental results are compared to a computational fluid dynamics (CFD) model for validation. An analytical model is constructed from prior work for comparison. Good agreement is seen between experimental, numerical, and analytical results.

  9. Cost and schedule analytical techniques development

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This contract provided technical services and products to the Marshall Space Flight Center's Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) for the period of 3 Aug. 1991 - 30 Nov. 1994. Accomplishments summarized cover the REDSTAR data base, NASCOM hard copy data base, NASCOM automated data base, NASCOM cost model, complexity generators, program planning, schedules, NASA computer connectivity, other analytical techniques, and special project support.

  10. The Effectiveness of CBL Model to Improve Analytical Thinking Skills the Students of Sport Science

    ERIC Educational Resources Information Center

    Sudibyo, Elok; Jatmiko, Budi; Widodo, Wahono

    2016-01-01

    One purpose of sport science undergraduate education is to produce analysts in sport. However, the analytical thinking skills of sport science students are generally still relatively low in the context of sport. This study aimed to describe the effectiveness of the Physics Learning Model in Sport Context, Context Based Learning (CBL)…

  11. Modal analysis of graphene-based structures for large deformations, contact and material nonlinearities

    NASA Astrophysics Data System (ADS)

    Ghaffari, Reza; Sauer, Roger A.

    2018-06-01

    The nonlinear frequencies of pre-stressed graphene-based structures, such as flat graphene sheets and carbon nanotubes, are calculated. These structures are modeled with a nonlinear hyperelastic shell model. The model is calibrated with quantum mechanics data and is valid for high strains. Analytical solutions of the natural frequencies of various plates are obtained for the Canham bending model by assuming infinitesimal strains. These solutions are used for the verification of the numerical results. The performance of the model is illustrated by means of several examples. Modal analysis is performed for square plates under pure dilatation or uniaxial stretch, circular plates under pure dilatation or under the effects of an adhesive substrate, and carbon nanotubes under uniaxial compression or stretch. The adhesive substrate is modeled with van der Waals interaction (based on the Lennard-Jones potential) and a coarse grained contact model. It is shown that the analytical natural frequencies underestimate the real ones, and this should be considered in the design of devices based on graphene structures.

  12. Electromagnetic Radiation in the Atmosphere Generated by Excess Negative Charge in a Nuclear-Electromagnetic Cascade

    NASA Astrophysics Data System (ADS)

    Malyshevsky, V. S.; Fomin, G. V.

    2017-01-01

    On the basis of the analytical model "PARMA" (PHITS-based Analytical Radiation Model in the Atmosphere), developed to model particle fluxes of secondary cosmic radiation in the Earth's atmosphere, we have calculated the characteristics of radio waves emitted by excess negative charge in an electromagnetic cascade. The results may be of use in an analysis of experimental data on radio emission of electron-photon showers in the atmosphere.

  13. Sharing the Data along with the Responsibility: Examining an Analytic Scale-Based Model for Assessing School Climate.

    ERIC Educational Resources Information Center

    Shindler, John; Taylor, Clint; Cadenas, Herminia; Jones, Albert

    This study was a pilot effort to examine the efficacy of an analytic trait scale school climate assessment instrument and democratic change system in two urban high schools. Pilot study results indicate that the instrument shows promising soundness in that it exhibited high levels of validity and reliability. In addition, the analytic trait format…

  14. Consequences of Base Time for Redundant Signals Experiments

    PubMed Central

    Townsend, James T.; Honey, Christopher

    2007-01-01

    We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
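
    For concreteness, the sketch below implements the basic race model inequality test discussed above: the empirical redundant-signals CDF is compared with the sum of the single-signal CDFs, F_AB(t) ≤ F_A(t) + F_B(t), and any positive violation counts as evidence against a race account. The reaction-time samples are simulated placeholders, not experimental data, and no base-time correction is applied.

        import numpy as np

        rng = np.random.default_rng(2)

        # Simulated reaction times in seconds (placeholders)
        rt_a  = rng.normal(0.45, 0.06, 400)                     # single signal A
        rt_b  = rng.normal(0.47, 0.06, 400)                     # single signal B
        rt_ab = np.minimum(rng.normal(0.45, 0.06, 400),
                           rng.normal(0.47, 0.06, 400)) - 0.02  # redundant condition (coactivation-like)

        def ecdf(samples, t):
            """Empirical CDF evaluated at times t."""
            return np.searchsorted(np.sort(samples), t, side="right") / samples.size

        t_grid = np.linspace(0.25, 0.60, 200)
        bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid), 1.0)
        violation = ecdf(rt_ab, t_grid) - bound
        print(f"maximum race-model-inequality violation: {violation.max():.3f}")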

  15. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053

  16. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using it to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict response of an existing sensing film to new target analytes.

  17. Modeling Carbon-Black/Polymer Composite Sensors

    PubMed Central

    Lei, Hua; Pitt, William G.; McGrath, Lucas K.; Ho, Clifford K.

    2012-01-01

    Conductive polymer composite sensors have shown great potential in identifying gaseous analytes. To more thoroughly understand the physical and chemical mechanisms of this type of sensor, a mathematical model was developed by combining two sub-models: a conductivity model and a thermodynamic model, which gives a relationship between the vapor concentration of analyte(s) and the change of the sensor signals. In this work, 64 chemiresistors representing eight different carbon concentrations (8–60 vol% carbon) were constructed by depositing thin films of a carbon-black/polyisobutylene composite onto concentric spiral platinum electrodes on a silicon chip. The responses of the sensors were measured in dry air and at various vapor pressures of toluene and trichloroethylene. Three parameters in the conductivity model were determined by fitting the experimental data. It was shown that by applying this model, the sensor responses can be adequately predicted for given vapor pressures; furthermore the analyte vapor concentrations can be estimated based on the sensor responses. This model will guide the improvement of the design and fabrication of conductive polymer composite sensors for detecting and identifying mixtures of organic vapors. PMID:22518071

  18. Predicting CH4 adsorption capacity of microporous carbon using N2 isotherm and a new analytical model

    USGS Publications Warehouse

    Sun, Jielun; Chen, S.; Rostam-Abadi, M.; Rood, M.J.

    1998-01-01

    A new analytical pore size distribution (PSD) model was developed to predict the CH4 adsorption (storage) capacity of microporous adsorbent carbon. The model is based on a 3-D adsorption isotherm equation derived from statistical mechanical principles. Least-squares error minimization is used to solve for the PSD without any pre-assumed distribution function. In comparison with several well-accepted analytical methods from the literature, this 3-D model offers a relatively realistic PSD description for selected reference materials, including activated carbon fibers. N2 and CH4 adsorption data were correlated using the 3-D model for the commercial carbons BPL and AX-21. Predicted CH4 adsorption isotherms, based on N2 adsorption at 77 K, were in reasonable agreement with the experimental CH4 isotherms. Modeling results indicate that not all pores contribute the same percentage Vm/Vs for CH4 storage due to different adsorbed CH4 densities. Pores near 8-9 Å show a higher Vm/Vs on an equivalent volume basis than do larger pores.
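
    A schematic of the least-squares PSD inversion idea: the measured isotherm is modeled as a weighted sum of pre-computed single-pore kernel isotherms, and the weights (the pore size distribution) are recovered without assuming a distribution shape, here via non-negative least squares. The Langmuir-style kernel and all numbers below are crude placeholders, not the statistical-mechanical 3-D kernel or carbons of the paper.

        import numpy as np
        from scipy.optimize import nnls

        pressures  = np.logspace(-4, 0, 40)          # relative pressures p/p0
        pore_sizes = np.linspace(4.0, 20.0, 17)      # candidate pore widths, angstrom

        def kernel(p, w):
            """Placeholder single-pore isotherm (Langmuir-like): smaller pores fill earlier."""
            b = 200.0 / w**2                          # crude affinity-vs-size trend
            return b * p / (1.0 + b * p)

        A = np.column_stack([kernel(pressures, w) for w in pore_sizes])

        # Synthetic "measured" isotherm generated from a known two-mode PSD plus noise
        true_psd = np.exp(-0.5 * ((pore_sizes - 8.0) / 1.5) ** 2) \
                 + 0.5 * np.exp(-0.5 * ((pore_sizes - 15.0) / 2.0) ** 2)
        measured = A @ true_psd + np.random.default_rng(3).normal(0, 0.01, pressures.size)

        fitted_psd, residual = nnls(A, measured)      # non-negative least squares inversion
        for w, f in zip(pore_sizes, fitted_psd):
            print(f"pore width {w:5.1f} A : volume weight {f:.3f}")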

  19. Nonlinear feedback control for high alpha flight

    NASA Technical Reports Server (NTRS)

    Stalford, Harold

    1990-01-01

    Analytical aerodynamic models are derived from a high alpha 6 DOF wind tunnel model. One detailed model requires some interpolation between nonlinear functions of alpha. Another analytical model requires no interpolation and as such is a completely continuous model. Flight path optimization is conducted for the basic maneuvers: half-loop, 90 degree pitch-up, and level turn. The optimal control analysis uses the derived analytical model in the equations of motion and is based on both moment and force equations. The maximum principle solution for the half-loop is a poststall trajectory that performs the half-loop in 13.6 seconds. The agility provided by thrust vectoring capability had minimal effect on reducing the maneuver time. By means of thrust vectoring control, the 90 degree pitch-up maneuver can be executed in a small space over a short time interval. The agility capability of thrust vectoring is quite beneficial for pitch-up maneuvers. The level turn results are currently based only on outer-layer solutions of singular perturbation. Poststall solutions provide high turn rates but generate higher losses of energy than classical sustained solutions.

  20. Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.

    2017-05-01

    This work deals with the determination of the transverse shear moduli of a Nomex® honeycomb core for sandwich panels. The out-of-plane shear characteristics of such panels depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically: numerical simulations were performed using a unit-cell model, and three analytical approaches were applied. The analytical calculations showed that two of the approaches provided reasonable predictions of the transverse shear modulus compared with the experimental results, whereas the approach based upon classical lamination theory showed large deviations from the experimental data. The numerical simulations showed a trend similar to that of the analytical models.

  1. Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.

    PubMed

    Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim

    2016-04-01

    Studies on the response of the intervertebral disk (IVD) to various loads and postures are essential to understand the disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validate FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation, and composite-material theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method, and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. They demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF, and show that anisotropy reduces the stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs and represents a distinctive groundwork able to sustain future refinements. The paper also suggests important features that may be included to improve model realism.

  2. Original analytic solution of a half-bridge modelled as a statically indeterminate system

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra; Barhalescu, Mihaela

    2016-12-01

    The paper presents an original computer-based analytical model of a half-bridge belonging to a circular settling tank. The primary unknown is computed using the force method, with the coefficients of the canonical equation calculated either by discretizing the bending moment diagram into trapezoids or by using the relations specific to polygons. A second algorithm, based on the method of initial parameters, is also presented. Analyzing the new solution, we came to the conclusion that most of the computer code developed for other models may be reused. The results are useful for evaluating the behavior of the structure and for comparison with the results of finite element models.

  3. Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.

    PubMed

    Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis

    2016-07-01

    Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description of the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. Finally, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scopes of the decision-analytic models used in the economic evaluation of pharmacological treatments of AD. The major areas for improvement in future models of AD are to include the domains of cognition, function, and behavior, rather than cognition alone; to include a detailed description of how the data used to model the natural course of disease progression were derived; to state and justify the economic model selected and its structural assumptions and limitations; to provide a detailed (rather than high-level) description of the cost components included in the model; and to report on the face, internal, and cross-validity of the model to strengthen the credibility of and confidence in the model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, several areas of improvement remain that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.

  4. Probabilistic assessment methodology for continuous-type petroleum accumulations

    USGS Publications Warehouse

    Crovelli, R.A.

    2003-01-01

    The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.

  5. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. Calibration within the framework of thin plate theory undoubtedly has higher accuracy and a broader scope than that within the well-established beam theory. However, accurate analytic determination of the constant based on thin plate theory has been perceived as an extremely difficult issue. In this paper, we implement thin plate theory-based analytic modeling of the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and the Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found: the normalized spring constant depends only on the Poisson's ratio, the normalized dimensions, and the normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as a benchmark for accurate calibration of rectangular AFM cantilevers.

  6. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine whether the covariance among the observed variables could yield a descriptive equation-based model or, better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis with the resulting models are presented.

  7. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    NASA Technical Reports Server (NTRS)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between the analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio that maximizes combustion temperature over a range of pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
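
    As an illustrative aside only: the speed and accuracy contrast between analytic and finite-difference derivatives that the record above reports can be reproduced on a toy problem in a few lines of Python. The objective function, its hand-derived gradient, and all numbers below are assumptions chosen for illustration and are unrelated to the CEA/OpenMDAO implementation.

        import time
        import numpy as np

        def f(x):
            # toy smooth objective standing in for a thermodynamic residual
            return np.sum(np.exp(0.3 * x) + x ** 2)

        def grad_analytic(x):
            # exact gradient of f, derived by hand
            return 0.3 * np.exp(0.3 * x) + 2.0 * x

        def grad_fd(x, h=1e-6):
            # one-sided finite differences: one extra evaluation of f per variable
            g = np.zeros_like(x)
            f0 = f(x)
            for i in range(x.size):
                xp = x.copy()
                xp[i] += h
                g[i] = (f(xp) - f0) / h
            return g

        x = np.linspace(-1.0, 1.0, 2000)
        t0 = time.perf_counter(); ga = grad_analytic(x); t1 = time.perf_counter()
        gf = grad_fd(x); t2 = time.perf_counter()
        print("max abs error of FD gradient:", np.max(np.abs(ga - gf)))
        print("analytic %.2e s vs finite difference %.2e s" % (t1 - t0, t2 - t1))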

  8. Validation of a BOTDR-based system for the detection of smuggling tunnels

    NASA Astrophysics Data System (ADS)

    Elkayam, Itai; Klar, Assaf; Linker, Raphael; Marshall, Alec M.

    2010-04-01

    Cross-border smuggling tunnels enable unmonitored movement of people, drugs and weapons and pose a very serious threat to homeland security. Recently, Klar and Linker (2009) [SPIE paper No. 731603] presented an analytical study of the feasibility of a Brillouin Optical Time Domain Reflectometry (BOTDR)-based system for the detection of small smuggling tunnels. The current study extends this work by validating the analytical models against real strain measurements in soil obtained from small-scale experiments in a geotechnical centrifuge. The soil strains were obtained using an image analysis method that tracked the displacement of discrete patches of soil through a sequence of digital images of the soil around the tunnel during the centrifuge test. The results of the present study are in agreement with those of a previous study that was based on synthetic signals generated using empirical and analytical models from the literature.

  9. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol

    Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.

  10. Design of permanent magnet eddy current brake for a small scaled electromagnetic launch model

    NASA Astrophysics Data System (ADS)

    Zhou, Shigui; Yu, Haitao; Hu, Minqiang; Huang, Lei

    2012-04-01

    A variable pole-pitch double-sided permanent magnet (PM) linear eddy current brake (LECB) is proposed for a small scaled electromagnetic launch model. A two-dimensional (2D) analytical steady state model is presented for the double-sided PM-LECB, and the expression for the braking force is derived. Based on the analytical model, the material and eddy current skin effect of the conducting plate are analyzed. Moreover, a variable pole-pitch double-sided PM-LECB is proposed for the effective braking of the moving plate. In addition, the braking force is predicted by finite element (FE) analysis, and the simulated results are in good agreement with the analytical model. Finally, a prototype is presented to test the braking profile for validation of the proposed design.

  11. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE PAGES

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol; ...

    2017-07-10

    Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.

  12. Modeling of heat flow and effective thermal conductivity of fractured media: Analytical and numerical methods

    NASA Astrophysics Data System (ADS)

    Nguyen, S. T.; Vu, M.-H.; Vu, M. N.; Tang, A. M.

    2017-05-01

    The present work aims to model the thermal conductivity of fractured materials using homogenization-based analytical and pattern-based numerical methods. These materials are considered as a network of cracks distributed inside a solid matrix, and heat flow through such media is perturbed by the crack system. The problem of heat flow across a single crack is first investigated. The classical Eshelby solution, extended to the thermal conduction problem of an ellipsoidal inclusion embedded in an infinite homogeneous matrix, gives an analytical solution for the temperature discontinuity across a non-conducting penny-shaped crack. This solution is then validated by numerical simulation based on the finite element method, which also allows the effect of crack conductivity to be analyzed. The problem of a single crack is then extended to a medium containing multiple cracks. Analytical estimates of the effective thermal conductivity, which take into account the interaction between cracks and their spatial distribution, are developed for the case of non-conducting cracks. A pattern-based numerical method is then employed for both non-conducting and conducting cracks. In the case of non-conducting cracks, the numerical and analytical methods, both of which account for the spatial distribution of the cracks, agree perfectly. In the case of conducting cracks, numerical analysis of the crack conductivity effect shows that highly conducting cracks only weakly affect heat flow and the effective thermal conductivity of fractured media.

  13. Geometric model of pseudo-distance measurement in satellite location systems

    NASA Astrophysics Data System (ADS)

    Panchuk, K. L.; Lyashkov, A. A.; Lyubchinov, E. V.

    2018-04-01

    The existing mathematical model of pseudo-distance measurement in satellite location systems does not provide an exact solution of the problem, but rather an approximate one. This inaccuracy, together with the bias in the measured distance from satellite to receiver, results in position errors of several meters. Hence, refinement of the current mathematical model is clearly relevant. The solution of the system of quadratic equations used in the current mathematical model is based on linearization. The objective of the paper is to refine the current mathematical model and to derive an analytical solution of the system of equations on its basis. To attain this objective, a geometric analysis is performed and a geometric interpretation of the equations is given. As a result, an equivalent system of equations that admits an analytical solution is derived. An example of the analytical solution's implementation is presented. Applying the analytical solution algorithm to the problem of pseudo-distance measurement in satellite location systems makes it possible to improve the accuracy of such measurements.
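
    For orientation, the conventional linearized (Gauss-Newton) least-squares treatment of the pseudo-range equations that the record above sets out to refine can be sketched in a few lines of Python. The satellite positions, receiver coordinates, and clock bias below are invented purely for illustration and are not taken from the paper.

        import numpy as np

        # hypothetical satellite positions (m) and a true receiver state, for illustration only
        sats = np.array([[15600e3,  7540e3, 20140e3],
                         [18760e3,  2750e3, 18610e3],
                         [17610e3, 14630e3, 13480e3],
                         [19170e3,  6100e3, 18390e3]])
        x_true = np.array([3871e3, 2800e3, 4840e3])   # receiver position (m)
        b_true = 3.0e-3 * 299792458.0                 # clock bias expressed in metres

        rho = np.linalg.norm(sats - x_true, axis=1) + b_true   # noiseless pseudo-ranges

        # standard iterative linearization of the quadratic pseudo-range equations
        x, b = np.zeros(3), 0.0
        for _ in range(10):
            r = np.linalg.norm(sats - x, axis=1)
            G = np.hstack([(x - sats) / r[:, None], np.ones((len(sats), 1))])  # geometry matrix
            dz = rho - (r + b)
            dx = np.linalg.lstsq(G, dz, rcond=None)[0]
            x += dx[:3]
            b += dx[3]
        print("position error (m):", np.linalg.norm(x - x_true), "clock-bias error (m):", abs(b - b_true))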

  14. An analytical model for solute transport through a GCL-based two-layered liner considering biodegradation.

    PubMed

    Guan, C; Xie, H J; Wang, Y Z; Chen, Y M; Jiang, Y S; Tang, X W

    2014-01-01

    An analytical model for solute advection and dispersion in a two-layered liner consisting of a geosynthetic clay liner (GCL) and a soil liner (SL), considering the effect of biodegradation, was proposed. The analytical solution was derived by Laplace transformation and was validated over a range of parameters using the finite-layer-method-based software Pollute v7.0. Results show that if the half-life of the solute in the GCL is longer than 1 year, degradation in the GCL can be neglected for solute transport in the GCL/SL system. When the half-life in the GCL is less than 1 year, neglecting the effect of degradation in the GCL on solute migration results in a large difference in the relative base concentration of the GCL/SL (e.g., 32% for the case with a half-life of 0.01 year). The 100-year solute base concentration can be reduced by a factor of 2.2 when the hydraulic conductivity of the SL is reduced by an order of magnitude, and by a factor of 155 when the half-life of the contaminant in the SL is reduced by an order of magnitude. The effect of degradation is thus more important than hydraulic conductivity in improving the groundwater protection level. The analytical solution can be used for fitting experimental data, verifying complicated numerical models, and preliminary design of landfill liner systems. © 2013.

  15. An analytical model for regular respiratory signals derived from the probability density function of Rayleigh distribution.

    PubMed

    Li, Xin; Li, Ye

    2015-01-01

    Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris during disaster rescue, or to predict breathing motion to allow beam delivery under free-breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, which, however, is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. This defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of the Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the least-squares sense, and the evaluation shows that our model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.
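
    The idea of shaping a breathing cycle with the Rayleigh probability density function can be illustrated with a small least-squares fit. The parameterization, synthetic data, and noise level below are assumptions made for illustration; they are not the model equations or the real-life RRS data of the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def rayleigh_cycle(t, a, sigma, t0):
            # one breathing cycle shaped like a shifted, scaled Rayleigh PDF;
            # the long tail mimics a slowly descending end-of-exhale phase
            tau = np.clip(t - t0, 0.0, None)
            return a * (tau / sigma ** 2) * np.exp(-tau ** 2 / (2.0 * sigma ** 2))

        # synthetic "measured" cycle with noise, standing in for a real-life RRS segment
        t = np.linspace(0.0, 5.0, 500)
        y = rayleigh_cycle(t, 1.0, 1.2, 0.3) + 0.02 * np.random.default_rng(0).normal(size=t.size)

        popt, _ = curve_fit(rayleigh_cycle, t, y, p0=[0.8, 1.0, 0.0])
        rms = np.sqrt(np.mean((rayleigh_cycle(t, *popt) - y) ** 2))
        print("fitted (a, sigma, t0):", popt, "rms residual:", rms)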

  16. Modeling Analyte Transport and Capture in Porous Bead Sensors

    PubMed Central

    Chou, Jie; Lennart, Alexis; Wong, Jorge; Ali, Mehnaaz F.; Floriano, Pierre N.; Christodoulides, Nicolaos; Camp, James; McDevitt, John T.

    2013-01-01

    Porous agarose microbeads, with high surface-to-volume ratios and high binding densities, are attracting attention as highly sensitive, affordable sensor elements for a variety of high-performance bioassays. While such polymer microspheres have been extensively studied and reported on previously and are now moving into real-world clinical practice, very little work has been completed to date to model the convection, diffusion, and binding kinetics of soluble reagents captured within such fibrous networks. Here, we report the development of a three-dimensional computational model and provide initial evidence for its agreement with experimental outcomes derived from the capture and detection of representative protein and genetic biomolecules in 290 μm porous beads. We compare this model to antibody-mediated capture of C-reactive protein and bovine serum albumin, along with hybridization of oligonucleotide sequences to DNA probes. These results suggest that, owing to the porous interior of the agarose bead, internal analyte transport is both diffusion- and convection-based and that, regardless of the nature of the analyte, the bead interiors exhibit a trickle of convection-driven internal flow. Based on this model, the internal-to-external flow rate ratio is found to be in the range of 1:3100 to 1:170 for beads with agarose concentrations ranging from 0.5% to 8% for the sensor ensembles studied here. Further, both the model and experimental evidence suggest that binding kinetics strongly affect the distribution of captured analytes within the beads. These findings reveal that high association constants create a steep moving boundary in which unbound analytes are held back at the periphery of the bead sensor, whereas low association constants create a shallower moving boundary in which unbound analytes diffuse further into the bead before binding. These models agree with experimental evidence and thus serve as a new tool set for the study of bio-agent transport processes within a new class of medical microdevices. PMID:22250703

  17. Nonlinear analysis for dual-frequency concurrent energy harvesting

    NASA Astrophysics Data System (ADS)

    Yan, Zhimiao; Lei, Hong; Tan, Ting; Sun, Weipeng; Huang, Wenhu

    2018-05-01

    The dual-frequency responses of a hybrid energy harvester undergoing base excitation and galloping were analyzed numerically. In this work, an approximate dual-frequency analytical method is proposed for the nonlinear analysis of such a system. To obtain approximate analytical solutions of the fully coupled distributed-parameter model, the forcing interactions are first neglected. Then, the electromechanically decoupled governing equation is developed using the equivalent structure method. The hybrid mechanical response is finally separated into self-excited and forced responses for deriving the analytical solutions, which are confirmed by numerical simulations of the fully coupled model. The forced response has a strong impact on the self-excited response. The boundary of the Hopf bifurcation is analytically determined by the onset wind speed of galloping, which increases linearly with the electrical damping. A quenching phenomenon appears when the increasing base excitation suppresses the galloping. The theoretical quenching boundary depends on the forced mode velocity. The quenching region increases with the base acceleration and electrical damping, but decreases with the wind speed. In contrast to the base-excitation-alone case, the presence of the aerodynamic force protects the hybrid energy harvester at resonance from damage caused by excessively large displacements. In terms of harvested power, the hybrid system surpasses both the base-excitation-alone and the galloping-alone systems. This study advances our knowledge of the intrinsic nonlinear dynamics of the dual-frequency energy harvesting system by taking advantage of the analytical solutions.

  18. Mathematical model of polyethylene pipe bending stress state

    NASA Astrophysics Data System (ADS)

    Serebrennikov, Anatoly; Serebrennikov, Daniil

    2018-03-01

    The introduction of new machines and new technologies for polyethylene pipeline installation is usually based on the flexibility of the polyethylene pipe. It is necessary that the bending stresses do not lead to irreversible deformation of the polyethylene pipe or to violation of its strength characteristics. The derivation of a mathematical model that allows the bending stress level of polyethylene pipes to be calculated analytically, taking nonlinear characteristics into account, is presented below. All analytical calculations made with the mathematical model are experimentally verified and confirmed.

  19. Improving Adolescent Judgment and Decision Making

    PubMed Central

    Dansereau, Donald F.; Knight, Danica K.; Flynn, Patrick M.

    2013-01-01

    Human judgment and decision making (JDM) has substantial room for improvement, especially among adolescents. Increased technological and social complexity “ups the ante” for developing impactful JDM interventions and aids. Current explanatory advances in this field emphasize dual-processing models that incorporate both experiential and analytic processing systems. According to these models, judgments and decisions based on the experiential system are rapid and stem from automatic reference to previously stored episodes, whereas those based on the analytic system are slower and consciously developed. These models also hypothesize that metacognitive (self-monitoring) activities embedded in the analytic system influence how and when the two systems are used. What is not included in these models is the development of an intersection between the two systems. Because such an intersection is strongly suggested by memory and educational research as the basis of wisdom/expertise, the present paper describes an Integrated Judgment and Decision-Making Model (IJDM) that incorporates this component. Wisdom/expertise is hypothesized to contain a collection of schematic structures that can emerge from the accumulation of similar episodes or repeated analytic practice. As will be argued, in comparison to dual-system models, the addition of this component provides a broader basis for selecting and designing interventions to improve adolescent JDM. Its development also has implications for generally enhancing cognitive interventions by adopting principles from athletic training to create automated, expert behaviors. PMID:24391350

  20. Framework for event-based semidistributed modeling that unifies the SCS-CN method, VIC, PDM, and TOPMODEL

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2016-09-01

    Hydrologists and engineers may choose from a range of semidistributed rainfall-runoff models such as VIC, PDM, and TOPMODEL, all of which predict runoff from a distribution of watershed properties. However, these models are not easily compared to event-based data and are missing ready-to-use analytical expressions that are analogous to the SCS-CN method. The SCS-CN method is an event-based model that describes the runoff response with a rainfall-runoff curve that is a function of the cumulative storm rainfall and antecedent wetness condition. Here we develop an event-based probabilistic storage framework and distill semidistributed models into analytical, event-based expressions for describing the rainfall-runoff response. The event-based versions called VICx, PDMx, and TOPMODELx also are extended with a spatial description of the runoff concept of "prethreshold" and "threshold-excess" runoff, which occur, respectively, before and after infiltration exceeds a storage capacity threshold. For total storm rainfall and antecedent wetness conditions, the resulting ready-to-use analytical expressions define the source areas (fraction of the watershed) that produce runoff by each mechanism. They also define the probability density function (PDF) representing the spatial variability of runoff depths that are cumulative values for the storm duration, and the average unit area runoff, which describes the so-called runoff curve. These new event-based semidistributed models and the traditional SCS-CN method are unified by the same general expression for the runoff curve. Since the general runoff curve may incorporate different model distributions, it may ease the way for relating such distributions to land use, climate, topography, ecology, geology, and other characteristics.
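
    For reference, the classical SCS-CN runoff curve that the framework above generalizes takes only a few lines of Python. The sketch below uses the standard textbook form of the curve, not the probabilistic storage formulation of the paper, and the curve number and storm depths are arbitrary illustrative values.

        import numpy as np

        def scs_cn_runoff(P, CN, lam=0.2):
            # classical SCS-CN event runoff (mm) for storm rainfall P (mm);
            # S is the potential maximum retention and Ia = lam * S the initial abstraction
            S = 25400.0 / CN - 254.0
            Ia = lam * S
            P = np.asarray(P, dtype=float)
            return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

        P = np.array([10.0, 25.0, 50.0, 100.0])   # storm totals, mm
        print(scs_cn_runoff(P, CN=75))            # runoff depths, mm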

  1. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  2. Analytic expressions for Atomic Layer Deposition: coverage, throughput, and materials utilization in cross-flow, particle coating, and spatial ALD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanguas-Gil, Angel; Elam, Jeffrey W.

    2014-05-01

    In this work, the authors present analytic models for atomic layer deposition (ALD) in three common experimental configurations: cross-flow, particle coating, and spatial ALD. These models, based on the plug-flow and well-mixed approximations, allow us to determine the minimum dose times and materials utilization for all three configurations. A comparison between the three models shows that throughput and precursor utilization can each be expressed by universal equations, in which the particularity of the experimental system is contained in a single parameter related to the residence time of the precursor in the reactor. For the case of cross-flow reactors, the authors show how simple analytic expressions for the reactor saturation profiles agree well with experimental results. Consequently, the analytic model can be used to extract information about the ALD surface chemistry (e.g., the reaction probability) by comparing the analytic and experimental saturation profiles, providing a useful tool for characterizing new and existing ALD processes. © 2014 American Vacuum Society

  3. Transforming Undergraduate Education Through the use of Analytical Reasoning (TUETAR)

    NASA Astrophysics Data System (ADS)

    Bishop, M. P.; Houser, C.; Lemmons, K.

    2015-12-01

    Traditional learning limits the potential for self-discovery, and the use of data and knowledge to understand Earth system relationships, processes, feedback mechanisms and system coupling. It is extremely difficult for undergraduate students to analyze, synthesize, and integrate quantitative information related to complex systems, as many concepts may not be mathematically tractable or yet to be formalized. Conceptual models have long served as a means for Earth scientists to organize their understanding of Earth's dynamics, and have served as a basis for human analytical reasoning and landscape interpretation. Consequently, we evaluated the use of conceptual modeling, knowledge representation and analytical reasoning to provide undergraduate students with an opportunity to develop and test geocomputational conceptual models based upon their understanding of Earth science concepts. This study describes the use of geospatial technologies and fuzzy cognitive maps to predict desertification across the South-Texas Sandsheet in an upper-level geomorphology course. Students developed conceptual models based on their understanding of aeolian processes from lectures, and then compared and evaluated their modeling results against an expert conceptual model and spatial predictions, and the observed distribution of dune activity in 2010. Students perceived that the analytical reasoning approach was significantly better for understanding desertification compared to traditional lecture, and promoted reflective learning, working with data, teamwork, student interaction, innovation, and creative thinking. Student evaluations support the notion that the adoption of knowledge representation and analytical reasoning in the classroom has the potential to transform undergraduate education by enabling students to formalize and test their conceptual understanding of Earth science. A model for developing and utilizing this geospatial technology approach in Earth science is presented.

  4. Transient vibration analytical modeling and suppressing for vibration absorber system under impulse excitation

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Yang, Bintang; Yu, Hu; Gao, Yulong

    2017-04-01

    The impulse excitation of a mechanism causes transient vibration. In order to achieve adaptive transient vibration control, a method that can exactly model the response needs to be proposed. This paper presents an analytical model to obtain the response of a primary system fitted with a dynamic vibration absorber (DVA) under impulse excitation. The impulse excitation, which can be divided into single-impulse and multi-impulse excitation, is simplified as a sinusoidal wave to establish the analytical model. To decouple the differential governing equations, a transform matrix is applied to convert the response from physical coordinates to modal coordinates, so the analytical response in physical coordinates can be obtained by inverse transformation. The numerical Runge-Kutta method and experimental tests have demonstrated the effectiveness of the proposed analytical model. The wavelet transform of the response indicates that the transient vibration consists of components with multiple frequencies, and the modeling results coincide with the experiments. Optimization simulations based on a genetic algorithm, together with experimental tests, demonstrate that the transient vibration of the primary system can be decreased by changing the stiffness of the DVA. The results presented in this paper are the foundation for developing an adaptive transient vibration absorber in the future.

  5. Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.

    PubMed

    Endert, A; Fiaux, P; North, C

    2012-12-01

    Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaurov, Alexander A., E-mail: kaurov@uchicago.edu

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.

  7. Design and Analysis of a Low Latency Deterministic Network MAC for Wireless Sensor Networks

    PubMed Central

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-01-01

    The IEEE 802.15.4e standard has four different superframe structures for different applications. Use of a low latency deterministic network (LLDN) superframe for the wireless sensor network is one of them, which can operate in a star topology. In this paper, a new channel access mechanism for IEEE 802.15.4e-based LLDN shared slots is proposed, and analytical models are designed based on this channel access mechanism. A prediction model is designed to estimate the possible number of retransmission slots based on the number of failed transmissions. Performance analysis in terms of data transmission reliability, delay, throughput and energy consumption is provided based on our proposed designs. Our designs are validated through simulation and analytical results, and it is observed that the simulation results match well with the analytical ones. Besides, our designs are compared with the IEEE 802.15.4 MAC mechanism, and it is shown that ours outperforms it in terms of throughput, energy consumption, delay and reliability. PMID:28937632
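
    As a rough illustration of predicting retransmission slots, the sketch below assumes each node keeps retrying in dedicated slots until success and that attempts fail independently with probability p. This Bernoulli/geometric picture and its parameter values are assumptions made for illustration and are not the prediction model of the IEEE 802.15.4e LLDN scheme described above.

        import numpy as np

        def expected_retx_slots(n_nodes, p_fail):
            # E[attempts per node] = 1 / (1 - p_fail), so the expected number of
            # retransmission slots per node is p_fail / (1 - p_fail)
            return n_nodes * p_fail / (1.0 - p_fail)

        def simulate_retx_slots(n_nodes, p_fail, n_runs=20000, seed=1):
            # Monte-Carlo check: retransmissions per node are geometric minus one
            rng = np.random.default_rng(seed)
            retx = rng.geometric(1.0 - p_fail, size=(n_runs, n_nodes)) - 1
            return retx.sum(axis=1).mean()

        n, p = 16, 0.2
        print("analytic:", expected_retx_slots(n, p), "simulated:", simulate_retx_slots(n, p))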

  8. Design and Analysis of a Low Latency Deterministic Network MAC for Wireless Sensor Networks.

    PubMed

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-09-22

    The IEEE 802.15.4e standard has four different superframe structures for different applications. Use of a low latency deterministic network (LLDN) superframe for the wireless sensor network is one of them, which can operate in a star topology. In this paper, a new channel access mechanism for IEEE 802.15.4e-based LLDN shared slots is proposed, and analytical models are designed based on this channel access mechanism. A prediction model is designed to estimate the possible number of retransmission slots based on the number of failed transmissions. Performance analysis in terms of data transmission reliability, delay, throughput and energy consumption is provided based on our proposed designs. Our designs are validated through simulation and analytical results, and it is observed that the simulation results match well with the analytical ones. Besides, our designs are compared with the IEEE 802.15.4 MAC mechanism, and it is shown that ours outperforms it in terms of throughput, energy consumption, delay and reliability.

  9. Propagation of flat-topped multi-Gaussian beams through a double-lens system with apertures.

    PubMed

    Gao, Yanqi; Zhu, Baoqiang; Liu, Daizhong; Lin, Zunqi

    2009-07-20

    A general model for different apertures and flat-topped laser beams based on the multi-Gaussian function is developed. The general analytical expression for the propagation of a flat-topped beam through a general double-lens system with apertures is derived using the above model. Then, the propagation characteristics of the flat-topped beam through a spatial filter are investigated by using a simplified analytical expression. Based on the fluence beam contrast and the fill factor, the influence of the pinhole size on the propagation of the flat-topped multi-Gaussian beam (FMGB) through the spatial filter is illustrated. An analytical expression for the propagation of the FMGB through the spatial filter with a misaligned pinhole is presented, and the influence of the pinhole offset is evaluated.
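
    The record above relies on a multi-Gaussian representation of a flat-topped beam. One common way to build such a profile, a sum of identical Gaussians centred on a regular grid of offsets, is sketched below; the width, spacing, and number of terms are illustrative assumptions and not the expansion used in the paper.

        import numpy as np

        def flat_top_multi_gaussian(x, w=1.0, spacing=0.8, n_terms=5):
            # sum of identical Gaussians on a regular grid of centres; for
            # spacing < w the individual humps merge into a flat-topped profile
            offsets = spacing * np.arange(-n_terms, n_terms + 1)
            profile = np.exp(-((x[:, None] - offsets) ** 2) / w ** 2).sum(axis=1)
            return profile / profile.max()

        x = np.linspace(-8.0, 8.0, 801)
        p = flat_top_multi_gaussian(x)
        core = p[np.abs(x) < 2.0]                  # central plateau
        print("relative ripple on the plateau:", core.max() - core.min())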

  10. Analysis and Experimental Investigation of Optimum Design of Thermoelectric Cooling/Heating System for Car Seat Climate Control (CSCC)

    NASA Astrophysics Data System (ADS)

    Elarusi, Abdulmunaem; Attar, Alaa; Lee, HoSung

    2018-02-01

    The optimum design of a thermoelectric system for application in car seat climate control has been modeled and its performance evaluated experimentally. The optimum design of the thermoelectric device combining two heat exchangers was obtained by using a newly developed optimization method based on the dimensional technique. Based on the analytical optimum design results, commercial thermoelectric cooler and heat sinks were selected to design and construct the climate control heat pump. This work focuses on testing the system performance in both cooling and heating modes to ensure accurate analytical modeling. Although the analytical performance was calculated using the simple ideal thermoelectric equations with effective thermoelectric material properties, it showed very good agreement with experiment for most operating conditions.

  11. Understanding wax screen-printing: a novel patterning process for microfluidic cloth-based analytical devices.

    PubMed

    Liu, Min; Zhang, Chunsun; Liu, Feifei

    2015-09-03

    In this work, we first introduce the fabrication of microfluidic cloth-based analytical devices (μCADs) using a wax screen-printing approach that is suitable for simple, inexpensive, rapid, low-energy-consumption and high-throughput preparation of cloth-based analytical devices. We have carried out a detailed study of the wax screen-printing of μCADs and have obtained some interesting results. Firstly, an analytical model is established for the spreading of molten wax in cloth. Secondly, a new wax screen-printing process is proposed for fabricating μCADs, in which the melting of wax into the cloth is much faster (∼5 s) and the heating temperature is much lower (75 °C). Thirdly, the experimental results show that the patterning quality of the proposed wax screen-printing method depends to a certain extent on the type of screen, the wax melting temperature and the melting time. Under optimized conditions, the minimum printed widths of the hydrophobic wax barrier and the hydrophilic channel are 100 μm and 1.9 mm, respectively. Importantly, the developed analytical model is also well validated by these experiments. Fourthly, the μCADs fabricated by the presented wax screen-printing method are used to perform a proof-of-concept assay of glucose or protein in artificial urine, with rapid high-throughput detection taking place on a 48-chamber cloth-based device and read out visually. Overall, the developed cloth-based wax screen-printing and arrayed μCADs should provide a new research direction in the development of advanced sensor arrays for the detection of a series of analytes relevant to many diverse applications. Copyright © 2015 Elsevier B.V. All rights reserved.
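
    The spreading model itself is not reproduced in the record above. A common starting point for the penetration of a molten liquid into a fibrous medium is the Lucas-Washburn capillary-wicking relation, sketched below; the surface tension, effective pore radius, contact angle, and viscosity values are illustrative guesses rather than measured wax/cloth properties, and this is not necessarily the analytical model developed in the paper.

        import numpy as np

        def washburn_length(t, gamma=0.03, r=10e-6, theta_deg=45.0, eta=0.01):
            # Lucas-Washburn wicking distance L(t) = sqrt(gamma * r * cos(theta) * t / (2 * eta));
            # gamma: surface tension (N/m), r: effective capillary radius (m),
            # theta: contact angle, eta: melt viscosity (Pa*s)
            return np.sqrt(gamma * r * np.cos(np.radians(theta_deg)) * t / (2.0 * eta))

        t = np.array([1.0, 5.0, 10.0])              # heating time, s
        print(washburn_length(t) * 1e3, "mm")       # lateral spreading estimate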

  12. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodríguez-Ramírez, J. C.; Raga, A. C.; Lora, V.

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.

  13. Semi-analytical solutions of the Schnakenberg model of a reaction-diffusion cell with feedback

    NASA Astrophysics Data System (ADS)

    Al Noufaey, K. S.

    2018-06-01

    This paper considers the application of a semi-analytical method to the Schnakenberg model of a reaction-diffusion cell. The semi-analytical method is based on the Galerkin method which approximates the original governing partial differential equations as a system of ordinary differential equations. Steady-state curves, bifurcation diagrams and the region of parameter space in which Hopf bifurcations occur are presented for semi-analytical solutions and the numerical solution. The effect of feedback control, via altering various concentrations in the boundary reservoirs in response to concentrations in the cell centre, is examined. It is shown that increasing the magnitude of feedback leads to destabilization of the system, whereas decreasing this parameter to negative values of large magnitude stabilizes the system. The semi-analytical solutions agree well with numerical solutions of the governing equations.
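
    For context, the underlying Schnakenberg kinetics are simple enough to integrate directly. The sketch below uses the crudest possible (spatially uniform) truncation, which drops the diffusion terms, boundary reservoirs, and feedback considered in the paper; the parameter values are arbitrary illustrative choices.

        import numpy as np
        from scipy.integrate import solve_ivp

        def schnakenberg(t, y, a, b):
            # well-mixed Schnakenberg kinetics: u' = a - u + u^2 v, v' = b - u^2 v
            u, v = y
            return [a - u + u ** 2 * v, b - u ** 2 * v]

        a, b = 0.1, 0.9
        u_star, v_star = a + b, b / (a + b) ** 2     # analytic steady state
        sol = solve_ivp(schnakenberg, (0.0, 200.0), [1.5, 0.5], args=(a, b), rtol=1e-8)
        print("final state:", sol.y[:, -1], "steady state:", (u_star, v_star))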

  14. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. Additionally, we also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions by the KLM and the analytic models with frequency dependent attenuation to evaluate the influence of the viscosity.

  15. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
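
    The "line segment spread function" idea, a Gaussian point-spread function swept at a constant rate across the image plane, has a simple closed form in the direction of motion. The sketch below is an un-normalized illustration of that distribution (incident flux and exposure constants are omitted) with arbitrary values for the Gaussian radius and smear length; it is not the paper's complete dynamic imaging model.

        import numpy as np
        from scipy.special import erf

        def smeared_spot(x, y, sigma=1.0, L=4.0):
            # Gaussian PSF averaged over a segment of length L along x: the x-part
            # collapses to a difference of error functions, the y-part stays Gaussian
            sx = (erf((x + L / 2) / (np.sqrt(2) * sigma)) -
                  erf((x - L / 2) / (np.sqrt(2) * sigma))) / (2.0 * L)
            return sx * np.exp(-y ** 2 / (2.0 * sigma ** 2))

        # brute-force check: average many shifted (normalized, 1-D) Gaussians
        x, y = 0.7, -0.3
        shifts = np.linspace(-2.0, 2.0, 20001)
        brute = np.mean(np.exp(-((x - shifts) ** 2 + y ** 2) / 2.0)) / np.sqrt(2.0 * np.pi)
        print(smeared_spot(x, y), brute)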

  16. Prediction of the chromatographic retention of acid-base compounds in pH buffered methanol-water mobile phases in gradient mode by a simplified model.

    PubMed

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2015-03-13

    Retention of ionizable analytes under gradient elution depends on the pH of the mobile phase, the pKa of the analyte and their evolution along the programmed gradient. In previous work, a model depending on two fitting parameters was recommended because of its very favorable relationship between accuracy and required experimental work. It was developed using acetonitrile as the organic modifier and involves pKa modeling by means of equations that take into account the acidic functional group of the compound (carboxylic acid, protonated amine, etc.). In this work, the two-parameter predicting model is tested and validated using methanol as the organic modifier of the mobile phase and several compounds of higher pharmaceutical relevance and structural complexity as testing analytes. The results have been quite good overall, showing that the predicting model is applicable to a wide variety of acid-base compounds using mobile phases prepared with acetonitrile or methanol. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. User's manual for the one-dimensional hypersonic experimental aero-thermodynamic (1DHEAT) data reduction code

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1995-01-01

    A FORTRAN computer code for the reduction and analysis of experimental heat transfer data has been developed. This code can be utilized to determine heat transfer rates from surface temperature measurements made using either thin-film resistance gages or coaxial surface thermocouples. Both an analytical and a numerical finite-volume heat transfer model are implemented in this code. The analytical solution is based on a one-dimensional, semi-infinite wall thickness model with the approximation of constant substrate thermal properties, which is empirically corrected for the effects of variable thermal properties. The finite-volume solution is based on a one-dimensional, implicit discretization. The finite-volume model directly incorporates the effects of variable substrate thermal properties and does not require the semi-infinite wall thickness approximation used in the analytical model. This model also includes the option of a multiple-layer substrate. Fast, accurate results can be obtained using either method. This code has been used to reduce several sets of aerodynamic heating data, of which samples are included in this report.
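
    The constant-property, semi-infinite analytical reduction mentioned above is classically discretized with the Cook-Felderman form of the Duhamel integral. The sketch below shows that form applied to a manufactured constant-flux test case; the substrate properties and flux level are illustrative guesses, and none of the variable-property corrections or finite-volume options of the 1DHEAT code are included.

        import numpy as np

        def cook_felderman(t, T, rho, c, k):
            # heat-flux history from a sampled surface-temperature history on a
            # semi-infinite 1-D wall with constant properties (Cook-Felderman scheme)
            beta = 2.0 * np.sqrt(rho * c * k / np.pi)
            q = np.zeros_like(T)
            for n in range(1, len(t)):
                j = np.arange(1, n + 1)
                q[n] = beta * np.sum((T[j] - T[j - 1]) /
                                     (np.sqrt(t[n] - t[j]) + np.sqrt(t[n] - t[j - 1])))
            return q

        # check: a constant flux q0 gives T_s(t) = 2 * q0 * sqrt(t / (pi * rho * c * k))
        rho, c, k, q0 = 2210.0, 712.0, 1.4, 5.0e4
        t = np.linspace(0.0, 1.0, 401)
        T = 2.0 * q0 * np.sqrt(t / (np.pi * rho * c * k))
        print("recovered flux at t = 1 s:", cook_felderman(t, T, rho, c, k)[-1], "W/m^2 (target 5.0e4)")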

  18. Sample distribution in peak mode isotachophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il

    We present an analytical study of peak mode isotachophoresis (ITP), and provide closed-form solutions for the sample distribution and electric field, as well as for the leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes, which better represent real buffer systems, and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing the analyte distribution. Using well-controlled experiments and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of the focused sample based on known buffer and analyte properties, and show that it differs significantly from commonly used approximations based on the interface width alone. We further apply our model to study reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed-form expression for an effective on-rate which depends on the reactant distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that the maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed-form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.

  19. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  20. Improvement of analytical dynamic models using modal test data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Wei, F. S.; Rao, K. V.

    1980-01-01

    A method developed to determine maximum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model will be an improved basis for studies of physical changes and boundary condition changes, and for prediction of forced responses. The method features efficient procedures that do not require solutions of the eigenvalue problem, and the ability of the model to have more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.

  1. [Basic research on digital logistic management of hospital].

    PubMed

    Cao, Hui

    2010-05-01

    This paper analyzes and explores the possibilities of digital information-based management realized by the equipment department, general services department, supply room, and other material-flow departments in different hospitals, in order to optimize the procedures of information-based asset management. Various analytical methods for medical-supply business models are available, providing analytical data for correct decisions by hospital departments, hospital leadership, and the governing authorities.

  2. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and of nonlinearity between the property and the spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method is first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and those of the calibration samples is then calculated and used as a similarity index. According to this similarity index, a local calibration set is individually selected for each unknown sample. Finally, a local PLS regression model is built on the local calibration set of each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
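
    The selection-then-regression structure of such a local algorithm is easy to sketch. In the snippet below, plain spectral Euclidean distance is used as a stand-in similarity index (the paper computes the distance on the net analyte signal instead), the data are synthetic, and the neighborhood size and number of latent variables are arbitrary illustrative choices.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def local_pls_predict(X_cal, y_cal, x_unknown, k=30, n_components=5):
            # rank calibration samples by distance to the unknown spectrum and
            # build a PLS model on the k closest ones only
            d = np.linalg.norm(X_cal - x_unknown, axis=1)
            idx = np.argsort(d)[:k]
            pls = PLSRegression(n_components=n_components)
            pls.fit(X_cal[idx], y_cal[idx])
            return float(pls.predict(x_unknown[None, :])[0, 0])

        # synthetic data standing in for NIR spectra and a reference property
        rng = np.random.default_rng(0)
        X_cal = rng.normal(size=(200, 100))
        y_cal = X_cal[:, :10].sum(axis=1) + 0.1 * rng.normal(size=200)
        print(local_pls_predict(X_cal, y_cal, X_cal[0]))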

  3. Advances in analytical chemistry

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.

    1991-01-01

    Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.

  4. Mechanisms of Hydrocarbon Based Polymer Etch

    NASA Astrophysics Data System (ADS)

    Lane, Barton; Ventzek, Peter; Matsukuma, Masaaki; Suzuki, Ayuta; Koshiishi, Akira

    2015-09-01

    Dry etch of hydrocarbon-based polymers is important for semiconductor device manufacturing. The etch mechanisms for oxygen-rich plasma etch of hydrocarbon-based polymers have been studied, but the mechanism for lean chemistries has received little attention. We report on an experimental and analytic study of the mechanism for etching of a hydrocarbon-based polymer using an Ar/O2 chemistry in a single-frequency 13.56 MHz test bed. The experimental study employs an analysis of transients from sequential oxidation and Ar sputtering steps, using OES and surface analytics to constrain conceptual models for the etch mechanism. The conceptual model is consistent with observations from MD studies and surface analysis performed by Vegh et al. and Oehrlein et al. and other similar studies. Parameters of the model are fit using published data and the experimentally observed time scales.

  5. Analytical Modeling of a Double-Sided Flux Concentrating E-Core Transverse Flux Machine with Pole Windings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Hasan, Iftekhar; Husain, Tausif

    In this paper, a nonlinear analytical model based on the Magnetic Equivalent Circuit (MEC) method is developed for a double-sided E-Core Transverse Flux Machine (TFM). The proposed TFM has a cylindrical rotor, sandwiched between E-core stators on both sides. Ferrite magnets are used in the rotor with a flux-concentrating design to attain high airgap flux density, better magnet utilization, and higher torque density. The MEC model was developed using a series-parallel combination of flux tubes to estimate the reluctance network for different parts of the machine, including air gaps, permanent magnets, and the stator and rotor ferromagnetic materials, in a two-dimensional (2-D) frame. An iterative Gauss-Seidel method is integrated with the MEC model to capture the effects of magnetic saturation. A single-phase, 1 kW, 400 rpm E-Core TFM is analytically modeled and its results for flux linkage, no-load EMF, and generated torque are verified with Finite Element Analysis (FEA). The analytical model significantly reduces the computation time while estimating results with less than 10 percent error.
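
    The sketch below illustrates the general idea of an MEC solution with a Gauss-Seidel sweep inside an outer saturation-update loop. The reluctance network, magnet source, and B-H curve are small hypothetical placeholders, not the E-core TFM geometry of the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi
# hypothetical magnetic equivalent circuit; node 0 is the reference node.
# Each branch: (node_i, node_j, area [m^2], length [m], material)
branches = [
    (1, 0, 4e-4, 1.0e-3, "air"),     # air gap
    (1, 2, 4e-4, 3.0e-2, "steel"),   # saturable stator core segment
    (2, 0, 4e-4, 2.0e-2, "steel"),   # saturable rotor core segment
]
flux_sources = {1: 8e-4}             # Norton flux injection [Wb] from a magnet (hypothetical)

def mu_r(B):
    """Simple saturating relative-permeability curve (hypothetical)."""
    return 1.0 + 4000.0 / (1.0 + (abs(B) / 1.6) ** 4)

def permeance(area, length, mu_rel):
    return mu_rel * MU0 * area / length

def gauss_seidel(A, b, x0, iters=200):
    x = x0.copy()
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
    return x

n_nodes = 2                          # unknown scalar potentials at nodes 1 and 2
u = np.zeros(n_nodes)
mu_guess = [1.0, 1000.0, 1000.0]     # initial relative permeability per branch

for outer in range(50):              # outer loop: update saturation
    A = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for (i, j, area, length, mat), mr in zip(branches, mu_guess):
        G = permeance(area, length, mr)
        for n, m in ((i, j), (j, i)):
            if n > 0:
                A[n - 1, n - 1] += G
                if m > 0:
                    A[n - 1, m - 1] -= G
    for node, phi in flux_sources.items():
        b[node - 1] += phi
    u = gauss_seidel(A, b, u)
    # update permeabilities of the steel branches from the new flux densities
    new_mu = []
    for (i, j, area, length, mat), mr in zip(branches, mu_guess):
        ui = u[i - 1] if i > 0 else 0.0
        uj = u[j - 1] if j > 0 else 0.0
        B = permeance(area, length, mr) * (ui - uj) / area
        new_mu.append(mu_r(B) if mat == "steel" else 1.0)
    if np.allclose(new_mu, mu_guess, rtol=1e-4):
        break
    mu_guess = new_mu

print("node magnetic potentials [A]:", u)
```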

  6. A new model for fluid velocity slip on a solid surface.

    PubMed

    Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong

    2016-10-12

    A general adsorption model is developed to describe the interactions between near-wall fluid molecules and solid surfaces. This model serves as a framework for the theoretical modelling of boundary slip phenomena. Based on this adsorption model, a new general model for the slip velocity of fluids on solid surfaces is introduced. The slip boundary condition at a fluid-solid interface has hitherto been considered separately for gases and liquids. In this paper, we show that the slip velocity in both gases and liquids may originate from dynamical adsorption processes at the interface. A unified analytical model that is valid for both gas-solid and liquid-solid slip boundary conditions is proposed based on surface science theory. The corroboration with the experimental data extracted from the literature shows that the proposed model provides an improved prediction compared to existing analytical models for gases at higher shear rates and close agreement for liquid-solid interfaces in general.

  7. What makes us think? A three-stage dual-process model of analytic engagement.

    PubMed

    Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J

    2015-08-01

    The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Mixed Initiative Visual Analytics Using Task-Driven Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Cramer, Nicholas O.; Israel, David

    2015-12-07

    Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.

  9. Maximum flow-based resilience analysis: From component to system

    PubMed Central

    Jin, Chong; Li, Ruiying; Kang, Rui

    2017-01-01

    Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, including economic and societal losses. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel’s resilience measure. The two analytic models can be used to evaluate quantitatively and compare the resilience of the systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with increasing number of components, while the resilience remains constant in the series system. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without redundant performance. However, not all redundant capacities of components can improve the system resilience; the effectiveness of the capacity redundancy depends on where the redundant capacity is located. PMID:28545135
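
    A toy illustration of the flow-based view is sketched below using networkx: it compares the maximum source-to-sink flow of a small hypothetical road network before and after a capacity disruption and reports the fraction of throughput retained. This ratio is a simplified stand-in for a performance-based resilience indicator, not Zobel's measure itself.

```python
import networkx as nx

def max_flow(G, s, t):
    value, _ = nx.maximum_flow(G, s, t, capacity="capacity")
    return value

# hypothetical road network with edge capacities (vehicles/hour)
G = nx.DiGraph()
G.add_edge("A", "B", capacity=400)
G.add_edge("A", "C", capacity=300)
G.add_edge("B", "D", capacity=250)
G.add_edge("C", "D", capacity=350)
G.add_edge("B", "C", capacity=100)

baseline = max_flow(G, "A", "D")

# disruption: one link loses most of its capacity
G_disrupted = G.copy()
G_disrupted["A"]["B"]["capacity"] = 50
degraded = max_flow(G_disrupted, "A", "D")

# simplified performance-based indicator: fraction of baseline
# throughput retained immediately after the disruption
print(f"baseline flow = {baseline}, degraded flow = {degraded}")
print(f"retained performance = {degraded / baseline:.2f}")
```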

  10. An Illumination- and Temperature-Dependent Analytical Model for Copper Indium Gallium Diselenide (CIGS) Solar Cells

    DOE PAGES

    Sun, Xingshu; Silverman, Timothy; Garris, Rebekah; ...

    2016-07-18

    In this study, we present a physics-based analytical model for copper indium gallium diselenide (CIGS) solar cells that describes the illumination- and temperature-dependent current-voltage (I-V) characteristics and accounts for the statistical shunt variation of each cell. The model is derived by solving the drift-diffusion transport equation so that its parameters are physical and, therefore, can be obtained from independent characterization experiments. The model is validated against CIGS I-V characteristics as a function of temperature and illumination intensity. This physics-based model can be integrated into a large-scale simulation framework to optimize the performance of solar modules, as well as predict the long-term output yields of photovoltaic farms under different environmental conditions.
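
    The paper's model is derived from the drift-diffusion equations; as a rough, generic stand-in, the sketch below evaluates illumination- and temperature-dependent I-V curves from a single-diode equation with simple scalings for the photocurrent and the saturation current. All parameter values are hypothetical and are not the fitted CIGS parameters of the study.

```python
import numpy as np
from scipy.optimize import brentq

K_B = 1.380649e-23      # J/K
Q_E = 1.602176634e-19   # C

def iv_current(V, G, T, *, Jph_stc=35e-3, J0_ref=1e-9, n=1.5,
               Rs=2.0, Rsh=800.0, Eg_eV=1.15, T_ref=298.15):
    """Current density (A/cm^2) of a generic single-diode cell at voltage V,
    irradiance G (suns) and temperature T (K).  Rs and Rsh are in ohm*cm^2.
    All parameter values are hypothetical."""
    Vt = n * K_B * T / Q_E
    Jph = Jph_stc * G                                  # photocurrent ~ irradiance
    J0 = J0_ref * (T / T_ref) ** 3 * np.exp(           # diode saturation current
        -Eg_eV * Q_E / K_B * (1.0 / T - 1.0 / T_ref))
    def f(J):
        return Jph - J0 * np.expm1((V + J * Rs) / Vt) - (V + J * Rs) / Rsh - J
    return brentq(f, -1.0, 1.0)

# illumination- and temperature-dependent I-V sweeps
for T in (273.15, 298.15, 323.15):
    V = np.linspace(0.0, 0.85, 18)
    J = np.array([iv_current(v, G=1.0, T=T) for v in V])
    Voc = V[np.argmax(J < 0)] if np.any(J < 0) else V[-1]
    print(f"T = {T:6.1f} K  Jsc = {J[0] * 1e3:5.1f} mA/cm^2  Voc ~ {Voc:.2f} V")
```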

  11. Fast analytical model of MZI micro-opto-mechanical pressure sensor

    NASA Astrophysics Data System (ADS)

    Rochus, V.; Jansen, R.; Goyvaerts, J.; Neutens, P.; O’Callaghan, J.; Rottenberg, X.

    2018-06-01

    This paper presents a fast analytical procedure in order to design a micro-opto-mechanical pressure sensor (MOMPS) taking into account the mechanical nonlinearity and the optical losses. A realistic model of the photonic MZI is proposed, strongly coupled to a nonlinear mechanical model of the membrane. Based on the membrane dimensions, the residual stress, the position of the waveguide, the optical wavelength and the phase variation due to the opto-mechanical coupling, we derive an analytical model which allows us to predict the response of the total system. The effect of the nonlinearity and the losses on the total performance are carefully studied and measurements on fabricated devices are used to validate the model. Finally, a design procedure is proposed in order to realize fast design of this new type of pressure sensor.

  12. Steady-state analytical model of suspended p-type 3C-SiC bridges under consideration of Joule heating

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Vivekananthan; Dinh, Toan; Phan, Hoang-Phuong; Kozeki, Takahiro; Namazu, Takahiro; Viet Dao, Dzung; Nguyen, Nam-Trung

    2017-07-01

    This paper reports an analytical model and its validation for a released microscale heater made of 3C-SiC thin films. A model for the equivalent electrical and thermal parameters was developed for the two-layer multi-segment heat and electric conduction. The model is based on a 1D energy equation, which considers the temperature-dependent resistivity and allows for the prediction of voltage-current and power-current characteristics of the microheater. The steady-state analytical model was validated by experimental characterization. The results, in particular the nonlinearity caused by temperature dependency, are in good agreement. The low power consumption of the order of 0.18 mW at approximately 310 K indicates the potential use of the structure as thermal sensors in portable applications.

  13. Theoretical and experimental investigation of architected core materials incorporating negative stiffness elements

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Ming; Keefe, Andrew; Carter, William B.; Henry, Christopher P.; McKnight, Geoff P.

    2014-04-01

    Structural assemblies incorporating negative stiffness elements have been shown to provide both tunable damping properties and simultaneous high stiffness and damping over prescribed displacement regions. In this paper we explore the design space for negative stiffness based assemblies using analytical modeling combined with finite element analysis. A simplified spring model demonstrates the effects of element stiffness, geometry, and preloads on the damping and stiffness performance. Simplified analytical models were validated for realistic structural implementations through finite element analysis. A series of complementary experiments was conducted to compare with modeling and determine the effects of each element on the system response. The measured damping performance follows the theoretical predictions obtained by analytical modeling. We applied these concepts to a novel sandwich core structure that exhibited combined stiffness and damping properties 8 times greater than existing foam core technologies.

  14. Radiative transfer model for aerosols in infrared wavelengths for passive remote sensing applications.

    PubMed

    Ben-David, Avishai; Embury, Janon F; Davidson, Charles E

    2006-09-10

    A comprehensive analytical radiative transfer model for isothermal aerosols and vapors for passive infrared remote sensing applications (ground-based and airborne sensors) has been developed. The theoretical model illustrates the qualitative difference between an aerosol cloud and a chemical vapor cloud. The model is based on two and two/four stream approximations and includes thermal emission-absorption by the aerosols; scattering of diffused sky radiances incident from all sides on the aerosols (downwelling, upwelling, left, and right); and scattering of aerosol thermal emission. The model uses moderate resolution transmittance ambient atmospheric radiances as boundary conditions and provides analytical expressions for the information on the aerosol cloud that is contained in remote sensing measurements by using thermal contrasts between the aerosols and diffused sky radiances. Simulated measurements of a ground-based sensor viewing Bacillus subtilis var. niger bioaerosols and kaolin aerosols are given and discussed to illustrate the differences between a vapor-only model (i.e., only emission-absorption effects) and a complete model that adds aerosol scattering effects.

  15. Analytical and simulator study of advanced transport

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Rickard, W. W.

    1982-01-01

    An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft in final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied covering a range of handling qualities problems, including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.

  16. Service line analytics in the new era.

    PubMed

    Spence, Jay; Seargeant, Dan

    2015-08-01

    To succeed under the value-based business model, hospitals and health systems require effective service line analytics that combine inpatient and outpatient data and that incorporate quality metrics for evaluating clinical operations. When developing a framework for collection, analysis, and dissemination of service line data, healthcare organizations should focus on five key aspects of effective service line analytics: updated service line definitions; the ability to analyze and trend service line net patient revenues by payment source; access to accurate service line cost information across multiple dimensions with drill-through capabilities; the ability to redesign key reports based on changing requirements; and clear assignment of accountability.

  17. Analytical fitting model for rough-surface BRDF.

    PubMed

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  18. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    PubMed

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

    A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data-large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources-all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.

  19. Evaluation of one dimensional analytical models for vegetation canopies

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  20. An optimization model for infrared image enhancement method based on p-q norm constrained by saliency value

    NASA Astrophysics Data System (ADS)

    Fan, Fan; Ma, Yong; Dai, Xiaobing; Mei, Xiaoguang

    2018-04-01

    Infrared image enhancement is an important and necessary task in the infrared imaging system. In this paper, by defining the contrast in terms of the areas between adjacent non-zero histogram bins, a novel analytical model is proposed to enlarge these areas so that the contrast can be increased. In addition, the analytical model is regularized by a penalty term based on the saliency value to enhance the salient regions as well. Thus, both the whole image and the salient regions can be enhanced, and the rank consistency can be preserved. The comparisons on 8-bit images show that the proposed method can enhance infrared images with more details.

  1. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  2. Analytical and experimental studies on detection of longitudinal, L and inverted T cracks in isotropic and bi-material beams based on changes in natural frequencies

    NASA Astrophysics Data System (ADS)

    Ravi, J. T.; Nidhan, S.; Muthu, N.; Maiti, S. K.

    2018-02-01

    An analytical method for determination of dimensions of longitudinal crack in monolithic beams, based on frequency measurements, has been extended to model L and inverted T cracks. Such cracks including longitudinal crack arise in beams made of layered isotropic or composite materials. A new formulation for modelling cracks in bi-material beams is presented. Longitudinal crack segment sizes, for L and inverted T cracks, varying from 2.7% to 13.6% of length of Euler-Bernoulli beams are considered. Both forward and inverse problems have been examined. In the forward problems, the analytical results are compared with finite element (FE) solutions. In the inverse problems, the accuracy of prediction of crack dimensions is verified using FE results as input for virtual testing. The analytical results show good agreement with the actual crack dimensions. Further, experimental studies have been done to verify the accuracy of the analytical method for prediction of dimensions of three types of crack in isotropic and bi-material beams. The results show that the proposed formulation is reliable and can be employed for crack detection in slender beam like structures in practice.

  3. Negations in syllogistic reasoning: evidence for a heuristic-analytic conflict.

    PubMed

    Stupple, Edward J N; Waterhouse, Eleanor F

    2009-08-01

    An experiment utilizing response time measures was conducted to test dominant processing strategies in syllogistic reasoning with the expanded quantifier set proposed by Roberts (2005). Through adding negations to existing quantifiers it is possible to change problem surface features without altering logical validity. Biases based on surface features such as atmosphere, matching, and the probability heuristics model (PHM; Chater & Oaksford, 1999; Wetherick & Gilhooly, 1995) would not be expected to show variance in response latencies, but participant responses should be highly sensitive to changes in the surface features of the quantifiers. In contrast, according to analytic accounts such as mental models theory and mental logic (e.g., Johnson-Laird & Byrne, 1991; Rips, 1994) participants should exhibit increased response times for negated premises, but not be overly impacted upon by the surface features of the conclusion. Data indicated that the dominant response strategy was based on a matching heuristic, but also provided evidence of a resource-demanding analytic procedure for dealing with double negatives. The authors propose that dual-process theories offer a stronger account of these data whereby participants employ competing heuristic and analytic strategies and fall back on a heuristic response when analytic processing fails.

  4. Mechanical behavior of regular open-cell porous biomaterials made of diamond lattice unit cells.

    PubMed

    Ahmadi, S M; Campoli, G; Amin Yavari, S; Sajadi, B; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A

    2014-06-01

    Cellular structures with highly controlled micro-architectures are promising materials for orthopedic applications that require bone-substituting biomaterials or implants. The availability of additive manufacturing techniques has enabled manufacturing of biomaterials made of one or multiple types of unit cells. The diamond lattice unit cell is one of the relatively new types of unit cells that are used in manufacturing of regular porous biomaterials. As opposed to many other types of unit cells, there is currently no analytical solution that could be used for prediction of the mechanical properties of cellular structures made of the diamond lattice unit cells. In this paper, we present new analytical solutions and closed-form relationships for predicting the elastic modulus, Poisson's ratio, critical buckling load, and yield (plateau) stress of cellular structures made of the diamond lattice unit cell. The mechanical properties predicted using the analytical solutions are compared with those obtained using finite element models. A number of solid and porous titanium (Ti6Al4V) specimens were manufactured using selective laser melting. A series of experiments were then performed to determine the mechanical properties of the matrix material and cellular structures. The experimentally measured mechanical properties were compared with those obtained using analytical solutions and finite element (FE) models. It has been shown that, for small apparent density values, the mechanical properties obtained using analytical and numerical solutions are in agreement with each other and with experimental observations. The properties estimated using an analytical solution based on the Euler-Bernoulli theory markedly deviated from experimental results for large apparent density values. The mechanical properties estimated using FE models and another analytical solution based on the Timoshenko beam theory better matched the experimental observations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. A free energy-based surface tension force model for simulation of multiphase flows by level-set method

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.

    2017-09-01

    In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulation of multiphase flows by the level set method. By using the analytical form of order parameter along the normal direction to the interface in the phase-field method and the free energy principle, FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On one hand, as compared to conventional continuum surface force (CSF) model in the level set method, FESF model introduces no regularized delta function, due to which it suffers less from numerical diffusions and performs better in mass conservation. On the other hand, as compared to the phase field surface tension force (PFSF) model, the evaluation of surface tension force in FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that FESF model performs better than CSF model and PFSF model in terms of accuracy, stability, convergence speed and mass conservation. It is also shown in numerical tests that FESF model can effectively simulate problems with high density/viscosity ratio, high Reynolds number and severe topological interfacial changes.

  6. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of team and player performance.

  7. Effect of vibration on retention characteristics of screen acquisition systems. [for surface tension propellant acquisition

    NASA Technical Reports Server (NTRS)

    Tegart, J. R.; Aydelott, J. C.

    1978-01-01

    The design of surface tension propellant acquisition systems using fine-mesh screen must take into account all factors that influence the liquid pressure differentials within the system. One of those factors is spacecraft vibration. Analytical models to predict the effects of vibration have been developed. A test program to verify the analytical models and to allow a comparative evaluation of the parameters influencing the response to vibration was performed. Screen specimens were tested under conditions simulating the operation of an acquisition system, considering the effects of such parameters as screen orientation and configuration, screen support method, screen mesh, liquid flow and liquid properties. An analytical model, based on empirical coefficients, was most successful in predicting the effects of vibration.

  8. A singularity free analytical solution of artificial satellite motion with drag

    NASA Technical Reports Server (NTRS)

    Mueller, A.

    1978-01-01

    An analytical satellite theory based on the regular, canonical Poincare-Similar (PS phi) elements is described, along with an accurate density model which can be implemented into the drag theory. A computationally efficient manner in which to expand the equations of motion into a Fourier series is discussed.

  9. Novel approach for dam break flow modeling using computational intelligence

    NASA Astrophysics Data System (ADS)

    Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar

    2018-04-01

    A new methodology based on the computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The motivation for seeking a new solution lies in the shortcomings of the existing analytical and numerical models, including the difficulty of using the exact solutions and the unwanted fluctuations that arise in the numerical results. In this research, the application of the radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solution and the Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock- and rarefaction-waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
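
    As a minimal illustration of such a CI surrogate, the sketch below trains a scikit-learn MLP on synthetic samples of the classical Ritter solution (instantaneous dam break over a dry, frictionless bed), which is a simplification of the scenarios treated in the paper; the network size and data ranges are arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

G = 9.81

def ritter(h0, x, t):
    """Classical Ritter solution for an instantaneous dam break over a dry,
    frictionless horizontal bed (dam at x = 0, upstream depth h0)."""
    c0 = np.sqrt(G * h0)
    h = np.where(x <= -c0 * t, h0,
        np.where(x >= 2 * c0 * t, 0.0, (2 * c0 - x / t) ** 2 / (9 * G)))
    u = np.where((x > -c0 * t) & (x < 2 * c0 * t), 2.0 / 3.0 * (x / t + c0), 0.0)
    return h, u

# synthetic training set: inputs (h0, x, t), targets (depth, velocity)
rng = np.random.default_rng(1)
h0 = rng.uniform(1.0, 10.0, 20000)
t = rng.uniform(1.0, 30.0, 20000)
x = rng.uniform(-200.0, 400.0, 20000)
h, u = ritter(h0, x, t)
X = np.column_stack([h0, x, t])
Y = np.column_stack([h, u])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=500, random_state=0))
model.fit(X[:16000], Y[:16000])

# check the surrogate against the analytic solution on held-out samples
pred = model.predict(X[16000:])
rmse = np.sqrt(np.mean((pred - Y[16000:]) ** 2, axis=0))
print(f"RMSE depth = {rmse[0]:.3f} m, RMSE velocity = {rmse[1]:.3f} m/s")
```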

  10. Study of solid state photomultiplier

    NASA Technical Reports Server (NTRS)

    Hays, K. M.; Laviolette, R. A.

    1987-01-01

    Available solid state photomultiplier (SSPM) detectors were tested under low-background, low temperature conditions to determine the conditions producing optimal sensitivity in a space-based astronomy system such as a liquid cooled helium telescope in orbit. Detector temperatures varied between 6 and 9 K, with background flux ranging from 10 to the 13th power to less than 10 to the 6th power photons/square cm-s. Measured parameters included quantum efficiency, noise, dark current, and spectral response. Experimental data were reduced, analyzed, and combined with existing data to build the SSPM data base included herein. The results were compared to analytical models of SSPM performance where appropriate models existed. Analytical models presented here were developed to be as consistent with the data base as practicable. Significant differences between the theory and data are described. Some models were developed or updated as a result of this study.

  11. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    PubMed

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.

  12. Micromechanics Analysis Code (MAC) User Guide: Version 1.0

    NASA Technical Reports Server (NTRS)

    Wilt, T. E.; Arnold, S. M.

    1994-01-01

    The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here the user guide for the recently developed, computationally efficient and comprehensive micromechanics analysis code, MAC, whose predictive capability rests entirely upon the fully analytical generalized method of cells, GMC, micromechanics model is described. MAC is a versatile form of research software that 'drives' the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC enhances the basic capabilities of GMC by providing a modular framework wherein (1) various thermal, mechanical (stress or strain control), and thermomechanical load histories can be imposed; (2) different integration algorithms may be selected; (3) a variety of constituent constitutive models may be utilized and/or implemented; and (4) a variety of fiber architectures may be easily accessed through their corresponding representative volume elements.

  13. Micromechanics Analysis Code (MAC). User Guide: Version 2.0

    NASA Technical Reports Server (NTRS)

    Wilt, T. E.; Arnold, S. M.

    1996-01-01

    The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here the user guide for the recently developed, computationally efficient and comprehensive micromechanics analysis code (MAC), whose predictive capability rests entirely upon the fully analytical generalized method of cells (GMC) micromechanics model, is described. MAC is a versatile form of research software that 'drives' the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC enhances the basic capabilities of GMC by providing a modular framework wherein (1) various thermal, mechanical (stress or strain control) and thermomechanical load histories can be imposed, (2) different integration algorithms may be selected, (3) a variety of constituent constitutive models may be utilized and/or implemented, and (4) a variety of fiber and laminate architectures may be easily accessed through their corresponding representative volume elements.

  14. Experimental issues related to frequency response function measurements for frequency-based substructuring

    NASA Astrophysics Data System (ADS)

    Nicgorski, Dana; Avitabile, Peter

    2010-07-01

    Frequency-based substructuring is a very popular approach for generating system models from measured component data. Analytically, the approach has been shown to produce accurate results. However, implementation with actual test data can cause difficulties and lead to problems with the system response prediction. In order to produce good results, extreme care is needed in the measurement of the drive-point and transfer impedances of the structure, and all the conditions for a linear time-invariant system must be observed. Several studies have been conducted to show the sensitivity of the technique to small variations that often occur during typical testing of structures. These variations have been observed in actual tested configurations and have been substantiated with analytical models that replicate the problems typically encountered. The use of analytically simulated issues helps to clearly see the effects of typical measurement difficulties often observed in test data. This paper presents some of these common problems and provides guidance and recommendations for the data to be used in this modeling approach.
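
    For reference, the coupling step that such measured FRFs feed into is often written in the Lagrange-multiplier FBS form Y_coupled = Y - Y B^T (B Y B^T)^-1 B Y. The sketch below applies this standard formulation (not necessarily the exact variant used by the authors) to two hypothetical single-DOF components joined at their interface DOFs.

```python
import numpy as np

def drive_point_frf(m, k, c, freqs):
    """Receptance of a single-DOF oscillator (hypothetical test components)."""
    w = 2 * np.pi * freqs
    return 1.0 / (k - m * w**2 + 1j * c * w)

def lm_fbs(Y_blocks, B):
    """Lagrange-multiplier frequency-based substructuring:
    Y_coupled = Y - Y B^T (B Y B^T)^-1 B Y, evaluated per frequency line."""
    Yc = np.empty_like(Y_blocks)
    for i in range(Y_blocks.shape[0]):
        Y = Y_blocks[i]
        BYBt = B @ Y @ B.T
        Yc[i] = Y - Y @ B.T @ np.linalg.solve(BYBt, B @ Y)
    return Yc

freqs = np.linspace(1.0, 200.0, 400)
# two single-DOF components, each "measured" at its interface DOF
Ya = drive_point_frf(m=1.0, k=4.0e4, c=5.0, freqs=freqs)
Yb = drive_point_frf(m=0.5, k=9.0e4, c=3.0, freqs=freqs)

# block-diagonal uncoupled FRF matrix and compatibility matrix (u_a - u_b = 0)
Y = np.zeros((len(freqs), 2, 2), dtype=complex)
Y[:, 0, 0], Y[:, 1, 1] = Ya, Yb
B = np.array([[1.0, -1.0]])

Yc = lm_fbs(Y, B)
# the coupled drive-point FRF shows a single resonance of the assembly
peak = freqs[np.argmax(np.abs(Yc[:, 0, 0]))]
print(f"coupled resonance near {peak:.1f} Hz")
```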

  15. A Model of High-Frequency Self-Mixing in Double-Barrier Rectifier

    NASA Astrophysics Data System (ADS)

    Palma, Fabrizio; Rao, R.

    2018-03-01

    In this paper, a new model of the frequency dependence of the double-barrier THz rectifier is presented. The new structure is of interest because it can be realized by CMOS image sensor technology. Its application in a complex field such as that of THz receivers requires the availability of an analytical model, which is reliable and able to highlight the dependence on the parameters of the physical structure. The model is based on the hydrodynamic semiconductor equations, solved in the small signal approximation. The model depicts the mechanisms of the THz modulation of the charge in the depleted regions of the double-barrier device and explains the self-mixing process, the frequency dependence, and the detection capability of the structure. The model thus substantially improves the analytical models of THz rectification available in the literature, which are mainly based on lumped equivalent circuits.

  16. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    NASA Astrophysics Data System (ADS)

    Tan, Yimin; Lin, Kejian; Zu, Jean W.

    2018-05-01

    The Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators for its unique properties. This paper proposes a generalized analytical model for linear generators. The slotted-stator pole-shifting and the implementation of a Halbach array have been combined for the first time. Initially, the magnetization components of the Halbach array have been determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution has been derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with a Halbach PM array has been constructed to validate the model and further improved using piece-wise springs to trigger full-range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool in the development and optimization of Halbach PM generators. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.

  17. Original analytical model of the hydrodynamic loads applied on the half-bridge of a circular settling tank

    NASA Astrophysics Data System (ADS)

    Oanta, Emil M.; Dascalescu, Anca-Elena; Sabau, Adrian

    2016-12-01

    The paper presents an original analytical model of the hydrodynamic loads applied on the half-bridge of a circular settling tank. The calculus domain is defined using analytical geometry. The local dynamic pressure is computed from the radius measured from the center of the settling tank to the current area, which determines the relative velocity of the fluid, and from the depth at which the current area is located, which determines the density of the fluid. The local drag forces are computed from the discrete frontal cross-sectional areas of the submerged structure in contact with the fluid. In the last stage, the local drag forces are reduced to the appropriate points belonging to the main beam. This class of loads produces flexure of the main beam in a horizontal plane and additional twisting moments along this structure. Taking the hydrodynamic loads into account, the results of the theoretical models, i.e. the analytical model and the finite element model, may have an increased accuracy.
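
    A small numerical sketch of this force-reduction idea is given below: the submerged elements are discretized along the radius, the local relative velocity and dynamic pressure follow from the rotation rate and the radius, and the local drag forces are summed into a resultant force and a bending moment on the main beam. The geometry, rotation rate, and drag coefficient are hypothetical.

```python
import numpy as np

RHO = 1000.0                 # fluid density, kg/m^3 (sludge density could be used instead)
OMEGA = 2 * np.pi / 3600.0   # bridge rotation rate: one turn per hour, rad/s
CD = 1.2                     # drag coefficient of the submerged scraper elements
R_TANK = 15.0                # tank radius, m

# discretize the submerged structure along the radius (hypothetical layout)
r = np.linspace(1.0, R_TANK, 50)            # radial positions of the elements, m
dA = np.full_like(r, 0.05)                  # frontal area of each element, m^2

v = OMEGA * r                               # local relative fluid velocity, m/s
q = 0.5 * RHO * v**2                        # local dynamic pressure, Pa
dF = CD * q * dA                            # local drag force on each element, N

# reduce the local drag forces to resultants on the main beam
# (twisting moments would additionally require each element's vertical offset)
F_total = dF.sum()                          # total horizontal drag force, N
M_bending = (dF * r).sum()                  # bending moment about the tank centre, N*m

print(f"total drag force  : {F_total:8.2f} N")
print(f"bending moment    : {M_bending:8.2f} N*m")
```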

  18. Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin

    2018-07-01

    Prediction of the peak discharge of floods has attracted great attention from researchers and engineers. In the present study, nine typical nonlinear mathematical models are established based on a database of 40 historical dam failures. The first eight models, developed with a series of regression analyses, are purely empirical, while the last one is a semi-analytical approach derived from an analytical solution of dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas for estimating the peak outflow from breached embankment dams. The multiple regression analysis indicates that a function using the former two variables (i.e., Hw and Vw) produces considerably more accurate results than one using the latter two variables (i.e., El and Ew). The semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce reasonable results except for the model using only El. Moreover, the present models have been compared with other models available in the literature for estimating peak discharge.
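
    Empirical models of this type are commonly power laws of the form Qp = a * Hw^b * Vw^c; the sketch below fits such a relation by ordinary least squares in log space on a small set of hypothetical dam-failure records. The coefficients obtained are illustrative only, not those of the paper.

```python
import numpy as np

# hypothetical dam-failure records: water depth above breach invert Hw (m),
# stored volume above breach invert Vw (m^3), observed peak outflow Qp (m^3/s)
Hw = np.array([ 9.5, 12.2, 18.0, 24.6, 31.1,  6.8, 15.4, 27.9])
Vw = np.array([6.1e5, 2.3e6, 8.7e6, 3.1e7, 6.6e7, 1.2e5, 4.4e6, 2.5e7])
Qp = np.array([ 210.,  790., 2400., 7100., 13500.,   65., 1500., 8800.])

# fit Qp = a * Hw^b * Vw^c by linear least squares in log space
A = np.column_stack([np.ones_like(Hw), np.log(Hw), np.log(Vw)])
coef, *_ = np.linalg.lstsq(A, np.log(Qp), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"Qp ~ {a:.3g} * Hw^{b:.2f} * Vw^{c:.2f}")

# predict the peak outflow for a new hypothetical failure
print(f"predicted Qp: {a * 20.0**b * (1.0e7)**c:.0f} m^3/s")
```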

  19. Practical limitations on the use of diurnal temperature signals to quantify groundwater upwelling

    USGS Publications Warehouse

    Briggs, Martin A.; Lautz, Laura K.; Buckley, Sean F.; Lane, John W.

    2014-01-01

    Groundwater upwelling to streams creates unique habitat by influencing stream water quality and temperature; upwelling zones also serve as vectors for contamination when groundwater is degraded. Temperature time series data acquired along vertical profiles in the streambed have been applied to simple analytical models to determine rates of vertical fluid flux. These models are based on the downward propagation characteristics (amplitude attenuation and phase-lag) of the surface diurnal signal. Despite the popularity of these models, there are few published characterizations of moderate-to-strong upwelling. We attribute this limitation to the thermodynamics of upwelling, under which the downward conductive signal transport from the streambed interface occurs opposite the upward advective fluid flux. Governing equations describing the advection–diffusion of heat within the streambed predict that under upwelling conditions, signal amplitude attenuation will increase, but, counterintuitively, phase-lag will decrease. Therefore the extinction (measurable) depth of the diurnal signal is very shallow, but phase lag is also short, yielding low signal to noise ratio and poor model sensitivity. Conversely, amplitude attenuation over similar sensor spacing is strong, yielding greater potential model sensitivity. Here we present streambed thermal time series over a range of moderate to strong upwelling sites in the Quashnet River, Cape Cod, Massachusetts. The predicted inverse relationship between phase-lag and rate of upwelling was observed in the field data over a range of conditions, but the observed phase-lags were consistently shorter than predicted. Analytical solutions for fluid flux based on signal amplitude attenuation return results consistent with numerical models and physical seepage meters, but the phase-lag analytical model results are generally unreasonable. Through numerical modeling we explore reasons why phase-lag may have been over-predicted by the analytical models, and develop guiding relations of diurnal temperature signal extinction depth based on stream diurnal signal amplitude, upwelling magnitude, and streambed thermal properties that will be useful in designing future experiments.
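
    The quantities these analytical models consume, namely the amplitude attenuation and phase lag of the diurnal signal between two streambed depths, can be extracted from paired temperature records as in the sketch below; the records here are synthetic, and no flux equation is applied.

```python
import numpy as np

def diurnal_component(t_hours, temperature):
    """Least-squares amplitude and phase of the 1 cycle/day component."""
    w = 2 * np.pi / 24.0
    A = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours), np.sin(w * t_hours)])
    c, *_ = np.linalg.lstsq(A, temperature, rcond=None)
    amp = np.hypot(c[1], c[2])
    phase = np.arctan2(c[2], c[1])        # phase of amp*cos(w*t - phase)
    return amp, phase

# synthetic 3-day records at two depths (shallow and deep sensors)
t = np.arange(0.0, 72.0, 0.25)            # hours
rng = np.random.default_rng(2)
shallow = 15 + 4.0 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=t.size)
deep = 15 + 1.2 * np.sin(2 * np.pi * (t - 2.5) / 24) + 0.05 * rng.normal(size=t.size)

A_s, ph_s = diurnal_component(t, shallow)
A_d, ph_d = diurnal_component(t, deep)
amp_ratio = A_d / A_s                                            # amplitude attenuation
lag_hours = ((ph_d - ph_s) % (2 * np.pi)) * 24 / (2 * np.pi)     # phase lag
print(f"amplitude ratio = {amp_ratio:.2f}, phase lag ~ {lag_hours:.2f} h")
```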

  20. Analytical and multibody modeling for the power analysis of standing jumps.

    PubMed

    Palmieri, G; Callegari, M; Fioretti, S

    2015-01-01

    Two methods for the power analysis of standing jumps are proposed and compared in this article. The first method is based on a simple analytical formulation which requires as input the coordinates of the center of gravity at three specified instants of the jump. The second method is based on a multibody model that simulates the jumps by processing the data obtained by a three-dimensional (3D) motion capture system and the dynamometric measurements obtained by the force platforms. The multibody model is developed with OpenSim, an open-source software package which provides tools for the kinematic and dynamic analyses of 3D human body models. The study is focused on two of the typical tests used to evaluate the muscular activity of the lower limbs, namely the counter movement jump and the standing long jump. The comparison between the results obtained by the two methods confirms that the proposed analytical formulation is correct and represents a simple tool suitable for a preliminary analysis of the total mechanical work and the mean power exerted in standing jumps.

  1. Instability of cooperative adaptive cruise control traffic flow: A macroscopic approach

    NASA Astrophysics Data System (ADS)

    Ngoduy, D.

    2013-10-01

    This paper proposes a macroscopic model to describe the operations of cooperative adaptive cruise control (CACC) traffic flow, which is an extension of adaptive cruise control (ACC) traffic flow. In CACC traffic flow a vehicle can exchange information with many preceding vehicles through wireless communication. Due to such communication the CACC vehicle can follow its leader at a closer distance than the ACC vehicle. The stability diagrams are constructed from the developed model based on the linear and nonlinear stability method for a certain model parameter set. It is found analytically that CACC vehicles enhance the stabilization of traffic flow with respect to both small and large perturbations compared to ACC vehicles. Numerical simulation is carried out to support our analytical findings. Based on the nonlinear stability analysis, we show analytically and numerically that the CACC system improves the dynamic equilibrium capacity more than the ACC system does. We argue that, in parallel to microscopic models for CACC traffic flow, the newly developed macroscopic model will provide a complete insight into the dynamics of intelligent traffic flow.

  2. Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.

    PubMed

    Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina

    2016-08-25

    The increasing trend in the recent literature on coarse-grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even considering a given resolution level, the force fields are very heterogeneous and optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use and optimization of analytical potentials, optimized by targeting the statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to correlations among the force field terms. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are in progress.
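
    Of the algorithms named above, iterative Boltzmann inversion has a particularly compact update rule; the sketch below applies one damped IBI step to a tabulated potential, with hypothetical target and current distributions standing in for the atomistic reference and the current CG run.

```python
import numpy as np

KB = 0.0019872041   # Boltzmann constant, kcal/(mol*K)

def ibi_update(V, g_current, g_target, T, damping=0.2, eps=1e-8):
    """One iterative Boltzmann inversion step on a tabulated potential:
    V_new(r) = V(r) + damping * kB*T * ln(g_current(r) / g_target(r))."""
    correction = KB * T * np.log((g_current + eps) / (g_target + eps))
    return V + damping * correction

# hypothetical tabulated bond-length distributions for a CG bead pair
r = np.linspace(3.0, 6.0, 61)                         # Angstrom
g_target = np.exp(-((r - 4.7) / 0.25) ** 2)           # from the atomistic reference
g_current = np.exp(-((r - 4.4) / 0.35) ** 2)          # from the current CG run
g_target /= np.trapz(g_target, r)
g_current /= np.trapz(g_current, r)

V = -KB * 300.0 * np.log(g_target + 1e-8)             # Boltzmann-inverted initial guess
V_new = ibi_update(V, g_current, g_target, T=300.0)
print("max potential correction: %.3f kcal/mol" % np.max(np.abs(V_new - V)))
```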

  3. Modelling flows in a supply chain with analytical models: Case of a chemical industry

    NASA Astrophysics Data System (ADS)

    Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said

    2016-02-01

    This study addresses the modelling of logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field, considering the various constraints in this supply chain, in order to resolve planning problems and support better decision-making. The objective of this model is to determine and define the optimal quantities of the different products to route to and from the various entities in the supply chain studied.
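
    A toy version of such an integrated linear program is sketched below with scipy.optimize.linprog: product quantities are routed from two production sites through a single logistics platform to two demand points at minimum transport cost. All capacities, demands, and costs are hypothetical and unrelated to the company studied.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical network: 2 production sites -> 1 logistics platform -> 2 customers
# decision variables: x[s, c] = tonnes shipped from site s to customer c via the platform
sites, customers = 2, 2
cost = np.array([[12.0, 15.0],          # transport cost per tonne, site 0 -> customers
                 [10.0, 18.0]])         # site 1 -> customers
capacity = np.array([500.0, 400.0])     # production capacity per site
demand = np.array([350.0, 300.0])       # demand per customer
platform_capacity = 800.0               # handling limit of the logistics platform

c = cost.ravel()
A_ub, b_ub = [], []
# site capacity constraints: sum_c x[s, c] <= capacity[s]
for s in range(sites):
    row = np.zeros(sites * customers)
    row[s * customers:(s + 1) * customers] = 1.0
    A_ub.append(row); b_ub.append(capacity[s])
# platform handling limit: total shipped <= platform_capacity
A_ub.append(np.ones(sites * customers)); b_ub.append(platform_capacity)
# demand satisfaction: sum_s x[s, c] == demand[c]
A_eq, b_eq = [], []
for cu in range(customers):
    row = np.zeros(sites * customers)
    row[cu::customers] = 1.0
    A_eq.append(row); b_eq.append(demand[cu])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("optimal shipments (tonnes):\n", res.x.reshape(sites, customers))
print("total transport cost:", res.fun)
```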

  4. Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.

    PubMed

    Goodin, Christopher

    2013-05-01

    The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.
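
    The closed-form expressions themselves are not reproduced in the abstract; as a numerical counterpart, the sketch below integrates a simple cosine-lobe BRDF (one common convention, with an arbitrary scale factor and exponent) over the outgoing hemisphere to obtain the black-sky (directional-hemispherical) albedo at a few incidence angles. Quadrature values of this kind are what analytic albedo expressions can be checked against.

```python
import numpy as np

def reflect(wi):
    """Mirror reflection of the incident direction about the surface normal z."""
    return np.array([-wi[0], -wi[1], wi[2]])

def black_sky_albedo(theta_i, k=0.3, n=20, n_theta=256, n_phi=512):
    """Directional-hemispherical reflectance of a cosine-lobe BRDF
    f_r = k * max(0, r . wo)^n, computed by midpoint quadrature of the
    integral of f_r * cos(theta_o) dOmega over the outgoing hemisphere."""
    wi = np.array([np.sin(theta_i), 0.0, np.cos(theta_i)])
    r = reflect(wi)
    theta_o = (np.arange(n_theta) + 0.5) * (np.pi / 2) / n_theta
    phi_o = (np.arange(n_phi) + 0.5) * (2 * np.pi) / n_phi
    TH, PH = np.meshgrid(theta_o, phi_o, indexing="ij")
    wo = np.stack([np.sin(TH) * np.cos(PH),
                   np.sin(TH) * np.sin(PH),
                   np.cos(TH)], axis=-1)
    f_r = k * np.clip(wo @ r, 0.0, None) ** n        # BRDF on the quadrature grid
    d_omega = (np.pi / 2 / n_theta) * (2 * np.pi / n_phi)
    return float(np.sum(f_r * np.cos(TH) * np.sin(TH)) * d_omega)

for deg in (0, 30, 60):
    print(f"black-sky albedo at {deg:2d} deg incidence: "
          f"{black_sky_albedo(np.radians(deg)):.4f}")
```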

  5. Exploring the Argumentation Pattern in Modeling-Based Learning about Apparent Motion of Mars

    ERIC Educational Resources Information Center

    Park, Su-Kyeong

    2016-01-01

    This study proposed an analytic framework for coding students' dialogic argumentation and investigated the characteristics of the small-group argumentation pattern observed in modeling-based learning. The participants were 122 second grade high school students in South Korea divided into an experimental and a comparison group. Modeling-based…

  6. TU-F-17A-03: An Analytical Respiratory Perturbation Model for Lung Motion Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G; Yuan, A; Wei, J

    2014-06-15

    Purpose: Breathing irregularity is common, causing unreliable prediction of tumor motion for correlation-based surrogates. Both tidal volume (TV) and breathing pattern (BP=ΔVthorax/TV, where TV=ΔVthorax+ΔVabdomen) affect lung motion in the anterior-posterior and superior-inferior directions. We developed a novel respiratory motion perturbation (RMP) model in analytical form to account for changes in TV and BP in motion prediction from simulation to treatment. Methods: The RMP model is an analytical function of patient-specific anatomic and physiologic parameters. It contains a base-motion trajectory d(x,y,z) derived from a 4-dimensional computed tomography (4DCT) at simulation and a perturbation term Δd(ΔTV,ΔBP) accounting for deviation at treatment from simulation. The perturbation is dependent on tumor-specific location and patient-specific anatomy. Eleven patients with simulation and treatment 4DCT images were used to assess the RMP method in motion prediction from 4DCT1 to 4DCT2, and vice versa. For each patient, ten motion trajectories of corresponding points in the lower lobes were measured in both 4DCTs: one served as the base-motion trajectory and the other as the ground truth for comparison. In total, 220 motion trajectory predictions were assessed. The motion discrepancy between the two 4DCTs for each patient served as a control. An established 5D motion model was used for comparison. Results: The average absolute error of the RMP model prediction in the superior-inferior direction is 1.6±1.8 mm, similar to 1.7±1.6 mm from the 5D model (p=0.98). Some uncertainty is associated with the limited spatial resolution (2.5 mm slice thickness) and temporal resolution (10 phases). The non-corrected motion discrepancy between the two 4DCTs is 2.6±2.7 mm, with a maximum of ±20 mm, and correction is necessary (p=0.01). Conclusion: The analytical motion model predicts lung motion with accuracy similar to the 5D model. The analytical model is based on physical relationships, requires no training, and is therefore potentially more resilient to breathing irregularities. On-going investigation introduces airflow into the RMP model for improvement. This research is in part supported by NIH (U54CA137788/132378). AY would like to thank the MSKCC summer medical student research program supported by the National Cancer Institute and hosted by the Department of Medical Physics at MSKCC.

  7. A Hybrid Coarse-graining Approach for Lipid Bilayers at Large Length and Time Scales

    PubMed Central

    Ayton, Gary S.; Voth, Gregory A.

    2009-01-01

    A hybrid analytic-systematic (HAS) coarse-grained (CG) lipid model is developed and employed in a large-scale simulation of a liposome. The methodology is termed hybrid analytic-systematic because one component of the interaction between CG sites is variationally determined from the multiscale coarse-graining (MS-CG) methodology, while the remaining component utilizes an analytic potential. The systematic component models the in-plane center-of-mass interaction of the lipids as determined from an atomistic-level MD simulation of a bilayer. The analytic component is based on the well-known Gay-Berne ellipsoid-of-revolution liquid crystal model, and is designed to model the highly anisotropic interactions at a highly coarse-grained level. The HAS CG approach is the first step in an “aggressive” CG methodology designed to model multi-component biological membranes at very large length and time scales. PMID:19281167

  8. Analytical insight into "breathing" crack-induced acoustic nonlinearity with an application to quantitative evaluation of contact cracks.

    PubMed

    Wang, Kai; Liu, Menglong; Su, Zhongqing; Yuan, Shenfang; Fan, Zheng

    2018-08-01

    To characterize fatigue cracks, in the undersized stage in particular, preferably in a quantitative and precise manner, a two-dimensional (2D) analytical model is developed for interpreting the modulation mechanism of a "breathing" crack on guided ultrasonic waves (GUWs). In conjunction with a modal decomposition method and a variational principle-based algorithm, the model is capable of analytically depicting the propagating and evanescent waves induced owing to the interaction of probing GUWs with a "breathing" crack, and further extracting linear and nonlinear wave features (e.g., reflection, transmission, mode conversion and contact acoustic nonlinearity (CAN)). With the model, a quantitative correlation between CAN embodied in acquired GUWs and crack parameters (e.g., location and severity) is obtained, whereby a set of damage indices is proposed via which the severity of the crack can be evaluated quantitatively. The evaluation, in principle, does not entail a benchmarking process against baseline signals. As validation, the results obtained from the analytical model are compared with those from finite element simulation, showing good consistency. This has demonstrated accuracy of the developed analytical model in interpreting contact crack-induced CAN, and spotlighted its application to quantitative evaluation of fatigue damage. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for time-dependent boundary distributions and zero-order productions described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as for position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subject to sequential decay chain reactions, for which analytical solutions are not currently available.
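    As a concrete point of comparison, the classical Ogata-Banks constant-input solution of the 1-D advection-dispersion equation is sketched below; it is a well-known special case of the family of solutions described above (constant dispersion, no decay, no zero-order production), and the parameter values in the example are illustrative only.

```python
import numpy as np
from scipy.special import erfc

def ade_constant_input(x, t, v, D, c0=1.0):
    """Ogata-Banks solution for a constant-concentration inlet on a
    semi-infinite domain (use any self-consistent set of units).

    x : distance from the inlet, t : time (> 0)
    v : pore-water velocity,     D : dispersion coefficient
    """
    x, t = np.asarray(x, float), np.asarray(t, float)
    term1 = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
    term2 = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
    return 0.5 * c0 * (term1 + term2)

# e.g. a concentration profile after 10 days with v = 0.1 m/d and D = 0.05 m^2/d
profile = ade_constant_input(np.linspace(0.0, 5.0, 51), t=10.0, v=0.1, D=0.05)
```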

  10. Prediction of relative and absolute permeabilities for gas and water from soil water retention curves using a pore-scale network model

    NASA Astrophysics Data System (ADS)

    Fischer, Ulrich; Celia, Michael A.

    1999-04-01

    Functional relationships for unsaturated flow in soils, including those between capillary pressure, saturation, and relative permeabilities, are often described using analytical models based on the bundle-of-tubes concept. These models are often limited by, for example, inherent difficulties in the prediction of absolute permeabilities and in the incorporation of a discontinuous nonwetting phase. To overcome these difficulties, an alternative approach may be formulated using pore-scale network models. In this approach, the pore space of the network model is adjusted to match retention data, and absolute and relative permeabilities are then calculated. A new approach that allows more general assignments of pore sizes within the network model provides greater flexibility to match measured data. This additional flexibility is especially important for simultaneous modeling of the main imbibition and drainage branches. Through comparisons between the network model results, analytical model results, and measured data for a variety of both undisturbed and repacked soils, the network model is seen to match capillary pressure-saturation data nearly as well as the analytical model, to predict water-phase relative permeabilities equally well, and to predict gas-phase relative permeabilities significantly better than the analytical model. The network model also provides very good estimates of intrinsic permeability and thus of absolute permeabilities. Both the network model and the analytical model lose accuracy in predicting relative water permeabilities for soils characterized by a van Genuchten exponent n≲3. Overall, the computational results indicate that reliable predictions of both relative and absolute permeabilities are obtained with the network model when the model matches the capillary pressure-saturation data well. The results also indicate that measured imbibition data are crucial to good predictions of the complete hysteresis loop.

  11. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or “QCP”), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.0083 Hz), with the tamponade introduced 420 minutes into the simulation sequence. The ability of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree with the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system identified pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
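    For readers who want a feel for the similarity-based estimation idea described above, a generic kernel-weighted reconstruction sketch is given below; it is a hypothetical illustration in the spirit of SBM, not the implementation used in the study, and the Gaussian kernel, bandwidth and data layout are assumptions.

```python
import numpy as np

def similarity_estimate(x, exemplars, bandwidth=1.0):
    """Estimate the 'expected' multivariate observation as a kernel-weighted
    combination of reference exemplars (rows = normal-state observations,
    columns = monitored variables). The residual between the actual and the
    estimated observation can then be tracked for early signs of deterioration.
    """
    d2 = np.sum((exemplars - x) ** 2, axis=1)     # squared distances to exemplars
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian similarity weights
    w /= w.sum()
    x_hat = w @ exemplars                         # reconstructed observation
    residual = x - x_hat                          # deviation from the normal model
    return x_hat, residual
```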

  12. Ray Tracing and Modal Methods for Modeling Radio Propagation in Tunnels With Rough Walls

    PubMed Central

    Zhou, Chenming

    2017-01-01

    At the ultrahigh frequencies common to portable radios, tunnels such as mine entries are often modeled as hollow dielectric waveguides. The roughness of the tunnel walls influences radio propagation and therefore should be taken into account when an accurate power prediction is needed. This paper investigates how wall roughness affects radio propagation in tunnels, and presents a unified ray tracing and modal method for modeling radio propagation in tunnels with rough walls. First, general analytical formulas for modeling the influence of wall roughness are derived, based on the modal method and the ray tracing method, respectively. Second, the equivalence of the ray tracing and modal methods in the presence of wall roughness is mathematically proved, by showing that the ray tracing-based analytical formula converges to the modal-based formula through the Poisson summation formula. The derivation and findings are verified by simulation results based on the ray tracing and modal methods. PMID:28935995

  13. Geophysical technique for mineral exploration and discrimination based on electromagnetic methods and associated systems

    DOEpatents

    Zhdanov, Michael S. [Salt Lake City, UT]

    2008-01-29

    Mineral exploration needs a reliable method to distinguish between uneconomic mineral deposits and economic mineralization. A method and system includes a geophysical technique for subsurface material characterization, mineral exploration and mineral discrimination. The technique introduced in this invention detects induced polarization effects in electromagnetic data and uses remote geophysical observations to determine the parameters of an effective conductivity relaxation model using a composite analytical multi-phase model of the rock formations. The conductivity relaxation model and analytical model can be used to determine parameters related by analytical expressions to the physical characteristics of the microstructure of the rocks and minerals. These parameters are ultimately used for the discrimination of different components in underground formations, and in this way provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization using geophysical remote sensing technology.

  14. A conceptual snow model with an analytic resolution of the heat and phase change equations

    NASA Astrophysics Data System (ADS)

    Riboust, Philippe; Le Moine, Nicolas; Thirel, Guillaume; Ribstein, Pierre

    2017-04-01

    Compared to degree-day snow models, physically-based snow models resolve more processes in an attempt to achieve a better representation of reality. Often these physically-based models resolve the heat transport equations in snow using a vertical discretization of the snowpack. The snowpack is decomposed into several layers in which the mechanical and thermal states of the snow are calculated. A higher number of layers in the snowpack allows for better accuracy, but it also tends to increase the computational cost. In order to develop a snow model that estimates the temperature profile of snow at a lower computational cost, we used an analytical decomposition of the vertical profile using eigenfunctions (i.e. trigonometric functions adapted to the specific boundary conditions). The mass transfer of snow melt has also been estimated using an analytical conceptualization of runoff fingering and matrix flow. As external meteorological forcing, the model uses solar and atmospheric radiation, air temperature, atmospheric humidity and precipitation. It has been tested and calibrated at the point scale at two different stations in the Alps: Col de Porte (France, 1325 m) and Weissfluhjoch (Switzerland, 2540 m). A sensitivity analysis of model parameters and model inputs will be presented together with a comparison with measured snow surface temperature, SWE, snow depth, temperature profile and snow melt data. The snow model is created with the aim of ultimately coupling it with hydrological models for rainfall-runoff modeling in mountainous areas. We hope to create a model that is faster than physically-based models but capable of representing more physical processes than degree-day snow models. This should help to build a more reliable snow model that can easily be calibrated against remote sensing and in situ observations, or assimilate these data for forecasting purposes.

  15. Hierarchical analytical and simulation modelling of human-machine systems with interference

    NASA Astrophysics Data System (ADS)

    Braginsky, M. Ya; Tarakanov, D. V.; Tsapko, S. G.; Tsapko, I. V.; Baglaeva, E. A.

    2017-01-01

    The article considers the principles of building the analytical and simulation model of the human operator and the industrial control system hardware and software. E-networks as the extension of Petri nets are used as the mathematical apparatus. This approach allows simulating complex parallel distributed processes in human-machine systems. The structural and hierarchical approach is used as the building method for the mathematical model of the human operator. The upper level of the human operator is represented by the logical dynamic model of decision making based on E-networks. The lower level reflects psychophysiological characteristics of the human-operator.

  16. Mated vertical ground vibration test

    NASA Technical Reports Server (NTRS)

    Ivey, E. W.

    1980-01-01

    The Mated Vertical Ground Vibration Test (MVGVT) was conducted to provide an experimental base, in the form of structural dynamic characteristics, for the shuttle vehicle. This database was used to develop high-confidence analytical models for the prediction and design of loads, pogo controls, and flutter criteria under various payloads and operational missions. The MVGVT boost and launch program evolution, test configurations, and their suspensions are described. Test results are compared with predicted analytical results.

  17. Analytical halo model of galactic conformity

    NASA Astrophysics Data System (ADS)

    Pahwa, Isha; Paranjape, Aseem

    2017-09-01

    We present a fully analytical halo model of colour-dependent clustering that incorporates the effects of galactic conformity in a halo occupation distribution framework. The model, based on our previous numerical work, describes conformity through a correlation between the colour of a galaxy and the concentration of its parent halo, leading to a correlation between central and satellite galaxy colours at fixed halo mass. The strength of the correlation is set by a tunable 'group quenching efficiency', and the model can separately describe group-level correlations between galaxy colour (1-halo conformity) and large-scale correlations induced by assembly bias (2-halo conformity). We validate our analytical results using clustering measurements in mock galaxy catalogues, finding that the model is accurate at the 10-20 per cent level for a wide range of luminosities and length-scales. We apply the formalism to interpret the colour-dependent clustering of galaxies in the Sloan Digital Sky Survey (SDSS). We find good overall agreement between the data and a model that has 1-halo conformity at a level consistent with previous results based on an SDSS group catalogue, although the clustering data require satellites to be redder than suggested by the group catalogue. Within our modelling uncertainties, however, we do not find strong evidence of 2-halo conformity driven by assembly bias in SDSS clustering.

  18. High-Performance Data Analytics Beyond the Relational and Graph Data Models with GEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Minutoli, Marco; Bhatt, Shreyansh

    Graphs represent an increasingly popular data model for data analytics, since they can naturally represent relationships and interactions between entities. Relational databases and their pure table-based data model are not well suited to storing and processing sparse data. Consequently, graph databases have gained interest in the last few years and the Resource Description Framework (RDF) became the standard data model for graph data. Nevertheless, while RDF is well suited to analyzing the relationships between entities, it is not efficient in representing their attributes and properties. In this work we propose the adoption of a new hybrid data model, based on attributed graphs, that aims at overcoming the limitations of the pure relational and graph data models. We present how we have re-designed the GEMS data-analytics framework to fully take advantage of the proposed hybrid data model. To improve analysts' productivity, in addition to a C++ API for application development, we adopt GraQL as the input query language. We validate our approach by implementing a set of queries on net-flow data and we compare our framework's performance against Neo4j. Experimental results show significant performance improvement over Neo4j, up to several orders of magnitude when increasing the size of the input data.

  19. Electrical wave propagation in an anisotropic model of the left ventricle based on analytical description of cardiac architecture.

    PubMed

    Pravdin, Sergey F; Dierckx, Hans; Katsnelson, Leonid B; Solovyova, Olga; Markhasin, Vladimir S; Panfilov, Alexander V

    2014-01-01

    We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation using the ten Tusscher-Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that the fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation.

  20. Modeling of layered anisotropic composite material based on effective medium theory

    NASA Astrophysics Data System (ADS)

    Bao, Yang; Song, Jiming

    2018-04-01

    In this paper, we present an efficient method to simulate multilayered anisotropic composite materials with effective medium theory. The effective permittivity, permeability and orientation angle for a layered anisotropic composite medium are extracted with this equivalent model. We also derive analytical expressions for the effective parameters and orientation angle in the low-frequency (LF) limit, which are shown in detail. Numerical results compare the extracted effective parameters and orientation angle with the analytical results from the low-frequency limit. Good agreement is achieved, demonstrating the accuracy of our efficient model.
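    For reference, the textbook quasi-static mixing rules for a stack of isotropic layers with volume fractions f_i and permittivities ε_i are shown below; the paper's low-frequency expressions generalize this picture to anisotropic layers and recover an effective orientation angle, so the formulas here are only the standard baseline, not the paper's results.

```latex
\varepsilon_{\parallel} \;=\; \sum_i f_i\,\varepsilon_i,
\qquad
\varepsilon_{\perp} \;=\; \Bigl(\sum_i \frac{f_i}{\varepsilon_i}\Bigr)^{-1},
\qquad
\sum_i f_i = 1 .
```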

  1. An analytical drain current model for symmetric double-gate MOSFETs

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Huang, Gongyi; Lin, Wei; Xu, Chuanzhong

    2018-04-01

    An analytical surface-potential-based drain current model of symmetric double-gate (sDG) MOSFETs is described as a SPICE-compatible model in this paper. The continuous surface and central potentials from the accumulation to the strong-inversion regions are solved from the 1-D Poisson's equation in sDG MOSFETs. Furthermore, the drain current is derived from the charge-sheet model as a function of the surface potential. Over a wide range of terminal voltages, doping concentrations, and device geometries, the surface-potential calculation scheme and the drain current model are verified by solving the 1-D Poisson's equation with the least-squares method and by using Silvaco Atlas simulation results and experimental data, respectively. Such a model can be adopted as a useful platform for developing circuit simulators and provides a clear understanding of sDG MOSFET device physics.

  2. Design optimization of an axial-field eddy-current magnetic coupling based on magneto-thermal analytical model

    NASA Astrophysics Data System (ADS)

    Fontchastagner, Julien; Lubin, Thierry; Mezani, Smaïl; Takorabet, Noureddine

    2018-03-01

    This paper presents a design optimization of an axial-flux eddy-current magnetic coupling. The design procedure is based on a torque formula derived from a 3D analytical model and on a population-based algorithm. The main objective of this paper is to determine the best design in terms of magnet volume in order to transmit a torque between two movers while ensuring a low slip speed and a good efficiency. The torque formula is very accurate and computationally efficient, and is valid for any slip speed. Nevertheless, in order to solve more realistic problems and to take into account the thermal effects on the torque value, a thermal model based on convection heat transfer coefficients is also established and used in the design optimization procedure. Results show the effectiveness of the proposed methodology.
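    To illustrate how a population-based algorithm can be wrapped around an analytical torque formula of this kind, a minimal sketch using differential evolution is given below; the placeholder torque expression, the geometry bounds and the 40 N·m torque requirement are invented for illustration and do not come from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def torque_model(params):
    """Placeholder for the 3D analytical torque formula: returns transmitted
    torque (N.m) for a magnet geometry. The real expression is not reproduced."""
    r_in, r_out, h = params
    return 5.0e6 * (r_out**3 - r_in**3) * h               # illustrative only

def objective(params):
    r_in, r_out, h = params
    volume = np.pi * (r_out**2 - r_in**2) * h              # magnet volume to minimize
    penalty = max(0.0, 40.0 - torque_model(params)) * 1e3  # require >= 40 N.m
    return volume + penalty

bounds = [(0.01, 0.05), (0.05, 0.10), (0.002, 0.01)]       # r_in, r_out, h in metres
result = differential_evolution(objective, bounds, seed=0)
print(result.x, objective(result.x))
```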

  3. Modeling the drugs' passive transfer in the body based on their chromatographic behavior.

    PubMed

    Kouskoura, Maria G; Kachrimanis, Kyriakos G; Markopoulou, Catherine K

    2014-11-01

    One of the most challenging aims in modern analytical chemistry and pharmaceutical analysis is to create models of drugs' behavior based on simulation experiments. Since drugs' effects are closely related to their molecular properties, numerous characteristics of drugs are used in order to acquire a model of passive absorption and transfer in the human body. Importantly, such a direction in innovative bioanalytical methodologies is also urgently needed in the area of personalized medicine to implement nanotechnological and genomic advancements. Simulation experiments were carried out by examining and interpreting the chromatographic behavior of 113 analytes/drugs (400 observations) in RP-HPLC. The dataset employed for this purpose included 73 descriptors referring to the physicochemical properties of the mobile-phase mixture in different proportions, the physicochemical properties of the analytes and the structural characteristics of their molecules. A series of different software packages was used to calculate all the descriptors apart from those referring to the structure of the analytes. The correlation of the descriptors with the retention time of the analytes, eluted from a C4 column with an aqueous mobile phase, was employed as the dataset from which the in-body behavior models were built. Their evaluation with Partial Least Squares (PLS) software showed that the chromatographic behavior of a drug on a lipophilic stationary phase with a polar mobile phase is directly related to its drug-ability. At the same time, the behavior of an unknown drug in the human body can be predicted reliably via Artificial Neural Network (ANN) software. Copyright © 2014 Elsevier B.V. All rights reserved.
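    A minimal sketch of the PLS regression step described above is given below; the random descriptor matrix merely stands in for the real 400-observation, 73-descriptor dataset, and the number of latent components is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# X: one row per chromatographic observation, columns = molecular and
# mobile-phase descriptors; y: measured retention times. Synthetic data here.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 73))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=400)

pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, X, y, cv=10, scoring="r2")
print(scores.mean())                  # cross-validated predictive R^2
```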

  4. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
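    As background for the NMF formulation referenced above, a minimal unconstrained NMF topic-modeling sketch on a toy corpus is shown below; it uses standard scikit-learn components and does not reproduce UTOPIAN's semi-supervised, interactive formulation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = ["visual analytics of document collections",
        "topic modeling with matrix factorization",
        "interactive steering of topic models"]        # toy corpus

tfidf = TfidfVectorizer(stop_words="english")
A = tfidf.fit_transform(docs)                          # document-term matrix
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
W = nmf.fit_transform(A)                               # document-topic weights
H = nmf.components_                                    # topic-term weights

terms = tfidf.get_feature_names_out()
for k, topic in enumerate(H):
    top = topic.argsort()[::-1][:3]                    # three strongest terms
    print(f"topic {k}:", [terms[i] for i in top])
```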

  5. Potential formulation of sleep dynamics

    NASA Astrophysics Data System (ADS)

    Phillips, A. J. K.; Robinson, P. A.

    2009-02-01

    A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.
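    For orientation, a generic quartic two-well potential with friction takes the form sketched below; the particular coefficients and the slowly varying tilt b(t) are illustrative, not the fitted expressions from the paper.

```latex
V(x) \;=\; \tfrac{1}{4}x^{4} - \tfrac{a}{2}x^{2} + b(t)\,x,
\qquad
\ddot{x} + \gamma\,\dot{x} \;=\; -\frac{\mathrm{d}V}{\mathrm{d}x},
```

    where, plausibly, the two wells correspond to the wake and sleep states, the friction term γ relates to the time course of neurotransmitter action mentioned above, and the slowly varying tilt b(t) moves the system between the wells.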

  6. Comparison of analytical and numerical approaches for CT-based aberration correction in transcranial passive acoustic imaging

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; Hynynen, Kullervo

    2016-01-01

    Computed tomography (CT)-based aberration corrections are employed in transcranial ultrasound both for therapy and imaging. In this study, analytical and numerical approaches for calculating aberration corrections based on CT data were compared, with a particular focus on their application to transcranial passive imaging. Two models were investigated: a three-dimensional full-wave numerical model (Connor and Hynynen 2004 IEEE Trans. Biomed. Eng. 51 1693-706) based on the Westervelt equation, and an analytical method (Clement and Hynynen 2002 Ultrasound Med. Biol. 28 617-24) similar to that currently employed by commercial brain therapy systems. Trans-skull time delay corrections calculated from each model were applied to data acquired by a sparse hemispherical (30 cm diameter) receiver array (128 piezoceramic discs: 2.5 mm diameter, 612 kHz center frequency) passively listening through ex vivo human skullcaps (n  =  4) to emissions from a narrow-band, fixed source emitter (1 mm diameter, 516 kHz center frequency). Measurements were taken at various locations within the cranial cavity by moving the source around the field using a three-axis positioning system. Images generated through passive beamforming using CT-based skull corrections were compared with those obtained through an invasive source-based approach, as well as images formed without skull corrections, using the main lobe volume, positional shift, peak sidelobe ratio, and image signal-to-noise ratio as metrics for image quality. For each CT-based model, corrections achieved by allowing for heterogeneous skull acoustical parameters in simulation outperformed the corresponding case where homogeneous parameters were assumed. Of the CT-based methods investigated, the full-wave model provided the best imaging results at the cost of computational complexity. These results highlight the importance of accurately modeling trans-skull propagation when calculating CT-based aberration corrections. Although presented in an imaging context, our results may also be applicable to the problem of transmit focusing through the skull.

  7. Experimentally validated mathematical model of analyte uptake by permeation passive samplers.

    PubMed

    Salim, F; Ioannidis, M; Górecki, T

    2017-11-15

    A mathematical model describing the sampling process in a permeation-based passive sampler was developed and evaluated numerically. The model was applied to the Waterloo Membrane Sampler (WMS), which employs a polydimethylsiloxane (PDMS) membrane as a permeation barrier, and an adsorbent as a receiving phase. Samplers of this kind are used for sampling volatile organic compounds (VOC) from air and soil gas. The model predicts the spatio-temporal variation of sorbed and free analyte concentrations within the sampler components (membrane, sorbent bed and dead volume), from which the uptake rate throughout the sampling process can be determined. A gradual decline in the uptake rate during the sampling process is predicted, which is more pronounced when sampling higher concentrations. Decline of the uptake rate can be attributed to diminishing analyte concentration gradient within the membrane, which results from resistance to mass transfer and the development of analyte concentration gradients within the sorbent bed. The effects of changing the sampler component dimensions on the rate of this decline in the uptake rate can be predicted from the model. Performance of the model was evaluated experimentally for sampling of toluene vapors under controlled conditions. The model predictions proved close to the experimental values. The model provides a valuable tool to predict changes in the uptake rate during sampling, to assign suitable exposure times at different analyte concentration levels, and to optimize the dimensions of the sampler in a manner that minimizes these changes during the sampling period.

  8. Approaching near real-time biosensing: microfluidic microsphere based biosensor for real-time analyte detection.

    PubMed

    Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania

    2015-04-15

    In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the adsorption-kinetics rate-limiting mechanism, which is diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling microliter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model for the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: the Tumor Necrosis Factor (TNF)-α cytokine and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides a continuous, sensitive and specific near real-time monitoring method for analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Determining the thawing times of frozen foods is a challenging problem because the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time, whereas the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and generally feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for the thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or the development of new ones that will enable accurate determination of thawing time within a wide range of practical heat-transfer conditions during processing. PMID:27904387

  10. Thermal management of liquid direct cooled split disk laser

    NASA Astrophysics Data System (ADS)

    Yang, Huomu; Feng, Guoying; Zhou, Shouhuan

    2015-02-01

    The thermal effects of a liquid direct-cooled split disk laser are modeled and solved analytically. Analytical solutions that take into account the longitudinal temperature rise of the cooling liquid are given to describe the temperature distribution in the split disk and the cooling liquid, based on hydrodynamics and heat transfer. The influence of the cooling liquid, the liquid flow velocity, and the thickness of the cooling channel and of the disk gain medium can also be obtained from the analytical solutions.

  11. On cup anemometer rotor aerodynamics.

    PubMed

    Pindado, Santiago; Pérez, Javier; Avila-Sanchez, Sergio

    2012-01-01

    The influence of anemometer rotor shape parameters, such as the cups' front area or their center rotation radius, on the anemometer's performance was analyzed. This analysis was based on calibrations performed on two different anemometers (one based on a magnet-system output signal, the other on an opto-electronic output signal), tested with 21 different rotors. The results were compared to those obtained from classical analytical models. The results clearly showed a linear dependency of both calibration constants, the slope and the offset, on the cups' center rotation radius, with the influence of the front area of the cups also being observed. The analytical model of Kondo et al. proved to be accurate when based on precise data related to the aerodynamic behavior of a rotor's cup.

  12. Accurate quantification of PGE2 in the polyposis in rat colon (Pirc) model by surrogate analyte-based UPLC-MS/MS.

    PubMed

    Yun, Changhong; Dashwood, Wan-Mohaiza; Kwong, Lawrence N; Gao, Song; Yin, Taijun; Ling, Qinglan; Singh, Rashim; Dashwood, Roderick H; Hu, Ming

    2018-01-30

    An accurate and reliable UPLC-MS/MS method is reported for the quantification of endogenous prostaglandin E2 (PGE2) in rat colonic mucosa and polyps. This method adopted the "surrogate analyte plus authentic bio-matrix" approach, using two different stable isotope-labeled analogs: PGE2-d9 as the surrogate analyte and PGE2-d4 as the internal standard. A quantitative standard curve was constructed with the surrogate analyte in colonic mucosa homogenate, and the method was successfully validated with the authentic bio-matrix. Concentrations of endogenous PGE2 in both normal and inflammatory tissue homogenates were back-calculated based on the regression equation. Because there is no endogenous interference with the surrogate analyte determination, the specificity was particularly good. By using the authentic bio-matrix for validation, the matrix effect and extraction recovery are identical for the quantitative standard curve and the actual samples, which notably increases the assay accuracy. The method is easy, fast, robust and reliable for colon PGE2 determination. This "surrogate analyte" approach was applied to measure PGE2, one of the strong biomarkers of colorectal cancer, in the mucosa and polyps of the Pirc rat (an Apc-mutant rat kindred that models human FAP). A similar concept could be applied to endogenous biomarkers in other tissues. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Open and scalable analytics of large Earth observation datasets: From scenes to multidimensional arrays using SciDB and GDAL

    NASA Astrophysics Data System (ADS)

    Appel, Marius; Lahn, Florian; Buytaert, Wouter; Pebesma, Edzer

    2018-04-01

    Earth observation (EO) datasets are commonly provided as collections of scenes, where individual scenes represent a temporal snapshot and cover a particular region on the Earth's surface. Using these data in complex spatiotemporal modeling becomes difficult as soon as data volumes exceed a certain capacity or analyses include many scenes, which may spatially overlap and may have been recorded at different dates. In order to facilitate analytics on large EO datasets, we combine and extend the Geospatial Data Abstraction Library (GDAL) and the array-based data management and analytics system SciDB. We present an approach to automatically convert collections of scenes to multidimensional arrays and use SciDB to scale computationally intensive analytics. We evaluate the approach in three case studies on national-scale land-use change monitoring with Landsat imagery, global empirical orthogonal function analysis of daily precipitation, and combining historical climate model projections with satellite-based observations. Results indicate that the approach can be used to represent various EO datasets and that analyses in SciDB scale well with available computational resources. To simplify analyses of higher-dimensional datasets such as climate model output, however, a generalization of the GDAL data model might be needed. All parts of this work have been implemented as open-source software and we discuss how this may facilitate open and reproducible EO analyses.

  14. Using Configural Frequency Analysis as a Person-Centered Analytic Approach with Categorical Data

    ERIC Educational Resources Information Center

    Stemmler, Mark; Heine, Jörg-Henrik

    2017-01-01

    Configural frequency analysis and log-linear modeling are presented as person-centered analytic approaches for the analysis of categorical or categorized data in multi-way contingency tables. Person-centered developmental psychology, based on the holistic interactionistic perspective of the Stockholm working group around David Magnusson and Lars…

  15. Optimizing an Immersion ESL Curriculum Using Analytic Hierarchy Process

    ERIC Educational Resources Information Center

    Tang, Hui-Wen Vivian

    2011-01-01

    The main purpose of this study is to fill a substantial knowledge gap regarding reaching a uniform group decision in English curriculum design and planning. A comprehensive content-based course criterion model extracted from existing literature and expert opinions was developed. Analytical hierarchy process (AHP) was used to identify the relative…

  16. A Move-Analytic Contrastive Study on the Introductions of American and Philippine Master's Theses in Architecture

    ERIC Educational Resources Information Center

    Lintao, Rachelle B.; Erfe, Jonathan P.

    2012-01-01

    This study purports to foster the understanding of profession-based academic writing in two different cultural conventions by examining the rhetorical moves employed by American and Philippine thesis introductions in Architecture using Swales' 2004 Revised CARS move-analytic model as framework. Twenty (20) Master's thesis introductions in…

  17. Learning Analytics for Communities of Inquiry

    ERIC Educational Resources Information Center

    Kovanovic, Vitomir; Gaševic, Dragan; Hatala, Marek

    2014-01-01

    This paper describes doctoral research that focuses on the development of a learning analytics framework for inquiry-based digital learning. Building on the Community of Inquiry model (CoI)--a foundation commonly used in the research and practice of digital learning and teaching--this research builds on the existing body of knowledge in two…

  18. Aberration measurement technique based on an analytical linear model of a through-focus aerial image.

    PubMed

    Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas

    2014-03-10

    We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and then an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z(37). Experiments on a real lithography tool confirm that our method can monitor lens aberration offset with an accuracy of 0.7 nm.
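    As a sketch of how such a precomputed linear relationship can be inverted, a least-squares Zernike retrieval in the spirit of the approach above is shown below; the sensitivity matrix S and the array shapes are illustrative assumptions, and the snippet does not reproduce the paper's specific fitting and rotation matrices.

```python
import numpy as np

def retrieve_zernikes(images, image_nominal, S):
    """Least-squares retrieval of Zernike coefficients assuming the linear
    relation I = I0 + S @ z between through-focus aerial-image intensities
    and the aberration coefficients.

    images        : measured intensity samples stacked into a vector, shape (m,)
    image_nominal : aberration-free intensities on the same samples, shape (m,)
    S             : precomputed sensitivity matrix, shape (m, n_zernike)
    """
    dI = images - image_nominal
    z, *_ = np.linalg.lstsq(S, dI, rcond=None)
    return z                          # estimated Zernike coefficients
```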

  19. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... characteristics of that basic model, and (ii) Based on engineering or statistical analysis, computer simulation or... simulation or modeling, and other analytic evaluation of performance data on which the AEDM is based... applied. (iii) If requested by the Department, the manufacturer shall conduct simulations to predict the...

  20. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... characteristics of that basic model, and (ii) Based on engineering or statistical analysis, computer simulation or... simulation or modeling, and other analytic evaluation of performance data on which the AEDM is based... applied. (iii) If requested by the Department, the manufacturer shall conduct simulations to predict the...

  1. ESTIMATION OF GROUNDWATER POLLUTION POTENTIAL BY PESTICIDES IN MID-ATLANTIC COASTAL PLAIN WATERSHEDS

    EPA Science Inventory

    A simple GIS-based transport model to estimate the potential for groundwater pollution by pesticides has been developed within the ArcView GIS environment. The pesticide leaching analytical model, which is based on one-dimensional advective-dispersive-reactive (ADR) transport, ha...

  2. 3-D Inhomogeneous Radiative Transfer Model using a Planar-stratified Forward RT Model and Horizontal Perturbation Series

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Gasiewski, A. J.

    2017-12-01

    A horizontally inhomogeneous unified microwave radiative transfer (HI-UMRT) model based upon a nonspherical hydrometeor scattering model is being developed at the University of Colorado at Boulder to facilitate forward radiative simulations for 3-dimensionally inhomogeneous clouds in severe weather. The HI-UMRT 3-D analytical solution is based on incorporating a planar-stratified 1-D UMRT algorithm within a horizontally inhomogeneous iterative perturbation scheme. Single-scattering parameters are computed using the Discrete Dipole Scattering (DDSCAT v7.3) program for hundreds of carefully selected nonspherical complex frozen hydrometeors from the NASA/GSFC DDSCAT database. The required analytic factorization symmetry of the transition matrix in a normalized RT equation was proved analytically and validated numerically using the DDSCAT-based full Stokes matrix of randomly oriented hydrometeors. The HI-UMRT model thus inherits the properties of unconditional numerical stability, efficiency, and accuracy from the UMRT algorithm and provides a practical 3-D two-Stokes-parameter radiance solution with a Jacobian to be used within microwave retrievals and data assimilation schemes. In addition, a fast forward radar reflectivity operator with a Jacobian, based on DDSCAT backscatter efficiencies computed for large hydrometeors, is incorporated into the HI-UMRT model to provide applicability to active radar sensors. The HI-UMRT will be validated at two levels: 1) intercomparison of brightness temperature (Tb) results with those of several 1-D and 3-D RT models, including the UMRT, CRTM and Monte Carlo models, and 2) intercomparison of Tb with observed data from combined passive and active spaceborne sensors (e.g. GPM GMI and DPR). A precise expression for determining the required number of 3-D iterations to achieve an error bound on the perturbation solution will be developed to facilitate numerical verification of the HI-UMRT code complexity and computational performance.

  3. A Machine-Learning-Driven Sky Model.

    PubMed

    Satylmys, Pynar; Bashford-Rogers, Thomas; Chalmers, Alan; Debattista, Kurt

    2017-01-01

    Sky illumination is responsible for much of the lighting in a virtual environment. A machine-learning-based approach can compactly represent sky illumination from both existing analytic sky models and captured environment maps. The proposed approach can approximate the captured lighting at a significantly reduced memory cost and enables smooth transitions of sky lighting to be created from a small set of environment maps captured at discrete times of day. The authors' results demonstrate accuracy close to the ground truth for both analytical and capture-based methods. The approach has a low runtime overhead, so it can be used as a generic approach for both offline and real-time applications.

  4. A developed nearly analytic discrete method for forward modeling in the frequency domain

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai

    2018-02-01

    High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.

  5. Analytical thermal model for end-pumped solid-state lasers

    NASA Astrophysics Data System (ADS)

    Cini, L.; Mackenzie, J. I.

    2017-12-01

    Fundamentally power-limited by thermal effects, the design challenge for end-pumped "bulk" solid-state lasers depends upon knowledge of the temperature gradients within the gain medium. We have developed analytical expressions that can be used to model the temperature distribution and thermal-lens power in end-pumped solid-state lasers. Enabled by the inclusion of a temperature-dependent thermal conductivity, applicable from cryogenic to elevated temperatures, typical pumping distributions are explored and the results compared with accepted models. Key insights are gained from these analytical expressions, such as the dependence of the peak temperature rise on the boundary thermal conductance to the heat sink. Our generalized expressions provide simple and time-efficient tools for parametric optimization of the heat distribution in the gain medium based upon the material and pumping constraints.

  6. A cost-performance model for ground-based optical communications receiving telescopes

    NASA Technical Reports Server (NTRS)

    Lesh, J. R.; Robinson, D. L.

    1986-01-01

    An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.

  7. Comparison of NMR simulations of porous media derived from analytical and voxelized representations.

    PubMed

    Jin, Guodong; Torres-Verdín, Carlos; Toumelin, Emmanuel

    2009-10-01

    We develop and compare two formulations of the random-walk method, grain-based and voxel-based, to simulate the nuclear-magnetic-resonance (NMR) response of fluids contained in various models of porous media. The grain-based approach uses a spherical grain pack as input, where the solid surface is defined analytically without approximation. In the voxel-based approach, the input is a computed-tomography or computer-generated image of reconstructed porous media. Implementation of the two approaches is largely the same, except for the representation of the porous media. For comparison, both approaches are applied to various analytical and digitized models of porous media: an isolated spherical pore, simple cubic packing of spheres, and random packings of monodisperse and polydisperse spheres. We find that spin magnetization decays much faster in the digitized models than in their analytical counterparts. The difference in decay rate relates to the overestimation of surface area due to the discretization of the sample; it cannot be eliminated even if the voxel size decreases. However, once the effect of the surface-area increase is considered in the simulation of surface relaxation, good quantitative agreement is found between the two approaches. Different grain or pore shapes entail different rates of increase of surface area, whereupon we emphasize that the value of the "surface-area-corrected" coefficient may not be universal. Using the example of an X-ray-CT image of a Fontainebleau rock sample, we show that voxel size has a significant effect on the calculated surface area and, therefore, on the numerically simulated magnetization response.

  8. Thermal modeling for pulsed radiofrequency ablation: analytical study based on hyperbolic heat conduction.

    PubMed

    López Molina, Juan A; Rivera, María J; Trujillo, Macarena; Berjano, Enrique J

    2009-04-01

    The objectives of this study were to model the temperature evolution produced by pulsed radiofrequency (RF) power during RF heating of biological tissue, to employ the hyperbolic heat transfer equation (HHTE), which takes thermal wave behavior into account, and to compare the results with those obtained using the heat transfer equation based on Fourier theory (FHTE). A theoretical model was built based on an active spherical electrode completely embedded in the biological tissue, after which the HHTE and FHTE were solved analytically. We found three typical waveforms for the temperature evolution, depending on the relation between the dimensionless duration of the RF pulse δa and the expression √λ(ρ − 1), with λ the dimensionless thermal relaxation time of the tissue and ρ the dimensionless position. In the case of a single RF pulse, the temperature at any location was the result of the superposition of two different heat sources delayed by a duration δa (each heat source being produced by an RF pulse of unlimited duration). The most remarkable feature of the HHTE analytical solution was the presence of temperature peaks traveling through the medium at a finite speed. These peaks occurred not only during the RF power switch-on period but also after switch-off. Finally, a physical explanation for these temperature peaks is proposed based on the interaction of forward and reverse thermal waves. All-purpose analytical solutions for the FHTE and HHTE were obtained for pulsed RF heating of biological tissues, which could be used for any value of pulsing frequency and duty cycle.
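
    For orientation, the two governing equations compared above can be written in their standard textbook forms; this is generic background (the Cattaneo–Vernotte formulation with a heat source), not a reproduction of the paper's spherical-electrode model or its particular source terms.

    ```latex
    % Fourier heat transfer equation (FHTE): parabolic, infinite propagation speed
    \rho c \,\frac{\partial T}{\partial t} = k\,\nabla^{2} T + q
    % Hyperbolic heat transfer equation (HHTE, Cattaneo--Vernotte): finite thermal wave speed
    \tau \rho c\,\frac{\partial^{2} T}{\partial t^{2}} + \rho c\,\frac{\partial T}{\partial t}
      = k\,\nabla^{2} T + q + \tau\,\frac{\partial q}{\partial t}
    ```

    Here τ is the thermal relaxation time and q the volumetric RF heat source; the HHTE supports thermal waves travelling at the finite speed √(k/ρcτ), which is what produces the travelling temperature peaks described above.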

  9. Distribution-centric 3-parameter thermodynamic models of partition gas chromatography.

    PubMed

    Blumberg, Leonid M

    2017-03-31

    If both parameters (the entropy, ΔS, and the enthalpy, ΔH) of the classic van't Hoff model of the dependence of distribution coefficients (K) of analytes on temperature (T) are treated as temperature-independent constants, then the accuracy of the model is known to be insufficient for the needed accuracy of retention time prediction. A more accurate 3-parameter Clarke-Glew model offers a way to treat ΔS and ΔH as functions, ΔS(T) and ΔH(T), of T. A known T-centric construction of these functions is based on relating them to reference values (ΔS_ref and ΔH_ref) corresponding to a predetermined reference temperature (T_ref). Choosing a single T_ref for all analytes in a complex sample or in a large database might lead to practically irrelevant values of ΔS_ref and ΔH_ref for those analytes that have too small or too large retention factors at T_ref. Breaking all analytes into several subsets, each with its own T_ref, leads to discontinuities in the analyte parameters. These problems are avoided in the K-centric modeling, where ΔS(T), ΔH(T) and other analyte parameters are described in relation to their values corresponding to a predetermined reference distribution coefficient (K_Ref) - the same for all analytes. In this report, the mathematics of the K-centric modeling are described and the properties of several types of K-centric parameters are discussed. It has been shown that the earlier introduced characteristic parameters of the analyte-column interaction (the characteristic temperature, T_char, and the characteristic thermal constant, θ_char) are a special chromatographically convenient case of the K-centric parameters. Transformations of T-centric parameters into K-centric ones and vice versa, as well as transformations of one set of K-centric parameters into another set and vice versa, are described. Copyright © 2017 Elsevier B.V. All rights reserved.
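
    As background only (standard thermodynamic relations, not the paper's K-centric parameterization), the 2-parameter van't Hoff model and a Clarke–Glew-type 3-parameter extension with a constant heat-capacity term ΔC_p can be written as:

    ```latex
    % Classic van't Hoff model: \Delta H and \Delta S treated as constants
    \ln K = -\frac{\Delta H}{RT} + \frac{\Delta S}{R}
    % 3-parameter extension: temperature-dependent enthalpy and entropy via a constant \Delta C_p
    \Delta H(T) = \Delta H_{\mathrm{ref}} + \Delta C_p\,(T - T_{\mathrm{ref}}), \qquad
    \Delta S(T) = \Delta S_{\mathrm{ref}} + \Delta C_p \ln\frac{T}{T_{\mathrm{ref}}}
    ```

    The K-centric construction discussed above replaces the fixed reference temperature T_ref with the analyte-specific temperature at which K equals a common reference value K_Ref.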

  10. Acid-Base Chemistry of White Wine: Analytical Characterisation and Chemical Modelling

    PubMed Central

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, together with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the optimized chemical model is achieved by way of conductometric measurements and using a synthetic “wine” especially adapted for testing. PMID:22566762

  11. Acid-base chemistry of white wine: analytical characterisation and chemical modelling.

    PubMed

    Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe

    2012-01-01

    A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, together with the total acidity, for the chemical modelling step of the study, based on the simultaneous treatment of overlapping protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the optimized chemical model is achieved by way of conductometric measurements and using a synthetic "wine" especially adapted for testing.

  12. Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

    PubMed Central

    Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.

    2016-01-01

    Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson’s disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson’s disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614
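
    As an illustrative sketch only, the following Python fragment mirrors two of the steps described above, cohort rebalancing and model-free boosted classification with n-fold cross-validation, on synthetic data; it is not the authors' PPMI pipeline, and every dataset detail below is a stand-in.

    ```python
    # Illustrative sketch (not the PPMI pipeline): oversample the minority class,
    # then cross-validate a boosted (model-free) classifier.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.utils import resample

    # Synthetic stand-in for heterogeneous clinical/imaging/genetic features.
    X, y = make_classification(n_samples=600, n_features=20, weights=[0.85, 0.15],
                               random_state=0)

    # Simple rebalancing: oversample the minority class to the majority size.
    # (In a rigorous study, rebalancing should be done inside each training fold
    # to avoid information leakage across folds.)
    X_min, X_maj = X[y == 1], X[y == 0]
    X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)
    X_bal = np.vstack([X_maj, X_min_up])
    y_bal = np.hstack([np.zeros(len(X_maj)), np.ones(len(X_min_up))])

    clf = AdaBoostClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X_bal, y_bal,
                             cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
                             scoring="accuracy")
    print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
    ```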

  13. A simple analytical model of coupled single flow channel over porous electrode in vanadium redox flow battery with serpentine flow channel

    NASA Astrophysics Data System (ADS)

    Ke, Xinyou; Alexander, J. Iwan D.; Prahl, Joseph M.; Savinell, Robert F.

    2015-08-01

    A simple analytical model of a layered system comprised of a single passage of a serpentine flow channel and a parallel underlying porous electrode (or porous layer) is proposed. This analytical model is derived from Navier-Stokes motion in the flow channel and the Darcy-Brinkman model in the porous layer. Continuity of flow velocity and of normal stress is applied at the interface between the flow channel and the porous layer. The effects of the inlet volumetric flow rate, the thickness of the flow channel and the thickness of a typical carbon fiber paper porous layer on the volumetric flow rate within this porous layer are studied. The maximum current density based on the electrolyte volumetric flow rate is predicted and found to be consistent with reported numerical simulations. It is found that, for a mean inlet flow velocity of 33.3 cm s⁻¹, the analytical maximum current density is estimated to be 377 mA cm⁻², which compares favorably with the experimental result of ∼400 mA cm⁻² reported by others.
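
    A back-of-the-envelope check of the flow-limited maximum current density can be made with Faraday's law, assuming complete conversion of the active species carried through the porous layer; the sketch below uses placeholder values for concentration, porous-layer flow rate and electrode area, which are assumptions and not the paper's parameters.

    ```python
    # Hedged estimate: stoichiometric limiting current density from the electrolyte
    # flow passing through the porous electrode, via Faraday's law.
    F = 96485.0     # C mol^-1, Faraday constant
    n = 1           # electrons transferred per vanadium ion
    c = 1.5e3       # mol m^-3, active-species concentration (1.5 M, assumed)
    Q_p = 2.0e-8    # m^3 s^-1, volumetric flow rate through the porous layer (assumed)
    A = 1.0e-3      # m^2, projected electrode area (assumed)

    i_lim = n * F * c * Q_p / A          # A m^-2, assumes complete reactant conversion
    print("limiting current density: %.0f mA cm^-2" % (i_lim / 10.0))
    ```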

  14. Molecular modeling of polymer composite interactions with analytes in electronic nose sensors for environmental monitoring in International Space Station

    NASA Technical Reports Server (NTRS)

    Shevade, A. V.; Ryan, M. A.; Homer, M. L.; Manfreda, A. M.; Zhou, H.; Manatt, K.

    2002-01-01

    We report a molecular modeling study to investigate the polymer-carbon black (CB) composite-analyte interactions in resistive sensors. These sensors comprise the JPL Electronic Nose (ENose) sensing array developed for monitoring breathing air in human habitats. The polymer in the composite is modeled based on its stereoisomerism and sequence isomerism, while the CB is modeled as uncharged naphthalene rings (with no hydrogens). The Dreiding 2.21 force field is used for the polymer and solvent molecules, and graphite parameters are assigned to the carbon black atoms. A combination of molecular mechanics (MM) and molecular dynamics (NPT-MD and NVT-MD) techniques is used to obtain the equilibrium composite structure by inserting naphthalene rings in the polymer matrix. Polymers considered for this work include poly(4-vinylphenol), polyethylene oxide, and ethyl cellulose. Analytes studied are representative of both inorganic (ammonia) and organic (methanol, toluene, hydrazine) compounds. The results are analyzed for the composite microstructure by calculating the radial distribution profiles, as well as for the sensor response by predicting the interaction energies of the analytes with the composites.

  15. Molecular modeling of polymer composite-analyte interactions in electronic nose sensors

    NASA Technical Reports Server (NTRS)

    Shevade, A. V.; Ryan, M. A.; Homer, M. L.; Manfreda, A. M.; Zhou, H.; Manatt, K. S.

    2003-01-01

    We report a molecular modeling study to investigate the polymer-carbon black (CB) composite-analyte interactions in resistive sensors. These sensors comprise the JPL electronic nose (ENose) sensing array developed for monitoring breathing air in human habitats. The polymer in the composite is modeled based on its stereoisomerism and sequence isomerism, while the CB is modeled as uncharged naphthalene rings with no hydrogens. The Dreiding 2.21 force field is used for the polymer and solvent molecules, and graphite parameters are assigned to the carbon black atoms. A combination of molecular mechanics (MM) and molecular dynamics (NPT-MD and NVT-MD) techniques is used to obtain the equilibrium composite structure by inserting naphthalene rings in the polymer matrix. Polymers considered for this work include poly(4-vinylphenol), polyethylene oxide, and ethyl cellulose. Analytes studied are representative of both inorganic and organic compounds. The results are analyzed for the composite microstructure by calculating the radial distribution profiles, as well as for the sensor response by predicting the interaction energies of the analytes with the composites. © 2003 Elsevier Science B.V. All rights reserved.

  16. Modeling of phonon scattering in n-type nanowire transistors using one-shot analytic continuation technique

    NASA Astrophysics Data System (ADS)

    Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel

    2013-10-01

    We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still approximates the SCBA current characteristics well, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.

  17. Analytical methodology for determination of helicopter IFR precision approach requirements. [pilot workload and acceptance level

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.

    1980-01-01

    A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach-to-landing task, incorporating appropriate models for the UH-1H aircraft, the environmental disturbances, and the human pilot, was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.

  18. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
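
    A small sketch contrasting the two discretization routes named above for a 1-D Gaussian kernel (a 2-D circular kernel is the outer product of 1-D kernels); the grid spacing and σ are illustrative assumptions, not values from the study.

    ```python
    # Build a 1-D Gaussian dispersal kernel on unit cells by (a) sampling the pdf
    # at cell centers and (b) integrating the pdf over each cell (cell-integration).
    import numpy as np
    from scipy.stats import norm

    sigma = 0.5                    # kernel width in units of the cell size (assumed)
    half_width = 5                 # kernel radius in cells
    edges = np.arange(-half_width - 0.5, half_width + 1.5)    # cell boundaries
    centers = 0.5 * (edges[:-1] + edges[1:])

    # (a) cell-center method: evaluate the pdf at each cell center
    k_center = norm.pdf(centers, scale=sigma)
    k_center /= k_center.sum()

    # (b) cell-integration method: integrate the pdf across each cell
    k_integrated = norm.cdf(edges[1:], scale=sigma) - norm.cdf(edges[:-1], scale=sigma)
    k_integrated /= k_integrated.sum()

    # For small sigma the two discretizations diverge noticeably, which is the
    # regime where the abstract reports large kernel errors.
    print("max |difference| between methods:", np.abs(k_center - k_integrated).max())
    ```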

  19. Analytical modeling of electron energy loss spectroscopy of graphene: Ab initio study versus extended hydrodynamic model.

    PubMed

    Djordjević, Tijana; Radović, Ivan; Despoja, Vito; Lyon, Keenan; Borka, Duško; Mišković, Zoran L

    2018-01-01

    We present an analytical modeling of the electron energy loss (EEL) spectroscopy data for free-standing graphene obtained by scanning transmission electron microscope. The probability density for energy loss of fast electrons traversing graphene under normal incidence is evaluated using an optical approximation based on the conductivity of graphene given in the local, i.e., frequency-dependent form derived by both a two-dimensional, two-fluid extended hydrodynamic (eHD) model and an ab initio method. We compare the results for the real and imaginary parts of the optical conductivity in graphene obtained by these two methods. The calculated probability density is directly compared with the EEL spectra from three independent experiments and we find very good agreement, especially in the case of the eHD model. Furthermore, we point out that the subtraction of the zero-loss peak from the experimental EEL spectra has a strong influence on the analytical model for the EEL spectroscopy data. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Analytical modeling and analysis of magnetic field and torque for novel axial flux eddy current couplers with PM excitation

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin

    2017-10-01

    Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with a double conductor rotor are investigated. Given the drawbacks of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, closed-form expressions of the magnetic field, eddy current, electromagnetic force and torque for such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results. In addition, a prototype is manufactured and tested for its torque-speed characteristic.

  1. A review of selected inorganic surface water quality-monitoring practices: are we really measuring what we think, and if so, are we doing it right?

    USGS Publications Warehouse

    Horowitz, Arthur J.

    2013-01-01

    Successful environmental/water quality-monitoring programs usually require a balance between analytical capabilities, the collection and preservation of representative samples, and available financial/personnel resources. Due to current economic conditions, monitoring programs are under increasing pressure to do more with less. Hence, a review of current sampling and analytical methodologies, and some of the underlying assumptions that form the bases for these programs seems appropriate, to see if they are achieving their intended objectives within acceptable error limits and/or measurement uncertainty, in a cost-effective manner. That evaluation appears to indicate that several common sampling/processing/analytical procedures (e.g., dip (point) samples/measurements, nitrogen determinations, total recoverable analytical procedures) are generating biased or nonrepresentative data, and that some of the underlying assumptions relative to current programs, such as calendar-based sampling and stationarity are no longer defensible. The extensive use of statistical models as well as surrogates (e.g., turbidity) also needs to be re-examined because the hydrologic interrelationships that support their use tend to be dynamic rather than static. As a result, a number of monitoring programs may need redesigning, some sampling and analytical procedures may need to be updated, and model/surrogate interrelationships may require recalibration.

  2. The vibration characteristics of a coupled helicopter rotor-fuselage by a finite element analysis

    NASA Technical Reports Server (NTRS)

    Rutkowski, M. J.

    1983-01-01

    The dynamic coupling between the rotor system and the fuselage of a simplified helicopter model in hover was analytically investigated. Mass, aerodynamic damping, and elastic and centrifugal stiffness matrices are presented for the analytical model; the model is based on a beam finite element, with polynomial mass and stiffness distributions for both the rotor and fuselage representations. For this analytical model, only symmetric fuselage and collective blade degrees of freedom are treated. Real and complex eigen-analyses are carried out to obtain coupled rotor-fuselage natural modes and frequencies as a function of rotor speed. Vibration response results are obtained for the coupled system subjected to a radially uniform, harmonic blade loading. The coupled response results are compared with response results from an uncoupled analysis in which hub loads for an isolated rotor system subjected to the same sinusoidal blade loading as the coupled system are applied to a free-free fuselage.

  3. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
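
    A minimal sketch of the bootstrap-based probabilistic sensitivity analysis idea, using synthetic patient-level cost and effect data; all numbers below are placeholders and this is not the H. pylori model from the paper.

    ```python
    # Monte Carlo / bootstrap sketch: resample patient-level data and build a
    # distribution of the incremental cost-effectiveness ratio (ICER).
    import numpy as np

    rng = np.random.default_rng(0)
    n_pat, n_boot = 200, 5000

    # Hypothetical patient-level data for two strategies (costs in $, effects in QALYs).
    cost_a = rng.gamma(shape=2.0, scale=400.0, size=n_pat)
    cost_b = rng.gamma(shape=2.0, scale=500.0, size=n_pat)
    eff_a  = rng.normal(0.70, 0.10, size=n_pat)
    eff_b  = rng.normal(0.75, 0.10, size=n_pat)

    icers = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_pat, size=n_pat)        # bootstrap resample
        d_cost = cost_b[idx].mean() - cost_a[idx].mean()
        d_eff  = eff_b[idx].mean()  - eff_a[idx].mean()
        icers.append(d_cost / d_eff)

    icers = np.array(icers)
    print("median ICER: %.0f, 95%% interval: %.0f to %.0f"
          % (np.median(icers), *np.percentile(icers, [2.5, 97.5])))
    ```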

  4. Generalized constitutive equations for piezo-actuated compliant mechanism

    NASA Astrophysics Data System (ADS)

    Cao, Junyi; Ling, Mingxiang; Inman, Daniel J.; Lin, Jin

    2016-09-01

    This paper formulates analytical models to describe the static displacement and force interactions between generic serial-parallel compliant mechanisms and their loads by employing the matrix method. In keeping with the familiar piezoelectric constitutive equations, the generalized constitutive equations of compliant mechanism represent the input-output displacement and force relations in the form of a generalized Hooke’s law and as analytical functions of physical parameters. Also significantly, a new model of output displacement for compliant mechanism interacting with piezo-stacks and elastic loads is deduced based on the generalized constitutive equations. Some original findings differing from the well-known constitutive performance of piezo-stacks are also given. The feasibility of the proposed models is confirmed by finite element analysis and by experiments under various elastic loads. The analytical models can be an insightful tool for predicting and optimizing the performance of a wide class of compliant mechanisms that simultaneously consider the influence of loads and piezo-stacks.

  5. The effect of air entrapment on the performance of squeeze film dampers: Experiments and analysis

    NASA Astrophysics Data System (ADS)

    Diaz Briceno, Sergio Enrique

    Squeeze film dampers (SFDs) are an effective means to introduce the required damping in rotor-bearing systems. They are a standard application in jet engines and are commonly used in industrial compressors. Yet, lack of understanding of their operation has confined the design of SFDs to a costly trial-and-error process based on prior experience. The main factor limiting the success of analytical models for the prediction of SFD performance lies in the modeling of the dynamic film rupture. Usually, the cavitation models developed for journal bearings are applied to SFDs. Yet, the characteristic motion of the SFD results in the entrapment of air into the oil film, producing a bubbly mixture that cannot be represented by these models. In this work, an extensive experimental study establishes qualitatively and, for the first time, quantitatively the differences between operation with vapor cavitation and with air entrainment. The experiments show that most operating conditions lead to air entrainment and demonstrate the paramount effect it has on the performance of SFDs, evidencing the limitation of currently available models. Further experiments address the operation of SFDs with controlled bubbly mixtures. These experiments bolster the possibility of modeling air entrapment by representing the lubricant as a homogeneous mixture of air and oil, and provide a reliable database for benchmarking such a model. An analytical model is developed based on a homogeneous mixture assumption in which the bubbles are described by the Rayleigh-Plesset equation. Good agreement is obtained between this model and the measurements performed in the SFD operating with controlled mixtures. A complementary analytical model is devised to estimate the amount of air entrained from the balance of axial flows in the film. A combination of the analytical models for prediction of the air volume fraction and of the hydrodynamic pressures renders promising results for prediction of the performance of SFDs with freely entrained air. The results of this work are of immediate engineering applicability. Furthermore, they represent a firm step toward advancing the understanding of the effects of air entrapment on the performance of SFDs.

  6. Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; North, Chris

    2012-10-14

    With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact with and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information are limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration, where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making, for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the "interaction junk"). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.

  7. Importance of aggregation and small ice crystals in cirrus clouds, based on observations and an ice particle growth model

    NASA Technical Reports Server (NTRS)

    Mitchell, David L.; Chai, Steven K.; Dong, Yayi; Arnott, W. Patrick; Hallett, John

    1993-01-01

    The 1 November 1986 FIRE I case study was used to test an ice particle growth model which predicts bimodal size spectra in cirrus clouds. The model was developed from an analytically based model which predicts the height evolution of monomodal ice particle size spectra from the measured ice water content (IWC). Size spectra from the monomodal model are represented by a gamma distribution, N(D) = N₀ D^ν exp(−λD), where D is the ice particle maximum dimension. The slope parameter, λ, and the parameter N₀ are predicted from the IWC through the growth processes of vapor diffusion and aggregation. The model formulation is analytical, computationally efficient, and well suited for incorporation into larger models. The monomodal model has been validated against two other cirrus cloud case studies. From the monomodal size spectra, the size distributions which determine concentrations of ice particles smaller than about 150 μm are predicted.
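
    As a quick numerical illustration (with hypothetical parameter values, not those of the FIRE I case), the gamma-distribution spectrum and its analytic zeroth moment can be checked as follows:

    ```python
    # Check that the analytic total number concentration of the gamma spectrum,
    # N_t = N0 * Gamma(nu + 1) / lam**(nu + 1), matches direct numerical integration.
    import numpy as np
    from scipy.special import gamma as Gamma
    from scipy.integrate import quad

    N0, nu, lam = 1.0e7, 1.5, 3.0e4        # hypothetical spectrum parameters

    spectrum = lambda D: N0 * D**nu * np.exp(-lam * D)

    N_t_analytic = N0 * Gamma(nu + 1.0) / lam**(nu + 1.0)
    N_t_numeric, _ = quad(spectrum, 0.0, np.inf)

    print(N_t_analytic, N_t_numeric)       # the two values should agree closely
    ```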

  8. GIS-Based Suitability Model for Assessment of Forest Biomass Energy Potential in a Region of Portugal

    NASA Astrophysics Data System (ADS)

    Quinta-Nova, Luis; Fernandez, Paulo; Pedro, Nuno

    2017-12-01

    This work focuses on the development of a decision support system based on multicriteria spatial analysis to assess the potential for generation of biomass residues from forestry sources in a region of Portugal (Beira Baixa). A set of environmental, economic and social criteria was defined, evaluated and weighted in the context of Saaty's analytic hierarchies. The best alternatives were obtained after applying the Analytic Hierarchy Process (AHP). The model was applied to this central region of Portugal, where forest and agriculture are the most representative land uses. Finally, a sensitivity analysis of the set of factors and their associated weights was performed to test the robustness of the model. The proposed evaluation model provides a valuable reference for decision makers in establishing a standardized means of selecting the optimal location for new biomass plants.
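
    A minimal sketch of the AHP weighting step, assuming a hypothetical 3×3 pairwise comparison matrix for environmental, economic and social criteria; the actual criteria, judgments and weights used in the paper are not reproduced here.

    ```python
    # Derive criterion weights from a pairwise comparison matrix via the principal
    # eigenvector and check Saaty's consistency ratio.
    import numpy as np

    # Hypothetical 1-9 scale comparisons (environmental vs economic vs social).
    A = np.array([[1.0,  3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                   # criterion weights

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
    print("weights:", w, "consistency ratio:", ci / ri)
    ```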

  9. An Update of the Analytical Groundwater Modeling to Assess Water Resource Impacts at the Afton Solar Energy Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, John J.; Greer, Christopher B.; Carr, Adrianne E.

    2014-10-01

    The purpose of this study is to update a one-dimensional analytical groundwater flow model to examine the influence of potential groundwater withdrawal in support of utility-scale solar energy development at the Afton Solar Energy Zone (SEZ) as a part of the Bureau of Land Management’s (BLM’s) Solar Energy Program. This report describes the modeling for assessing the drawdown associated with SEZ groundwater pumping rates for a 20-year duration, considering three categories of water demand (high, medium, and low) based on technology-specific considerations. The 2012 modeling effort published in the Final Programmatic Environmental Impact Statement for Solar Energy Development in Six Southwestern States (Solar PEIS; BLM and DOE 2012) has been refined based on additional information described below in an expanded hydrogeologic discussion.

  10. A Grammar-based Approach for Modeling User Interactions and Generating Suggestions During the Data Exploration Process.

    PubMed

    Dabek, Filip; Caban, Jesus J

    2017-01-01

    Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users that apply visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face is paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems that understand and model user interactions, with the intent of guiding a user through a task and thereby enhancing visual data exploration. The challenges faced and the necessary future steps are discussed, and, to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed, showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.

  11. Analytical determination of thermal conductivity of W-UO2 and W-UN CERMET nuclear fuels

    NASA Astrophysics Data System (ADS)

    Webb, Jonathan A.; Charit, Indrajit

    2012-08-01

    The thermal conductivity of tungsten-based CERMET fuels containing UO2 and UN fuel particles is determined as a function of particle geometry, stabilizer fraction and fuel volume fraction, using a combination of an analytical approach and experimental data collected from the literature. Thermal conductivity is estimated using the Bruggeman-Fricke model. This study demonstrates that the thermal conductivities of various CERMET fuels can be analytically predicted to values that are very close to the experimentally determined ones.
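
    For illustration only, the standard symmetric two-phase Bruggeman effective-medium relation (spherical inclusions) can be solved numerically for the effective conductivity; the Fricke shape correction used for non-spherical particles is omitted, the authors' Bruggeman-Fricke form may differ, and the property values below are placeholder assumptions rather than measured CERMET data.

    ```python
    # Solve the symmetric two-phase Bruggeman relation
    #   phi_f*(k_f - k_eff)/(k_f + 2*k_eff) + (1 - phi_f)*(k_w - k_eff)/(k_w + 2*k_eff) = 0
    # for k_eff with a bracketing root finder.
    from scipy.optimize import brentq

    k_w, k_f = 100.0, 8.0      # W m^-1 K^-1, tungsten matrix and fuel particle (assumed)
    phi_f = 0.6                # fuel volume fraction (assumed)

    def bruggeman(k_eff):
        return (phi_f * (k_f - k_eff) / (k_f + 2.0 * k_eff)
                + (1.0 - phi_f) * (k_w - k_eff) / (k_w + 2.0 * k_eff))

    k_eff = brentq(bruggeman, min(k_w, k_f), max(k_w, k_f))
    print("effective thermal conductivity: %.1f W m^-1 K^-1" % k_eff)
    ```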

  12. An analytic current-voltage model for quasi-ballistic III-nitride high electron mobility transistors

    NASA Astrophysics Data System (ADS)

    Li, Kexin; Rakheja, Shaloo

    2018-05-01

    We present an analytic model to describe the DC current-voltage (I-V) relationship in scaled III-nitride high electron mobility transistors (HEMTs) in which transport within the channel is quasi-ballistic in nature. Following Landauer's transport theory and charge calculation based on two-dimensional electrostatics that incorporates negative momenta states from the drain terminal, an analytic expression for current as a function of terminal voltages is developed. The model interprets the non-linearity of access regions in non-self-aligned HEMTs. Effects of Joule heating with temperature-dependent thermal conductivity are incorporated in the model in a self-consistent manner. With a total of 26 input parameters, the analytic model offers reduced empiricism compared to existing GaN HEMT models. To verify the model, experimental I-V data of InAlN/GaN with InGaN back-barrier HEMTs with channel lengths of 42 and 105 nm are considered. Additionally, the model is validated against numerical I-V data obtained from DC hydrodynamic simulations of an unintentionally doped AlGaN-on-GaN HEMT with 50-nm gate length. The model is also verified against pulsed I-V measurements of a 150-nm T-gate GaN HEMT. Excellent agreement between the model and experimental and numerical results for output current, transconductance, and output conductance is demonstrated over a broad range of bias and temperature conditions.

  13. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  14. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  15. Limitless Analytic Elements

    NASA Astrophysics Data System (ADS)

    Strack, O. D. L.

    2018-02-01

    We present equations for new limitless analytic line elements. These elements possess a virtually unlimited number of degrees of freedom. We apply these new limitless analytic elements to head-specified boundaries and to problems with inhomogeneities in hydraulic conductivity. Applications of these new analytic elements to practical problems involving head-specified boundaries require the solution of a very large number of equations. To make the new elements useful in practice, an efficient iterative scheme is required. We present an improved version of the scheme presented by Bandilla et al. (2007), based on the application of Cauchy integrals. The limitless analytic elements are useful when modeling strings of elements, rivers for example, where local conditions are difficult to model, e.g., when a well is close to a river. The solution of such problems is facilitated by increasing the order of the elements to obtain a good solution. This makes it unnecessary to resort to dividing the element in question into many smaller elements to obtain a satisfactory solution.

  16. Boron nitride nanotube-based biosensing of various bacterium/viruses: continuum modelling-based simulation approach.

    PubMed

    Panchal, Mitesh B; Upadhyay, Sanjay H

    2014-09-01

    In this study, the feasibility of single-walled boron nitride nanotube (SWBNNT)-based biosensors for the mass-based detection of various bacteria and viruses is assessed using a continuum modelling-based simulation approach. Various types of bacteria or viruses are considered as an added mass at the free end of the cantilevered SWBNNT acting as a biosensor. A resonant frequency shift-based analysis is performed, with the adsorbed bacterium/virus treated as an additional mass attached to the SWBNNT-based sensor system. A continuum mechanics-based analytical approach, considering an effective wall thickness, is used to validate the finite element method (FEM)-based simulation results, which rest on a continuum volume-based model of the SWBNNT. The FEM-based simulation results are found to be in excellent agreement with the analytical results, supporting the analysis of SWBNNTs for a wide range of applications such as nanoresonators, biosensors, gas sensors and transducers. The obtained results suggest that using a SWBNNT of smaller size enhances the sensitivity of the sensor system, and that a bacterium/virus with a mass of 4.28 × 10⁻²⁴ kg can be detected effectively.
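
    The sensing principle can be illustrated with a single-degree-of-freedom sketch of a cantilever with an added tip mass; the stiffness and effective mass below are placeholder assumptions rather than BNNT properties, and only the attached mass quoted in the abstract is taken from the source.

    ```python
    # Resonant-frequency-shift principle: f = sqrt(k_eff/(m_eff + dm))/(2*pi).
    import numpy as np

    k_eff = 0.5          # N m^-1, effective stiffness of the cantilevered tube (assumed)
    m_eff = 1.0e-21      # kg, effective vibrating mass of the tube (assumed)
    dm    = 4.28e-24     # kg, attached bacterium/virus mass from the abstract

    f0 = np.sqrt(k_eff / m_eff) / (2.0 * np.pi)
    f1 = np.sqrt(k_eff / (m_eff + dm)) / (2.0 * np.pi)
    print("relative frequency shift: %.2e" % ((f0 - f1) / f0))
    # For dm << m_eff the shift is ~ dm/(2*m_eff), so smaller tubes (smaller m_eff)
    # give larger shifts, consistent with the sensitivity trend in the abstract.
    ```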

  17. Numerical and analytic models of spontaneous frequency sweeping for energetic particle-driven Alfven eigenmodes

    NASA Astrophysics Data System (ADS)

    Wang, Ge; Berk, H. L.

    2011-10-01

    The frequency chirping signal arising from a spontaneously excited toroidal Alfvén eigenmode (TAE) driven by energetic particles is studied with both numerical and analytic models. The time-dependent numerical model is based on the 1D Vlasov equation. We use a sophisticated tracking method to lock onto the resonant structure so that the chirping frequency remains nearly constant in the calculation frame. The accuracy of the adiabatic approximation is tested during the simulation, which justifies the appropriateness of our analytic model. The analytic model uses the adiabatic approximation, which allows us to solve the wave evolution equation in frequency space. The resonant interactions between energetic particles and the TAE then yield predictions for the chirping rate, wave frequency and amplitude versus time. Here, an adiabatic invariant J is defined on the separatrix of a chirping mode to determine the region of confinement of the wave-trapped distribution function. We examine the asymptotic behavior of the chirping signal over its long-time evolution and find agreement in essential features with the results of the simulation. Work supported by Department of Energy contract DE-FC02-08ER54988.

  18. Particle contamination effects in EUVL: enhanced theory for the analytical determination of critical particle sizes

    NASA Astrophysics Data System (ADS)

    Brandstetter, Gerd; Govindjee, Sanjay

    2012-03-01

    Existing analytical and numerical methodologies are discussed and then extended in order to calculate critical contamination-particle sizes that will result in deleterious effects during EUVL E-chucking, given an error budget on the image-placement error (IPE). The enhanced analytical models include a gap-dependent clamping pressure formulation, the consideration of a general material law for realistic particle crushing, and the influence of frictional contact. We present a discussion of the defects of the classical de-coupled modeling approach, in which particle crushing and mask/chuck indentation are separated from the global computation of mask bending. To repair this defect, we present a new analytic approach based on an exact Hankel transform method which allows a fully coupled solution. This captures the contribution of the mask indentation to the image-placement error (estimated IPE increase of 20%). A fully coupled finite element model is used to validate the analytical models and to further investigate the impact of a mask back-side CrN layer. The models are applied to existing experimental data with good agreement. For a standard material combination, a given IPE tolerance of 1 nm and a 15 kPa closing pressure, we derive bounds for single particles of cylindrical shape (radius × height < 44 μm) and spherical shape (diameter < 12 μm).

  19. Introduction to "The Behavior-Analytic Origins of Constraint-Induced Movement Therapy: An Example of Behavioral Neurorehabilitation"

    ERIC Educational Resources Information Center

    Schaal, David W.

    2012-01-01

    This article presents an introduction to "The Behavior-Analytic Origins of Constraint-Induced Movement Therapy: An Example of Behavioral Neurorehabilitation," by Edward Taub and his colleagues (Taub, 2012). Based on extensive experimentation with animal models of peripheral nerve injury, Taub and colleagues have created an approach to overcoming…

  20. Monitoring by forward scatter radar techniques: an improved second-order analytical model

    NASA Astrophysics Data System (ADS)

    Falconi, Marta Tecla; Comite, Davide; Galli, Alessandro; Marzano, Frank S.; Pastina, Debora; Lombardo, Pierfrancesco

    2017-10-01

    In this work, a second-order phase approximation is introduced to provide an improved analytical model of the signal received in forward scatter radar systems. A typical configuration with a rectangular metallic object illuminated while crossing the baseline, in far- or near-field conditions, is considered. An improved second-order model is compared with a simplified one already proposed by the authors and based on a paraxial approximation. A phase error analysis is carried out to investigate benefits and limitations of the second-order modeling. The results are validated by developing full-wave numerical simulations implementing the relevant scattering problem on a commercial tool.

  1. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    PubMed

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
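
    A minimal sketch of the two off-the-shelf building blocks named above, NMF-based topic modelling and a 2-D embedding, using scikit-learn on a toy corpus; the TopicLens lens interaction and its semi-supervised embedding are not reproduced, and the documents below are placeholders.

    ```python
    # Toy pipeline: tf-idf features -> NMF topic weights -> 2-D layout with t-SNE.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF
    from sklearn.manifold import TSNE

    docs = ["solar energy storage grid", "battery storage electric grid",
            "protein folding structure", "protein binding structure model",
            "stock market price model", "market trading price forecast"]

    X = TfidfVectorizer().fit_transform(docs)
    W = NMF(n_components=3, init="nndsvda", random_state=0).fit_transform(X)  # doc-topic weights
    xy = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(W)  # 2-D layout
    print(xy.shape)        # (n_docs, 2) coordinates for plotting the document map
    ```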

  2. S-curve networks and an approximate method for estimating degree distributions of complex networks

    NASA Astrophysics Data System (ADS)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model that uses an S-curve (logistic curve). The growing trend of IPv4 addresses in China is forecast, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, a finite network model with bulk growth is proposed; the model is called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. We develop an approximate method to predict the growth dynamics of the individual nodes and use it to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
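
    The forecasting step can be sketched as fitting a logistic (S-curve) growth model and extrapolating; the time series below is synthetic and the fitted parameters are placeholders, not the statistics used in the paper.

    ```python
    # Fit x(t) = K / (1 + exp(-r*(t - t0))) to a growth series and extrapolate.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        return K / (1.0 + np.exp(-r * (t - t0)))

    t_obs = np.arange(2000, 2011, dtype=float)
    x_obs = logistic(t_obs, 3.3e8, 0.6, 2008.0) * (
        1 + 0.03 * np.random.default_rng(1).standard_normal(len(t_obs)))   # synthetic data

    popt, _ = curve_fit(logistic, t_obs, x_obs, p0=[4e8, 0.5, 2007.0], maxfev=10000)
    print("estimated saturation level (growth limit): %.2e addresses" % popt[0])
    print("forecast for 2015: %.2e addresses" % logistic(2015.0, *popt))
    ```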

  3. Bending of an Infinite beam on a base with two parameters in the absence of a part of the base

    NASA Astrophysics Data System (ADS)

    Aleksandrovskiy, Maxim; Zaharova, Lidiya

    2018-03-01

    Currently, in connection with the rapid development of high-rise construction and the refinement of models of the joint operation of high-rise structures and their bases, questions connected with the choice of calculation method have become topical. The rigor of analytical methods allows a more detailed and accurate characterization of the behavior of structures, which affects the reliability of the designed objects and can lead to a reduction in their cost. In this article, a two-parameter model is used as the computational model of the base; it can effectively account for the distributive properties of the base by varying the coefficient reflecting the shear parameter. The paper constructs an effective analytical solution to the problem of a beam of infinite length interacting with a two-parameter base from which a section of support is absent. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all integrals are evaluated analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The solution is presented for a beam loaded by a concentrated force applied at the origin, with a fixed length of the unsupported section. The results obtained are analysed for various values of the coefficient that accounts for cohesion of the ground.

  4. A strategy to determine operating parameters in tissue engineering hollow fiber bioreactors

    PubMed Central

    Shipley, RJ; Davidson, AJ; Chan, K; Chaudhuri, JB; Waters, SL; Ellis, MJ

    2011-01-01

    The development of tissue engineering hollow fiber bioreactors (HFB) requires the optimal design of the geometry and operating parameters of the system. This article provides a strategy for specifying operating conditions for the system based on mathematical models of oxygen delivery to the cell population. Analytical and numerical solutions of these models are developed based on Michaelis–Menten kinetics. Depending on the minimum oxygen concentration required to culture a functional cell population, together with the oxygen uptake kinetics, the strategy dictates the model needed to describe mass transport so that the operating conditions can be defined. If cmin ≫ Km, we capture oxygen uptake using zero-order kinetics and proceed analytically. This enables operating equations to be developed that allow the user to choose the medium flow rate, lumen length, and ECS depth to provide a prescribed value of cmin. When cmin is comparable to Km, we use numerical techniques to solve the full Michaelis–Menten kinetics and present operating data for the bioreactor. The strategy presented utilizes both analytical and numerical approaches and can be applied to any cell type with known oxygen transport properties and uptake kinetics. PMID:21370228
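
    In the zero-order regime (cmin ≫ Km), a simple slab estimate shows how the ECS depth can be bounded; this is a generic back-of-envelope calculation under stated assumptions, not the authors' full model, and all parameter values are placeholders.

    ```python
    # Zero-order uptake in a cell layer of depth L fed by diffusion from the fibre
    # wall: c(x) = c0 - (R0/D)*(L*x - x**2/2), so the minimum sits at x = L with
    # c(L) = c0 - R0*L**2/(2*D).  Inverting gives the largest depth keeping c >= c_min.
    import numpy as np

    D     = 2.0e-9   # m^2 s^-1, oxygen diffusivity in the cell/scaffold layer (assumed)
    R0    = 1.0e-2   # mol m^-3 s^-1, volumetric oxygen uptake rate (assumed)
    c0    = 0.20     # mol m^-3, oxygen concentration at the fibre wall (assumed)
    c_min = 0.05     # mol m^-3, minimum concentration for a functional population

    L_max = np.sqrt(2.0 * D * (c0 - c_min) / R0)
    print("maximum ECS depth: %.0f micrometres" % (L_max * 1e6))
    ```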

  5. Physics of thermo-acoustic sound generation

    NASA Astrophysics Data System (ADS)

    Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.

    2013-09-01

    We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.

  6. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE PAGES

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    2017-01-30

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  7. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  8. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  9. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  10. Use of computer programs STLK1 and STWT1 for analysis of stream-aquifer hydraulic interaction

    USGS Publications Warehouse

    Desimone, Leslie A.; Barlow, Paul M.

    1999-01-01

    Quantifying the hydraulic interaction of aquifers and streams is important in the analysis of stream base flow, flood-wave effects, and contaminant transport between surface- and ground-water systems. This report describes the use of two computer programs, STLK1 and STWT1, to analyze the hydraulic interaction of streams with confined, leaky, and water-table aquifers during periods of stream-stage fluctuations and uniform, areal recharge. The computer programs are based on analytical solutions to the ground-water-flow equation in stream-aquifer settings and calculate ground-water levels, seepage rates across the stream-aquifer boundary, and bank storage that result from arbitrarily varying stream stage or recharge. Analysis of idealized, hypothetical stream-aquifer systems is used to show how aquifer type, aquifer boundaries, and aquifer and streambank hydraulic properties affect aquifer response to stresses. Published data from alluvial and stratified-drift aquifers in Kentucky, Massachusetts, and Iowa are used to demonstrate application of the programs to field settings. Analytical models of these three stream-aquifer systems are developed on the basis of available hydrogeologic information. Stream-stage fluctuations and recharge are applied to the systems as hydraulic stresses. The models are calibrated by matching ground-water levels calculated with computer program STLK1 or STWT1 to measured ground-water levels. The analytical models are used to estimate hydraulic properties of the aquifer, aquitard, and streambank; to evaluate hydrologic conditions in the aquifer; and to estimate seepage rates and bank-storage volumes resulting from flood waves and recharge. Analysis of field examples demonstrates the accuracy and limitations of the analytical solutions and programs when applied to actual ground-water systems and the potential uses of the analytical methods as alternatives to numerical modeling for quantifying stream-aquifer interactions.

  11. Energy harvesting efficiency optimization via varying the radius of curvature of a piezoelectric THUNDER

    NASA Astrophysics Data System (ADS)

    Wang, Fengxia; Wang, Zengmei; Soroush, Mahmoudiandehkordi; Abedini, Amin

    2016-09-01

    In this work the energy harvesting performance of a piezoelectric curved energy generator (THin layer UNimorph DrivER (THUNDER)) is studied via experimental and analytical methods. The analytical model of the THUNDER is created based on the linear mechanical-electrical constitutive law of the piezoelectric material, the linear elastic constitutive law of the substrate, and Euler-Bernoulli beam theory. Using the resulting linear modal functions, the Rayleigh-Ritz approach was applied to obtain the reduced mechanical-electrical coupled modulation equations. The analytical model is verified by the experimental results. Both the experimental and analytical results are reported for the THUNDER's AC power output, its DC power output with a rectifier bridge and a capacitor, and its power output with a microcontroller energy harvesting circuit. Based on the theoretical model, the analytical solution of the DC power is derived in terms of the vibration amplitude, frequency, and the electrical load. Harvesting energy from a low-frequency vibration source with a piezoelectric generator requires a device with a low resonance frequency and good flexibility. The THUNDER developed by Langley Research Center exhibits high power when used as an energy generator and large displacement when used as an actuator. Compared with less flexible PZT elements, THUNDER is more difficult to model but has better vibration absorption capacity and higher energy recovery efficiency. The effect of THUNDER's radius of curvature on energy harvesting efficiency is the main subject of the investigation: the radius of curvature is treated as a tuning parameter that can match the generator's resonance frequency to the source excitation frequency.

  12. 13C-based metabolic flux analysis: fundamentals and practice.

    PubMed

    Yang, Tae Hoon

    2013-01-01

    Isotope-based metabolic flux analysis is one of the emerging technologies applied to system level metabolic phenotype characterization in metabolic engineering. Among the developed approaches, 13C-based metabolic flux analysis has been established as a standard tool and has been widely applied to quantitative pathway characterization of diverse biological systems. To implement 13C-based metabolic flux analysis in practice, comprehending the underlying mathematical and computational modeling fundamentals is of importance along with carefully conducted experiments and analytical measurements. Such knowledge is also crucial when designing 13C-labeling experiments and properly acquiring key data sets essential for in vivo flux analysis implementation. In this regard, the modeling fundamentals of 13C-labeling systems and analytical data processing are the main topics we will deal with in this chapter. Along with this, the relevant numerical optimization techniques are addressed to help implementation of the entire computational procedures aiming at 13C-based metabolic flux analysis in vivo.

  13. Modeling of dispersed-drug delivery from planar polymeric systems: optimizing analytical solutions.

    PubMed

    Helbling, Ignacio M; Ibarra, Juan C D; Luna, Julio A; Cabrera, María I; Grau, Ricardo J A

    2010-11-15

    Analytical solutions for the case of controlled dispersed-drug release from planar non-erodible polymeric matrices, based on the Refined Integral Method, are presented. A new adjusting equation is used for the dissolved drug concentration profile in the depletion zone. The set of equations matches the available exact solution. In order to illustrate the usefulness of this model, comparisons with experimental profiles reported in the literature are presented. The obtained results show that the model can be employed over a broad range of applicability. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Theory and Circuit Model for Lossy Coaxial Transmission Line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genoni, T. C.; Anderson, C. N.; Clark, R. E.

    2017-04-01

    The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.
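
    For orientation, the sketch below evaluates the textbook high-frequency estimates of skin depth, characteristic impedance, and conductor attenuation for a coaxial line; these are the classical approximations, not the improved wide-band formulas of the report, and the geometry and material values are assumptions chosen to resemble a common 50-ohm cable.

```python
import numpy as np

MU0, EPS0 = 4e-7 * np.pi, 8.854187817e-12

def coax_skin_loss(f, a, b, sigma, eps_r):
    """Classical high-frequency estimates for a coaxial line: skin depth,
    characteristic impedance, and conductor attenuation. These are the
    textbook formulas, not the improved wide-band expressions of the report.
    f      : frequency [Hz]
    a, b   : inner conductor radius / shield inner radius [m]
    sigma  : conductor conductivity [S/m]
    eps_r  : relative permittivity of the dielectric
    """
    delta = np.sqrt(1.0 / (np.pi * f * MU0 * sigma))      # skin depth [m]
    Rs = 1.0 / (sigma * delta)                            # surface resistance [ohm/square]
    R = Rs / (2.0 * np.pi) * (1.0 / a + 1.0 / b)          # series resistance [ohm/m]
    Z0 = np.sqrt(MU0 / (EPS0 * eps_r)) * np.log(b / a) / (2.0 * np.pi)
    alpha = R / (2.0 * Z0)                                # conductor loss [Np/m]
    return delta, Z0, alpha

# RG-58-like geometry, copper conductor, 1 GHz (assumed values)
delta, Z0, alpha = coax_skin_loss(1e9, 0.45e-3, 1.47e-3, 5.8e7, 2.25)
print(f"skin depth {delta*1e6:.2f} um, Z0 {Z0:.1f} ohm, "
      f"conductor loss {alpha*8.686:.2f} dB/m")
```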

  15. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.
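
    As a point of reference for the kind of model whose parameters such a framework exposes, the sketch below fits a small latent Dirichlet allocation (LDA) topic model with scikit-learn; it is a generic illustration only, not the paper's visual analytics tool, and the toy corpus is invented.

```python
# Minimal, generic LDA sketch (scikit-learn); it is not the visual analytics
# framework of the paper, only an illustration of the kind of topic model
# whose parameters such a framework lets users refine.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "gene expression regulation in cancer cells",
    "deep learning for image classification",
    "protein folding and molecular dynamics",
    "convolutional networks improve object detection",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]   # top words per topic
    print(f"topic {k}: {', '.join(top)}")
```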

  16. Scaling Law for Cross-stream Diffusion in Microchannels under Combined Electroosmotic and Pressure Driven Flow.

    PubMed

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2013-01-01

    This paper presents an analytical study of the cross-stream diffusion of an analyte in a rectangular microchannel under combined electroosmotic flow (EOF) and pressure driven flow to investigate the heterogeneous transport behavior and spatially-dependent diffusion scaling law. An analytical model capable of accurately describing 3D steady-state convection-diffusion in microchannels with arbitrary aspect ratios is developed based on the assumption of the thin Electric Double Layer (EDL). The model is verified against high-fidelity numerical simulation in terms of flow velocity and analyte concentration profiles with excellent agreement (<0.5% relative error). An extensive parametric analysis is then undertaken to interrogate the effect of the combined flow velocity field on the transport behavior in both the positive pressure gradient (PPG) and negative pressure gradient (NPG) cases. For the first time, the evolution from the spindle-shaped concentration profile in the PPG case, via the stripe-shaped profile (pure EOF), and finally to the butterfly-shaped profile in the NPG case is obtained using the analytical model along with a quantitative depiction of the spatially-dependent diffusion layer thickness and scaling law across a wide range of the parameter space.

  17. Scaling Law for Cross-stream Diffusion in Microchannels under Combined Electroosmotic and Pressure Driven Flow

    PubMed Central

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2012-01-01

    This paper presents an analytical study of the cross-stream diffusion of an analyte in a rectangular microchannel under combined electroosmotic flow (EOF) and pressure driven flow to investigate the heterogeneous transport behavior and spatially-dependent diffusion scaling law. An analytical model capable of accurately describing 3D steady-state convection-diffusion in microchannels with arbitrary aspect ratios is developed based on the assumption of the thin Electric Double Layer (EDL). The model is verified against high-fidelity numerical simulation in terms of flow velocity and analyte concentration profiles with excellent agreement (<0.5% relative error). An extensive parametric analysis is then undertaken to interrogate the effect of the combined flow velocity field on the transport behavior in both the positive pressure gradient (PPG) and negative pressure gradient (NPG) cases. For the first time, the evolution from the spindle-shaped concentration profile in the PPG case, via the stripe-shaped profile (pure EOF), and finally to the butterfly-shaped profile in the NPG case is obtained using the analytical model along with a quantitative depiction of the spatially-dependent diffusion layer thickness and scaling law across a wide range of the parameter space. PMID:23554584

  18. Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.

  19. Analytical modeling of the mechanics of early invasion of a merozoite into a human erythrocyte.

    PubMed

    Abdalrahman, Tamer; Franz, Thomas

    2017-12-01

    In this study, we used a continuum model based on contact mechanics to understand the mechanics of merozoite invasion into human erythrocytes. This model allows us to evaluate the indentation force and work as well as the contact pressure between the merozoite and erythrocyte for an early stage of invasion (γ = 10%). The model predicted an indentation force of 1.3 × 10⁻¹¹ N and an indentation work of 1 × 10⁻¹⁸ J. The present analytical model can be considered as a useful tool not only for investigations in mechanobiology and biomechanics but also to explore novel therapeutic targets for malaria and other parasite infections.

  20. Viscoelastic behavior and lifetime (durability) predictions. [for laminated fiber reinforced plastics

    NASA Technical Reports Server (NTRS)

    Brinson, R. F.

    1985-01-01

    A method for lifetime or durability predictions for laminated fiber reinforced plastics is given. The procedure is similar to but not the same as the well known time-temperature-superposition principle for polymers. The method is better described as an analytical adaptation of time-stress-superposition methods. The analytical constitutive modeling is based upon a nonlinear viscoelastic constitutive model developed by Schapery. Time dependent failure models are discussed and are related to the constitutive models. Finally, results of an incremental lamination analysis using the constitutive and failure models are compared to experimental results. Favorable agreement between theory and experiment is demonstrated using data from creep tests of about two months' duration.

  1. A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers

    DOE PAGES

    Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...

    2018-03-28

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data was collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.

  2. A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data was collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.

  3. Analytical and experimental comparisons of electromechanical vibration response of a piezoelectric bimorph beam for power harvesting

    NASA Astrophysics Data System (ADS)

    Lumentut, M. F.; Howard, I. M.

    2013-03-01

    Power harvesters that extract energy from vibrating systems via piezoelectric transduction show strong potential for powering smart wireless sensor devices in applications of health condition monitoring of rotating machinery and structures. This paper presents an analytical method for modelling an electromechanical piezoelectric bimorph beam with tip mass under two input base transverse and longitudinal excitations. The Euler-Bernoulli beam equations were used to model the piezoelectric bimorph beam. The polarity-electric field of the piezoelectric element is excited by the strain field caused by base input excitation, resulting in electrical charge. The governing electromechanical dynamic equations were derived analytically using the weak form of the Hamiltonian principle to obtain the constitutive equations. Three constitutive electromechanical dynamic equations based on independent coefficients of virtual displacement vectors were formulated and then further modelled using the normalised Ritz eigenfunction series. The electromechanical formulations include both the series and parallel connections of the piezoelectric bimorph. The multi-mode frequency response functions (FRFs) under varying electrical load resistance were formulated using Laplace transformation for the multi-input mechanical vibrations to provide the multi-output dynamic displacement, velocity, voltage, current and power. The experimental and theoretical validations, reduced to the single-mode system, were shown to provide reasonable predictions. The model results from polar base excitation for off-axis input motions were validated against experimental results, showing the change in the electrical power frequency response amplitude as a function of excitation angle, with relevance for practical implementation.

  4. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.

  5. Parsec-Scale Obscuring Accretion Disk with Large-Scale Magnetic Field in AGNs

    NASA Technical Reports Server (NTRS)

    Dorodnitsyn, A.; Kallman, T.

    2017-01-01

    A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick parsec-scale (pc-scale) torus in active galactic nuclei (AGNs). Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.

  6. Prediction of turning stability using receptance coupling

    NASA Astrophysics Data System (ADS)

    Jasiewicz, Marcin; Powałka, Bartosz

    2018-01-01

    This paper addresses machining stability prediction for the dynamic "lathe - workpiece" system using the receptance coupling method. The dynamic properties of the lathe components (the spindle and the tailstock) are assumed to be constant and can be determined experimentally from the results of an impact test. The variable element of the "machine tool - holder - workpiece" system is therefore the machined part, which can easily be modelled analytically. The receptance coupling method enables the synthesis of the experimental (spindle, tailstock) and analytical (machined part) models, so impact testing of the entire system becomes unnecessary. The paper presents the methodology for synthesizing the analytical and experimental models, the evaluation of the stability lobes, and an experimental validation procedure involving both the determination of the dynamic properties of the system and cutting tests. Finally, the experimental verification results are presented and discussed.

  7. Parsec-scale Obscuring Accretion Disk with Large-scale Magnetic Field in AGNs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorodnitsyn, A.; Kallman, T.

    A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc-scale torus in AGNs. Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.

  8. Bridging analytical approaches for low-carbon transitions

    NASA Astrophysics Data System (ADS)

    Geels, Frank W.; Berkhout, Frans; van Vuuren, Detlef P.

    2016-06-01

    Low-carbon transitions are long-term multi-faceted processes. Although integrated assessment models have many strengths for analysing such transitions, their mathematical representation requires a simplification of the causes, dynamics and scope of such societal transformations. We suggest that integrated assessment model-based analysis should be complemented with insights from socio-technical transition analysis and practice-based action research. We discuss the underlying assumptions, strengths and weaknesses of these three analytical approaches. We argue that full integration of these approaches is not feasible, because of foundational differences in philosophies of science and ontological assumptions. Instead, we suggest that bridging, based on sequential and interactive articulation of different approaches, may generate a more comprehensive and useful chain of assessments to support policy formation and action. We also show how these approaches address knowledge needs of different policymakers (international, national and local), relate to different dimensions of policy processes and speak to different policy-relevant criteria such as cost-effectiveness, socio-political feasibility, social acceptance and legitimacy, and flexibility. A more differentiated set of analytical approaches thus enables a more differentiated approach to climate policy making.

  9. Subject-enabled analytics model on measurement statistics in health risk expert system for public health informatics.

    PubMed

    Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun

    2017-11-01

    This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with the public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports both modes, step-by-step analysis and auto-computing process, respectively for preliminary evaluation and real time computation. The proposed model was evaluated by recomputing previous studies on the epidemiological measurement of diseases that were caused by either heavy metal exposures in the environment or clinical complications in hospital. The simulation validity was confirmed with commercial statistics software. The model was installed in a stand-alone computer and in a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 10⁵ sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. On the nonlinear dynamics of trolling-mode AFM: Analytical solution using multiple time scales method

    NASA Astrophysics Data System (ADS)

    Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza

    2018-06-01

    Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by a considerable reduction of the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted to the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of TR-AFM resonator utilizing multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operation in air and liquid (for different penetration length of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by the comparison with both the finite element solution of the main partial differential equations and the experimental observations. The effect of excitation angle of the resonator on horizontal oscillation of the probe tip and the effect of different parameters on the frequency-response of the system are investigated.

  11. Analytical Modeling of Triple-Metal Hetero-Dielectric DG SON TFET

    NASA Astrophysics Data System (ADS)

    Mahajan, Aman; Dash, Dinesh Kumar; Banerjee, Pritha; Sarkar, Subir Kumar

    2018-02-01

    In this paper, a 2-D analytical model of triple-metal hetero-dielectric DG TFET is presented by combining the concepts of triple material gate engineering and hetero-dielectric engineering. Three metals with different work functions are used as both front- and back gate electrodes to modulate the barrier at source/channel and channel/drain interface. In addition to this, front gate dielectric consists of high-K HfO2 at source end and low-K SiO2 at drain side, whereas back gate dielectric is replaced by air to further improve the ON current of the device. Surface potential and electric field of the proposed device are formulated solving 2-D Poisson's equation and Young's approximation. Based on this electric field expression, tunneling current is obtained by using Kane's model. Several device parameters are varied to examine the behavior of the proposed device. The analytical model is validated with TCAD simulation results for proving the accuracy of our proposed model.

  12. Analytical drain current model for symmetric dual-gate amorphous indium gallium zinc oxide thin-film transistors

    NASA Astrophysics Data System (ADS)

    Qin, Ting; Liao, Congwei; Huang, Shengxiang; Yu, Tianbao; Deng, Lianwen

    2018-01-01

    An analytical drain current model based on the surface potential is proposed for amorphous indium gallium zinc oxide (a-InGaZnO) thin-film transistors (TFTs) with a synchronized symmetric dual-gate (DG) structure. Solving the electric field, surface potential (φS), and central potential (φ0) of the InGaZnO film using the Poisson equation with the Gaussian method and Lambert function is demonstrated in detail. The compact analytical model of current-voltage behavior, which consists of drift and diffusion components, is investigated by regional integration, and voltage-dependent effective mobility is taken into account. Comparison results demonstrate that the calculation results obtained using the derived models match well with the simulation results obtained using a technology computer-aided design (TCAD) tool. Furthermore, the proposed model is incorporated into SPICE simulations using Verilog-A to verify the feasibility of using DG InGaZnO TFTs for high-performance circuit designs.

  13. Magnetically-driven medical robots: An analytical magnetic model for endoscopic capsules design

    NASA Astrophysics Data System (ADS)

    Li, Jing; Barjuei, Erfan Shojaei; Ciuti, Gastone; Hao, Yang; Zhang, Peisen; Menciassi, Arianna; Huang, Qiang; Dario, Paolo

    2018-04-01

    Magnetic-based approaches are highly promising to provide innovative solutions for the design of medical devices for diagnostic and therapeutic procedures, such as in the endoluminal districts. Due to the intrinsic magnetic properties (no current needed) and the high strength-to-size ratio compared with electromagnetic solutions, permanent magnets are usually embedded in medical devices. In this paper, a set of analytical formulas have been derived to model the magnetic forces and torques which are exerted by an arbitrary external magnetic field on a permanent magnetic source embedded in a medical robot. In particular, the authors modelled cylindrical permanent magnets, the general solution most often embedded in magnetically-driven medical devices. The analytical model can be applied to axially and diametrically magnetized, solid and annular cylindrical permanent magnets without severe computational complexity. Using a cylindrical permanent magnet as the selected solution, the model has been applied to a robotic endoscopic capsule as a pilot study in the design of magnetically-driven robots.
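
    As a simplified counterpart to the cylindrical-magnet formulas described above, the sketch below evaluates the point-dipole approximation F = ∇(m·B) and τ = m×B for a small permanent magnet in an external field; the field function, magnet dimensions, and remanence are illustrative assumptions, and the paper's full analytical model is not reproduced.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_wrench(m, field, r, h=1e-6):
    """Force and torque on a permanent magnet treated as a point dipole of
    moment m [A m^2] at position r [m] in an external field B(r) [T] given
    by `field`. F = grad(m . B), tau = m x B -- the usual dipole
    approximation, not the full analytical formulas derived in the paper.
    """
    def mdotB(p):
        return np.dot(m, field(p))
    # central-difference gradient of (m . B) gives the force
    F = np.array([(mdotB(r + h * e) - mdotB(r - h * e)) / (2 * h)
                  for e in np.eye(3)])
    tau = np.cross(m, field(r))
    return F, tau

# Illustrative NdFeB-like magnet (Br ~ 1.2 T, 5 mm diameter x 5 mm length)
V = np.pi * (2.5e-3) ** 2 * 5e-3
m = np.array([0.0, 0.0, 1.2 * V / MU0])        # axial magnetization

# Hypothetical external field with a simple gradient along z (illustration only)
external = lambda p: np.array([0.0, 0.01, 0.05 + 2.0 * p[2]])   # [T], 2 T/m gradient
F, tau = dipole_wrench(m, external, r=np.array([0.0, 0.0, 0.1]))
print("force [N]:", F, " torque [N m]:", tau)
```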

  14. Analysis of the sound field in finite length infinite baffled cylindrical ducts with vibrating walls of finite impedance.

    PubMed

    Shao, Wei; Mechefske, Chris K

    2005-04-01

    This paper describes an analytical model of finite cylindrical ducts with infinite flanges. This model is used to investigate the sound radiation characteristics of the gradient coil system of a magnetic resonance imaging (MRI) scanner. The sound field in the duct satisfies both the boundary conditions at the wall and at the open ends. The vibrating cylindrical wall of the duct is assumed to be the only sound source. Different acoustic conditions for the wall (rigid and absorptive) are used in the simulations. The wave reflection phenomenon at the open ends of the finite duct is described by general radiation impedance. The analytical model is validated by the comparison with its counterpart in a commercial code based on the boundary element method (BEM). The analytical model shows significant advantages over the BEM model with better numerical efficiency and a direct relation between the design parameters and the sound field inside the duct.

  15. Parameter identification of hyperelastic material properties of the heel pad based on an analytical contact mechanics model of a spherical indentation.

    PubMed

    Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi

    2017-01-01

    Accurate identification of the material properties of the plantar soft tissue is important for computer-aided analysis of foot pathologies and design of therapeutic footwear interventions based on subject-specific models of the foot. However, parameter identification of the hyperelastic material properties of plantar soft tissues usually requires an inverse finite element analysis due to the lack of a practical contact model of the indentation test. In the present study, we derive an analytical contact model of a spherical indentation test in order to directly estimate the material properties of the plantar soft tissue. Force-displacement curves of the heel pads are obtained through an indentation experiment. The experimental data are fit to the analytical stress-strain solution of the spherical indentation in order to obtain the parameters. A spherical indentation approach successfully predicted the non-linear material properties of the heel pad without iterative finite element calculation. The force-displacement curve obtained in the present study was found to be situated lower than those identified in previous studies. The proposed framework for identifying the hyperelastic material parameters may facilitate the development of subject-specific FE modeling of the foot for possible clinical and ergonomic applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
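
    A much simpler relative of the contact model described above is the Hertzian solution for a rigid sphere indenting a linear-elastic half-space, which can likewise be fit directly to force-displacement data. The sketch below performs that fit on synthetic data; the indenter radius, Poisson's ratio, and data are assumptions, and the hyperelastic constitutive description used in the paper is not included.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 5e-3          # spherical indenter radius [m] (assumed)
NU = 0.45         # Poisson's ratio of the soft tissue (assumed)

def hertz_force(delta, E):
    """Hertzian force for a rigid sphere on a linear-elastic half-space.
    A simplification for illustration, not the hyperelastic model of the paper."""
    E_star = E / (1.0 - NU ** 2)
    return (4.0 / 3.0) * E_star * np.sqrt(R) * delta ** 1.5

# Hypothetical force-displacement data [m, N] standing in for an experiment
delta = np.linspace(0.0, 3e-3, 20)
noise = 1.0 + 0.05 * np.random.default_rng(0).standard_normal(20)
force = hertz_force(delta, 8.0e4) * noise

E_fit, _ = curve_fit(hertz_force, delta, force, p0=[1.0e5])
print(f"fitted effective modulus ≈ {E_fit[0]/1e3:.1f} kPa")
```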

  16. Electrical Wave Propagation in an Anisotropic Model of the Left Ventricle Based on Analytical Description of Cardiac Architecture

    PubMed Central

    Pravdin, Sergey F.; Dierckx, Hans; Katsnelson, Leonid B.; Solovyova, Olga; Markhasin, Vladimir S.; Panfilov, Alexander V.

    2014-01-01

    We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation using the ten Tusscher–Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that the fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation. PMID:24817308

  17. Confocal Raman Microscopy for pH-Gradient Preconcentration and Quantitative Analyte Detection in Optically Trapped Phospholipid Vesicles.

    PubMed

    Hardcastle, Chris D; Harris, Joel M

    2015-08-04

    The ability of a vesicle membrane to preserve a pH gradient, while allowing for diffusion of neutral molecules across the phospholipid bilayer, can provide the isolation and preconcentration of ionizable compounds within the vesicle interior. In this work, confocal Raman microscopy is used to observe (in situ) the pH-gradient preconcentration of compounds into individual optically trapped vesicles that provide sub-femtoliter collectors for small-volume samples. The concentration of analyte accumulated in the vesicle interior is determined relative to a perchlorate-ion internal standard, preloaded into the vesicle along with a high-concentration buffer. As a guide to the experiments, a model for the transfer of analyte into the vesicle based on acid-base equilibria is developed to predict the concentration enrichment as a function of source-phase pH and analyte concentration. To test the concept, the accumulation of benzyldimethylamine (BDMA) was measured within individual 1 μm phospholipid vesicles having a stable initial pH that is 7 units lower than the source phase. For low analyte concentrations in the source phase (100 nM), a concentration enrichment into the vesicle interior of (5.2 ± 0.4) × 10⁵ was observed, in agreement with the model predictions. Detection of BDMA from a 25 nM source-phase sample was demonstrated, a noteworthy result for an unenhanced Raman scattering measurement. The developed model accurately predicts the falloff of enrichment (and measurement sensitivity) at higher analyte concentrations, where the transfer of greater amounts of BDMA into the vesicle titrates the internal buffer and decreases the pH gradient. The predictable calibration response over 4 orders of magnitude in source-phase concentration makes it suitable for quantitative analysis of ionizable compounds from small-volume samples. The kinetics of analyte accumulation are relatively fast (∼15 min) and are consistent with the rate of transfer of a polar aromatic molecule across a gel-phase phospholipid membrane.
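
    The equilibrium limit of the acid-base trapping described above reduces to a Henderson-Hasselbalch ratio when only the neutral form of the base crosses the membrane. The sketch below evaluates that ratio; the pKa and pH values are illustrative assumptions rather than the conditions reported in the paper.

```python
def weak_base_enrichment(pKa, pH_in, pH_out):
    """Equilibrium enrichment of a weak base inside a vesicle whose interior
    pH is lower than the source phase, assuming only the neutral form crosses
    the membrane (classic ion-trapping / Henderson-Hasselbalch argument; a
    simplified limit of the acid-base equilibrium model in the paper).
    """
    return (1.0 + 10.0 ** (pKa - pH_in)) / (1.0 + 10.0 ** (pKa - pH_out))

# Hypothetical numbers: amine pKa ~ 9 and a 7-unit inward pH gradient
print(f"enrichment ≈ {weak_base_enrichment(9.0, pH_in=3.0, pH_out=10.0):.2e}")
```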

  18. The Analytical Limits of Modeling Short Diffusion Timescales

    NASA Astrophysics Data System (ADS)

    Bradshaw, R. W.; Kent, A. J.

    2016-12-01

    Chemical and isotopic zoning in minerals is widely used to constrain the timescales of magmatic processes such as magma mixing and crystal residence, etc. via diffusion modeling. Forward modeling of diffusion relies on fitting diffusion profiles to measured compositional gradients. However, an individual measurement is essentially an average composition for a segment of the gradient defined by the spatial resolution of the analysis. Thus there is the potential for the analytical spatial resolution to limit the timescales that can be determined for an element of given diffusivity, particularly where the scale of the gradient approaches that of the measurement. Here we use a probabilistic modeling approach to investigate the effect of analytical spatial resolution on estimated timescales from diffusion modeling. Our method investigates how accurately the age of a synthetic diffusion profile can be obtained by modeling an "unknown" profile derived from discrete sampling of the synthetic compositional gradient at a given spatial resolution. We also include the effects of analytical uncertainty and the position of measurements relative to the diffusion gradient. We apply this method to the spatial resolutions of common microanalytical techniques (LA-ICP-MS, SIMS, EMP, NanoSIMS). Our results confirm that for a given diffusivity, higher spatial resolution gives access to shorter timescales, and that each analytical spacing has a minimum timescale, below which it overestimates the timescale. For example, for Ba diffusion in plagioclase at 750 °C, timescales are accurate (within 20%) above 10, 100, 2,600, and 71,000 years at 0.3, 1, 5, and 25 μm spatial resolution, respectively. For Sr diffusion in plagioclase at 750 °C, timescales are accurate above 0.02, 0.2, 4, and 120 years at the same spatial resolutions. Our results highlight the importance of selecting appropriate analytical techniques to estimate accurate diffusion-based timescales.
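
    The floor on resolvable timescales discussed above can be anticipated with a simple scaling: the shortest credible timescale is roughly the time for the diffusion length √(Dt) to grow to the analytical spot size. The sketch below evaluates that scaling for several spot sizes using an assumed diffusivity; it is an order-of-magnitude argument, not the probabilistic resampling model of the abstract.

```python
def minimum_timescale_years(spot_size_m, D_m2_per_s):
    """Order-of-magnitude floor on resolvable diffusion timescales: the time
    at which the characteristic diffusion length sqrt(D t) equals the
    analytical spot size. A scaling argument only, not the probabilistic
    resampling model used in the abstract.
    """
    seconds = spot_size_m ** 2 / D_m2_per_s
    return seconds / 3.15576e7          # seconds per year

# Hypothetical diffusivity of 1e-22 m^2/s for a slow-diffusing trace element
for spot in (0.3e-6, 1e-6, 5e-6, 25e-6):       # NanoSIMS ... LA-ICP-MS spot sizes
    print(f"{spot*1e6:4.1f} um -> t_min ~ {minimum_timescale_years(spot, 1e-22):.2g} yr")
```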

  19. Many-Particle Dephasing after a Quench

    NASA Astrophysics Data System (ADS)

    Kiendl, Thomas; Marquardt, Florian

    2017-03-01

    After a quench in a quantum many-body system, expectation values tend to relax towards long-time averages. However, temporal fluctuations remain in the long-time limit, and it is crucial to study the suppression of these fluctuations with increasing system size. The particularly important case of nonintegrable models has been addressed so far only by numerics and conjectures based on analytical bounds. In this work, we are able to derive analytical predictions for the temporal fluctuations in a nonintegrable model (the transverse Ising chain with extra terms). Our results are based on identifying a dynamical regime of "many-particle dephasing," where quasiparticles do not yet relax but fluctuations are nonetheless suppressed exponentially by weak integrability breaking.

  20. Many-Particle Dephasing after a Quench.

    PubMed

    Kiendl, Thomas; Marquardt, Florian

    2017-03-31

    After a quench in a quantum many-body system, expectation values tend to relax towards long-time averages. However, temporal fluctuations remain in the long-time limit, and it is crucial to study the suppression of these fluctuations with increasing system size. The particularly important case of nonintegrable models has been addressed so far only by numerics and conjectures based on analytical bounds. In this work, we are able to derive analytical predictions for the temporal fluctuations in a nonintegrable model (the transverse Ising chain with extra terms). Our results are based on identifying a dynamical regime of "many-particle dephasing," where quasiparticles do not yet relax but fluctuations are nonetheless suppressed exponentially by weak integrability breaking.

  1. Combining observations in the reflective solar and thermal domains for improved carbon and energy flux estimation

    USDA-ARS?s Scientific Manuscript database

    This study investigates the utility of integrating remotely sensed estimates of leaf chlorophyll (Cab) into a thermal-based Two-Source Energy Balance (TSEB) model that estimates land-surface CO2 and energy fluxes using an analytical, light-use-efficiency (LUE) based model of canopy resistance. The LU...

  2. LitPathExplorer: a confidence-based visual text analytics tool for exploring literature-enriched pathway models.

    PubMed

    Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia

    2018-04-15

    Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.

  3. Turbomachinery noise

    NASA Astrophysics Data System (ADS)

    Groeneweg, John F.; Sofrin, Thomas G.; Rice, Edward J.; Gliebe, Phillip R.

    1991-08-01

    Summarized here are key advances in experimental techniques and theoretical applications which point the way to a broad understanding and control of turbomachinery noise. On the experimental side, the development of effective inflow control techniques makes it possible to conduct, in ground based facilities, definitive experiments in internally controlled blade row interactions. Results can now be valid indicators of flight behavior and can provide a firm base for comparison with analytical results. Inflow control coupled with detailed diagnostic tools such as blade pressure measurements can be used to uncover the more subtle mechanisms such as rotor strut interaction, which can set tone levels for some engine configurations. Initial mappings of rotor wake-vortex flow fields have provided a data base for a first generation semiempirical flow disturbance model. Laser velocimetry offers a nonintrusive method for validating and improving the model. Digital data systems and signal processing algorithms are bringing mode measurement closer to a working tool that can be frequently applied to a real machine such as a turbofan engine. On the analytical side, models of most of the links in the chain from turbomachine blade source to far field observation point have been formulated. Three dimensional lifting surface theory for blade rows, including source noncompactness and cascade effects, blade row transmission models incorporating mode and frequency scattering, and modal radiation calculations, including hybrid numerical-analytical approaches, are tools which await further application.

  4. SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal

    PubMed Central

    Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.

    2017-01-01

    Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
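
    As a generic illustration of the classification step such a system relies on, the sketch below trains an RBF-kernel support vector machine on imbalanced synthetic feature vectors with scikit-learn; the features, class sizes, and preprocessing are invented, and the paper's subject-specific, multi-time-scale design is not reproduced.

```python
# Generic SVM classifier sketch (scikit-learn) for preictal vs. interictal
# feature vectors; the actual system in the paper uses subject-specific,
# unbalanced training over three time scales, none of which is shown here.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_interictal = rng.normal(0.0, 1.0, (200, 16))     # hypothetical iEEG features
X_preictal = rng.normal(0.5, 1.2, (40, 16))        # fewer, shifted preictal segments
X = np.vstack([X_interictal, X_preictal])
y = np.array([0] * 200 + [1] * 40)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", class_weight="balanced"))  # handle imbalance
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```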

  5. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the model parameter values. For this reason, the results should be controlled by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The result is promising and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
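
    To make the inversion step concrete, the sketch below implements a minimal global-best particle swarm optimizer and applies it to a toy misfit whose minimum is a known parameter triple (horizontal resistivity, vertical resistivity, thickness); the swarm settings and the misfit are assumptions for illustration, not the scheme or data of the paper.

```python
import numpy as np

def pso_minimize(misfit, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal global-best particle swarm optimizer for fitting layered-earth
    model parameters; a generic sketch, not the scheme used in the paper.
    `bounds` is a list of (low, high) pairs, one per parameter.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))      # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                                # standard PSO weights
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy misfit: recover (rho_h, rho_v, thickness) = (50, 200, 10) from itself
target = np.array([50.0, 200.0, 10.0])
best, err = pso_minimize(lambda p: np.sum((p - target) ** 2),
                         bounds=[(1, 500), (1, 1000), (1, 100)])
print("recovered parameters:", np.round(best, 2), " misfit:", err)
```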

  6. Development of an analytical model for estimating global terrestrial carbon assimilation using a rate-limitation framework

    NASA Astrophysics Data System (ADS)

    Donohue, Randall; Yang, Yuting; McVicar, Tim; Roderick, Michael

    2016-04-01

    A fundamental question in climate and ecosystem science is "how does climate regulate the land surface carbon budget?" To better answer that question, here we develop an analytical model for estimating mean annual terrestrial gross primary productivity (GPP), which is the largest carbon flux over land, based on a rate-limitation framework. Actual GPP (climatological mean from 1982 to 2010) is calculated as a function of the balance between two GPP potentials defined by the climate (i.e., precipitation and solar radiation) and a third parameter that encodes other environmental variables and modifies the GPP-climate relationship. The developed model was tested at three spatial scales using different GPP sources, i.e., (1) observed GPP from 94 flux-sites, (2) modelled GPP (using the model-tree-ensemble approach) at 48654 (0.5 degree) grid-cells and (3) at 32 large catchments across the globe. Results show that the proposed model could account for the spatial GPP patterns, with a root-mean-square error of 0.70, 0.65 and 0.3 g C m-2 d-1 and R2 of 0.79, 0.92 and 0.97 for the flux-site, grid-cell and catchment scales, respectively. This analytical GPP model shares a similar form with the Budyko hydroclimatological model, which opens the possibility of a general analytical framework to analyze the linked carbon-water-energy cycles.

  7. Building analytical three-field cosmological models

    NASA Astrophysics Data System (ADS)

    Santos, J. R. L.; Moraes, P. H. R. S.; Ferreira, D. A.; Neta, D. C. Vilar

    2018-02-01

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters.

  8. Modelling of resonant MEMS magnetic field sensor with electromagnetic induction sensing

    NASA Astrophysics Data System (ADS)

    Liu, Song; Xu, Huaying; Xu, Dehui; Xiong, Bin

    2017-06-01

    This paper presents an analytical model of resonant MEMS magnetic field sensor with electromagnetic induction sensing. The resonant structure vibrates in square extensional (SE) mode. By analyzing the vibration amplitude and quality factor of the resonant structure, the magnetic field sensitivity as a function of device structure parameters and encapsulation pressure is established. The developed analytical model has been verified by comparing calculated results with experiment results and the deviation between them is only 10.25%, which shows the feasibility of the proposed device model. The model can provide theoretical guidance for further design optimization of the sensor. Moreover, a quantitative study of the magnetic field sensitivity is conducted with respect to the structure parameters and encapsulation pressure based on the proposed model.

  9. Thermodynamic analysis and subscale modeling of space-based orbit transfer vehicle cryogenic propellant resupply

    NASA Technical Reports Server (NTRS)

    Defelice, David M.; Aydelott, John C.

    1987-01-01

    The resupply of cryogenic propellants is an enabling technology for space-based orbit transfer vehicles. As part of the NASA Lewis ongoing efforts in microgravity fluid management, thermodynamic analysis and subscale modeling techniques were developed to support an on-orbit test bed for cryogenic fluid management technologies. Analytical results have shown that subscale experimental modeling of liquid resupply can be used to validate analytical models when the appropriate target temperature is selected to relate the model to its prototype system. Further analyses were used to develop a thermodynamic model of the tank chilldown process, which is required prior to the no-vent fill operation. These efforts were incorporated into two FORTRAN programs which were used to present preliminary analytical results.

  10. Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-01-01

    Thanks to recent developments in additive manufacturing techniques, it is now possible to fabricate porous biomaterials with arbitrarily complex micro-architectures. Micro-architectures of such biomaterials determine their physical and biological properties, meaning that one could potentially improve the performance of such biomaterials through rational design of micro-architecture. The relationship between the micro-architecture of porous biomaterials and their physical and biological properties has therefore received increasing attention recently. In this paper, we studied the mechanical properties of porous biomaterials made from a relatively unexplored unit cell, namely rhombicuboctahedron. We derived analytical relationships that relate the micro-architecture of such porous biomaterials, i.e. the dimensions of the rhombicuboctahedron unit cell, to their elastic modulus, Poisson's ratio, and yield stress. Finite element models were also developed to validate the analytical solutions. Analytical and numerical results were compared with experimental data from one of our recent studies. It was found that analytical solutions and numerical results show a very good agreement particularly for smaller values of apparent density. The elastic moduli predicted by analytical and numerical models were in very good agreement with experimental observations too. While in excellent agreement with each other, analytical and numerical models somewhat over-predicted the yield stress of the porous structures as compared to experimental data. As the ratio of the vertical struts to the inclined struts, α, approaches zero and infinity, the rhombicuboctahedron unit cell respectively approaches the octahedron (or truncated cube) and cube unit cells. For those limits, the analytical solutions presented here were found to approach the analytic solutions obtained for the octahedron, truncated cube, and cube unit cells, meaning that the presented solutions are generalizations of the analytical solutions obtained for several other types of porous biomaterials. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. An analytical strategy to investigate Semen Strychni nephrotoxicity based on simultaneous HILIC-ESI-MS/MS detection of Semen Strychni alkaloids, tyrosine and tyramine in HEK 293t cell lysates.

    PubMed

    Gu, Liqiang; Hou, Pengyi; Zhang, Ruowen; Liu, Ziying; Bi, Kaishun; Chen, Xiaohui

    2016-10-15

    A previous metabolomics study demonstrated that tyrosine metabolism might be disrupted by treatment with Semen Strychni in a cell nephrotoxicity model. To investigate the relationship between Semen Strychni alkaloids (SAs) and endogenous tyrosine and tyramine under the nephrotoxicity condition, an HILIC-ESI-MS/MS based analytical strategy was applied in this study. Based on the established Semen Strychni nephrotoxicity cell model, strychnine and brucine were identified and screened as the main SAs by an HPLC-Q Exactive hybrid quadrupole Orbitrap mass system. Then, a sensitive HILIC-ESI-MS/MS method was developed to simultaneously monitor strychnine, brucine, tyrosine and tyramine in cell lysate. The analytes were separated on a Shiseido CAPCELL CORE PC (150 mm × 2.1 mm, 2.7 μm) HILIC column in an acetonitrile/0.1% formic acid gradient system. All the calibration curves were linear, with regression coefficients above 0.9924. The absolute recoveries were more than 80.5% and the matrix effects were between 91.6% and 107.0%. With the developed method, the analytes were successfully determined in cell lysates. Decreased levels of tyrosine and tyramine were observed only in combination with increased levels of SAs, indicating that the disturbance of tyrosine metabolism might be induced by the accumulation of SAs in kidney cells after exposure to Semen Strychni. The HILIC-ESI-MS/MS based analytical strategy is a useful tool to reveal the relationships between toxic herb components and endogenous metabolite profiles in the toxicity investigation of herbal medicines. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Impact of correlated magnetic noise on the detection of stochastic gravitational waves: Estimation based on a simple analytical model

    NASA Astrophysics Data System (ADS)

    Himemoto, Yoshiaki; Taruya, Atsushi

    2017-07-01

    After the first direct detection of gravitational waves (GWs), detection of the stochastic background of GWs is an important next step, and the first GW event suggests that it is within the reach of the second-generation ground-based GW detectors. Such a GW signal is typically tiny and can be detected by cross-correlating the data from two spatially separated detectors if the detector noise is uncorrelated. It has been advocated, however, that the global magnetic fields in the Earth-ionosphere cavity produce environmental disturbances in low-frequency bands, known as Schumann resonances, which potentially couple with GW detectors. In this paper, we present a simple analytical model to estimate their impact on the detection of stochastic GWs. The model crucially depends on the geometry of the detector pair through the directional coupling, and we investigate the basic properties of the correlated magnetic noise based on the analytic expressions. The model reproduces the major trend of the recently measured global correlation between the GW detectors via magnetometers. The estimated values of the impact of correlated noise also match those obtained from the measurement. Finally, we discuss the implications for the detection of stochastic GWs with upcoming detectors, KAGRA and LIGO India. The model suggests that the LIGO Hanford-Virgo and Virgo-KAGRA pairs are possibly less sensitive to the correlated noise and can achieve a better sensitivity to the stochastic GW signal in the most pessimistic case.

  13. Adjustment of pesticide concentrations for temporal changes in analytical recovery, 1992–2010

    USGS Publications Warehouse

    Martin, Jeffrey D.; Eberle, Michael

    2011-01-01

    Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ("spiked" QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as a percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in apparent environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report presents data and models related to the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as "pesticides") that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 through 2010 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Models of recovery, based on robust, locally weighted scatterplot smooths (lowess smooths) of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
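
    A minimal sketch of the adjustment workflow described above, assuming recovery is smoothed over time with a lowess curve and measured concentrations are divided by the modeled recovery; the decimal-year dates, smoothing fraction and variable names are illustrative assumptions, not the report's exact procedure:

      # Sketch: model temporal change in recovery with a lowess smooth of
      # spiked-QC data, then adjust environmental concentrations to 100 percent
      # recovery.
      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      def recovery_model(spike_dates, recovery_pct, frac=0.5):
          """Return a function giving modeled recovery (percent) at any date."""
          smooth = lowess(recovery_pct, spike_dates, frac=frac, return_sorted=True)
          return lambda t: np.interp(t, smooth[:, 0], smooth[:, 1])

      def adjust_concentration(measured_conc, sample_date, recovery_fn):
          """Adjust a measured concentration to 100 percent recovery."""
          return measured_conc * 100.0 / recovery_fn(sample_date)

      # e.g., conc_adj = adjust_concentration(0.12, 2005.5,
      #                      recovery_model(qc_dates, qc_recoveries))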

  14. Quantifying Seepage Flux using Sediment Temperatures

    EPA Science Inventory

    This report provides a demonstration of different modeling approaches that use sediment temperatures to estimate the magnitude and direction of water flux across the groundwater-surface water transition zone. Analytical models based on steady-state or transient temperature solut...

  15. The Rack-Gear Tool Generation Modelling. Non-Analytical Method Developed in CATIA, Using the Relative Generating Trajectories Method

    NASA Astrophysics Data System (ADS)

    Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.

    2016-11-01

    The modelling of a curl of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known by direct measurement as a coordinate matrix, has as its goal the determination of the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, it is possible to determine the generating geometrical error, as a basis of the total error. The generation modelling allows highlighting the potential errors of the generating tool, in order to correct its profile before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new method, namely the method of "relative generating trajectories". The analytical foundation is presented, as well as some applications to known models of rack-gear type tools used on Maag teething machines.

  16. Computational Simulation of the High Strain Rate Tensile Response of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2002-01-01

    A research program is underway to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. Under these types of loading conditions, the material response can be highly strain rate dependent and nonlinear. State variable constitutive equations based on a viscoplasticity approach have been developed to model the deformation of the polymer matrix. The constitutive equations are then combined with a mechanics of materials based micromechanics model which utilizes fiber substructuring to predict the effective mechanical and thermal response of the composite. To verify the analytical model, tensile stress-strain curves are predicted for a representative composite over strain rates ranging from around 1 x 10^-5/sec to approximately 400/sec. The analytical predictions compare favorably to experimentally obtained values both qualitatively and quantitatively. Effective elastic and thermal constants are predicted for another composite, and compared to finite element results.

  17. Specialized data analysis of SSME and advanced propulsion system vibration measurements

    NASA Technical Reports Server (NTRS)

    Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi

    1993-01-01

    The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. This study was to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.

  18. Analytical and numerical solution for wave reflection from a porous wave absorber

    NASA Astrophysics Data System (ADS)

    Magdalena, Ikha; Roque, Marian P.

    2018-03-01

    In this paper, wave reflection from a porous wave absorber is investigated theoretically and numerically. The equations that we use are based on a shallow water type model. The motion inside the absorber is modified by including a linearized friction term in the momentum equation and introducing a filtered velocity. Here, an analytical solution for the wave reflection coefficient of a porous wave absorber over a flat bottom is derived. Numerically, we solve the equations using the finite volume method on a staggered grid. To validate our numerical model, the numerical reflection coefficient is compared against the analytical solution. Further, we implement our numerical scheme to study the evolution of surface waves passing through a porous absorber over varied bottom topography.
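
    A minimal sketch of a staggered-grid update for linearized shallow-water flow with a linear friction term active inside the absorber, in the spirit of the scheme described above; the grid layout, constant depth and friction coefficient are illustrative assumptions, not the authors' exact discretization:

      # 1D staggered-grid step: free surface eta at cell centers, velocity u at
      # cell faces; a linear friction term -c_f*u acts only inside the absorber.
      import numpy as np

      def step(eta, u, d, in_absorber, dx, dt, g=9.81, c_f=1.0):
          """Advance eta (N cells) and u (N+1 faces) by one time step."""
          # continuity: d(eta)/dt + d(d*u)/dx = 0
          eta = eta - dt / dx * (d * u[1:] - d * u[:-1])
          # momentum on interior faces: du/dt + g d(eta)/dx = -c_f*u (in absorber)
          u_new = u.copy()
          u_new[1:-1] = u[1:-1] - dt * g / dx * (eta[1:] - eta[:-1])
          u_new[1:-1] -= dt * c_f * in_absorber[1:-1] * u[1:-1]
          return eta, u_new

      # Usage sketch: eta = np.zeros(400); u = np.zeros(401)
      # in_absorber is a 0/1 array marking faces lying inside the porous absorber.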

  19. Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, I. I.; Turaev, D. V.

    2017-01-01

    We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.

  20. Concept for a Satellite-Based Advanced Air Traffic Management System : Volume 9. System and Subsystem Performance Models.

    DOT National Transportation Integrated Search

    1973-02-01

    The volume presents the models used to analyze basic features of the system, establish feasibility of techniques, and evaluate system performance. The models use analytical expressions and computer simulations to represent the relationship between sy...

  1. Commercial Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  2. Critical evaluation of connectivity-based point of care testing systems of glucose in a hospital environment.

    PubMed

    Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R

    2008-01-01

    In recent years a number of point of care testing (POCT) glucometers have been introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared these results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers exceeded by far the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation and inter-operator variability contributed approximately equally to the total variance. Because the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases the variance. The percentage of outliers exceeded the ISO 15197 criteria over a broad glucose concentration range. The total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than that of results obtained with handheld glucometers on capillary blood.

  3. Multi-country health surveys: are the analyses misleading?

    PubMed

    Masood, Mohd; Reidpath, Daniel D

    2014-05-01

    The aim of this paper was to review the types of approaches currently utilized in the analysis of multi-country survey data, specifically focusing on design and modeling issues in analyses of significant multi-country surveys published in 2010. A systematic search strategy was used to identify 10 multi-country surveys and the articles published from them in 2010. The surveys were selected to reflect diverse topics and foci, and to provide an insight into analytic approaches across research themes. The search identified 159 articles appropriate for full text review and data extraction. The analyses adopted in the multi-country surveys can be broadly classified as univariate/bivariate analyses and multivariate/multivariable analyses. Multivariate/multivariable analyses may be further divided into design- and model-based analyses. Of the 159 articles reviewed, 129 used model-based analyses and 30 used design-based analyses. Similar patterns could be seen in all the individual surveys. While there is general agreement among survey statisticians that complex surveys are most appropriately analyzed using design-based analyses, most researchers continued to use the more common model-based approaches. Recent developments in design-based multi-level analysis may be one approach to include all the survey design characteristics. This is a relatively new area, however, and statistical as well as applied analytic research is still required. An important limitation of this study relates to the selection of the surveys used and the choice of year for the analysis, i.e., year 2010 only. There is, however, no strong reason to believe that analytic strategies have changed radically in the past few years, and 2010 provides a credible snapshot of current practice.

  4. A novel high-performance self-powered ultraviolet photodetector: Concept, analytical modeling and analysis

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new MSM-UV-photodetector (PD) based on a dual wide band-gap material (DM) engineering approach is proposed to achieve a high-performance self-powered device. Comprehensive analytical models for the proposed sensor photocurrent and the device properties are developed, incorporating the impact of the DM aspect on the device photoelectrical behavior. The obtained results are validated against numerical data from commercial TCAD software. Our investigation demonstrates that the adopted design amendment modulates the electric field in the device, which provides the possibility to drive the appropriate photo-generated carriers without an externally applied voltage. This behavior achieves the dual role of effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias that can ensure a favorable self-powered MSM-based UV-PD. It is found that the proposed design methodology succeeds in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior ION/IOFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low cost self-powered devices for high-performance optical communication and monitoring applications.

  5. Scaling in situ cosmogenic nuclide production rates using analytical approximations to atmospheric cosmic-ray fluxes

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.

    2014-01-01

    Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggest potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. Resulting scaling factors indicate 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.

  6. Analytical Model and Optimized Design of Power Transmitting Coil for Inductively Coupled Endoscope Robot.

    PubMed

    Ke, Quan; Luo, Weijie; Yan, Guozheng; Yang, Kai

    2016-04-01

    A wireless power transfer system based on weakly inductive coupling makes it possible to provide the endoscope microrobot (EMR) with unlimited power. To facilitate patients' inspection with the EMR system, the diameter of the transmitting coil is enlarged to 69 cm. Due to the large transmitting range, a high quality factor of the Litz-wire transmitting coil is a necessity to ensure that the magnetic field is generated efficiently. Thus, this paper builds an analytical model of the transmitting coil and then optimizes the parameters of the coil by maximizing the quality factor. The lumped model of the transmitting coil includes three parameters: ac resistance, self-inductance, and stray capacitance. Based on the exact two-dimensional solution, an accurate analytical expression for the ac resistance is derived. Several transmitting coils of different specifications are used to verify this analytical expression, which is in good agreement with the measured results except for coils with a large number of strands. The quality factor of transmitting coils can then be well predicted with the available analytical expressions for self-inductance and stray capacitance. Owing to the exact estimation of the quality factor, the appropriate number of turns of the transmitting coil is set to 18-40 within the restrictions of the transmitting circuit and human tissue issues. To supply enough energy for the next generation of the EMR equipped with a Ø9.5×10.1 mm receiving coil, the number of turns of the transmitting coil is optimally set to 28, which can transfer a maximum power of 750 mW with a remarkable delivering efficiency of 3.55%.
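
    As a small illustration of the quantity being optimized, the quality factor of the transmitting coil can be computed from its lumped parameters as Q = 2*pi*f*L/R_ac (ignoring the stray capacitance, which matters near self-resonance); the numbers in the usage note are placeholders, not the paper's measured coil values:

      # Quality factor of the transmitting coil from its lumped parameters.
      import math

      def coil_quality_factor(frequency_hz, inductance_h, ac_resistance_ohm):
          """Simple low-frequency definition Q = omega * L / R_ac."""
          return 2 * math.pi * frequency_hz * inductance_h / ac_resistance_ohm

      # e.g., a hypothetical coil with L = 250 uH and R_ac = 1.2 ohm at 200 kHz:
      # coil_quality_factor(200e3, 250e-6, 1.2) ~= 262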

  7. Local Modelling of Groundwater Flow Using Analytic Element Method Three-dimensional Transient Unconfined Groundwater Flow With Partially Penetrating Wells and Ellipsoidal Inhomogeneites

    NASA Astrophysics Data System (ADS)

    Jankovic, I.; Barnes, R. J.; Soule, R.

    2001-12-01

    The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and the discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well and (5) spheroidal harmonics to account for the influence of the inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series). The head values along the lateral boundary may also be assigned by specifying the water-table gradient and a head value at a single point (as time series). A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.

  8. Galaxy Formation At Extreme Redshifts: Semi-Analytic Model Predictions And Challenges For Observations

    NASA Astrophysics Data System (ADS)

    Yung, L. Y. Aaron; Somerville, Rachel S.

    2017-06-01

    The well-established Santa Cruz semi-analytic galaxy formation framework has been shown to be quite successful at explaining observations in the local Universe, as well as making predictions for low-redshift observations. Recently, metallicity-based gas partitioning and H2-based star formation recipes have been implemented in our model, replacing the legacy cold-gas based recipe. We then use our revised model to explore the high-redshift Universe and make predictions up to z = 15. Although our model is only calibrated to observations from the local Universe, our predictions match remarkably well with the mid- to high-redshift observational constraints available to date, including rest-frame UV luminosity functions and the reionization history as constrained by CMB and IGM observations. We provide predictions for individual and statistical galaxy properties over a wide range of redshifts (z = 4 - 15), including objects that are too faint or too distant to be detected with current facilities. Using our model predictions, we also provide forecasted luminosity functions and other observables for upcoming studies with JWST.

  9. An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer.

    PubMed

    Zhang, Qiang; Shi, Shengjun; Chen, Weishan

    2016-03-01

    An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer is proposed. The transducer is a Langevin type transducer composed of an exponential horn, four groups of PZT ceramics and a back beam. The exponential horn can focus the vibration energy and can efficiently enlarge the vibration amplitude and velocity. A bending vibration model of the transducer is first constructed, and subsequently an electromechanical coupling model is constructed based on the vibration model. In order to obtain the most suitable excitation position of the PZT ceramics, the effective electromechanical coupling coefficient is optimized by means of the quadratic interpolation method. When the effective electromechanical coupling coefficient reaches its peak value of 42.59%, the optimal excitation position (L1=22.52 mm) is found. The FEM method and the experimental method are used to validate the developed analytical model. Two FEM models (Group A, in which the center bolt is not considered, and Group B, in which it is) are constructed and separately compared with the analytical model and the experimental model. Four prototype transducers around the peak value are fabricated and tested to validate the analytical model. A scanning laser Doppler vibrometer is employed to test the bending vibration shape and resonance frequency. Finally, the electromechanical coupling coefficient is tested indirectly through an impedance analyzer. Comparisons of the analytical results, FEM results and experimental results are presented, and the results show good agreement. Copyright © 2015 Elsevier B.V. All rights reserved.
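
    A small sketch of the quadratic interpolation idea used above to locate the optimal excitation position: fit a parabola through three sampled (position, coupling coefficient) points and take its vertex; the sample values in the usage note are invented for illustration:

      # Vertex of the parabola through three sampled points; works for locating
      # either a maximum or a minimum of the sampled quantity.
      def quadratic_peak(x0, f0, x1, f1, x2, f2):
          num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
          den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
          return x1 - 0.5 * num / den

      # e.g., a coupling coefficient sampled at 20, 22.5 and 25 mm:
      # quadratic_peak(20.0, 0.40, 22.5, 0.425, 25.0, 0.41) -> about 22.8 mm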

  10. Analytical Model for Estimating Terrestrial Cosmic Ray Fluxes Nearly Anytime and Anywhere in the World: Extension of PARMA/EXPACS.

    PubMed

    Sato, Tatsuhiko

    2015-01-01

    By extending our previously established model, here we present a new model called "PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 3.0," which can instantaneously estimate terrestrial cosmic ray fluxes of neutrons, protons, ions with charge up to 28 (Ni), muons, electrons, positrons, and photons nearly anytime and anywhere in the Earth's atmosphere. The model comprises numerous analytical functions with parameters whose numerical values were fitted to reproduce the results of the extensive air shower (EAS) simulation performed by Particle and Heavy Ion Transport code System (PHITS). The accuracy of the EAS simulation was well verified using various experimental data, while that of PARMA3.0 was confirmed by the high R2 values of the fit. The models to be used for estimating radiation doses due to cosmic ray exposure, cosmic ray induced ionization rates, and count rates of neutron monitors were validated by investigating their capability to reproduce those quantities measured under various conditions. PARMA3.0 is available freely and is easy to use, as implemented in an open-access software program EXcel-based Program for Calculating Atmospheric Cosmic ray Spectrum (EXPACS). Because of these features, the new version of PARMA/EXPACS can be an important tool in various research fields such as geosciences, cosmic ray physics, and radiation research.

  11. Analytical Model for Estimating Terrestrial Cosmic Ray Fluxes Nearly Anytime and Anywhere in the World: Extension of PARMA/EXPACS

    PubMed Central

    Sato, Tatsuhiko

    2015-01-01

    By extending our previously established model, here we present a new model called "PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 3.0," which can instantaneously estimate terrestrial cosmic ray fluxes of neutrons, protons, ions with charge up to 28 (Ni), muons, electrons, positrons, and photons nearly anytime and anywhere in the Earth's atmosphere. The model comprises numerous analytical functions with parameters whose numerical values were fitted to reproduce the results of the extensive air shower (EAS) simulation performed by Particle and Heavy Ion Transport code System (PHITS). The accuracy of the EAS simulation was well verified using various experimental data, while that of PARMA3.0 was confirmed by the high R2 values of the fit. The models to be used for estimating radiation doses due to cosmic ray exposure, cosmic ray induced ionization rates, and count rates of neutron monitors were validated by investigating their capability to reproduce those quantities measured under various conditions. PARMA3.0 is available freely and is easy to use, as implemented in an open-access software program EXcel-based Program for Calculating Atmospheric Cosmic ray Spectrum (EXPACS). Because of these features, the new version of PARMA/EXPACS can be an important tool in various research fields such as geosciences, cosmic ray physics, and radiation research. PMID:26674183

  12. Interactive Management and Updating of Spatial Data Bases

    NASA Technical Reports Server (NTRS)

    French, P.; Taylor, M.

    1982-01-01

    The decision making process, whether for power plant siting, load forecasting or energy resource planning, invariably involves a blend of analytical methods and judgement. Management decisions can be improved by the implementation of techniques which permit an increased comprehension of results from analytical models. Even where analytical procedures are not required, decisions can be aided by improving the methods used to examine spatially and temporally variant data. How the use of computer aided planning (CAP) programs and the selection of a predominant data structure can improve the decision making process is discussed.

  13. Nonlinear modelling of high-speed catenary based on analytical expressions of cable and truss elements

    NASA Astrophysics Data System (ADS)

    Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing

    2015-10-01

    Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
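
    A generic sketch of the Newton-Raphson iteration used to solve for the initial equilibrium state; the residual function stands in for the assembled nodal equilibrium equations of the cable/truss model, and the finite-difference Jacobian is an illustrative choice rather than the authors' analytical one:

      # Multivariate Newton-Raphson with a forward-difference Jacobian.
      import numpy as np

      def newton_raphson(residual, x0, tol=1e-8, max_iter=50, h=1e-6):
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              r = residual(x)
              if np.linalg.norm(r) < tol:
                  break
              J = np.empty((r.size, x.size))          # numerical Jacobian
              for j in range(x.size):
                  xp = x.copy()
                  xp[j] += h
                  J[:, j] = (residual(xp) - r) / h
              x = x - np.linalg.solve(J, r)
          return x

      # Usage sketch: x holds the unknown nodal coordinates and element forces,
      # and residual(x) returns the out-of-balance forces at every free node.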

  14. Torsional vibration of a cracked rod by variational formulation and numerical analysis

    NASA Astrophysics Data System (ADS)

    Chondros, T. G.; Labeas, G. N.

    2007-04-01

    The torsional vibration of a circumferentially cracked cylindrical shaft is studied through an "exact" analytical solution and a numerical finite element (FE) analysis. The Hu-Washizu-Barr variational formulation is used to develop the differential equation and the boundary conditions of the cracked rod. The equations of motion for a uniform cracked rod in torsional vibration are derived and solved, and the Rayleigh quotient is used to further approximate the natural frequencies of the cracked rod. Results for the problem of the torsional vibration of a cylindrical shaft with a peripheral crack are provided through an analytical solution based on variational formulation to derive the equation of motion and a numerical analysis utilizing a parametric three-dimensional (3D) solid FE model of the cracked rod. The crack is modelled as a continuous flexibility based on fracture mechanics principles. The variational formulation results are compared with the FE alternative. The sensitivity of the FE discretization with respect to the analytical results is assessed.

  15. A Probabilistic Model of Illegal Drug Trafficking Operations in the Eastern Pacific and Caribbean Sea

    DTIC Science & Technology

    2013-09-01

    partner agencies and nations, detects, tracks, and interdicts illegal drug-trafficking in this region. In this thesis, we develop a probability model based on intelligence inputs to generate a spatial-temporal heat map specifying the likelihood of drug-trafficking activity. We complement and vet such complicated simulation by developing more analytically tractable models.

  16. Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Dalton, Angela C.; Dale, Crystal

    2014-06-01

    Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.

  17. Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.

    PubMed

    Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H

    2014-01-01

    Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then we provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not been used previously to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of using different added increments and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.

  18. Frequency Response Function Based Damage Identification for Aerospace Structures

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph Acton

    Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based in concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems which are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1 followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise then successfully proves the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping. Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems-including methods for initial baseline correlation and data reduction-and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.
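
    A minimal sketch of one statistically weighted least-squares update of the kind the algorithm performs, assuming a residual force vector, its sensitivity to the damage parameters, and a statistical weighting matrix are already available; these inputs are placeholders standing in for the dissertation's FRF-based quantities:

      # One weighted least-squares step: given residual r, sensitivity matrix
      # S = d r / d(parameters), and weighting W (e.g., inverse measurement
      # covariance), solve for the parameter change and its covariance.
      import numpy as np

      def weighted_ls_update(S, W, r):
          """Return dp minimizing (r - S dp)^T W (r - S dp) and its covariance."""
          A = S.T @ W @ S
          dp = np.linalg.solve(A, S.T @ W @ r)
          covariance = np.linalg.inv(A)   # uncertainty estimate of the update
          return dp, covariance

      # In a model-update loop, dp would adjust element stiffness parameters of
      # the finite element model, and the covariance gives estimation uncertainty.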

  19. Hybrid-dual-Fourier tomographic algorithm for a fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing computation burden in the 3D image processes, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  20. Modeling of vortex generated sound in solid propellant rocket motors

    NASA Technical Reports Server (NTRS)

    Flandro, G. A.

    1980-01-01

    There is considerable evidence based on both full scale firings and cold flow simulations that hydrodynamically unstable shear flows in solid propellant rocket motors can lead to acoustic pressure fluctuations of significant amplitude. Although a comprehensive theoretical understanding of this problem does not yet exist, procedures were explored for generating useful analytical models describing the vortex shedding phenomenon and the mechanisms of coupling to the acoustic field in a rocket combustion chamber. Since combustion stability prediction procedures cannot be successful without incorporation of all acoustic gains and losses, it is clear that a vortex driving model comparable in quality to the analytical models currently employed to represent linear combustion instability must be formulated.

  1. Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.

    PubMed

    Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L

    2013-01-01

    Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.

  2. THREAT ANTICIPATION AND DECEPTIVE REASONING USING BAYESIAN BELIEF NETWORKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, Glenn O; Olama, Mohammed M; Lake, Joe E

    Recent events highlight the need for tools to anticipate threats posed by terrorists. Assessing these threats requires combining information from disparate data sources such as analytic models, simulations, historical data, sensor networks, and user judgments. These disparate data can be combined in a coherent, analytically defensible, and understandable manner using a Bayesian belief network (BBN). In this paper, we develop a BBN threat anticipatory model based on a deceptive reasoning algorithm using a network engineering process that treats the probability distributions of the BBN nodes within the broader context of the system development process.
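
    A minimal illustration of the underlying idea of combining disparate evidence in a Bayesian network, reduced to a single threat node with conditionally independent evidence nodes; all probabilities are invented for illustration, and the paper's BBN with deceptive reasoning is far richer:

      # Posterior probability of a threat node given several evidence items that
      # are assumed conditionally independent given the threat state.
      def posterior_threat(prior, likelihoods_given_threat, likelihoods_given_benign):
          """P(threat | evidence) via Bayes' rule over two hypotheses."""
          p_t, p_b = prior, 1.0 - prior
          for l_t, l_b in zip(likelihoods_given_threat, likelihoods_given_benign):
              p_t, p_b = p_t * l_t, p_b * l_b
          return p_t / (p_t + p_b)

      # e.g., prior 0.01, a sensor cue with P(cue|threat)=0.8 vs P(cue|benign)=0.1,
      # and an analyst judgment with 0.7 vs 0.3:
      # posterior_threat(0.01, [0.8, 0.7], [0.1, 0.3]) ~= 0.16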

  3. Non-equilibrium many-body dynamics following a quantum quench

    NASA Astrophysics Data System (ADS)

    Vyas, Manan

    2017-12-01

    We study analytically and numerically the non-equilibrium dynamics of an isolated interacting many-body quantum system following a random quench. We model the system Hamiltonian by the Embedded Gaussian Orthogonal Ensemble (EGOE) of random matrices with one- plus few-body interactions for fermions. EGOE are paradigmatic models for studying the crossover from integrability to chaos in interacting many-body quantum systems. We obtain a generic formulation, based on spectral variances, for describing the relaxation dynamics of survival probabilities as a function of the rank of interactions. Our analytical results are in good agreement with numerics.
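
    As a simplified numerical illustration of survival-probability relaxation after a quench, the sketch below uses a plain GOE matrix rather than the embedded one- plus few-body ensembles of the paper; the dimension, scaling and initial state are arbitrary choices:

      # Survival probability P(t) = |<psi(0)|psi(t)>|^2 after a quench to a
      # GOE random-matrix Hamiltonian (a simplified stand-in for EGOE).
      import numpy as np

      def survival_probability(dim=500, times=np.linspace(0, 5, 200), seed=1):
          rng = np.random.default_rng(seed)
          A = rng.normal(size=(dim, dim))
          H = (A + A.T) / np.sqrt(2 * dim)            # GOE matrix, scaled spectrum
          evals, evecs = np.linalg.eigh(H)
          psi0 = np.zeros(dim); psi0[0] = 1.0         # quench: start in a basis state
          c = evecs.T @ psi0                          # overlaps with eigenstates
          phases = np.exp(-1j * np.outer(times, evals))
          return np.abs(phases @ (c ** 2)) ** 2       # P(t)

      # P = survival_probability(); P[0] == 1 and P decays toward its long-time
      # average, sum_k |c_k|^4.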

  4. Analytical review based on statistics on good and poor financial performance of LPD in Bangli regency.

    NASA Astrophysics Data System (ADS)

    Yasa, I. B. A.; Parnata, I. K.; Susilawati, N. L. N. A. S.

    2018-01-01

    This study aims to apply an analytical review model to analyze the influence of GCG, accounting conservatism, financial distress models and company size on the good and poor financial performance of LPDs in Bangli Regency. Ordinal regression analysis is used to perform the analytical review, so that the influence of and relationships between variables to be considered for further audit can be obtained. The respondents in this study were the 159 LPDs in Bangli Regency, of which 100 were randomly selected as samples. The test results found that GCG and company size have a significant effect on both good and poor financial performance, while accounting conservatism and the financial distress model have no significant effect. The four variables together explain 58.8% of the overall financial performance, while the remaining 41.2% is influenced by other variables. Size, the FDM and accounting conservatism are the variables recommended for further audit.

  5. Analysis of Mathematical Modelling on Potentiometric Biosensors

    PubMed Central

    Mehala, N.; Rajendran, L.

    2014-01-01

    A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories. PMID:25969765

  6. Analysis of mathematical modelling on potentiometric biosensors.

    PubMed

    Mehala, N; Rajendran, L

    2014-01-01

    A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories.

  7. Parametric Modeling of the Safety Effects of NextGen Terminal Maneuvering Area Conflict Scenarios

    NASA Technical Reports Server (NTRS)

    Rogers, William H.; Waldron, Timothy P.; Stroiney, Steven R.

    2011-01-01

    The goal of this work was to analytically identify and quantify the issues, challenges, technical hurdles, and pilot-vehicle interface issues associated with conflict detection and resolution (CD&R) in emerging operational concepts for a NextGen terminal maneuvering area, including surface operations. To this end, the work entailed analytical and trade studies focused on modeling the achievable safety benefits of different CD&R strategies and concepts in the current and future airport environment. In addition, crew-vehicle interface and pilot performance enhancements and potential issues were analyzed based on a review of envisioned NextGen operations, expected equipage advances, and human factors expertise. The results of perturbation analysis, which quantify the high-level performance impact of changes to key parameters such as median response time and surveillance position error, show that the analytical model developed could be useful in making technology investment decisions.

  8. Analytical solution of Luedeking-Piret equation for a batch fermentation obeying Monod growth kinetics.

    PubMed

    Garnier, Alain; Gaillet, Bruno

    2015-12-01

    Few mathematical models of fermentation admit analytical solutions of batch process dynamics. The most widely used is the combination of the logistic microbial growth kinetics with the Luedeking-Piret bioproduct synthesis relation. However, the logistic equation is principally based on formalistic similarities and only fits a limited range of fermentation types. In this article, we have developed an analytical solution for the combination of Monod growth kinetics with the Luedeking-Piret relation, which can be identified by linear regression and used to simulate batch fermentation evolution. Two classical examples are used to show the quality of fit and the simplicity of the method proposed. A solution for the Haldane substrate-limited growth model combined with the Luedeking-Piret relation is also provided. These models could prove useful for the analysis of fermentation data in industry as well as academia. © 2015 Wiley Periodicals, Inc.
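
    A minimal numerical sketch of the kinetics the paper treats analytically is given below: the Monod growth law coupled to the Luedeking-Piret product relation, integrated for a batch culture. The parameter values are assumptions chosen only for demonstration, and the closed-form solution and regression procedure of the paper are not reproduced here.

```python
# Batch fermentation with Monod kinetics and the Luedeking-Piret product relation:
#   dX/dt = mu_max*S/(Ks+S)*X,   dS/dt = -(1/Yxs)*dX/dt,   dP/dt = alpha*dX/dt + beta*X
# Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks = 0.4, 0.5        # 1/h, g/L
Yxs = 0.5                    # g biomass per g substrate
alpha, beta = 2.0, 0.05      # growth- and non-growth-associated product coefficients

def batch(t, y):
    X, S, P = y
    mu = mu_max * S / (Ks + S) if S > 0 else 0.0
    dX = mu * X
    dS = -dX / Yxs
    dP = alpha * dX + beta * X
    return [dX, dS, dP]

sol = solve_ivp(batch, (0.0, 24.0), [0.1, 10.0, 0.0],
                t_eval=np.linspace(0, 24, 49), rtol=1e-8)
for t, X, S, P in zip(sol.t[::12], *sol.y[:, ::12]):
    print(f"t={t:5.1f} h  X={X:6.3f}  S={S:6.3f}  P={P:6.3f} g/L")
```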

  9. 3D analysis of eddy current loss in the permanent magnet coupling.

    PubMed

    Zhu, Zina; Meng, Zhuo

    2016-07-01

    This paper first presents a 3D analytical model for analyzing the radial air-gap magnetic field between the inner and outer magnetic rotors of the permanent magnet couplings by using the Amperian current model. Based on the air-gap field analysis, the eddy current loss in the isolation cover is predicted according to Maxwell's equations. A 3D finite element analysis model is constructed to analyze the magnetic field spatial distributions and vector eddy currents, and then the simulation results obtained are analyzed and compared with the analytical method. Finally, the current losses of two types of practical magnet couplings are measured experimentally and compared with the theoretical results. It is concluded that the 3D analytical method of eddy current loss in the magnet coupling is viable and could be used for the eddy current loss prediction of magnet couplings.

  10. A methodology to enhance electromagnetic compatibility in joint military operations

    NASA Astrophysics Data System (ADS)

    Buckellew, William R.

    The development and validation of an improved methodology to identify, characterize, and prioritize potential joint EMI (electromagnetic interference) interactions and identify and develop solutions to reduce the effects of the interference are discussed. The methodology identifies potential EMI problems using results from field operations, historical databases, and analytical modeling. Operational expertise, engineering analysis, and testing are used to characterize and prioritize the potential EMI problems. Results can be used to resolve potential EMI during the development and acquisition of new systems and to develop engineering fixes and operational workarounds for systems already employed. The analytic modeling portion of the methodology is a predictive process that uses progressive refinement of the analysis and the operational electronic environment to eliminate noninterfering equipment pairs, defer further analysis on pairs lacking operational significance, and resolve the remaining EMI problems. Tests are conducted on equipment pairs to ensure that the analytical models provide a realistic description of the predicted interference.

  11. Analytical connection between thresholds and immunization strategies of SIS model in random networks

    NASA Astrophysics Data System (ADS)

    Zhou, Ming-Yang; Xiong, Wen-Man; Liao, Hao; Wang, Tong; Wei, Zong-Wen; Fu, Zhong-Qian

    2018-05-01

    Devising effective strategies for hindering the propagation of viruses and protecting the population against epidemics is critical for public security and health. Despite a number of studies based on the susceptible-infected-susceptible (SIS) model devoted to this topic, we still lack a general framework to compare different immunization strategies in completely random networks. Here, we address this problem by suggesting a novel method based on heterogeneous mean-field theory for the SIS model. Our method builds the relationship between the thresholds and different immunization strategies in completely random networks. Besides, we provide an analytical argument that the targeted large-degree strategy achieves the best performance in random networks with arbitrary degree distribution. Moreover, the experimental results demonstrate the effectiveness of the proposed method in both artificial and real-world networks.
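
    The heterogeneous mean-field picture behind this comparison can be illustrated with a short sketch: for an uncorrelated random network the SIS threshold is lambda_c = <k>/<k^2>, so removing the nodes that dominate the second moment (targeted, large-degree immunization) raises the threshold far more than removing the same fraction of nodes at random. The degree sequence below is an assumed power-law-like example, and the rescaling of degrees caused by edges lost to immunized neighbours is deliberately neglected.

```python
# Simplified heterogeneous mean-field (HMF) illustration, not the paper's full framework:
# the SIS threshold of an uncorrelated network is lambda_c = <k>/<k^2>, computed here on
# the degree sequence that survives each immunization strategy.
import numpy as np

rng = np.random.default_rng(1)
# power-law-like degree sequence (assumed), truncated to the range [2, 200]
k = np.clip(np.round(2.0 * (1 - rng.random(100_000)) ** (-1 / 1.5)), 2, 200)

def hmf_threshold(deg):
    return deg.mean() / (deg ** 2).mean()

g = 0.05                                     # immunized fraction
random_keep = rng.random(k.size) > g         # random immunization
targeted_keep = k < np.quantile(k, 1 - g)    # remove the largest-degree 5%

print(f"no immunization  : lambda_c = {hmf_threshold(k):.4f}")
print(f"random (g = 5%)  : lambda_c = {hmf_threshold(k[random_keep]):.4f}")
print(f"targeted (g = 5%): lambda_c = {hmf_threshold(k[targeted_keep]):.4f}")
```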

  12. Augmented kludge waveforms for detecting extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Chua, Alvin J. K.; Moore, Christopher J.; Gair, Jonathan R.

    2017-08-01

    The extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes are an important class of source for the future space-based gravitational-wave detector LISA. Detecting signals from EMRIs will require waveform models that are both accurate and computationally efficient. In this paper, we present the latest implementation of an augmented analytic kludge (AAK) model, publicly available at https://github.com/alvincjk/EMRI_Kludge_Suite as part of an EMRI waveform software suite. This version of the AAK model has improved accuracy compared to its predecessors, with two-month waveform overlaps against a more accurate fiducial model exceeding 0.97 for a generic range of sources; it also generates waveforms 5-15 times faster than the fiducial model. The AAK model is well suited for scoping out data analysis issues in the upcoming round of mock LISA data challenges. A simple analytic argument shows that it might even be viable for detecting EMRIs with LISA through a semicoherent template bank method, while the use of the original analytic kludge in the same approach will result in around 90% fewer detections.

  13. Life prediction and constitutive models for engine hot section anisotropic materials program

    NASA Technical Reports Server (NTRS)

    Nissley, D. M.; Meyer, T. G.

    1992-01-01

    This report presents the results from a 35 month period of a program designed to develop generic constitutive and life prediction approaches and models for nickel-based single crystal gas turbine airfoils. The program is composed of a base program and an optional program. The base program addresses the high temperature coated single crystal regime above the airfoil root platform. The optional program investigates the low temperature uncoated single crystal regime below the airfoil root platform including the notched conditions of the airfoil attachment. Both base and option programs involve experimental and analytical efforts. Results from uniaxial constitutive and fatigue life experiments of coated and uncoated PWA 1480 single crystal material form the basis for the analytical modeling effort. Four single crystal primary orientations were used in the experiments: (001), (011), (111), and (213). Specific secondary orientations were also selected for the notched experiments in the optional program. Constitutive models for an overlay coating and PWA 1480 single crystal material were developed based on isothermal hysteresis loop data and verified using thermomechanical (TMF) hysteresis loop data. A fatigue life approach and life models were selected for TMF crack initiation of coated PWA 1480. An initial life model used to correlate smooth and notched fatigue data obtained in the option program shows promise. Computer software incorporating the overlay coating and PWA 1480 constitutive models was developed.

  14. The effects of corona on current surges induced on conducting lines by EMP: A comparison of experiment data with results of analytic corona models

    NASA Astrophysics Data System (ADS)

    Blanchard, J. P.; Tesche, F. M.; McConnell, B. W.

    1987-09-01

    An experiment to determine the interaction of an intense electromagnetic pulse (EMP), such as that produced by a nuclear detonation above the Earth's atmosphere, with conducting lines was performed in March 1986 at Kirtland Air Force Base near Albuquerque, New Mexico. The results of that experiment have been published without analysis. Following an introduction to the corona phenomenon, the reason for interest in it, and a review of the experiment, this paper discusses five different analytic corona models that may be used to model corona formation on a conducting line subjected to EMP. The results predicted by these models are compared with measured data acquired during the experiment to determine the strengths and weaknesses of each model.

  15. 2D/3D image charge for modeling field emission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Kevin L.; Shiffler, Donald A.; Harris, John R.

    Analytic image charge approximations exist for planar and spherical metal surfaces but approximations for more complex geometries, such as the conical and wirelike structures characteristic of field emitters, are lacking. Such models are the basis for the evaluation of Schottky lowering factors in equations for current density. The development of a multidimensional image charge approximation, useful for a general thermal-field emission equation used in space charge studies, is given and based on an analytical model using a prolate spheroidal geometry. A description of how the model may be adapted to be used with a line charge model appropriate for carbon nanotube and carbon fiber field emitters is discussed. [http://dx.doi.org/10.1116/1.4968007]

  16. 2D/3D image charge for modeling field emission

    DOE PAGES

    Jensen, Kevin L.; Shiffler, Donald A.; Harris, John R.; ...

    2017-03-01

    Analytic image charge approximations exist for planar and spherical metal surfaces but approximations for more complex geometries, such as the conical and wirelike structures characteristic of field emitters, are lacking. Such models are the basis for the evaluation of Schottky lowering factors in equations for current density. The development of a multidimensional image charge approximation, useful for a general thermal-field emission equation used in space charge studies, is given and based on an analytical model using a prolate spheroidal geometry. A description of how the model may be adapted to be used with a line charge model appropriate for carbon nanotube and carbon fiber field emitters is discussed. [http://dx.doi.org/10.1116/1.4968007]

  17. VizieR Online Data Catalog: Galaxy stellar mass assembly (Cousin+, 2015)

    NASA Astrophysics Data System (ADS)

    Cousin, M.; Lagache, G.; Bethermin, M.; Blaizot, J.; Guiderdoni, B.

    2014-11-01

    There are five fits files corresponding to the different models:
    - m0 : model without any regulation process
    - m1 : reference model (Okamoto et al., 2008MNRAS.390..920O, photo-ionization prescription)
    - m2 : the Okamoto et al. (2008MNRAS.390..920O) photo-ionization prescription is replaced by the Gnedin (2000ApJ...542..535G) prescription
    - m3 : SN ejecta processes are based on the Somerville et al. (2008MNRAS.391..481S) model
    - m4 : model with the no-star-forming gas ad-hoc modification
    For each model, galaxy properties are listed in eGalICS_m*.readme and data are saved in eGalICS_m*.fits. All data "fits" files are compatible with the TOPCAT software available at http://www.star.bris.ac.uk/~mbt/topcat/ If you use data associated with the eGalICS semi-analytic model, please cite the following papers: * Cousin et al.: "Galaxy stellar mass assembly: the difficulty to match observations and semi-analytical predictions" (2015A&A...575A..32C) * Cousin et al.: "Toward a new modelling of gas flows in a semi-analytical model of galaxy formation and evolution" (2015A&A...575A..33C) (11 data files).

  18. Solar radiation - to - power generation models for one-axis tracking PV system with on-site measurements from Eskisehir, Turkey

    NASA Astrophysics Data System (ADS)

    Filik, Tansu; Başaran Filik, Ümmühan; Nezih Gerek, Ömer

    2017-11-01

    In this study, new analytic models are proposed for mapping on-site global solar radiation values to electrical power output values in solar photovoltaic (PV) panels. The model extraction is achieved by simultaneously recording solar radiation and generated power from fixed and tracking panels, each with a capacity of 3 kW, in the Eskisehir (Turkey) region. It is shown that the relation between the solar radiation and the corresponding electric power is not only nonlinear, but it also exhibits an interesting time-varying characteristic in the form of a hysteresis function. This observed radiation-to-power relation is then analytically modelled with three piece-wise function parts (corresponding to morning, noon and evening times), which is another novel contribution of this work. The model is determined for both fixed panels and panels with a tracking system. In particular, the panel system with a dynamic tracker produces a harmonically richer (with higher values in general) characteristic, so higher-order polynomial models are necessary for the construction of analytical solar radiation models. The presented models, characteristics of the hysteresis functions, and differences in the fixed versus solar-tracking panels are expected to provide valuable insight for further model-based research.

  19. Correlating N2 and CH4 adsorption on microporous carbon using a new analytical model

    USGS Publications Warehouse

    Sun, Jielun; Chen, S.; Rood, M.J.; Rostam-Abadi, M.

    1998-01-01

    A new pore size distribution (PSD) model is developed to readily describe PSDs of microporous materials with an analytical expression. Results from this model can be used to calculate the corresponding adsorption isotherm to compare the calculated isotherm to the experimental isotherm. This aspect of the model provides another check on the validity of the model's results. The model is developed on the basis of a 3-D adsorption isotherm equation that is derived from statistical mechanical principles. Least-squares error minimization is used to solve the PSD without any preassumed distribution function. In comparison with several well-accepted analytical methods from the literature, this 3-D model offers a relatively realistic PSD description for select reference materials, including activated-carbon fibers. N2 and CH4 adsorption is correlated using the 3-D model for commercial carbons BPL and AX-21. Predicted CH4 adsorption isotherms at 296 K based on N2 adsorption at 77 K are in reasonable agreement with experimental CH4 isotherms. Use of the model is also described for characterizing PSDs of tire-derived activated carbons and coal-derived activated carbons for air-quality control applications.
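
    The inversion idea can be sketched generically as follows; note that the paper's 3-D statistical-mechanical kernel is replaced here by a simple Langmuir kernel with an assumed width-dependent affinity, so this is only an illustration of the superposition-plus-least-squares structure, not the published model.

```python
# Generic PSD-inversion sketch: the measured isotherm q(P) is modeled as a superposition
# sum_i f_i * theta(P; w_i) of local isotherms over candidate pore widths w_i, and the
# non-negative weights f_i (the PSD) are recovered by least squares. The Langmuir kernel
# and its width-dependent affinity are assumptions standing in for the paper's 3-D kernel.
import numpy as np
from scipy.optimize import nnls

P = np.logspace(-4, 0, 40)                  # relative pressures
w = np.linspace(0.4, 3.0, 30)               # candidate pore widths, nm (assumed grid)
b = 1e3 * np.exp(-2.0 * (w - 0.4))          # assumed width-dependent affinity
kernel = (b[None, :] * P[:, None]) / (1.0 + b[None, :] * P[:, None])   # theta(P; w)

# synthetic "measured" isotherm from a known bimodal PSD, plus a little noise
f_true = np.exp(-0.5 * ((w - 0.7) / 0.1) ** 2) + 0.5 * np.exp(-0.5 * ((w - 1.5) / 0.2) ** 2)
q_meas = kernel @ f_true + 0.01 * np.random.default_rng(0).normal(size=P.size)

f_est, residual = nnls(kernel, q_meas)      # non-negative PSD weights
q_fit = kernel @ f_est                      # recomputed isotherm -> consistency check
print(f"residual norm: {residual:.3e}")
print("largest recovered PSD weights near (nm):", w[np.argsort(f_est)[-2:]])
```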

  20. Numerical Problems and Agent-Based Models for a Mass Transfer Course

    ERIC Educational Resources Information Center

    Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

    2009-01-01

    Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLABTM. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…

  1. Analytical modeling of conformal mantle cloaks for cylindrical objects using sub-wavelength printed and slotted arrays

    NASA Astrophysics Data System (ADS)

    Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea

    2012-08-01

    Following the idea of "cloaking by a surface" [A. Alù, Phys. Rev. B 80, 245115 (2009); P. Y. Chen and A. Alù, Phys. Rev. B 84, 205110 (2011)], we present a rigorous analytical model applicable to mantle cloaking of cylindrical objects using 1D and 2D sub-wavelength conformal frequency selective surface (FSS) elements. The model is based on Lorenz-Mie scattering theory which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. The FSS arrays considered in this work are composed of 1D horizontal and vertical metallic strips and 2D printed (patches, Jerusalem crosses, and cross dipoles) and slotted structures (meshes, slot-Jerusalem crosses, and slot-cross dipoles). It is shown that the analytical grid-impedance expressions derived for the planar arrays of sub-wavelength elements may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks. By properly tailoring the surface reactance of the cloak, the total scattering from the cylinder can be significantly reduced, thus rendering the object invisible over the range of frequencies of interest (i.e., at microwaves and far-infrared). The results obtained using our analytical model for mantle cloaks are validated against full-wave numerical simulations.

  2. Shuttle Communications and Tracking Systems Modeling and TDRSS Link Simulations Studies

    NASA Technical Reports Server (NTRS)

    Chie, C. M.; Dessouky, K.; Lindsey, W. C.; Tsang, C. S.; Su, Y. T.

    1985-01-01

    An analytical simulation package (LinCsim) which allows the analytical verification of data transmission performance through TDRSS satellites was modified. The work involved the modeling of the user transponder, TDRS, TDRS ground terminal, and link dynamics for forward and return links based on the TDRSS performance specifications (4) and the critical design reviews. The scope of this effort has recently been expanded to include the effects of radio frequency interference (RFI) on the bit error rate (BER) performance of the S-band return links. The RFI environment and the modified TDRSS satellite and ground station hardware are being modeled in accordance with their description in the applicable documents.

  3. Modelling of nanoscale quantum tunnelling structures using algebraic topology method

    NASA Astrophysics Data System (ADS)

    Sankaran, Krishnaswamy; Sairam, B.

    2018-05-01

    We have modelled nanoscale quantum tunnelling structures using Algebraic Topology Method (ATM). The accuracy of ATM is compared to the analytical solution derived based on the wave nature of tunnelling electrons. ATM provides a versatile, fast, and simple model to simulate complex structures. We are currently expanding the method for modelling electrodynamic systems.

  4. Stochastic modelling of the hydrologic operation of rainwater harvesting systems

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Guo, Yiping

    2018-07-01

    Rainwater harvesting (RWH) systems are an effective low impact development practice that provides both water supply and runoff reduction benefits. A stochastic modelling approach is proposed in this paper to quantify the water supply reliability and stormwater capture efficiency of RWH systems. The input rainfall series is represented as a marked Poisson process and two typical water use patterns are analytically described. The stochastic mass balance equation is solved analytically, and based on this, explicit expressions relating system performance to system characteristics are derived. The performances of a wide variety of RWH systems located in five representative climatic regions of the United States are examined using the newly derived analytical equations. Close agreements between analytical and continuous simulation results are shown for all the compared cases. In addition, an analytical equation is obtained expressing the required storage size as a function of the desired water supply reliability, average water use rate, as well as rainfall and catchment characteristics. The equations developed herein constitute a convenient and effective tool for sizing RWH systems and evaluating their performances.
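
    A Monte Carlo counterpart of the stochastic formulation is easy to sketch: rainfall arrives as a marked Poisson process (exponential inter-event times and event depths), the tank follows a simple mass balance against a constant demand, and reliability and capture efficiency are read off the simulated record. All climate, catchment, and demand values below are assumptions, and the analytical derivation itself is not reproduced.

```python
# Marked-Poisson rainfall plus a simple tank mass balance for an RWH system (a sketch,
# not the paper's analytical solution); all parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(42)
mean_gap_d, mean_depth_mm = 3.0, 10.0        # mean inter-event time (d) and event depth (mm)
area_m2, storage_m3, demand_m3 = 100.0, 2.0, 0.15
runoff_coeff, days = 0.9, 10_000

rain = np.zeros(days)
t = rng.exponential(mean_gap_d)
while t < days:                              # marked Poisson process: event times and depths
    rain[int(t)] += rng.exponential(mean_depth_mm)
    t += rng.exponential(mean_gap_d)

stored, supplied, captured = 0.0, 0.0, 0.0
for depth in rain:
    inflow = runoff_coeff * depth / 1000.0 * area_m2      # m^3 of roof runoff that day
    captured += min(inflow, storage_m3 - stored)          # runoff actually retained
    stored = min(storage_m3, stored + inflow)
    use = min(demand_m3, stored)                          # meet demand from storage
    supplied += use
    stored -= use

runoff_total = runoff_coeff * rain.sum() / 1000.0 * area_m2
print(f"water-supply reliability     : {supplied / (demand_m3 * days):.2%}")
print(f"stormwater capture efficiency: {captured / runoff_total:.2%}")
```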

  5. Analytical display design for flight tasks conducted under instrument meteorological conditions. [human factors engineering of pilot performance for display device design in instrument landing systems

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1976-01-01

    Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.

  6. Quantitative characterization of edge enhancement in phase contrast x-ray imaging.

    PubMed

    Monnin, P; Bulling, S; Hoszowska, J; Valley, J F; Meuli, R; Verdun, F R

    2004-06-01

    The aim of this study was to model the edge enhancement effect in in-line holography phase contrast imaging. A simple analytical approach was used to quantify refraction and interference contrasts in terms of beam energy and imaging geometry. The model was applied to predict the peak intensity and frequency of the edge enhancement for images of cylindrical fibers. The calculations were compared with measurements, and the relationship between the spatial resolution of the detector and the amplitude of the phase contrast signal was investigated. Calculations using the analytical model were in good agreement with experimental results for nylon, aluminum and copper wires of 50 to 240 µm diameter, and with numerical simulations based on Fresnel-Kirchhoff theory. A relationship between the defocusing distance and the pixel size of the image detector was established. This analytical model is a useful tool for optimizing imaging parameters in phase contrast in-line holography, including defocusing distance, detector resolution and beam energy.

  7. Understanding Fast and Robust Thermo-osmotic Flows through Carbon Nanotube Membranes: Thermodynamics Meets Hydrodynamics.

    PubMed

    Fu, Li; Merabia, Samy; Joly, Laurent

    2018-04-19

    Following our recent theoretical prediction of the giant thermo-osmotic response of the water-graphene interface, we explore the practical implementation of waste heat harvesting with carbon-based membranes, focusing on model membranes of carbon nanotubes (CNT). To that aim, we combine molecular dynamics simulations and an analytical model considering the details of hydrodynamics in the membrane and at the tube entrances. The analytical model and the simulation results match quantitatively, highlighting the need to take into account both thermodynamics and hydrodynamics to predict thermo-osmotic flows through membranes. We show that, despite viscous entrance effects and a thermal short-circuit mechanism, CNT membranes can generate very fast thermo-osmotic flows, which can overcome the osmotic pressure of seawater. We then show that in small tubes confinement has a complex effect on the flow and can even reverse the flow direction. Beyond CNT membranes, our analytical model can guide the search for other membranes to generate fast and robust thermo-osmotic flows.

  8. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    PubMed

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the image reconstruction from projections problem, and in this manner to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    PubMed

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
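
    The simulation approach described above can be sketched as follows for a four-parameter logistic (4PL) standard curve: the curve is inverted to back-calculate concentration and the measured response is perturbed by Monte Carlo to obtain the coefficient of variation across the concentration range. The 4PL parameters and the 5% response uncertainty are assumptions for illustration.

```python
# Monte Carlo precision profile for a 4PL standard curve y = d + (a - d)/(1 + (x/c)^b).
# All parameter values are assumptions; the curve here is a decreasing (competitive-type) one.
import numpy as np

a, b, c, d = 2.0, 1.2, 50.0, 0.05            # 4PL parameters (assumed)
cv_response = 0.05                           # 5% relative uncertainty in the measured response

def forward(x):                              # 4PL response
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse(y):                              # concentration back-calculated from a response
    y = np.clip(y, d + 1e-9, a - 1e-9)       # keep the response inside the asymptotes
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

rng = np.random.default_rng(0)
for x in [1.0, 5.0, 50.0, 500.0, 2000.0]:
    y0 = forward(x)
    y_sim = y0 * (1.0 + cv_response * rng.normal(size=20_000))
    x_sim = inverse(y_sim)
    cv_conc = x_sim.std() / x_sim.mean()
    print(f"x = {x:7.1f}  ->  CV of estimated concentration = {cv_conc:6.1%}")
```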

  10. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach iterates between the two models until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which shows near-optimal results with a much shorter solution time than the conventional simulation-based optimization model. The proposed hybrid approach is promising and can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  11. Steady-state protein focusing in carrier ampholyte based isoelectric focusing: Part I-Analytical solution.

    PubMed

    Shim, Jaesool; Yoo, Kisoo; Dutta, Prashanta

    2017-03-01

    Determining an analytical solution for the steady-state protein concentration distribution in IEF is very challenging due to the nonlinear coupling between mass and charge conservation equations. In this study, approximate analytical solutions are obtained for steady-state protein distribution in carrier ampholyte based IEF. Similar to the work of Svensson, the final concentration profile for proteins is assumed to be Gaussian, but appropriate expressions are presented in order to obtain the effective electric field and pH gradient in the focused protein band region. Analytical results are found by iteratively solving a system of coupled algebraic equations, requiring only a few iterations, for IEF separation of three plasma proteins: albumin, cardiac troponin I, and hemoglobin. The analytical results are compared with numerically predicted results for IEF, showing excellent agreement. Analytically obtained electric field and ionic conductivity distributions show significant deviation from their nominal values, which is essential in finding the protein focusing behavior at isoelectric points. These analytical solutions can be used to determine the steady-state protein concentration distribution for experiment design of IEF considering any number of proteins and ampholytes. Moreover, the model presented herein can be used to find the conductivity, electric field, and pH field. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Initialization and Simulation of Three-Dimensional Aircraft Wake Vortices

    NASA Technical Reports Server (NTRS)

    Ash, Robert L.; Zheng, Z. C.

    1997-01-01

    This paper studies the effects of axial velocity profiles on vortex decay, in order to properly initialize and simulate three-dimensional wake vortex flow. Analytical relationships are obtained based on a single vortex model and computational simulations are performed for a rather practical vortex wake, which show that the single vortex analytical relations can still be applicable at certain streamwise sections of three-dimensional wake vortices.

  13. A compact two-wave dichrometer of an optical biosensor analytical system for medicine

    NASA Astrophysics Data System (ADS)

    Chulkov, D. P.; Gusev, V. M.; Kompanets, O. N.; Vereschagin, F. V.; Skuridin, S. G.; Yevdokimov, Yu. M.

    2017-01-01

    An experimental model of a compact two-wave dichrometer based on LEDs has been developed that is well suited to working with "liquid" DNA nanoconstructions as biosensing units. The mobile and inexpensive device is intended for use in a biosensor analytical system for the rapid determination of biologically active compounds in liquids, addressing practical problems of clinical medicine and pharmacology.

  14. Description and comparison of selected models for hydrologic analysis of ground-water flow, St Joseph River basin, Indiana

    USGS Publications Warehouse

    Peters, J.G.

    1987-01-01

    The Indiana Department of Natural Resources (IDNR) is developing water-management policies designed to assess the effects of irrigation and other water uses on water supply in the basin. In support of this effort, the USGS, in cooperation with IDNR, began a study to evaluate appropriate methods for analyzing the effects of pumping on ground-water levels and streamflow in the basin's glacial aquifer systems. Four analytical models describe drawdown for a nonleaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and partially penetrating well; and an unconfined aquifer and partially penetrating well. Analytical equations, simplifying assumptions, and methods of application are described for each model. In addition to these four models, several other analytical models were used to predict the effects of ground-water pumping on water levels in the aquifer and on streamflow in local areas with up to two pumping wells. Analytical models for a variety of other hydrogeologic conditions are cited. A digital ground-water flow model was used to describe how a numerical model can be applied to a glacial aquifer system. The numerical model was used to predict the effects of six pumping plans in a 46.5 sq mi area with as many as 150 wells. Water budgets for the six pumping plans were used to estimate the effect of pumping on streamflow reduction. Results of the analytical and numerical models indicate that, in general, the glacial aquifers in the basin are highly permeable. Radial hydraulic conductivity calculated by the analytical models ranged from 280 to 600 ft/day, compared to 210 and 360 ft/day used in the numerical model. Maximum seasonal pumping for irrigation produced a maximum calculated drawdown of only one-fourth of available drawdown and reduced streamflow by as much as 21%. Analytical models are useful in estimating aquifer properties and predicting local effects of pumping in areas with simple lithology and boundary conditions and with few pumping wells. Numerical models are useful in regional areas with complex hydrogeology and many pumping wells and provide detailed water budgets useful for estimating the sources of water in pumping simulations. Numerical models are useful in constructing flow nets. The choice of which type of model to use is also based on the nature and scope of questions to be answered and on the degree of accuracy required. (Author's abstract)

  15. Developments in Impeller/Seal Secondary Flow Path Modeling for Dynamic Force Coefficients and Leakage

    NASA Technical Reports Server (NTRS)

    Palazzolo, Alan; Bhattacharya, Avijit; Athavale, Mahesh; Venkataraman, Balaji; Ryan, Steve; Funston, Kerry

    1997-01-01

    This paper highlights bulk flow and CFD-based models prepared to calculate force and leakage properties for seals and shrouded impeller leakage paths. The bulk flow approach uses a Hirs-based friction model and the CFD approach solves the Navier-Stokes (NS) equations with a finite whirl orbit or via analytical perturbation. The results show good agreement in most instances with available benchmarks.

  16. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.

  17. Analytical and numerical solutions of the potential and electric field generated by different electrode arrays in a tumor tissue under electrotherapy.

    PubMed

    Bergues Pupo, Ana E; Reyes, Juan Bory; Bergues Cabrales, Luis E; Bergues Cabrales, Jesús M

    2011-09-24

    Electrotherapy is a relatively well-established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D-model) generated by means of electrode arrays with shapes of different conic sections (ellipse, parabola and hyperbola). Analytical calculations of the potential and electric field distributions based on 2D-models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by means of the finite element method in two dimensions. Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays with shapes of circle and different conic sections (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different shapes of conic sections by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode array with different conic sections.
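
    A minimal numerical counterpart of the boundary value problem is sketched below using finite differences (the paper uses the finite element method): Laplace's equation is relaxed with Jacobi sweeps on a 2-D grid with two point-like electrodes held at fixed potentials, and the field follows from E = -grad(phi). Electrode positions, potentials, and the grounded outer boundary are illustrative assumptions.

```python
# Finite-difference sketch of Laplace's equation with point electrodes (not the paper's
# conic-section electrode arrays or its FEM solution); a few thousand Jacobi sweeps give
# an approximate potential, from which the electric field is E = -grad(phi).
import numpy as np

n, V = 101, 10.0
phi = np.zeros((n, n))
mask = np.zeros((n, n), dtype=bool)          # Dirichlet (electrode) nodes
electrodes = {(50, 35): +V, (50, 65): -V}    # assumed electrode positions and potentials
for (i, j), v in electrodes.items():
    phi[i, j], mask[i, j] = v, True

for _ in range(5000):                        # Jacobi relaxation of the interior nodes
    new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    new[mask] = phi[mask]                    # keep the electrodes fixed
    new[0, :] = new[-1, :] = new[:, 0] = new[:, -1] = 0.0   # grounded outer boundary
    phi = new

Ey, Ex = np.gradient(-phi)                   # E = -grad(phi), unit grid spacing
E_mag = np.hypot(Ex, Ey)
print(f"max |E| on the grid           : {E_mag.max():.3f} (arbitrary units)")
print(f"|E| midway between electrodes : {E_mag[50, 50]:.3f}")
```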

  18. Dynamic calibration approach for determining catechins and gallic acid in green tea using LC-ESI/MS.

    PubMed

    Bedner, Mary; Duewer, David L

    2011-08-15

    Catechins and gallic acid are antioxidant constituents of Camellia sinensis, or green tea. Liquid chromatography with both ultraviolet (UV) absorbance and electrospray ionization mass spectrometric (ESI/MS) detection was used to determine catechins and gallic acid in three green tea matrix materials that are commonly used as dietary supplements. The results from both detection modes were evaluated with 14 quantitation models, all of which were based on the analyte response relative to an internal standard. Half of the models were static, where quantitation was achieved with calibration factors that were constant over an analysis set. The other half were dynamic, with calibration factors calculated from interpolated response factor data at each time a sample was injected to correct for potential variations in analyte response over time. For all analytes, the relatively nonselective UV responses were found to be very stable over time and independent of the calibrant concentration; comparable results with low variability were obtained regardless of the quantitation model used. Conversely, the highly selective MS responses were found to vary both with time and as a function of the calibrant concentration. A dynamic quantitation model based on polynomial data-fitting was used to reduce the variability in the quantitative results using the MS data.
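
    The dynamic-calibration idea can be sketched as follows: response factors measured in calibrant injections distributed across the run are interpolated to each sample's injection time and used for quantitation, in contrast to a static calibration factor held constant over the analysis set. The injection times, response factors, and peak areas below are invented purely for illustration.

```python
# Dynamic (time-interpolated) vs static internal-standard calibration; all numbers are
# illustrative assumptions, not data from the study.
import numpy as np

# calibrant injections: (injection time in minutes, response factor for a 1 ug/mL standard)
cal_times = np.array([0.0, 60.0, 120.0, 180.0, 240.0])
cal_rf    = np.array([1.00, 0.97, 0.92, 0.95, 0.90])       # drifting ESI/MS response

def quantify(sample_time, analyte_area, istd_area, istd_conc=1.0):
    rf_at_t = np.interp(sample_time, cal_times, cal_rf)    # RF interpolated to injection time
    rel_response = analyte_area / istd_area
    return rel_response / rf_at_t * istd_conc              # concentration, ug/mL

static_rf = cal_rf.mean()                                  # static calibration for comparison
for t, a_area, i_area in [(30.0, 5.1e5, 5.5e5), (150.0, 4.6e5, 5.4e5)]:
    dyn = quantify(t, a_area, i_area)
    sta = (a_area / i_area) / static_rf
    print(f"t = {t:5.1f} min  dynamic = {dyn:.3f} ug/mL   static = {sta:.3f} ug/mL")
```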

  19. Initial study of thermal energy storage in unconfined aquifers. [UCATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haitjema, H.M.; Strack, O.D.L.

    1986-04-01

    Convective heat transport in unconfined aquifers is modeled in a semi-analytic way. The transient groundwater flow is modeled by superposition of analytic functions, whereby changes in the aquifer storage are represented by a network of triangles, each with a linearly varying sink distribution. This analytic formulation incorporates the nonlinearity of the differential equation for unconfined flow and eliminates numerical dispersion in modeling heat convection. The thermal losses through the aquifer base and vadose zone are modeled rather crudely. Only vertical heat conduction is considered in these boundaries, whereby a linearly varying temperature is assumed at all times. The latter assumption appears reasonable for thin aquifer boundaries. However, assuming such thin aquifer boundaries may lead to an overestimation of the thermal losses when the aquifer base is regarded as infinitely thick in reality. The approach is implemented in the computer program UCATES, which serves as a first step toward the development of a comprehensive screening tool for ATES systems in unconfined aquifers. In its present form, the program is capable of predicting the relative effects of regional flow on the efficiency of ATES systems. However, only after a more realistic heat-loss mechanism is incorporated in UCATES will reliable predictions of absolute ATES efficiencies be possible.

  20. An uncertainty analysis of wildfire modeling [Chapter 13

    Treesearch

    Karin Riley; Matthew Thompson

    2017-01-01

    Before fire models can be understood, evaluated, and effectively applied to support decision making, model-based uncertainties must be analyzed. In this chapter, we identify and classify sources of uncertainty using an established analytical framework, and summarize results graphically in an uncertainty matrix. Our analysis facilitates characterization of the...

  1. Multilayered Word Structure Model for Assessing Spelling of Finnish Children in Shallow Orthography

    ERIC Educational Resources Information Center

    Kulju, Pirjo; Mäkinen, Marita

    2017-01-01

    This study explores Finnish children's word-level spelling by applying a linguistically based multilayered word structure model for assessing spelling performance. The model contributes to the analytical qualitative assessment approach in order to identify children's spelling performance for enhancing writing skills. The children (N = 105)…

  2. Human health risk assessment: models for predicting the effective exposure duration of on-site receptors exposed to contaminated groundwater.

    PubMed

    Baciocchi, Renato; Berardi, Simona; Verginelli, Iason

    2010-09-15

    Clean-up of contaminated sites is usually based on a risk-based approach for the definition of the remediation goals, which relies on the well-known ASTM-RBCA standard procedure. In this procedure, migration of contaminants is described through simple analytical models and the source contaminants' concentration is assumed to be constant throughout the entire exposure period, i.e. 25-30 years. The latter assumption may often be over-protective of human health, leading to unrealistically low remediation goals. The aim of this work is to propose an alternative model taking into account source depletion, while keeping the original simplicity and analytical form of the ASTM-RBCA approach. The results obtained by the application of this model are compared with those provided by the traditional ASTM-RBCA approach, by a model based on the source depletion algorithm of the RBCA ToolKit software and by a numerical model, allowing an assessment of its feasibility for inclusion in risk analysis procedures. The results discussed in this work are limited to on-site exposure to contaminated water by ingestion, but the approach proposed can be extended to other exposure pathways. Copyright 2010 Elsevier B.V. All rights reserved.
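
    The effect of relaxing the constant-source assumption can be illustrated with a short worked example: if the source depletes with first-order kinetics, the receptor sees the time-averaged concentration C_avg = C0 (1 - exp(-kT)) / (kT) over the exposure duration T rather than C0 itself. The concentration, exposure duration, and half-lives below are assumptions, and the depletion law is a generic stand-in rather than the specific algorithm proposed in the paper.

```python
# Constant-source vs first-order source-depletion exposure concentration (illustrative
# numbers only); the lower time-averaged concentration relaxes the back-calculated goal.
import math

C0 = 0.5          # mg/L, groundwater concentration at the start of exposure (assumed)
T_years = 25.0    # exposure duration, as in the ASTM-RBCA default range
for half_life in (5.0, 10.0, 50.0):          # assumed source-depletion half-lives, years
    k = math.log(2.0) / half_life
    c_avg = C0 * (1.0 - math.exp(-k * T_years)) / (k * T_years)
    print(f"half-life {half_life:4.0f} y: C_avg/C0 = {c_avg / C0:.2f} "
          f"(constant-source model assumes 1.00)")
```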

  3. High Technology Service Value Maximization through an MCDM-Based Innovative e-Business Model

    NASA Astrophysics Data System (ADS)

    Huang, Chi-Yo; Tzeng, Gwo-Hshiung; Ho, Wen-Rong; Chuang, Hsiu-Tyan; Lue, Yeou-Feng

    The emergence of the Internet has thoroughly changed high technology marketing channels in the past decade, and e-commerce has become one of the most efficient channels, through which high technology firms may skip intermediaries and reach end customers directly. However, defining appropriate e-business models for commercializing new high technology products or services through the Internet is not easy. To overcome the above-mentioned problems, a novel analytic framework, based on the concept of high technology customers’ competence set expansion by leveraging high technology service firms’ capabilities and resources as well as novel multiple criteria decision making (MCDM) techniques, will be proposed in order to define an appropriate e-business model. An empirical example study of a silicon intellectual property (SIP) commercialization e-business model based on MCDM techniques will be provided for verifying the effectiveness of this novel analytic framework. The analysis successfully assisted a Taiwanese IC design service firm to define an e-business model for maximizing its customer’s SIP transactions. In the future, the novel MCDM framework can be successfully applied to novel business model definitions in the high technology industry.

  4. 10 CFR 431.173 - Requirements applicable to all manufacturers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... COMMERCIAL AND INDUSTRIAL EQUIPMENT Provisions for Commercial Heating, Ventilating, Air-Conditioning and... is based on engineering or statistical analysis, computer simulation or modeling, or other analytic... method or methods used; (B) The mathematical model, the engineering or statistical analysis, computer...

  5. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    NASA Astrophysics Data System (ADS)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

    Machining of steel material having hardness above 45 HRC (Hardness-Rockwell C) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to gain optimum performance of hard turning. Various models in hard turning by a cubic boron nitride tool have been reviewed, in an attempt to utilize appropriate empirical and analytical models. Validation of the steady state flank and crater wear model, Usui's wear model, forces due to oblique cutting theory, the extended Lee and Shaffer's force model, chip formation and progressive flank wear have been depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force based on the different cutting conditions and tool geometries so that an appropriate model can be used according to user requirements in hard turning.

  6. An analytic model for acoustic scattering from an impedance cylinder placed normal to an impedance plane

    NASA Astrophysics Data System (ADS)

    Swearingen, Michelle E.

    2004-04-01

    An analytic model, developed in cylindrical coordinates, is described for the scattering of a spherical wave off a semi-infinite right cylinder placed normal to a ground surface. The motivation for the research is to have a model with which one can simulate scattering from a single tree and which can be used as a fundamental element in a model for estimating the attenuation in a forest comprised of multiple tree trunks. Comparisons are made to the plane wave case, the transparent cylinder case, and the rigid and soft ground cases as a method of theoretically verifying the model for the contemplated range of model parameters. Agreement is regarded as excellent for these benchmark cases. Model sensitivity to five parameters is also explored. An experiment was performed to study the scattering from a cylinder normal to a ground surface. The data from the experiment are analyzed with a transfer function method to yield frequency and impulse responses, and calculations based on the analytic model are compared to the experimental data. Thesis advisor: David C. Swanson.

  7. Analytical modeling of relative luminescence efficiency of Al2O3:C optically stimulated luminescence detectors exposed to high-energy heavy charged particles.

    PubMed

    Sawakuchi, Gabriel O; Yukihara, Eduardo G

    2012-01-21

    The objective of this work is to test analytical models to calculate the luminescence efficiency of Al2O3:C optically stimulated luminescence detectors (OSLDs) exposed to heavy charged particles with energies relevant to space dosimetry and particle therapy. We used the track structure model to obtain an analytical expression for the relative luminescence efficiency based on the average radial dose distribution produced by the heavy charged particle. We compared the relative luminescence efficiency calculated using seven different radial dose distribution models, including a modified model introduced in this work, with experimental data. The results obtained using the modified radial dose distribution function agreed within 20% with experimental relative luminescence efficiency data from Al2O3:C OSLDs for particles with atomic number ranging from 1 to 54 and linear energy transfer in water from 0.2 up to 1368 keV/µm. In spite of the significant improvement over other radial dose distribution models, understanding of the underlying physical processes associated with these radial dose distribution models remains elusive and may represent a limitation of the track structure model.
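
    A schematic version of the track-structure calculation reads as follows: the relative efficiency is the luminescence produced by the radial dose profile, evaluated with a saturating (one-hit) dose response, divided by what a linear low-dose response would give for the same deposited energy. The 1/r^2 profile, the saturation dose, and the core dose used below are assumptions, and units are handled only schematically; the paper compares several more detailed radial dose models.

```python
# Schematic track-structure efficiency estimate (illustrative only, not one of the paper's
# seven radial dose models): saturating response integrated over a 1/r^2 radial dose profile,
# normalised by the equivalent linear (low-dose gamma) response.
import numpy as np

def integral(f, r):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))   # simple trapezoid rule

rmin, rmax = 1e-3, 1.0        # um, inner and outer radii of the track (assumed)
D0 = 1.0e3                    # Gy, saturation dose of the one-hit response f(D) = 1 - exp(-D/D0)
core_dose = 2.0e6             # Gy, assumed dose at rmin used to scale the 1/r^2 profile

r = np.logspace(np.log10(rmin), np.log10(rmax), 4000)
D = core_dose * (rmin / r) ** 2                     # schematic 1/r^2 radial dose profile

lum_hcp = integral(D0 * (1.0 - np.exp(-D / D0)) * 2 * np.pi * r, r)   # saturating response
lum_gamma = integral(D * 2 * np.pi * r, r)                            # linear low-dose response
print(f"relative luminescence efficiency ~ {lum_hcp / lum_gamma:.3f}")
```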

  8. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
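
    The signal-processing side of this analysis can be sketched with a short companion example: the outer-race defect frequency follows from the bearing kinematics, BPFO = (Nb/2) fr (1 - (d/D) cos(phi)), and an envelope spectrum of a simulated impact train recovers it, mirroring the standard processing applied to the FE-generated vibration signals. The bearing geometry, shaft speed, resonance, and sampling values are assumptions.

```python
# Bearing defect frequency and envelope spectrum of a simulated impact train (a sketch with
# assumed geometry, not the FE-generated signals from the paper).
import numpy as np
from scipy.signal import hilbert

fr, Nb, d, D, phi = 25.0, 9, 7.9e-3, 34.5e-3, 0.0     # shaft Hz, balls, ball/pitch dia (m), contact angle
bpfo = 0.5 * Nb * fr * (1.0 - d / D * np.cos(phi))    # ball-pass frequency, outer race
print(f"BPFO = {bpfo:.1f} Hz")

fs, T = 20_000, 2.0
t = np.arange(0, T, 1 / fs)
resonance = 3_000.0                                    # Hz, assumed structural resonance
sig = np.zeros_like(t)
for k in np.arange(0, T, 1 / bpfo):                    # decaying burst at every ball pass
    idx = t >= k
    sig[idx] += np.exp(-800 * (t[idx] - k)) * np.sin(2 * np.pi * resonance * (t[idx] - k))
sig += 0.2 * np.random.default_rng(0).normal(size=t.size)

envelope = np.abs(hilbert(sig))                        # demodulate the bursts
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 10) & (freqs < 500)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant envelope-spectrum line: {peak:.1f} Hz (expected near BPFO)")
```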

  9. Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise.

    PubMed

    Hertäg, Loreen; Durstewitz, Daniel; Brunel, Nicolas

    2014-01-01

    Computational models offer a unique tool for understanding the network-dynamical mechanisms which mediate between physiological and biophysical properties, and behavioral function. A traditional challenge in computational neuroscience is, however, that simple neuronal models which can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models which do reproduce such diversity are intractable analytically and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF) which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit to physiological recordings quantitatively, and, once done so, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise.
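
    A brute-force simulation counterpart of these analytical expressions is sketched below: the aEIF equations are integrated with an Euler-Maruyama scheme under Gaussian current noise and the steady-state firing rate is estimated by counting spikes. The parameter values are assumed, order-of-magnitude choices, and the Fokker-Planck and averaging derivations of the paper are not reproduced.

```python
# aEIF neuron with Gaussian current noise, Euler-Maruyama integration, spike counting.
# Parameter values are assumptions of typical magnitude, not fits from the study.
import numpy as np

C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0     # pF, nS, mV, mV, mV
a, b, tau_w, Vr, Vcut = 2.0, 60.0, 300.0, -58.0, 0.0   # nS, pA, ms, mV, mV
mu, sigma = 500.0, 200.0                               # mean and fluctuation of input current (pA)

dt, T = 0.05, 5_000.0                                  # time step and duration, ms
rng = np.random.default_rng(3)
V, w, spikes = EL, 0.0, 0
for _ in range(int(T / dt)):
    I = mu + sigma * rng.normal() / np.sqrt(dt)        # white-noise current (Euler-Maruyama scaling)
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= Vcut:                                      # spike: reset V and increment adaptation
        V, w = Vr, w + b
        spikes += 1

print(f"steady-state firing rate ~ {spikes / (T / 1000.0):.1f} Hz")
```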

  10. An analytical model to design circumferential clasps for laser-sintered removable partial dentures.

    PubMed

    Alsheghri, Ammar A; Alageel, Omar; Caron, Eric; Ciobanu, Ovidiu; Tamimi, Faleh; Song, Jun

    2018-06-21

    Clasps of removable partial dentures (RPDs) often suffer from plastic deformation and failure by fatigue, a common complication of RPDs. A new technology for processing metal frameworks for dental prostheses based on laser-sintering, which allows for precise fabrication of clasp geometry, has been recently developed. This study sought to propose a novel method for designing circumferential clasps for laser-sintered RPDs to avoid plastic deformation or fatigue failure. An analytical model for designing clasps with semicircular cross-sections was derived based on mechanics. The Euler-Bernoulli elastic curved beam theory and Castigliano's energy method were used to relate the stress and undercut with the clasp length, cross-sectional radius, alloy properties, tooth type, and retention force. Finite element analysis (FEA) was conducted on a case study and the resultant tensile stress and undercut were compared with the analytical model predictions. Pull-out experiments were conducted on laser-sintered cobalt-chromium (Co-Cr) dental prostheses to validate the analytical model results. The proposed circumferential clasp design model yields results in good agreement with FEA and experiments. The results indicate that Co-Cr circumferential clasps in molars that are 13 mm long engaging undercuts of 0.25 mm should have a cross-section radius of 1.2 mm to provide a retention of 10 N and to avoid plastic deformation or fatigue failure. However, shorter circumferential clasps such as those in premolars present high stresses and cannot avoid plastic deformation or fatigue failure. Laser-sintered Co-Cr circumferential clasps in molars are safe, whereas they are susceptible to failure in premolars. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
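
    The dimensioning argument can be illustrated with a back-of-the-envelope sketch that replaces the curved-beam/Castigliano analysis of the paper by a straight cantilever of semicircular cross-section: the tip force needed to ride over the undercut and the resulting maximum bending stress follow from standard beam formulas. The elastic modulus and the geometry below are assumptions, and a straight cantilever is stiffer than the real curved clasp, so the numbers are indicative only.

```python
# Straight-cantilever simplification of a clasp arm with semicircular cross-section
# (illustrative assumptions; not the paper's curved-beam model).
import math

E = 200e3          # MPa, Young's modulus of a Co-Cr alloy (assumed)
L = 13.0           # mm, clasp length
r = 1.2            # mm, cross-section radius
delta = 0.25       # mm, undercut (tip deflection when seating/unseating)

# semicircular section properties about its centroidal bending axis
I = (math.pi / 8.0 - 8.0 / (9.0 * math.pi)) * r**4        # ~0.1098 r^4, mm^4
c = r * (1.0 - 4.0 / (3.0 * math.pi))                      # distance to the extreme fibre, mm

F = 3.0 * E * I * delta / L**3          # cantilever tip force for deflection delta, N
sigma = F * L * c / I                   # maximum bending stress at the clasp shoulder, MPa
print(f"retentive tip force ~ {F:.1f} N")
print(f"peak bending stress ~ {sigma:.0f} MPa")
```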

  11. Why noise is useful in functional and neural mechanisms of interval timing?

    PubMed Central

    2013-01-01

    Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival, adaptation and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. The implementation of the oscillators block as simplified phase (cosine) oscillators has the additional advantage that is analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function could be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time leads to scalar property of interval timing. Conclusions Noise is ubiquitous in the form of small fluctuations of intrinsic frequencies of the neural oscillators, the errors in recording/retrieving stored information related to criterion time, fluctuation in neurotransmitters’ concentration, etc. Our model suggests that the biological noise plays an essential functional role in the SBF interval timing. PMID:23924391
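
    A toy version of this mechanism is sketched below: the SBF output is modeled as a coincidence detector that sums cosine oscillators weighted by their states stored at the memorized criterion time, and the trial-to-trial spread of the peak-response time is used as the width measure. With no memory noise that spread is zero (the response width is then set only by the oscillator bandwidth and does not scale with the criterion time), whereas memory jitter assumed proportional to the criterion time makes the width grow with it, i.e. the scalar property. Frequencies, jitter level, and trial counts are assumptions.

```python
# Toy SBF-style demonstration (cosine oscillators, jittered criterion time); an illustration
# of the scalar-property argument, not the paper's analytical or Morris-Lecar models.
import numpy as np

rng = np.random.default_rng(0)
freqs = rng.uniform(5.0, 15.0, 80)            # Hz, cortical oscillator frequencies (assumed)

def timing_sd(T_c, cv_memory, trials=300):
    """SD of the single-trial peak-response time for a given criterion time T_c (s)."""
    t = np.linspace(0.5 * T_c, 1.5 * T_c, 3000)
    basis = np.cos(2 * np.pi * np.outer(freqs, t))          # oscillator states over time
    peaks = []
    for _ in range(trials):
        Tm = T_c * (1.0 + cv_memory * rng.normal())          # memorized (jittered) criterion time
        weights = np.cos(2 * np.pi * freqs * Tm)             # states stored at the criterion time
        resp = weights @ basis / freqs.size                  # coincidence-detector output
        peaks.append(t[np.argmax(resp)])
    return float(np.std(peaks))

for T_c in (10.0, 20.0, 40.0):
    print(f"T_c = {T_c:5.1f} s   width (SD of peak time): "
          f"noise-free = {timing_sd(T_c, 0.0):.3f} s,  "
          f"5% memory noise = {timing_sd(T_c, 0.05):.3f} s")
```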

  12. A two-dimensional analytical model and experimental validation of garter stitch knitted shape memory alloy actuator architecture

    NASA Astrophysics Data System (ADS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2012-08-01

    Active knits are a unique architectural approach to meeting emerging smart structure needs for distributed high strain actuation with simultaneous force generation. This paper presents an analytical state-based model for predicting the actuation response of a shape memory alloy (SMA) garter knit textile. Garter knits generate significant contraction against moderate to large loads when heated, due to the continuous interlocked network of loops of SMA wire. For this knit architecture, the states of operation are defined on the basis of the thermal and mechanical loading of the textile, the resulting phase change of the SMA, and the load path followed to that state. Transitions between these operational states induce either stick or slip frictional forces depending upon the state and path, which affect the actuation response. A load-extension model of the textile is derived for each operational state using elastica theory and Euler-Bernoulli beam bending for the large deformations within a loop of wire based on the stress-strain behavior of the SMA material. This provides kinematic and kinetic relations which scale to form analytical transcendental expressions for the net actuation motion against an external load. This model was validated experimentally for an SMA garter knit textile over a range of applied forces with good correlation for both the load-extension behavior in each state as well as the net motion produced during the actuation cycle (250% recoverable strain and over 50% actuation). The two-dimensional analytical model of the garter stitch active knit provides the ability to predict the kinetic actuation performance, providing the basis for the design and synthesis of large stroke, large force distributed actuators that employ this novel architecture.

  13. Stomatal Opening: The Role of Cell-Wall Mechanical Anisotropy and Its Analytical Relations to the Bio-composite Characteristics

    PubMed Central

    Marom, Ziv; Shtein, Ilana; Bar-On, Benny

    2017-01-01

    Stomata are pores on the leaf surface, which are formed by a pair of curved, tubular guard cells; an increase in turgor pressure deforms the guard cells, resulting in the opening of the stomata. Recent studies employed numerical simulations, based on experimental data, to analyze the effects of various structural, chemical, and mechanical features of the guard cells on the stomatal opening characteristics; these studies all support the well-known qualitative observation that the mechanical anisotropy of the guard cells plays a critical role in stomatal opening. Here, we propose a computationally based analytical model that quantitatively establishes the relations between the degree of anisotropy of the guard cell, the bio-composite constituents of the cell wall, and the aperture and area of stomatal opening. The model introduces two non-dimensional key parameters that dominate the guard cell deformations—the inflation driving force and the anisotropy ratio—and it serves as a generic framework that is not limited to specific plant species. The modeling predictions are in line with a wide range of previous experimental studies, and its analytical formulation sheds new light on the relations between the structure, mechanics, and function of stomata. Moreover, the model provides an analytical tool to back-calculate the elastic characteristics of the matrix that composes the guard cell walls, which, to the best of our knowledge, cannot be probed by direct nano-mechanical experiments; indeed, the estimations of our model are in good agreement with recently published results of independent numerical optimization schemes. The emerging insights from the stomatal structure-mechanics “design guidelines” may promote the development of miniature, yet complex, multiscale composite actuation mechanisms for future engineering platforms. PMID:29312365

  14. Analytical and numerical analysis of frictional damage in quasi brittle materials

    NASA Astrophysics Data System (ADS)

    Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.

    2016-07-01

    Frictional sliding and crack growth are the two main dissipation processes in quasi brittle materials. Frictional sliding along closed cracks is the origin of macroscopic plastic deformation, while crack growth induces material damage. The main modeling difficulty is to account for the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but so far there are no analytical solutions, even for simple loading paths, against which such algorithms can be validated. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of the stress-strain relations is developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are implemented. The first involves a coupled friction/damage correction scheme, which is consistent with the coupled nature of the constitutive model. The second uses a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. The decoupling correction scheme is found to be efficient and to guarantee systematic numerical convergence.

  15. An explicit closed-form analytical solution for European options under the CGMY model

    NASA Astrophysics Data System (ADS)

    Chen, Wenting; Du, Meiyu; Xu, Xiang

    2017-01-01

    In this paper, we consider the analytical pricing of European path-independent options under the CGMY model, which is a particular type of pure-jump Lévy process that agrees well with many observed properties of real market data by allowing the diffusions and jumps to have both finite and infinite activity and variation. It is shown that, under this model, the option price is governed by a fractional partial differential equation (FPDE) with both left-side and right-side spatial-fractional derivatives. In comparison to derivatives of integer order, fractional derivatives at a point involve not only properties of the function at that particular point, but also information about the function in a certain subset of the entire domain of definition. This "globalness" of the fractional derivatives adds an additional degree of difficulty when either analytical methods or numerical solutions are attempted. Albeit difficult, we have managed to derive an explicit closed-form analytical solution for European options under the CGMY model. Based on our solution, the asymptotic behaviors of the option price and the put-call parity under the CGMY model are further discussed. Practically, a reliable numerical evaluation technique for the current formula is proposed. With the numerical results, some analyses of the impact of four key parameters of the CGMY model on European option prices are also provided.
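
    For reference, the CGMY process underlying this record has a standard closed-form characteristic function (Carr, Geman, Madan, Yor), which the sketch below evaluates; this is background for the model class, not the authors' FPDE solution, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def cgmy_char_fn(u, t, C, G, M, Y):
    """Characteristic function E[exp(i*u*X_t)] of a CGMY Levy process.

    Standard textbook form, valid for 0 < Y < 2 with Y != 1; parameter
    values used below are purely illustrative.
    """
    u = np.asarray(u, dtype=complex)
    psi = C * gamma_fn(-Y) * ((M - 1j * u) ** Y - M ** Y
                              + (G + 1j * u) ** Y - G ** Y)
    return np.exp(t * psi)

# Sanity check: the value at u = 0 must be exactly 1
u = np.array([0.0, 1.0, 5.0])
print(cgmy_char_fn(u, t=1.0, C=1.0, G=5.0, M=5.0, Y=0.5))
```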

  16. Equilibrium relations and bipolar cognitive mapping for online analytical processing with applications in international relations and strategic decision support.

    PubMed

    Zhang, Wen-Ran

    2003-01-01

    Bipolar logic, bipolar sets, and equilibrium relations are proposed for bipolar cognitive mapping and visualization in online analytical processing (OLAP) and online analytical mining (OLAM). As cognitive models, cognitive maps (CMs) hold great potential for clustering and visualization. Due to the lack of a formal mathematical basis, however, CM-based OLAP and OLAM have not gained popularity. Compared with existing approaches, bipolar cognitive mapping has a number of advantages. First, bipolar CMs are formal logical models as well as cognitive models. Second, equilibrium relations (with polarized reflexivity, symmetry, and transitivity), as bipolar generalizations and fusions of equivalence relations, provide a theoretical basis for bipolar visualization and coordination. Third, an equilibrium relation or CM induces bipolar partitions that distinguish disjoint coalition subsets not involved in any conflict, disjoint coalition subsets involved in a conflict, disjoint conflict subsets, and disjoint harmony subsets. Finally, equilibrium energy analysis leads to harmony and stability measures for strategic decision and multiagent coordination. Thus, this work bridges a gap for CM-based clustering and visualization in OLAP and OLAM. Basic ideas are illustrated with example CMs in international relations.

  17. Using learning analytics to evaluate a video-based lecture series.

    PubMed

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learner analytics (LA) - analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR) (the percentage of viewers watching at a given time point relative to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
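
    As a minimal illustration of this kind of retention analysis, the sketch below fits a straight line to a hypothetical audience-retention curve and flags segments rising above the fitted trend; the retention numbers are invented, not data from the study.

```python
import numpy as np

# Hypothetical audience-retention curve: percentage of the initial audience
# still watching at each minute of a video (illustrative numbers only).
minutes = np.arange(0, 11)
retention = np.array([100, 91, 84, 78, 71, 66, 60, 55, 49, 44, 39], dtype=float)

# Fit a straight line to characterize the roughly uniform linear decline
slope, intercept = np.polyfit(minutes, retention, deg=1)
print(f"average decline: {abs(slope):.1f} percentage points per minute")

# Segments rising above the fitted line may flag content worth a closer look
residuals = retention - (slope * minutes + intercept)
print("minutes above trend:", minutes[residuals > 0])
```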

  18. F-14 modeling study

    NASA Technical Reports Server (NTRS)

    Levison, William H.

    1988-01-01

    This study explored application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues. The model was applied to two data bases: (1) a NASA ground based simulation of an air-to-air tracking task in which nonvisual cueing devices were explored, and (2) a ground based and inflight study performed by the Calspan Corporation to explore the effects of simulator delay on attitude tracking performance. The model predicted the major performance trends obtained in both studies. A combined analytical and experimental procedure for exploring simulator fidelity issues is outlined.

  19. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
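
    The deterministic-adaptation idea of pairing a known analytic model with a neural network that learns only the unexplained residual can be sketched as follows; the plant, the network size, and the plain gradient-descent training are stand-ins and do not reproduce the patented scaled-equation-error technique.

```python
import numpy as np

rng = np.random.default_rng(1)

def analytic_model(x):
    return 2.0 * x                                # assumed known process behaviour

def true_process(x):
    return 2.0 * x + 0.5 * np.sin(3.0 * x)        # hypothetical real plant

x = np.linspace(-2, 2, 200)[:, None]
y = true_process(x) + 0.02 * rng.standard_normal(x.shape)
residual = y - analytic_model(x)                  # what the network must capture

# One-hidden-layer network trained on the residual with plain gradient descent
n_hidden, lr = 20, 0.05
W1 = rng.standard_normal((1, n_hidden)) * 0.5
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.5
b2 = np.zeros(1)

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - residual
    # backpropagation of the mean-squared residual error
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h ** 2)
    gW1 = x.T @ gh / len(x)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

hybrid = analytic_model(x) + np.tanh(x @ W1 + b1) @ W2 + b2
print("RMS error of hybrid model:", np.sqrt(np.mean((hybrid - y) ** 2)))
```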

  20. A multi-species reactive transport model to estimate biogeochemical rates based on single-well push-pull test data

    NASA Astrophysics Data System (ADS)

    Phanikumar, Mantha S.; McGuire, Jennifer T.

    2010-08-01

    Push-pull tests are a popular technique to investigate various aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model has the ability to describe an arbitrary number of species and user-defined reaction rate expressions including Monod/Michaelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical model based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and was found to be useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.
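
    A bare-bones illustration of the kind of radial advection-dispersion-reaction calculation involved is given below for the injection ("push") phase only, using an explicit finite-difference update in Python rather than the PPTEST FORTRAN code; all parameter values and the simple boundary treatment are assumptions made for demonstration.

```python
import numpy as np

# Hypothetical "push" phase parameters (illustrative only)
Q      = 2.0e-4      # injection rate (m^3/s)
b      = 2.0         # aquifer thickness (m)
theta  = 0.30        # porosity (-)
alphaL = 0.05        # longitudinal dispersivity (m)
Dm     = 1.0e-9      # molecular diffusion (m^2/s)
k      = 1.0e-5      # first-order reaction rate (1/s)
C_in   = 1.0         # injected concentration (normalized)

nr, r_max = 400, 5.0
r  = np.linspace(0.05, r_max, nr)           # start at an assumed well radius
dr = r[1] - r[0]
v  = Q / (2 * np.pi * b * theta * r)        # radial pore velocity
D  = alphaL * v + Dm                        # velocity-dependent dispersion

dt = 0.4 * min(dr / v.max(), dr ** 2 / (2 * D.max()))   # explicit stability limit
C  = np.zeros(nr)

t, t_push = 0.0, 3600.0                     # one hour of injection
while t < t_push:
    C[0] = C_in                             # injection boundary
    dCdr = np.gradient(C, dr)
    disp = np.gradient(r * D * dCdr, dr) / r        # (1/r) d/dr (r D dC/dr)
    adv  = -v * np.gradient(C, dr)                  # outward advection
    C   += dt * (disp + adv - k * C)
    C[-1] = C[-2]                           # crude outflow boundary
    t   += dt

r_front = r[np.argmin(np.abs(C - 0.5))]
print(f"C = 0.5 front at r ~ {r_front:.2f} m after 1 h of injection")
```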

  1. An analytical model with flexible accuracy for deep submicron DCVSL cells

    NASA Astrophysics Data System (ADS)

    Valiollahi, Sepideh; Ardeshir, Gholamreza

    2018-07-01

    Differential cascoded voltage switch logic (DCVSL) cells are among the best candidates of circuit designers for a wide range of applications due to advantages such as low input capacitance, high switching speed, small area and noise-immunity; nevertheless, a proper model has not yet been developed to analyse them. This paper analyses deep submicron DCVSL cells based on a flexible accuracy-simplicity trade-off including the following key features: (1) the model is capable of producing closed-form expressions with an acceptable accuracy; (2) model equations can be solved numerically to offer higher accuracy; (3) the short-circuit currents occurring in high-low/low-high transitions are accounted for in the analysis; and (4) the changes in the operating modes of transistors during transitions, together with an efficient submicron I-V model which incorporates the most important non-ideal short-channel effects, are considered. The accuracy of the proposed model is validated in IBM 0.13 µm CMOS technology through comparisons with the accurate physically based BSIM3 model. The maximum error caused by the analytical solutions is below 10%, while it is below 7% for the numerical solutions.

  2. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    NASA Astrophysics Data System (ADS)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-05-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics, and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, with detailed insights into a few of them.

  3. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    NASA Technical Reports Server (NTRS)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-01-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics, and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, with detailed insights into a few of them.

  4. The Ophidia framework: toward cloud-based data analytics for climate change

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni

    2015-04-01

    The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in Climate and Forecast (CF) convention-compliant NetCDF format (ExArch). Two use cases regarding the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects will be presented during the talk. In the former case (EUBrazilCC) the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity. In the latter (OFIDIA) the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among other tasks, parallel data analysis, metadata management, virtual file system operations, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private cloud infrastructure of the CMCC Supercomputing Centre.

  5. A theory of the n-i-p silicon solar cell

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Weinberg, I.; Baraona, C.

    1981-01-01

    A computer model has been developed, based on an analytical theory of the high base resistivity BSF n(+)(pi)p(+) or p(+)(nu)n(+) silicon solar cell. The model makes very few assumptions and accounts for nonuniform optical generation, generation and recombination in the junction space charge region, and bandgap narrowing in the heavily doped regions. The paper presents calculated results based on this model and compares them to available experimental data. Also discussed is radiation damage in high base resistivity n(+)(pi)p(+) space solar cells.

  6. A ricin forensic profiling approach based on a complex set of biomarkers.

    PubMed

    Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister

    2018-08-15

    A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. Using a decision tree and two OPLS-DA models, the sample preparation methods of the test set samples were determined. The statistics of the two models were good and a 100% rate of correct predictions was achieved for the test set. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Reflectance from images: a model-based approach for human faces.

    PubMed

    Fuchs, Martin; Blanz, Volker; Lensch, Hendrik; Seidel, Hans-Peter

    2005-01-01

    In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.

  8. An Artificial Intelligence System to Predict Quality of Service in Banking Organizations

    PubMed Central

    Popovič, Aleš

    2016-01-01

    Quality of service, that is, the waiting time that customers must endure in order to receive a service, is a critical performance aspect in private and public service organizations. Providing good service quality is particularly important in highly competitive sectors where similar services exist. In this paper, focusing on the banking sector, we propose an artificial intelligence system for building a model for the prediction of service quality. While the traditional approach used for building analytical models relies on theories and assumptions about the problem at hand, we propose a novel approach for learning models from actual data. Thus, the proposed approach is not biased by the knowledge that experts may have about the problem, but it is completely based on the available data. The system is based on a recently defined variant of genetic programming that allows practitioners to include the concept of semantics in the search process. This will have beneficial effects on the search process and will produce analytical models that are based only on the data and not on domain-dependent knowledge. PMID:27313604

  9. An Artificial Intelligence System to Predict Quality of Service in Banking Organizations.

    PubMed

    Castelli, Mauro; Manzoni, Luca; Popovič, Aleš

    2016-01-01

    Quality of service, that is, the waiting time that customers must endure in order to receive a service, is a critical performance aspect in private and public service organizations. Providing good service quality is particularly important in highly competitive sectors where similar services exist. In this paper, focusing on the banking sector, we propose an artificial intelligence system for building a model for the prediction of service quality. While the traditional approach used for building analytical models relies on theories and assumptions about the problem at hand, we propose a novel approach for learning models from actual data. Thus, the proposed approach is not biased by the knowledge that experts may have about the problem, but it is completely based on the available data. The system is based on a recently defined variant of genetic programming that allows practitioners to include the concept of semantics in the search process. This will have beneficial effects on the search process and will produce analytical models that are based only on the data and not on domain-dependent knowledge.

  10. Heterogeneous fractionation profiles of meta-analytic coactivation networks.

    PubMed

    Laird, Angela R; Riedel, Michael C; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L; Eickhoff, Simon B; Smith, Stephen M; Fox, Peter T; Sutherland, Matthew T

    2017-04-01

    Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d=20-300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how "parent" functional brain systems decompose into constituent "child" sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Heterogeneous fractionation profiles of meta-analytic coactivation networks

    PubMed Central

    Laird, Angela R.; Riedel, Michael C.; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L.; Eickhoff, Simon B.; Smith, Stephen M.; Fox, Peter T.; Sutherland, Matthew T.

    2017-01-01

    Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d = 20 to 300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how “parent” functional brain systems decompose into constituent “child” sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. PMID:28222386

  12. EpiK: A Knowledge Base for Epidemiological Modeling and Analytics of Infectious Diseases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, S. M. Shamimul; Fox, Edward A.; Bisset, Keith

    Computational epidemiology seeks to develop computational methods to study the distribution and determinants of health-related states or events (including disease), and the application of this study to the control of diseases and other health problems. Recent advances in computing and data sciences have led to the development of innovative modeling environments to support this important goal. The datasets used to drive the dynamic models as well as the data produced by these models present unique challenges owing to their size, heterogeneity and diversity. These datasets form the basis of effective and easy-to-use decision support and analytical environments. As a result, it is important to develop scalable data management systems to store, manage and integrate these datasets. In this paper, we develop EpiK—a knowledge base that facilitates the development of decision support and analytical environments to support epidemic science. An important goal is to develop a framework that links the input as well as output datasets to facilitate effective spatio-temporal and social reasoning that is critical in planning and intervention analysis before and during an epidemic. The data management framework links modeling workflow data and its metadata using a controlled vocabulary. The metadata captures information about storage, the mapping between the linked model and the physical layout, and relationships to support services. EpiK is designed to support agent-based modeling and analytics frameworks—aggregate models can be seen as special cases and are thus supported. We use semantic web technologies to create a representation of the datasets that encapsulates both the location and the schema heterogeneity. The choice of RDF as a representation language is motivated by the diversity and growth of the datasets that need to be integrated. A query bank is developed—the queries capture a broad range of questions that can be posed and answered during a typical case study pertaining to disease outbreaks. The queries are constructed using SPARQL Protocol and RDF Query Language (SPARQL) over EpiK. EpiK can hide schema and location heterogeneity while efficiently supporting queries that span the computational epidemiology modeling pipeline: from model construction to simulation output. We show that the performance of benchmark queries varies significantly with respect to the choice of hardware underlying the database and resource description framework (RDF) engine.
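
    To make the query-bank idea concrete, the sketch below builds a toy RDF graph with rdflib and runs a SPARQL query over it; the epi: vocabulary, resource names, and values are hypothetical stand-ins, not EpiK's actual controlled vocabulary or data.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical vocabulary standing in for EpiK's controlled vocabulary
EPI = Namespace("http://example.org/epik#")

g = Graph()
run = EPI.run42
g.add((run, RDF.type, EPI.SimulationRun))
g.add((run, EPI.disease, Literal("influenza")))
g.add((run, EPI.region, Literal("Montgomery County")))
g.add((run, EPI.peakInfections, Literal(12840)))

# A query spanning model metadata and simulation output, in the spirit of
# the paper's query bank (predicates here are illustrative only)
results = g.query("""
    PREFIX epi: <http://example.org/epik#>
    SELECT ?run ?peak WHERE {
        ?run a epi:SimulationRun ;
             epi:disease "influenza" ;
             epi:peakInfections ?peak .
    }""")
for row in results:
    print(row.run, row.peak)
```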

  13. EpiK: A Knowledge Base for Epidemiological Modeling and Analytics of Infectious Diseases

    DOE PAGES

    Hasan, S. M. Shamimul; Fox, Edward A.; Bisset, Keith; ...

    2017-11-06

    Computational epidemiology seeks to develop computational methods to study the distribution and determinants of health-related states or events (including disease), and the application of this study to the control of diseases and other health problems. Recent advances in computing and data sciences have led to the development of innovative modeling environments to support this important goal. The datasets used to drive the dynamic models as well as the data produced by these models present unique challenges owing to their size, heterogeneity and diversity. These datasets form the basis of effective and easy-to-use decision support and analytical environments. As a result, it is important to develop scalable data management systems to store, manage and integrate these datasets. In this paper, we develop EpiK—a knowledge base that facilitates the development of decision support and analytical environments to support epidemic science. An important goal is to develop a framework that links the input as well as output datasets to facilitate effective spatio-temporal and social reasoning that is critical in planning and intervention analysis before and during an epidemic. The data management framework links modeling workflow data and its metadata using a controlled vocabulary. The metadata captures information about storage, the mapping between the linked model and the physical layout, and relationships to support services. EpiK is designed to support agent-based modeling and analytics frameworks—aggregate models can be seen as special cases and are thus supported. We use semantic web technologies to create a representation of the datasets that encapsulates both the location and the schema heterogeneity. The choice of RDF as a representation language is motivated by the diversity and growth of the datasets that need to be integrated. A query bank is developed—the queries capture a broad range of questions that can be posed and answered during a typical case study pertaining to disease outbreaks. The queries are constructed using SPARQL Protocol and RDF Query Language (SPARQL) over EpiK. EpiK can hide schema and location heterogeneity while efficiently supporting queries that span the computational epidemiology modeling pipeline: from model construction to simulation output. We show that the performance of benchmark queries varies significantly with respect to the choice of hardware underlying the database and resource description framework (RDF) engine.

  14. A meta-analytic review of school-based prevention for cannabis use.

    PubMed

    Porath-Waller, Amy J; Beasley, Erin; Beirness, Douglas J

    2010-10-01

    This investigation used meta-analytic techniques to evaluate the effectiveness of school-based prevention programming in reducing cannabis use among youth aged 12 to 19. It summarized the results from 15 studies published in peer-reviewed journals since 1999 and identified features that influenced program effectiveness. The results from the set of 15 studies indicated that these school-based programs had a positive impact on reducing students' cannabis use (d = 0.58, CI: 0.55, 0.62) compared to control conditions. Findings revealed that programs incorporating elements of several prevention models were significantly more effective than were those based on only a social influence model. Programs that were longer in duration (≥15 sessions) and facilitated by individuals other than teachers in an interactive manner also yielded stronger effects. The results also suggested that programs targeting high school students were more effective than were those aimed at middle-school students. Implications for school-based prevention programming are discussed.
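
    For readers unfamiliar with how such pooled effect sizes are produced, the sketch below computes a fixed-effect, inverse-variance-weighted mean of Cohen's d values with a 95% confidence interval; the per-study numbers are invented and are not the 15 studies reviewed here.

```python
import numpy as np

# Hypothetical per-study standardized mean differences (Cohen's d) and variances
d = np.array([0.45, 0.62, 0.30, 0.71, 0.55])
var = np.array([0.020, 0.015, 0.030, 0.025, 0.018])

w = 1.0 / var                               # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)        # fixed-effect pooled estimate
se = np.sqrt(1.0 / np.sum(w))               # standard error of the pooled estimate
ci = (d_pooled - 1.96 * se, d_pooled + 1.96 * se)
print(f"pooled d = {d_pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```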

  15. Modeling Patterns of Activities using Activity Curves

    PubMed Central

    Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen

    2016-01-01

    Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual’s normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics. PMID:27346990

  16. Modeling Patterns of Activities using Activity Curves.

    PubMed

    Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen

    2016-06-01

    Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual's normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics.

  17. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
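
    The general flavor of frequency-domain deconvolution can be illustrated with the short sketch below, which divides the tissue curve by the arterial input function in Fourier space through a Tikhonov-style filter; this mirrors the spirit of FDD-like approaches but is not the authors' analytical filter, and the synthetic curves and regularization constant are assumptions.

```python
import numpy as np

def fft_deconvolve(tissue, aif, dt, reg=0.05):
    """Frequency-domain deconvolution with a Tikhonov-style filter
    (illustrative of FDD-like approaches; not the paper's AFF/ASSF filters)."""
    n = 2 * len(tissue)                               # zero-pad against wrap-around
    A = np.fft.rfft(aif, n)
    T = np.fft.rfft(tissue, n)
    lam = reg * np.abs(A).max()
    irf = np.fft.irfft(T * np.conj(A) / (np.abs(A) ** 2 + lam ** 2), n)
    return irf[:len(tissue)] / dt                     # recover CBF * R(t)

# Synthetic test: gamma-variate AIF convolved with an exponential residue function
dt = 1.0
t = np.arange(0, 60, dt)
aif = (t ** 3) * np.exp(-t / 2.0)
aif /= aif.max()
true_irf = 0.6 * np.exp(-t / 8.0)                     # CBF = 0.6, MTT = 8 s (made up)
tissue = np.convolve(aif, true_irf)[:len(t)] * dt

irf = fft_deconvolve(tissue, aif, dt)
print("true peak (CBF):", true_irf.max(), " recovered peak:", round(irf.max(), 2))
```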

  18. Circular Functions Based Comprehensive Analysis of Plastic Creep Deformations in the Fiber Reinforced Composites

    NASA Astrophysics Data System (ADS)

    Monfared, Vahid

    2016-12-01

    An analytically based model is presented for the behavioral analysis of plastic deformations in reinforced materials using circular (trigonometric) functions. The analytical method is proposed to predict the creep behavior of fibrous composites based on basic and constitutive equations under a tensile axial stress. A new insight of this work is the prediction of some important behaviors of the creeping matrix. In the present model, the prediction of these behaviors is simpler than with the available methods. The behavior of the principal creep strain rate is very noteworthy for designing fibrous composites under creep. Analysis of this parameter in reinforced materials is necessary for failure, fracture, and fatigue studies in the creep of short-fiber composites. Shuttles, spaceships, turbine blades and discs, and nozzle guide vanes are commonly subjected to creep effects. Predicting the creep behavior is also significant for designing advanced optoelectronic and photonic composites with optical fibers. As a result, uniform behavior with a constant gradient is seen in the principal creep strain rate, and creep rupture may happen at the fiber end. Finally, good agreement is found when comparing the obtained analytical and FEM results.

  19. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference of output factors was within 0.5%. Over 98.5% passing rate was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrated the efficacy of our model in terms of accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.

  20. Theory and observations of upward field-aligned currents at the magnetopause boundary layer.

    PubMed

    Wing, Simon; Johnson, Jay R

    2015-11-16

    The dependence of the upward field-aligned current density (J‖) at the dayside magnetopause boundary layer is well described by a simple analytic model based on a velocity shear generator. A previous observational survey confirmed that the scaling properties predicted by the analytical model are applicable between 11 and 17 MLT. We utilize the analytic model to predict field-aligned currents using solar wind and ionospheric parameters and compare with direct observations. The calculated and observed parallel currents are in excellent agreement, suggesting that the model may be useful to infer boundary layer structures. However, near noon, where the velocity shear is small, the kinetic pressure gradients and thermal currents, which are not included in the model, could make a small but significant contribution to J‖. Excluding data from noon, our least-squares fit returns log(J‖,max_cal) = (0.96 ± 0.04) log(J‖_obs) + (0.03 ± 0.01), where J‖,max_cal is the calculated J‖,max and J‖_obs is the observed J‖.

  1. Analytically tractable climate-carbon cycle feedbacks under 21st century anthropogenic forcing

    NASA Astrophysics Data System (ADS)

    Lade, Steven J.; Donges, Jonathan F.; Fetzer, Ingo; Anderies, John M.; Beer, Christian; Cornell, Sarah E.; Gasser, Thomas; Norberg, Jon; Richardson, Katherine; Rockström, Johan; Steffen, Will

    2018-05-01

    Changes to climate-carbon cycle feedbacks may significantly affect the Earth system's response to greenhouse gas emissions. These feedbacks are usually analysed from numerical output of complex and arguably opaque Earth system models. Here, we construct a stylised global climate-carbon cycle model, test its output against comprehensive Earth system models, and investigate the strengths of its climate-carbon cycle feedbacks analytically. The analytical expressions we obtain aid understanding of carbon cycle feedbacks and the operation of the carbon cycle. Specific results include that different feedback formalisms measure fundamentally the same climate-carbon cycle processes; temperature dependence of the solubility pump, biological pump, and CO2 solubility all contribute approximately equally to the ocean climate-carbon feedback; and concentration-carbon feedbacks may be more sensitive to future climate change than climate-carbon feedbacks. Simple models such as that developed here also provide workbenches for simple but mechanistically based explorations of Earth system processes, such as interactions and feedbacks between the planetary boundaries, that are currently too uncertain to be included in comprehensive Earth system models.

  2. A model of freezing foods with liquid nitrogen using special functions

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-05-01

    A food freezing model is analyzed analytically. The model is based on the heat diffusion equation in the case of cylindrical shaped food frozen by liquid nitrogen; and assuming that the thermal conductivity of the cylindrical food is radially modulated. The model is solved using the Laplace transform method, the Bromwich theorem, and the residue theorem. The temperature profile in the cylindrical food is presented as an infinite series of special functions. All the required computations are performed with computer algebra software, specifically Maple. Using the numeric values of the thermal and geometric parameters for the cylindrical food, as well as the thermal parameters of the liquid nitrogen freezing system, the temporal evolution of the temperature in different regions in the interior of the cylindrical food is presented both analytically and graphically. The duration of the liquid nitrogen freezing process to achieve the specified effect on the cylindrical food is computed. The analytical results are expected to be of importance in food engineering and cooking engineering. As a future research line, the formulation and solution of freezing models with thermal memory is proposed.
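
    For orientation, the sketch below evaluates the classical Bessel-series solution for a solid cylinder with constant thermal properties whose surface is suddenly held at the coolant temperature; it ignores the radially modulated conductivity and the latent heat of freezing treated in the paper, and the geometry and property values are illustrative.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def cylinder_temp(r, t, R, alpha, T_i, T_s, n_terms=50):
    """Series solution for a solid cylinder, constant properties, surface
    suddenly held at T_s (classical constant-conductivity textbook case)."""
    lam = jn_zeros(0, n_terms)                         # roots of J0
    theta = sum((2.0 / (l * j1(l))) * j0(l * r / R) *
                np.exp(-(l ** 2) * alpha * t / R ** 2) for l in lam)
    return T_s + (T_i - T_s) * theta

# Hypothetical food cylinder quenched in liquid nitrogen
R, alpha = 0.03, 1.3e-7          # radius (m), thermal diffusivity (m^2/s), assumed
T_i, T_s = 20.0, -196.0          # initial and surface temperatures (deg C)
for t in (60.0, 300.0, 900.0):
    print(f"t = {t:5.0f} s  centre temperature = "
          f"{cylinder_temp(0.0, t, R, alpha, T_i, T_s):7.1f} deg C")
```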

  3. Residential Saudi load forecasting using analytical model and Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Al-Harbi, Ahmad Abdulaziz

    In recent years, load forecasting has become one of the main fields of study and research. Short Term Load Forecasting (STLF) is an important part of electrical power system operation and planning. This work investigates the applicability of different approaches - Artificial Neural Networks (ANNs) and hybrid analytical models - to forecast residential load in the Kingdom of Saudi Arabia (KSA). Both techniques are based on a formulation that models human behavior modes, representing the impact of social, religious and official occasions as well as environmental parameters. The analysis is carried out on residential areas in three regions of two countries exposed to distinct human activities and weather conditions. The collected data are for Al-Khubar and Yanbu industrial city in KSA, in addition to Seattle, USA, to show the validity of the proposed models applied to residential load. For each region, two models are proposed: the first forecasts next-hour load, while the second forecasts next-day load. Both models are analyzed using the two techniques. The ANN next-hour models yield very accurate results for all areas, while the hybrid analytical model achieves reasonably accurate results. For next-day load forecasting, the two approaches yield satisfactory results. Comparative studies were conducted to demonstrate the effectiveness of the proposed models.

  4. Low-cost structured-light based 3D capture system design

    NASA Astrophysics Data System (ADS)

    Dong, Jing; Bengtson, Kurt R.; Robinson, Barrett F.; Allebach, Jan P.

    2014-03-01

    Most of the 3D capture products currently in the market are high-end and pricey. They are not targeted for consumers, but rather for research, medical, or industrial usage. Very few aim to provide a solution for home and small business applications. Our goal is to fill in this gap by only using low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed object resolution that are predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that with the analytical models, we have an effective means for specifying system parameters to achieve a given target resolution for the reconstructed object.

  5. Analytical model for effects of capsule shape on the healing efficiency in self-healing materials

    PubMed Central

    Li, Songpeng; Chen, Huisu

    2017-01-01

    The fundamental requirement for the autonomous capsule-based self-healing process to work is that cracks need to reach the capsules and break them such that the healing agent can be released. Ignoring all other aspects, the amount of healing agents released into the crack is essential to obtain a good healing. Meanwhile, from the perspective of the capsule shapes, spherical or elongated capsules (hollow tubes/fibres) are the main morphologies used in capsule-based self-healing materials. The focus of this contribution is the description of the effects of capsule shape on the efficiency of healing agent released in capsule-based self-healing material within the framework of the theory of geometrical probability and integral geometry. Analytical models are developed to characterize the amount of healing agent released per crack area from capsules for an arbitrary crack intersecting with capsules of various shapes in a virtual capsule-based self-healing material. The average crack opening distance is chosen to be a key parameter in defining the healing potential of individual cracks in the models. Furthermore, the accuracy of the developed models was verified by comparison to the data from a published numerical simulation study. PMID:29095862

  6. Calculus domains modelled using an original bool algebra based on polygons

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2016-08-01

    Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra that uses solid and hollow polygons. The general calculus relations for the geometrical characteristics widely used in mechanical engineering are tested using several shapes of the calculus domain, in order to draw conclusions regarding the most effective ways to discretize the domain. The paper also tests the results of several commercial CAD software applications that can compute the geometrical characteristics, from which interesting conclusions are drawn. The tests also targeted the accuracy of the results versus the number of nodes on the curved boundary of the cross section. The study required the development of an original software tool consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to large numbers as the spline approximation did, which required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
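
    The kind of geometrical characteristics involved can be computed for solid/hollow polygon descriptions with the shoelace formulas, as in the short sketch below; the square-with-a-hole cross-section is a made-up example, and the original paper's own algebra of polygons is not reproduced.

```python
import numpy as np

def polygon_props(vertices):
    """Area and centroid of a simple polygon via the shoelace formulas."""
    x, y = np.asarray(vertices, dtype=float).T
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    cross = x * ys - xs * y
    area = 0.5 * cross.sum()
    cx = ((x + xs) * cross).sum() / (6.0 * area)
    cy = ((y + ys) * cross).sum() / (6.0 * area)
    return area, (cx, cy)

def composite_props(solids, hollows):
    """Combine solid and hollow polygons: hollow areas are subtracted,
    mimicking a solid/hollow Boolean description of a cross-section."""
    parts = [(polygon_props(p), +1) for p in solids] + \
            [(polygon_props(p), -1) for p in hollows]
    area = sum(s * a for (a, _c), s in parts)
    cx = sum(s * a * c[0] for (a, c), s in parts) / area
    cy = sum(s * a * c[1] for (a, c), s in parts) / area
    return area, (cx, cy)

# Square plate with a square hole (hypothetical cross-section, CCW vertices)
outer = [(0, 0), (4, 0), (4, 4), (0, 4)]
hole  = [(1, 1), (2, 1), (2, 2), (1, 2)]
print(composite_props([outer], [hole]))   # area 15.0, centroid shifted away from the hole
```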

  7. Self-consistent multidimensional electron kinetic model for inductively coupled plasma sources

    NASA Astrophysics Data System (ADS)

    Dai, Fa Foster

    Inductively coupled plasma (ICP) sources have received increasing interest in microelectronics fabrication and the lighting industry. In 2-D configuration space (r, z) and the 2-D velocity domain (vθ, vz), a self-consistent electron kinetic analytic model is developed for various ICP sources. The electromagnetic (EM) model is established based on modal analysis, while the kinetic analysis gives the perturbed Maxwellian distribution of electrons by solving the Boltzmann-Vlasov equation. The self-consistent algorithm combines the EM model and the kinetic analysis by updating their results consistently until the solution converges. The closed-form solutions in the analytical model provide rigorous and fast computing for the EM fields and the electron kinetic behavior. The kinetic analysis shows that the RF energy in an ICP source is extracted by a collisionless dissipation mechanism if the electron thermovelocity is close to the RF phase velocities. A criterion for collisionless damping is thus given based on the analytic solutions. To achieve uniformly distributed plasma for plasma processing, we propose a novel discharge structure with both planar and vertical coil excitations. The theoretical results demonstrate improved uniformity for the excited azimuthal E-field in the chamber. Non-monotonic spatial decay in electric field and space current distributions was recently observed in weakly collisional plasmas. The anomalous skin effect is found to be responsible for this phenomenon. The proposed model successfully models the non-monotonic spatial decay effect and achieves good agreement with the measurements for different applied RF powers. The proposed analytical model is compared with other theoretical models and different experimental measurements. The developed model is also applied to two kinds of ICP discharges used for electrodeless light sources. One structure uses a vertical internal coil antenna to excite plasmas and another has a metal shield to prevent electromagnetic radiation. The theoretical results delivered by the proposed model agree quite well with the experimental measurements in many aspects. Therefore, the proposed self-consistent model provides an efficient and reliable means for designing ICP sources in various applications such as VLSI fabrication and electrodeless light sources.

  8. Variable fidelity robust optimization of pulsed laser orbital debris removal under epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan

    2016-04-01

    A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model: when candidate solutions fall within the trust region, the analytical model is used instead of the expensive numerical one. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
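    The fidelity-switching logic described above can be sketched as follows; the two lifetime predictors are hypothetical stand-ins, since the paper's actual orbit-decay models are not reproduced here:

```python
import numpy as np

# Hypothetical stand-ins for the cheap analytical and expensive numerical
# lifetime predictors mentioned in the abstract (not the authors' models).
def analytical_lifetime(x):
    return 1.0 / (1.0 + x @ x)

def numerical_lifetime(x):
    return 1.0 / (1.0 + x @ x) + 0.01 * np.sin(10.0 * x[0])

def evaluate_population(candidates, center, radius):
    """Variable-fidelity evaluation: candidates inside the trust region are
    scored with the cheap analytical predictor, the rest with the numerical one."""
    results = []
    for x in candidates:
        use_analytical = np.linalg.norm(x - center) <= radius
        f = analytical_lifetime(x) if use_analytical else numerical_lifetime(x)
        results.append((x, f, "analytical" if use_analytical else "numerical"))
    return results

rng = np.random.default_rng(0)
population = rng.uniform(-1.0, 1.0, size=(5, 2))
for x, f, fidelity in evaluate_population(population, center=np.zeros(2), radius=0.5):
    print(fidelity, np.round(x, 2), round(float(f), 4))
```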

  9. Analytical modeling of the dynamics of brushless dc motors for aerospace applications: A conceptual framework

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A. O.

    1976-01-01

    The modes of operation of the brushless d.c. machine and its corresponding characteristics (current flow, torque-position, etc.) are presented. The foundations and basic principles on which the preliminary numerical model is based, are discussed.

  10. Analytic Modeling of Insurgencies

    DTIC Science & Technology

    2014-08-01

    Counterinsurgency, Situational Awareness, Civilians, Lanchester 1. Introduction Combat modeling is one of the oldest areas of operations research, dating... Army. The ground-breaking work of Lanchester in 1916 [1] marks the beginning of formal models of conflicts, where mathematical formulas and, later... Warfare model [3], which is a Lanchester-based mathematical model (see more details about this model later on), and McCormick's Magic Diamond model [4

  11. Analysis of gene network robustness based on saturated fixed point attractors

    PubMed Central

    2014-01-01

    The analysis of gene network robustness to noise and mutation is important for fundamental and practical reasons. Robustness refers to the stability of the equilibrium expression state of a gene network to variations of the initial expression state and network topology. Numerical simulation of these variations is commonly used for the assessment of robustness. Since there exists a great number of possible gene network topologies and initial states, even millions of simulations may still be too few to give reliable results. When the initial and equilibrium expression states are restricted to being saturated (i.e., their elements can only take values 1 or −1 corresponding to maximum activation and maximum repression of genes), an analytical gene network robustness assessment is possible. We present this analytical treatment based on determination of the saturated fixed point attractors for sigmoidal function models. The analysis can determine (a) for a given network, which and how many saturated equilibrium states exist and which and how many saturated initial states converge to each of these saturated equilibrium states and (b) for a given saturated equilibrium state or a given pair of saturated equilibrium and initial states, which and how many gene networks, referred to as viable, share this saturated equilibrium state or the pair of saturated equilibrium and initial states. We also show that the viable networks sharing a given saturated equilibrium state must follow certain patterns. These capabilities of the analytical treatment make it possible to properly define and accurately determine robustness to noise and mutation for gene networks. Previous network research conclusions drawn from performing millions of simulations follow directly from the results of our analytical treatment. Furthermore, the analytical results provide criteria for the identification of model validity and suggest modified models of gene network dynamics. The yeast cell-cycle network is used as an illustration of the practical application of this analytical treatment. PMID:24650364
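    A minimal sketch of the saturated fixed-point criterion for a small sigmoidal-type network, assuming the saturated limit reduces the update to a sign rule; the interaction matrix below is a toy example, not the yeast cell-cycle network:

```python
import numpy as np
from itertools import product

def is_saturated_fixed_point(W, s):
    """In the saturated limit of a sigmoidal network model, a +/-1 state s is an
    equilibrium iff every gene's net input W @ s has the same sign as its state
    (a zero net input is treated as failing the criterion here)."""
    return bool(np.all(np.sign(W @ s) == s))

def saturated_attractors(W):
    """Brute-force enumeration of all 2^n saturated equilibrium states of a small
    network, mirroring the analytical criterion sign(W s) = s."""
    n = W.shape[0]
    return [np.array(s) for s in product((-1, 1), repeat=n)
            if is_saturated_fixed_point(W, np.array(s))]

# Toy 3-gene interaction matrix (illustrative only, not the yeast network)
W = np.array([[ 1.0, -0.5,  0.0],
              [-0.5,  1.0,  0.5],
              [ 0.0,  0.5,  1.0]])
for s in saturated_attractors(W):
    print(s)
```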

  12. Evaluation of simplified stream-aquifer depletion models for water rights administration

    USGS Publications Warehouse

    Sophocleous, Marios; Koussis, Antonis; Martin, J.L.; Perkins, S.P.

    1995-01-01

    We assess the predictive accuracy of Glover's (1974) stream-aquifer analytical solutions, which are commonly used in administering water rights, and evaluate the impact of the assumed idealizations on administrative and management decisions. To achieve these objectives, we evaluate the predictive capabilities of the Glover stream-aquifer depletion model against the MODFLOW numerical standard, which, unlike the analytical model, can handle increasing hydrogeologic complexity. We rank-order and quantify the relative importance of the various assumptions on which the analytical model is based, the three most important being: (1) streambed clogging as quantified by the streambed-aquifer hydraulic conductivity contrast; (2) degree of stream partial penetration; and (3) aquifer heterogeneity. These three factors relate directly to the multidimensional nature of the aquifer flow conditions. From these considerations, future efforts to reduce the uncertainty in stream depletion-related administrative decisions should primarily address these three factors in characterizing the stream-aquifer process. We also investigate the impact of progressively coarser model grid size on numerically estimating stream leakage and conclude that grid-size effects are relatively minor. Therefore, when modeling is required, coarser model grids could be used, thus minimizing the input data requirements.
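    For reference, the classical Glover-type analytical solution can be written in a few lines; the parameter values below are illustrative assumptions, not taken from the study:

```python
import numpy as np
from scipy.special import erfc

def glover_depletion_fraction(d, S, T, t):
    """Classical Glover-type solution for the fraction of the pumping rate
    supplied by stream depletion, q/Q = erfc( sqrt(d^2 S / (4 T t)) ), for a
    fully penetrating stream in a homogeneous aquifer.
    d: well-to-stream distance [m], S: storativity [-],
    T: transmissivity [m^2/d], t: time since pumping began [d]."""
    return erfc(np.sqrt(d * d * S / (4.0 * T * np.asarray(t, dtype=float))))

# Illustrative parameter values (not from the study): d = 300 m, S = 0.1, T = 500 m^2/d
for t in (10, 30, 90, 365):
    print(f"{t:>4} d: q/Q = {float(glover_depletion_fraction(300.0, 0.1, 500.0, t)):.3f}")
```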

  13. Analytical modelling of temperature effects on an AMPA-type synapse.

    PubMed

    Kufel, Dominik S; Wojcik, Grzegorz M

    2018-05-11

    It was previously reported that temperature may significantly influence neural dynamics on different levels of brain function. Thus, in computational neuroscience, it would be useful to make models scalable over a wide range of brain temperatures. However, a lack of experimental data and the absence of temperature-dependent analytical models of synaptic conductance do not allow temperature effects to be included at the multi-neuron modeling level. In this paper, we propose a first step towards dealing with this problem: a new analytical model of AMPA-type synaptic conductance, which is able to incorporate temperature effects in low-frequency stimulations. It was constructed based on a Markov-model description of AMPA receptor kinetics using a set of coupled ODEs. The closed-form solution for the set of differential equations was found using an uncoupling assumption (introduced in the paper) with a few simplifications motivated both by experimental data and by Monte Carlo simulation of synaptic transmission. The model may be used for a computationally efficient and biologically accurate implementation of temperature effects on AMPA receptor conductance in large-scale neural network simulations. As a result, it may open a wide range of new possibilities for researching the influence of temperature on certain aspects of brain functioning.
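    The general idea of a Markov receptor scheme written as coupled ODEs with temperature-scaled rates can be sketched as follows; the three-state scheme, rate constants and Q10 scaling are simplifying assumptions of ours, not the model derived in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def q10_scale(k, T, T_ref=35.0, q10=2.5):
    """Q10 temperature scaling of a rate constant (a common assumption; the paper
    derives its own temperature dependence)."""
    return k * q10 ** ((T - T_ref) / 10.0)

def ampa_odes(t, y, glut, T):
    """Toy closed (C) -> open (O) -> desensitized (D) Markov scheme written as
    coupled ODEs; rates are per ms and illustrative, not the paper's values."""
    C, O, D = y
    kon = q10_scale(4.0, T) * glut(t)   # binding/opening driven by glutamate pulse
    koff = q10_scale(1.0, T)            # channel closing
    kdes = q10_scale(0.5, T)            # desensitization
    krec = q10_scale(0.1, T)            # recovery from desensitization
    return [-kon * C + koff * O + krec * D,
            kon * C - (koff + kdes) * O,
            kdes * O - krec * D]

glut = lambda t: 1.0 if t < 1.0 else 0.0          # 1 ms transmitter pulse
for temp in (25.0, 35.0):
    sol = solve_ivp(ampa_odes, (0.0, 20.0), [1.0, 0.0, 0.0],
                    args=(glut, temp), max_step=0.05)
    print(f"T = {temp} C: peak open fraction {sol.y[1].max():.3f}")
```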

  14. Evaluation of carbon nanotube based copper nanoparticle composite for the efficient detection of agroviruses

    USDA-ARS?s Scientific Manuscript database

    Nanomaterial-based sensors offer sensitivity and selectivity for the detection of a specific analyte of interest. Described here is a novel assay for the detection of a DNA sequence based on a nanostructured carbon nanotube/copper nanoparticle composite. This assay was modeled on strong electro...

  15. Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  16. Modelling Complexity: Making Sense of Leadership Issues in 14-19 Education

    ERIC Educational Resources Information Center

    Briggs, Ann R. J.

    2008-01-01

    Modelling of statistical data is a well established analytical strategy. Statistical data can be modelled to represent, and thereby predict, the forces acting upon a structure or system. For the rapidly changing systems in the world of education, modelling enables the researcher to understand, to predict and to enable decisions to be based upon…

  17. Exact solution of a linear molecular motor model driven by two-step fluctuations and subject to protein friction.

    PubMed

    Fogedby, Hans C; Metzler, Ralf; Svane, Axel

    2004-08-01

    We investigate by analytical means the stochastic equations of motion of a linear molecular motor model based on the concept of protein friction. Solving the coupled Langevin equations originally proposed by Mogilner et al. [Phys. Lett. A 237, 297 (1998)], and averaging over both the two-step internal conformational fluctuations and the thermal noise, we present explicit, analytical expressions for the average motion and the velocity-force relationship. Our results allow for a direct interpretation of details of this motor model which are not readily accessible from numerical solutions. In particular, we find that the model is able to predict physiologically reasonable values for the load-free motor velocity and the motor mobility.

  18. Analytical Deriving of the Field Capacity through Soil Bundle Model

    NASA Astrophysics Data System (ADS)

    Arnone, E.; Viola, F.; Antinoro, C.; Noto, L. V.

    2015-12-01

    The concept of field capacity as a soil hydraulic parameter is widely used in many hydrological applications. Despite its recurring usage, its definition is not univocal. Traditionally, field capacity has been related to the amount of water that remains in the soil after the excess water has drained away and the downward water movement has decreased significantly. Quantifying the drainage of excess water may be vague, and several definitions, often subjective, have been proposed. These definitions are based on fixed thresholds of either time, pressure, or flux with which the field capacity condition is associated. The flux-based definition identifies field capacity as the soil moisture value corresponding to an arbitrarily fixed threshold of free drainage flux. Recently, many works have investigated the flux-based definition by varying the drainage threshold, the geometry setting, and, mainly, the description of the drainage flux. Most of these methods are based on simulating the flux through a porous medium using Darcy's law or the Richards equation. Using the above-mentioned flux-based definition, in this work we propose an alternative analytical approach for deriving field capacity based on a bundle-of-tubes model. The pore space of a porous medium is conceptualized as a bundle of capillary tubes of given length and different radii, drawn from a known distribution. The drainage from a single capillary tube is given by the analytical solution of the differential equation describing the evolution of the water height within the tube. This equation is based on Poiseuille's law and describes the drainage flux over time as a function of tube radius. The drainage process is then integrated over any portion of soil, taking into account the tube radius distribution, which in turn depends on the soil type. This methodology allows one to analytically derive the dynamics of the drainage water flux for any soil type and, consequently, to define the soil field capacity as the moisture at which this flux reaches a given threshold value. The theoretical model also accounts for the tortuosity that characterizes water pathways in real soils, but neglects the mutual interconnection of voids.
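    A minimal sketch of the bundle-of-tubes idea, assuming each tube drains under gravity against capillary retention with Poiseuille flow through its water column; the parameter values and radius distribution are illustrative, and the paper's exact formulation (including tortuosity) is not reproduced:

```python
import numpy as np

RHO, G, MU = 1000.0, 9.81, 1.0e-3        # water density, gravity, viscosity (SI)
SIGMA, THETA = 0.072, 0.0                # surface tension, contact angle

def drain_column(r, h0, dt=0.1, t_end=600.0):
    """Euler integration of one tube: Poiseuille flow through a water column of
    height h driven by gravity and opposed by capillary retention,
        dh/dt = -r^2 (rho g h - 2 sigma cos(theta)/r) / (8 mu h),
    clamped at the retention height where the driving pressure vanishes."""
    h_ret = 2.0 * SIGMA * np.cos(THETA) / (RHO * G * r)
    h = h0
    for _ in np.arange(0.0, t_end, dt):
        dp = RHO * G * h - 2.0 * SIGMA * np.cos(THETA) / r
        if dp <= 0.0:
            break
        h = max(h - r**2 * dp / (8.0 * MU * h) * dt, h_ret)
    return h

# Bundle of tubes: radii drawn from a lognormal distribution (illustrative soil)
rng = np.random.default_rng(1)
radii = rng.lognormal(mean=np.log(200e-6), sigma=0.7, size=200)
final_h = np.array([drain_column(r, h0=0.3) for r in radii])
print(f"retained water fraction (field-capacity proxy): {final_h.mean() / 0.3:.2f}")
```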

  19. A Colloidal Route to Detection of Organic Molecules Based on Surface-Enhanced Raman Spectroscopy Using Nanostructured Substrate Derived from Aerosols

    NASA Astrophysics Data System (ADS)

    Gen, Masao; Kakuta, Hideo; Kamimoto, Yoshihito; Wuled Lenggoro, I.

    2011-06-01

    A detection method based on a surface-enhanced Raman spectroscopy (SERS)-active substrate derived from aerosol nanoparticles and a colloidal suspension is proposed for detecting organic molecules of a model analyte (a pesticide). This approach can detect the molecules of the model analyte in solution at ppb concentration levels. For substrate fabrication, a gas-phase method is used to directly deposit Ag nanoparticles onto a silicon substrate having pyramidal structures. By mixing the target analyte with a suspension of Ag colloids purchased in advance, clothianidin analyte on Ag colloid can exist in the junctions of co-aggregated Ag colloids. Using (i) a nanostructured substrate made from aerosol nanoparticles and (ii) a colloidal suspension can increase the number of SERS-active spots.

  20. Analytical assessment of woven fabrics under vertical stabbing - The role of protective clothing.

    PubMed

    Hejazi, Sayyed Mahdi; Kadivar, Nastaran; Sajjadi, Ali

    2016-02-01

    Knives are being used more commonly in street fights and muggings. Therefore, this work presents an analytical model for woven fabrics under vertical stabbing loads. The model is based on the energy method, and the fabric is assumed to be unidirectional and comprised of N layers. Thus, the ultimate stab resistance of the fabric was determined based on the structural parameters of the fabric and the geometrical characteristics of the blade. Moreover, protective clothing is nowadays considered a strategic branch of the technical textile industry. The main idea of the present work is to improve the stab resistance of woven textiles by using a metal coating method. Finally, a series of vertical stabbing tests were conducted on cotton, polyester and polyamide fabrics. Consequently, it was found that the model predicts the ultimate stab resistance of the sample fabrics with good accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Comparing an analytical spacetime metric for a merging binary to a fully nonlinear numerical evolution using curvature scalars

    NASA Astrophysics Data System (ADS)

    Sadiq, Jam; Zlochower, Yosef; Nakano, Hiroyuki

    2018-04-01

    We introduce a new geometrically invariant prescription for comparing two different spacetimes based on geodesic deviation. We use this method to compare a family of recently introduced analytical spacetimes representing inspiraling black-hole binaries to fully nonlinear numerical solutions of the Einstein equations. Our method can be used to improve analytical spacetime models by providing a local measure of the effects that violations of the Einstein equations will have on timelike geodesics and, indirectly, on gas dynamics. We also discuss the advantages and limitations of this method.

  2. Parachute-deployment-parameter identification based on an analytical simulation of Viking BLDT AV-4

    NASA Technical Reports Server (NTRS)

    Talay, T. A.

    1974-01-01

    A six-degree-of-freedom analytical simulation of parachute deployment dynamics developed at the Langley Research Center is presented. A comparison study was made using flight results from the Viking Balloon Launched Decelerator Test (BLDT) AV-4. Since there are significant voids in the knowledge of vehicle and decelerator aerodynamics and suspension-system physical properties, a set of deployment-parameter inputs has been defined that may be used as a basis for future studies of parachute deployment dynamics. The study indicates the analytical model is sufficiently sophisticated to investigate parachute deployment dynamics with reasonable accuracy.

  3. Fast-slow asymptotic for semi-analytical ignition criteria in FitzHugh-Nagumo system.

    PubMed

    Bezekci, B; Biktashev, V N

    2017-09-01

    We study the problem of initiation of excitation waves in the FitzHugh-Nagumo model. Our approach follows earlier works and is based on the idea of approximating the boundary between basins of attraction of propagating waves and of the resting state as the stable manifold of a critical solution. Here, we obtain analytical expressions for the essential ingredients of the theory by singular perturbation using two small parameters, the separation of time scales of the activator and inhibitor and the threshold in the activator's kinetics. This results in a closed analytical expression for the strength-duration curve.
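    The strength-duration idea can be illustrated with the point (space-clamped) FitzHugh-Nagumo kinetics, which is a simplification of the spatially extended ignition problem treated in the paper; the parameter values are the classical textbook ones:

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS, A, B = 0.08, 0.7, 0.8            # classical FitzHugh-Nagumo parameters
V_REST, W_REST = -1.1994, -0.6243     # resting state of the point model

def fhn(t, y, amp, dur):
    v, w = y
    stim = amp if t < dur else 0.0                    # rectangular stimulus
    return [v - v**3 / 3.0 - w + stim, EPS * (v + A - B * w)]

def excites(amp, dur):
    """Does a stimulus of given strength and duration trigger a full excursion
    (spike) of the activator v? A point-kinetics surrogate for wave ignition."""
    sol = solve_ivp(fhn, (0.0, 100.0), [V_REST, W_REST],
                    args=(amp, dur), max_step=0.1)
    return sol.y[0].max() > 1.0

# Crude strength-duration curve: smallest amplitude (on a grid) that excites
for dur in (0.5, 1.0, 2.0, 5.0):
    amps = np.linspace(0.0, 3.0, 61)
    threshold = next((a for a in amps if excites(a, dur)), None)
    print(f"duration {dur:>4}: threshold amplitude ~ {threshold}")
```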

  4. IT vendor selection model by using structural equation model & analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers in making the right decisions and selecting the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, the what-if analysis technique will be used for model validation purposes.
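    The AHP half of such a hybrid model reduces to extracting priority weights from a pairwise-comparison matrix; a minimal sketch follows, with an illustrative criteria matrix that is not taken from the case study:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio as a quick sanity check."""
    M = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = M.shape[0]
    ci = (vals.real[k] - n) / (n - 1)                       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}.get(n, 1.32)  # random index table
    return w, ci / ri

# Illustrative criteria comparison (e.g. cost, quality, delivery), not from the case study
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
weights, cr = ahp_weights(M)
print("weights:", np.round(weights, 3), " consistency ratio:", round(float(cr), 3))
```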

  5. An analytical solution for two-dimensional vacuum preloading combined with electro-osmosis consolidation using EKG electrodes

    PubMed Central

    Qiu, Chenchen; Li, Yande

    2017-01-01

    China is a country with vast territory, but economic development and population growth have reduced the usable land resources in recent years. Therefore, reclamation by pumping and filling is carried out in the eastern coastal regions of China in order to meet the needs of urbanization. However, large areas of reclaimed land need rapid drainage consolidation treatment. Based on past research on improving the treatment efficiency of soft clay using vacuum preloading combined with electro-osmosis, a two-dimensional drainage plane model was proposed according to the Terzaghi and Esrig consolidation theory. However, an analytical solution based on this two-dimensional plane model had not previously been derived, so existing analytical solutions cannot provide a thorough theoretical analysis of practical engineering cases or give relevant guidance. Considering the smearing effect and the rectangular arrangement pattern, an analytical solution is derived to describe the behavior of pore water and the consolidation process when EKG (electro-kinetic geosynthetics) materials are used. The functions of EKG materials include drainage, electric conduction and corrosion resistance. Comparison with test results is carried out to verify the analytical solution. It is found that the measured value is larger than the applied vacuum degree because of the stacking effect of the vacuum preloading and electro-osmosis. The trends of the mean measured and mean analytical values are comparable. Therefore, the consolidation model can accurately assess the change in pore-water pressure and the consolidation process during vacuum preloading combined with electro-osmosis. PMID:28771496

  6. Analytical Modelling of the Spread of Disease in Confined and Crowded Spaces

    NASA Astrophysics Data System (ADS)

    Goscé, Lara; Barton, David A. W.; Johansson, Anders

    2014-05-01

    Since 1927 and until recently, most models describing the spread of disease have been of compartmental type, based on the assumption that populations are homogeneous and well-mixed. Recent models have utilised agent-based models and complex networks to explicitly study heterogeneous interaction patterns, but this leads to increased computational complexity. Compartmental models are appealing because of their simplicity, but their parameters, especially the transmission rate, are complex and depend on a number of factors, which makes it hard to predict how a change of a single environmental, demographic, or epidemiological factor will affect the population. Therefore, in this contribution we propose a middle ground, utilising crowd-behaviour research to improve compartmental models in crowded situations. We show how both the rate of infection and the walking speed depend on the local crowd density around an infected individual. The combined effect is that the rate of infection at a population scale has an analytically tractable non-linear dependency on crowd density. We model the spread of a hypothetical disease in a corridor and compare our new model with a typical compartmental model, which highlights the regime in which current models may not produce credible results.
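    The coupling of a compartmental model to crowd density can be sketched as follows; the functional form of the density-dependent transmission rate is a hypothetical placeholder, not the analytically derived dependency from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta(rho, beta0=0.3, rho_c=4.0):
    """Hypothetical density-dependent transmission rate: contacts grow with local
    crowd density but saturate as walking speed drops in dense crowds.
    (Illustrative functional form only; the paper derives its own dependency.)"""
    return beta0 * rho / (1.0 + (rho / rho_c) ** 2)

def sir(t, y, rho, gamma=0.1):
    """Compartmental SIR dynamics in a well-mixed corridor at fixed density rho."""
    S, I, R = y
    lam = beta(rho) * I
    return [-lam * S, lam * S - gamma * I, gamma * I]

for rho in (1.0, 4.0, 8.0):          # pedestrians per square metre (illustrative)
    sol = solve_ivp(sir, (0.0, 200.0), [0.99, 0.01, 0.0], args=(rho,), max_step=0.5)
    print(f"density {rho}: final attack rate {sol.y[2][-1]:.2f}")
```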

  7. An enhanced beam model for constrained layer damping and a parameter study of damping contribution

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Shepard, W. Steve, Jr.

    2009-01-01

    An enhanced analytical model is presented based on an extension of previous models for constrained layer damping (CLD) in beam-like structures. Most existing CLD models are based on the assumption that shear deformation in the core layer is the only source of damping in the structure. However, previous research has shown that other types of deformation in the core layer, such as deformations from longitudinal extension and transverse compression, can also be important. In the enhanced analytical model developed here, shear, extension, and compression deformations are all included. This model can be used to predict the natural frequencies and modal loss factors. The numerical study shows that compared to other models, this enhanced model is accurate in predicting the dynamic characteristics. As a result, the model can be accepted as a general computation model. With all three types of damping included and the formulation used here, it is possible to study the impact of the structure's geometry and boundary conditions on the relative contribution of each type of damping. To that end, the relative contributions in the frequency domain for a few sample cases are presented.

  8. Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC): User Guide. Version 3

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Bednarcyk, B. A.; Wilt, T. E.; Trowbridge, D.

    1999-01-01

    The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here the user guide is described for the recently developed, computationally efficient and comprehensive micromechanics analysis code, MAC, whose predictive capability rests entirely upon the fully analytical generalized method of cells (GMC) micromechanics model. MAC/GMC is a versatile form of research software that "drives" the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC/GMC enhances the basic capabilities of GMC by providing a modular framework wherein 1) various thermal, mechanical (stress or strain control) and thermomechanical load histories can be imposed, 2) different integration algorithms may be selected, 3) a variety of material constitutive models (both deformation and life) may be utilized and/or implemented, 4) a variety of fiber architectures (unidirectional, laminate and woven) may be easily accessed through their corresponding representative volume elements contained within the supplied library of RVEs or input directly by the user, and 5) graphical post-processing of the macro and/or micro field quantities is made available.

  9. Building the analytical response in frequency domain of AC biased bolometers. Application to Planck/HFI

    NASA Astrophysics Data System (ADS)

    Sauvé, Alexandre; Montier, Ludovic

    2016-12-01

    Context: Bolometers are high-sensitivity detectors commonly used in infrared astronomy. The HFI instrument of the Planck satellite makes extensive use of them, but after the satellite launch two electronics-related problems proved critical: first, an unexpected excess response of the detectors at low optical excitation frequency (ν < 1 Hz), and second, the Analog-to-Digital Converter (ADC) component had been insufficiently characterized on the ground. These two problems require an exquisite knowledge of the detector response. However, bolometers have highly nonlinear characteristics arising from their electrical and thermal coupling, making them very difficult to model. Goal: We present a method to build the analytical transfer function in the frequency domain which describes the voltage response of an alternating current (AC) biased bolometer to optical excitation, based on the standard bolometer model. This model is built using the setup of the Planck/HFI instrument and offers the major improvement of being based on a physical model rather than the ad hoc model currently in use, which is based on direct current (DC) bolometer theory. Method: The analytical transfer function expression is presented in matrix form. For this purpose, we build linearized versions of the bolometer electro-thermal equilibrium. A custom description of signals in frequency is used to solve the problem with linear algebra. The model performance is validated using time-domain simulations. Results: The provided expression is suitable for calibration and data processing. It can also be used to provide constraints for fitting the optical transfer function using real data from the steady-state electronic response and the optical response. The accurate description of the electronic response can also be used to improve the ADC nonlinearity correction for quickly varying optical signals.

  10. Energy distributions and radiation transport in uranium plasmas

    NASA Technical Reports Server (NTRS)

    Miley, G. H.; Bathke, C.; Maceda, E.; Choi, C.

    1976-01-01

    An approximate analytic model, based on continuous electron slowing, has been used for survey calculations. Where more accuracy is required, a Monte Carlo technique is used which combines an analytic representation of Coulombic collisions with a random walk treatment of inelastic collisions. The calculated electron distributions have been incorporated into another code that evaluates both the excited atomic state densities within the plasma and the radiative flux emitted from the plasma.

  11. "Dip-and-read" paper-based analytical devices using distance-based detection with color screening.

    PubMed

    Yamada, Kentaro; Citterio, Daniel; Henry, Charles S

    2018-05-15

    An improved paper-based analytical device (PAD) using color screening to enhance device performance is described. Current detection methods for PADs relying on the distance-based signalling motif can be slow due to the assay time being limited by capillary flow rates that wick fluid through the detection zone. For traditional distance-based detection motifs, analysis can take up to 45 min for a channel length of 5 cm. By using a color screening method, quantification with a distance-based PAD can be achieved in minutes through a "dip-and-read" approach. A colorimetric indicator line deposited onto a paper substrate using inkjet-printing undergoes a concentration-dependent colorimetric response for a given analyte. This color intensity-based response has been converted to a distance-based signal by overlaying a color filter with a continuous color intensity gradient matching the color of the developed indicator line. As a proof-of-concept, Ni quantification in welding fume was performed as a model assay. The results of multiple independent user testing gave mean absolute percentage error and average relative standard deviations of 10.5% and 11.2% respectively, which were an improvement over analysis based on simple visual color comparison with a read guide (12.2%, 14.9%). In addition to the analytical performance comparison, an interference study and a shelf life investigation were performed to further demonstrate practical utility. The developed system demonstrates an alternative detection approach for distance-based PADs enabling fast (∼10 min), quantitative, and straightforward assays.

  12. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and the number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase-advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.

  13. Novel concept of washing for microfluidic paper-based analytical devices based on capillary force of paper substrates.

    PubMed

    Mohammadi, Saeed; Busa, Lori Shayne Alamo; Maeki, Masatoshi; Mohamadi, Reza M; Ishida, Akihiko; Tani, Hirofumi; Tokeshi, Manabu

    2016-11-01

    A novel washing technique for microfluidic paper-based analytical devices (μPADs) that is based on the spontaneous capillary action of paper and eliminates unbound antigen and antibody in a sandwich immunoassay is reported. Liquids can flow through a porous medium (such as paper) in the absence of external pressure as a result of capillary action. Uniform results were achieved when washing a paper substrate in a PDMS holder integrated with a cartridge absorber acting as a porous medium. Our study demonstrated that applying this washing technique would allow μPADs to become the least expensive microfluidic device platform with high reproducibility and sensitivity. In a model μPAD assay that utilized this novel washing technique, C-reactive protein (CRP) was detected with a limit of detection (LOD) of 5 μg mL⁻¹.

  14. Development of computer-based analytical tool for assessing physical protection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardhi, Alim, E-mail: alim-m@batan.go.id; Chulalongkorn University, Faculty of Engineering, Nuclear Engineering Department, 254 Phayathai Road, Pathumwan, Bangkok Thailand. 10330; Pengvanich, Phongphaeth, E-mail: ppengvan@gmail.com

    Assessment of physical protection system effectiveness is the priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool can offer a solution for approximating the likely threat scenario. There are several currently available tools that can be used instantly, such as EASI and SAPE; however, for our research purpose it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool by utilizing a network methodological approach for modelling the adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response. The tool has the capability to analyze the most critical path and quantify the probability of effectiveness of the system as a performance measure.

  15. Development of computer-based analytical tool for assessing physical protection system

    NASA Astrophysics Data System (ADS)

    Mardhi, Alim; Pengvanich, Phongphaeth

    2016-01-01

    Assessment of physical protection system effectiveness is the priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool can offer a solution for approximating the likely threat scenario. There are several currently available tools that can be used instantly, such as EASI and SAPE; however, for our research purpose it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool by utilizing a network methodological approach for modelling the adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response. The tool has the capability to analyze the most critical path and quantify the probability of effectiveness of the system as a performance measure.

  16. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and then real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.

  17. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  18. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model extends a series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector models by Rawjee, Williams and Vigh in 1993 and by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent-analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte and the mobility of the complex can be measured and used in the standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can simply be used as an input into the multi-selector overall model and, in reverse, that the multi-selector overall parameters can serve as an input into the pH-dependent models for weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
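    For reference, the 1:1 complexation mobility expression referred to above takes the following form; the numerical values are illustrative only:

```python
def effective_mobility(mu_free, mu_complex, K_overall, c_selector):
    """Wren-and-Rowe-type expression for 1:1 analyte:selector complexation, which
    the abstract states also holds with the overall (mixture-averaged) parameters
    of the multi-selector model:
        mu_eff = (mu_free + mu_complex * K * c) / (1 + K * c)"""
    Kc = K_overall * c_selector
    return (mu_free + mu_complex * Kc) / (1.0 + Kc)

# Illustrative values only (mobilities in 1e-9 m^2 V^-1 s^-1, K in mM^-1, c in mM)
for c in (0.0, 1.0, 5.0, 20.0):
    print(f"c = {c:>4} mM: mu_eff = {effective_mobility(20.0, 5.0, 0.4, c):.2f}")
```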

  19. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

    Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of the updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design-sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.

  20. An improved input shaping design for an efficient sway control of a nonlinear 3D overhead crane with friction

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.

    2017-08-01

    This paper proposes an improved input shaping scheme for efficient sway control of a nonlinear three-dimensional (3D) overhead crane with friction using the particle swarm optimization (PSO) algorithm. With this approach, a higher payload sway reduction is obtained because the input shaper is designed based on a complete nonlinear model, as compared to the analytical input shaping scheme derived from a linear second-order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performances are studied based on the maximum residual payload sway and Integrated Absolute Error (IAE) values, which indicate the total payload sway of the crane. In the experiments, the superiority of the proposed approach over the analytical design is shown by 30-50% reductions in the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulation results show higher sway reductions with the proposed approach. It is revealed that the proposed PSO-based input shaping design provides higher payload sway reductions for a 3D overhead crane with friction as compared to commonly designed input shapers.
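    For context, an analytically designed ZV shaper for a lightly damped sway mode can be computed as below (the PSO-tuned and DZV variants from the paper are not reproduced); the cable length and damping ratio are illustrative assumptions:

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Analytical Zero Vibration (ZV) shaper for a second-order sway mode with
    natural frequency wn [rad/s] and damping ratio zeta: two impulses whose
    amplitudes sum to one, the second delayed by half a damped period."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    amplitudes = np.array([1.0, K]) / (1.0 + K)
    times = np.array([0.0, np.pi / wd])
    return amplitudes, times

# Example: ~1 m cable -> pendulum frequency sqrt(g/L); light damping assumed
wn, zeta = np.sqrt(9.81 / 1.0), 0.01
amps, times = zv_shaper(wn, zeta)
print("impulse amplitudes:", np.round(amps, 3), "at times [s]:", np.round(times, 3))

# The shaped command is the raw velocity reference convolved with the impulses
t = np.arange(0.0, 10.0, 0.01)
raw = (t < 3.0).astype(float)                     # 3 s constant-velocity command
shaped = sum(a * np.interp(t - td, t, raw, left=0.0) for a, td in zip(amps, times))
```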
