Sample records for functional analytic approach

  1. Cognitive-analytical therapy for a patient with functional neurological symptom disorder-conversion disorder (psychogenic myopia): A case study.

    PubMed

    Nasiri, Hamid; Ebrahimi, Amrollah; Zahed, Arash; Arab, Mostafa; Samouei, Rahele

    2015-05-01

    Functional neurological symptom disorder commonly presents with symptoms and deficits of sensory and motor function. Therefore, it is often mistaken for a medical condition. It is well known that functional neurological symptom disorder is more often caused by psychological factors. There are three main approaches, namely analytical, cognitive and biological, to manage conversion disorder. Any of these approaches can be applied through short-term treatment programs. In this case study, a 12-year-old boy diagnosed with functional neurological symptom disorder (psychogenic myopia) was treated with a cognitive-analytical approach. The outcome of this treatment modality proved successful.

  2. A Functional Analytic Approach to Group Psychotherapy

    ERIC Educational Resources Information Center

    Vandenberghe, Luc

    2009-01-01

    This article provides a particular view on the use of Functional Analytical Psychotherapy (FAP) in a group therapy format. This view is based on the author's experiences as a supervisor of Functional Analytical Psychotherapy Groups, including groups for women with depression and groups for chronic pain patients. The contexts in which this approach…

  3. Examining the Relations between Executive Function, Math, and Literacy during the Transition to Kindergarten: A Multi-Analytic Approach

    ERIC Educational Resources Information Center

    Schmitt, Sara A.; Geldhof, G. John; Purpura, David J.; Duncan, Robert; McClelland, Megan M.

    2017-01-01

    The present study explored the bidirectional and longitudinal associations between executive function (EF) and early academic skills (math and literacy) across 4 waves of measurement during the transition from preschool to kindergarten using 2 complementary analytical approaches: cross-lagged panel modeling and latent growth curve modeling (LGCM).…

  4. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity, at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of a random age-sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
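As context for the exponential benchmark mentioned in the abstract: under the random age sampling assumption, a steady, well-mixed store with flow Q and storage S has an exponential TTD with mean S/Q. Below is a minimal sketch with illustrative parameters — it is not the ParFlow/SLIM workflow the authors describe:

```python
import random

def exponential_ttd_sample(Q, S, n, seed=0):
    """Travel-time samples for a well-mixed (random age sampling) store:
    steady flow Q, storage S -> exponential TTD with mean S/Q (turnover time)."""
    rng = random.Random(seed)
    tau = S / Q   # mean travel time
    return [rng.expovariate(1.0 / tau) for _ in range(n)]

# Illustrative units: Q in volume/time, S in volume -> tau = 50 time units
ages = exponential_ttd_sample(Q=2.0, S=100.0, n=20000)
mean_age = sum(ages) / len(ages)
```

The sample mean should approach S/Q = 50 as the number of draws grows, which is the benchmark a simulated homogeneous, saturated catchment would be compared against.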

  4. Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of the junctionless double-gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using a Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of Ids-Vgs curves between the long-channel and short-channel devices. It is observed that the Green's function approach of solving the 2-D Poisson's equation in both the oxide and silicon regions can accurately predict channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratios. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results. It is observed that the model matches quite well with TCAD device simulations.

  6. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  7. Exact analytical approach for six-degree-of-freedom measurement using image-orientation-change method.

    PubMed

    Tsai, Chung-Yu

    2012-04-01

    An exact analytical approach is proposed for measuring the six-degree-of-freedom (6-DOF) motion of an object using the image-orientation-change (IOC) method. The proposed measurement system comprises two reflector systems, each consisting of two reflectors and one position sensing detector (PSD). The IOCs of the object in the two reflector systems are described using merit functions determined from the respective PSD readings before and after motion occurs. The three rotation variables are then determined analytically from the eigenvectors of the corresponding merit functions. After determining the three rotation variables, the order of the translation equations is downgraded to a linear form. Consequently, the solution for the three translation variables can also be analytically determined. As a result, the motion transformation matrix describing the 6-DOF motion of the object is fully determined. The validity of the proposed approach is demonstrated by means of an illustrative example.

  8. "Analytical" vector-functions I

    NASA Astrophysics Data System (ADS)

    Todorov, Vladimir Todorov

    2017-12-01

    In this note we try to give a new (or different) approach to the investigation of analytical vector functions. More precisely, a notion of a power x^n, n ∈ ℕ+, of a vector x ∈ ℝ^3 is introduced, which allows one to define an "analytical" function f : ℝ^3 → ℝ^3. Let furthermore f(ξ) = ∑_{n=0}^{∞} a_n ξ^n be an analytical function of the real variable ξ. Here we replace the power ξ^n of the number ξ with the power of a vector x ∈ ℝ^3 to obtain a vector "power series" f(x) = ∑_{n=0}^{∞} a_n x^n. We investigate some properties of the vector series as well as some applications of this idea. Note that an "analytical" vector function does not depend on any basis, which may be useful in research into some problems in physics.

  9. A new method for constructing analytic elements for groundwater flow.

    NASA Astrophysics Data System (ADS)

    Strack, O. D.

    2007-12-01

    The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41(2), 1005, 2003, by O.D.L. Strack). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons. First, the elementary solutions are required to exist, and, second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented is used to generalize this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach will be applied, along with numerical examples, to the modified Helmholtz equation and to the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
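As a minimal illustration of the superposition principle underlying the analytic element method (the classical well element for the Poisson/Laplace case, not the new procedure proposed in the abstract), complex potentials of point sinks can be superposed directly. Well locations and discharges below are hypothetical:

```python
import cmath

def well_potential(z, zw, Q):
    """Complex potential of a single well (point sink) of discharge Q at zw:
    Omega(z) = Q / (2*pi) * ln(z - zw).
    The real part is the discharge potential, the imaginary part the stream function."""
    return Q / (2 * cmath.pi) * cmath.log(z - zw)

def total_potential(z, wells):
    """Superpose analytic elements; wells is a list of (zw, Q) pairs."""
    return sum(well_potential(z, zw, Q) for zw, Q in wells)

# Hypothetical well field: one extraction well, one injection well
wells = [(0 + 0j, 100.0), (50 + 20j, -40.0)]
z = 10 + 5j
omega = total_potential(z, wells)
```

Because each term is analytic away from its singularity, any such sum again satisfies Laplace's equation — this is the property the abstract's procedure generalizes to other differential equations.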

  10. Functional Analytic Psychotherapy with Juveniles Who Have Committed Sexual Offenses

    ERIC Educational Resources Information Center

    Newring, Kirk A. B.; Wheeler, Jennifer G.

    2012-01-01

    We have previously discussed the application of Functional Analytic Psychotherapy (FAP) with adults who have committed sexual offense behaviors (Newring & Wheeler, 2010). The present entry borrows heavily from the foundation presented in that chapter, and extends this approach to working with adolescents, youth, and juveniles with sexual offense…

  11. Promoting Efficacy Research on Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Maitland, Daniel W. M.; Gaynor, Scott T.

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is a form of therapy grounded in behavioral principles that utilizes therapist reactions to shape target behavior. Despite a growing literature base, there is a paucity of research to establish the efficacy of FAP. As a general approach to psychotherapy, and how the therapeutic relationship produces change,…

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binotti, M.; Zhu, G.; Gray, A.

    An analytical approach, as an extension of a newly developed method -- First-principle OPTical Intercept Calculation (FirstOPTIC) -- is proposed to treat the geometrical impact of three-dimensional (3-D) effects on parabolic trough optical performance. The mathematical steps of this analytical approach are presented and implemented numerically as part of the suite of FirstOPTIC code. In addition, the new code has been carefully validated against ray-tracing simulation results and available numerical solutions. This new analytical approach to treating 3-D effects will facilitate further understanding and analysis of the optical performance of trough collectors as a function of incidence angle.

  13. A Simplified, General Approach to Simulating from Multivariate Copula Functions

    Treesearch

    Barry Goodwin

    2012-01-01

    Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses probability…

  14. Modeling Choice Under Uncertainty in Military Systems Analysis

    DTIC Science & Technology

    1991-11-01

    operators rather than fuzzy operators. This is suggested for further research. In AHP, objectives, functions and… The report's contents also cover imprecisely specified multiple attribute utility theory, fuzzy decision analysis, the Analytic Hierarchical Process (AHP), and the subjective transfer function approach.

  15. Analytic evaluation of the weighting functions for remote sensing of blackbody planetary atmospheres : the case of limb viewing geometry

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.

    2006-01-01

    In a recent publication (Ustinov, 2002), we proposed an analytic approach to the evaluation of radiative and geophysical weighting functions for remote sensing of a blackbody planetary atmosphere, based on a general linearization approach applied to the case of nadir viewing geometry. In this presentation, the general linearization approach is applied to the limb viewing geometry. Expressions similar to those obtained in (Ustinov, 2002) are derived for weighting functions with respect to the distance along the line of sight. These expressions are then converted to expressions for weighting functions with respect to the vertical coordinate in the atmosphere. Finally, the numerical representation of the weighting functions, in the form of matrices of partial derivatives of grid limb radiances with respect to the grid values of atmospheric parameters, is used for a convolution with the finite field of view of the instrument.

  16. A note on a simplified and general approach to simulating from multivariate copula functions

    Treesearch

    Barry K. Goodwin

    2013-01-01

    Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses ‘Probability-...
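The truncated abstracts above do not give the authors' algorithm; as a generic point of reference, simulation from one particular parametric family — the Gaussian copula — can be sketched as follows, assuming a given correlation matrix (all parameters illustrative):

```python
import math
import random

def std_normal_cdf(x):
    """Standard normal CDF Phi(x), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cholesky(R):
    """Cholesky factor L (lower triangular) of a small SPD matrix R = L L^T."""
    n = len(R)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(R[i][i] - s) if i == j else (R[i][j] - s) / L[j][j]
    return L

def sample_gaussian_copula(R, n_draws, seed=1):
    """Draw u ~ Gaussian copula with correlation R:
    z ~ N(0, R) via Cholesky, then u_i = Phi(z_i), giving uniform marginals."""
    rng = random.Random(seed)
    L = cholesky(R)
    d = len(R)
    draws = []
    for _ in range(n_draws):
        e = [rng.gauss(0.0, 1.0) for _ in range(d)]
        z = [sum(L[i][k] * e[k] for k in range(i + 1)) for i in range(d)]
        draws.append([std_normal_cdf(zi) for zi in z])
    return draws

R = [[1.0, 0.7], [0.7, 1.0]]  # illustrative correlation matrix
u = sample_gaussian_copula(R, 5000)
```

Arbitrary marginals are then obtained by pushing each uniform coordinate through the desired inverse CDF — the step-by-step simplification the papers discuss concerns avoiding family-specific conditional sampling, which this generic sketch does not address.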

  17. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. After substituting them into the PSWF differential equation, a much smaller matrix eigenvalue problem is obtained than in the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, for the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.

  18. Does subtype matter? Assessing the effects of maltreatment on functioning in preadolescent youth in out-of-home care

    PubMed Central

    Petrenko, Christie L. M.; Friend, Angela; Garrido, Edward F.; Taussig, Heather N.; Culhane, Sara E.

    2012-01-01

    Objectives: Attempts to understand the effects of maltreatment subtypes on childhood functioning are complicated by the fact that children often experience multiple subtypes. This study assessed the effects of maltreatment subtypes on the cognitive, academic, and mental health functioning of preadolescent youth in out-of-home care using both “variable-centered” and “person-centered” statistical analytic approaches to modeling multiple subtypes of maltreatment.

    Methods: Participants included 334 preadolescent youth (ages 9 to 11) placed in out-of-home care due to maltreatment. The occurrence and severity of maltreatment subtypes (physical abuse, sexual abuse, physical neglect, and supervisory neglect) were coded from child welfare records. The relationships between maltreatment subtypes and children’s cognitive, academic, and mental health functioning were evaluated with the following approaches. “Variable-centered” analytic methods — Regression approach: multiple regression was used to estimate the effects of each maltreatment subtype (separate analyses for occurrence and severity), controlling for the other subtypes. Hierarchical approach: contrast coding was used in regression analyses to estimate the effects of discrete maltreatment categories that were assigned based on a subtype occurrence hierarchy (sexual abuse > physical abuse > physical neglect > supervisory neglect). “Person-centered” analytic method: latent class analysis was used to group children with similar maltreatment severity profiles into discrete classes. The classes were then compared to determine if they differed in terms of their ability to predict functioning.

    Results: The approaches identified similar relationships between maltreatment subtypes and children’s functioning. The most consistent findings indicated that maltreated children who experienced physical or sexual abuse were at highest risk for caregiver-reported externalizing behavior problems, and those who experienced physical abuse and/or physical neglect were more likely to have higher levels of caregiver-reported internalizing problems. Children experiencing predominantly low-severity supervisory neglect had relatively better functioning than other maltreated youth.

    Conclusions: Many of the maltreatment subtype differences identified within the maltreated sample in the current study are consistent with those from previous research comparing maltreated youth to non-maltreated comparison groups. Results do not support combining supervisory and physical neglect. The “variable-centered” and “person-centered” analytic approaches produced complementary results. Advantages and disadvantages of each approach are discussed. PMID:22947490

  19. A Factor Analytic and Regression Approach to Functional Age: Potential Effects of Race.

    ERIC Educational Resources Information Center

    Colquitt, Alan L.; And Others

    Factor analysis and multiple regression are two major approaches used to look at functional age, which takes account of the extensive variation in the rate of physiological and psychological maturation throughout life. To examine the role of racial or cultural influences on the measurement of functional age, a battery of 12 tests concentrating on…

  20. Fast analytic solver of rational Bethe equations

    NASA Astrophysics Data System (ADS)

    Marboe, C.; Volin, D.

    2017-05-01

    In this note we propose an approach for a fast analytic determination of all possible sets of Bethe roots corresponding to eigenstates of rational GL(N|M) integrable spin chains of a given, not too large, length, in terms of Baxter Q-functions. We observe that all exceptional solutions, if any, are automatically correctly accounted for. The key intuition behind the approach is that the equations on the Q-functions are determined solely by the Young diagram, and not by the choice of the rank of the GL symmetry. Hence we can choose arbitrary N and M that accommodate the desired representation. Then we consider all distinguished Q-functions at once, not only those following a certain Kac-Dynkin path.

  1. Comparison of three methods for wind turbine capacity factor estimation.

    PubMed

    Ditkovich, Y; Kuperman, A

    2014-01-01

    Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasiexact," approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic," approach employs a continuous probability distribution function fitted to the wind data, together with a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other measures of wind turbine performance may be derived based on the analytical approach. The third, "approximate," approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
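A minimal sketch in the spirit of the "quasiexact" calculation: integrate a turbine power curve against a Rayleigh wind-speed density and divide by rated power. The power-curve parameters below are hypothetical, not the case-study turbine's:

```python
import math

def rayleigh_pdf(v, v_avg):
    """Rayleigh wind-speed density with mean speed v_avg."""
    s = v_avg * math.sqrt(2.0 / math.pi)   # Rayleigh scale parameter
    return (v / s**2) * math.exp(-v**2 / (2 * s**2))

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0e6):
    """Generic (hypothetical) fixed-speed power curve [W]: zero below cut-in,
    cubic ramp from cut-in to rated speed, constant to cut-out."""
    if v < v_in or v > v_out:
        return 0.0
    if v < v_rated:
        return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)
    return p_rated

def capacity_factor(v_avg, p_rated=2.0e6, dv=0.01):
    """Mean power / rated power, by numerical integration of P(v) * f(v) dv."""
    v, mean_power = dv / 2, 0.0
    while v < 40.0:   # upper limit well beyond cut-out
        mean_power += power_curve(v, p_rated=p_rated) * rayleigh_pdf(v, v_avg) * dv
        v += dv
    return mean_power / p_rated

cf = capacity_factor(7.0)   # illustrative 7 m/s average-wind site
```

Replacing the midpoint-rule integral with a closed-form expression for the fitted density and polynomial power curve is, in essence, what the "analytic" approach does.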

  2. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric parameters and of surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) is presented, along with the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed — one of the forward RT problem and one of the adjoint RT problem — to compute all WFs and PDs of interest. In this presentation we discuss applications of both the linearization and adjoint approaches.

  3. An Analytic Approach to Projectile Motion in a Linear Resisting Medium

    ERIC Educational Resources Information Center

    Stewart, Sean M.

    2006-01-01

    The time of flight, range and the angle which maximizes the range of a projectile in a linear resisting medium are expressed in analytic form in terms of the recently defined Lambert W function. From the closed-form solutions a number of results characteristic to the motion of the projectile in a linear resisting medium are analytically confirmed,…
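A hedged sketch of the kind of closed-form result the abstract describes: for an assumed drag acceleration -k*v (linear resisting medium), setting the height y(T) = 0 yields a time of flight expressible through the principal branch of the Lambert W function, here evaluated by a simple Newton iteration:

```python
import math

def lambert_w0(z, tol=1e-12):
    """Principal branch W0 of the Lambert W function: solve w * exp(w) = z
    by Newton iteration (adequate for the real arguments used here)."""
    w = 0.0 if z > -0.2 else -0.5   # crude starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def time_of_flight(v0y, k, g=9.81):
    """Time to return to launch height with linear drag a = -k*v.
    Vertical position: y(t) = (g*u/k**2) * (1 - exp(-k*t)) - g*t/k,
    with u = 1 + k*v0y/g; solving y(T) = 0 gives k*T = u + W0(-u*exp(-u)).
    (Numerically delicate as k -> 0, where T -> 2*v0y/g.)"""
    u = 1.0 + k * v0y / g
    return (u + lambert_w0(-u * math.exp(-u))) / k

T = time_of_flight(v0y=10.0, k=0.2)   # illustrative launch speed and drag
```

For these parameters the drag shortens the flight relative to the vacuum value 2*v0y/g, and substituting T back into y(t) returns (numerically) zero.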

  4. Goal-Oriented Probability Density Function Methods for Uncertainty Quantification

    DTIC Science & Technology

    2015-12-11

    approximations or data-driven approaches. We investigated the accuracy of analytical techniques based on Kubo-Van Kampen operator cumulant expansions for Langevin equations driven by fractional Brownian motion and other noises.

  5. Behavior analytic approaches to problem behavior in intellectual disabilities.

    PubMed

    Hagopian, Louis P; Gregory, Meagan K

    2016-03-01

    The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.

  6. Two Approaches to Estimating the Effect of Parenting on the Development of Executive Function in Early Childhood

    ERIC Educational Resources Information Center

    Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.

    2014-01-01

    In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between the ages of 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory…

  7. Siewert solutions of transcendental equations, generalized Lambert functions and physical applications

    NASA Astrophysics Data System (ADS)

    Barsan, Victor

    2018-05-01

    Several classes of transcendental equations, mainly eigenvalue equations associated to non-relativistic quantum mechanical problems, are analyzed. Siewert's systematic approach of such equations is discussed from the perspective of the new results recently obtained in the theory of generalized Lambert functions and of algebraic approximations of various special or elementary functions. Combining exact and approximate analytical methods, quite precise analytical outputs are obtained for apparently untractable problems. The results can be applied in quantum and classical mechanics, magnetism, elasticity, solar energy conversion, etc.

  8. Synthesis of Feedback Controller for Chaotic Systems by Means of Evolutionary Techniques

    NASA Astrophysics Data System (ADS)

    Senkerik, Roman; Oplatkova, Zuzana; Zelinka, Ivan; Davendra, Donald; Jasek, Roman

    2011-06-01

    This research deals with the synthesis of control laws for three selected discrete chaotic systems by means of analytic programming. The novelty of the approach is that a tool for symbolic regression—analytic programming—is used for this kind of difficult problem. The paper consists of descriptions of analytic programming as well as the chaotic systems and the cost function used. For experimentation, the Self-Organizing Migrating Algorithm (SOMA) with analytic programming was used.

  9. New vistas in refractive laser beam shaping with an analytic design approach

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Thienpont, Hugo

    2014-05-01

    Many commercial, medical and scientific applications of the laser have been developed since its invention. Some of these applications require a specific beam irradiance distribution to ensure optimal performance. Often, it is possible to apply geometrical methods to design laser beam shapers. This common design approach is based on the ray mapping between the input plane and the output beam. Geometric ray mapping designs with two plano-aspheric lenses have been thoroughly studied in the past. Even though analytic expressions for various ray mapping functions do exist, the surface profiles of the lenses are still calculated numerically. In this work, we present an alternative novel design approach that allows direct calculation of the rotational symmetric lens profiles described by analytic functions. Starting from the example of a basic beam expander, a set of functional differential equations is derived from Fermat's principle. This formalism allows calculating the exact lens profiles described by Taylor series coefficients up to very high orders. To demonstrate the versatility of this new approach, two further cases are solved: a Gaussian to flat-top irradiance beam shaping system, and a beam shaping system that generates a more complex dark-hollow Gaussian (donut-like) irradiance profile with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of all calculated solutions and indicate the potential of this design approach for refractive beam shaping applications.

  10. Applying SF-Based Genre Approaches to English Writing Class

    ERIC Educational Resources Information Center

    Wu, Yan; Dong, Hailin

    2009-01-01

    By exploring genre approaches in systemic functional linguistics and examining the analytic tools that can be applied to the process of English learning and teaching, this paper seeks to find a way of applying genre approaches to English writing class.

  11. Analytical Description of the H/D Exchange Kinetic of Macromolecule.

    PubMed

    Kostyukevich, Yury; Kononikhin, Alexey; Popov, Igor; Nikolaev, Eugene

    2018-04-17

    We present the accurate analytical solution obtained for the system of rate equations describing the isotope exchange process for molecules containing an arbitrary number of equivalent labile atoms. The exact solution was obtained using Mathematica 7.0 software, and this solution has the form of the time-dependent Gaussian distribution. For the case when forward exchange considerably overlaps the back exchange, it is possible to estimate the activation energy of the reaction by obtaining a temperature dependence of the reaction degree. Using a previously developed approach for performing H/D exchange directly in the ESI source, we have estimated the activation energies for ions with different functional groups and they were found to be in a range 0.04-0.3 eV. Since the value of the activation energy depends on the type of functional group, the developed approach can have potential analytical applications for determining types of functional groups in complex mixtures, such as petroleum, humic substances, bio-oil, and so on.
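The authors' exact solution is not reproduced here; as a plausibility sketch for the simpler forward-only case, m independent equivalent labile sites exchanging at rate k give a binomial distribution — near-Gaussian for large m, consistent with the time-dependent Gaussian form reported — of the number of exchanged atoms:

```python
import math

def exchange_distribution(m, k, t):
    """Probability P(d) that d of m equivalent labile atoms have exchanged
    after time t, assuming forward-only exchange at rate k per site:
    each site exchanges independently with p = 1 - exp(-k*t), so
    P(d) = C(m, d) * p**d * (1 - p)**(m - d)   (binomial)."""
    p = 1.0 - math.exp(-k * t)
    return [math.comb(m, d) * p**d * (1.0 - p)**(m - d) for d in range(m + 1)]

# Illustrative parameters (hypothetical, not fitted to any measurement)
dist = exchange_distribution(m=8, k=0.05, t=10.0)
mean_exchanged = sum(d * pd for d, pd in enumerate(dist))
```

The mean number of exchanged atoms is m * (1 - exp(-k*t)), which is the quantity whose temperature dependence would feed an activation-energy estimate in the abstract's scheme.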

  12. Earthdata Cloud Analytics Project

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Lynnes, Chris

    2018-01-01

    This presentation describes a nascent project in NASA to develop a framework to support end-user analytics of NASA's Earth science data in the cloud. The chief benefit of migrating EOSDIS (Earth Observation System Data and Information Systems) data to the cloud is to position the data next to enormous computing capacity to allow end users to process data at scale. The Earthdata Cloud Analytics project will use a service-based approach to facilitate the infusion of evolving analytics technology and the integration with non-NASA analytics or other complementary functionality at other agencies and in other nations.

  13. Visual analytics of brain networks.

    PubMed

    Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming

    2012-05-15

    Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Graphical analysis for gel morphology II. New mathematical approach for stretched exponential function with β>1

    NASA Astrophysics Data System (ADS)

    Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu

    2005-10-01

    A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed, and the secondary shrinking of gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical concepts for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, from which a new distribution function of characteristic times is deduced.
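The proposed decomposition can be illustrated numerically: writing the stretched exponential as a superposition of Heaviside steps, f(t) = ∫ ρ(τ) H(τ − t) dτ, implies a density of characteristic times ρ(τ) = −df/dτ, which is well defined for β > 1. A minimal sketch with illustrative parameter values (not from the paper):

```python
import math

def stretched_exp(t, tau0=1.0, beta=1.5):
    """Stretched exponential with exponent beta > 1."""
    return math.exp(-((t / tau0) ** beta))

def char_time_density(tau, tau0=1.0, beta=1.5):
    """Distribution of characteristic times deduced from the superposition
    f(t) = integral of rho(tau) * H(tau - t) dtau, i.e. rho = -df/dtau
    (a Weibull-type density)."""
    x = tau / tau0
    return (beta / tau0) * x ** (beta - 1.0) * math.exp(-(x ** beta))

# Midpoint-rule check that the deduced density is normalised on [0, 10].
dt = 1e-4
total = sum(char_time_density((i + 0.5) * dt) for i in range(100000)) * dt
print(round(total, 3))  # -> 1.0
```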

  15. A Functional Approach to Reducing Runaway Behavior and Stabilizing Placements for Adolescents in Foster Care

    ERIC Educational Resources Information Center

    Clark, Hewitt B.; Crosland, Kimberly A.; Geller, David; Cripe, Michael; Kenney, Terresa; Neff, Bryon; Dunlap, Glen

    2008-01-01

    Teenagers' running from foster placement is a significant problem in the field of child protection. This article describes a functional, behavior analytic approach to reducing running away through assessing the motivations for running, involving the youth in the assessment process, and implementing interventions to enhance the reinforcing value of…

  16. Soft x-ray continuum radiation transmitted through metallic filters: an analytical approach to fast electron temperature measurements.

    PubMed

    Delgado-Aparicio, L; Tritz, K; Kramer, T; Stutman, D; Finkenthal, M; Hill, K; Bitter, M

    2010-10-01

    A new set of analytic formulas describes the transmission of soft x-ray continuum radiation through a metallic foil for its application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures, in contrast with the solutions obtained when using a transmission approximated by a single-Heaviside function [S. von Goeler et al., Rev. Sci. Instrum. 70, 599 (1999)]. The new analytic formulas can improve the interpretation of the experimental results and thus contribute to obtaining fast temperature measurements in between intermittent Thomson scattering data.

  17. Free and Forced Vibrations of Thick-Walled Anisotropic Cylindrical Shells

    NASA Astrophysics Data System (ADS)

    Marchuk, A. V.; Gnedash, S. V.; Levkovskii, S. A.

    2017-03-01

    Two approaches to studying the free and forced axisymmetric vibrations of a cylindrical shell are proposed. They are based on the three-dimensional theory of elasticity and division of the original cylindrical shell with concentric cross-sectional circles into several coaxial cylindrical shells. One approach uses linear polynomials to approximate functions defined in plan and across the thickness. The other approach also uses linear polynomials to approximate functions defined in plan, but their variation with thickness is described by the analytical solution of a system of differential equations. Both approaches have approximation and arithmetic errors. When determining the natural frequencies by the semi-analytical finite-element method in combination with the divide and conquer method, it is convenient to find the initial frequencies by the finite-element method. The behavior of the shell during free and forced vibrations is analyzed in the case where the loading area is half the shell thickness.

  18. Longitudinal dielectric function and dispersion relation of electrostatic waves in relativistic plasmas

    NASA Astrophysics Data System (ADS)

    Touil, B.; Bendib, A.; Bendib-Kalache, K.

    2017-02-01

    The longitudinal dielectric function is derived analytically from the relativistic Vlasov equation for arbitrary values of the relevant parameter z = mc^2/T, where m is the rest electron mass, c is the speed of light, and T is the electron temperature in energy units. A new analytical approach based on the Legendre polynomial expansion and continued fractions was used. An analytical expression for the electron distribution function was derived. The real part of the dispersion relation and the damping rate of electron plasma waves are calculated both analytically and numerically in the whole range of the parameter z. The results obtained improve significantly on previous results reported in the literature. For practical purposes, explicit expressions for the real part of the dispersion relation and the damping rate in the range z > 30 and in the strongly relativistic regime are also proposed.

  19. Image correlation and sampling study

    NASA Technical Reports Server (NTRS)

    Popp, D. J.; Mccormack, D. S.; Sedwick, J. L.

    1972-01-01

    The development of analytical approaches for solving image correlation and image sampling of multispectral data is discussed. Relevant multispectral image statistics which are applicable to image correlation and sampling are identified. The general image statistics include intensity mean, variance, amplitude histogram, power spectral density function, and autocorrelation function. The translation problem associated with digital image registration and the analytical means for comparing commonly used correlation techniques are considered. General expressions for determining the reconstruction error for specific image sampling strategies are developed.

  20. Two approaches to estimating the effect of parenting on the development of executive function in early childhood.

    PubMed

    Blair, Clancy; Raver, C Cybele; Berry, Daniel J

    2014-02-01

    In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting as indicated by higher scores on widely used measures of parenting at both earlier and later time points predicted more positive gain in executive function at 60 months. Latent change score models in which parenting and executive function over time were held to standards of longitudinal measurement invariance provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust with the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience.

  1. Dynamic behaviour of a planar micro-beam loaded by a fluid-gap: Analytical and numerical approach in a high frequency range, benchmark solutions

    NASA Astrophysics Data System (ADS)

    Novak, A.; Honzik, P.; Bruneau, M.

    2017-08-01

    Miniaturized vibrating MEMS devices, active (receivers or emitters) or passive devices, and their use for either new applications (hearing, meta-materials, consumer devices,…) or metrological purposes under non-standard conditions, are involved today in several acoustic domains. Characterisations more in-depth than the classical ones available until now are needed. In this context, the paper presents analytical and numerical approaches for describing the behaviour of three kinds of planar micro-beams of rectangular shape (suspended rigid or clamped elastic planar beam) loaded by a backing cavity or a fluid-gap, surrounded by very thin slits, and excited by an incident acoustic field. The analytical approach accounts for the coupling between the vibrating structure and the acoustic field in the backing cavity, the thermal and viscous diffusion processes in the boundary layers in the slits and the cavity, the modal behaviour of the vibrating structure, and the non-uniformity of the acoustic field in the backing cavity, which is modelled using an integral formulation with a suitable Green's function. Benchmark solutions are proposed in terms of beam motion (from which the sensitivity, input impedance, and pressure transfer function can be calculated). A numerical implementation (FEM) is provided, against which the analytical results are tested.

  2. Flexible aircraft dynamic modeling for dynamic analysis and control synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.

    1989-01-01

    The linearization and simplification of a nonlinear, literal model for flexible aircraft is highlighted. Areas of model fidelity that are critical if the model is to be used for control system synthesis are developed and several simplification techniques that can deliver the necessary model fidelity are discussed. These techniques include both numerical and analytical approaches. An analytical approach, based on first-order sensitivity theory is shown to lead not only to excellent numerical results, but also to closed-form analytical expressions for key system dynamic properties such as the pole/zero factors of the vehicle transfer-function matrix. The analytical results are expressed in terms of vehicle mass properties, vibrational characteristics, and rigid-body and aeroelastic stability derivatives, thus leading to the underlying causes for critical dynamic characteristics.

  3. Evaluation of Analytical Modeling Functions for the Phonation Onset Process.

    PubMed

    Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
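The VOT definition reported above, the interval between 32.2% and 67.8% of the saturation amplitude of the fitted envelope, can be sketched as follows. A logistic curve stands in for the fitted fourth-order polynomial envelope, and all numbers are illustrative, not from the study:

```python
import math

def envelope(t):
    """Stand-in for the fitted GAW envelope during voice onset
    (logistic rise towards a saturation amplitude of 1.0; hypothetical)."""
    return 1.0 / (1.0 + math.exp(-(t - 50.0) / 8.0))

def crossing_time(level, lo=0.0, hi=100.0, iters=60):
    """Bisect for the time (ms, say) at which the monotone envelope
    reaches the given fraction of the saturation amplitude."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if envelope(mid) < level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# VOT as the interval between 32.2% and 67.8% of saturation amplitude.
vot = crossing_time(0.678) - crossing_time(0.322)
print(round(vot, 2))  # -> 11.91 for this illustrative envelope
```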

  4. Analytical functions for beta and gamma absorbed fractions of iodine-131 in spherical and ellipsoidal volumes.

    PubMed

    Mowlavi, Ali Asghar; Fornasier, Maria Rossa; Mirzaei, Mohammd; Bregant, Paola; de Denaro, Mario

    2014-10-01

    The beta and gamma absorbed fractions in organs and tissues are important key factors in radionuclide internal dosimetry based on the Medical Internal Radiation Dose (MIRD) approach. The aim of this study is to find suitable analytical functions for the beta and gamma absorbed fractions in spherical and ellipsoidal volumes with a uniform distribution of the iodine-131 radionuclide. The MCNPX code has been used to calculate the energy absorption from beta and gamma rays of iodine-131 uniformly distributed inside different ellipsoids and spheres, and then the absorbed fractions have been evaluated. We have found the fit parameters of a suitable analytical function for the beta absorbed fraction, depending on a generalized radius for the ellipsoid based on the radius of a sphere, and a linear fit function for the gamma absorbed fraction. The analytical functions that we obtained from the fitting process on Monte Carlo data can be used to obtain the absorbed fractions of iodine-131 beta and gamma rays for any volume of the thyroid lobe. Moreover, our results for the spheres are in good agreement with the results of MIRD and other scientific literature.

  5. Analytical approach for the fractional differential equations by using the extended tanh method

    NASA Astrophysics Data System (ADS)

    Pandir, Yusuf; Yildirim, Ayse

    2018-07-01

    In this study, we consider analytical solutions of space-time fractional derivative foam drainage equation, the nonlinear Korteweg-de Vries equation with time and space-fractional derivatives and time-fractional reaction-diffusion equation by using the extended tanh method. The fractional derivatives are defined in the modified Riemann-Liouville context. As a result, various exact analytical solutions consisting of trigonometric function solutions, kink-shaped soliton solutions and new exact solitary wave solutions are obtained.

  6. Peculiarities of the momentum distribution functions of strongly correlated charged fermions

    NASA Astrophysics Data System (ADS)

    Larkin, A. S.; Filinov, V. S.; Fortov, V. E.

    2018-01-01

    A new numerical version of the Wigner approach to the quantum thermodynamics of strongly coupled systems of particles has been developed for extreme conditions, when analytical approximations based on different kinds of perturbation theories cannot be applied. An explicit analytical expression for the Wigner function has been obtained in linear and harmonic approximations. Fermi statistical effects are accounted for by an effective pair pseudopotential depending on the coordinates, momenta and degeneracy parameter of the particles and taking into account Pauli blocking of fermions. A new quantum Monte-Carlo method for calculating average values of arbitrary quantum operators has been developed. Calculations of the momentum distribution functions and the pair correlation functions of a degenerate ideal Fermi gas have been carried out to test the developed approach. Comparison of the obtained momentum distribution functions of strongly correlated Coulomb systems with the Maxwell-Boltzmann and the Fermi distributions shows the significant influence of interparticle interaction both at small momenta and in the high energy quantum ‘tails’.

  7. Technosocial Predictive Analytics in Support of Naturalistic Decision Making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Cowell, Andrew J.; Malone, Elizabeth L.

    2009-06-23

    A main challenge we face in fostering sustainable growth is to anticipate outcomes through predictive and proactive analysis across domains as diverse as energy, security, the environment, health and finance in order to maximize opportunities, influence outcomes and counter adversities. The goal of this paper is to present new methods for anticipatory analytical thinking which address this challenge through the development of a multi-perspective approach to predictive modeling as a core to a creative decision making process. This approach is uniquely multidisciplinary in that it strives to create decision advantage through the integration of human and physical models, and leverages knowledge management and visual analytics to support creative thinking by facilitating the achievement of interoperable knowledge inputs and enhancing the user’s cognitive access. We describe a prototype system which implements this approach and exemplify its functionality with reference to a use case in which predictive modeling is paired with analytic gaming to support collaborative decision-making in the domain of agricultural land management.

  8. Transfer function concept for ultrasonic characterization of material microstructures

    NASA Technical Reports Server (NTRS)

    Vary, A.; Kautz, H. E.

    1986-01-01

    The approach given depends on treating material microstructures as elastomechanical filters that have analytically definable transfer functions. These transfer functions can be defined in terms of the frequency dependence of the ultrasonic attenuation coefficient. The transfer function concept provides a basis for synthesizing expressions that characterize polycrystalline materials relative to microstructural factors such as mean grain size, grain-size distribution functions, and grain boundary energy transmission. Although the approach is nonrigorous, it leads to a rational basis for combining the previously mentioned diverse and fragmented equations for ultrasonic attenuation coefficients.

  9. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model order and model complexity reduction have been investigated. Singular perturbation methods were first attempted and found to be unsatisfactory. The second approach, based on regular perturbation analysis, was subsequently investigated. It also fails because the aerodynamic effects (ignored in the zero order solution) are too large to be treated as perturbations. Therefore, the study demonstrates that perturbation methods alone (both regular and singular perturbations) are inadequate for use in developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation and the analytical method of regular perturbations. The concept of choosing intelligent interpolating functions is also introduced. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing the approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.

  10. Interactively Open Autonomy Unifies Two Approaches to Function

    NASA Astrophysics Data System (ADS)

    Collier, John

    2004-08-01

    Functionality is essential to any form of anticipation beyond simple directedness at an end. In the literature on function in biology, there are two distinct approaches. One, the etiological view, places the origin of function in selection, while the other, the organizational view, individuates function by organizational role. Both approaches have well-known advantages and disadvantages. I propose a reconciliation of the two approaches, based in an interactivist approach to the individuation and stability of organisms. The approach was suggested by Kant in the Critique of Judgment, but since it requires, on his account, the identification of a new form of causation, it has not been accessible by analytical techniques. I proceed by construction of the required concept to fit certain design requirements. This construction builds on concepts introduced in my previous four talks to these meetings.

  11. On the first crossing distributions in fractional Brownian motion and the mass function of dark matter haloes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiotelis, Nicos; Popolo, Antonino Del, E-mail: adelpopolo@oact.inaf.it, E-mail: hiotelis@ipta.demokritos.gr

    We construct an integral equation for the first crossing distributions for fractional Brownian motion in the case of a constant barrier, and we present an exact analytical solution. Additionally, we present first crossing distributions derived by simulating paths from fractional Brownian motion. We compare the results of the analytical solutions with both those of simulations and those of some approximated solutions which have been used in the literature. Finally, we present multiplicity functions for dark matter structures resulting from our analytical approach and we compare with those resulting from N-body simulations. We show that the results of analytical solutions are in good agreement with those of path simulations but differ significantly from those derived from approximated solutions. Additionally, multiplicity functions derived from fractional Brownian motion are poor fits of those which result from N-body simulations. We also present comparisons with other models which exist in the literature, and we discuss different ways of improving the agreement between analytical results and N-body simulations.

  12. In situ intracellular spectroscopy with surface enhanced Raman spectroscopy (SERS)-enabled nanopipettes.

    PubMed

    Vitol, Elina A; Orynbayeva, Zulfiya; Bouchard, Michael J; Azizkhan-Clifford, Jane; Friedman, Gary; Gogotsi, Yury

    2009-11-24

    We report on a new analytical approach to intracellular chemical sensing that utilizes a surface-enhanced Raman spectroscopy (SERS)-enabled nanopipette. The probe consists of a glass capillary with a 100-500 nm tip coated with gold nanoparticles. The fixed geometry of the gold nanoparticles allows us to overcome the limitations of the traditional approach for intracellular SERS using metal colloids. We demonstrate that the SERS-enabled nanopipettes can be used for in situ analysis of living cell function in real time. In addition, the SERS functionality of these probes allows tracking of their localization in a cell. The developed probes can also be applied for highly sensitive chemical analysis of nanoliter volumes of chemicals in a variety of environmental and analytical applications.

  13. Approximate analytical solutions in the analysis of thin elastic plates

    NASA Astrophysics Data System (ADS)

    Goloskokov, Dmitriy P.; Matrosov, Alexander V.

    2018-05-01

    Two approaches to the construction of approximate analytical solutions for the bending of a rectangular thin plate are presented: the superposition method based on the method of initial functions (MIF) and one built using the Green's function in the form of orthogonal series. Comparison of the two approaches is carried out by analyzing a square plate clamped along its contour. The behavior of the moment and the shear force in the neighborhood of the corner points is discussed. It is shown that both solutions give identical results at all points of the plate except for the neighborhoods of the corner points. There are differences in the values of bending moments and generalized shearing forces in the neighborhoods of the corner points.

  14. ANALYTIC MODELING OF STARSHADES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cash, Webster

    2011-09-01

    External occulters, otherwise known as starshades, have been proposed as a solution to one of the highest priority yet technically vexing problems facing astrophysics: the direct imaging and characterization of terrestrial planets around other stars. New apodization functions, developed over the past few years, now enable starshades of just a few tens of meters diameter to occult central stars so efficiently that the orbiting exoplanets can be revealed and other high-contrast imaging challenges addressed. In this paper, an analytic approach to the analysis of these apodization functions is presented. It is used to develop a tolerance analysis suitable for use in designing practical starshades. The results provide a mathematical basis for understanding starshades and a quantitative approach to setting tolerances.

  15. Two Approaches to Estimating the Effect of Parenting on the Development of Executive Function in Early Childhood

    PubMed Central

    Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.

    2015-01-01

    In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting as indicated by higher scores on widely used measures of parenting at both earlier and later time points predicted more positive gain in executive function at 60 months. Latent change score models in which parenting and executive function over time were held to standards of longitudinal measurement invariance provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust with the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience. PMID:23834294

  16. Analytical determination of thermal conductivity of W-UO2 and W-UN CERMET nuclear fuels

    NASA Astrophysics Data System (ADS)

    Webb, Jonathan A.; Charit, Indrajit

    2012-08-01

    The thermal conductivity of tungsten-based CERMET fuels containing UO2 and UN fuel particles is determined as a function of particle geometry, stabilizer fraction and fuel-volume fraction, using a combination of an analytical approach and experimental data collected from the literature. Thermal conductivity is estimated using the Bruggeman-Fricke model. This study demonstrates that the thermal conductivities of various CERMET fuels can be analytically predicted to values very close to the experimentally determined ones.

  17. Fuzzy Linear Programming and its Application in Home Textile Firm

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2011-06-01

    In this paper, a new fuzzy linear programming (FLP) methodology using a specific membership function, named the modified logistic membership function, is proposed. The modified logistic membership function is first formulated, and its flexibility in accommodating vagueness in parameters is established by an analytical approach. The developed FLP methodology provides confidence in applying it to a real-life industrial production planning problem. This approach to solving industrial production planning problems allows feedback among the decision maker, the implementer and the analyst.

  18. Target analyte quantification by isotope dilution LC-MS/MS directly referring to internal standard concentrations--validation for serum cortisol measurement.

    PubMed

    Maier, Barbara; Vogeser, Michael

    2013-04-01

    Isotope dilution LC-MS/MS methods used in the clinical laboratory typically involve multi-point external calibration in each analytical series. Our aim was to test the hypothesis that determination of target analyte concentrations directly derived from the relation of the target analyte peak area to the peak area of a corresponding stable isotope labelled internal standard compound [direct isotope dilution analysis (DIDA)] may not be inferior to conventional external calibration with respect to accuracy and reproducibility. Quality control samples and human serum pools were analysed in a comparative validation protocol for cortisol as an exemplary analyte by LC-MS/MS. Accuracy and reproducibility were compared between quantification either involving a six-point external calibration function, or a result calculation merely based on peak area ratios of unlabelled and labelled analyte. Both quantification approaches resulted in similar accuracy and reproducibility. For specified analytes, reliable analyte quantification directly derived from the ratio of peak areas of labelled and unlabelled analyte without the need for a time consuming multi-point calibration series is possible. This DIDA approach is of considerable practical importance for the application of LC-MS/MS in the clinical laboratory where short turnaround times often have high priority.
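The DIDA calculation itself reduces to a one-line ratio. A minimal sketch, with hypothetical peak areas and spike concentration (illustrative values, not from the validation study):

```python
def dida_concentration(area_analyte, area_is, conc_is):
    """Direct isotope dilution analysis (DIDA): target concentration from
    the peak-area ratio of analyte to its stable isotope labelled
    internal standard, spiked at a known concentration."""
    return (area_analyte / area_is) * conc_is

# Hypothetical LC-MS/MS peak areas; internal standard spiked at 100 nmol/L.
print(dida_concentration(45000.0, 30000.0, 100.0))  # -> 150.0 nmol/L
```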

  19. Spin–orbit DFT with Analytic Gradients and Applications to Heavy Element Compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhiyong

    We have implemented the unrestricted DFT approach with one-electron spin–orbit operators in the massively parallel NWChem program. Also implemented is the analytic gradient in the DFT approach with spin–orbit interactions. The current capabilities include single-point calculations and geometry optimization. Vibrational frequencies can be calculated numerically from the analytically calculated gradients. The implementation is based on the spin–orbit interaction operator derived from the effective core potential approach. The exchange functionals used in the implementation are functionals derived for non-spin–orbit calculations, including GGA as well as hybrid functionals. Spin–orbit Hartree–Fock calculations can also be carried out. We have applied the spin–orbit DFT methods to the Uranyl aqua complexes. We have optimized the structures and calculated the vibrational frequencies of both (UO2 2+)aq and (UO2 +)aq with and without spin–orbit effects. The effects of the spin–orbit interaction on the structures and frequencies of these two complexes are discussed. We also carried out calculations for Th2, and several low-lying electronic states are calculated. Our results indicate that, for open-shell systems, there are significant spin–orbit effects, and the electronic configurations with and without spin–orbit interactions can differ due to the occupation of orbitals with larger spin–orbit interactions.

  20. Correction for isotopic interferences between analyte and internal standard in quantitative mass spectrometry by a nonlinear calibration function.

    PubMed

    Rule, Geoffrey S; Clark, Zlatuse D; Yue, Bingfang; Rockwood, Alan L

    2013-04-16

    Stable isotope-labeled internal standards are of great utility in providing accurate quantitation in mass spectrometry (MS). An implicit assumption has been that there is no "cross talk" between signals of the internal standard and the target analyte. In some cases, however, naturally occurring isotopes of the analyte do contribute to the signal of the internal standard. This phenomenon becomes more pronounced for isotopically rich compounds, such as those containing sulfur, chlorine, or bromine, higher molecular weight compounds, and those at high analyte/internal standard concentration ratio. This can create nonlinear calibration behavior that may bias quantitative results. Here, we propose the use of a nonlinear but more accurate fitting of data for these situations that incorporates one or two constants determined experimentally for each analyte/internal standard combination and an adjustable calibration parameter. This fitting provides more accurate quantitation in MS-based assays where contributions from analyte to stable labeled internal standard signal exist. It can also correct for the reverse situation where an analyte is present in the internal standard as an impurity. The practical utility of this approach is described, and by using experimental data, the approach is compared to alternative fits.
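
    The curvature described above can be illustrated with a generic hyperbolic calibration form, assuming a fixed cross-contribution fraction of the analyte signal into the internal-standard channel. The model and the constant `F_CROSS` below are a sketch of the idea, not the published fitting function:

```python
import numpy as np
from scipy.optimize import curve_fit

# If natural isotopes of the analyte contribute a fixed fraction F_CROSS
# of the analyte signal to the internal-standard channel, the measured
# area ratio bends hyperbolically with concentration x:
#     R(x) = a*x / (1 + F_CROSS*a*x)
# (F_CROSS would be determined once per analyte/IS pair; a is the
# adjustable calibration parameter.)

F_CROSS = 0.05  # assumed, experimentally determined cross-contribution

def ratio_model(x, a):
    return a * x / (1.0 + F_CROSS * a * x)

# Synthetic, noiseless calibrators generated from the same model
x_cal = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
r_cal = ratio_model(x_cal, 0.8)

(a_fit,), _ = curve_fit(ratio_model, x_cal, r_cal, p0=[1.0])

def concentration(r, a):
    """Invert the nonlinear calibration: x = r / (a * (1 - F_CROSS * r))."""
    return r / (a * (1.0 - F_CROSS * r))

print(round(concentration(ratio_model(10.0, 0.8), a_fit), 3))
```

    A straight-line fit through the same points would bias high-ratio samples; the hyperbolic form absorbs the cross-talk into the calibration itself.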

  1. An Analytical Model for Two-Order Asperity Degradation of Rock Joints Under Constant Normal Stiffness Conditions

    NASA Astrophysics Data System (ADS)

    Li, Yingchun; Wu, Wei; Li, Bo

    2018-05-01

    Jointed rock masses around underground excavations are commonly loaded under the constant normal stiffness (CNS) condition. This paper presents an analytical formulation to predict the shear behaviour of rough rock joints under the CNS condition. The dilatancy and deterioration of two-order asperities are quantified by considering the variation of normal stress. We separately consider the dilation angles of waviness and unevenness, which decrease to zero as the normal stress approaches the transitional stress. A sinusoidal function naturally yields the decay of the dilation angle as a function of relative normal stress. We assume that the magnitude of the transitional stress is proportional to the square root of the asperity geometric area. Comparison between the analytical predictions and experimental data shows the reliability of the analytical model. All parameters involved in the model possess explicit physical meanings and are measurable from laboratory tests. The proposed model is potentially practicable for assessing the stability of underground structures at various field scales.
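
    On one plausible reading of the abstract (the exact published expression may differ), the decaying dilation angle of each asperity order can be written as a sinusoid that vanishes at the transitional stress:

```latex
\psi(\sigma_n) \;=\; \psi_0 \,\cos\!\left(\frac{\pi}{2}\,\frac{\sigma_n}{\sigma_T}\right),
\qquad 0 \le \sigma_n \le \sigma_T,
\qquad \sigma_T \propto \sqrt{A_g},
```

    where $\psi_0$ is the initial dilation angle and $A_g$ the asperity geometric area; separate $(\psi_0, \sigma_T)$ pairs would apply to waviness and to unevenness.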

  2. A complete analytical solution of the Fokker-Planck and balance equations for nucleation and growth of crystals

    NASA Astrophysics Data System (ADS)

    Makoveeva, Eugenya V.; Alexandrov, Dmitri V.

    2018-01-01

    This article is concerned with a new analytical description of nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is solved analytically by means of the saddle-point technique for a Laplace-type integral, in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile over time. This article is part of the theme issue `From atomistic interfaces to dendritic patterns'.
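
    The saddle-point evaluation of a Laplace-type integral that underlies this solution can be summarized by the standard leading-order formula (the textbook form, not the paper's specific kernels): for a smooth amplitude $f$ and phase $g$ with an interior maximum at $t_0$, where $g'(t_0) = 0$ and $g''(t_0) < 0$,

```latex
\int_a^b f(t)\, e^{\lambda g(t)}\,\mathrm{d}t
\;\sim\;
f(t_0)\, e^{\lambda g(t_0)}
\sqrt{\frac{2\pi}{\lambda\,\lvert g''(t_0)\rvert}},
\qquad \lambda \to \infty .
```

    Higher-order corrections follow from expanding $f$ and $g$ further about $t_0$, but the leading term already captures the large-$\lambda$ behaviour used in such analyses.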

  3. ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction

    PubMed Central

    Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo

    2015-01-01

    By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302

  4. Comparison of Three Methods for Wind Turbine Capacity Factor Estimation

    PubMed Central

    Ditkovich, Y.; Kuperman, A.

    2014-01-01

    Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasiexact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, together with a continuous turbine power curve obtained by double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while an approximation, can be solved analytically, providing valuable insight into the factors affecting the capacity factor; moreover, several other wind turbine performance metrics may be derived from it. The third, "approximate" approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve and requires only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755
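
    The "quasiexact" route above amounts to integrating a power curve against a wind-speed distribution. The sketch below does this numerically for a Rayleigh pdf and an assumed piecewise power curve; the cut-in/rated/cut-out speeds and the cubic ramp are illustrative placeholders, not values from the paper:

```python
import numpy as np

V_IN, V_R, V_OUT = 3.0, 12.0, 25.0  # m/s (assumed cut-in, rated, cut-out)

def power_curve(v, p_rated=1.0):
    """Piecewise power curve: cubic ramp between cut-in and rated speed."""
    v = np.asarray(v, dtype=float)
    p = np.zeros_like(v)
    ramp = (v >= V_IN) & (v < V_R)
    p[ramp] = p_rated * (v[ramp] ** 3 - V_IN ** 3) / (V_R ** 3 - V_IN ** 3)
    p[(v >= V_R) & (v <= V_OUT)] = p_rated
    return p

def rayleigh_pdf(v, v_mean):
    """Rayleigh wind-speed pdf parameterized by the mean speed."""
    return (np.pi * v / (2.0 * v_mean ** 2)) * np.exp(-np.pi * v ** 2 / (4.0 * v_mean ** 2))

def capacity_factor(v_mean, p_rated=1.0):
    """CF = E[P(v)] / P_rated, by trapezoid-free Riemann summation."""
    v = np.linspace(0.0, 40.0, 4001)
    dv = v[1] - v[0]
    return float(np.sum(power_curve(v, p_rated) * rayleigh_pdf(v, v_mean)) * dv / p_rated)

print(f"CF at 7 m/s mean wind: {capacity_factor(7.0):.3f}")
```

    Replacing the numerical sum with a closed-form integral of the fitted pdf and polynomial power curve is exactly what the "analytic" approach does.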

  5. Differentiation from First Principles Using Spreadsheets

    ERIC Educational Resources Information Center

    Lim, Kieran F.

    2008-01-01

    In the teaching of calculus, the algebraic derivation of the derivative (gradient function) enables the student to obtain an analytic "global" gradient function. However, to the best of this author's knowledge, all current technology-based approaches require the student to obtain the derivative (gradient) at a single point by…

  6. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among surface reconstruction algorithms. Like other second-order methods, the MoF method must solve an implicit optimization problem to obtain the optimal approximate surface, so the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, these derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that, using the analytical derivatives, the iterations converge in all mixed cells with an efficiency improvement of 3 to 4 times.
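
    The accuracy gap between numerical and analytical derivatives motivating this work can be seen on any smooth objective (the function below is illustrative, not the MoF objective): a central finite difference carries an O(h²) truncation error plus round-off, while the analytic derivative is exact to machine precision.

```python
import numpy as np

def f(x):
    return np.sin(3.0 * x) + x ** 2

def df_analytic(x):
    return 3.0 * np.cos(3.0 * x) + 2.0 * x

def df_central(x, h):
    # Central finite difference: error ~ h^2 * |f'''|/6 + round-off/h
    return (f(x + h) - f(x - h)) / (2.0 * h)

x0 = 0.7
for h in (1e-2, 1e-4, 1e-6, 1e-10):
    err = abs(df_central(x0, h) - df_analytic(x0))
    print(f"h = {h:.0e}   |FD - analytic| = {err:.2e}")
```

    Note the non-monotone behaviour: shrinking h first reduces truncation error, then round-off takes over, which is precisely the robustness problem an analytic gradient avoids inside an optimization loop.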

  7. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Anderson, M. R.; Schmidt, D. K.

    1986-01-01

    In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R.E. Smith are also presented.
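
    The optimal-control step named above, with a quadratic objective, reduces to an algebraic Riccati equation. The sketch below solves it for a double-integrator stand-in plant; the dynamics and weights are illustrative, not the LAHOS aircraft models:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# For dynamics x' = Ax + Bu and objective J = integral of (x'Qx + u'Ru),
# the optimal state feedback u = -Kx follows from the Riccati solution P.

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # e.g. height error and sink rate (assumed)
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])        # penalize height error and sink rate
R = np.array([[1.0]])           # penalize control effort (workload proxy)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal gains, u = -K x
print("gains:", K)
```

    The weights in Q and R are where the "simple, reasonable objective function" enters: heavier state weights tighten tracking, while a heavier R models a pilot limiting control activity.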

  8. Constructing and Deriving Reciprocal Trigonometric Relations: A Functional Analytic Approach

    ERIC Educational Resources Information Center

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K.; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed…

  9. Functional Communication Training: A Contemporary Behavior Analytic Intervention for Problem Behaviors.

    ERIC Educational Resources Information Center

    Durand, V. Mark; Merges, Eileen

    2001-01-01

    This article describes functional communication training (FCT) with students who have autism. FCT involves teaching alternative communication strategies to replace problem behaviors. The article reviews the conditions under which this intervention is successful and compares the method with other behavioral approaches. It concludes that functional…

  10. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  11. On the Utility of Content Analysis in Author Attribution: "The Federalist."

    ERIC Educational Resources Information Center

    Martindale, Colin; McKenzie, Dean

    1995-01-01

    Compares the success of lexical statistics, content analysis, and function words in determining the true author of "The Federalist." The function word approach proved most successful in attributing the papers to James Madison. Lexical statistics contributed nothing, while content analytic measures resulted in some success. (MJP)
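
    The function-word approach that succeeded here can be sketched as comparing normalized frequencies of content-free words across candidate texts. The word list and text snippets below are toy placeholders, not the study's corpus or word set:

```python
from collections import Counter
import re

FUNCTION_WORDS = ["by", "upon", "on", "of", "to", "the"]

def profile(text):
    """Normalized frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    counts = Counter(words)
    return {w: counts[w] / total for w in FUNCTION_WORDS}

def distance(p, q):
    """L1 distance between two function-word profiles."""
    return sum(abs(p[w] - q[w]) for w in FUNCTION_WORDS)

known_madison = "the powers delegated by the proposed constitution to the federal government are few and defined"
known_hamilton = "upon the whole it will be found that the convention acted upon a principle of mutual deference"
disputed = "the operations of the federal government will be most extensive in times of war and danger"

d_m = distance(profile(disputed), profile(known_madison))
d_h = distance(profile(disputed), profile(known_hamilton))
print("closer to Madison" if d_m < d_h else "closer to Hamilton")
```

    Real attribution studies use much longer texts, larger word lists, and formal discriminant or Bayesian models, but the underlying signal is this frequency profile.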

  12. Transfer function verification and block diagram simplification of a very high-order distributed pole closed-loop servo by means of non-linear time-response simulation

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1975-01-01

    Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.
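
    The truncation step described above can be illustrated on a toy transfer function (not the VO75 servo model): a fast, well-damped pole far beyond the bandwidth of interest is dropped, and the step responses are compared to confirm the reduced block is an adequate stand-in.

```python
import numpy as np
from scipy import signal

# Full model: 1 / ((s + 1)(0.01 s + 1)), poles at -1 and -100.
full = signal.TransferFunction([1.0], np.polymul([1.0, 1.0], [0.01, 1.0]))
# Reduced model: drop the fast pole at -100.
reduced = signal.TransferFunction([1.0], [1.0, 1.0])

t = np.linspace(0.0, 8.0, 801)
_, y_full = signal.step(full, T=t)
_, y_red = signal.step(reduced, T=t)

print("max step-response mismatch:", float(np.max(np.abs(y_full - y_red))))
```

    In a time-response validation like the paper's, the truncation is accepted only if this mismatch stays within the tolerance of the measured data over the operating bandwidth.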

  13. An analytical method for assessing the spatial and temporal variation of juvenile Atlantic salmon habitat in an upland Scottish river.

    NASA Astrophysics Data System (ADS)

    Buddendorf, B.; Fabris, L.; Malcolm, I.; Lazzaro, G.; Tetzlaff, D.; Botter, G.; Soulsby, C.

    2016-12-01

    Wild Atlantic salmon populations in Scottish rivers constitute an important economic and recreational resource, as well as being a key component of biodiversity. Salmon have specific habitat requirements at different life stages and their distribution is therefore strongly influenced by a complex suite of biological and physical controls. Stream hydrodynamics have a strong influence on habitat quality and affect the distribution and density of juvenile salmon. As stream hydrodynamics directly relate to stream flow variability and channel morphology, the effects of hydroclimatic drivers on the spatial and temporal variability of habitat suitability can be assessed. Critical Displacement Velocity (CDV), which describes the velocity at which fish can no longer hold station, is one potential approach for characterising habitat suitability. CDV is obtained using an empirical formula that depends on fish size and stream temperature. By characterising the proportion of a reach below CDV it is possible to assess the suitable area. We demonstrate that a generic analytical approach based on field survey and hydraulic modelling can provide insights on the interactions between flow regime and average suitable area (SA) for juvenile salmon that could be extended to other aquatic species. Analytical functions are used to model the pdf of stream flow p(q) and the relationship between flow and suitable area SA(q). Theoretically these functions can assume any form. Here we used a gamma distribution to model p(q) and a gamma function to model SA(q). Integrating the product of these functions we obtain an analytical expression of SA. Since parameters of p(q) can be estimated from meteorological and flow measurements, they can be used directly to predict the effect of flow regime on SA. We show the utility of the approach with reference to 6 electrofishing sites in a single river system where long term (50 years) data on spatially distributed juvenile salmon densities are available.
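
    The central quantity here is the expectation of SA(q) under the flow pdf. The sketch below evaluates that integral numerically with a gamma-distributed p(q) and an illustrative exponentially decaying SA(q); both shapes and all parameter values are placeholders, not fitted values from the study:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Expected suitable area: E[SA] = integral of SA(q) * p(q) dq.
flow_pdf = stats.gamma(a=2.0, scale=1.5).pdf      # p(q), mean flow = 3.0 (assumed)

def suitable_area(q, sa_max=100.0, q_scale=4.0):
    """Illustrative SA(q) in m^2: suitable area shrinks as flow rises."""
    return sa_max * np.exp(-q / q_scale)

expected_sa, _ = quad(lambda q: suitable_area(q) * flow_pdf(q), 0.0, np.inf)
print(f"expected suitable area: {expected_sa:.1f} m^2")
```

    With these particular forms the integral also has a closed form, 100/(1 + 1.5/4)^2 m^2, which is the kind of analytical expression the paper exploits: once p(q) is parameterized from flow records, E[SA] follows without simulation.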

  14. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. When the shapes of the PDFs are known, different optimization schemes can be applied to experimental data in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10⁴). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications.

  15. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
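
    For context, the standard NEF decoder computation that the paper's analytical decoders replace is a regularized least-squares solve over sampled tuning curves. The sketch below uses rectified-linear rates as a stand-in for type-I response curves; the tuning-curve form and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                   # neurons
x = np.linspace(-1.0, 1.0, 101)           # represented variable

gains = rng.uniform(0.5, 2.0, size=N)
biases = rng.uniform(-1.0, 1.0, size=N)
encoders = rng.choice([-1.0, 1.0], size=N)

# Firing-rate matrix A[i, j] = rate of neuron i at x[j]
A = np.maximum(0.0, gains[:, None] * (encoders[:, None] * x[None, :]) + biases[:, None])

target = x                                # decode the identity function f(x) = x
lam = 0.01                                # ridge regularization
Gamma = A @ A.T + lam * np.eye(N)         # requires the large matrix inversion
upsilon = A @ target
d = np.linalg.solve(Gamma, upsilon)       # decoders

xhat = A.T @ d
print("RMS decode error:", float(np.sqrt(np.mean((xhat - target) ** 2))))
```

    The decode error shrinks as N grows; the paper's contribution is obtaining such decoders in closed form, avoiding the `Gamma` solve entirely.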

  17. Unimolecular diffusion-mediated reactions with a nonrandom time-modulated absorbing barrier

    NASA Technical Reports Server (NTRS)

    Bashford, D.; Weaver, D. L.

    1986-01-01

    A diffusion-reaction model with time-dependent reactivity is formulated and applied to unimolecular reactions. The model is solved exactly (numerically) and approximately (analytically) for the unreacted fraction as a function of time. It is shown that the approximate analytical solution is valid even when the system is far from equilibrium, and when the reactivity probability is more complicated than a square-wave function of time. An approach to problems of this type using a stochastically fluctuating reactivity is also discussed, and the first-passage time for a particular example is derived.

  18. A New Computational Method to Fit the Weighted Euclidean Distance Model.

    ERIC Educational Resources Information Center

    De Leeuw, Jan; Pruzansky, Sandra

    1978-01-01

    A computational method for weighted euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)

  19. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
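
    The discrete-convolution idea above can be sketched in a few lines: the outflow is the inflow hydrograph convolved with a sampled response function whose ordinates sum to one, so mass is conserved. The kernel shape, time step, and pulse below are placeholders, not the Hayami solution itself:

```python
import numpy as np

dt = 0.5                                   # h (assumed time step)
t = np.arange(0.0, 12.0, dt)
kernel = t * np.exp(-t / 1.5)              # illustrative response shape
kernel /= kernel.sum()                     # normalize so mass is conserved

inflow = np.zeros(48)
inflow[4:10] = 10.0                        # a 3-hour pulse of 10 m^3/s

# Discrete convolution replaces the continuous convolution integral
outflow = np.convolve(inflow, kernel)[: inflow.size]
print("peak attenuated from", inflow.max(), "to", round(float(outflow.max()), 2))
```

    Because the kernel is normalized, the full convolution carries exactly the inflow volume; the routing only spreads and delays it, which is the attenuation-translation behaviour a diffusion-wave channel produces.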

  20. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating the parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. The approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation, since its fitting quality depends strongly on the initial guess owing to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way, and interval arithmetic to derive the feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model with several alternative normal distribution functions, namely the Beckmann, Berry, and GGX functions. Experiments were carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that L1-norm minimization provides a more accurate and reliable solution than L2-norm minimization.
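
    For reference, the GGX normal distribution function named in the abstract has the widely used standard form (with roughness parameter $\alpha$, surface normal $\mathbf{n}$, and half vector $\mathbf{h}$; quoted from common usage, not reproduced from the paper):

```latex
D_{\mathrm{GGX}}(\mathbf{h}) \;=\;
\frac{\alpha^{2}}
{\pi\left[(\mathbf{n}\cdot\mathbf{h})^{2}\,(\alpha^{2}-1)+1\right]^{2}} .
```

    The nonlinearity of such NDFs in $\alpha$ is exactly what makes local fitting initialization-sensitive and motivates the branch and bound search.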

  1. Chemoselective synthesis and analysis of naturally occurring phosphorylated cysteine peptides

    PubMed Central

    Bertran-Vicente, Jordi; Penkert, Martin; Nieto-Garcia, Olaia; Jeckelmann, Jean-Marc; Schmieder, Peter; Krause, Eberhard; Hackenberger, Christian P. R.

    2016-01-01

    In contrast to protein O-phosphorylation, study of the function of the less frequent N- and S-phosphorylation events has lagged behind because these modifications have chemical features that prevent their manipulation through standard synthetic and analytical methods. Here we report on the development of a chemoselective synthetic method to phosphorylate Cys side-chains in unprotected peptides. This approach makes use of a reaction between nucleophilic phosphites and electrophilic disulfides accessible by standard methods. We achieve the stereochemically defined phosphorylation of a Cys residue and verify the modification using electron-transfer higher-energy dissociation (EThcD) mass spectrometry. To demonstrate the use of the approach in resolving biological questions, we identify an endogenous Cys phosphorylation site in IICBGlc, which is known to be involved in carbohydrate uptake by the bacterial phosphotransferase system (PTS). This new chemical and analytical approach finally allows further investigation of the functions and significance of Cys phosphorylation in a wide range of crucial cellular processes. PMID:27586301

  2. A Variational Approach to the Analysis of Dissipative Electromechanical Systems

    PubMed Central

    Allison, Andrew; Pearce, Charles E. M.; Abbott, Derek

    2014-01-01

    We develop a method for systematically constructing Lagrangian functions for dissipative mechanical, electrical, and electromechanical systems. We derive the equations of motion for some typical electromechanical systems using deterministic principles that are strictly variational. We do not use any ad hoc features that are added on after the analysis has been completed, such as the Rayleigh dissipation function. We generalise the concept of potential, and define generalised potentials for dissipative lumped system elements. Our innovation offers a unified approach to the analysis of electromechanical systems where there are energy and power terms in both the mechanical and electrical parts of the system. Using our technique, we can take advantage of the analytic approach of mechanics and apply these powerful analytical methods to electrical and electromechanical systems, including systems with non-conservative forces. Our methodology is deterministic, does not require any special intuition, and is thus suitable for automation via a computer-based algebra package. PMID:24586221

  3. Almost analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2017-11-01

    We present a new, almost analytical approach to solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of solving this matrix eigenvalue problem purely numerically, which may suffer from computational inaccuracy for big data, we first consider a pair of integral and differential equations related to the so-called prolate spheroidal wave functions (PSWF). For the PSWF differential equation, the eigenvector (PSWF) and eigenvalue pairs can be obtained from a relatively small number of analytical Legendre functions. The eigenvalues in the PSWF integral equation are then expressed in terms of functional values of the PSWF and the eigenvalues of the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data: ordinary irregular waves and rogue waves. We found that the present almost analytical method is better than the conventional data-independent Fourier representation and also the conventional direct numerical K-L representation in terms of both accuracy and computational cost. This work was supported by the National Research Foundation of Korea (NRF). (NRF-2017R1D1A1B03028299).
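
    For orientation, the purely numerical K-L baseline that this method accelerates is an eigendecomposition of the sample covariance of the data ensemble, followed by expansion in the leading eigenvectors. The synthetic two-component "wave" ensemble below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_rec = 256, 500
t = np.linspace(0.0, 2.0 * np.pi, n_t)

# Random-phase two-frequency records as stand-in wave data (rank-4 ensemble)
data = np.array([
    np.sin(3 * t + rng.uniform(0, 2 * np.pi)) + 0.5 * np.sin(7 * t + rng.uniform(0, 2 * np.pi))
    for _ in range(n_rec)
]).T                                        # shape (n_t, n_rec)

cov = data @ data.T / n_rec                 # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 4                                       # two sinusoids -> 4 K-L modes
coeffs = eigvecs[:, :k].T @ data            # K-L coefficients
recon = eigvecs[:, :k] @ coeffs             # truncated K-L reconstruction
print("relative L2 error with 4 modes:",
      float(np.linalg.norm(recon - data) / np.linalg.norm(data)))
```

    For large, noisy records this eigensolve is exactly the expensive and potentially inaccurate step the PSWF-based construction of the kernel matrix is designed to replace.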

  4. Universal analytical scattering form factor for shell-, core-shell, or homogeneous particles with continuously variable density profile shape.

    PubMed

    Foster, Tobias

    2011-09-01

    A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Composed by the sum of two Fermi-Dirac distribution functions, the shape of the density profile can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
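
    The role of the shape parameter d can be illustrated with a single Fermi-Dirac edge (the published profile sums two such terms; one edge suffices to show how d morphs the profile from step-like to smooth, and the parameters below are illustrative):

```python
import numpy as np

def fermi_profile(r, R=1.0, d=0.02):
    """Radial density: ~1 inside r < R, ~0 outside, edge width set by d."""
    return 1.0 / (1.0 + np.exp((r - R) / d))

r = np.linspace(0.0, 2.0, 1001)
for d in (0.01, 0.05, 0.2):
    rho = fermi_profile(r, d=d)
    # 10%-90% edge width; analytically 2*d*ln(9) for a logistic edge
    edge = r[rho > 0.1][-1] - r[rho > 0.9][-1]
    print(f"d = {d:4.2f}  ->  10-90% edge width = {edge:.3f}")
```

    Small d/R gives the hard-sphere limit, while larger d smooths the edge toward the Gaussian-like and parabolic shapes the form factor is meant to capture.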

  5. Basic emotion processing and the adolescent brain: Task demands, analytic approaches, and trajectories of changes.

    PubMed

    Del Piero, Larissa B; Saxbe, Darby E; Margolin, Gayla

    2016-06-01

    Early neuroimaging studies suggested that adolescents show initial development in brain regions linked with emotional reactivity, but slower development in brain structures linked with emotion regulation. However, the increased sophistication of adolescent brain research has made this picture more complex. This review examines functional neuroimaging studies that test for differences in basic emotion processing (reactivity and regulation) between adolescents and either children or adults. We delineated the different emotional processing demands across the experimental paradigms in the reviewed studies in order to synthesize their diverse results. The methods for assessing change (i.e., analytical approach) and cohort characteristics (e.g., age range) were also explored as potential factors influencing study results. Few unifying dimensions were found to successfully distill the results of the reviewed studies. However, this review highlights the potential impact of subtle methodological and analytic differences between studies, the need for standardized and theory-driven experimental paradigms, and the necessity of analytic approaches that can adequately test the trajectories of developmental change that have recently been proposed. Recommendations for future research highlight connectivity analyses and non-linear developmental trajectories, which appear to be promising approaches for measuring change across adolescence. Recommendations are also made for evaluating gender and biological markers of development beyond chronological age. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Theory for the three-dimensional Mercedes-Benz model of water.

    PubMed

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A

    2009-11-21

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation.

  7. Theory for the three-dimensional Mercedes-Benz model of water

    PubMed Central

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A.

    2009-01-01

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the “right answer,” we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim’s Ornstein–Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and of aqueous solvation. PMID:19929057

  8. Theory for the three-dimensional Mercedes-Benz model of water

    NASA Astrophysics Data System (ADS)

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A.

    2009-11-01

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and of aqueous solvation.

  9. Basic emotion processing and the adolescent brain: Task demands, analytic approaches, and trajectories of changes

    PubMed Central

    Del Piero, Larissa B.; Saxbe, Darby E.; Margolin, Gayla

    2016-01-01

    Early neuroimaging studies suggested that adolescents show initial development in brain regions linked with emotional reactivity, but slower development in brain structures linked with emotion regulation. However, the increased sophistication of adolescent brain research has made this picture more complex. This review examines functional neuroimaging studies that test for differences in basic emotion processing (reactivity and regulation) between adolescents and either children or adults. We delineated different emotional processing demands across the experimental paradigms in the reviewed studies to synthesize the diverse results. The methods for assessing change (i.e., analytical approach) and cohort characteristics (e.g., age range) were also explored as potential factors influencing study results. Few unifying dimensions were found to successfully distill the results of the reviewed studies. However, this review highlights the potential impact of subtle methodological and analytic differences between studies, the need for standardized and theory-driven experimental paradigms, and the necessity of analytic approaches that can adequately test the trajectories of developmental change that have recently been proposed. Recommendations for future research highlight connectivity analyses and nonlinear developmental trajectories, which appear to be promising approaches for measuring change across adolescence. Recommendations are also made for evaluating gender and biological markers of development beyond chronological age. PMID:27038840

  10. An Evidence-Driven, Solution-Focused Approach to Functional Behavior Assessment Report Writing

    ERIC Educational Resources Information Center

    Farmer, Ryan L.; Floyd, Randy G.

    2016-01-01

    School-based practitioners are to implement and report functional behavior assessments (FBAs) that are consistent with both the research literature and the law, both federal and state. However, the literature regarding how best to document the FBA process is underdeveloped. A review of applied behavior analytic and school psychology literature as…

  11. Quantifying wall turbulence via a symmetry approach: A Lie group theory

    NASA Astrophysics Data System (ADS)

    She, Zhen-Su; Chen, Xi; Hussain, Fazle

    2017-11-01

    We present a symmetry-based approach which yields analytic expressions for the mean velocity and kinetic energy profiles from a Lie-group analysis. After verifying the dilation-group invariance of the Reynolds averaged Navier-Stokes equation in the presence of a wall, we select a stress and energy length function as similarity variables which are assumed to have a simple dilation-invariant form. Three kinds of (local) invariant forms of the length functions are postulated, a combination of which yields a multi-layer formula giving the length-function distribution in the entire flow region normal to the wall. The mean velocity profile is then predicted using the mean momentum equation, which yields, in particular, analytic expressions for the (universal) wall function and separate wake functions for pipe and channel - which are validated by data from direct numerical simulations (DNS). Future applications to a variety of wall flows such as flows around a flat plate or an airfoil, in a Rayleigh-Bénard cell or a Taylor-Couette system, etc., are discussed, for which the dilation-group invariance is valid in the wall-normal direction.

  12. Molecular properties of excited electronic state: Formalism, implementation, and applications of analytical second energy derivatives within the framework of the time-dependent density functional theory/molecular mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Qiao; Liang, WanZhen, E-mail: liangwz@xmu.edu.cn; Liu, Jie

    2014-05-14

    This work extends our previous works [J. Liu and W. Z. Liang, J. Chem. Phys. 135, 014113 (2011); J. Liu and W. Z. Liang, J. Chem. Phys. 135, 184111 (2011)] on the analytical excited-state energy Hessian within the framework of time-dependent density functional theory (TDDFT) to couple with molecular mechanics (MM). The formalism, implementation, and applications of analytical first and second energy derivatives of the TDDFT/MM excited state with respect to nuclear and electric perturbations are presented. Their performances are demonstrated by calculations of adiabatic excitation energies, excited-state geometries, harmonic vibrational frequencies, and infrared intensities for a number of benchmark systems. The results, consistent with the full quantum mechanical method and other hybrid theoretical methods, indicate the reliability of the current numerical implementation of the developed algorithms. The computational accuracy and efficiency of the current analytical approach are also checked, and computationally efficient strategies are suggested to speed up the calculations of complex systems with many MM degrees of freedom. Finally, we apply the current analytical approach in TDDFT/MM to a realistic system, a red fluorescent protein chromophore together with part of its nearby protein matrix. The calculated results indicate that the rearrangement of the hydrogen bond interactions between the chromophore and the protein matrix is responsible for the large Stokes shift.

  13. Modeling of phonon scattering in n-type nanowire transistors using one-shot analytic continuation technique

    NASA Astrophysics Data System (ADS)

    Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel

    2013-10-01

    We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still well approximates the SCBA current characteristics, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.
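    The general idea of resumming a lowest-order perturbative result by a simple analytic continuation can be illustrated, purely schematically, with a [1/1] Padé (geometric) continuation of a two-term series. The paper's actual LOA+AC construction is more involved, so the function and formula below are an illustrative assumption, not the authors' method:

```python
def pade_11(t1, t2):
    """[1/1] Pade resummation of a series t1 + t2 + ...:
    the geometric continuation t1 / (1 - t2/t1)."""
    return t1 / (1.0 - t2 / t1)

# For a geometric series a*r**k the two-term continuation is exact:
a, r = 2.0, 0.3
t1, t2 = a, a * r           # first two terms of the series
exact = a / (1 - r)         # full sum
approx = pade_11(t1, t2)    # continuation built from the truncation
```

    The point of such continuations is that the resummed expression remains finite and accurate in regimes where the bare truncation t1 + t2 degrades.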

  14. Experimental issues related to frequency response function measurements for frequency-based substructuring

    NASA Astrophysics Data System (ADS)

    Nicgorski, Dana; Avitabile, Peter

    2010-07-01

    Frequency-based substructuring is a very popular approach for the generation of system models from component measured data. Analytically, the approach has been shown to produce accurate results. However, implementation with actual test data can cause difficulties and degrade the system response prediction. In order to produce good results, extreme care is needed in the measurement of the drive point and transfer impedances of the structure, as well as in observing all the conditions for a linear time-invariant system. Several studies have been conducted to show the sensitivity of the technique to small variations that often occur during typical testing of structures. These variations have been observed in actual tested configurations and have been substantiated with analytical models that replicate the problems typically encountered. The use of analytically simulated issues helps to clearly see the effects of typical measurement difficulties often observed in test data. This paper presents some of these common problems and provides guidance and recommendations for data to be used with this modeling approach.

  15. Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation

    NASA Astrophysics Data System (ADS)

    Quiroz, Gregory

    Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.
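    The time-optimal control referred to here is known analytically for Grover's search (the Roland-Cerf "local adiabatic" schedule): the sweep rate is slowed in proportion to the square of the instantaneous gap. A numerical sketch of the resulting runtime scaling follows; the adiabaticity parameter eps and the grid size are arbitrary illustrative choices:

```python
import numpy as np

def grover_gap(s, N):
    """Instantaneous gap profile for adiabatic Grover search over N items."""
    return np.sqrt(1.0 - 4.0 * (1.0 - 1.0 / N) * s * (1.0 - s))

def runtime(N, eps=0.1, n=1_000_000):
    """Local adiabatic schedule ds/dt = eps * g(s)^2, so the total time is
    T = (1/eps) * integral of ds / g(s)^2 (midpoint rule on [0, 1])."""
    s = (np.arange(n) + 0.5) / n
    return (1.0 / (eps * grover_gap(s, N) ** 2)).sum() / n

# Quadrupling N roughly doubles T: the quadratic (sqrt-N) quantum speedup
ratio = runtime(4096) / runtime(1024)
```

    A naive linear schedule would instead need T scaling like N to stay adiabatic at the minimum gap, which is the contrast the abstract alludes to.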

  16. Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows

    NASA Technical Reports Server (NTRS)

    He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.

  17. Multi-gas interaction modeling on decorated semiconductor interfaces: A novel Fermi distribution-based response isotherm and the inverse hard/soft acid/base concept

    NASA Astrophysics Data System (ADS)

    Laminack, William; Gole, James

    2015-12-01

    A unique MEMS/NEMS approach is presented for modeling a detection platform for mixed-gas interactions. Mixed gas analytes interact with nanostructured decorating metal oxide island sites supported on a microporous silicon substrate. The Inverse Hard/Soft Acid/Base (IHSAB) concept is used to assess a diversity of conductometric responses for mixed-gas interactions as a function of these nanostructured metal oxides. The analyte conductometric responses are well represented using a combined diffusion/absorption-based model for multi-gas interactions, in which a newly developed response absorption isotherm based on the Fermi distribution function is applied. Coupling this model with the IHSAB concept captures the considerations in modeling multi-gas analyte-interface and analyte-analyte interactions. Taking into account the molecular electronic interactions of the analytes both with each other and with an extrinsic semiconductor interface, we demonstrate how the presence of one gas can enhance or diminish the reversible interaction of a second gas with the extrinsic semiconductor interface. These concepts demonstrate important considerations for array-based formats for multi-gas sensing and its applications.

  18. Generalized plasma dispersion function: One-solve-all treatment, visualizations, and application to Landau damping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Hua-Sheng

    2013-09-15

    A unified, fast, and effective approach is developed for numerical calculation of the well-known plasma dispersion function with extensions from the Maxwellian distribution to almost arbitrary distribution functions, such as the δ, flat top, triangular, κ or Lorentzian, slowing down, and incomplete Maxwellian distributions. The singularity and analytic continuation problems are also solved generally. Given that the usual conclusion γ∝∂f₀/∂v is only a rough approximation when discussing the distribution function effects on Landau damping, this approach provides a useful tool for rigorous calculations of the linear wave and instability properties of plasma for general distribution functions. The results are also verified via a linear initial value simulation approach. Intuitive visualizations of the generalized plasma dispersion function are also provided.
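    For the Maxwellian case, the classical plasma dispersion function Z on the real axis reduces to the Dawson integral, Z(x) = i√π e^(-x²) − 2D(x). The generalized treatment of the paper (arbitrary distributions, full complex argument, analytic continuation) is far broader; the sketch below covers only this standard special case, with an arbitrary quadrature resolution:

```python
import numpy as np

def dawson(x, n=20001):
    """Dawson integral D(x) = exp(-x^2) * int_0^x exp(t^2) dt, midpoint rule."""
    t = (np.arange(n) + 0.5) * (x / n)
    return np.exp(-x * x) * np.exp(t * t).sum() * (x / n)

def Z_real(x):
    """Maxwellian plasma dispersion function for real argument x:
    Z(x) = i*sqrt(pi)*exp(-x^2) - 2*D(x)."""
    return complex(-2.0 * dawson(x), np.sqrt(np.pi) * np.exp(-x * x))
```

    Useful checks: Z(0) = i√π, and Re Z(1) = −2D(1) ≈ −1.0762. Off the real axis one would switch to the Faddeeva function w(ζ), with Z(ζ) = i√π w(ζ).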

  19. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

    Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (classical derivative spectrophotometry). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine, and they were applied to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
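    The zero-crossing measurement underlying such calibrations can be sketched on synthetic spectra: at a wavelength where the derivative of pure component A vanishes, the derivative of a mixture depends only on component B. The band positions, widths, and grid below are invented for illustration, not taken from the study:

```python
import numpy as np

wl = np.linspace(300, 450, 1501)                    # wavelength grid, nm
gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
A = gauss(350, 15)      # hypothetical pure spectrum of component A
B = gauss(385, 15)      # hypothetical pure spectrum of component B

i0 = np.argmax(A)       # A's band maximum: dA/d(wl) crosses zero here

def read_B(cA, cB):
    """First-derivative amplitude of the mixture spectrum at A's
    zero-crossing point; by construction it tracks only cB."""
    mix = cA * A + cB * B
    return np.gradient(mix, wl)[i0]
```

    A calibration line of this amplitude against known cB then resolves B in mixtures without separation; the CWT variant replaces the derivative with wavelet-transform amplitudes at analogous zero-crossing points.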

  20. Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Sen, S. K.

    2007-01-01

    Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms as well as randomized methods such as genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/evolutionary approach is preferable, at least from the standpoint of the quality (accuracy) of the results. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time, and there are no serious differences between them in this regard. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is most suited for the problem at hand. Also, for a function that is known in tabular form instead of analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one decide which method, out of several available methods, should be employed to obtain the best (least-error) output.
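    The contrast between the two families can be made concrete with a toy comparison on a smooth bound-constrained function. The test function, learning rate, and population settings below are arbitrary illustrative choices:

```python
import numpy as np

f = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2     # toy objective
grad = lambda p: np.array([2 * (p[0] - 3.0), 2 * (p[1] + 1.0)])

def gradient_descent(p0, lr=0.1, steps=200):
    """Deterministic method: needs (and trusts) the partial derivatives."""
    p = np.array(p0, float)
    for _ in range(steps):
        p -= lr * grad(p)
    return p

def genetic(bounds, pop=40, gens=120, seed=0):
    """Stochastic method: needs only function values, no derivatives."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        P = P[np.argsort([f(x) for x in P])]             # rank by fitness
        parents = P[: pop // 2]                          # selection (elitist)
        children = parents + rng.normal(0, 0.1, parents.shape)  # mutation
        P = np.vstack([parents, np.clip(children, lo, hi)])
    return min(P, key=f)
```

    On this smooth function both succeed; the genetic route becomes preferable exactly when `grad` is noisy, amplified, or unavailable (e.g., a tabulated function), as the abstract argues.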

  1. Stimulus overselectivity four decades later: a review of the literature and its implications for current research in autism spectrum disorder.

    PubMed

    Ploog, Bertram O

    2010-11-01

    This review of several topics related to "stimulus overselectivity" (Lovaas et al., J Abnormal Psychol 77:211-222, 1971) has three main purposes: (1) To outline the factors that may contribute to overselectivity; (2) to link the behavior-analytical notion of overselectivity to current nonbehavior-analytical research and theory; and (3) to suggest remedial strategies based on the behavior-analytical approach. While it is clear that overselectivity is not specific to autism spectrum disorder (ASD) and also that not all persons with ASD exhibit overselectivity, it is prevalent in ASD and has critical implications for symptoms, treatment, research, and theory. Weak Central Coherence and Enhanced Perceptual Functioning theories are briefly considered. The research areas addressed here include theory of mind, joint attention, language development, and executive function.

  2. Bessel beam CARS of axially structured samples

    NASA Astrophysics Data System (ADS)

    Heuke, Sandro; Zheng, Juanjuan; Akimov, Denis; Heintzmann, Rainer; Schmitt, Michael; Popp, Jürgen

    2015-06-01

    We report on a Bessel beam CARS approach for axial profiling of multi-layer structures. This study presents an experimental implementation for the generation of CARS by Bessel beam excitation using only passive optical elements. Furthermore, an analytical expression is provided describing the anti-Stokes field generated by a homogeneous sample. Based on the concept of coherent transfer functions, the underlying resolving power for axially structured geometries is investigated. It is found that, through the non-linearity of the CARS process in combination with the folded illumination geometry, continuous phase-matching is achieved from homogeneous samples up to spatial sample frequencies at twice that of the pumping electric field wave. The experimental and analytical findings are modeled by implementation of the Debye integral and a scalar Green function approach. Finally, the goal of reconstructing an axially layered sample is demonstrated on the basis of the numerically simulated modulus and phase of the anti-Stokes far-field radiation pattern.

  3. Bessel beam CARS of axially structured samples.

    PubMed

    Heuke, Sandro; Zheng, Juanjuan; Akimov, Denis; Heintzmann, Rainer; Schmitt, Michael; Popp, Jürgen

    2015-06-05

    We report on a Bessel beam CARS approach for axial profiling of multi-layer structures. This study presents an experimental implementation for the generation of CARS by Bessel beam excitation using only passive optical elements. Furthermore, an analytical expression is provided describing the anti-Stokes field generated by a homogeneous sample. Based on the concept of coherent transfer functions, the underlying resolving power for axially structured geometries is investigated. It is found that, through the non-linearity of the CARS process in combination with the folded illumination geometry, continuous phase-matching is achieved from homogeneous samples up to spatial sample frequencies at twice that of the pumping electric field wave. The experimental and analytical findings are modeled by implementation of the Debye integral and a scalar Green function approach. Finally, the goal of reconstructing an axially layered sample is demonstrated on the basis of the numerically simulated modulus and phase of the anti-Stokes far-field radiation pattern.

  4. Enhancement of Raman scattering in dielectric nanostructures with electric and magnetic Mie resonances

    NASA Astrophysics Data System (ADS)

    Frizyuk, Kristina; Hasan, Mehedi; Krasnok, Alex; Alú, Andrea; Petrov, Mihail

    2018-02-01

    Resonantly enhanced Raman scattering in dielectric nanostructures has been recently proven to be an efficient tool for nanothermometry and for the experimental determination of their mode composition. In this paper we develop a rigorous analytical theory based on the Green's function approach to calculate the Raman emission from crystalline high-index dielectric nanoparticles. As an example, we consider silicon nanoparticles which have a strong Raman response due to active optical phonon modes. We relate enhancement of Raman signal emission to the Purcell effect due to the excitation of Mie modes inside the nanoparticles. We also employ our numerical approach to calculate inelastic Raman emission in more sophisticated geometries, which do not allow a straightforward analytical form of the Green's function. The Raman response from a silicon nanodisk has been analyzed with the proposed method, and the contribution of various Mie modes has been revealed.

  5. Energy absorption due to spatial resonance of Alfven waves at continuum tip

    NASA Astrophysics Data System (ADS)

    Chen, Eugene; Berk, Herb; Breizman, Boris; Zheng, Linjin

    2011-10-01

    We investigate the response of a tokamak plasma to an external driving source. An impedance-like function of the driving frequency, which grows at a small rate, is calculated and interpreted for different source profiles. Special attention is devoted to the case where the driving frequency approaches that of the TAE continuum tip. The calculation can be applied to the estimation of the TAE damping rate by analytically continuing the inverse of the impedance function to the lower half plane. The root of the analytic continuation corresponds to the existence of a quasi-mode, from which the damping rate can be found.

  6. Brownian systems with spatially inhomogeneous activity

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Brader, J. M.

    2017-09-01

    We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function, and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory reproduces the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.

  7. Checking Equity: Why Differential Item Functioning Analysis Should Be a Routine Part of Developing Conceptual Assessments

    ERIC Educational Resources Information Center

    Martinková, Patricia; Drabinová, Adéla; Liaw, Yuan-Ling; Sanders, Elizabeth A.; McFarland, Jenny L.; Price, Rebecca M.

    2017-01-01

    We provide a tutorial on differential item functioning (DIF) analysis, an analytic method useful for identifying potentially biased items in assessments. After explaining a number of methodological approaches, we test for gender bias in two scenarios that demonstrate why DIF analysis is crucial for developing assessments, particularly because…
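    One widely used DIF method, the Mantel-Haenszel procedure, pools 2x2 tables of correct/incorrect responses for the reference and focal groups across matched-ability strata; a common odds ratio near 1 indicates no DIF. The count tables below are made up for illustration:

```python
def mh_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across score strata.
    Each table: (ref_correct, ref_wrong, focal_correct, focal_wrong)."""
    num = den = 0.0
    for a, b, c, d in tables:
        t = a + b + c + d
        num += a * d / t
        den += b * c / t
    return num / den

# Hypothetical strata with identical pass rates in both groups -> no DIF
no_dif = [(40, 10, 20, 5), (30, 30, 15, 15), (10, 40, 5, 20)]
# Hypothetical strata where the focal group is disadvantaged -> OR > 1
dif = [(40, 10, 15, 10), (30, 30, 10, 20), (10, 40, 3, 22)]
```

    In practice the odds ratio is accompanied by a chi-square significance test and an effect-size classification; logistic regression DIF is a common regression-based alternative.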

  8. Convolved substructure: analytically decorrelating jet substructure observables

    NASA Astrophysics Data System (ADS)

    Moult, Ian; Nachman, Benjamin; Neill, Duff

    2018-05-01

    A number of recent applications of jet substructure, in particular searches for light new particles, require substructure observables that are decorrelated with the jet mass. In this paper we introduce the Convolved SubStructure (CSS) approach, which uses a theoretical understanding of the observable to decorrelate the complete shape of its distribution. This decorrelation is performed by convolution with a shape function whose parameters and mass dependence are derived analytically. We consider in detail the case of the D2 observable and perform an illustrative case study using a search for a light hadronically decaying Z'. We find that the CSS approach completely decorrelates the D2 observable over a wide range of masses. Our approach highlights the importance of improving the theoretical understanding of jet substructure observables to exploit increasingly subtle features for performance.

  9. Planetary spectra for anisotropic scattering

    NASA Technical Reports Server (NTRS)

    Chamberlain, J. W.

    1976-01-01

    Some effects on planetary spectra that would be produced by departures from isotropic scattering are examined. The phase function is the simplest departure to handle analytically and the only phase function, other than the isotropic one, that can be incorporated into a Chandrasekhar first approximation. This approach has the advantage of illustrating effects resulting from anisotropies while retaining the simplicity that yields analytic solutions. The curve of growth is the sine qua non of planetary spectroscopy. The discussion emphasizes the difficulties and importance of ascertaining curves of growth as functions of observing geometry. A plea is made to observers to analyze their empirical curves of growth, whenever it seems feasible, in terms of the coefficients that form the leading terms in radiative-transfer analysis. An algebraic solution to the two sets of anisotropic H functions is developed which gives emergent intensities accurate to 0.3%.
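    For isotropic scattering, Chandrasekhar's H function satisfies a nonlinear integral equation that yields readily to fixed-point iteration. The sketch below (single-scattering albedo 0.9 chosen arbitrarily) checks the known zeroth-moment identity, integral of (w0/2)*H(mu) over [0,1] equal to 1 - sqrt(1 - w0):

```python
import numpy as np

def chandrasekhar_H(omega0=0.9, n=64, iters=200):
    """Solve H(mu) = 1 + mu*H(mu) * int_0^1 (omega0/2)*H(t)/(mu+t) dt
    by fixed-point iteration on Gauss-Legendre nodes mapped to (0, 1)."""
    x, w = np.polynomial.legendre.leggauss(n)
    mu = 0.5 * (x + 1.0)                      # quadrature nodes on (0, 1)
    w = 0.5 * w                               # mapped weights
    H = np.ones(n)
    for _ in range(iters):
        kern = (omega0 / 2.0) * w * H / (mu[:, None] + mu[None, :])
        H = 1.0 / (1.0 - mu * kern.sum(axis=1))
    return mu, w, H
```

    The anisotropic case of the abstract replaces the constant characteristic function omega0/2 with a polynomial in mu, giving the two coupled sets of H functions mentioned there.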

  10. On finding the analytic dependencies of the external field potential on the control function when optimizing the beam dynamics

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, A. D.; Kozynchenko, S. A.; Kozynchenko, V. A.

    2017-12-01

    When developing a particle accelerator for generating high-precision beams, the design of the injection system is important because it largely determines the output characteristics of the beam. In the present paper we consider injection systems consisting of electrodes with given potentials. The design of such systems requires simulation of beam dynamics in electrostatic fields. For external field simulation we use a new approach, proposed by A. D. Ovsyannikov, which is based on analytical approximations, or the finite difference method, taking into account the real geometry of the injection system. Software for solving the problems of beam dynamics simulation and optimization in the injection system for non-relativistic beams has been developed. Both beam dynamics and electric field simulations in the injection system, using the analytical approach and the finite difference method, have been performed, and the results are presented in this paper.
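    The finite-difference side of such a field computation can be sketched with a Jacobi solver for the Laplace equation on a square region, with electrode potentials imposed as Dirichlet boundaries. The geometry, grid size, and voltages below are invented for illustration and are not the injection-system geometry of the paper:

```python
import numpy as np

def solve_laplace(n=61, iters=5000):
    """Jacobi iteration for the 2D Laplace equation on an n x n grid with two
    hypothetical electrodes: left wall at 10 kV, right wall grounded."""
    V = np.zeros((n, n))
    V[:, 0] = 10_000.0          # emitter electrode (Dirichlet boundary)
    V[:, -1] = 0.0              # ground electrode
    for _ in range(iters):
        # five-point stencil: each interior node -> average of neighbors
        V[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] +
                                V[1:-1, 2:] + V[1:-1, :-2])
    return V
```

    The resulting potential grid supplies the electric field (by finite differences) that drives the beam-dynamics integration; the analytical-approximation route replaces this grid with closed-form field expressions.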

  11. Gradient retention prediction of acid-base analytes in reversed phase liquid chromatography: a simplified approach for acetonitrile-water mobile phases.

    PubMed

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2014-11-28

    In previous work, a two-parameter model to predict the chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to obtain a suitable description of the pKa change with mobile phase composition. In the present study this preliminary experimental work has been simplified. The analyte pKa values have been calculated through equations whose coefficients vary depending on the functional group. This new approach also forced simplifications regarding the retention of the totally neutral and totally ionized species. After the simplifications were applied, new predictions were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
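    The core idea, retention as an ionization-weighted average between the neutral and ionized species, with the pKa shifting with organic modifier content, can be sketched as follows. All numeric coefficients here (pKa shift slope, limiting retention factors) are hypothetical placeholders, not the paper's fitted functional-group coefficients:

```python
def retention_factor(pH, phi, pKa_w=4.2, slope=3.0, k_neutral=8.0, k_ion=0.5):
    """Retention factor of a monoprotic acid as an ionization-weighted
    average; pKa is assumed to shift linearly with acetonitrile fraction phi.
    All parameter values are hypothetical illustrations."""
    pKa = pKa_w + slope * phi                      # assumed linear pKa shift
    f_neutral = 1.0 / (1.0 + 10.0 ** (pH - pKa))   # Henderson-Hasselbalch
    return f_neutral * k_neutral + (1.0 - f_neutral) * k_ion
```

    Gradient prediction then integrates such expressions along the programmed phi(t) profile; the simplification described in the abstract replaces measured pKa-vs-phi data with per-functional-group equations for the slope.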

  12. Influence of analytical bias and imprecision on the number of false positive results using Guideline-Driven Medical Decision Limits.

    PubMed

    Hyltoft Petersen, Per; Klee, George G

    2014-03-20

    Diagnostic decisions based on decision limits from medical guidelines differ from the majority of clinical decisions because of the strict dichotomization of patients into diseased and non-diseased. Consequently, the influence of analytical performance is more critical than for diagnostic decisions in which much other information is included. The aim of this opinion paper is to investigate the consequences of analytical quality and other circumstances for the outcome of "Guideline-Driven Medical Decision Limits". The effects of analytical bias and imprecision should be investigated separately, and analytical quality specifications estimated accordingly. Sharp decision limits do not account for biological variation, whose effects are closely connected with those of analytical performance. These relationships are investigated for the guidelines on HbA1c in the diagnosis of diabetes and on serum cholesterol in assessing the risk of coronary heart disease. A second sampling in diagnosis dramatically reduces the effect of analytical quality: imprecision up to 3-5% has minimal influence for two independent samplings, whereas the reduction in the effect of bias is more moderate, and a 2% increase in concentration doubles the percentage of false-positive diagnoses for both HbA1c and cholesterol. An alternative approach comes from the current application of guidelines for follow-up laboratory tests ordered according to clinical procedure, e.g., the frequency of parathyroid hormone requests as a function of serum calcium concentration. Here, the specifications for bias can be evaluated from the increase in requests with increasing serum calcium concentration.
In consequence of the difficulties with biological variation, and of the concentration-dependent frequency of follow-up laboratory tests already in practical use, a probability function for diagnosis as a function of the key analyte is proposed. Copyright © 2013 Elsevier B.V. All rights reserved.
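The effect of analytical bias on false-positive rates against a sharp decision limit can be illustrated numerically. A minimal sketch, assuming a Gaussian non-diseased population; the HbA1c-like numbers are hypothetical and not taken from the paper:

```python
from math import erf, sqrt

def false_positive_rate(limit, mean, sd, bias_pct=0.0):
    """Fraction of a Gaussian non-diseased population measured above a
    sharp decision limit, after applying a proportional analytical bias."""
    m = mean * (1.0 + bias_pct / 100.0)
    z = (limit - m) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # P(X > limit)

# Illustrative HbA1c-like numbers (hypothetical): non-diseased mean 5.0%,
# SD 0.5%, decision limit 6.5%.
fp0 = false_positive_rate(6.5, 5.0, 0.5)                 # no bias
fp2 = false_positive_rate(6.5, 5.0, 0.5, bias_pct=2.0)   # +2% bias
print(fp2 / fp0)  # a small positive bias multiplies the FP rate
```

With these illustrative numbers, a 2% positive bias roughly doubles the false-positive fraction, consistent in spirit with the abstract's observation.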

  13. Reprint of "Influence of analytical bias and imprecision on the number of false positive results using Guideline-Driven Medical Decision Limits".

    PubMed

    Hyltoft Petersen, Per; Klee, George G

    2014-05-15

    Diagnostic decisions based on decision limits from medical guidelines differ from the majority of clinical decisions because of the strict dichotomization of patients into diseased and non-diseased. Consequently, the influence of analytical performance is more critical than for diagnostic decisions in which much other information is included. The aim of this opinion paper is to investigate the consequences of analytical quality and other circumstances for the outcome of "Guideline-Driven Medical Decision Limits". The effects of analytical bias and imprecision should be investigated separately, and analytical quality specifications estimated accordingly. Sharp decision limits do not account for biological variation, whose effects are closely connected with those of analytical performance. These relationships are investigated for the guidelines on HbA1c in the diagnosis of diabetes and on serum cholesterol in assessing the risk of coronary heart disease. A second sampling in diagnosis dramatically reduces the effect of analytical quality: imprecision up to 3-5% has minimal influence for two independent samplings, whereas the reduction in the effect of bias is more moderate, and a 2% increase in concentration doubles the percentage of false-positive diagnoses for both HbA1c and cholesterol. An alternative approach comes from the current application of guidelines for follow-up laboratory tests ordered according to clinical procedure, e.g., the frequency of parathyroid hormone requests as a function of serum calcium concentration. Here, the specifications for bias can be evaluated from the increase in requests with increasing serum calcium concentration.
In consequence of the difficulties with biological variation, and of the concentration-dependent frequency of follow-up laboratory tests already in practical use, a probability function for diagnosis as a function of the key analyte is proposed. Copyright © 2014. Published by Elsevier B.V.

  14. An Approach for Integrating the Prioritization of Functional and Nonfunctional Requirements

    PubMed Central

    Dabbagh, Mohammad; Lee, Sai Peck

    2014-01-01

    Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of the requirements that should be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during prioritization. Although several requirements prioritization methods have been proposed, none considers both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach that integrates the prioritization of functional and nonfunctional requirements. Applying the proposed approach produces two separate prioritized lists, one of functional and one of nonfunctional requirements. The effectiveness of the approach has been evaluated through an empirical experiment comparing it with two state-of-the-art approaches: the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in actual time consumption while preserving a high level of agreement between its results and those produced by the other two approaches. PMID:24982987

  15. An approach for integrating the prioritization of functional and nonfunctional requirements.

    PubMed

    Dabbagh, Mohammad; Lee, Sai Peck

    2014-01-01

    Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of the requirements that should be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during prioritization. Although several requirements prioritization methods have been proposed, none considers both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach that integrates the prioritization of functional and nonfunctional requirements. Applying the proposed approach produces two separate prioritized lists, one of functional and one of nonfunctional requirements. The effectiveness of the approach has been evaluated through an empirical experiment comparing it with two state-of-the-art approaches: the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in actual time consumption while preserving a high level of agreement between its results and those produced by the other two approaches.
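One of the baseline methods above, AHP, derives priority weights from pairwise comparison matrices. A minimal sketch using the row geometric mean (logarithmic least-squares) approximation to the principal eigenvector; the comparison values below are hypothetical:

```python
def ahp_priorities(matrix):
    """Approximate AHP priority weights by the row geometric mean,
    normalized to sum to 1 (logarithmic least-squares approximation
    to the principal-eigenvector weights)."""
    n = len(matrix)
    gm = []
    for row in matrix:
        p = 1.0
        for a in row:
            p *= a
        gm.append(p ** (1.0 / n))
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise comparisons among three requirements:
# R1 is 3x as important as R2 and 5x as important as R3.
pairwise = [[1.0,   3.0, 5.0],
            [1/3.0, 1.0, 2.0],
            [1/5.0, 1/2.0, 1.0]]
weights = ahp_priorities(pairwise)
print(weights)  # R1 gets the largest weight
```

For a perfectly consistent matrix the geometric-mean weights coincide with the eigenvector weights; for mildly inconsistent judgments they remain a close, cheap approximation.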

  16. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximation (GFA) for a sensor array from a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not included in the training set used for model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
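The core QSAR idea, regressing sensor response on molecular descriptors, can be sketched with a single descriptor and closed-form least squares. This is an illustration only: the study used Genetic Function Approximation over a large descriptor set, and the data below are synthetic:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for one descriptor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical descriptor (e.g. a film-analyte interaction energy) vs.
# measured relative resistance change for a few training analytes.
descriptor = [1.0, 2.0, 3.0, 4.0]
response   = [0.9, 2.1, 2.9, 4.1]
b, a = fit_line(descriptor, response)
predicted = a + b * 2.5   # predict the response for an unseen analyte
```

The predictive test in the abstract corresponds to evaluating such a fitted model on analytes held out of the training set.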

  17. Classical Dynamics of Fullerenes

    NASA Astrophysics Data System (ADS)

    Sławianowski, Jan J.; Kotowski, Romuald K.

    2017-06-01

    The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. A mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. An important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in the classical dynamics of large molecules and fullerenes enables us to formulate their dynamics in terms of polynomial expansions of configurations. Another approach is based on the theory of analytic functions and on their approximation by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
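The Green and Cauchy deformation tensors mentioned above have standard continuum-mechanics definitions in terms of the deformation gradient F (stated here for reference; standard formulas, not specific to this paper):

```latex
% Deformation gradient, right Cauchy--Green tensor, Green strain:
F_{ij} = \frac{\partial x_i}{\partial X_j}, \qquad
C = F^{\mathsf{T}} F, \qquad
E = \tfrac{1}{2}\left(C - I\right).
```

For a purely affine (homogeneous) deformation, F is constant over the body, which is the simplification the paper concentrates on.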

  18. An analytical approach for the calculation of stress-intensity factors in transformation-toughened ceramics

    NASA Astrophysics Data System (ADS)

    Müller, W. H.

    1990-12-01

    Stress-induced transformation toughening in Zirconia-containing ceramics is described analytically by means of a quantitative model: A Griffith crack which interacts with a transformed, circular Zirconia inclusion. Due to its volume expansion, a ZrO2-particle compresses its flanks, whereas a particle in front of the crack opens the flanks such that the crack will be attracted and finally absorbed. Erdogan's integral equation technique is applied to calculate the dislocation functions and the stress-intensity-factors which correspond to these situations. In order to derive analytical expressions, the elastic constants of the inclusion and the matrix are assumed to be equal.

  19. 76 FR 71029 - Coordination of Functions; Memorandum of Understanding

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ... Attorneys will meet, not less than biannually, to review enforcement priorities, systemic investigations of... evaluations; systemic and individual investigation files; and conciliation agreements and settlements; (ii) Consistent analytical approaches to identifying and remedying employment discrimination under Title VII; (iii...

  20. Analytical approaches to the determination of spin-dependent parton distribution functions at NNLO approximation

    NASA Astrophysics Data System (ADS)

    Salajegheh, Maral; Nejad, S. Mohammad Moosavi; Khanpour, Hamzeh; Tehrani, S. Atashbar

    2018-05-01

    In this paper, we present the SMKA18 analysis, a first attempt to extract the set of next-to-next-to-leading-order (NNLO) spin-dependent parton distribution functions (spin-dependent PDFs) and their uncertainties, determined through the Laplace transform technique and the Jacobi polynomial approach. Using Laplace transformations, we present an analytical solution of the spin-dependent Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations at NNLO approximation. The results are extracted using a wide range of proton g1p(x, Q2), neutron g1n(x, Q2), and deuteron g1d(x, Q2) spin-dependent structure function data, including the most recent high-precision measurements from the COMPASS16 experiment at CERN, which are playing an increasingly important role in global spin-dependent fits. Careful estimation of uncertainties has been performed using standard Hessian error propagation. We compare our results with the available spin-dependent inclusive deep inelastic scattering data and with other determinations of the spin-dependent PDFs in the literature. The results obtained for the spin-dependent PDFs, as well as for the spin-dependent structure functions, are clearly explained at both small and large values of x.
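The Laplace-transform technique rests on the fact that, after substituting v = ln(1/x), the DGLAP convolution becomes a Laplace convolution that factorizes in s-space. Schematically (leading-order structure shown; the NNLO case adds higher-order splitting kernels):

```latex
% With v = \ln(1/x) and \hat F(v,Q^2) \equiv F(e^{-v},Q^2), the DGLAP
% convolution becomes a Laplace convolution in v:
\frac{\partial \hat F(v,Q^2)}{\partial \ln Q^2}
  = \int_{0}^{v} \hat P\big(v-w,\alpha_s(Q^2)\big)\,\hat F(w,Q^2)\,dw,
% which the Laplace transform f(s,Q^2) = \mathcal{L}\big[\hat F(v,Q^2); s\big]
% turns into an ordinary differential equation in \ln Q^2:
\frac{\partial f(s,Q^2)}{\partial \ln Q^2}
  = \Phi\big(s,\alpha_s(Q^2)\big)\, f(s,Q^2).
```

The evolved solution in s-space is then inverted back to x-space, which is where the Jacobi polynomial machinery enters.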

  1. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

    An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties that help in understanding the structure of partons. The longitudinal portion of the distribution functions is obtained by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained over the whole x region [0, 1], and the computed distributions are consistent with experimental observations. The agreement with experimental data gives robust confirmation of our simple statistical model.
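Statistical parton models typically use Fermi-Dirac-shaped factors for the longitudinal distributions. A sketch, assuming the commonly used illustrative form x q(x) = A x^b / (exp((x - X)/x̄) + 1); all parameter values below are hypothetical, not the paper's determination:

```python
from math import exp

def xq(x, A, b, X, xbar):
    """Fermi-Dirac-shaped statistical parton distribution,
    x*q(x) = A * x**b / (exp((x - X)/xbar) + 1)  (illustrative form:
    X plays the role of a 'thermodynamic potential', xbar a 'temperature')."""
    return A * x ** b / (exp((x - X) / xbar) + 1.0)

# Crude midpoint-rule check that q(x) = xq(x)/x integrates to a finite
# parton number over x in (0, 1)  (all parameter values hypothetical).
n = 1000
xs = [(i + 0.5) / n for i in range(n)]
number = sum(xq(x, A=1.9, b=0.6, X=0.45, xbar=0.1) / x for x in xs) / n
print(number)  # finite number sum rule -> acceptable normalization
```

The Fermi-Dirac factor suppresses the distribution for x above the potential X, giving the characteristic large-x fall-off of statistical fits.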

  2. Ringo: Interactive Graph Analytics on Big-Memory Machines

    PubMed Central

    Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure

    2016-01-01

    We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads. PMID:27081215

  3. Ringo: Interactive Graph Analytics on Big-Memory Machines.

    PubMed

    Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure

    2015-01-01

    We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads.
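Ringo's table-to-graph workflow can be illustrated in miniature. This is not Ringo's actual API, just a pure-Python sketch of the idea: build a graph from relational rows, then run one analytics function (connected components via BFS):

```python
from collections import defaultdict, deque

def graph_from_table(rows, src_col, dst_col):
    """Build an undirected adjacency list from relational rows (dicts),
    treating each row as one edge between two entities."""
    adj = defaultdict(set)
    for r in rows:
        adj[r[src_col]].add(r[dst_col])
        adj[r[dst_col]].add(r[src_col])
    return adj

def connected_components(adj):
    """Standard BFS connected components over the adjacency list."""
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        while queue:
            u = queue.popleft()
            if u in comp:
                continue
            comp.add(u)
            queue.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# A tiny "friendships" table: two components (ann-bob-cat, dan-eve).
rows = [{"a": "ann", "b": "bob"}, {"a": "bob", "b": "cat"},
        {"a": "dan", "b": "eve"}]
comps = connected_components(graph_from_table(rows, "a", "b"))
print(len(comps))  # 2
```

On a big-memory machine the same pattern scales: the whole adjacency structure lives in RAM, so iterative exploration of different table-to-graph mappings stays interactive.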

  4. The role of proteomics in studies of protein moonlighting.

    PubMed

    Beynon, Robert J; Hammond, Dean; Harman, Victoria; Woolerton, Yvonne

    2014-12-01

    The increasing acceptance that proteins may exert multiple functions in the cell brings with it new analytical challenges that will have an impact on the field of proteomics. Many proteomics workflows begin by destroying information about the interactions between different proteins, and the reduction of a complex protein mixture to constituent peptides also scrambles information about the combinatorial potential of post-translational modifications. To bring the focus of proteomics on to the domain of protein moonlighting will require novel analytical and quantitative approaches.

  5. Variable-centered and person-centered approaches to studying Mexican-origin mother-daughter cultural orientation dissonance.

    PubMed

    Bámaca-Colbert, Mayra Y; Gayles, Jochebed G

    2010-11-01

    The overall aim of the current study was to identify the methodological approach and corresponding analytic procedure that best elucidated the associations among Mexican-origin mother-daughter cultural orientation dissonance, family functioning, and adolescent adjustment. To do so, we employed, and compared, two methodological approaches (i.e., variable-centered and person-centered) via four analytic procedures (i.e., difference score, interactive, matched/mismatched grouping, and latent profiles). The sample consisted of 319 girls in the 7th or 10th grade and their mother or mother figure from a large Southwestern, metropolitan area in the US. Family factors were found to be important predictors of adolescent adjustment in all models. Although some findings were similar across all models, overall, findings suggested that the latent profile procedure best elucidated the associations among the variables examined in this study. In addition, associations were present across early and middle adolescents, with a few findings being only present for one group. Implications for using these analytic procedures in studying cultural and family processes are discussed.

  6. Analytical solutions for two-dimensional Stokes flow singularities in a no-slip wedge of arbitrary angle

    PubMed Central

    Brzezicki, Samuel J.

    2017-01-01

    An analytical method to find the flow generated by the basic singularities of Stokes flow in a wedge of arbitrary angle is presented. Specifically, we solve a biharmonic equation for the stream function of the flow generated by a point stresslet singularity and satisfying no-slip boundary conditions on the two walls of the wedge. The method, which is readily adapted to any other singularity type, takes full account of any transcendental singularities arising at the corner of the wedge. The approach is also applicable to problems of plane strain/stress of an elastic solid where the biharmonic equation also governs the Airy stress function. PMID:28690412

  7. Analytical solutions for two-dimensional Stokes flow singularities in a no-slip wedge of arbitrary angle.

    PubMed

    Crowdy, Darren G; Brzezicki, Samuel J

    2017-06-01

    An analytical method to find the flow generated by the basic singularities of Stokes flow in a wedge of arbitrary angle is presented. Specifically, we solve a biharmonic equation for the stream function of the flow generated by a point stresslet singularity and satisfying no-slip boundary conditions on the two walls of the wedge. The method, which is readily adapted to any other singularity type, takes full account of any transcendental singularities arising at the corner of the wedge. The approach is also applicable to problems of plane strain/stress of an elastic solid where the biharmonic equation also governs the Airy stress function.
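The boundary-value problem solved here can be stated compactly in the standard stream-function formulation:

```latex
% Biharmonic problem for the stream function in a wedge of angle \alpha,
% with no-slip (zero velocity) conditions on both walls:
\nabla^{4}\psi = 0 \quad \text{in } 0 < \theta < \alpha, \qquad
\psi = \frac{\partial \psi}{\partial \theta} = 0
  \quad \text{on } \theta = 0 \text{ and } \theta = \alpha.
```

The two conditions on each wall enforce the vanishing of both velocity components, and the singularity (here a stresslet) appears as a prescribed local behavior of ψ at the forcing point.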

  8. The calculation of transport properties in quantum liquids using the maximum entropy numerical analytic continuation method: Application to liquid para-hydrogen

    PubMed Central

    Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.

    2002-01-01

    We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656
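Two standard relations behind the approach (generic notation; the precise kernel depends on the operator statistics): the Green-Kubo expression for the self-diffusion constant, and the kernel relation that maximum entropy inverts, choosing the spectral density A that maximizes αS[A] − χ²[A]/2:

```latex
% Green--Kubo self-diffusion constant from the velocity autocorrelation:
D = \frac{1}{3}\int_{0}^{\infty}
      \langle \mathbf{v}(0)\cdot\mathbf{v}(t)\rangle \, dt,
% Kernel relation between imaginary-time data and real-frequency spectrum,
% inverted by maximum entropy (generic kernel K):
\qquad
G(\tau) = \int_{0}^{\infty} d\omega \, K(\tau,\omega)\, A(\omega).
```

Because inverting the kernel relation is ill-posed, the entropic prior regularizes the continuation from the statistically noisy imaginary-time data.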

  9. Approach of Decision Making Based on the Analytic Hierarchy Process for Urban Landscape Management

    NASA Astrophysics Data System (ADS)

    Srdjevic, Zorica; Lakicevic, Milena; Srdjevic, Bojan

    2013-03-01

    This paper proposes a two-stage group decision making approach to urban landscape management and planning supported by the analytic hierarchy process. The proposed approach combines an application of the consensus convergence model and the weighted geometric mean method. The application of the proposed approach is shown on a real urban landscape planning problem with a park-forest in Belgrade, Serbia. Decision makers were policy makers, i.e., representatives of several key national and municipal institutions, and experts coming from different scientific fields. As a result, the most suitable management plan from the set of plans is recognized. It includes both native vegetation renewal in degraded areas of park-forest and continued maintenance of its dominant tourism function. Decision makers included in this research consider the approach to be transparent and useful for addressing landscape management tasks. The central idea of this paper can be understood in a broader sense and easily applied to other decision making problems in various scientific fields.

  10. Approach of decision making based on the analytic hierarchy process for urban landscape management.

    PubMed

    Srdjevic, Zorica; Lakicevic, Milena; Srdjevic, Bojan

    2013-03-01

    This paper proposes a two-stage group decision making approach to urban landscape management and planning supported by the analytic hierarchy process. The proposed approach combines an application of the consensus convergence model and the weighted geometric mean method. The application of the proposed approach is shown on a real urban landscape planning problem with a park-forest in Belgrade, Serbia. Decision makers were policy makers, i.e., representatives of several key national and municipal institutions, and experts coming from different scientific fields. As a result, the most suitable management plan from the set of plans is recognized. It includes both native vegetation renewal in degraded areas of park-forest and continued maintenance of its dominant tourism function. Decision makers included in this research consider the approach to be transparent and useful for addressing landscape management tasks. The central idea of this paper can be understood in a broader sense and easily applied to other decision making problems in various scientific fields.
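The weighted geometric mean method aggregates the individual decision makers' judgments into a group judgment entry by entry. A minimal sketch; the judgment values and consensus weights below are hypothetical (in the paper, the weights come from the consensus convergence model):

```python
def weighted_geometric_mean(judgments, weights):
    """Aggregate individual pairwise-comparison values a_ij^(k) into the
    group value prod_k (a_ij^(k))**w_k, with the weights summing to 1."""
    g = 1.0
    for a, w in zip(judgments, weights):
        g *= a ** w
    return g

# Three decision makers' judgments for one matrix entry a_ij,
# with consensus weights (hypothetical) 0.5, 0.3, 0.2.
group_aij = weighted_geometric_mean([3.0, 5.0, 1.0], [0.5, 0.3, 0.2])
print(group_aij)
```

The geometric (rather than arithmetic) mean preserves the reciprocity property of AHP matrices: aggregating a_ij and 1/a_ij yields reciprocal group entries.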

  11. New Ways of Treating Data for Diatomic Molecule 'shelf' and Double-Minimum States

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.; Tao, Jason; Khanna, Shirin; Pashov, Asen; Tellinghuisen, Joel

    2017-06-01

    Electronic states whose potential energy functions have 'shelf' or double-minimum shapes have always presented special challenges because, as functions of the vibrational quantum number, the vibrational energies/spacings and inertial rotational constants either have an abrupt change of character with discontinuous slope or, past a given point, become completely chaotic. The present work shows that a 'traditional' methodology developed for deep 'regular' single-well potentials can also provide accurate 'parameter-fit' descriptions of the v-dependence of the vibrational energies and rotational constants of shelf-state potentials, allowing a conventional RKR calculation of their potential energy functions. It is also shown that merging Pashov's uniquely flexible 'spline point-wise' potential function representation with Le Roy's 'Morse/Long-Range' (MLR) analytic functional form, which automatically incorporates the correct theoretically known long-range behavior, yields an analytic function that combines most of the advantages of both approaches. An illustrative application of this method to data for a double-minimum state of Na_2 is described.

  12. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
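How an exhaustive summary exposes non-identifiability can be illustrated with the classic one-compartment example y(t) = (D/V) e^{-kt}: the output determines k and the ratio D/V, so V alone is structurally unidentifiable when the dose D is unknown. A numerical sketch of this textbook example (not the paper's mixed-effects machinery):

```python
from math import exp

def output(t, D, V, k):
    """Observation y(t) = (D/V) * exp(-k t) for a one-compartment model
    with bolus dose D, volume V, elimination rate k (standard example)."""
    return (D / V) * exp(-k * t)

times = [0.0, 0.5, 1.0, 2.0]
y1 = [output(t, D=100.0, V=10.0, k=0.3) for t in times]
y2 = [output(t, D=50.0,  V=5.0,  k=0.3) for t in times]  # same ratio D/V
identical = all(abs(a - b) < 1e-12 for a, b in zip(y1, y2))
print(identical)  # True: (D, V) enter the output only through D/V
```

In the mixed-effects setting the same question is asked of the distribution of outputs: the analytical moments of the derived functions of the random effects must determine the population parameters uniquely.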

  13. Rational Selection, Criticality Assessment, and Tiering of Quality Attributes and Test Methods for Analytical Similarity Evaluation of Biosimilars.

    PubMed

    Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette

    2018-05-10

    Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.

  14. Next Generation Offline Approaches to Trace Gas-Phase Organic Compound Speciation: Sample Collection and Analysis

    NASA Astrophysics Data System (ADS)

    Sheu, R.; Marcotte, A.; Khare, P.; Ditto, J.; Charan, S.; Gentner, D. R.

    2017-12-01

    Intermediate-volatility and semi-volatile organic compounds (I/SVOCs) are major precursors to secondary organic aerosol, and contribute to tropospheric ozone formation. Their wide volatility range, chemical complexity, behavior in analytical systems, and trace concentrations present numerous hurdles to characterization. We present an integrated sampling-to-analysis system for the collection and offline analysis of trace gas-phase organic compounds with the goal of preserving and recovering analytes throughout sample collection, transport, storage, and thermal desorption for accurate analysis. Custom multi-bed adsorbent tubes are used to collect samples for offline analysis by advanced analytical detectors. The analytical instrumentation comprises an automated thermal desorption system that introduces analytes from the adsorbent tubes into a gas chromatograph, which is coupled with an electron ionization mass spectrometer (GC-EIMS) and other detectors. In order to optimize the collection and recovery for a wide range of analyte volatility and functionalization, we evaluated a variety of commercially-available materials, including Res-Sil beads, quartz wool, glass beads, Tenax TA, and silica gel. Key properties for optimization include inertness, versatile chemical capture, minimal affinity for water, and minimal artifacts or degradation byproducts; these properties were assessed with a diverse mix of traditionally-measured and functionalized analytes. Along with a focus on material selection, we provide recommendations spanning the entire sampling-and-analysis process to improve the accuracy of future comprehensive I/SVOC measurements, including oxygenated and other functionalized I/SVOCs. We demonstrate the performance of our system by providing results on speciated VOCs-SVOCs from indoor, outdoor, and chamber studies that establish the utility of our protocols and pave the way for precise laboratory characterization via a mix of detection methods.

  15. A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.

    PubMed

    Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf

    2018-01-01

    Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.
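A crude pointwise analogue of the functional inference described above: flag time indices where the two terrains' mean echo envelopes differ by more than a few standard errors. The data are synthetic and this is not the Bayesian functional mixed model itself, just an illustration of "identifying significant local regions":

```python
from math import sqrt

def pointwise_difference(curves_a, curves_b, z_thresh=2.0):
    """For discretized functional responses (lists of equal-length lists),
    return the time indices where |mean_a - mean_b| exceeds z_thresh
    standard errors: a crude pointwise stand-in for the local-region
    flags produced by a functional mixed model."""
    flagged = []
    for t in range(len(curves_a[0])):
        a = [c[t] for c in curves_a]
        b = [c[t] for c in curves_b]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
        vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
        se = max(sqrt(va / len(a) + vb / len(b)), 1e-12)
        if abs(ma - mb) / se > z_thresh:
            flagged.append(t)
    return flagged

# Synthetic echo envelopes from two terrains; only index 1 differs.
curves_a = [[0.0, 1.0, 0.1], [0.1, 1.2, 0.0], [0.0, 1.1, 0.1]]
curves_b = [[0.1, 0.0, 0.1], [0.0, 0.1, 0.0], [0.1, 0.2, 0.1]]
print(pointwise_difference(curves_a, curves_b))  # [1]
```

The functional mixed model improves on this pointwise scheme by borrowing strength across time points and by modeling the hierarchical channel/terrain structure jointly.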

  16. A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models

    PubMed Central

    Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S.; Wu, Xiaowei; Müller, Rolf

    2017-01-01

    Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design. PMID:29749977

  17. A new approach to analytic, non-perturbative and gauge-invariant QCD

    NASA Astrophysics Data System (ADS)

    Fried, H. M.; Grandou, T.; Sheu, Y.-M.

    2012-11-01

    Following a previous calculation of quark scattering in eikonal approximation, this paper presents a new, analytic and rigorous approach to the calculation of QCD phenomena. In this formulation a basic distinction between the conventional "idealistic" description of QCD and a more "realistic" description is brought into focus by a non-perturbative, gauge-invariant evaluation of the Schwinger solution for the QCD generating functional in terms of the exact Fradkin representations of Green's functional G(x,y|A) and the vacuum functional L[A]. Because quarks exist asymptotically only in bound states, their transverse coordinates can never be measured with arbitrary precision; the non-perturbative neglect of this statement leads to obstructions that are easily corrected by invoking in the basic Lagrangian a probability amplitude which describes such transverse imprecision. The second result of this non-perturbative analysis is the appearance of a new and simplifying output called "Effective Locality", in which the interactions between quarks by the exchange of a "gluon bundle"-which "bundle" contains an infinite number of gluons, including cubic and quartic gluon interactions-display an exact locality property that reduces the several functional integrals of the formulation down to a set of ordinary integrals. It should be emphasized that "non-perturbative" here refers to the effective summation of all gluons between a pair of quark lines-which may be the same quark line, as in a self-energy graph-but does not (yet) include a summation over all closed-quark loops which are tied by gluon-bundle exchange to the rest of the "Bundle Diagram". As an example of the power of these methods we offer as a first analytic calculation the quark-antiquark binding potential of a pion, and the corresponding three-quark binding potential of a nucleon, obtained in a simple way from relevant eikonal scattering approximations. A second calculation, analytic, non-perturbative and gauge-invariant, of a nucleon-nucleon binding potential to form a model deuteron, will appear separately.

  18. Analytical transition-matrix treatment of electric multipole polarizabilities of hydrogen-like atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharchenko, V.F., E-mail: vkharchenko@bitp.kiev.ua

    2015-04-15

    The direct transition-matrix approach to the description of the electric polarization of the quantum bound system of particles is used to determine the electric multipole polarizabilities of the hydrogen-like atoms. It is shown that in the case of the bound system formed by the Coulomb interaction the corresponding inhomogeneous integral equation determining an off-shell scattering function, which consistently describes virtual multiple scattering, can be solved exactly analytically for all electric multipole polarizabilities. Our method allows us to reproduce the known Dalgarno–Lewis formula for electric multipole polarizabilities of the hydrogen atom in the ground state and can also be applied to determine the polarizability of the atom in excited bound states. - Highlights: • A new description of the electric polarization of hydrogen-like atoms. • Expression for multipole polarizabilities in terms of off-shell scattering functions. • Derivation of the integral equation determining the off-shell scattering function. • Rigorous analytic solution of the integral equations for both ground and excited states. • Study of contributions of virtual multiple scattering to electric polarizabilities.

  19. Computer-Assisted Microscopy in Science Teaching and Research.

    ERIC Educational Resources Information Center

    Radice, Gary P.

    1997-01-01

    Describes a technological approach to teaching the relationships between biological form and function. Computer-assisted image analysis was integrated into a microanatomy course. Students spend less time memorizing and more time observing, measuring, and interpreting, building technical and analytical skills. Appendices list hardware and software…

  20. MULTI-TEMPORAL REMOTE SENSING ANALYTICAL APPROACHES FOR CHARACTERIZING LANDSCAPE CHANGE

    EPA Science Inventory



    Changes in landscape composition and function result from both acute land-cover conversions and chronic landscape changes. Land-cover conversions are typically mediated by human land-use activities (e.g. conversion from forest to agriculture), while more subtle chronic l...

  1. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. In contrast to current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and the inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function, which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  2. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.

  3. Displacement potential solution of a guided deep beam of composite materials under symmetric three-point bending

    NASA Astrophysics Data System (ADS)

    Rahman, M. Muzibur; Ahmad, S. Reaz

    2017-12-01

    An analytical investigation of the elastic fields in a guided deep beam of orthotropic composite material under symmetric three-point bending is carried out using a displacement potential boundary modeling approach. Here, the formulation is developed as a single function of the space variables, defined in terms of displacement components, which must satisfy mixed boundary conditions. The relevant displacement and stress components are expanded into infinite series using Fourier integrals along with suitable polynomials matched to the boundary conditions. The results are presented mainly in the form of graphs and verified against finite element solutions using ANSYS. This study shows that the analytical and numerical solutions are in good agreement, which enhances the reliability of the displacement potential approach.

  4. Effect of Vibration on Retention Characteristics of Screen Acquisition Systems

    NASA Technical Reports Server (NTRS)

    Tegart, J. R.; Park, A. C.

    1977-01-01

    An analytical and experimental investigation of the effect of vibration on the retention characteristics of screen acquisition systems was performed. The functioning of surface tension devices using fine-mesh screens requires that the pressure differential acting on the screen be less than its pressure retention capability. When this capability is exceeded, screen breakdown occurs and gas-free expulsion of propellant is no longer possible. An analytical approach to predicting the effect of vibration was developed. This approach considers the transmission of the vibration to the screens of the device and the coupling of the liquid and the screen in establishing the screen response. A method of evaluating the transient response of the gas/liquid interface within the screen was also developed.

  5. Applications of δ-function perturbation to the pricing of derivative securities

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann; Goovaerts, Marc

    2004-11-01

    In the recent econophysics literature, the use of functional integrals is widespread for the calculation of option prices. In this paper, we extend this approach in several directions by means of δ-function perturbations. First, we show that results for the infinitely repulsive δ-function potential are applicable to the pricing of barrier options. We also introduce functional integrals over skew paths that give rise to a new European option formula when combined with a δ-function potential. We propose accurate closed-form approximations based on the theory of comonotonic risks for cases where the functional integrals are not analytically computable.

  6. Application of capability indices and control charts in the analytical method control strategy.

    PubMed

    Oliva, Alexis; Llabres Martinez, Matías

    2017-08-01

    In this study, we assessed the usefulness of control charts in combination with the process capability indices, Cpm and Cpk, in the control strategy of an analytical method. The traditional X-chart and moving range chart were used to monitor the analytical method over a 2-year period. The results confirmed that the analytical method is in-control and stable. Different criteria were used to establish the specification limits (i.e. analyst requirements) for fixed method performance (i.e. method requirements). If the specification limits and control limits are equal in breadth, the method can be considered "capable" (Cpm = 1), but it does not satisfy the minimum method capability requirements proposed by Pearn and Shu (2003). Similar results were obtained using the Cpk index. The method capability was also assessed as a function of method performance for fixed analyst requirements. The results indicate that the method does not meet the requirements of the analytical target approach. Real data from a SEC (size-exclusion chromatography) method with light-scattering detection were used as a model, whereas previously published data were used to illustrate the applicability of the proposed approach. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
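
    The capability indices named above have standard textbook definitions; a minimal sketch of how Cp, Cpk, and the Taguchi index Cpm are computed (the numbers are illustrative, not the paper's data):

```python
import math

def capability_indices(mean, sd, lsl, usl, target):
    """Standard process-capability indices (textbook definitions)."""
    cp = (usl - lsl) / (6 * sd)                    # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sd)   # penalizes off-center processes
    # Taguchi index: penalizes deviation of the mean from the target value.
    cpm = (usl - lsl) / (6 * math.sqrt(sd**2 + (mean - target)**2))
    return cp, cpk, cpm

# Hypothetical analytical method: mean recovery 102% with sd 2%,
# specification limits 90-110%, target 100%.
cp, cpk, cpm = capability_indices(102.0, 2.0, 90.0, 110.0, 100.0)
print(cp, cpk, cpm)
```

    With an off-center mean, Cpk and Cpm fall below Cp, which is the behaviour the control strategy exploits when comparing analyst and method requirements.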

  7. Constraints on the nuclear equation of state from nuclear masses and radii in a Thomas-Fermi meta-modeling approach

    NASA Astrophysics Data System (ADS)

    Chatterjee, D.; Gulminelli, F.; Raduta, Ad. R.; Margueron, J.

    2017-12-01

    The question of correlations among empirical equation of state (EoS) parameters constrained by nuclear observables is addressed in a Thomas-Fermi meta-modeling approach. A recently proposed meta-modeling for the nuclear EoS in nuclear matter is augmented with a single finite size term to produce a minimal unified EoS functional able to describe the smooth part of the nuclear ground state properties. This meta-model can reproduce the predictions of a large variety of models, and interpolate continuously between them. An analytical approximation to the full Thomas-Fermi integrals is further proposed giving a fully analytical meta-model for nuclear masses. The parameter space is sampled and filtered through the constraint of nuclear mass reproduction with Bayesian statistical tools. We show that this simple analytical meta-modeling has a predictive power on masses, radii, and skins comparable to full Hartree-Fock or extended Thomas-Fermi calculations with realistic energy functionals. The covariance analysis on the posterior distribution shows that no physical correlation is present between the different EoS parameters. Concerning nuclear observables, a strong correlation between the slope of the symmetry energy and the neutron skin is observed, in agreement with previous studies.

  8. Analyte-driven switching of DNA charge transport: de novo creation of electronic sensors for an early lung cancer biomarker.

    PubMed

    Thomas, Jason M; Chakraborty, Banani; Sen, Dipankar; Yu, Hua-Zhong

    2012-08-22

    A general approach is described for the de novo design and construction of aptamer-based electrochemical biosensors, for potentially any analyte of interest (ranging from small ligands to biological macromolecules). As a demonstration of the approach, we report the rapid development of a made-to-order electronic sensor for a newly reported early biomarker for lung cancer (CTAP III/NAP2). The steps include the in vitro selection and characterization of DNA aptamer sequences, design and biochemical testing of wholly DNA sensor constructs, and translation to a functional electrode-bound sensor format. The working principle of this distinct class of electronic biosensors is the enhancement of DNA-mediated charge transport in response to analyte binding. We first verify such analyte-responsive charge transport switching in solution, using biochemical methods; successful sensor variants were then immobilized on gold electrodes. We show that using these sensor-modified electrodes, CTAP III/NAP2 can be detected with both high specificity and sensitivity (Kd ~ 1 nM) through a direct electrochemical reading. To investigate the underlying basis of analyte binding-induced conductivity switching, we carried out Förster Resonance Energy Transfer (FRET) experiments. The FRET data establish that analyte binding-induced conductivity switching in these sensors results from very subtle structural/conformational changes, rather than large-scale global folding events. The implications of this finding are discussed with respect to possible charge transport switching mechanisms in electrode-bound sensors. Overall, the approach we describe here represents a unique design principle for aptamer-based electrochemical sensors; its application should enable rapid, on-demand access to a class of portable biosensors that offer robust, inexpensive, and operationally simplified alternatives to conventional antibody-based immunoassays.

  9. Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach

    NASA Astrophysics Data System (ADS)

    Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.

    2011-03-01

    We study condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N=200. We also calculate the distribution function based on a multinomial expansion of the characteristic function. Similar to the ideal gas, both approaches give exact statistical moments for all temperatures in the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical-ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations as well as thermodynamic potentials and heat capacity analytically and numerically in the whole temperature range.
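
    The paper's recursion is derived for the interacting gas within the Bogoliubov model; as a hedged illustration of the technique, the well-known ideal-Bose analogue of such a canonical-ensemble recursion, Z_N = (1/N) * sum_{k=1..N} z1(k*beta) * Z_{N-k} with Z_0 = 1, can be sketched for a cubic box (energies and temperature in arbitrary illustrative units):

```python
import math

def z1(beta, nmax=10):
    """Single-particle partition function for a cubic box; level energies
    are taken as eps = nx^2 + ny^2 + nz^2 in units of the level spacing."""
    return sum(math.exp(-beta * (nx*nx + ny*ny + nz*nz))
               for nx in range(1, nmax + 1)
               for ny in range(1, nmax + 1)
               for nz in range(1, nmax + 1))

def canonical_Z(N, beta):
    """Ideal-boson canonical recursion:
    Z_n = (1/n) * sum_{k=1}^{n} z1(k*beta) * Z_{n-k}, Z_0 = 1."""
    Z = [1.0]
    for n in range(1, N + 1):
        Z.append(sum(z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n)
    return Z

beta = 0.5                 # illustrative inverse temperature
Z = canonical_Z(4, beta)   # Z[n] is the n-boson canonical partition function
```

    For small N the recursion reproduces the explicitly symmetrized partition functions, e.g. Z_2 = (z1(β)² + z1(2β))/2, which makes it easy to validate before scaling to N = 200 as in the paper.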

  10. Safety and Suitability for Service Assessment Testing for Surface and Underwater Launched Munitions

    DTIC Science & Technology

    2014-12-05

    test efficiency that tend to associate the Analytical S3 Test Approach with large, complex munition systems and the Empirical S3 Test Approach with ... the smaller, less complex munition systems. 8.1 ANALYTICAL S3 TEST APPROACH. The Analytical S3 test approach, as shown in Figure 3, evaluates ... assets than the Analytical S3 Test approach to establish the safety margin of the system. This approach is generally applicable to small munitions

  11. A computational method for the Helmholtz equation in unbounded domains based on the minimization of an integral functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciraolo, Giulio, E-mail: g.ciraolo@math.unipa.it; Gargano, Francesco, E-mail: gargano@math.unipa.it; Sciacca, Vincenzo, E-mail: sciacca@math.unipa.it

    2013-08-01

    We study a new approach to the problem of transparent boundary conditions for the Helmholtz equation in unbounded domains. Our approach is based on the minimization of an integral functional arising from a volume integral formulation of the radiation condition. The index of refraction does not need to be constant at infinity and may have some angular dependency as well as perturbations. We prove analytical results on the convergence of the approximate solution. Numerical examples for different shapes of the artificial boundary and for non-constant indexes of refraction will be presented.

  12. Analysis of high-aspect-ratio jet-flap wings of arbitrary geometry

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    An analytical technique to compute the performance of an arbitrary jet-flapped wing is developed. The solution technique is based on the method of Maskell and Spence in which the well-known lifting-line approach is coupled with an auxiliary equation providing the extra function needed in jet-flap theory. The present method is generalized to handle straight, uncambered wings of arbitrary planform, twist, and blowing (including unsymmetrical cases). An analytical procedure is developed for continuous variations in the above geometric data with special functions to exactly treat discontinuities in any of the geometric and blowing data. A rational theory for the effect of finite wing thickness is introduced as well as simplified concepts of effective aspect ratio for rapid estimation of performance.

  13. Derived Transformation of Children's Pregambling Game Playing

    ERIC Educational Resources Information Center

    Dymond, Simon; Bateman, Helena; Dixon, Mark R.

    2010-01-01

    Contemporary behavior-analytic perspectives on gambling emphasize the impact of verbal relations, or derived relational responding and the transformation of stimulus functions, on the initiation and maintenance of gambling. Approached in this way, it is possible to undertake experimental analysis of the role of verbal/mediational variables in…

  14. Diffusion of Super-Gaussian Profiles

    ERIC Educational Resources Information Center

    Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.

    2007-01-01

    The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…

  15. Silicone rod extraction followed by liquid desorption-large volume injection-programmable temperature vaporiser-gas chromatography-mass spectrometry for trace analysis of priority organic pollutants in environmental water samples.

    PubMed

    Delgado, Alejandra; Posada-Ureta, Oscar; Olivares, Maitane; Vallejo, Asier; Etxebarria, Nestor

    2013-12-15

    In this study, priority organic pollutants usually found in environmental water samples were considered in two extraction and analysis approaches. These included organochlorine compounds, pesticides, phthalates, phenols, and residues of pharmaceutical and personal care products. The extraction and analysis steps were based on silicone rod extraction (SR) followed by liquid desorption in combination with large volume injection-programmable temperature vaporiser (LVI-PTV) and gas chromatography-mass spectrometry (GC-MS). Variables affecting the analytical response as a function of the programmable temperature vaporiser (PTV) parameters were first optimised following an experimental design approach. The SR extraction and desorption conditions were assessed afterwards, including matrix modification, extraction time, and stripping solvent composition. Subsequently, the possibility of performing membrane enclosed sorptive coating extraction (MESCO) as a modified extraction approach was also evaluated. The optimised method showed low method detection limits (3-35 ng L(-1)), acceptable accuracy (78-114%) and precision values (<13%) for most of the studied analytes regardless of the aqueous matrix. Finally, the developed approach was successfully applied to the determination of target analytes in aqueous environmental matrices including estuarine and wastewater samples. © 2013 Elsevier B.V. All rights reserved.

  16. Big–deep–smart data in imaging for guiding materials design

    DOE PAGES

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    2015-09-23

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  17. Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Malsbury, T.; Atencio, A., Jr.

    1992-01-01

    A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criterion allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.

  18. Big-deep-smart data in imaging for guiding materials design.

    PubMed

    Kalinin, Sergei V; Sumpter, Bobby G; Archibald, Richard K

    2015-10-01

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  19. Big-deep-smart data in imaging for guiding materials design

    NASA Astrophysics Data System (ADS)

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    2015-10-01

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  20. Big–deep–smart data in imaging for guiding materials design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  1. Combination of Bottom-up 2D-LC-MS and Semi-top-down GelFree-LC-MS Enhances Coverage of Proteome and Low Molecular Weight Short Open Reading Frame Encoded Peptides of the Archaeon Methanosarcina mazei.

    PubMed

    Cassidy, Liam; Prasse, Daniela; Linke, Dennis; Schmitz, Ruth A; Tholey, Andreas

    2016-10-07

    The recent discovery of an increasing number of small open reading frames (sORFs) creates the need for suitable analytical technologies for the comprehensive identification of the corresponding gene products. For biological and functional studies, knowledge of the entire set of proteins and sORF gene products is essential. Consequently, in the present study we evaluated analytical approaches that allow for simultaneous analysis of wide parts of the proteome together with the predicted sORFs. We performed a full analysis of the cytosolic proteome of the methane-producing archaeon Methanosarcina mazei strain Gö1 using a high/low pH reversed-phase LC-MS bottom-up approach. The second analytical approach was based on a semi-top-down strategy, encompassing separation at the intact protein level using a GelFree system, followed by digestion and LC-MS analysis. A high overlap in identified proteins was found for both approaches, yielding the most comprehensive coverage of the cytosolic proteome of this organism achieved so far. The application of the second approach in combination with an adjustment of the search criteria for database searches further led to a significant increase in sORF peptide identifications, finally allowing the detection and identification of 28 sORF gene products.

  2. Capillary waveguide optrodes: an approach to optical sensing in medical diagnostics

    NASA Astrophysics Data System (ADS)

    Lippitsch, Max E.; Draxler, Sonja; Kieslinger, Dietmar; Lehmann, Hartmut; Weigl, Bernhard H.

    1996-07-01

    Glass capillaries with a chemically sensitive coating on the inner surface are used as optical sensors for medical diagnostics. A capillary simultaneously serves as a sample compartment, a sensor element, and an inhomogeneous optical waveguide. Various detection schemes based on absorption, fluorescence intensity, or fluorescence lifetime are described. In absorption-based capillary waveguide optrodes the absorption in the sensor layer is analyte dependent; hence light transmission along the inhomogeneous waveguiding structure formed by the capillary wall and the sensing layer is a function of the analyte concentration. Similarly, in fluorescence-based capillary optrodes the fluorescence intensity or the fluorescence lifetime of an indicator dye fixed in the sensing layer is analyte dependent; thus the specific property of fluorescent light excited in the sensing layer and thereafter guided along the inhomogeneous waveguiding structure is a function of the analyte concentration. Both schemes are experimentally demonstrated, one with carbon dioxide as the analyte and the other one with oxygen. The device combines optical sensors with the standard glass capillaries usually applied to gather blood drops from fingertips, to yield a versatile diagnostic instrument, integrating the sample compartment, the optical sensor, and the light-collecting optics into a single piece. This ensures enhanced sensor performance as well as improved handling compared with other sensors.

  3. Modeling of Red Blood Cells and Related Spleen Function

    NASA Astrophysics Data System (ADS)

    Peng, Zhangli; Pivkin, Igor; Dao, Ming

    2011-11-01

    A key function of the spleen is to clear red blood cells (RBCs) with abnormal mechanical properties from the circulation. These abnormal mechanical properties may be due to RBC aging or RBC diseases, e.g., malaria and sickle cell anemia. Specifically, 10% of RBCs passing through the spleen are forced to squeeze into the narrow slits between the endothelial cells, and stiffer cells which get stuck are killed and digested by macrophages. To investigate this important physiological process, we employ three different approaches to study RBC passage through these small slits, including analytical theory, Dissipative Particle Dynamics (DPD) simulation and the Multiscale Finite Element Method (MS-FEM). By applying the analytical theory, we estimate the critical limiting geometries RBCs can pass. By using the DPD method, we study the full fluid-structure interaction problem, and compute RBC deformation under different pressure gradients. By employing the MS-FEM approach, we model the lipid bilayer and the cytoskeleton as two distinct layers, and focus on the cytoskeleton deformation and the bilayer-skeleton interaction force at the molecular level. Finally, the results of these three approaches are compared with each other and correlated with the experimental observations.

  4. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of the Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
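
    The objective ω^T L ω can be illustrated directly: for any graph, this quadratic form equals the sum of (ω_i − ω_j)² over the edges, which is why linking oscillators with dissimilar natural frequencies favors synchronization. A minimal sketch on a hypothetical 5-node ring (values illustrative, not from the paper):

```python
def laplacian(n, edges):
    """Dense unweighted graph Laplacian L = D - A as nested lists."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1; L[j][j] += 1
        L[i][j] -= 1; L[j][i] -= 1
    return L

def quadratic_form(L, w):
    """Compute w^T L w."""
    n = len(w)
    return sum(w[i] * L[i][j] * w[j] for i in range(n) for j in range(n))

# Hypothetical 5-node ring and natural frequencies (illustrative only).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
omega = [1.0, -0.5, 0.3, -1.2, 0.4]

L = laplacian(5, edges)
q = quadratic_form(L, omega)

# Identity: w^T L w = sum over edges of (w_i - w_j)^2, so a hill-climb
# rewiring that raises q places dissimilar frequencies on linked nodes.
q_edges = sum((omega[i] - omega[j]) ** 2 for i, j in edges)
```

    A hill-climb rewiring step would then propose swapping one edge for another and keep the change whenever q increases.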

  5. Gravity Field Recovery from the Cartwheel Formation by the Semi-analytical Approach

    NASA Astrophysics Data System (ADS)

    Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico; Zhong, Min; Zhou, Zebing

    2016-04-01

    Past and current gravimetric satellite missions have contributed dramatically to our knowledge of the Earth's gravity field. Nevertheless, several geoscience disciplines push for even higher requirements on accuracy, homogeneity and time- and space-resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure. With respect to other methods, one significant advantage of the semi-analytical approach is its effective pre-mission error assessment for gravity field missions. The semi-analytical approach builds a linear analytical relationship between the Fourier spectrum of the observables and the spherical harmonic spectrum of the gravity field. The spectral link between observables and gravity field parameters is given by the transfer coefficients, which constitute the observation model. In connection with a stochastic model, it can be used for pre-mission error assessment of gravity field missions. The cartwheel formation is formed by two satellites on elliptic orbits in the same plane. The time-dependent ranging will be considered in the transfer coefficients via convolution, including the series expansion of the eccentricity functions. The transfer coefficients are applied to assess the error patterns caused by different orientations of the cartwheel for range-rate and range acceleration. This work will present the isotropy and magnitude of the formal errors of the gravity field coefficients for different orientations of the cartwheel.

  6. Open and scalable analytics of large Earth observation datasets: From scenes to multidimensional arrays using SciDB and GDAL

    NASA Astrophysics Data System (ADS)

    Appel, Marius; Lahn, Florian; Buytaert, Wouter; Pebesma, Edzer

    2018-04-01

    Earth observation (EO) datasets are commonly provided as collections of scenes, where individual scenes represent a temporal snapshot and cover a particular region on the Earth's surface. Using these data in complex spatiotemporal modeling becomes difficult as soon as data volumes exceed a certain capacity or analyses include many scenes, which may spatially overlap and may have been recorded at different dates. In order to facilitate analytics on large EO datasets, we combine and extend the Geospatial Data Abstraction Library (GDAL) and the array-based data management and analytics system SciDB. We present an approach to automatically convert collections of scenes to multidimensional arrays and use SciDB to scale computationally intensive analytics. We evaluate the approach in three case studies: national-scale land use change monitoring with Landsat imagery, global empirical orthogonal function analysis of daily precipitation, and combining historical climate model projections with satellite-based observations. Results indicate that the approach can be used to represent various EO datasets and that analyses in SciDB scale well with available computational resources. To simplify analyses of higher-dimensional datasets such as climate model output, however, a generalization of the GDAL data model might be needed. All parts of this work have been implemented as open-source software, and we discuss how this may facilitate open and reproducible EO analyses.

  7. Green's function calculations for semi-infinite carbon nanotubes

    NASA Astrophysics Data System (ADS)

    John, D. L.; Pulfrey, D. L.

    2006-02-01

    In the modeling of nanoscale electronic devices, the non-equilibrium Green's function technique is gaining increasing popularity. One complication in this method is the need for computation of the self-energy functions that account for the interactions between the active portion of a device and its leads. In the one-dimensional case, these functions may be computed analytically. In higher dimensions, a numerical approach is required. In this work, we generalize earlier methods that were developed for tight-binding Hamiltonians, and present results for the case of a carbon nanotube.

  8. Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred

    2015-01-01

    To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable material property couple and a transient couple. Using an analytic approach, as opposed to commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as useful starting conditions for further numerical studies, or as design rules for lab-built couples. The analytic modeling considers two cases, accounting for 1) material properties which vary with temperature and 2) transient operation of a couple. The variable material property case was handled by means of an asymptotic expansion, which allows for insight into the influence of temperature dependence on different material properties. The variable property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to the temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work helps in designing couples for optimal performance and also assists in material selection.
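
    For orientation, the constant-property result that a variable-property analysis perturbs around can be written down directly. The sketch below uses the standard textbook formula for the maximum conversion efficiency of a couple with average figure of merit ZT evaluated at the mean temperature (an assumed classical expression, not the paper's asymptotic expansion).

```python
from math import sqrt

def carnot_efficiency(t_hot, t_cold):
    # Upper bound for any heat engine between the two reservoirs.
    return 1.0 - t_cold / t_hot

def te_efficiency(zt, t_hot, t_cold):
    """Classic constant-property conversion efficiency of a
    thermoelectric couple: Carnot factor times a ZT-dependent factor."""
    m = sqrt(1.0 + zt)
    return carnot_efficiency(t_hot, t_cold) * (m - 1.0) / (m + t_cold / t_hot)
```

    The efficiency grows monotonically with ZT and stays below the Carnot limit, which is why two materials with the same average ZT but different temperature dependence can still differ in efficiency once the constant-property assumption is relaxed.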

  9. Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin

    2018-07-01

    Prediction of the peak discharge of floods has attracted great attention from researchers and engineers. In the present study, nine typical nonlinear mathematical models are established based on a database of 40 historical dam failures. The first eight models, developed through a series of regression analyses, are purely empirical, while the last one is a semi-analytical approach derived from an analytical solution of dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas for estimating the peak outflow from breached embankment dams. The multiple regression analysis indicates that a function using the former two variables (i.e., Hw and Vw) produces considerably more accurate results than one using the latter two (i.e., El and Ew). It is shown that the semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce reasonably accurate results, except for the model using only El. Moreover, the present models have been compared with other models available in the literature for estimating peak discharge.
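
    Empirical models of this kind are typically power laws, e.g. Qp = a·Hw^b·Vw^c, which become linear after taking logarithms. The sketch below recovers such a model by ordinary least squares on log-transformed data; the exponents used in the synthetic test data are invented for illustration and are not the paper's fitted coefficients.

```python
from math import log, exp

def fit_power_law(H, V, Q):
    """Least-squares fit of ln Q = ln a + b ln H + c ln V,
    solving the 3x3 normal equations by Gaussian elimination."""
    X = [[1.0, log(h), log(v)] for h, v in zip(H, V)]
    y = [log(q) for q in Q]
    # Normal equations A beta = rhs (A is symmetric positive definite).
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)]
         for i in range(3)]
    rhs = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
    for i in range(3):                      # forward elimination
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * b for a, b in zip(A[j], A[i])]
            rhs[j] -= f * rhs[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return exp(beta[0]), beta[1], beta[2]   # a, b, c
```

    On noise-free synthetic data the fit recovers the generating exponents exactly; on real dam-failure records the residual scatter is what drives the accuracy ranking discussed in the abstract.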

  10. Exploration of the Memory Effect on the Photon-Assisted Tunneling via a Single Quantum Dot:. a Generalized Floquet Theoretical Approach

    NASA Astrophysics Data System (ADS)

    Chen, Hsing-Ta; Ho, Tak-San; Chu, Shih-I.

    The generalized Floquet approach is developed to study the memory effect on electron transport through a periodically driven single quantum dot in an electrode-multi-level dot-electrode nanoscale quantum device. The memory effect is treated using a multi-function Lorentzian spectral density (LSD) model that mimics the spectral density of each electrode in terms of multiple Lorentzian functions. For the symmetric single-function LSD model involving a single-level dot, the underlying single-particle propagator is shown to be related to a 2×2 effective time-dependent Hamiltonian that includes both the periodic external field and the electrode memory effect. By invoking the generalized Van Vleck (GVV) nearly degenerate perturbation theory, an analytical Tien-Gordon-like expression is derived for the arbitrary-order multi-photon resonance d.c. tunneling current. Numerically converged simulations and the GVV analytical results are in good agreement, revealing the origin of multi-photon coherent destruction of tunneling and accounting for the suppression of the staircase jumps of the d.c. current due to the memory effect. Notably, a novel blockade phenomenon is observed, showing distinctive oscillations in the field-induced current in the large bias voltage limit.

  11. Structure Shapes Dynamics and Directionality in Diverse Brain Networks: Mathematical Principles and Empirical Confirmation in Three Species

    NASA Astrophysics Data System (ADS)

    Moon, Joon-Young; Kim, Junhyeok; Ko, Tae-Wook; Kim, Minkyung; Iturria-Medina, Yasser; Choi, Jee-Hyun; Lee, Joseph; Mashour, George A.; Lee, Uncheol

    2017-04-01

    Identifying how spatially distributed information becomes integrated in the brain is essential to understanding higher cognitive functions. Previous computational and empirical studies suggest a significant influence of brain network structure on brain network function. However, there have been few analytical approaches to explain the role of network structure in shaping regional activities and directionality patterns. In this study, analytical methods are applied to a coupled oscillator model implemented in inhomogeneous networks. We first derive a mathematical principle that explains the emergence of directionality from the underlying brain network structure. We then apply the analytical methods to the anatomical brain networks of human, macaque, and mouse, successfully predicting simulation and empirical electroencephalographic data. The results demonstrate that the global directionality patterns in resting state brain networks can be predicted solely by their unique network structures. This study forms a foundation for a more comprehensive understanding of how neural information is directed and integrated in complex brain networks.

  12. Generalized Mantel-Haenszel Methods for Differential Item Functioning Detection

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Madeira, Jaqueline M.

    2008-01-01

    Mantel-Haenszel methods comprise a highly flexible methodology for assessing the degree of association between two categorical variables, whether they are nominal or ordinal, while controlling for other variables. The versatility of Mantel-Haenszel analytical approaches has made them very popular in the assessment of the differential functioning…
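
    The core of the Mantel-Haenszel machinery for differential item functioning (DIF) is the common odds ratio pooled over matched ability strata, often reported on the ETS delta scale. A minimal sketch using the standard formulas (the stratification and purification steps of a full DIF analysis are omitted):

```python
from math import log

def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across 2x2 tables.
    Each table is (a, b, c, d): rows = reference/focal group,
    columns = item right/wrong, one table per matched stratum."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def mh_delta(or_mh):
    """ETS delta metric: MH D-DIF = -2.35 ln(OR); values near 0 mean
    negligible DIF, large magnitudes flag items for review."""
    return -2.35 * log(or_mh)
```

    An odds ratio of 1 (delta of 0) means the reference and focal groups, matched on ability, have equal odds of answering the item correctly.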

  13. ADHD and Reading Disabilities: A Cluster Analytic Approach for Distinguishing Subgroups.

    ERIC Educational Resources Information Center

    Bonafina, Marcela A.; Newcorn, Jeffrey H.; McKay, Kathleen E.; Koda, Vivian H.; Halperin, Jeffrey M.

    2000-01-01

    Using cluster analysis, a study empirically divided 54 children with attention-deficit/hyperactivity disorder (ADHD) based on their Full Scale IQ and reading ability. Clusters had different patterns of cognitive, behavioral, and neurochemical functions, as determined by discrepancies in Verbal-Performance IQ, academic achievement, parent…

  14. A Functional Analytic Approach to Computer-Interactive Mathematics

    ERIC Educational Resources Information Center

    Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M.; Ninness, Sharon K.

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on…

  15. Developing a New Interdisciplinary Lab Course for Undergraduate and Graduate Students: Plant Cells and Proteins

    ERIC Educational Resources Information Center

    Jez, Joseph M.; Schachtman, Daniel P.; Berg, R. Howard; Taylor, Christopher G.; Chen, Sixue; Hicks, Leslie M.; Jaworski, Jan G.; Smith, Thomas J.; Nielsen, Erik; Pikaard, Craig S.

    2007-01-01

    Studies of protein function increasingly use multifaceted approaches that span disciplines including recombinant DNA technology, cell biology, and analytical biochemistry. These studies rely on sophisticated equipment and methodologies including confocal fluorescence microscopy, mass spectrometry, and X-ray crystallography that are beyond the…

  16. The Relationship between Psychiatric Symptomatology and Motivation of Challenging Behaviour: A Preliminary Study

    ERIC Educational Resources Information Center

    Holden, Borge; Gitlesen, Jens Petter

    2008-01-01

    In addition to explaining challenging behaviour by way of behaviour analytic, functional analyses, challenging behaviour is increasingly explained by way of psychiatric symptomatology. According to some researchers, the two approaches complement each other, as psychiatric symptomatology may form a motivational basis for the individual's response…

  17. Forces between functionalized silica nanoparticles in solution

    NASA Astrophysics Data System (ADS)

    Lane, J. Matthew D.; Ismail, Ahmed E.; Chandross, Michael; Lorenz, Christian D.; Grest, Gary S.

    2009-05-01

    To prevent the flocculation and phase separation of nanoparticles in solution, nanoparticles are often functionalized with short chain surfactants. Here we present fully atomistic molecular dynamics simulations which characterize how these functional coatings affect the interactions between nanoparticles and with the surrounding solvent. For 5-nm-diameter silica nanoparticles coated with poly(ethylene oxide) (PEO) oligomers in water, we determined the hydrodynamic drag on two approaching nanoparticles moving through solvent and on a single nanoparticle as it approaches a planar surface. In most circumstances, macroscale fluid theory accurately predicts the drag on these nanoscale particles. Good agreement is seen with Brenner’s analytical solutions for wall separations larger than the soft nanoparticle radius. For two approaching coated nanoparticles, the solvent-mediated (velocity independent) and lubrication (velocity-dependent) forces are purely repulsive and do not exhibit force oscillations that are typical of uncoated rigid spheres.

  18. SU-E-I-16: Scan Length Dependency of the Radial Dose Distribution in a Long Polyethylene Cylinder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakalyar, D; McKenney, S; Feng, W

    Purpose: The area-averaged dose in the central plane of a long cylinder following a CT scan depends upon the radial dose distribution and the length of the scan. The ICRU/TG200 phantom, a polyethylene cylinder 30 cm in diameter and 60 cm long, was the subject of this study. The purpose was to develop an analytic function that could determine the dose for a scan length L at any point in the central plane of this phantom. Methods: Monte Carlo calculations were performed on a simulated ICRU/TG200 phantom under cylindrically symmetric irradiation conditions. Thus, the radial dose distribution function must be an even function that accounts for two competing effects: the direct beam makes its weakest contribution at the center, while the scatter begins abruptly at the outer radius and grows as the center is approached. The scatter contribution also increases with scan length, with the increase approaching its limiting value at the periphery faster than along the central axis. An analytic function was developed that fits the data and possesses these features. Results: Symmetry and continuity dictate a local extremum at the center, which is a minimum for the ICRU/TG200 phantom. The relative depth of the minimum decreases as the scan length grows, and an absolute maximum can occur between the center and the outer edge of the cylinder. As the scan length grows, the relative dip in the center decreases, so that for very long scan lengths the dose profile is relatively flat. Conclusion: An analytic function characterizes the radial and scan-length dependency of dose for long cylindrical phantoms. The function can be integrated, with the results expressed in closed form. One use for this is to help determine the average dose distribution over the central cylinder plane for any scan length.

  19. Joint multifractal analysis based on the partition function approach: analytical analysis, numerical simulation and empirical application

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing

    2015-10-01

    Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross correlations. However, several important issues about these methods are not well understood, and most methods consider only one moment order. We study the joint multifractal analysis based on the partition function with two moment orders, which was originally introduced to investigate fluid fields, and derive several important properties analytically. We apply the method numerically to binomial measures with multifractal cross correlations and to bivariate fractional Brownian motions without multifractal cross correlations. For binomial multifractal measures, the explicit expressions of the mass function, the singularity strength and the multifractal spectrum of the cross correlations are derived, and they agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross correlations of index volatilities.
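
    In the univariate special case, the partition-function approach can be checked against the closed-form mass exponent of a binomial measure, τ(q) = −log₂(p^q + (1−p)^q). A minimal sketch (single measure only; the paper's joint two-moment-order analysis is not reproduced):

```python
from math import log2

def binomial_measure(p, levels):
    """Multiplicative binomial cascade: at each level every box splits
    into two sub-boxes carrying fractions p and 1-p of its mass."""
    mu = [1.0]
    for _ in range(levels):
        mu = [w for m in mu for w in (m * p, m * (1 - p))]
    return mu

def tau_estimate(mu, q, levels):
    chi = sum(m ** q for m in mu)        # partition function sum chi(q)
    return -log2(chi) / levels           # scaling exponent tau(q)

def tau_exact(p, q):
    return -log2(p ** q + (1 - p) ** q)
```

    Because the cascade is exactly self-similar, the estimated exponent matches the analytic one at every moment order up to floating-point error.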

  20. Performance degradation of helicopter rotor in forward flight due to ice

    NASA Technical Reports Server (NTRS)

    Korkan, K. D.; Dadone, L.; Shaw, R. J.

    1985-01-01

    This study addresses the analytical assessment of the degradation in the forward-flight performance of the front rotor of the Boeing Vertol CH-47D helicopter in a rime-ice natural icing encounter. The front rotor disk was divided into 24 15-deg sections, and the local Mach number and angle of attack were evaluated as functions of azimuthal and radial location for a specified flight condition. Profile drag increments were then calculated as a function of azimuthal and radial position for different times of exposure to icing, and the rotor performance was re-evaluated including these drag increments. The results of the analytical prediction method, such as the horsepower required to maintain a specific flight condition as a function of icing time, have been generated. The method is described to illustrate the value of such an approach in assessing the performance changes experienced by a helicopter rotor as a result of rime ice accretion.

  1. Analytic derivation of the next-to-leading order proton structure function F2^p(x, Q^2) based on the Laplace transformation

    NASA Astrophysics Data System (ADS)

    Khanpour, Hamzeh; Mirjalili, Abolfazl; Tehrani, S. Atashbar

    2017-03-01

    An analytical solution based on the Laplace transformation technique for the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations is presented at next-to-leading order accuracy in perturbative QCD. This technique is also applied to extract the analytical solution for the proton structure function, F2^p(x, Q^2), in the Laplace s-space. We present the results for the separate parton distributions of all parton species, including valence quark densities, the antiquark and strange sea parton distribution functions (PDFs), and the gluon distribution. We successfully compare the obtained parton distribution functions and the proton structure function with the results from GJR08 [Gluck, Jimenez-Delgado, and Reya, Eur. Phys. J. C 53, 355 (2008)], 10.1140/epjc/s10052-007-0462-9 and KKT12 [Khanpour, Khorramian, and Tehrani, J. Phys. G 40, 045002 (2013)], 10.1088/0954-3899/40/4/045002 parametrization models as well as the x-space results using the QCDnum code. Our calculations show a very good agreement with the available theoretical models as well as the deep inelastic scattering (DIS) experimental data throughout the small and large values of x. The use of our analytical solution to extract the parton densities and the proton structure function is discussed in detail to justify the analysis method, considering the accuracy and speed of calculations. Overall, the accuracy we obtain from the analytical solution using the inverse Laplace transform technique is found to be better than 1 part in 10^4 to 10^5. We also present a detailed QCD analysis of nonsinglet structure functions using all available DIS data to perform global QCD fits. In this regard we employ the Jacobi polynomial approach to convert the results from Laplace s-space to Bjorken x-space. The extracted valence quark densities are also presented and compared to the JR14, MMHT14, NNPDF, and CJ15 PDF sets.
We evaluate the numerical effects of target mass corrections (TMCs) and higher twist (HT) terms on various structure functions, and compare fits to data with and without these corrections.

  2. Aptamer Nano-Flares for Molecular Detection in Living Cells

    PubMed Central

    Zheng, Dan; Seferos, Dwight S.; Giljohann, David A.; Patel, Pinal C.; Mirkin, Chad A.

    2011-01-01

    We demonstrate a composite nanomaterial, termed an aptamer nano-flare, that can directly quantify an intracellular analyte in a living cell. Aptamer nano-flares consist of a gold nanoparticle core functionalized with a dense monolayer of nucleic acid aptamers with a high affinity for adenosine triphosphate (ATP). The probes bind selectively to target molecules and release fluorescent reporters which indicate the presence of the analyte. Additionally, these nanoconjugates are readily taken up by cells where their signal intensity can be used to quantify intracellular analyte concentration. These nanoconjugates are a promising approach for the intracellular quantification of other small molecules or proteins, or as agents that use aptamer binding to elicit a biological response in living systems. PMID:19645478

  3. Analytical approximation schemes for solving exact renormalization group equations in the local potential approximation

    NASA Astrophysics Data System (ADS)

    Bervillier, C.; Boisseau, B.; Giacomini, H.

    2008-02-01

    The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme for two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made; a similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (in particular for the Wilson-Polchinski case, for which they fail).

  4. An analytical approach to the rise velocity of periodic bubble trains in non-Newtonian fluids.

    PubMed

    Frank, X; Li, H Z; Funfschilling, D

    2005-01-01

    The present study aims at providing insight into the acceleration mechanism of a bubble chain rising in shear-thinning viscoelastic fluids. The experimental investigation by Particle Image Velocimetry (PIV), birefringence visualisation and rheological simulation shows that two aspects are central to bubble interactions in such media: the creation of stresses by the passage of bubbles, and their relaxation due to the fluid's memory, forming an evanescent corridor of reduced viscosity. Interactions between bubbles were taken into account mainly through a linear superposition of the stress evolution behind each bubble. An analytical approach incorporating these rheological considerations was developed to compute the rise velocity of a bubble chain as a function of the injection period and bubble volume. The model predictions compare satisfactorily with the experimental investigation.

  5. Riccati parameterized self-similar waves in two-dimensional graded-index waveguide

    NASA Astrophysics Data System (ADS)

    Kumar De, Kanchan; Goyal, Amit; Raju, Thokala Soloman; Kumar, C. N.; Panigrahi, Prasanta K.

    2015-04-01

    An analytical method based on the gauge-similarity transformation technique has been employed for mapping (2+1)-dimensional variable-coefficient coupled nonlinear Schrödinger equations (vc-CNLSE) with dispersion, nonlinearity and gain to the standard NLSE. Under certain functional relations we construct a large family of self-similar waves in the form of bright similaritons, Akhmediev breathers and rogue waves. We report the effect of dispersion on the intensity of the solitary waves. Further, we illustrate the procedure to amplify the intensity of self-similar waves using the isospectral Hamiltonian approach. This approach provides an efficient mechanism to generate analytically a wide class of tapering profiles and widths by exploiting the Riccati parameter. Equivalently, it enables one to control efficiently the self-similar wave structures and hence their evolution.

  6. Two-particle correlation function and dihadron correlation approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vechernin, V. V., E-mail: v.vechernin@spbu.ru; Ivanov, K. O.; Neverov, D. I.

    It is shown that, in the case of asymmetric nuclear interactions, the application of the traditional dihadron correlation approach to determining a two-particle correlation function C may lead to a form distorted in relation to the canonical pair correlation function C_2. This result was obtained both by means of exact analytic calculations of correlation functions within a simple string model for proton-nucleus and deuteron-nucleus collisions and by means of Monte Carlo simulations based on the HIJING event generator. It is also shown that the method based on studying multiplicity correlations in two narrow observation windows separated in rapidity makes it possible to determine correctly the canonical pair correlation function C_2 in all cases, including the case where the rapidity distribution of product particles is not uniform.
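
    The two-window method amounts to estimating the forward-backward multiplicity correlation coefficient b = cov(n_F, n_B)/var(n_F) from per-event multiplicities. The sketch below illustrates this on a toy string model (hypothetical parameters, not the paper's string model or HIJING setup): a fluctuating number of identical particle-emitting strings, shared by both windows, induces the long-range correlation.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplicative Poisson sampler."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def toy_string_events(n_events, mean_strings=10, mu=1.0, seed=42):
    """Each event carries a fluctuating number of strings; every string
    emits Poisson(mu) particles into each of the two rapidity windows,
    so the shared string number correlates the window multiplicities."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_events):
        ns = 1 + rng.randrange(2 * mean_strings)   # uniform on 1..2*mean
        nf = sum(poisson(rng, mu) for _ in range(ns))
        nb = sum(poisson(rng, mu) for _ in range(ns))
        events.append((nf, nb))
    return events

def fb_correlation(events):
    """Forward-backward correlation coefficient b = cov(nF,nB)/var(nF)."""
    n = len(events)
    mf = sum(f for f, _ in events) / n
    mb = sum(b for _, b in events) / n
    cov = sum((f - mf) * (b - mb) for f, b in events) / n
    var = sum((f - mf) ** 2 for f, _ in events) / n
    return cov / var
```

    For this toy model the expected coefficient is Var(N_s)μ² / (Var(N_s)μ² + E[N_s]μ) ≈ 0.76 with the defaults, and the estimate from a few thousand events lands close to it.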

  7. Effect-Based Screening Methods for Water Quality Characterization Will Augment Conventional Analyte-by-Analyte Chemical Methods in Research As Well As Regulatory Monitoring

    EPA Science Inventory

    Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However, these approaches, which require a ...

  8. Chemical Functionalization of Plasmonic Surface Biosensors: A Tutorial Review on Issues, Strategies, and Costs

    PubMed Central

    2017-01-01

    In an ideal plasmonic surface sensor, the bioactive area, where analytes are recognized by specific biomolecules, is surrounded by an area that is generally composed of a different material. The latter, often the surface of the supporting chip, is generally hard to functionalize selectively with respect to the active area. As a result, cross-talk between the active area and the surrounding one may occur. In designing a plasmonic sensor, various issues must be addressed: the specificity of analyte recognition, the orientation of the immobilized biomolecule that acts as the analyte receptor, and the selectivity of surface coverage. The objective of this tutorial review is to introduce the main rational tools required for a correct and complete approach to chemically functionalizing plasmonic surface biosensors. After a short introduction, the review discusses, in detail, the most common strategies for achieving effective surface functionalization. The most important issues, such as the orientation of active molecules and spatial and chemical selectivity, are considered. A list of well-defined protocols is suggested for the most common practical situations. Importantly, for the reported protocols, we also present direct comparisons in terms of costs, labor demand, and risk vs benefit balance. In addition, a survey of the most used characterization techniques necessary to validate the chemical protocols is reported. PMID:28796479

  9. Confined active Brownian particles: theoretical description of propulsion-induced accumulation

    NASA Astrophysics Data System (ADS)

    Das, Shibananda; Gompper, Gerhard; Winkler, Roland G.

    2018-01-01

    The stationary-state distribution function of confined active Brownian particles (ABPs) is analyzed by computer simulations and analytical calculations. We consider a radial harmonic as well as an anharmonic confinement potential. In the simulations, the ABP is propelled with a prescribed velocity along a body-fixed direction, which changes in a diffusive manner. For the analytical approach, the Cartesian components of the propulsion velocity are assumed to change independently (active Ornstein-Uhlenbeck particle, AOUP). This results in very different velocity distribution functions. The analytical solution of the Fokker-Planck equation for an AOUP in a harmonic potential is presented, and a conditional distribution function is provided for the radial particle distribution at a given magnitude of the propulsion velocity. This conditional probability distribution facilitates the description of the coupling of the spatial coordinate and propulsion, which yields activity-induced accumulation of particles. For the anharmonic potential, a probability distribution function is derived within the unified colored noise approximation. The comparison of the simulation results with theoretical predictions yields good agreement for large rotational diffusion coefficients, e.g. due to tumbling, even for large propulsion velocities (Péclet numbers). However, we find significant deviations already for moderate Péclet numbers, when the rotational diffusion coefficient is on the order of the thermal one.

  10. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.

  11. Nonperturbative Quantum Physics from Low-Order Perturbation Theory.

    PubMed

    Mera, Héctor; Pedersen, Thomas G; Nikolić, Branislav K

    2015-10-02

    The Stark effect in hydrogen and the cubic anharmonic oscillator furnish examples of quantum systems where the perturbation results in a certain ionization probability by tunneling processes. Accordingly, the perturbed ground-state energy is shifted and broadened, thus acquiring an imaginary part which is considered to be a paradigm of nonperturbative behavior. Here we demonstrate how the low-order coefficients of a divergent perturbation series can be used to obtain excellent approximations to both the real and imaginary parts of the perturbed ground-state eigenenergy. The key is to use analytic continuation functions with a built-in singularity structure within the complex plane of the coupling constant, which is tailored by means of Bender-Wu dispersion relations. In the examples discussed, the analytic continuation functions are Gauss hypergeometric functions, which take as input fourth-order perturbation theory and return excellent approximations to the complex perturbed eigenvalue. These functions are Borel consistent and dramatically outperform widely used Padé and Borel-Padé approaches, even for rather large values of the coupling constant.
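
    The Padé baseline that the hypergeometric continuation functions are compared against is straightforward to construct from low-order series coefficients. A minimal sketch of a generic [L/M] Padé approximant (the Bender-Wu-tailored hypergeometric construction of the paper is not reproduced):

```python
from math import log

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns a callable x -> P(x)/Q(x) with Q(0) = 1."""
    # Denominator: sum_m c[L+k-m] q_m = -c[L+k] for k = 1..M (q_0 = 1).
    A = [[(c[L + k - m] if L + k - m >= 0 else 0.0) for m in range(1, M + 1)]
         for k in range(1, M + 1)]
    q = [1.0] + solve(A, [-c[L + k] for k in range(1, M + 1)])
    # Numerator coefficients follow by matching the low-order terms.
    p = [sum(c[j - m] * q[m] for m in range(min(j, M) + 1))
         for j in range(L + 1)]
    def approx(x):
        num = sum(pj * x ** j for j, pj in enumerate(p))
        den = sum(qm * x ** m for m, qm in enumerate(q))
        return num / den
    return approx
```

    A classic demonstration: the fourth-order Taylor series of ln(1+x) is useless at x = 2 (outside its radius of convergence), while the [2/2] Padé approximant built from the same five coefficients lands within a percent of ln 3.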

  12. Assessment regarding the use of the computer aided analytical models in the calculus of the general strength of a ship hull

    NASA Astrophysics Data System (ADS)

    Hreniuc, V.; Hreniuc, A.; Pescaru, A.

    2017-08-01

    Solving a general strength problem of a ship hull may be done using analytical approaches, which are useful to deduce the buoyancy force distribution and the weight force distribution along the hull, as well as the geometrical characteristics of the sections. These data are used to draw the free-body diagrams and to compute the stresses. General strength problems require a large amount of computation, so it is of interest how a computer may be used to solve them. Using computer programming, an engineer may conceive software instruments based on analytical approaches. However, before developing the computer code the research topic must be thoroughly analysed, thereby reaching a meta-level of understanding of the problem. The following stage is to conceive an appropriate development strategy for the original software instruments useful for the rapid development of computer-aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates with ‘simple’ geometrical shapes. By ‘simple’ we mean that direct calculus relations are available for the corresponding shapes. The set of ‘simple’ shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support to solve general strength ship hull problems using analytical methods.
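
    The idea of composing a section from 'simple' shapes with direct calculus relations lends itself to a short sketch. With hypothetical dimensions (not from the paper), the code below builds a symmetric I-like section as a solid rectangle minus two void rectangles and evaluates area, centroid height, and second moment of area via the parallel-axis theorem:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle: width b, height h, centroid height yc, sign +1/-1."""
    b: float
    h: float
    yc: float
    sign: int = 1  # -1 marks a void subtracted from the section

    @property
    def area(self):
        return self.sign * self.b * self.h

    @property
    def I_own(self):
        # Second moment about the rectangle's own centroidal axis.
        return self.sign * self.b * self.h ** 3 / 12

def section_properties(shapes):
    """Area, centroid height, and second moment about the composite centroid."""
    A = sum(s.area for s in shapes)
    yc = sum(s.area * s.yc for s in shapes) / A
    I = sum(s.I_own + s.area * (s.yc - yc) ** 2 for s in shapes)  # parallel axis
    return A, yc, I

# Symmetric I-section: 0.2 m x 0.3 m outer, 0.02 m web, 0.26 m clear web height
H, B, tw, hw = 0.3, 0.2, 0.02, 0.26
shapes = [
    Rect(B, H, H / 2),                       # solid outer rectangle
    Rect((B - tw) / 2, hw, H / 2, sign=-1),  # left void
    Rect((B - tw) / 2, hw, H / 2, sign=-1),  # right void
]
A, yc, I = section_properties(shapes)
# Agrees with the closed form I = (B*H**3 - (B - tw)*hw**3) / 12
```

    Spline- or polygon-bounded shapes, as mentioned in the abstract, would slot into the same signed-composition scheme once their own area, centroid, and second-moment relations are supplied.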

  13. Stochastic modeling of macrodispersion in unsaturated heterogeneous porous media. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, T.C.J.

    1995-02-01

    Spatial heterogeneity of geologic media leads to uncertainty in predicting both flow and transport in the vadose zone. In this work an efficient and flexible, combined analytical-numerical Monte Carlo approach is developed for the analysis of steady-state flow and transient transport processes in highly heterogeneous, variably saturated porous media. The approach is also used for the investigation of the validity of linear, first order analytical stochastic models. With the Monte Carlo analysis accurate estimates of the ensemble conductivity, head, velocity, and concentration mean and covariance are obtained; the statistical moments describing displacement of solute plumes, solute breakthrough at a compliance surface, and time of first exceedance of a given solute flux level are analyzed; and the cumulative probability density functions for solute flux across a compliance surface are investigated. The results of the Monte Carlo analysis show that for very heterogeneous flow fields, and particularly in anisotropic soils, the linearized, analytical predictions of soil water tension and soil moisture flux become erroneous. Analytical, linearized Lagrangian transport models also overestimate both the longitudinal and the transverse spreading of the mean solute plume in very heterogeneous soils and in dry soils. A combined analytical-numerical conditional simulation algorithm is also developed to estimate the impact of in-situ soil hydraulic measurements on reducing the uncertainty of concentration and solute flux predictions.

  14. Benchmarking fully analytic DFT force fields for vibrational spectroscopy: A study on halogenated compounds

    NASA Astrophysics Data System (ADS)

    Pietropolli Charmet, Andrea; Cornaton, Yann

    2018-05-01

    This work presents an investigation of the theoretical predictions yielded by anharmonic force fields in which the cubic and quartic force constants are computed analytically by means of density functional theory (DFT), using the recursive scheme developed by M. Ringholm et al. (J. Comput. Chem. 35 (2014) 622). Different functionals (namely B3LYP, PBE, PBE0 and PW86x) and basis sets were used for calculating the anharmonic vibrational spectra of two halomethanes. The benchmark analysis carried out demonstrates the reliability and overall good performance offered by hybrid approaches, in which the harmonic data obtained at the coupled cluster with single and double excitations level of theory, augmented by a perturbative estimate of the effects of connected triple excitations, CCSD(T), are combined with the fully analytic higher-order force constants yielded by DFT functionals. These methods lead to reliable and computationally affordable calculations of anharmonic vibrational spectra, with an accuracy comparable to that of hybrid force fields whose anharmonic parts are computed at the second-order Møller-Plesset perturbation theory (MP2) level using numerical differentiation, but without the corresponding issues related to computational cost and numerical errors.

  15. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.

  16. A Boltzmann machine for the organization of intelligent machines

    NASA Technical Reports Server (NTRS)

    Moed, Michael C.; Saridis, George N.

    1989-01-01

    In the present technological society, there is a major need to build machines that would execute intelligent tasks operating in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots, utilizing heuristic ideas, there is no systematic approach to design such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems, AI, and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine, structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski, 1985). Then this machine will serve to search for the optimal design of the organization level of an intelligent machine. In order to accomplish this, some mathematical theory of the intelligent machines will first be outlined. Then some definitions of the variables associated with the principle, like machine intelligence, machine knowledge, and precision, will be made (Saridis, Valavanis 1988). Then a procedure to establish the Boltzmann machine on an analytic basis will be presented and illustrated by an example in designing the organization level of an Intelligent Machine.
A new search technique, the Modified Genetic Algorithm, is presented and proved to converge to the minimum of a cost function. Finally, simulations will show the effectiveness of a variety of search techniques for the intelligent machine.

  17. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  18. Analysis of density effects in plasmas and their influence on electron-impact cross sections

    NASA Astrophysics Data System (ADS)

    Belkhiri, M.; Poirier, M.

    2014-12-01

    Density effects in plasmas are analyzed using a Thomas-Fermi approach for free electrons. First, scaling properties are determined for the free-electron potential and density. For hydrogen-like ions, the first two terms of an analytical expansion of this potential as a function of the plasma coupling parameter are obtained. In such ions, from these properties and numerical calculations, a simple analytical fit is proposed for the plasma potential, which holds for any electron density, temperature, and atomic number, at least assuming that Maxwell-Boltzmann statistics is applicable. This allows one to analyze perturbatively the influence of the plasma potential on energies, wave functions, transition rates, and electron-impact collision rates for single-electron ions. Second, plasmas with an arbitrary charge state are considered, using a modified version of the Flexible Atomic Code (FAC) package with a plasma potential based on a Thomas-Fermi approach. Various methods for the collision cross-section calculations are reviewed. The influence of plasma density on these cross sections is analyzed in detail. Moreover, it is demonstrated that, in a given transition, the radiative and collisional-excitation rates are differently affected by the plasma density. Some analytical expressions are proposed for hydrogen-like ions in the limit where the Born or Lotz approximation applies and are compared to the numerical results from the FAC.

  19. A Baseline for the Multivariate Comparison of Resting-State Networks

    PubMed Central

    Allen, Elena A.; Erhardt, Erik B.; Damaraju, Eswar; Gruner, William; Segall, Judith M.; Silva, Rogers F.; Havlicek, Martin; Rachakonda, Srinivas; Fries, Jill; Kalyanam, Ravi; Michael, Andrew M.; Caprihan, Arvind; Turner, Jessica A.; Eichele, Tom; Adelsheim, Steven; Bryan, Angela D.; Bustillo, Juan; Clark, Vincent P.; Feldstein Ewing, Sarah W.; Filbey, Francesca; Ford, Corey C.; Hutchison, Kent; Jung, Rex E.; Kiehl, Kent A.; Kodituwakku, Piyadasa; Komesu, Yuko M.; Mayer, Andrew R.; Pearlson, Godfrey D.; Phillips, John P.; Sadek, Joseph R.; Stevens, Michael; Teuscher, Ursina; Thoma, Robert J.; Calhoun, Vince D.

    2011-01-01

    As the size of functional and structural MRI datasets expands, it becomes increasingly important to establish a baseline from which diagnostic relevance may be determined, a processing strategy that efficiently prepares data for analysis, and a statistical approach that identifies important effects in a manner that is both robust and reproducible. In this paper, we introduce a multivariate analytic approach that optimizes sensitivity and reduces unnecessary testing. We demonstrate the utility of this mega-analytic approach by identifying the effects of age and gender on the resting-state networks (RSNs) of 603 healthy adolescents and adults (mean age: 23.4 years, range: 12–71 years). Data were collected on the same scanner, preprocessed using an automated analysis pipeline based in SPM, and studied using group independent component analysis. RSNs were identified and evaluated in terms of three primary outcome measures: time course spectral power, spatial map intensity, and functional network connectivity. Results revealed robust effects of age on all three outcome measures, largely indicating decreases in network coherence and connectivity with increasing age. Gender effects were of smaller magnitude but suggested stronger intra-network connectivity in females and more inter-network connectivity in males, particularly with regard to sensorimotor networks. These findings, along with the analysis approach and statistical framework described here, provide a useful baseline for future investigations of brain networks in health and disease. PMID:21442040

  20. Effect of the Level of Inquiry on Student Interactions in Chemistry Laboratories

    ERIC Educational Resources Information Center

    Xu, Haozhi; Talanquer, Vicente

    2013-01-01

    The central goal of our exploratory study was to investigate differences in college chemistry students' interactions during lab experiments with different levels of inquiry. This analysis was approached from three major analytic dimensions: (i) functional analysis; (ii) cognitive processing; and (iii) social processing. According to our results,…

  1. Analytical Observations of the Applicability of the Concept of Student-as-Customer in a University Setting

    ERIC Educational Resources Information Center

    Tasie, George O.

    2010-01-01

    Total quality management (TQM) strives to improve organizational functioning by carefully studying the interface between an organization's mission, values, vision, policies and procedures, and the consumer that the organization serves. Central to this approach to revitalizing economic institutions is the importance placed on client satisfaction.…

  2. Laughing and Smiling to Manage Trouble in French-Language Classroom Interaction

    ERIC Educational Resources Information Center

    Petitjean, Cécile; González-Martínez, Esther

    2015-01-01

    This article deals with communicative functions of laughter and smiling in the classroom studied using a conversation analytical approach. Analysing a corpus of video-recorded French first-language lessons, we show how students sequentially organise laughter and smiling, and use them to preempt, solve or assess a problematic action. We also focus…

  3. Construction of RFIF using VVSFs with application

    NASA Astrophysics Data System (ADS)

    Katiyar, Kuldip; Prasad, Bhagwati

    2017-10-01

    A method of variable vertical scaling factors (VVSFs) is proposed to define the recurrent fractal interpolation function (RFIF) for fitting data sets. A generalization of one of the recent methods, using an analytic approach, is presented for finding the variable vertical scaling factors. An application to the reconstruction of an EEG signal is also given.

  4. Medication information leaflets for patients: the further validation of an analytic linguistic framework.

    PubMed

    Clerehan, Rosemary; Hirsh, Di; Buchbinder, Rachelle

    2009-01-01

    While clinicians may routinely use patient information leaflets about drug therapy, a poorly conceived leaflet has the potential to do harm. We previously developed a novel approach to analysing leaflets about a rheumatoid arthritis drug, using an analytic approach based on systemic functional linguistics. The aim of the present study was to verify the validity of the linguistic framework by applying it to two further arthritis drug leaflets. The findings confirmed the applicability of the framework and were used to refine it. A new stage or 'move' in the genre was identified. While the function of many of the moves appeared to be 'to instruct' the patient, the instruction was often unclear. The role relationships expressed in the text were critical to the meaning. As with our previous study, judged on their lexical density, the leaflets resembled academic text. The framework can provide specific tools to assess and produce medication information leaflets to support readers in taking medication. Future work could utilize the framework to evaluate information on other treatments and procedures or on healthcare information more widely.

  5. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
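
    A minimal sketch of the database-plus-regression idea follows, using plain ridge regression on a discretized Laplace kernel. The grids, the family of random spectra, and the regularization strength are all illustrative assumptions; the paper's regression model and its projection onto the constraint set are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = np.linspace(0.1, 5.0, 40)   # grid for the unknown "spectral" function
t = np.linspace(0.0, 3.0, 30)     # grid where data are observed
# Discretized first-kind Fredholm (Laplace-type) kernel: ill-conditioned inverse.
K = np.exp(-np.outer(t, tau)) * (tau[1] - tau[0])

def random_spectrum():
    """Random positive, normalized input (sum of Gaussians) for the database."""
    g = sum(np.exp(-0.5 * ((tau - rng.uniform(0.5, 4.0)) / rng.uniform(0.2, 0.8)) ** 2)
            for _ in range(3))
    return g / g.sum()

# Database built from the *stable forward* problem: (noisy data, true input) pairs.
X = np.array([random_spectrum() for _ in range(2000)])
Y = X @ K.T + rng.normal(0.0, 1e-3, (2000, len(t)))

# Ridge regression from data back to spectrum acts as a regularized inverse.
lam = 1e-3
W = np.linalg.solve(Y.T @ Y + lam * np.eye(len(t)), Y.T @ X)

g_true = random_spectrum()
y_new = g_true @ K.T + rng.normal(0.0, 1e-3, len(t))
g_hat = y_new @ W  # approximate solution for a previously unseen input
```

    The regularization here comes implicitly from fitting only inputs that the forward model can produce, which is the key observation the abstract highlights.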

  6. Prediction of relative and absolute permeabilities for gas and water from soil water retention curves using a pore-scale network model

    NASA Astrophysics Data System (ADS)

    Fischer, Ulrich; Celia, Michael A.

    1999-04-01

    Functional relationships for unsaturated flow in soils, including those between capillary pressure, saturation, and relative permeabilities, are often described using analytical models based on the bundle-of-tubes concept. These models are often limited by, for example, inherent difficulties in prediction of absolute permeabilities, and in incorporation of a discontinuous nonwetting phase. To overcome these difficulties, an alternative approach may be formulated using pore-scale network models. In this approach, the pore space of the network model is adjusted to match retention data, and absolute and relative permeabilities are then calculated. A new approach that allows more general assignments of pore sizes within the network model provides for greater flexibility to match measured data. This additional flexibility is especially important for simultaneous modeling of main imbibition and drainage branches. Through comparisons between the network model results, analytical model results, and measured data for a variety of both undisturbed and repacked soils, the network model is seen to match capillary pressure-saturation data nearly as well as the analytical model, to predict water phase relative permeabilities equally well, and to predict gas phase relative permeabilities significantly better than the analytical model. The network model also provides very good estimates for intrinsic permeability and thus for absolute permeabilities. Both the network model and the analytical model lost accuracy in predicting relative water permeabilities for soils characterized by a van Genuchten exponent n≲3. Overall, the computational results indicate that reliable predictions of both relative and absolute permeabilities are obtained with the network model when the model matches the capillary pressure-saturation data well. The results also indicate that measured imbibition data are crucial to good predictions of the complete hysteresis loop.
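
    The analytical bundle-of-tubes models compared against are typically of van Genuchten-Mualem form; a brief sketch with illustrative (assumed) parameters shows the retention curve and the water relative permeability it predicts:

```python
import numpy as np

def van_genuchten_Se(h, alpha, n):
    """Effective saturation Se(h) for capillary pressure head h > 0."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def mualem_krw(Se, n):
    """Mualem relative water permeability predicted from the retention curve."""
    m = 1.0 - 1.0 / n
    return np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative loam-like parameters (assumed, not from the study)
alpha, n = 3.6, 1.56               # alpha in 1/m, n dimensionless
h = np.linspace(0.01, 10.0, 200)   # capillary head in m
Se = van_genuchten_Se(h, alpha, n)
krw = mualem_krw(Se, n)
# krw = 1 at full saturation and decreases monotonically as the soil drains
```

    The pore-network alternative replaces these closed forms with a pore-size distribution tuned to the same retention data, which is what allows it to predict absolute and gas-phase permeabilities that the analytical model handles poorly.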

  7. Detector response function of an energy-resolved CdTe single photon counting detector.

    PubMed

    Liu, Xin; Lee, Hyoung Koo

    2014-01-01

    While spectral CT using single photon counting detector has shown a number of advantages in diagnostic imaging, knowledge of the detector response function of an energy-resolved detector is needed to correct the signal bias and reconstruct the image more accurately. The objective of this paper is to study the photo counting detector response function using laboratory sources, and investigate the signal bias correction method. Our approach is to model the detector response function over the entire diagnostic energy range (20 keV

  8. Analytical spectrum for a Hamiltonian of quantum dots with Rashba spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Dossa, Anselme F.; Avossevou, Gabriel Y. H.

    2014-12-01

    We determine the analytical solution for a Hamiltonian describing a confined charged particle in a quantum dot, including Rashba spin-orbit coupling and Zeeman splitting terms. The approach followed in this paper is straightforward and uses the symmetrization of the wave function's components. The eigenvalue problem for the Hamiltonian in Bargmann's Hilbert space reduces to a system of coupled first-order differential equations. Then we exploit the symmetry in the system to obtain uncoupled second-order differential equations, which are found to be the Whittaker-Ince limit of the confluent Heun equations. Analytical expressions as well as numerical results are obtained for the spectrum. One of the main features of such models, namely, the level splitting, is present through the spectrum obtained in this paper.

  9. Epilepsy analytic system with cloud computing.

    PubMed

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Analyzing such big data to provide decision support for physicians is now an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions, namely wavelet transform, genetic algorithm (GA), and support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified on two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training time is accelerated by about 4.66 times, and the prediction time also meets real-time requirements.

  10. L-hop percolation on networks with arbitrary degree distributions and its applications

    NASA Astrophysics Data System (ADS)

    Shang, Yilun; Luo, Weiliang; Xu, Shouhuai

    2011-09-01

    Site percolation has been used to help understand analytically the robustness of complex networks in the presence of random node deletion (or failure). In this paper we move a further step beyond random node deletion by considering that a node can be deleted because it is chosen or because it is within some L-hop distance of a chosen node. Using the generating functions approach, we present analytic results on the percolation threshold as well as the mean size, and size distribution, of nongiant components of complex networks under such operations. The introduction of parameter L is both conceptually interesting because it accommodates a sort of nonindependent node deletion, which is often difficult to tackle analytically, and practically interesting because it offers useful insights for cybersecurity (such as botnet defense).
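
    For ordinary site percolation (the L = 0 case), the generating-function formalism yields the Molloy-Reed criterion: a giant component survives while the retained fraction of nodes exceeds the threshold given by the first two moments of the degree distribution. A quick numerical check on a sampled Poisson degree sequence, where the threshold reduces to the reciprocal of the mean degree:

```python
import numpy as np

def molloy_reed_threshold(degrees):
    """Occupation threshold phi_c = <k> / (<k**2> - <k>) for the configuration model."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (np.mean(k ** 2) - k.mean())

rng = np.random.default_rng(42)
c = 4.0                                  # mean degree of the Poisson (ER-like) network
degrees = rng.poisson(c, size=1_000_000)
phi_c = molloy_reed_threshold(degrees)
# For a Poisson degree distribution <k**2> - <k> = <k>**2, so phi_c ~ 1/c = 0.25
```

    The paper's L-hop deletion shrinks the effective retained fraction faster than independent deletion, which is why the threshold shifts with L.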

  11. Exact hierarchical clustering in one dimension. [in universe

    NASA Technical Reports Server (NTRS)

    Williams, B. G.; Heavens, A. F.; Peacock, J. A.; Shandarin, S. F.

    1991-01-01

    The present adhesion model-based one-dimensional simulations of gravitational clustering have yielded bound-object catalogs applicable in tests of analytical approaches to cosmological structure formation. Attention is given to Press-Schechter (1974) type functions, as well as to their density peak-theory modifications and the two-point correlation function estimated from peak theory. The extent to which individual collapsed-object locations can be predicted by linear theory is significant only for objects of near-characteristic nonlinear mass.

  12. Modeling of normal contact of elastic bodies with surface relief taken into account

    NASA Astrophysics Data System (ADS)

    Goryacheva, I. G.; Tsukanov, I. Yu

    2018-04-01

    An approach to accounting for surface relief in normal contact problems for rough bodies, based on an additional displacement function for asperities, is considered. The method and analytic expressions for calculating the additional displacement function for one-scale and two-scale wavy relief are presented. The influence of the microrelief geometric parameters, including the number of scales and the asperity density, on the additional displacements of the rough layer is analyzed.

  13. Periodontal Ligament Entheses and their Adaptive Role in the Context of Dentoalveolar Joint Function

    PubMed Central

    Lin, Jeremy D.; Jang, Andrew T.; Kurylo, Michael P.; Hurng, Jonathan; Yang, Feifei; Yang, Lynn; Pal, Arvin; Chen, Ling; Ho, Sunita P.

    2017-01-01

    Objectives The dynamic bone-periodontal ligament (PDL)-tooth fibrous joint consists of two adaptive functionally graded interfaces (FGI), the PDL-bone and PDL-cementum that respond to mechanical strain transmitted during mastication. In general, from a materials and mechanics perspective, FGI prevent catastrophic failure during prolonged cyclic loading. This review is a discourse of results gathered from literature to illustrate the dynamic adaptive nature of the fibrous joint in response to physiologic and pathologic simulated functions, and experimental tooth movement. Methods Historically, studies have investigated soft to hard tissue transitions through analytical techniques that provided insights into structural, biochemical, and mechanical characterization methods. Experimental approaches included two dimensional to three dimensional advanced in situ imaging and analytical techniques. These techniques allowed mapping and correlation of deformations to physicochemical and mechanobiological changes within volumes of the complex subjected to concentric and eccentric loading regimes respectively. Results Tooth movement is facilitated by mechanobiological activity at the interfaces of the fibrous joint and generates elastic discontinuities at these interfaces in response to eccentric loading. Both concentric and eccentric loads mediated cellular responses to strains, and prompted self-regulating mineral forming and resorbing zones that in turn altered the functional space of the joint. Significance A multiscale biomechanics and mechanobiology approach is important for correlating joint function to tissue-level strain-adaptive properties with overall effects on joint form as related to physiologic and pathologic functions. 
Elucidating the shift in localization of biomolecules specifically at interfaces during development, function, and therapeutic loading of the joint is critical for developing “functional regeneration and adaptation” strategies with an emphasis on restoring physiologic joint function. PMID:28476202

  14. Validity of Particle-Counting Method Using Laser-Light Scattering for Detecting Platelet Aggregation in Diabetic Patients

    NASA Astrophysics Data System (ADS)

    Nakadate, Hiromichi; Sekizuka, Eiichi; Minamitani, Haruyuki

    We aimed to study the validity of a new analytical approach that reflected the phase from platelet activation to the formation of small platelet aggregates. We hoped that this new approach would enable us to use the particle-counting method with laser-light scattering to measure platelet aggregation in healthy controls and in diabetic patients without complications. We measured agonist-induced platelet aggregation for 10 min. Agonist was added to the platelet-rich plasma 1 min after measurement started. We compared the total scattered light intensity from small aggregates over a 10-min period (established analytical approach) and that over a 2-min period from 1 to 3 min after measurement started (new analytical approach). Consequently platelet aggregation in diabetics with HbA1c ≥ 6.5% was significantly greater than in healthy controls by both analytical approaches. However, platelet aggregation in diabetics with HbA1c < 6.5%, i.e. patients in the early stages of diabetes, was significantly greater than in healthy controls only by the new analytical approach, not by the established analytical approach. These results suggest that platelet aggregation as detected by the particle-counting method using laser-light scattering could be applied in clinical examinations by our new analytical approach.

  15. Universal Long Ranged Correlations in Driven Binary Mixtures

    NASA Astrophysics Data System (ADS)

    Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb

    2017-03-01

    When two populations of "particles" move in opposite directions, like oppositely charged colloids under an electric field or intersecting flows of pedestrians, they can move collectively, forming lanes along their direction of motion. The nature of this "laning transition" is still being debated and, in particular, the pair correlation functions, which are the key observables to quantify this phenomenon, have not been characterized yet. Here, we determine the correlations using an analytical approach based on a linearization of the stochastic equations for the density fields, which is valid for dense systems of soft particles. We find that the correlations decay algebraically along the direction of motion, and have a self-similar exponential profile in the transverse direction. Brownian dynamics simulations confirm our theoretical predictions and show that they also hold beyond the validity range of our analytical approach, pointing to a universal behavior.

  16. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
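
    The interleaving of MCMC over some variables with analytic integration over others can be illustrated in miniature on a conjugate toy problem (a normal mean with known variance, not the genealogical likelihood of the paper), where the posterior the sampler should recover is known in closed form:

```python
import math
import random

def log_posterior(mu, data, sigma=1.0, prior_sd=10.0):
    """Normal likelihood with known sigma and a wide normal prior on mu."""
    ll = -0.5 * sum((x - mu) ** 2 for x in data) / sigma ** 2
    return ll - 0.5 * mu ** 2 / prior_sd ** 2

def metropolis(data, n_steps=50_000, step=0.5, seed=1):
    """Random-walk Metropolis sampler over mu; returns post-burn-in samples."""
    rng = random.Random(seed)
    mu, samples = 0.0, []
    lp = log_posterior(mu, data)
    for _ in range(n_steps):
        prop = mu + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples[n_steps // 5:]                  # drop burn-in

data = [2.1, 1.9, 2.3, 2.0, 1.7, 2.2, 2.4, 1.8]
samples = metropolis(data)
post_mean = sum(samples) / len(samples)
# Conjugacy gives the exact posterior mean n*xbar / (n + sigma**2/prior_sd**2)
```

    In the paper's setting the Markov chain runs over genealogies rather than a scalar, and the analytically integrated parameters play the role that conjugacy plays here.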

  17. Bridging the Gap between Human Judgment and Automated Reasoning in Predictive Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Unwin, Stephen D.

    2010-06-07

    Events occur daily that impact the health, security and sustainable growth of our society. If we are to address the challenges that emerge from these events, anticipatory reasoning has to become an everyday activity. Strong advances have been made in using integrated modeling for analysis and decision making. However, a wider impact of predictive analytics is currently hindered by the lack of systematic methods for integrating predictive inferences from computer models with human judgment. In this paper, we present a predictive analytics approach that supports anticipatory analysis and decision-making through a concerted reasoning effort that interleaves human judgment and automated inferences. We describe a systematic methodology for integrating modeling algorithms within a serious gaming environment in which role-playing by human agents provides updates to model nodes and the ensuing model outcomes in turn influence the behavior of the human players. The approach ensures a strong functional partnership between human players and computer models while maintaining a high degree of independence and greatly facilitating the connection between model and game structures.

  18. Multiplexed and Microparticle-based Analyses: Quantitative Tools for the Large-Scale Analysis of Biological Systems

    PubMed Central

    Nolan, John P.; Mandy, Francis

    2008-01-01

    While the term flow cytometry refers to the measurement of cells, the approach of making sensitive multiparameter optical measurements in a flowing sample stream is a very general analytical approach. The past few years have seen an explosion in the application of flow cytometry technology for molecular analysis and measurements using micro-particles as solid supports. While microsphere-based molecular analyses using flow cytometry date back three decades, the need for highly parallel quantitative molecular measurements that has arisen from various genomic and proteomic advances has driven the development in particle encoding technology to enable highly multiplexed assays. Multiplexed particle-based immunoassays are now commonplace, and new assays to study genes, protein function, and molecular assembly are emerging. Numerous efforts are underway to extend the multiplexing capabilities of microparticle-based assays through new approaches to particle encoding and analyte reporting. The impact of these developments will be seen in the basic research and clinical laboratories, as well as in drug development. PMID:16604537

  19. Laser-Induced Breakdown Spectroscopy (LIBS) in a Novel Molten Salt Aerosol System.

    PubMed

    Williams, Ammon N; Phongikaroon, Supathorn

    2017-04-01

    In the pyrochemical separation of used nuclear fuel (UNF), fission product, rare earth, and actinide chlorides accumulate in the molten salt electrolyte over time. Measuring this salt composition in near real-time is advantageous for operational efficiency, material accountability, and nuclear safeguards. Laser-induced breakdown spectroscopy (LIBS) has been proposed and demonstrated as a potential analytical approach for molten LiCl-KCl salts. However, all the studies conducted to date have used a static surface approach, which can lead to issues with splashing, low repeatability, and poor sample homogeneity. In this initial study, a novel molten salt aerosol approach has been developed and explored to measure the composition of the salt via LIBS. The functionality of the system has been demonstrated as well as a basic optimization of the laser energy and nebulizer gas pressure used. Initial results have shown that this molten salt aerosol-LIBS system has great potential as an analytical technique for measuring the molten salt electrolyte used in this UNF reprocessing technology.

  20. Inline roasting hyphenated with gas chromatography-mass spectrometry as an innovative approach for assessment of cocoa fermentation quality and aroma formation potential.

    PubMed

    Van Durme, Jim; Ingels, Isabel; De Winne, Ann

    2016-08-15

    Today, the cocoa industry is in great need of faster and robust analytical techniques to objectively assess incoming cocoa quality. In this work, inline roasting hyphenated with a cooled injection system coupled to a gas chromatograph-mass spectrometer (ILR-CIS-GC-MS) has been explored for the first time to assess fermentation quality and/or overall aroma formation potential of cocoa. This innovative approach resulted in the in-situ formation of relevant cocoa aroma compounds. After comparison with data obtained by headspace solid phase micro extraction (HS-SPME-GC-MS) on conventional roasted cocoa beans, ILR-CIS-GC-MS data on unroasted cocoa beans showed similar formation trends of important cocoa aroma markers as a function of fermentation quality. The latter approach only requires small aliquots of unroasted cocoa beans, can be automated, requires no sample preparation, needs relatively short analytical times (<1 h) and is highly reproducible. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Density functional theory-based simulations of sum frequency generation spectra involving methyl stretching vibrations: effect of the molecular model on the deduced molecular orientation and comparison with an analytical approach.

    PubMed

    Cecchet, F; Lis, D; Caudano, Y; Mani, A A; Peremans, A; Champagne, B; Guthmuller, J

    2012-03-28

    The knowledge of the first hyperpolarizability tensor elements of molecular groups is crucial for a quantitative interpretation of the sum frequency generation (SFG) activity of thin organic films at interfaces. Here, the SFG response of the terminal methyl group of a dodecanethiol (DDT) monolayer has been interpreted on the basis of calculations performed at the density functional theory (DFT) level of approximation. In particular, DFT calculations have been carried out on three classes of models for the aliphatic chains. The first class of models consists of aliphatic chains, containing from 3 to 12 carbon atoms, in which only one methyl group can freely vibrate, while the rest of the chain is frozen by a strong overweight of its C and H atoms. This enables us to localize the probed vibrational modes on the methyl group. In the second class, only one methyl group is frozen, while the entire remaining chain is allowed to vibrate. This enables us to analyse the influence of the aliphatic chain on the methyl stretching vibrations. Finally, the dodecanethiol (DDT) molecule is considered, for which the effects of two dielectrics, i.e. n-hexane and n-dodecane, are investigated. Moreover, DDT calculations are also carried out by using different exchange-correlation (XC) functionals in order to assess the DFT approximations. Using the DFT IR vectors and Raman tensors, the SFG spectrum of DDT has been simulated and the orientation of the methyl group has then been deduced and compared with that obtained using an analytical approach based on a bond additivity model. This analysis shows that when using DFT molecular properties, the predicted orientation of the terminal methyl group tends to converge as a function of the alkyl chain length and that the effects of the chain as well as of the dielectric environment are small. 
Instead, a more significant difference is observed when comparing the DFT-based results with those obtained from the analytical approach, thus indicating the importance of a quantum chemical description of the hyperpolarizability tensor elements of the methyl group. © 2012 IOP Publishing Ltd

  2. OpenChrom: a cross-platform open source software for the mass spectrometric analysis of chromatographic data.

    PubMed

    Wenig, Philip; Odermatt, Juergen

    2010-07-30

    Today, data evaluation has become a bottleneck in chromatographic science. Analytical instruments equipped with automated samplers yield large amounts of measurement data, which needs to be verified and analyzed. Since nearly every GC/MS instrument vendor offers its own data format and software tools, the consequences are problems with data exchange and a lack of comparability between the analytical results. To address this situation a number of either commercial or non-profit software applications have been developed. These applications provide functionalities to import and analyze several data formats but have shortcomings in terms of the transparency of the implemented analytical algorithms and/or are restricted to a specific computer platform. This work describes a native approach to handle chromatographic data files. The approach can be extended in its functionality such as facilities to detect baselines, to detect, integrate and identify peaks and to compare mass spectra, as well as the ability to internationalize the application. Additionally, filters can be applied on the chromatographic data to enhance its quality, for example to remove background and noise. Extended operations like do, undo and redo are supported. OpenChrom is a software application to edit and analyze mass spectrometric chromatographic data. It is extensible in many different ways, depending on the demands of the users or the analytical procedures and algorithms. It offers a customizable graphical user interface. The software is independent of the operating system, due to the fact that the Rich Client Platform is written in Java. OpenChrom is released under the Eclipse Public License 1.0 (EPL). There are no license constraints regarding extensions. They can be published using open source as well as proprietary licenses. OpenChrom is available free of charge at http://www.openchrom.net.

  3. Using constraints and their value for optimization of large ODE systems

    PubMed Central

    Domijan, Mirela; Rand, David A.

    2015-01-01

    We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300

  4. Shear joint capability versus bolt clearance

    NASA Technical Reports Server (NTRS)

    Lee, H. M.

    1992-01-01

    The results of a conservative analysis approach into the determination of shear joint strength capability for typical space-flight hardware as a function of the bolt-hole clearance specified in the design are presented. These joints are composed of high-strength steel fasteners and abutments constructed of aluminum alloys familiar to the aerospace industry. A general analytical expression was first arrived at which relates bolt-hole clearance to the bolt shear load required to place all joint fasteners into a shear transferring position. Extension of this work allowed the analytical development of joint load capability as a function of the number of fasteners, shear strength of the bolt, bolt-hole clearance, and the desired factor of safety. Analysis results clearly indicate that a typical space-flight hardware joint can withstand significant loading when less than ideal bolt-hole clearances are used in the design.

  5. Analytic model of a multi-electron atom

    NASA Astrophysics Data System (ADS)

    Skoromnik, O. D.; Feranchuk, I. D.; Leonau, A. U.; Keitel, C. H.

    2017-12-01

    A fully analytical approximation for the observable characteristics of many-electron atoms is developed via a complete and orthonormal hydrogen-like basis with a single-effective charge parameter for all electrons of a given atom. The basis completeness allows us to employ the secondary-quantized representation for the construction of regular perturbation theory, which includes in a natural way correlation effects, converges fast and enables an effective calculation of the subsequent corrections. The hydrogen-like basis set provides a possibility to perform all summations over intermediate states in closed form, including both the discrete and continuous spectra. This is achieved with the help of the decomposition of the multi-particle Green function in a convolution of single-electronic Coulomb Green functions. We demonstrate that our fully analytical zeroth-order approximation describes the whole spectrum of the system and provides accuracy that is independent of the number of electrons, which is important for applications where the Thomas-Fermi model is still utilized. In addition, already in second-order perturbation theory, our results become comparable with those of a multi-configuration Hartree-Fock approach.

  6. Thrust and torque vector characteristics of axially-symmetric E-sail

    NASA Astrophysics Data System (ADS)

    Bassetto, Marco; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    The Electric Solar Wind Sail is an innovative propulsion system concept that gains propulsive acceleration from the interaction with charged particles released by the Sun. The aim of this paper is to obtain analytical expressions for the thrust and torque vectors of a spinning sail of given shape. Under the only assumption that each tether belongs to a plane containing the spacecraft spin axis, a general analytical relation is found for the thrust and torque vectors as a function of the spacecraft attitude relative to an orbital reference frame. The results are then applied to the noteworthy situation of a Sun-facing sail, that is, when the spacecraft spin axis is aligned with the Sun-spacecraft line, which approximatively coincides with the solar wind direction. In that case, the paper discusses the equilibrium shape of the generic conducting tether as a function of the sail geometry and the spin rate, using both a numerical and an analytical (approximate) approach. As a result, the structural characteristics of the conducting tether are related to the spacecraft geometric parameters.

  7. Statistical Modeling of an Optically Trapped Cilium

    NASA Astrophysics Data System (ADS)

    Flaherty, Justin; Resnick, Andrew

    We explore, analytically and experimentally, the stochastic dynamics of a biologically significant slender microcantilever, the primary cilium, held within an optical trap. Primary cilia are cellular organelles, present on most vertebrate cells, hypothesized to function as a fluid flow sensor. The mechanical properties of a cilium remain incompletely characterized. Optical trapping is an ideal method to probe the mechanical response of a cilium due to the spatial localization and non-contact nature of the applied force. However, analysis of an optically trapped cilium is complicated both by the geometry of a cilium and boundary conditions. Here, we present experimentally measured mean-squared displacement data of trapped cilia in which the trapping force opposes the elastic restoring force of the ciliary axoneme, derive the mean-squared displacement of a trapped cilium analytically using the Langevin approach, and apply the analytical results to the experimental data. We demonstrate that mechanical properties of the cilium can be accurately determined and efficiently extracted from the data using our model. It is hoped that improved measurements will result in deeper understanding of the biological function of cellular flow sensing by this organelle.
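
As a minimal sketch of the Langevin treatment mentioned above, consider an overdamped bead in a harmonic well, for which the stationary-state mean-squared displacement is MSD(t) = (2*kB*T/kappa)*(1 - exp(-t/tau)) with tau = gamma/kappa. The paper's model accounts for the slender-rod geometry and boundary conditions of a real cilium; the point-particle parameters below are assumed purely for illustration.

```python
import math, random

# Point-particle Langevin sketch; kappa lumps trap and axoneme stiffness
# (values are assumed for illustration, not measured cilium parameters)
kB_T = 4.1e-21       # J, thermal energy at room temperature
gamma = 1.0e-7       # N s/m, effective drag coefficient (assumed)
kappa = 1.0e-6       # N/m, effective restoring stiffness (assumed)
tau = gamma / kappa  # relaxation time

def analytic_msd(t):
    """Stationary-state MSD of an overdamped particle in a harmonic well."""
    return 2.0 * kB_T / kappa * (1.0 - math.exp(-t / tau))

# Euler-Maruyama simulation of the same Langevin equation
rng = random.Random(0)
dt = tau / 50.0
n_paths, n_steps = 2000, 200   # each path runs to t = 4*tau
msd_accum = 0.0
for _ in range(n_paths):
    x0 = math.sqrt(kB_T / kappa) * rng.gauss(0.0, 1.0)  # stationary start
    x = x0
    for _ in range(n_steps):
        x += -(kappa / gamma) * x * dt \
             + math.sqrt(2.0 * kB_T / gamma * dt) * rng.gauss(0.0, 1.0)
    msd_accum += (x - x0) ** 2
sim_msd = msd_accum / n_paths
```

The simulated MSD at t = 4*tau agrees with the analytic plateau value to within sampling and discretization error.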

  8. Determination of excitation profile and dielectric function spatial nonuniformity in porous silicon by using WKB approach.

    PubMed

    He, Wei; Yurkevich, Igor V; Canham, Leigh T; Loni, Armando; Kaplan, Andrey

    2014-11-03

    We develop an analytical model based on the WKB approach to evaluate the experimental results of the femtosecond pump-probe measurements of the transmittance and reflectance obtained on thin membranes of porous silicon. The model allows us to retrieve a pump-induced nonuniform complex dielectric function change along the membrane depth. We show that the model fitting to the experimental data requires a minimal number of fitting parameters while still complying with the restriction imposed by the Kramers-Kronig relation. The developed model has a broad range of applications for experimental data analysis and practical implementation in the design of devices involving a spatially nonuniform dielectric function, such as in biosensing, wave-guiding, solar energy harvesting, photonics and electro-optical devices.

  9. Role of man in flight experiment payloads, phase 1. [Spacelab mission planning

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Kirkpatrick, M.

    1974-01-01

    The identification of required data for studies of Spacelab experiment functional allocation, the development of an approach to collecting these data from the payload community, and the specification of analytical methods necessary to quantitatively determine the role of man in specific Spacelab experiments are presented. A generalized Spacelab experiment operation sequence was developed, and the parameters necessary to describe each single function in the sequence were identified. A set of functional descriptor worksheets was also drawn up. The methodological approach to defining the role of man was framed as a series of trade studies using a digital simulation technique. The tradeoff variables identified include scientific crew size, skill mix, and location. An existing digital simulation program suitable for the required analyses was identified and obtained.

  10. The Discounted Method and Equivalence of Average Criteria for Risk-Sensitive Markov Decision Processes on Borel Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m

    2010-04-15

    This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.

  11. More than a Lingua Franca: Functions of English in a Globalised Educational Language Policy

    ERIC Educational Resources Information Center

    Hult, Francis M.

    2017-01-01

    The Swedish educational policy for upper secondary English, which took effect in 2011 and adopts a globalised perspective on language, is explored with respect to how skills and awareness related to local, national, and international roles of English are represented in policy documents. A discourse analytic approach to language policy is used to…

  12. Psychosocial Dimensions of Exceptional Longevity: A Qualitative Exploration of Centenarians' Experiences, Personality, and Life Strategies

    ERIC Educational Resources Information Center

    Darviri, Christina; Demakakos, Panayotes; Tigani, Xanthi; Charizani, Fotini; Tsiou, Chrysoula; Tsagkari, Christina; Chliaoutakis, Joannes; Monos, Dimitrios

    2009-01-01

    This qualitative study provides a comprehensive account of the social and life experiences and strategies and personality attributes that characterize exceptional longevity (living to 100 or over). It is based on nine semi-structured interviews of relatively healthy and functional Greek centenarians of both sexes. The analytic approach was…

  13. Promoting Appropriate Behavior in Daily Life Contexts Using Functional Analytic Psychotherapy in Early-Adolescent Children

    ERIC Educational Resources Information Center

    Cattivelli, Roberto; Tirelli, Valentina; Berardo, Federica; Perini, Silvia

    2012-01-01

    The topics of social skills development in adolescents and ways to promote this process have been amply investigated in both the clinical and educational literature. Yet, although this line of research has led to the development of many different approaches for this population, most have shown little effectiveness in promoting further social…

  14. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
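
The second (normal-mode-only) approach above can be sketched as novelty detection with radial basis functions: score each reading by Gaussian kernels centered on normal-mode training data, then flag readings whose score falls below anything seen in training. The sensor values and kernel width below are invented for illustration; this is a sketch of the statistical idea, not the paper's network.

```python
import math, random

def rbf_score(x, centers, width):
    """Mean of Gaussian radial basis functions centered on training points."""
    return sum(math.exp(-((x - c) ** 2) / (2.0 * width ** 2))
               for c in centers) / len(centers)

# Train on normal-mode sensor readings only (values invented for illustration)
rng = random.Random(0)
normal_data = [rng.gauss(10.0, 0.5) for _ in range(200)]
width = 0.5
# Every training point scores at least this; lower scores are "unfamiliar"
threshold = min(rbf_score(x, normal_data, width) for x in normal_data)

def is_fault(reading):
    return rbf_score(reading, normal_data, width) < threshold
```

A reading well outside the normal operating band scores far below the threshold and is flagged, while readings inside the band are not.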

  15. Implementation of Insight Responsibilities in Process Engineering

    NASA Technical Reports Server (NTRS)

    Osborne, Deborah M.

    1997-01-01

    This report describes an approach for evaluating flight readiness (COFR) and contractor performance evaluation (award fee) as part of the insight role of NASA Process Engineering at Kennedy Space Center. Several evaluation methods are presented, including systems engineering evaluations and use of systems performance data. The transition from an oversight function to the insight function is described. The types of analytical tools appropriate for achieving the flight readiness and contractor performance evaluation goals are described and examples are provided. Special emphasis is placed upon short and small run statistical quality control techniques. Training requirements for system engineers are delineated. The approach described herein would be equally appropriate in other directorates at Kennedy Space Center.

  16. Stochastic modelling of the hydrologic operation of rainwater harvesting systems

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Guo, Yiping

    2018-07-01

    Rainwater harvesting (RWH) systems are an effective low impact development practice that provides both water supply and runoff reduction benefits. A stochastic modelling approach is proposed in this paper to quantify the water supply reliability and stormwater capture efficiency of RWH systems. The input rainfall series is represented as a marked Poisson process and two typical water use patterns are analytically described. The stochastic mass balance equation is solved analytically, and based on this, explicit expressions relating system performance to system characteristics are derived. The performances of a wide variety of RWH systems located in five representative climatic regions of the United States are examined using the newly derived analytical equations. Close agreements between analytical and continuous simulation results are shown for all the compared cases. In addition, an analytical equation is obtained expressing the required storage size as a function of the desired water supply reliability, average water use rate, as well as rainfall and catchment characteristics. The equations developed herein constitute a convenient and effective tool for sizing RWH systems and evaluating their performances.
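
A continuous-simulation counterpart to the analytical model above can be sketched as a daily mass balance of a tank. The yield-after-spillage operating rule, tank size, demand, and rainfall statistics below are assumptions for illustration, not the paper's parameterization.

```python
import random

def simulate_rwh(rain_mm, roof_m2=100.0, tank_m3=5.0,
                 demand_m3=0.2, runoff_coeff=0.9):
    """Daily mass balance of a rainwater tank (yield-after-spillage assumed)."""
    storage = 0.0
    supplied = captured = total_runoff = 0.0
    for r in rain_mm:
        runoff = runoff_coeff * r / 1000.0 * roof_m2   # m3 of roof runoff today
        total_runoff += runoff
        inflow = min(runoff, tank_m3 - storage)        # spill once tank is full
        captured += inflow
        storage += inflow
        use = min(demand_m3, storage)                  # then draw today's demand
        supplied += use
        storage -= use
    reliability = supplied / (demand_m3 * len(rain_mm))
    capture_eff = captured / total_runoff if total_runoff else 1.0
    return reliability, capture_eff

# Synthetic rainfall: wet day with probability 0.3, exponential depth (mean 8 mm)
rng = random.Random(42)
rain = [rng.expovariate(1 / 8.0) if rng.random() < 0.3 else 0.0
        for _ in range(365)]
reliability, capture = simulate_rwh(rain)
```

Both performance metrics are fractions by construction; runs like this are the "continuous simulation results" against which analytical expressions are typically checked.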

  17. Fall Velocities of Hydrometeors in the Atmosphere: Refinements to a Continuous Analytical Power Law.

    NASA Astrophysics Data System (ADS)

    Khvorostyanov, Vitaly I.; Curry, Judith A.

    2005-12-01

    This paper extends the previous research of the authors on the unified representation of fall velocities for both liquid and crystalline particles as a power law over the entire size range of hydrometeors observed in the atmosphere. The power-law coefficients are determined as continuous analytical functions of the Best or Reynolds number or of the particle size. Here, analytical expressions are formulated for the turbulent corrections to the Reynolds number and to the power-law coefficients that describe the continuous transition from the laminar to the turbulent flow around a falling particle. A simple analytical expression is found for the correction of fall velocities for temperature and pressure. These expressions and the resulting fall velocities are compared with observations and other calculations for a range of ice crystal habits and sizes. This approach provides a continuous analytical power-law description of the terminal velocities of liquid and crystalline hydrometeors with sufficiently high accuracy and can be directly used in bin-resolving models or incorporated into parameterizations for cloud- and large-scale models and remote sensing techniques.
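
The continuous power-law idea can be illustrated with the standard Best-number-to-Reynolds-number relation from boundary-layer drag theory. The constants delta0 = 9.06 and C0 = 0.292 are the values used in the authors' earlier continuous power-law work and are assumed here; the turbulent and temperature-pressure corrections refined in this paper are omitted from the sketch.

```python
import math

# Boundary-layer drag constants (assumed, from earlier Khvorostyanov-Curry work)
DELTA0, C0 = 9.06, 0.292
C1 = 4.0 / (DELTA0 ** 2 * math.sqrt(C0))

def reynolds_from_best(X):
    """Continuous Re(X): recovers Re = X/24 (Stokes regime) for small X
    and Re = sqrt(X/C0) (turbulent drag) for large X."""
    return (DELTA0 ** 2 / 4.0) * (math.sqrt(1.0 + C1 * math.sqrt(X)) - 1.0) ** 2

def terminal_velocity(X, diameter_m, nu=1.5e-5):
    """Fall speed from Re: v_t = nu * Re / D (nu: kinematic viscosity of air)."""
    return nu * reynolds_from_best(X) / diameter_m
```

Because Re(X) is a single smooth function, the effective power-law coefficients obtained from it vary continuously from the laminar to the turbulent regime, which is the behavior the paper's refinements build on.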

  18. One-pot tandem Ugi-4CR/S(N)Ar approach to highly functionalized quino[2,3-b][1,5]benzoxazepines.

    PubMed

    Ghandi, Mehdi; Zarezadeh, Nahid; Abbasi, Alireza

    2016-05-01

    We have developed a convenient and facile method for the synthesis of diverse functionalized quino[2,3-b][1,5]benzoxazepines. These new compounds were synthesized through a one-pot sequential Ugi-4CR/base-free intramolecular aromatic nucleophilic substitution (S(N)Ar) reaction in moderate to good yields from readily available starting materials. The structures of the products were confirmed by analytical data and X-ray crystallography.

  19. Analytical approximations to seawater optical phase functions of scattering

    NASA Astrophysics Data System (ADS)

    Haltrin, Vladimir I.

    2004-11-01

    This paper proposes a number of analytical approximations to the classic and recently measured seawater light scattering phase functions. The three types of analytical phase functions are derived: individual representations for 15 Petzold, 41 Mankovsky, and 91 Gulf of Mexico phase functions; collective fits to Petzold phase functions; and analytical representations that take into account dependencies between inherent optical properties of seawater. The proposed phase functions may be used for problems of radiative transfer, remote sensing, visibility and image propagation in natural waters of various turbidity.
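
For context, the simplest widely used analytical scattering phase function is the one-parameter Henyey-Greenstein form; it is not one of the fits derived in this paper, but it shows the kind of closed-form representation involved, normalized to unity over the sphere.

```python
import math

def henyey_greenstein(cos_theta, g):
    """One-parameter HG phase function, normalized to 1 over 4*pi steradians."""
    return (1.0 - g * g) / (4.0 * math.pi *
                            (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def sphere_integral(g, n=200000):
    """Midpoint-rule integral of the phase function over the unit sphere."""
    dmu = 2.0 / n
    mu = -1.0 + dmu / 2.0
    total = 0.0
    for _ in range(n):
        total += 2.0 * math.pi * henyey_greenstein(mu, g) * dmu
        mu += dmu
    return total
```

The asymmetry parameter g controls the forward peak; natural waters are strongly forward-scattering (g close to 1), which is why more elaborate fits such as those in this paper are needed for accuracy.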

  20. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.

    2011-09-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space was demonstrated. Also, in FY2011, PNNL continued to develop an analytical model. Such efforts included the addition of six more non-fissile absorbers in the analytical shielding function and the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as the pure empirical approach. In addition, total Pu was determined with much better accuracy by the hybrid approach than by the pure analytical approach. In FY2012, PNNL will continue efforts to optimize its empirical model and minimize its reliance on calibration data. In addition, PNNL will continue to develop an analytical model, considering effects such as neutron scattering in the fuel and cladding, as well as neutrons streaming through gaps between fuel pins in the fuel assembly.
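
The SVD-based empirical calibration described above can be sketched on synthetic data: form basis vectors from the SVD of calibration responses, then regress the quantity of interest on the basis-projection coefficients of each response. All shapes, weights, and the linear mass model below are invented stand-ins, not PNNL's data or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration library: each row is the time response of a
# well-characterized assembly (shapes and weights are illustrative only)
n_channels, n_cal = 400, 12
t = np.linspace(0.0, 1.0, n_channels)
true_shapes = np.stack([np.exp(-t / s) for s in (0.05, 0.15, 0.4)])
cal_weights = rng.uniform(0.2, 1.0, size=(n_cal, 3))
responses = cal_weights @ true_shapes + 0.001 * rng.standard_normal((n_cal, n_channels))
fissile_mass = cal_weights @ np.array([2.0, 1.0, 0.5])  # assumed linear mass model

# Empirical basis vectors from the SVD of the calibration responses
U, s, Vt = np.linalg.svd(responses, full_matrices=False)
basis = Vt[:3]                               # keep the dominant modes

# Calibrate: regress mass on the basis-projection coefficients
coeffs = responses @ basis.T                 # (n_cal, 3)
A = np.column_stack([coeffs, np.ones(n_cal)])
fit, *_ = np.linalg.lstsq(A, fissile_mass, rcond=None)

def assay(measured_response):
    c = measured_response @ basis.T
    return float(np.append(c, 1.0) @ fit)

# Predict the mass of a new, unseen assembly
w_new = np.array([0.6, 0.3, 0.8])
new_response = w_new @ true_shapes + 0.001 * rng.standard_normal(n_channels)
predicted = assay(new_response)
true_mass = float(w_new @ np.array([2.0, 1.0, 0.5]))
```

Because the synthetic responses are (noisy) linear combinations of a few modes, a three-vector basis plus linear regression recovers the mass of a held-out assembly accurately; real used-fuel responses are far less ideal, which is what drives the hybrid approach above.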

  1. Extrapolation of scattering data to the negative-energy region. II. Applicability of effective range functions within an exactly solvable model

    DOE PAGES

    Blokhintsev, L. D.; Kadyrov, A. S.; Mukhamedzhanov, A. M.; ...

    2018-02-05

    A problem of analytical continuation of scattering data to the negative-energy region to obtain information about bound states is discussed within an exactly solvable potential model. This work is a continuation of the previous one by the same authors [L. D. Blokhintsev et al., Phys. Rev. C 95, 044618 (2017)]. The goal of this paper is to determine the most effective way of analytic continuation for different systems. The d + α and α + 12C systems are considered and, for comparison, an effective-range function approach and a recently suggested Δ method [O. L. Ramírez Suárez and J.-M. Sparenberg, Phys. Rev. C 96, 034601 (2017)] are applied. We conclude that the Δ method is more effective for heavier systems with large values of the Coulomb parameter, whereas for light systems with small values of the Coulomb parameter the effective-range function method might be preferable.

  2. Extrapolation of scattering data to the negative-energy region. II. Applicability of effective range functions within an exactly solvable model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blokhintsev, L. D.; Kadyrov, A. S.; Mukhamedzhanov, A. M.

    A problem of analytical continuation of scattering data to the negative-energy region to obtain information about bound states is discussed within an exactly solvable potential model. This work is a continuation of the previous one by the same authors [L. D. Blokhintsev et al., Phys. Rev. C 95, 044618 (2017)]. The goal of this paper is to determine the most effective way of analytic continuation for different systems. The d + α and α + 12C systems are considered and, for comparison, an effective-range function approach and a recently suggested Δ method [O. L. Ramírez Suárez and J.-M. Sparenberg, Phys. Rev. C 96, 034601 (2017)] are applied. We conclude that the Δ method is more effective for heavier systems with large values of the Coulomb parameter, whereas for light systems with small values of the Coulomb parameter the effective-range function method might be preferable.

  3. Abundance of common species, not species richness, drives delivery of a real-world ecosystem service.

    PubMed

    Winfree, Rachael; Fox, Jeremy W; Williams, Neal M; Reilly, James R; Cariveau, Daniel P

    2015-07-01

    Biodiversity-ecosystem functioning experiments have established that species richness and composition are both important determinants of ecosystem function in an experimental context. Determining whether this result holds for real-world ecosystem services has remained elusive, however, largely due to the lack of analytical methods appropriate for large-scale, associational data. Here, we use a novel analytical approach, the Price equation, to partition the contribution to ecosystem services made by species richness, composition and abundance in four large-scale data sets on crop pollination by native bees. We found that abundance fluctuations of dominant species drove ecosystem service delivery, whereas richness changes were relatively unimportant because they primarily involved rare species that contributed little to function. Thus, the mechanism behind our results was the skewed species-abundance distribution. Our finding that a few common species, not species richness, drive ecosystem service delivery could have broad generality given the ubiquity of skewed species-abundance distributions in nature. © 2015 John Wiley & Sons Ltd/CNRS.
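
The mechanism identified above, a skewed species-abundance distribution concentrating function in a few dominant species, can be illustrated with synthetic data. The lognormal abundances and similar per-capita contributions below are assumptions for illustration, not the paper's bee data or its Price equation partition.

```python
import random

rng = random.Random(7)

# Skewed (lognormal) species-abundance distribution: 100 species
abundances = sorted((rng.lognormvariate(0.0, 2.0) for _ in range(100)),
                    reverse=True)
# Per-capita contributions are similar across species, so abundance,
# not identity, drives delivered function
per_capita = [rng.uniform(0.5, 1.5) for _ in abundances]
contributions = [a * p for a, p in zip(abundances, per_capita)]
total = sum(contributions)

# Share of total function delivered by the 10 most abundant species
top10_share = sum(contributions[:10]) / total
```

Under such a distribution, the most abundant tenth of the species delivers the majority of the function, so losing or gaining rare species barely changes the total, which mirrors the paper's finding.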

  4. Finite-temperature dynamics of the Mott insulating Hubbard chain

    NASA Astrophysics Data System (ADS)

    Nocera, Alberto; Essler, Fabian H. L.; Feiguin, Adrian E.

    2018-01-01

    We study the dynamical response of the half-filled one-dimensional Hubbard model for a range of interaction strengths U and temperatures T by a combination of numerical and analytical techniques. Using time-dependent density matrix renormalization group computations we find that the single-particle spectral function undergoes a crossover to a spin-incoherent Luttinger liquid regime at temperatures T ~ J = 4t^2/U for sufficiently large U > 4t. At smaller values of U and elevated temperatures the spectral function is found to exhibit two thermally broadened bands of excitations, reminiscent of what is found in the Hubbard-I approximation. The dynamical density-density response function is shown to exhibit a finite-temperature resonance at low frequencies inside the Mott gap, with a physical origin similar to the Villain mode in gapped quantum spin chains. We complement our numerical computations by developing an analytic strong-coupling approach to the low-temperature dynamics in the spin-incoherent regime.

  5. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power-series approach, accurate descriptions of the first higher-order (LP11) mode of graded-index fibers with three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, together with their approximate analytic formulations. We show that, whereas two and three Chebyshev points in the LP11 mode approximation already give fairly accurate results, values calculated with four Chebyshev points match the available exact numerical results excellently.
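
    The rapid convergence with the number of Chebyshev points can be illustrated generically. The sketch below interpolates an invented smooth, LP11-like radial profile (not the paper's actual mode formulation) at deg + 1 Chebyshev points and tracks the maximum error as points are added.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Invented mode-like trial profile: vanishes on axis, decays outward
def field(r):
    return r * np.exp(-r**2)

r = np.linspace(0.0, 3.0, 200)
errors = {}
for deg in (1, 2, 3, 6):          # deg + 1 Chebyshev points
    approx = Chebyshev.interpolate(field, deg, domain=[0.0, 3.0])
    errors[deg] = float(np.max(np.abs(approx(r) - field(r))))
```

    For a smooth profile like this, the maximum error drops rapidly between two points (deg 1) and four points (deg 3), mirroring the paper's observation that a handful of Chebyshev points suffices.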

  6. Exascale computing and big data

    DOE PAGES

    Reed, Daniel A.; Dongarra, Jack

    2015-06-25

    Scientific discovery and engineering innovation require unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  7. Interaction of a conductive crack and of an electrode at a piezoelectric bimaterial interface

    NASA Astrophysics Data System (ADS)

    Onopriienko, Oleg; Loboda, Volodymyr; Sheveleva, Alla; Lapusta, Yuri

    2018-06-01

    The interaction of a conductive crack and an electrode at a piezoelectric bimaterial interface is studied. The bimaterial is subjected to an in-plane electrical field parallel to the interface and an anti-plane mechanical loading. The problem is formulated and reduced, via the application of sectionally analytic vector functions, to a combined Dirichlet-Riemann boundary value problem. Simple analytical expressions for the stress, the electric field, and their intensity factors, as well as for the crack faces' displacement jump, are derived. Our numerical results illustrate the proposed approach and allow us to draw some conclusions on the crack-electrode interaction.

  8. The Potential-Well Distortion Effect and Coherent Instabilities of Electron Bunches in Storage Rings

    NASA Astrophysics Data System (ADS)

    Korchuganov, V. N.; Smygacheva, A. S.; Fomin, E. A.

    2018-05-01

    The effect of electromagnetic interaction between electron bunches and the vacuum chamber of a storage ring on the longitudinal motion of bunches is studied. Specifically, the potential-well distortion effect and the so-called coherent instabilities of coupled bunches are considered. An approximate analytical solution for the frequencies of incoherent oscillations of bunches distributed arbitrarily within the ring is obtained for a distorted potential well. A new approach to determining frequencies of coherent oscillations and an approximate analytical relation for estimating the stability of a system of bunches as a function of their distribution in the accelerator orbit are presented.

  10. Ab initio electronic transport and thermoelectric properties of solids from full and range-separated hybrid functionals

    NASA Astrophysics Data System (ADS)

    Sansone, Giuseppe; Ferretti, Andrea; Maschio, Lorenzo

    2017-09-01

    Within the semiclassical Boltzmann transport theory in the constant relaxation-time approximation, we perform an ab initio study of the transport properties of selected systems, including crystalline solids and nanostructures. A local (Gaussian) basis set is adopted and exploited to analytically evaluate band velocities as well as to access full and range-separated hybrid functionals (such as B3LYP, PBE0, or HSE06) at a moderate computational cost. As a consequence of the analytical derivative, our approach is computationally efficient and does not suffer from problems related to band crossings. We investigate and compare the performance of a variety of hybrid functionals in evaluating Boltzmann conductivity. Demonstrative examples include silicon and aluminum bulk crystals as well as two thermoelectric materials (CoSb3, Bi2Te3). We observe that hybrid functionals, besides providing more realistic band gaps as expected, lead to larger bandwidths and hence allow for a better estimate of transport properties, also in metallic systems. As a nanostructure prototype, we also investigate conductivity in boron-nitride (BN) substituted graphene, in which graphene nanoribbons (nanoroads) alternate with BN ones.

  11. Label-free functional nucleic acid sensors for detecting target agents

    DOEpatents

    Lu, Yi; Xiang, Yu

    2015-01-13

    A general methodology to design label-free fluorescent functional nucleic acid sensors using a vacant site approach and an abasic site approach is described. In one example, a method is described for designing label-free fluorescent functional nucleic acid sensors (e.g., those that include a DNAzyme, aptamer or aptazyme) that have a tunable dynamic range, through the introduction of an abasic site (e.g., dSpacer) or a vacant site into the functional nucleic acids. Also provided is a general method for designing label-free fluorescent aptamer sensors based on the regulation of malachite green (MG) fluorescence. A general method for designing label-free fluorescent catalytic and molecular beacons (CAMBs) is also provided. The methods demonstrated here can be used to design many other label-free fluorescent sensors to detect a wide range of analytes. Sensors and methods of using the disclosed sensors are also provided.

  12. Horizon-absorbed energy flux in circularized, nonspinning black-hole binaries, and its effective-one-body representation

    NASA Astrophysics Data System (ADS)

    Nagar, Alessandro; Akcay, Sarp

    2012-02-01

    We propose, within the effective-one-body approach, a new, resummed analytical representation of the gravitational-wave energy flux absorbed by a system of two circularized (nonspinning) black holes. This expression is such that it is well-behaved in the strong-field, fast-motion regime, notably up to the effective-one-body-defined last unstable orbit. Building conceptually upon the procedure adopted to resum the multipolar asymptotic energy flux, we introduce a multiplicative decomposition of the multipolar absorbed flux made by three factors: (i) the leading-order contribution, (ii) an “effective source” and (iii) a new residual amplitude correction (ρ̃^H_{ℓm})^{2ℓ}. In the test-mass limit, we use a frequency-domain perturbative approach to accurately compute numerically the horizon-absorbed fluxes along a sequence of stable and unstable circular orbits, and we extract from them the functions ρ̃^H_{ℓm}. These quantities are then fitted via rational functions. The resulting analytically represented test-mass knowledge is then suitably hybridized with lower-order analytical information that is valid for any mass ratio. This yields a resummed representation of the absorbed flux for a generic, circularized, nonspinning black-hole binary. Our result adds new information to the state-of-the-art calculation of the absorbed flux at fractional 5 post-Newtonian order [S. Taylor and E. Poisson, Phys. Rev. D 78, 084016 (2008)], which is recovered in the weak-field limit approximation by construction.

  13. Study and characterization of a MEMS micromirror device

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2004-08-01

    In this paper, advances in our study and characterization of a MEMS micromirror device are presented. The micromirror device, of 510 μm characteristic length, operates in a dynamic mode with a maximum displacement on the order of 10 μm along its principal optical axis and oscillation frequencies of up to 1.3 kHz. Developments are carried out by analytical, computational, and experimental methods. Analytical and computational nonlinear geometrical models are developed in order to determine the optimal loading-displacement operational characteristics of the micromirror. Due to the operational mode of the micromirror, the experimental characterization of its loading-displacement transfer function requires utilization of advanced optical metrology methods. Optoelectronic holography (OEH) methodologies based on multiple wavelengths that we are developing to perform such characterization are described. It is shown that the combined analytical, computational, and experimental approach is effective in our developments.

  14. SU-C-204-01: A Fast Analytical Approach for Prompt Gamma and PET Predictions in a TPS for Proton Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, K; Herzog, M; Landry, G

    2015-06-15

    Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms along with patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need of full MC simulation.
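
    The core filter-function idea can be sketched as a one-dimensional convolution. The depth dose curve and the per-element kernel below are invented placeholders, not the paper's fitted filters; the point is only the mechanics of convolving a dose profile with a filter to obtain an emission profile.

```python
import numpy as np

z = np.linspace(0.0, 150.0, 301)                  # depth grid, mm (0.5 mm spacing)
# mock Bragg-like depth dose: entrance plateau plus a peak near 100 mm
dose = np.exp(-((z - 100.0) / 12.0) ** 2) + 0.3 * np.exp(-z / 80.0)

# mock filter kernel for one element, on the same 0.5 mm spacing
kernel_z = np.linspace(-20.0, 20.0, 81)
kernel = np.exp(-kernel_z**2 / 50.0)
kernel /= kernel.sum()                            # normalize to preserve total yield

# predicted emission profile = dose profile convolved with the filter
gamma_profile = np.convolve(dose, kernel, mode="same")
```

    With a normalized, symmetric kernel the predicted profile is a smoothed copy of the dose curve whose peak stays close to the Bragg peak, which is why range shifts in the dose translate directly into shifts of the predicted emission profile.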

  15. Dual metal gate tunneling field effect transistors based on MOSFETs: A 2-D analytical approach

    NASA Astrophysics Data System (ADS)

    Ramezani, Zeinab; Orouji, Ali A.

    2018-01-01

    A novel 2-D analytical drain current model of Dual Metal Gate Tunnel Field Effect Transistors based on MOSFETs (DMG-TFET) is presented in this paper. The proposed tunneling FET is derived from a MOSFET structure by employing an additional electrode in the source region with an appropriate work function to induce holes in the N+ source region, thereby converting it into a P+ source region. The electric field is derived and utilized to extract an expression for the drain current by analytically integrating the band-to-band tunneling generation rate over the tunneling region, based on the potential profile obtained by solving Poisson's equation. Through this model, the effects of the thin-film thickness and gate voltage on the potential and the electric field, and the effect of the thin-film thickness on the tunneling current, can be studied. To validate the model, the analytical results are compared with SILVACO ATLAS device simulations, and good agreement is found.
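
    The final modeling step described above can be sketched numerically: a Kane-type band-to-band tunneling generation rate G = A·E²·exp(−B/E) is integrated over the tunneling region. The prefactor, critical field, and field profile below are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

A_KANE = 4.0e14      # generation prefactor, placeholder value
B_KANE = 1.9e7       # critical field, V/cm, placeholder value
Q = 1.602e-19        # elementary charge, C

x = np.linspace(0.0, 20e-7, 200)          # tunneling region depth, cm
E = 1.5e6 * np.exp(-x / 10e-7)            # mock decaying field profile, V/cm

# Kane-type band-to-band generation rate along the tunneling region
G = A_KANE * E**2 * np.exp(-B_KANE / E)

# trapezoidal integration of q*G over the region -> a tunneling-current scale
J = Q * float(np.sum(0.5 * (G[1:] + G[:-1]) * np.diff(x)))
```

    Because G depends exponentially on the field, the generation rate is sharply peaked where the field is strongest (the tunneling junction), which is what makes the current so sensitive to the potential profile extracted from Poisson's equation.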

  16. An Analysis of Machine- and Human-Analytics in Classification.

    PubMed

    Tam, Gary K L; Kothari, Vivek; Chen, Min

    2017-01-01

    In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.

  17. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
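
    The paper's central observation, that the cosine-similarity statistics depend on the noise level and not on the particular shape, is easy to check by Monte Carlo. The sketch below (a simulation, not the paper's closed-form expression) compares two very different shapes normalized to equal per-sample power.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_cosine(shape, sigma, trials=2000):
    """Mean cosine similarity between a shape and its noisy copies."""
    sims = []
    for _ in range(trials):
        noisy = shape + rng.normal(0.0, sigma, size=shape.size)
        sims.append(shape @ noisy / (np.linalg.norm(shape) * np.linalg.norm(noisy)))
    return float(np.mean(sims))

t = np.linspace(0.0, 2.0 * np.pi, 128)
shape_a = np.sin(t)                       # smooth shape
shape_b = np.sign(np.sin(3.0 * t))        # very different, square-wave-like shape
# normalize both to unit per-sample power so only the noise level differs
shape_a *= np.sqrt(shape_a.size) / np.linalg.norm(shape_a)
shape_b *= np.sqrt(shape_b.size) / np.linalg.norm(shape_b)

low_a = mean_cosine(shape_a, 0.2)
low_b = mean_cosine(shape_b, 0.2)
high_a = mean_cosine(shape_a, 1.0)
```

    The two shapes give essentially identical similarity statistics at a given noise level, while raising the noise level lowers the similarity markedly; this is exactly what makes a noise-level-dependent (adaptive) threshold possible.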

  18. A novel approach to piecewise analytic agricultural machinery path reconstruction

    NASA Astrophysics Data System (ADS)

    Wörz, Sascha; Mederle, Michael; Heizinger, Valentin; Bernhardt, Heinz

    2017-12-01

    Before machinery operation in fields can be analysed, one has to cope with the problem that the GPS signals of receivers located on the machines contain measurement noise and are time-discrete, and that the underlying physical system describing the positions, axial and absolute velocities, angular rates and angular orientation of the operating machines during the whole working time is unknown. This research work presents a new three-dimensional mathematical approach using kinematic relations based on control variables such as Euler angular velocities and angles, and a discrete target control problem in which the state control function is given by the sum of squared residuals involving the state and control variables, to obtain such a physical system. This yields a noise-free and piecewise analytic representation of the positions, velocities, angular rates and angular orientation. It can be used for a further detailed study and analysis of the problem of why agricultural vehicles operate in practice as they do.

  19. Behavior analysis and social constructionism: Some points of contact and departure

    PubMed Central

    Roche, Bryan; Barnes-Holmes, Dermot

    2003-01-01

    Social constructionists occasionally single out behavior analysis as the field of psychology that most closely resembles the natural sciences in its commitment to empiricism, and accuse it of suffering from many of the limitations to science identified by the postmodernist movement (e.g., K. J. Gergen, 1985a; Soyland, 1994). Indeed, behavior analysis is a natural science in many respects. However, it also shares with social constructionism important epistemological features such as a rejection of mentalism, a functional-analytic approach to language, the use of interpretive methodologies, and a reflexive stance on analysis. The current paper outlines briefly the key tenets of the behavior-analytic and social constructionist perspectives before examining a number of commonalities between these approaches. The paper aims to show that far from being a nemesis to social constructionism, behavior analysis may in fact be its close ally. PMID:22478403

  20. An analytic approach to optimize tidal turbine fields

    NASA Astrophysics Data System (ADS)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to gain electrical energy. Since the available power of a hydrokinetic turbine is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is made to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
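
    A useful baseline for this disk-actuator picture is the unbounded-flow limit (no channel confinement, no free-surface effect), where the same momentum and energy balance reduces to the classic Betz result Cp(a) = 4a(1−a)², maximal at axial induction a = 1/3 with Cp = 16/27. The channel model above generalizes this limit; the sketch below only verifies the baseline.

```python
# Betz power coefficient of an unbounded actuator disk as a function of the
# axial induction factor a (fraction by which the disk slows the flow).
def cp(a):
    return 4.0 * a * (1.0 - a) ** 2

# brute-force search over a fine grid for the optimum
best_a = max((i / 1000.0 for i in range(1, 1000)), key=cp)
best_cp = cp(best_a)
```

    The grid search recovers a ≈ 1/3 and Cp ≈ 16/27 ≈ 0.593, the well-known upper bound that confinement and bypass effects modify in the open-channel case.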

  1. Mobility spectrum analytical approach for intrinsic band picture of Ba(FeAs)2

    NASA Astrophysics Data System (ADS)

    Huynh, K. K.; Tanabe, Y.; Urata, T.; Heguri, S.; Tanigaki, K.; Kida, T.; Hagiwara, M.

    2014-09-01

    Unconventional high temperature superconductivity as well as three-dimensional bulk Dirac cone quantum states arising from the unique d-orbital topology have comprised an intriguing research area in physics. Here we apply a special analytical approach using a mobility spectrum, in which the carrier number is conveniently described as a function of mobility without any hypothesis on either the types or the numbers of carriers, for the interpretation of the longitudinal and transverse electric transport of high quality single crystal Ba(FeAs)2 in a wide range of magnetic fields. We show that the majority carriers are accommodated in large parabolic hole and electron pockets with very different topology as well as remarkably different mobility spectra, whereas the minority carriers reside in Dirac quantum states with mobilities as high as 70,000 cm^2 (V s)^-1. The deduced mobility spectra are discussed and compared to the reported sophisticated first-principles band calculations.
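
    The forward model underlying mobility-spectrum analysis can be sketched in a few lines: each carrier contributes a Lorentzian-in-field term to the longitudinal conductivity, so high-mobility carriers are quenched first as the field grows. The conductivities and mobilities below are invented for illustration; only the Dirac-like mobility scale (7 m² V⁻¹ s⁻¹, i.e. the 70,000 cm² (V s)⁻¹ quoted above) is taken from the abstract.

```python
def sigma_xx(B, carriers):
    """Longitudinal conductivity at field B (tesla).

    carriers: iterable of (zero-field conductivity, mobility in m^2/(V s));
    each term follows the standard multi-carrier form s0 / (1 + (mu*B)^2).
    """
    return sum(s0 / (1.0 + (mu * B) ** 2) for s0, mu in carriers)

# two heavy parabolic bands plus one minority Dirac-like carrier
carriers = [(100.0, 0.05), (80.0, 0.07), (1.0, 7.0)]
low_b = sigma_xx(0.0, carriers)
high_b = sigma_xx(10.0, carriers)
```

    Fitting this superposition to sigma_xx(B) and the corresponding Hall term over a wide field range is what lets the method resolve a carrier-density spectrum over mobility without presupposing how many carrier species exist.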

  3. An analytically solvable three-body break-up model problem in hyperspherical coordinates

    NASA Astrophysics Data System (ADS)

    Ancarani, L. U.; Gasaneo, G.; Mitnik, D. M.

    2012-10-01

    An analytically solvable S-wave model for three-particle break-up processes is presented. The scattering process is represented by a non-homogeneous Coulombic Schrödinger equation where the driven term is given by a Coulomb-like interaction multiplied by the product of a continuum wave function and a bound state in the particles' coordinates. The closed-form solution is derived in hyperspherical coordinates, leading to an analytic expression for the associated scattering transition amplitude. The proposed scattering model contains most of the difficulties encountered in real three-body scattering problems, e.g., non-separability in the electrons' spherical coordinates and Coulombic asymptotic behavior. Since the coordinate coupling is completely different, the model provides an alternative test to that given by the Temkin-Poet model. The knowledge of the analytic solution provides an interesting benchmark to test numerical methods dealing with the double continuum, in particular in the asymptotic regions. A hyperspherical Sturmian approach recently developed for three-body collisional problems is used to reproduce the analytical results to high accuracy. In addition, we generalize the model by generating an approximate wave function possessing the correct radial asymptotic behavior corresponding to an S-wave three-body Coulomb problem. The model allows us to explore the typical structure of the solution of a three-body driven equation, to identify three regions (the driven, the Coulombic and the asymptotic), and to analyze how far one has to go to extract the transition amplitude.

  4. Adapting Surface Ground Motion Relations to Underground conditions: A case study for the Sudbury Neutrino Observatory in Sudbury, Ontario, Canada

    NASA Astrophysics Data System (ADS)

    Babaie Mahani, A.; Eaton, D. W.

    2013-12-01

    Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs to enable estimation of amplitudes in the subsurface. We use a semi-analytic approach along with finite-difference simulations of ground-motion amplitudes for surface and underground motions. Spectral ratios of underground to surface motions are used to calculate the correction factors. Two predictive methods are used. The first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data. The finite-difference method, however, has the capability to simulate ground motion ratios more accurately than the semi-analytic approach.
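
    The spectral-ratio step described above amounts to dividing the amplitude spectra of an underground and a surface recording. The sketch below uses two synthetic sinusoidal "recordings" with known attenuation at 2 Hz and 8 Hz in place of the borehole data; the frequencies and amplitudes are invented.

```python
import numpy as np

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)           # 20 s of data

# synthetic surface and underground recordings: the underground signal is
# attenuated by a factor 0.5 at 2 Hz and 0.2 at 8 Hz by construction
surface = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 8.0 * t)
underground = 0.5 * np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.sin(2 * np.pi * 8.0 * t)

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
ratio = np.abs(np.fft.rfft(underground)) / (np.abs(np.fft.rfft(surface)) + 1e-12)

# read the empirical depth-correction factor off the ratio at each frequency
r2 = ratio[np.argmin(np.abs(freqs - 2.0))]
r8 = ratio[np.argmin(np.abs(freqs - 8.0))]
```

    The recovered ratios match the attenuation factors built into the synthetic signals, which is the same logic used to turn real surface/borehole recording pairs into frequency-dependent correction factors for a GMPE.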

  5. From Genome to Function: Systematic Analysis of the Soil Bacterium Bacillus Subtilis

    PubMed Central

    Crawshaw, Samuel G.; Wipat, Anil

    2001-01-01

    Bacillus subtilis is a sporulating Gram-positive bacterium that lives primarily in the soil and associated water sources. Whilst this bacterium has been studied extensively in the laboratory, relatively few studies have examined its activity in natural environments. The publication of the B. subtilis genome sequence and the subsequent systematic functional analysis programme have provided an opportunity to develop tools for analysing the role and expression of Bacillus genes in situ. In this paper we discuss analytical approaches that are being developed to relate genes to function in environments such as the rhizosphere. PMID:18628943

  6. Variational model for one-dimensional quantum magnets

    NASA Astrophysics Data System (ADS)

    Kudasov, Yu. B.; Kozabaranov, R. V.

    2018-04-01

    A new variational technique for investigation of the ground state and correlation functions in 1D quantum magnets is proposed. A spin Hamiltonian is reduced to a fermionic representation by the Jordan-Wigner transformation. The ground state is described by a new non-local trial wave function, and the total energy is calculated in an analytic form as a function of two variational parameters. This approach is demonstrated with an example of the XXZ-chain of spin-1/2 under a staggered magnetic field. Generalizations and applications of the variational technique for low-dimensional magnetic systems are discussed.

  7. [Recent development of metabonomics and its applications in clinical research].

    PubMed

    Li, Hao; Jiang, Ying; He, Fu-Chu

    2008-04-01

    In the post-genomic era, systems biology is central to the biological sciences. Functional genomics approaches such as transcriptomics and proteomics can simultaneously determine massive gene or protein expression changes following drug treatment or other interventions. However, these changes cannot be coupled directly to changes in biological function. As a result, metabonomics and its many pseudonyms (metabolomics, metabolic profiling, etc.) have exploded onto the scientific scene in the past several years. Metabonomics is a rapidly growing research area and a systems approach for the comprehensive and quantitative analysis of the global metabolites in a biological matrix. An analytical chemistry approach is necessary for the development of comprehensive metabonomics investigations. Fundamentally, there are two types of metabonomics approaches: mass-spectrometry (MS) based and nuclear magnetic resonance (NMR) methodologies. Metabonomics measurements provide a wealth of data, and interpretation of these data relies mainly on chemometrics approaches to perform large-scale data analysis and data visualization, such as principal and independent component analysis, multidimensional scaling, a variety of clustering techniques, and discriminant function analysis, among many others. In this review, the recent development of analytical and statistical techniques used in metabonomics is summarized. Major applications of metabonomics relevant to clinical and preclinical study are then reviewed. The applications of metabonomics in the study of liver diseases, cancers and other diseases have proved useful both as an experimental tool for pathogenesis research and ultimately as a tool for diagnosis and monitoring of treatment response. Next, the applications of metabonomics in preclinical toxicology are discussed, and the role that metabonomics might play in pharmaceutical research and development is explained with special reference to the aims and achievements of the Consortium for Metabonomic Toxicology (COMET); the concept of pharmacometabonomics as a way of predicting an individual's response to treatment is also highlighted. Finally, the role of metabonomics in elucidating the function of unknown or novel enzymes is mentioned.

  8. From Brain Maps to Cognitive Ontologies: Informatics and the Search for Mental Structure.

    PubMed

    Poldrack, Russell A; Yarkoni, Tal

    2016-01-01

    A major goal of cognitive neuroscience is to delineate how brain systems give rise to mental function. Here we review the increasingly large role informatics-driven approaches are playing in such efforts. We begin by reviewing a number of challenges conventional neuroimaging approaches face in trying to delineate brain-cognition mappings--for example, the difficulty in establishing the specificity of postulated associations. Next, we demonstrate how these limitations can potentially be overcome using complementary approaches that emphasize large-scale analysis--including meta-analytic methods that synthesize hundreds or thousands of studies at a time; latent-variable approaches that seek to extract structure from data in a bottom-up manner; and predictive modeling approaches capable of quantitatively inferring mental states from patterns of brain activity. We highlight the underappreciated but critical role for formal cognitive ontologies in helping to clarify, refine, and test theories of brain and cognitive function. Finally, we conclude with a speculative discussion of what future informatics developments may hold for cognitive neuroscience.

  10. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment-region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equations in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES. Furthermore, a sensitivity analysis of the adjustment-region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, namely the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Although the model parameters are, in general, functions of h/Lc, the model was found to be capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment-region model is combined with the one-dimensional model of Massman, which is applicable in the interior of the canopy, to attain an analytical model capable of describing the mean flow over the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.

  11. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment-region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equations in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES. Furthermore, a sensitivity analysis of the adjustment-region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, namely the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Although the model parameters are, in general, functions of h/Lc, the model was found to be capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment-region model is combined with the one-dimensional model of Massman, which is applicable in the interior of the canopy, to attain an analytical model capable of describing the mean flow over the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.

  12. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback

    NASA Astrophysics Data System (ADS)

    Bruni, Renato; Celani, Fabio

    2016-10-01

    The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law with four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: (1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; (2) the design parameters may range over very wide intervals; (3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. Such algorithms do not require an analytical expression for the objective function; they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
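
The min-max, derivative-free idea can be sketched with a toy second-order plant standing in for the spacecraft simulation and a simple coordinate search standing in for the (unspecified here) derivative-free optimizer; all dynamics, gains, and thresholds below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def settle_time(kp, kd, theta0, steps=3000, dt=0.01):
    """Toy attitude plant (hypothetical stand-in for the spacecraft
    simulation): time until |theta| and |omega| both fall below 1e-2."""
    theta, omega = theta0, 0.0
    for i in range(steps):
        if abs(theta) < 1e-2 and abs(omega) < 1e-2:
            return i * dt
        omega += dt * (-kp * theta - kd * omega)  # semi-implicit Euler
        theta += dt * omega
    return steps * dt  # did not converge within the horizon

def worst_case(params, inits=(-1.0, 0.5, 1.0)):
    """Min-max objective: worst settling time over initial conditions."""
    kp, kd = params
    return max(settle_time(kp, kd, t0) for t0 in inits)

def coordinate_search(f, x0, step=1.0, tol=1e-2, max_iter=60):
    """Minimal derivative-free optimizer: probe +/- step along each
    coordinate, accept any improvement, halve the step otherwise."""
    x = np.array(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x.copy()
                y[i] = max(y[i] + d, 1e-3)  # keep gains positive
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

best_gains, best_time = coordinate_search(worst_case, [0.5, 0.5])
```

Only objective evaluations are used, which is the defining feature of the derivative-free setting described in the abstract.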

  13. Properties of atomic pairs produced in the collision of Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Ziń, Paweł; Wasak, Tomasz

    2018-04-01

    During a collision of Bose-Einstein condensates correlated pairs of atoms are emitted. The scattered massive particles, in analogy to photon pairs in quantum optics, might be used in the violation of Bell's inequalities, demonstration of Einstein-Podolsky-Rosen correlations, or sub-shot-noise atomic interferometry. Usually, a theoretical description of the collision relies either on stochastic numerical methods or on analytical treatments involving various approximations. Here, we investigate elastic scattering of atoms from colliding elongated Bose-Einstein condensates within the Bogoliubov method, carefully controlling performed approximations at every stage of the analysis. We derive expressions for the one- and two-particle correlation functions. The obtained formulas, which relate the correlation functions to the condensate wave function, are convenient for numerical calculations. We employ the variational approach for condensate wave functions to obtain analytical expressions for the correlation functions, whose properties we analyze in detail. We also present a useful semiclassical model of the process and compare its results with the quantum one. The results are relevant for recent experiments with excited helium atoms, as well as for planned experiments aimed at investigating the nonclassicality of the system.

  14. Hidden order and flux attachment in symmetry-protected topological phases: A Laughlin-like approach

    NASA Astrophysics Data System (ADS)

    Ringel, Zohar; Simon, Steven H.

    2015-05-01

    Topological phases of matter are distinct from conventional ones by their lack of a local order parameter. Still in the quantum Hall effect, hidden order parameters exist and constitute the basis for the celebrated composite-particle approach. Whether similar hidden orders exist in 2D and 3D symmetry protected topological phases (SPTs) is a largely open question. Here, we introduce a new approach for generating SPT ground states, based on a generalization of the Laughlin wave function. This approach gives a simple and unifying picture of some classes of SPTs in 1D and 2D, and reveals their hidden order and flux attachment structures. For the 1D case, we derive exact relations between the wave functions obtained in this manner and group cohomology wave functions, as well as matrix product state classification. For the 2D Ising SPT, strong analytical and numerical evidence is given to show that the wave function obtained indeed describes the desired SPT. The Ising SPT then appears as a state with quasi-long-range order in composite degrees of freedom consisting of Ising-symmetry charges attached to Ising-symmetry fluxes.

  15. Spiral Form of the Human Cochlea Results from Spatial Constraints.

    PubMed

    Pietsch, M; Aguirre Dávila, L; Erfurt, P; Avci, E; Lenarz, T; Kral, A

    2017-08-08

    The human inner ear has an intricate spiral shape often compared to the shells of mollusks, particularly the nautilus shell, and it has inspired many functional hearing theories. The reasons for this complex geometry remain unresolved. We digitized 138 human cochleae at microscopic resolution and observed an astonishing interindividual variability in shape. A 3D analytical cochlear model was developed that fits the analyzed data with high precision. The cochlear geometry neither matched a proposed function, namely sound focusing similar to a whispering gallery, nor did it have the form of a nautilus. Instead, the innate cochlear blueprint and its actual ontogenetic variants were determined by spatial constraints and resulted from an efficient packing of the cochlear duct within the petrous bone. The analytical model predicts the individual 3D cochlear geometry well from a few clinical measures and represents a clinical tool for an individualized approach to neurosensory restoration with cochlear implants.

  16. An approach to the determination of aircraft handling qualities using pilot transfer functions

    NASA Technical Reports Server (NTRS)

    Adams, J. J.; Hatch, H. G., Jr.

    1978-01-01

    It was shown that a correlation exists between pilot-aircraft system closed-loop characteristics, determined by using analytical expressions for the pilot response along with the analytical expression for the aircraft response, and pilot ratings obtained in many previous flight and simulation studies. Two different levels of preferred pilot response were used: (1) a static gain and a second-order lag function with a lag time constant of 0.2 second; and (2) a static gain, a lead time constant of 1 second, and a 0.2-second lag time constant. It was shown that if a system response with a pitch-angle time constant of 2.6 seconds and a stable oscillatory mode of motion with a period of 2.5 seconds could be achieved with the first-level pilot model, the pilot rating would be satisfactory for that vehicle.

  17. Revision of 'Cumulative effect of the filamentation and Weibel instabilities in counterstreaming thermal plasmas' [Phys. Plasmas 13, 102107 (2006)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stockem, A.; Lazar, M.; Department of Physics and Engineering Physics, University of Saskatchewan, Saskatoon

    2008-01-15

    The dispersion formalism reported in Lazar et al. [Phys. Plasmas 13, 102107 (2006)] is affected by errors due to a mismatch between the distribution function (1), used to model the counterstreaming plasmas, and the general dispersion relations (4) and (5), into which distribution function (1) has been inserted to find the unstable solutions. The analytical approach is reviewed here, providing a correct analytical and numerical description of the cumulative effect of the filamentation and Weibel instabilities arising in initially counterstreaming plasmas with temperature anisotropies. The growth rates are plotted again, and for the cumulative mode they are orders of magnitude larger than those obtained in Lazar et al. [Phys. Plasmas 13, 102107 (2006)]. Physically, this can be understood as an increase in the efficiency of magnetic field generation, and it enhances the potential role of magnetic instabilities in the fast magnetization scenario in astrophysical applications.

  18. Precursor ion scanning-mass spectrometry for the determination of nitro functional groups in atmospheric particulate organic matter.

    PubMed

    Dron, Julien; Abidi, Ehgere; Haddad, Imad El; Marchand, Nicolas; Wortham, Henri

    2008-06-23

    An analytical method for the quantitative determination of the total nitro functional group (R-NO2) content in atmospheric particulate organic matter is developed. The method is based on the selectivity of NO2(-) (m/z 46) precursor ion scanning (PAR 46) by atmospheric pressure chemical ionization-tandem mass spectrometry (APCI-MS/MS). PAR 46 was tested on 16 nitro compounds of different molecular structures and was compared with a neutral-loss-of-NO (30 amu) technique in terms of sensitivity and efficiency in characterizing nitro functional groups. Covering a wider range of compounds, PAR 46 was preferred and applied to reference mixtures containing all 16 compounds under study. Repeatability assessments, carried out using an original statistical approach, and calibration experiments performed on the reference mixtures proved the suitability of the technique for quantitative measurements of nitro functional groups in samples of environmental interest with good accuracy. A linear range was obtained for concentrations between 0.005 and 0.25 mM, with a detection limit of 0.001 mM of nitro functional groups. Finally, the analytical error, based on an original statistical approach applied to numerous reference mixtures, was below 20%. Despite potential artifacts related to nitro-alkanes and organonitrates, this new methodology offers a promising alternative to FT-IR measurements. The relevance of the method and its potential are demonstrated through its application to aerosols collected in the EUPHORE simulation chamber during o-xylene photooxidation experiments and in a suburban area of a French alpine valley during summer.

  19. Method of and apparatus for determining the similarity of a biological analyte from a model constructed from known biological fluids

    DOEpatents

    Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.

    1990-01-01

    The characteristics of a biological fluid sample having an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte containing sample so there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the analyte containing sample. The differential absorption causes intensity variations of the infrared energy incident on the analyte containing sample as a function of sample wavelength of the energy, and concentration of the unknown analyte is determined from the thus-derived intensity variations of the infrared energy as a function of wavelength from the model absorption versus wavelength function.
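
The multivariate calibration idea in this patent abstract (a model relating known concentrations to absorption as a function of wavelength, then applied to an unknown sample) can be sketched with ordinary least squares on synthetic spectra; every value below is hypothetical, and the actual patent builds its model from measured infrared absorption of known biological fluids:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: absorption spectra of 6 known samples (rows)
# at 20 wavelengths, generated as concentration * pure-component spectrum
# plus a small noise term; all values illustrative.
pure = np.exp(-0.5 * ((np.arange(20) - 8.0) / 3.0) ** 2)
conc_train = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
A_train = np.outer(conc_train, pure) + 0.01 * rng.standard_normal((6, 20))

# Least-squares calibration: regress concentration on the spectra
# (a minimal stand-in for the multivariate model in the patent).
coef, *_ = np.linalg.lstsq(A_train, conc_train, rcond=None)

# Predict the concentration of an "unknown" sample from its spectrum.
unknown = 1.8 * pure
pred = unknown @ coef
```

The prediction step mirrors the patent's idea of inferring an unknown analyte concentration from intensity variations as a function of wavelength via a model built on known fluids.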

  20. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions.
The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, sonic boom, and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
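
The Kreisselmeier-Steinhauser (KS) function named in this record aggregates multiple objectives or constraints into one smooth scalar. A minimal sketch of its standard form (the draw-down factor rho = 50 and the objective values are arbitrary illustrative choices):

```python
import numpy as np

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of a set of function values:
    a smooth, conservative approximation of max(values). It satisfies
    max(v) <= KS(v) <= max(v) + ln(n)/rho, so larger rho tightens it."""
    v = np.asarray(values, dtype=float)
    m = v.max()                        # shift for numerical stability
    return m + np.log(np.sum(np.exp(rho * (v - m)))) / rho

g = [0.2, 0.9, 0.5]                    # hypothetical objective values
val = ks(g)
```

Because the aggregate is differentiable, it lets gradient-based nonlinear programming handle the otherwise nonsmooth max of several objectives.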

  1. Performance enhancement of Pt/TiO2/Si UV-photodetector by optimizing light trapping capability and interdigitated electrodes geometry

    NASA Astrophysics Data System (ADS)

    Bencherif, H.; Djeffal, F.; Ferhati, H.

    2016-09-01

    This paper presents a hybrid approach based on an analytical and metaheuristic investigation to study the impact of interdigitated-electrode engineering on both the speed and the optical performance of an Interdigitated Metal-Semiconductor-Metal Ultraviolet Photodetector (IMSM-UV-PD). In this context, analytical models of the speed and optical performance have been developed and validated against experimental results, with good agreement. Moreover, the developed analytical models have been used as objective functions to determine the optimized design parameters, including the interdigit configuration effect, via a Multi-Objective Genetic Algorithm (MOGA). The ultimate goal of the proposed hybrid approach is to identify the optimal design parameters associated with the maximum electrical and optical device performance. The optimized IMSM-PD not only reveals superior performance in terms of photocurrent and response time, but also illustrates higher optical reliability against the optical losses due to active-area shadowing effects. The advantages offered by the proposed design methodology suggest the possibility of overcoming the most challenging problem with the communication speed and power requirements of the UV optical interconnect: high derived current and commutation speed in the UV receiver.

  2. Analytic calculations of anharmonic infrared and Raman vibrational spectra

    PubMed Central

    Louant, Orian; Ruud, Kenneth

    2016-01-01

    Using a recently developed recursive scheme for the calculation of high-order geometric derivatives of frequency-dependent molecular properties [Ringholm et al., J. Comp. Chem., 2014, 35, 622], we present the first analytic calculations of anharmonic infrared (IR) and Raman spectra including anharmonicity both in the vibrational frequencies and in the IR and Raman intensities. In the case of anharmonic corrections to the Raman intensities, this involves the calculation of fifth-order energy derivatives—that is, the third-order geometric derivatives of the frequency-dependent polarizability. The approach is applicable to both Hartree–Fock and Kohn–Sham density functional theory. Using generalized vibrational perturbation theory to second order, we have calculated the anharmonic infrared and Raman spectra of the non- and partially deuterated isotopomers of nitromethane, where the inclusion of anharmonic effects introduces combination and overtone bands that are observed in the experimental spectra. For the major features of the spectra, the inclusion of anharmonicities in the calculation of the vibrational frequencies is more important than anharmonic effects in the calculated infrared and Raman intensities. Using methanimine as a trial system, we demonstrate that the analytic approach avoids errors in the calculated spectra that may arise if numerical differentiation schemes are used. PMID:26784673

  3. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time, and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
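
The frequency-domain deconvolution idea behind FDD-style methods can be sketched on synthetic data. The Tikhonov-style spectral filter below is a generic stand-in, not the exact AFF/ASSF filters of the paper, and all curve shapes and constants are hypothetical:

```python
import numpy as np

# Synthetic single-voxel example: tissue curve = arterial input function
# (AIF) convolved with an exponential impulse response function (IRF),
# sampled at 1 s; all shapes and constants are illustrative.
n = 128
t = np.arange(n, dtype=float)
aif = t**2 * np.exp(-t / 4.0)          # gamma-variate-like input
irf_true = np.exp(-t / 8.0)            # exponential residue function
tissue = np.convolve(aif, irf_true)[:n]

# Frequency-domain deconvolution with a Tikhonov-style spectral filter:
# divide spectra, damping frequencies where the AIF spectrum is weak.
A = np.fft.fft(aif)
C = np.fft.fft(tissue)
lam = 1e-3 * np.abs(A).max()
irf_est = np.real(np.fft.ifft(np.conj(A) * C / (np.abs(A) ** 2 + lam**2)))
```

Perfusion quantities then follow from the recovered IRF, e.g. its area relates to blood volume and its shape to mean transit time.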

  4. Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences.

    PubMed

    Voelz, David G; Roggemann, Michael C

    2009-11-10

    Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
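
The chirp sampling criterion described above can be made concrete for the single-FFT (direct) method. The regime labels below follow the usual convention for that method (short distances undersample the chirp); the paper's exact definitions may differ, and the grid values are illustrative:

```python
import numpy as np

def fresnel_sampling_regime(wavelength, dx, N, z):
    """Classify chirp sampling for the single-FFT (direct) Fresnel method.
    The chirp exp(i*pi*x^2/(wavelength*z)) has local spatial frequency
    x/(wavelength*z); keeping it below the Nyquist limit 1/(2*dx) at the
    grid edge x = N*dx/2 gives the critical distance z_c = N*dx**2/wavelength."""
    z_crit = N * dx**2 / wavelength
    if np.isclose(z, z_crit):
        return "ideally sampled"
    return "undersampled" if z < z_crit else "oversampled"

# Illustrative grid: 632.8 nm wavelength, 1024 points, 10 um spacing.
wl, dx, N = 632.8e-9, 10e-6, 1024
z_c = N * dx**2 / wl                   # critical distance, about 0.162 m
```

At the critical distance both the chirp and its FFT match the analytic expressions, which is the "ideally sampled" case discussed in the abstract.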

  5. Collision problems treated with the Generalized Hyperspherical Sturmian method

    NASA Astrophysics Data System (ADS)

    Mitnik, D. M.; Gasaneo, G.; Ancarani, L. U.; Ambrosio, M. J.

    2014-04-01

    A hyperspherical Sturmian approach recently developed for three-body break-up processes is presented. To test several of its features, the method is applied to two simplified models. Excellent agreement is found when compared with the results of an analytically solvable problem. For the Temkin-Poet model of the double ionization of He by high energy electron impact, the present method is compared with the spherical Sturmian approach, and again excellent agreement is found. Finally, a study of the channels appearing in the break-up three-body wave function is presented.

  6. Unsteady Aerodynamic Force Sensing from Strain Data

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2017-01-01

    A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.
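
The least-squares curve fit with analytical time derivatives mentioned in this record can be sketched as follows; the deflection history and polynomial degree are hypothetical stand-ins for the strain-derived deflections in the study:

```python
import numpy as np

# Hypothetical deflection history on a uniform time grid (a stand-in for
# deflections recovered from measured strain).
t = np.linspace(0.0, 1.0, 50)
x = np.sin(2.0 * np.pi * t)

# Least-squares polynomial fit, then *analytical* differentiation of the
# fitted polynomial to obtain velocity and acceleration, rather than
# noise-amplifying finite differences of the raw samples.
coeff = np.polyfit(t, x, deg=7)
vel = np.polyval(np.polyder(coeff, 1), t)   # first time derivative
acc = np.polyval(np.polyder(coeff, 2), t)   # second time derivative
```

Differentiating the fitted polynomial in closed form is what "analytical derivatives with respect to time" refers to: the smoothing of the fit carries over to the derivative estimates.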

  7. Blade selection for a modern axial-flow compressor

    NASA Technical Reports Server (NTRS)

    Wright, L. C.

    1974-01-01

    The procedures leading to the successful design of an axial flow compressor are discussed. The three related approaches to cascade selection are: (1) an experimental approach, which relies on the use of experimental results from identical cascades to satisfy the calculated velocity diagrams; (2) a purely analytical procedure, whereby blade shapes are calculated from the theoretical cascade and viscous flow equations; and (3) a semiempirical procedure, which uses experimental data together with theoretically derived functional relations to relate the cascade parameters. Diagrams of typical transonic blade sections with uncambered leading edges are presented.

  8. Functionalized nanoparticles for measurement of biomarkers using a SERS nanochannel platform

    NASA Astrophysics Data System (ADS)

    Benford, Melodie; Wang, Miao; Kameoka, Jun; Good, Theresa; Cote, Gerard

    2010-02-01

    The overall goal of this research is to develop a new point-of-care system for early detection and characterization of cardiac markers to aid in the diagnosis of acute coronary syndrome. The envisioned final technology platform incorporates functionalized gold colloidal nanoparticles trapped at the entrance to a nanofluidic device, providing a robust means for analyte detection at trace levels using surface enhanced Raman spectroscopy (SERS). To discriminate a specific biomarker, we designed an assay format analogous to a competitive ELISA. Notably, the biomarker would be captured by an antibody and in turn displace a peptide fragment containing the binding epitope of the antibody, labeled with a Raman reporter molecule that would not interfere with blood serum proteins. To demonstrate the feasibility of this approach, we used C-reactive protein (CRP) as a surrogate biomarker. We functionalized agarose beads with anti-CRP and placed them outside the nanochannel, then added either Rhodamine-6-G (R6G)-labeled CRP and gold (as a surrogate for a sample without analyte present), or R6G-labeled CRP, gold, and unlabeled CRP (as a surrogate for a sample with analyte present). Analyzing the spectra, we see an increase in intensity in the presence of analyte at peaks characteristic of R6G, specifically 1284 and 1567 cm-1. Further, our results illustrate the reproducibility of the Raman spectra collected for R6G-labeled CRP in the nanochannel. Overall, we believe that this method will provide the advantages of the sensitivity and narrow line widths characteristic of SERS as well as specificity toward the biomarker of interest.

  9. Towards the blackbox computation of magnetic exchange coupling parameters in polynuclear transition-metal complexes: theory, implementation, and application.

    PubMed

    Phillips, Jordan J; Peralta, Juan E

    2013-05-07

    We present a method for calculating magnetic coupling parameters from a single spin-configuration via analytic derivatives of the electronic energy with respect to the local spin direction. This method does not introduce new approximations beyond those found in the Heisenberg-Dirac Hamiltonian and a standard Kohn-Sham Density Functional Theory calculation, and in the limit of an ideal Heisenberg system it reproduces the coupling as determined from spin-projected energy-differences. Our method employs a generalized perturbative approach to constrained density functional theory, where exact expressions for the energy to second order in the constraints are obtained by analytic derivatives from coupled-perturbed theory. When the relative angle between magnetization vectors of metal atoms enters as a constraint, this allows us to calculate all the magnetic exchange couplings of a system from derivatives with respect to local spin directions from the high-spin configuration. Because of the favorable computational scaling of our method with respect to the number of spin-centers, as compared to the broken-symmetry energy-differences approach, this opens the possibility for the blackbox exploration of magnetic properties in large polynuclear transition-metal complexes. In this work we outline the motivation, theory, and implementation of this method, and present results for several model systems and transition-metal complexes with a variety of density functional approximations and Hartree-Fock.
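
In the ideal Heisenberg limit mentioned in this record, the coupling from spin-projected energy differences is commonly computed with a Yamaguchi-type formula. A sketch with made-up numbers (all energies and <S^2> values are hypothetical, and sign conventions for J vary between authors):

```python
# Hypothetical spin-projected energy-difference estimate of a Heisenberg
# coupling (Yamaguchi-type formula); all values are made up.
E_hs = -2000.0000          # high-spin energy, hartree
E_bs = -2000.0005          # broken-symmetry energy, hartree
s2_hs, s2_bs = 6.02, 2.01  # <S^2> expectation values of the two states

# J for a Hamiltonian of the form H = -2 J S1.S2; in this convention
# J < 0 indicates antiferromagnetic coupling.
J = (E_bs - E_hs) / (s2_hs - s2_bs)      # hartree
J_cm = J * 219474.63                     # convert hartree -> cm^-1
```

The broken-symmetry energy-difference route requires one such calculation per coupling, which is the scaling bottleneck the abstract's single-configuration derivative approach is designed to avoid.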

  10. Speckle-field propagation in 'frozen' turbulence: brightness function approach

    NASA Astrophysics Data System (ADS)

    Dudorov, Vadim V.; Vorontsov, Mikhail A.; Kolosov, Valeriy V.

    2006-08-01

    Speckle-field long- and short-exposure spatial correlation characteristics for target-in-the-loop (TIL) laser beam propagation and scattering in atmospheric turbulence are analyzed through the use of two different approaches: the conventional Monte Carlo (MC) technique and the recently developed brightness function (BF) method. Both the MC and the BF methods are applied to analysis of speckle-field characteristics averaged over target surface roughness realizations under conditions of 'frozen' turbulence. This corresponds to TIL applications where speckle-field fluctuations associated with target surface roughness realization updates occur within a time scale that can be significantly shorter than the characteristic atmospheric turbulence time. Computational efficiency and accuracy of both methods are compared on the basis of a known analytical solution for the long-exposure mutual correlation function. It is shown that in the TIL propagation scenarios considered the BF method provides improved accuracy and requires significantly less computational time than the conventional MC technique. For TIL geometry with a Gaussian outgoing beam and Lambertian target surface, both analytical and numerical estimations for the speckle-field long-exposure correlation length are obtained. Short-exposure speckle-field correlation characteristics corresponding to propagation in 'frozen' turbulence are estimated using the BF method. It is shown that atmospheric turbulence-induced static refractive index inhomogeneities do not significantly affect the characteristic correlation length of the speckle field, whereas long-exposure spatial correlation characteristics are strongly dependent on turbulence strength.

  11. Speckle-field propagation in 'frozen' turbulence: brightness function approach.

    PubMed

    Dudorov, Vadim V; Vorontsov, Mikhail A; Kolosov, Valeriy V

    2006-08-01

    Speckle-field long- and short-exposure spatial correlation characteristics for target-in-the-loop (TIL) laser beam propagation and scattering in atmospheric turbulence are analyzed through the use of two different approaches: the conventional Monte Carlo (MC) technique and the recently developed brightness function (BF) method. Both the MC and the BF methods are applied to analysis of speckle-field characteristics averaged over target surface roughness realizations under conditions of 'frozen' turbulence. This corresponds to TIL applications where speckle-field fluctuations associated with target surface roughness realization updates occur within a time scale that can be significantly shorter than the characteristic atmospheric turbulence time. Computational efficiency and accuracy of both methods are compared on the basis of a known analytical solution for the long-exposure mutual correlation function. It is shown that in the TIL propagation scenarios considered the BF method provides improved accuracy and requires significantly less computational time than the conventional MC technique. For TIL geometry with a Gaussian outgoing beam and Lambertian target surface, both analytical and numerical estimations for the speckle-field long-exposure correlation length are obtained. Short-exposure speckle-field correlation characteristics corresponding to propagation in 'frozen' turbulence are estimated using the BF method. It is shown that atmospheric turbulence-induced static refractive index inhomogeneities do not significantly affect the characteristic correlation length of the speckle field, whereas long-exposure spatial correlation characteristics are strongly dependent on turbulence strength.

  12. Quantum-shutter approach to tunneling time scales with wave packets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Norifumi; Garcia-Calderon, Gaston; Villavicencio, Jorge

    2005-07-15

    The quantum-shutter approach to tunneling time scales [G. Garcia-Calderon and A. Rubio, Phys. Rev. A 55, 3361 (1997)], which uses a cutoff plane wave as the initial condition, is extended to consider a certain type of wave-packet initial condition. An analytical expression for the time-evolved wave function is derived. The time-domain resonance, the peaked structure of the probability density (as a function of time) at the exit of the barrier, originally found with the cutoff plane-wave initial condition, is studied with the wave-packet initial conditions. It is found that the time-domain resonance is not very sensitive to the width of the packet when the transmission process occurs in the tunneling regime.

  13. Surrogate matrix and surrogate analyte approaches for definitive quantitation of endogenous biomolecules.

    PubMed

    Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L

    2012-10-01

    Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
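    The standard-addition methodology introduced above rests on a simple calculation: spike known amounts of analyte into the sample, fit response versus added concentration, and read the endogenous concentration off the magnitude of the x-intercept. A minimal sketch with hypothetical, perfectly linear data (values are illustrative, not from the paper):

```python
# Sketch: standard-addition quantitation of an endogenous analyte.
def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x)/n, sum(y)/n
    slope = sum((xi-mx)*(yi-my) for xi, yi in zip(x, y)) / sum((xi-mx)**2 for xi in x)
    return slope, my - slope*mx

added  = [0.0, 5.0, 10.0, 20.0]     # spiked concentrations (uM, hypothetical)
signal = [12.0, 17.0, 22.0, 32.0]   # instrument response (hypothetical, linear)
m, b = linfit(added, signal)
endogenous = abs(-b/m)              # |x-intercept| -> endogenous concentration
print(f"endogenous ≈ {endogenous:.1f} uM")
```

    Departure of the fitted line's slope from that of a surrogate-matrix calibration line is exactly the parallelism check the abstract describes.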

  14. Deviation pattern approach for optimizing perturbative terms of QCD renormalization group invariant observables

    NASA Astrophysics Data System (ADS)

    Khellat, M. R.; Mirjalili, A.

    2017-03-01

    We first consider the idea of renormalization group-induced estimates, in the context of optimization procedures, for the Brodsky-Lepage-Mackenzie approach to generate higher-order contributions to QCD perturbative series. Secondly, we develop the deviation pattern approach (DPA), in which, through a series of comparisons between lower-order RG-induced estimates and the corresponding analytical calculations, one can modify higher-order RG-induced estimates. Finally, using the normal estimation procedure and the DPA, we obtain estimates of the α_s^4 corrections for the Bjorken sum rule of polarized deep-inelastic scattering and for the non-singlet contribution to the Adler function.

  15. High-Speed Rotor Analytical Dynamics on Flexible Foundation Subjected to Internal and External Excitation

    NASA Astrophysics Data System (ADS)

    Jivkov, Venelin S.; Zahariev, Evtim V.

    2016-12-01

    The paper presents a geometrical approach to the dynamics simulation of a rigid-flexible system comprising a high-speed rotating machine with eccentricity and considerable inertia and mass. The machine is mounted on a vertical flexible pillar of considerable height. The stiffness and damping of the column, as well as those of the rotor bearings and the shaft, are taken into account. Non-stationary vibrations and transitional processes are analyzed. The fundamental frequency and mode shape of the flexible column are used for analytical reduction of its mass, stiffness and damping properties. The rotor and the foundation are modelled as rigid bodies, while the flexibility of the bearings is estimated from experiments and the manufacturer's specifications. The transition effects resulting from limited power are analyzed by asymptotic averaging methods. Analytical expressions for the amplitudes and unstable vibrations through resonance are derived by a quasi-static approach for increasing and decreasing exciting frequency. The analytical functions make it possible to analyze the influence of the design parameters of many structural applications, such as wind power generators, gas turbines and turbo-generators. A numerical procedure is applied to verify the effectiveness and precision of the simulation process. Nonlinear and transitional effects are analyzed and compared with the analytical results. External excitations, such as wave propagation and earthquakes, are discussed. Finite elements in relative and absolute coordinates are applied to model the flexible column and the high-speed rotating machine. Generalized Newton-Euler dynamics equations are used to derive the precise dynamics equations. Examples of simulation of the system vibrations and non-stationary behaviour are presented.

  16. Multidimensional assessment of awareness in early-stage dementia: a cluster analytic approach.

    PubMed

    Clare, Linda; Whitaker, Christopher J; Nelis, Sharon M; Martyr, Anthony; Markova, Ivana S; Roth, Ilona; Woods, Robert T; Morris, Robin G

    2011-01-01

    Research on awareness in dementia has yielded variable and inconsistent associations between awareness and other factors. This study examined awareness using a multidimensional approach and applied cluster analytic techniques to identify associations between the level of awareness and other variables. Participants were 101 individuals with early-stage dementia (PwD) and their carers. Explicit awareness was assessed at 3 levels: performance monitoring in relation to memory, evaluative judgement in relation to memory, everyday activities and socio-emotional functioning, and metacognitive reflection in relation to the experience and impact of the condition. Implicit awareness was assessed with an emotional Stroop task. Different measures of explicit awareness scores were related only to a limited extent. Cluster analysis yielded 3 groups with differing degrees of explicit awareness. These groups showed no differences in implicit awareness. Lower explicit awareness was associated with greater age, lower MMSE scores, poorer recall and naming scores, lower anxiety and greater carer stress. Multidimensional assessment offers a more robust approach to classifying PwD according to level of awareness and hence to examining correlates and predictors of awareness. Copyright © 2011 S. Karger AG, Basel.

  17. Study of a vibrating plate: comparison between experimental (ESPI) and analytical results

    NASA Astrophysics Data System (ADS)

    Romero, G.; Alvarez, L.; Alanís, E.; Nallim, L.; Grossi, R.

    2003-07-01

    Real-time electronic speckle pattern interferometry (ESPI) was used for tuning and visualization of natural frequencies of a trapezoidal plate. The plate was excited to resonant vibration by a sinusoidal acoustical source, which provided a continuous range of audio frequencies. Fringe patterns produced during the time-average recording of the vibrating plate—corresponding to several resonant frequencies—were registered. From these interferograms, calculations of vibrational amplitudes by means of zero-order Bessel functions were performed in some particular cases. The system was also studied analytically. The analytical approach developed is based on the Rayleigh-Ritz method and on the use of non-orthogonal right triangular co-ordinates. The deflection of the plate is approximated by a set of beam characteristic orthogonal polynomials generated by using the Gram-Schmidt procedure. A high degree of correlation between computational analysis and experimental results was observed.
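    The analytical side rests on a basis of orthogonal polynomials built by the Gram-Schmidt procedure. A toy sketch of that construction over [0, 1] with a uniform weight (the paper's beam characteristic polynomials additionally satisfy beam boundary conditions, which are omitted here):

```python
# Sketch: generating polynomials orthogonal on [0, 1] by Gram-Schmidt.
# Polynomials are coefficient lists, low order -> high order.
def polymul_x(p):          # multiply polynomial by x
    return [0.0] + p

def inner(p, q):           # ∫_0^1 p(x) q(x) dx, exact for polynomials
    return sum(a*b/(i+j+1) for i, a in enumerate(p) for j, b in enumerate(q))

def subtract_proj(alpha, p, q):   # q - alpha*p, padding to equal length
    n = max(len(p), len(q))
    p = p + [0.0]*(n-len(p)); q = q + [0.0]*(n-len(q))
    return [qi - alpha*pi for pi, qi in zip(p, q)]

basis = [[1.0]]                          # phi_0 = 1
for _ in range(3):                       # build phi_1..phi_3 from x*phi_k
    cand = polymul_x(basis[-1])
    for phi in basis:
        cand = subtract_proj(inner(cand, phi)/inner(phi, phi), phi, cand)
    basis.append(cand)

# all off-diagonal inner products should vanish
off = max(abs(inner(basis[i], basis[j])) for i in range(4) for j in range(i))
print(f"max off-diagonal inner product: {off:.1e}")
```

    In a Rayleigh-Ritz calculation the deflection is then expanded in such a basis and the stiffness/mass matrices assembled from these inner products.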

  18. Delay differential equations via the matrix Lambert W function and bifurcation analysis: application to machine tool chatter.

    PubMed

    Yi, Sun; Nelson, Patrick W; Ulsoy, A Galip

    2007-04-01

    In a turning process modeled using delay differential equations (DDEs), we investigate the stability of the regenerative machine tool chatter problem. An approach using the matrix Lambert W function for the analytical solution to systems of delay differential equations is applied to this problem and compared with the result obtained using a bifurcation analysis. The Lambert W function, known to be useful for solving scalar first-order DDEs, has recently been extended to a matrix Lambert W function approach to solve systems of DDEs. The essential advantages of the matrix Lambert W approach are not only the similarity to the concept of the state transition matrix in linear ordinary differential equations, enabling its use for general classes of linear delay differential equations, but also the observation that we need only the principal branch among an infinite number of roots to determine the stability of a system of DDEs. The bifurcation method combined with Sturm sequences provides an algorithm for determining the stability of DDEs without restrictive geometric analysis. With this approach, one can obtain the critical values of delay, which determine the stability of a system and hence the preferred operating spindle speed without chatter. We apply both the matrix Lambert W function and the bifurcation analysis approach to the problem of chatter stability in turning, and compare the results obtained to existing methods. The two new approaches show excellent accuracy and certain other advantages, when compared to traditional graphical, computational and approximate methods.
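    For the scalar first-order case the Lambert W route can be sketched directly: the rightmost characteristic root of x'(t) = a·x(t − τ) is s = W_0(aτ)/τ, and its sign decides stability (as the abstract notes, only the principal branch is needed). A pure-Python sketch with hypothetical parameters, using a Newton iteration in place of a library Lambert W:

```python
import math

# Sketch: stability of the scalar DDE x'(t) = a*x(t - tau) via the
# Lambert W function. Characteristic roots satisfy s = W_k(a*tau)/tau;
# the principal branch W_0 gives the rightmost root.
def lambertw0(x, tol=1e-12):
    """Real principal-branch Lambert W by Newton iteration (x >= -1/e)."""
    assert x >= -1.0/math.e, "real principal branch only"
    w = 0.0
    for _ in range(100):
        f = w*math.exp(w) - x
        w_new = w - f / (math.exp(w)*(1.0 + w))
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

a, tau = -0.3, 1.0                 # delayed feedback (hypothetical values)
s = lambertw0(a*tau) / tau         # rightmost characteristic root
print(f"s = {s:.4f}  ->  {'stable' if s < 0 else 'unstable'}")
```

    In the turning problem the scalar a becomes a matrix and W_0 its matrix counterpart, but the stability criterion on the principal-branch root carries over.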

  19. An analytical approach for the simulation of flow in a heterogeneous confined aquifer with a parameter zonation structure

    NASA Astrophysics Data System (ADS)

    Huang, Ching-Sheng; Yeh, Hund-Der

    2016-11-01

    This study introduces an analytical approach to estimate drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution for drawdown in each zone is expressed as a series in terms of the Theis equation with unknown volumetric rates from the source points. The rates are then determined based on the aquifer boundary conditions and the continuity requirements. The aquifer drawdown estimated by the present approach agrees well with a finite element solution developed based on the Mathematica function NDSolve. As compared with existing numerical approaches, the present approach has the merit of directly computing the drawdown at any given location and time, and therefore takes much less computing time to obtain the required results in engineering applications.
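    The building block of the zonal series solution is the Theis equation. A minimal sketch of single-well Theis drawdown in a homogeneous confined aquifer, using the standard series for the well function W(u) (all parameter values below are hypothetical):

```python
import math

# Sketch: the Theis building block used in the zoned-aquifer solution.
# Drawdown from one well in a homogeneous confined aquifer:
#   s = Q/(4*pi*T) * W(u),  u = r^2*S/(4*T*t)
# with W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^(k+1) u^k/(k*k!).
def well_function(u, terms=60):
    g = 0.5772156649015329                 # Euler-Mascheroni constant
    total = -g - math.log(u)
    uk, kfact = 1.0, 1.0
    for k in range(1, terms):
        uk *= u
        kfact *= k
        total += (-1)**(k+1) * uk / (k*kfact)
    return total

Q, T, S = 0.01, 1e-3, 1e-4                 # m^3/s, m^2/s, [-] (hypothetical)
r, t = 50.0, 3600.0                        # observation distance (m), time (s)
u = r*r*S / (4*T*t)
drawdown = Q/(4*math.pi*T) * well_function(u)
print(f"u = {u:.4f}, drawdown = {drawdown:.3f} m")
```

    The paper's solution superposes many such terms, one per source point, with the volumetric rates fixed by the boundary and continuity conditions.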

  20. Two stage algorithm vs commonly used approaches for the suspect screening of complex environmental samples analyzed via liquid chromatography high resolution time of flight mass spectroscopy: A test study.

    PubMed

    Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V

    2017-06-09

    LC-HR-QTOF-MS has recently become a commonly used approach for the analysis of complex samples. However, the identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two stage algorithm for LC-HR-QTOF-MS datasets. We compared the performance of the two stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e. feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross-platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e. Waters and Bruker). Our data showed that, with an appropriate spectral weighting function, the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, giving 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e. verified by target analysis) and 198 true suspects. The two stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes, while producing a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two stage algorithm. The importance of background knowledge and experience in evaluating the reliability of results during suspect screening was also evaluated. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. On the equilibrium charge density at tilt grain boundaries

    NASA Astrophysics Data System (ADS)

    Srikant, V.; Clarke, D. R.

    1998-05-01

    The equilibrium charge density and free energy of tilt grain boundaries as a function of their misorientation is computed using a Monte Carlo simulation that takes into account both the electrostatic and configurational energies associated with charges at the grain boundary. The computed equilibrium charge density increases with the grain-boundary angle and approaches a saturation value. The equilibrium charge density at large-angle grain boundaries compares well with experimental values for large-angle tilt boundaries in GaAs. The computed grain-boundary electrostatic energy is in agreement with the analytical solution to a one-dimensional Poisson equation at high donor densities but indicates that the analytical solution overestimates the electrostatic energy at lower donor densities.

  2. Analytical Derivation of Power Laws in Firm Size Variables from Gibrat's Law and Quasi-inversion Symmetry: A Geomorphological Approach

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki; Watanabe, Tsutomu

    2014-03-01

    We start from Gibrat's law and quasi-inversion symmetry for three firm size variables (i.e., tangible fixed assets K, number of employees L, and sales Y) and derive a partial differential equation to be satisfied by the joint probability density function of K and L. We then transform K and L, which are correlated, into two independent variables by applying surface openness used in geomorphology and provide an analytical solution to the partial differential equation. Using worldwide data on the firm size variables for companies, we confirm that the estimates on the power-law exponents of K, L, and Y satisfy a relationship implied by the theory.

  3. Electro-optical properties of Cu2O for P excitons in the regime of Franz-Keldysh oscillations

    NASA Astrophysics Data System (ADS)

    Zielińska-Raczyńska, Sylwia; Ziemkiewicz, David; Czajkowski, Gerard

    2018-04-01

    We present the analytical method which enables one to compute the optical functions i.e., reflectivity, transmission, and absorption, including the excitonic effects, for a semiconductor crystal exposed to a uniform electric field for the energy region above the gap and for the external field suitable for the appearance of Franz-Keldysh (FK) oscillations. Our approach intrinsically takes into account the coherence between the carriers and the electromagnetic field. We quantitatively describe the amplitudes and periodicity of FK modulations as well as the influence of Rydberg excitons on the FK effect. Our analytical findings are illustrated numerically for P excitons in Cu2O crystal.

  4. Properties of two-mode squeezed number states

    NASA Technical Reports Server (NTRS)

    Chizhov, Alexei V.; Murzakhmetov, B. K.

    1994-01-01

    Photon statistics and phase properties of two-mode squeezed number states are studied. It is shown that photon number distribution and Pegg-Barnett phase distribution for such states have similar (N + 1)-peak structure for nonzero value of the difference in the number of photons between modes. Exact analytical formulas for phase distributions based on different phase approaches are derived. The Pegg-Barnett phase distribution and the phase quasiprobability distribution associated with the Wigner function are close to each other, while the phase quasiprobability distribution associated with the Q function carries less phase information.

  5. Blade Tip Rubbing Stress Prediction

    NASA Technical Reports Server (NTRS)

    Davis, Gary A.; Clough, Ray C.

    1991-01-01

    An analytical model was constructed to predict the magnitude of stresses produced by rubbing a turbine blade against its tip seal. The model used a linearized approach to the problem after a parametric study found that the nonlinear effects were of insignificant magnitude. The important input parameters to the model were: the arc through which rubbing occurs, the turbine rotor speed, the normal force exerted on the blade, and the rubbing coefficient of friction. Since it is not possible to specify some of these parameters exactly, values were entered into the model which bracket likely values. The form of the forcing function was another variable that was impossible to specify precisely, but the assumption of a half-sine wave with a period equal to the duration of the rub was taken as realistic. The analytical model predicted resonances between harmonics of the forcing-function decomposition and known harmonics of the blade. Thus, it seemed probable that blade tip rubbing could be at least a contributor to the blade-cracking phenomenon. A full-scale, full-speed test was conducted on the space shuttle main engine high-pressure fuel turbopump Whirligig tester at speeds between 28,000 and 33,000 RPM to confirm the analytical predictions.
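    The assumed half-sine rub pulse has broad harmonic content, which is why resonances with blade harmonics are plausible: many multiples of the rotor speed carry appreciable forcing amplitude. A sketch of that harmonic decomposition with a hypothetical rotor speed and rub arc (not values from the study):

```python
import math

# Sketch: harmonic content of a half-sine rub pulse over one rotor
# revolution, for comparison against blade natural frequencies.
T_rev, t_rub, N = 1.0/550.0, 0.2/550.0, 2000   # 550 rev/s rotor, 20% rub arc (hypothetical)

def force(t):                                   # half-sine during the rub, zero otherwise
    return math.sin(math.pi*t/t_rub) if t < t_rub else 0.0

def harmonic_amp(n):                            # |c_n| via a simple Riemann sum
    dt = T_rev/N
    re = sum(force(i*dt)*math.cos(2*math.pi*n*i*dt/T_rev) for i in range(N))*dt
    im = sum(force(i*dt)*math.sin(2*math.pi*n*i*dt/T_rev) for i in range(N))*dt
    return 2.0/T_rev*math.hypot(re, im)

amps = [harmonic_amp(n) for n in range(1, 11)]  # first 10 engine-order harmonics
print([round(a, 3) for a in amps])              # slow roll-off: many harmonics excited
```

    Any harmonic n·(rotor speed) that lands near a blade natural frequency is a resonance candidate, which is the comparison the model performs.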

  6. Efficient evaluation of the Coulomb force in the Gaussian and finite-element Coulomb method.

    PubMed

    Kurashige, Yuki; Nakajima, Takahito; Sato, Takeshi; Hirao, Kimihiko

    2010-06-28

    We propose an efficient method for evaluating the Coulomb force in the Gaussian and finite-element Coulomb (GFC) method, which is a linear-scaling approach for evaluating the Coulomb matrix and energy in large molecular systems. The efficient evaluation of the analytical gradient in the GFC is not as straightforward as the evaluation of the energy, because the SCF procedure with the Coulomb matrix does not give a variational solution for the Coulomb energy. Thus, an efficient approximate method is proposed instead, in which the Coulomb potential is expanded in the Gaussian and finite-element auxiliary functions, as done in the GFC. To minimize the error in the gradient, not just in the energy, the derivative functions of the original auxiliary functions of the GFC are additionally used for the evaluation of the Coulomb gradient. In fact, the use of the derivative functions significantly improves the accuracy of this approach. Although these additional auxiliary functions enlarge the size of the discretized Poisson equation and thereby increase the computational cost, the method maintains the near-linear scaling of the GFC and does not affect the overall efficiency of the GFC approach.

  7. Bioparticles assembled using low frequency vibration immune to evacuation drifts

    NASA Astrophysics Data System (ADS)

    Shao, Fenfen; Whitehill, James David; Ng, Tuck Wah

    2012-08-01

    The use of low-frequency vibration on suspensions of glass beads in a droplet has been shown to develop a strong degree of patterning (into a ring) through the manner in which the surface waves are modified. Functionalized glass beads that serve as bioparticles permit sensitive readings when concentrated at specific locations. However, time-controlled exposure to analytes is desirable, and replacing the liquid medium with analyte through extraction is needed to conserve time. Nevertheless, we show here that extraction with a porous medium, which is simple and usable in the field, strongly displaces the patterned beads. The liquid removal was found to depend on two mechanisms that affect the shape of the droplet: contact hysteresis due to pinning of the outer edge, and liquid being drawn into the porous medium. From this, we developed and demonstrated a modified well structure that prevents micro-bead displacement during evacuation. A strong added advantage of this approach is that analytes need only be dispensed at the location of the aggregated particles, which minimizes analyte usage. This was established analytically here.

  8. Streamflow variability and optimal capacity of run-of-river hydropower plants

    NASA Astrophysics Data System (ADS)

    Basso, S.; Botter, G.

    2012-10-01

    The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity that maximizes the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity that maximizes the economic return derived from construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, evidencing the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight into the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
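    The trade-off the framework captures can be illustrated numerically: energy grows with capacity as the integral of usable flow over the flow duration curve, while cost grows roughly linearly, so the economic optimum is interior. A toy sketch with a hypothetical FDC, environmental flow and cost coefficient (not the paper's analytical expressions):

```python
# Sketch: trading produced energy against installation cost for a
# run-of-river plant, given a flow duration curve (FDC).
def fdc(p):                              # flow exceeded with probability p (m^3/s)
    return 10.0*(1.0 - p)**2 + 0.5       # hypothetical FDC

def energy(c, q_mef=0.8, n=4000):        # ∝ ∫ max(min(Q(p), c) - q_mef, 0) dp
    dp = 1.0/n
    return sum(max(min(fdc((i+0.5)*dp), c) - q_mef, 0.0) for i in range(n))*dp

def profit(c, unit_cost=0.4):            # revenue ∝ energy, cost ∝ capacity
    return energy(c) - unit_cost*c

caps = [0.5*k for k in range(1, 21)]     # candidate capacities, m^3/s
best = max(caps, key=profit)
print(f"economically optimal capacity ≈ {best} m^3/s")
```

    The optimum sits where the marginal energy gain, the exceedance probability of the installed capacity, drops to the marginal cost, mirroring the paper's distinction between energy-optimal and economically optimal capacity.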

  9. On Establishing Big Data Wave Breakwaters with Analytics (Invited)

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2013-12-01

    The Research Data Alliance Big Data Analytics (RDA-BDA) Interest Group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs of utilizing large quantities of data. RDA-BDA seeks to analyze different scientific domain applications and their potential use of various big data analytics techniques. A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. These combinations are complex, since a wide variety of different data analysis algorithms exist (e.g. specific algorithms using GPUs for analyzing brain images) that need to work together with multiple analytical tools, ranging from simple (iterative) map-reduce methods (e.g. with Apache Hadoop or Twister) to sophisticated higher-level frameworks that leverage machine learning algorithms (e.g. Apache Mahout). These computational analysis techniques are often augmented with visual analytics techniques (e.g. computational steering on large-scale high performance computing platforms) to put the human judgement into the analysis loop, or with new approaches to databases that are designed to support new forms of unstructured or semi-structured data, as opposed to the rather traditional structured databases (e.g. relational databases). More recently, data analysis and the underpinning analytics frameworks also have to consider the energy footprints of underlying resources. To sum up, the aim of this talk is to provide the pieces of information needed to understand big data analytics in the context of science and engineering, using the aforementioned classification as the lighthouse and as the frame of reference for a systematic approach.
This talk will provide insights into big data analytics methods in the context of science within various communities, and offers different views of how approaches based on correlation and causality provide complementary methods to advance science and engineering today. The RDA Big Data Analytics Group seeks to understand which approaches are not only technically feasible, but also scientifically feasible. The lighthouse goal of the RDA Big Data Analytics Group is a classification of clever combinations of various technologies and scientific applications, in order to provide clear recommendations to the scientific community on which approaches are technically and scientifically feasible.

  10. Alternative thermodynamic cycle for the Stirling machine

    NASA Astrophysics Data System (ADS)

    Romanelli, Alejandro

    2017-12-01

    We develop an alternative thermodynamic cycle for the Stirling machine, where the polytropic process plays a central role. Analytical expressions for pressure and temperatures of the working gas are obtained as a function of the volume and the parameter that characterizes the polytropic process. This approach achieves closer agreement with the experimental pressure-volume diagram and can be adapted to any type of Stirling engine.
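    The central ingredient is the polytropic relation P·V^n = const, which interpolates between the isothermal (n = 1) and adiabatic (n = γ) limits of the classical Stirling analysis. A minimal sketch with illustrative values (not parameters from the paper):

```python
# Sketch: state of an ideal gas along a polytropic process P*V^n = const
# (n = 1: isothermal; n = gamma: adiabatic; intermediate n models the
# partial heat exchange in a real Stirling machine).
R = 8.314                                   # J/(mol K)

def polytropic_state(V, P0, V0, n, moles):
    P = P0 * (V0/V)**n                      # P V^n = P0 V0^n
    T = P*V / (moles*R)                     # ideal gas law
    return P, T

P0, V0, n = 1.0e5, 1.0e-3, 1.3              # Pa, m^3, polytropic index (illustrative)
P, T = polytropic_state(0.5e-3, P0, V0, n, moles=0.04)  # 0.04 mol -> T0 ≈ 300 K
print(f"P = {P:.0f} Pa, T = {T:.1f} K")     # compression raises P and T for n > 1
```

    Sweeping V along the four strokes with an appropriate n per stroke traces the pressure-volume diagram the paper compares against experiment.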

  11. Analyticity without Differentiability

    ERIC Educational Resources Information Center

    Kirillova, Evgenia; Spindler, Karlheinz

    2008-01-01

    In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…

  12. Culture-Sensitive Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Vandenberghe, L.

    2008-01-01

    Functional analytic psychotherapy (FAP) is defined as behavior-analytically conceptualized talk therapy. In contrast to the technique-oriented educational format of cognitive behavior therapy and the use of structural mediational models, FAP depends on the functional analysis of the moment-to-moment stream of interactions between client and…

  13. Hollow silica microspheres for buoyancy-assisted separation of infectious pathogens from stool.

    PubMed

    Weigum, Shannon E; Xiang, Lichen; Osta, Erica; Li, Linying; López, Gabriel P

    2016-09-30

    Separation of cells and microorganisms from complex biological mixtures is a critical first step in many analytical applications ranging from clinical diagnostics to environmental monitoring for food and waterborne contaminants. Yet, existing techniques for cell separation are plagued by high reagent and/or instrumentation costs that limit their use in many remote or resource-poor settings, such as field clinics or developing countries. We developed an innovative approach to isolate infectious pathogens from biological fluids using buoyant hollow silica microspheres that function as "molecular buoys" for affinity-based target capture and separation by flotation. In this process, antibody-functionalized glass microspheres are mixed with a complex biological sample, such as stool. When mixing is stopped, the target-bound, low-density microspheres float to the air/liquid surface, which simultaneously isolates and concentrates the target analytes from the sample matrix. The microspheres are highly tunable in terms of size, density, and surface functionality for targeting diverse analytes, with separation times of ≤2 min in viscous solutions. We have applied the molecular buoy technique for isolation of a protozoan parasite that causes diarrheal illness, Cryptosporidium, directly from stool with separation efficiencies over 90% and low non-specific binding. This low-cost method for phenotypic cell/pathogen separation from complex mixtures is expected to have widespread use in clinical diagnostics as well as basic research. Copyright © 2016 Elsevier B.V. All rights reserved.
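    The physics behind fast flotation-based separation is buoyant Stokes rise: a hollow sphere much less dense than the fluid reaches its terminal rise speed almost instantly. An order-of-magnitude sketch with illustrative parameters (sphere size, densities and viscosity are assumptions, not values from the paper):

```python
# Sketch: Stokes terminal rise velocity of a buoyant hollow microsphere.
def stokes_rise_velocity(d, rho_fluid, rho_sphere, mu, g=9.81):
    """Terminal rise speed (m/s) of a sphere of diameter d (m) in a
    fluid of density rho_fluid and dynamic viscosity mu (Pa s)."""
    return g * d**2 * (rho_fluid - rho_sphere) / (18.0 * mu)

# Illustrative: 30 um hollow sphere (effective density 400 kg/m^3)
# in a water-like slurry five times more viscous than water.
v = stokes_rise_velocity(d=30e-6, rho_fluid=1000.0, rho_sphere=400.0, mu=5e-3)
print(f"rise velocity ≈ {v*1e3:.3f} mm/s; ~1 cm in {0.01/v:.0f} s")
```

    Because the rise speed scales with d² and the density deficit, larger and hollower spheres separate faster, which is the tunability the abstract highlights.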

  14. Validating Semi-analytic Models of High-redshift Galaxy Formation Using Radiation Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Côté, Benoit; Silvia, Devin W.; O’Shea, Brian W.; Smith, Britton; Wise, John H.

    2018-05-01

    We use a cosmological hydrodynamic simulation calculated with Enzo and the semi-analytic galaxy formation model (SAM) GAMMA to address the chemical evolution of dwarf galaxies in the early universe. The long-term goal of the project is to better understand the origin of metal-poor stars and the formation of dwarf galaxies and the Milky Way halo by cross-validating these theoretical approaches. We combine GAMMA with the merger tree of the most massive galaxy found in the hydrodynamic simulation and compare the star formation rate, the metallicity distribution function (MDF), and the age–metallicity relationship predicted by the two approaches. We found that the SAM can reproduce the global trends of the hydrodynamic simulation. However, there are degeneracies between the model parameters, and more constraints (e.g., star formation efficiency, gas flows) need to be extracted from the simulation to isolate the correct semi-analytic solution. Stochastic processes such as bursty star formation histories and star formation triggered by supernova explosions cannot be reproduced by the current version of GAMMA. Non-uniform mixing in the galaxy’s interstellar medium, coming primarily from self-enrichment by local supernovae, causes a broadening in the MDF that can be emulated in the SAM by convolving its predicted MDF with a Gaussian function having a standard deviation of ∼0.2 dex. We found that the most massive galaxy in the simulation retains nearly 100% of its baryonic mass within its virial radius, which is in agreement with what is needed in GAMMA to reproduce the global trends of the simulation.
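    The broadening step, convolving the predicted MDF with a Gaussian of standard deviation 0.2 dex, can be sketched directly on a binned MDF. The toy distribution and binning below are hypothetical, not GAMMA output:

```python
import math

# Sketch: emulating inhomogeneous-mixing broadening by convolving a
# binned metallicity distribution function (MDF) with a Gaussian of
# sigma = 0.2 dex, as described for GAMMA.
def gaussian_smooth(values, bin_width, sigma):
    half = int(4*sigma/bin_width) + 1       # kernel reach: ~4 sigma
    kern = [math.exp(-0.5*((k*bin_width)/sigma)**2) for k in range(-half, half+1)]
    norm = sum(kern)
    out = []
    for i in range(len(values)):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = i + j - half
            if 0 <= idx < len(values):
                acc += w*values[idx]
        out.append(acc/norm)
    return out

feh = [-3.0 + 0.05*i for i in range(41)]                 # [Fe/H] grid, 0.05 dex bins
mdf = [1.0 if 18 <= i <= 22 else 0.0 for i in range(41)] # narrow toy MDF near -2.0
smoothed = gaussian_smooth(mdf, bin_width=0.05, sigma=0.2)
print(f"peak at [Fe/H] ≈ {feh[smoothed.index(max(smoothed))]:.2f}: "
      f"{max(mdf):.2f} -> {max(smoothed):.2f}")
```

    The convolution lowers and widens the peak while conserving the total stellar mass in the distribution, which is exactly the effect attributed to local supernova self-enrichment.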

  15. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

    The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases, obtained from the rotorcraft flight simulation program C81 and the Myklestad mode-shape program, are analytically determined as functions of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. This method can also be utilized to ascertain, without any additional effort, the effect of a particular cost function composed of several objective functions with different weighting factors for various mission requirements.

  16. An approach to the design of wide-angle optical systems with special illumination and IFOV requirements

    NASA Astrophysics Data System (ADS)

    Pravdivtsev, Andrey V.

    2012-06-01

    The article presents an approach to the design of wide-angle optical systems with special illumination and instantaneous field of view (IFOV) requirements. Uneven illumination reduces the dynamic range of the system, which degrades its ability to perform its task. The resulting illumination on the detector depends, among other factors, on changes in the IFOV. The IFOV must also be considered in the synthesis of data-processing algorithms, as it directly affects the achievable "signal/background" ratio for statistically homogeneous backgrounds. A numerical-analytical approach that simplifies the design of wide-angle optical systems with special illumination and IFOV requirements is presented. The solution can be used for optical systems whose field of view exceeds 180 degrees. Illumination calculation in optical CAD is based on computationally expensive tracing of large numbers of rays. The author proposes using analytical expressions for some of the characteristics on which illumination depends. The remaining characteristics are determined numerically with less computationally expensive operands, and the calculation is not performed at every optimization step. The results of the analytical calculation are inserted into the merit function of the optical CAD optimizer. As a result, the optimizer load is reduced, since less computationally expensive operands are used. The proposed approach simplifies the creation and understanding of the requirements for the quality of the optical system, reduces the time and resources required to develop a system with the desired characteristics, and allows more efficient EOS to be created.

  17. Homogeneous partial differential equations for superpositions of indeterminate functions of several variables

    NASA Astrophysics Data System (ADS)

    Asai, Kazuto

    2009-02-01

    We determine essentially all partial differential equations satisfied by superpositions of tree type and of a further special type. These equations represent necessary and sufficient conditions for an analytic function to be locally expressible as an analytic superposition of the type indicated. The representability of a real analytic function by a superposition of this type is independent of whether that superposition involves real-analytic functions or C^ρ-functions, where the constant ρ is determined by the structure of the superposition. We also prove that the function u defined by u^n = xu^a + yu^b + zu^c + 1 is generally non-representable in any real (resp. complex) domain as f(g(x,y), h(y,z)) with twice differentiable f and differentiable g, h (resp. analytic f, g, h).

  18. On metric structure of ultrametric spaces

    NASA Astrophysics Data System (ADS)

    Nechaev, S. K.; Vasilyev, O. A.

    2004-03-01

    In our work we have reconsidered the old problem of diffusion at the boundary of an ultrametric tree from a 'number theoretic' point of view. Namely, we use modular functions (in particular, the Dedekind η-function) to construct a 'continuous' analogue of the Cayley tree isometrically embedded in the Poincaré upper half-plane. We then work with this continuous Cayley tree as with a standard function of a complex variable. In the framework of our approach, the results of Ogielski and Stein on dynamics in ultrametric spaces are reproduced semi-analytically or semi-numerically. A speculation on a new 'geometrical' interpretation of the replica n → 0 limit is proposed.

  19. Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2001-01-01

    A set of benchmark test articles was developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface-mounted patch actuators and a composite box beam with surface-mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions relating deflection and strain data to the input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical-to-mechanical effectiveness of the actuators, producing anti-resonance errors.

  20. At-line nanofractionation with parallel mass spectrometry and bioactivity assessment for the rapid screening of thrombin and factor Xa inhibitors in snake venoms.

    PubMed

    Mladic, Marija; Zietek, Barbara M; Iyer, Janaki Krishnamoorthy; Hermarij, Philip; Niessen, Wilfried M A; Somsen, Govert W; Kini, R Manjunatha; Kool, Jeroen

    2016-02-01

    Snake venoms comprise complex mixtures of peptides and proteins causing modulation of diverse physiological functions upon envenomation of the prey organism. The components of snake venoms are studied as research tools and as potential drug candidates. However, the bioactivity determination with subsequent identification and purification of the bioactive compounds is a demanding and often laborious effort involving different analytical and pharmacological techniques. This study describes the development and optimization of an integrated analytical approach for activity profiling and identification of venom constituents targeting the cardiovascular system, thrombin and factor Xa enzymes in particular. The approach developed encompasses reversed-phase liquid chromatography (RPLC) analysis of a crude snake venom with parallel mass spectrometry (MS) and bioactivity analysis. The analytical and pharmacological parts of this approach are linked using at-line nanofractionation. This implies that the bioactivity is assessed after high-resolution nanofractionation (6 s/well) onto high-density 384-well microtiter plates and subsequent freeze-drying of the plates. The nanofractionation and bioassay conditions were optimized to maintain LC resolution and achieve good bioassay sensitivity. The developed integrated analytical approach was successfully applied for the fast screening of snake venoms for compounds affecting thrombin and factor Xa activity. Parallel accurate MS measurements provided correlation of the observed bioactivity to peptide/protein masses. This resulted in the identification of a few interesting peptides with activity towards the drug target factor Xa from a screening campaign involving venoms of 39 snake species. Besides this, many positive protease activity peaks were observed in most venoms analysed. These protease fingerprint chromatograms were found to be similar for evolutionarily closely related species and as such might serve as generic snake protease bioactivity fingerprints in biological studies on venoms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Crystal structure optimisation using an auxiliary equation of state

    NASA Astrophysics Data System (ADS)

    Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron

    2015-11-01

    Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, particularly for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond-DFT" electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
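    One way to realize "predict the equilibrium volume from one single-point calculation" with a known equation of state is to invert the EOS analytically. The sketch below uses the Murnaghan EOS for simplicity (the paper works with fitted EOS forms more generally), and the bulk modulus B0, its pressure derivative B0', and all numbers are hypothetical, assumed transferred from a cheaper level of theory:

```python
# Murnaghan EOS: P(V) = (B0/B0') * [(V0/V)^B0' - 1].  Given B0 and B0'
# (assumed known, e.g. from a cheaper functional), a single calculated
# pressure p1 at trial volume v1 pins down the equilibrium volume V0.
def predict_v0(v1, p1, b0, b0_prime):
    """Equilibrium volume V0 from one (volume, pressure) point."""
    return v1 * (1.0 + b0_prime * p1 / b0) ** (1.0 / b0_prime)

# Self-consistency check with synthetic data: generate a pressure from a
# known V0, then recover V0 from that single (V, P) point.
v0_true, b0, b0p = 40.0, 0.6, 4.5   # hypothetical values, consistent units
v1 = 37.0
p1 = (b0 / b0p) * ((v0_true / v1) ** b0p - 1.0)
v0_est = predict_v0(v1, p1, b0, b0p)
```

    In practice the prediction is only as good as the transferred B0 and B0', which is why the abstract mentions refinement with successive calculations.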

  2. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and creating illusions using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
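    The transfer-function picture can be illustrated with a 1-D scalar sketch (all numbers hypothetical): a "scattering" element multiplies the field's spatial spectrum by H(k), and a complementary element applies 1/H(k), so the cascade is the identity and whatever sits between them is effectively hidden:

```python
import numpy as np

n = 256
x = np.linspace(-1.0, 1.0, n, endpoint=False)
field = np.exp(-((x / 0.2) ** 2))        # input field profile

k = np.fft.fftfreq(n, d=x[1] - x[0])     # spatial frequencies
# A made-up, everywhere-nonzero transfer function (phase + amplitude ripple)
H = np.exp(1j * 0.8 * np.sin(2 * np.pi * k * 0.05)) \
    * (1.0 + 0.2 * np.cos(2 * np.pi * k * 0.02))

scattered = np.fft.ifft(H * np.fft.fft(field))             # through the layer
restored = np.fft.ifft((1.0 / H) * np.fft.fft(scattered))  # complementary element
```

    The one physical caveat encoded here is that H must be invertible (nonzero) over the spectrum of interest; real cloaks face the analogous constraint.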

  3. High-beta analytic equilibria in circular, elliptical, and D-shaped large aspect ratio axisymmetric configurations with poloidal and toroidal flows

    NASA Astrophysics Data System (ADS)

    López, O. E.; Guazzotto, L.

    2017-03-01

    The Grad-Shafranov-Bernoulli system of equations is a single fluid magnetohydrodynamical description of axisymmetric equilibria with mass flows. Using a variational perturbative approach [E. Hameiri, Phys. Plasmas 20, 024504 (2013)], analytic approximations for high-beta equilibria in circular, elliptical, and D-shaped cross sections in the high aspect ratio approximation are found, which include finite toroidal and poloidal flows. Assuming a polynomial dependence of the free functions on the poloidal flux, the equilibrium problem is reduced to an inhomogeneous Helmholtz partial differential equation (PDE) subject to homogeneous Dirichlet conditions. An application of the Green's function method leads to a closed form for the circular solution and to a series solution in terms of Mathieu functions for the elliptical case, which is valid for arbitrary elongations. To extend the elliptical solution to a D-shaped domain, a boundary perturbation in terms of the triangularity is used. A comparison with the code FLOW [L. Guazzotto et al., Phys. Plasmas 11(2), 604-614 (2004)] is presented for relevant scenarios.
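    A minimal numerical stand-in for the reduced problem above is the inhomogeneous Helmholtz equation (∇² + k²)ψ = f on the unit square with homogeneous Dirichlet conditions, solved here by finite differences (the paper instead solves it analytically with Green's functions and Mathieu-function series; grid size and k² below are arbitrary):

```python
import numpy as np

n = 30                 # interior grid points per side
h = 1.0 / (n + 1)
k2 = 2.0               # k^2, well below the first Dirichlet eigenvalue 2*pi^2

# 2-D discrete Laplacian via Kronecker products; Dirichlet BCs are implicit
T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
I = np.eye(n)
A = (np.kron(I, T) + np.kron(T, I)) / h**2 + k2 * np.eye(n * n)

xs = np.arange(1, n + 1) * h
X, Y = np.meshgrid(xs, xs, indexing="ij")
f = np.sin(np.pi * X) * np.sin(np.pi * Y)   # smooth source term
psi = np.linalg.solve(A, f.ravel()).reshape(n, n)
```

    For this particular source, ∇²f = −2π²f, so the exact solution is ψ = f/(k² − 2π²), which the discrete solve reproduces to O(h²).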

  4. Analytical excited state forces for the time-dependent density-functional tight-binding method.

    PubMed

    Heringer, D; Niehaus, T A; Wanko, M; Frauenheim, Th

    2007-12-01

    An analytical formulation for the geometrical derivatives of excitation energies within the time-dependent density-functional tight-binding (TD-DFTB) method is presented. The derivation is based on the auxiliary functional approach proposed in [Furche and Ahlrichs, J Chem Phys 2002, 117, 7433]. To validate the quality of the potential energy surfaces provided by the method, adiabatic excitation energies, excited state geometries, and harmonic vibrational frequencies were calculated for a test set of molecules in excited states of different symmetry and multiplicity. According to the results, the TD-DFTB scheme surpasses the performance of configuration interaction singles and the random phase approximation but has a lower quality than ab initio time-dependent density-functional theory. As a consequence of the special form of the approximations made in TD-DFTB, the scaling exponent of the method can be reduced to three, similar to the ground state. The low scaling prefactor and the satisfactory accuracy of the method makes TD-DFTB especially suitable for molecular dynamics simulations of dozens of atoms as well as for the computation of luminescence spectra of systems containing hundreds of atoms. (c) 2007 Wiley Periodicals, Inc.

  5. Learning Analytics Considered Harmful

    ERIC Educational Resources Information Center

    Dringus, Laurie P.

    2012-01-01

    This essay is written to present a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover the important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues of learning analytics. The approach to the critique is taken through…

  6. Teaching Analytical Chemistry to Pharmacy Students: A Combined, Iterative Approach

    ERIC Educational Resources Information Center

    Masania, Jinit; Grootveld, Martin; Wilson, Philippe B.

    2018-01-01

    Analytical chemistry has often been a difficult subject to teach in a classroom or lecture-based context. Numerous strategies for overcoming the inherently practical-based difficulties have been suggested, each with differing pedagogical theories. Here, we present a combined approach to tackling the problem of teaching analytical chemistry, with…

  7. Analytical study of the heat loss attenuation by clothing on thermal manikins under radiative heat loads.

    PubMed

    Den Hartog, Emiel A; Havenith, George

    2010-01-01

    For wearers of protective clothing in radiation environments, no quantitative guidelines are available for the effect of a radiative heat load on heat exchange. Under the European Union funded project ThermProtect, an analytical effort was defined to address the issue of radiative heat load while wearing protective clothing. Since much information has become available within the ThermProtect project from thermal manikin experiments in thermal radiation environments, these experimental data sets are used to verify the analytical approach. The analytical approach provided a good prediction of the heat loss in the manikin experiments; 95% of the variance was explained by the model. The model has not yet been validated at high radiative heat loads and neglects some physical properties of the radiation emissivity. Still, the analytical approach is pragmatic and may be useful for practical implementation in protective clothing standards for moderate thermal radiation environments.

  8. Propellant Readiness Level: A Methodological Approach to Propellant Characterization

    NASA Technical Reports Server (NTRS)

    Bossard, John A.; Rhys, Noah O.

    2010-01-01

    A methodological approach to defining propellant characterization is presented. The method is based on the well-established Technology Readiness Level nomenclature. This approach establishes the Propellant Readiness Level as a metric for ascertaining the readiness of a propellant or a propellant combination by evaluating the following set of propellant characteristics: thermodynamic data, toxicity, applications, combustion data, heat transfer data, material compatibility, analytical prediction modeling, injector/chamber geometry, pressurization, ignition, combustion stability, system storability, qualification testing, and flight capability. The methodology is meant to be applicable to all propellants or propellant combinations; liquid, solid, and gaseous propellants as well as monopropellants and propellant combinations are equally served. The functionality of the proposed approach is tested through the evaluation and comparison of an example set of hydrocarbon fuels.
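    The metric described above is essentially a structured checklist. As a sketch, the record below lists the characteristics named in the abstract; the min-based roll-up rule and all level numbers are illustrative assumptions, not the paper's definition:

```python
from dataclasses import dataclass

# Characteristics taken from the abstract; the 1-9 scale and the min-based
# roll-up below are hypothetical, mirroring common TRL practice.
CHARACTERISTICS = [
    "thermodynamic data", "toxicity", "applications", "combustion data",
    "heat transfer data", "material compatibility",
    "analytical prediction modeling", "injector/chamber geometry",
    "pressurization", "ignition", "combustion stability",
    "system storability", "qualification testing", "flight capability",
]

@dataclass
class PropellantAssessment:
    name: str
    levels: dict  # characteristic -> assessed maturity level (1-9)

    def overall_prl(self) -> int:
        # Assumption: a propellant is only as ready as its least mature
        # characteristic (unassessed characteristics default to 1).
        return min(self.levels.get(c, 1) for c in CHARACTERISTICS)

heritage = PropellantAssessment("RP-1/LOX", {c: 9 for c in CHARACTERISTICS})
novel = PropellantAssessment(
    "hypothetical fuel X",
    {**{c: 6 for c in CHARACTERISTICS}, "qualification testing": 2},
)
```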

  9. FAPRS Manual: Manual for the Functional Analytic Psychotherapy Rating Scale

    ERIC Educational Resources Information Center

    Callaghan, Glenn M.; Follette, William C.

    2008-01-01

    The Functional Analytic Psychotherapy Rating Scale (FAPRS) is behavioral coding system designed to capture those essential client and therapist behaviors that occur during Functional Analytic Psychotherapy (FAP). The FAPRS manual presents the purpose and rules for documenting essential aspects of FAP. The FAPRS codes are exclusive and exhaustive…

  10. A semi-analytical description of protein folding that incorporates detailed geometrical information

    PubMed Central

    Suzuki, Yoko; Noel, Jeffrey K.; Onuchic, José N.

    2011-01-01

    Much has been done to study the interplay between geometric and energetic effects on the protein folding energy landscape. Numerical techniques such as molecular dynamics simulations are able to maintain a precise geometrical representation of the protein. Analytical approaches, however, often focus on the energetic aspects of folding, including geometrical information only in an average way. Here, we investigate a semi-analytical expression of folding that explicitly includes geometrical effects. We consider a Hamiltonian corresponding to a Gaussian filament with structure-based interactions. The model captures local features of protein folding often averaged over by mean-field theories, for example, loop contact formation and excluded volume. We explore the thermodynamics and folding mechanisms of beta-hairpin and alpha-helical structures as functions of temperature and Q, the fraction of native contacts formed. Excluded volume is shown to be an important component of a protein Hamiltonian, since it both dominates the cooperativity of the folding transition and alters folding mechanisms. Understanding geometrical effects in analytical formulae will help illuminate the consequences of the approximations required for the study of larger proteins. PMID:21721664

  11. Automatic numerical evaluation of vacancy-mediated transport for arbitrary crystals: Onsager coefficients in the dilute limit using a Green function approach

    NASA Astrophysics Data System (ADS)

    Trinkle, Dallas R.

    2017-10-01

    A general solution for vacancy-mediated diffusion in the dilute-vacancy/dilute-solute limit for arbitrary crystal structures is derived from the master equation. A general numerical approach to the vacancy lattice Green function reduces to the sum of a few analytic functions and numerical integration of a smooth function over the Brillouin zone for arbitrary crystals. The Dyson equation solves for the Green function in the presence of a solute with arbitrary but finite interaction range to compute the transport coefficients accurately, efficiently and automatically, including cases with very large differences in solute-vacancy exchange rates. The methodology takes advantage of the space group symmetry of a crystal to reduce the complexity of the matrix inversion in the Dyson equation. An open-source implementation of the algorithm is available, and numerical results are presented for the convergence of the integration error of the bare vacancy Green function, and tracer correlation factors for a variety of crystals including wurtzite (hexagonal diamond) and garnet.
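    The Dyson step can be sketched in matrix form on a toy model (sites and numbers hypothetical): with the bare Green function G0 = (zI − H0)⁻¹ and a finite-range perturbation V standing in for the solute-vacancy interaction, the perturbed Green function obeys G = G0 + G0 V G, i.e. G = (I − G0 V)⁻¹ G0:

```python
import numpy as np

# Toy 4-site nearest-neighbour chain as the unperturbed system
H0 = np.array([[0.0, 1.0, 0.0, 0.0],
               [1.0, 0.0, 1.0, 0.0],
               [0.0, 1.0, 0.0, 1.0],
               [0.0, 0.0, 1.0, 0.0]])
V = np.zeros((4, 4))
V[0, 0] = 0.3            # finite-range perturbation: one site only
z = -3.0                 # energy chosen outside the spectrum
I = np.eye(4)

G0 = np.linalg.inv(z * I - H0)            # bare Green function
G = np.linalg.solve(I - G0 @ V, G0)       # Dyson equation, solved directly
```

    Because V is nonzero only on a small block, the linear solve can in practice be restricted to that block, which is what makes the finite-interaction-range assumption computationally useful.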

  12. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods largely tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069

  13. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods largely tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
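    The steady-state-error versus convergence-time trade-off at the heart of this work can be demonstrated with a much simpler adaptive filter than the paper's Bayesian ones; the scalar LMS sketch below (all parameters made up) shows that a larger learning rate converges faster but leaves a larger steady-state parameter error:

```python
import numpy as np

def run_lms(lr, w_true=2.0, n=20000, noise=0.5, seed=1):
    """Track a scalar parameter w_true from noisy observations y = w*x + v
    with the LMS rule; return (steady-state RMS error, convergence step)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    y = w_true * x + noise * rng.standard_normal(n)
    w, track = 0.0, np.empty(n)
    for t in range(n):
        w += lr * (y[t] - w * x[t]) * x[t]   # gradient step on squared error
        track[t] = w
    sse = np.sqrt(np.mean((track[3 * n // 4:] - w_true) ** 2))
    conv = int(np.argmax(np.abs(track - w_true) < 0.1 * w_true))
    return sse, conv

sse_fast, conv_fast = run_lms(lr=0.1)     # fast but noisy at steady state
sse_slow, conv_slow = run_lms(lr=0.002)   # slow but accurate at steady state
```

    The paper's contribution is precisely to replace this kind of empirical tuning with closed-form predictions of both quantities as functions of the learning rate.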

  14. A physically-based analytical model to describe effective excess charge for streaming potential generation in saturated porous media

    NASA Astrophysics Data System (ADS)

    Jougnot, D.; Guarracino, L.

    2016-12-01

    The self-potential (SP) method is considered by most researchers to be the only geophysical method that is directly sensitive to groundwater flow. One source of SP signals, the so-called streaming potential, results from the presence of an electrical double layer at the mineral-pore water interface. When water flows through the pore space, it gives rise to a streaming current and a resulting measurable electrical voltage. Different approaches have been proposed to predict streaming potentials in porous media. One approach is based on the excess charge which is effectively dragged in the medium by the water flow. Following a recent theoretical framework, we developed a physically based analytical model to predict the effective excess charge in saturated porous media. In this study, the porous medium is described by a bundle of capillary tubes with a fractal pore-size distribution. First, an analytical relationship is derived to determine the effective excess charge for a single capillary tube as a function of the pore water salinity. Then, this relationship is used to obtain both exact and approximated expressions for the effective excess charge at the Representative Elementary Volume (REV) scale. The resulting analytical relationship allows the determination of the effective excess charge as a function of pore water salinity, fractal dimension, and hydraulic parameters like porosity and permeability, which are also obtained at the REV scale. This new model has been successfully tested against data from the literature from different sources. One of the main findings of this study is that it provides a mechanistic explanation for the empirical dependence between the effective excess charge and the permeability that has been found by various researchers. The proposed petrophysical relationship also contributes to understanding the role of porosity and water salinity in effective excess charge and will help to push further the use of streaming potentials to monitor groundwater flow.
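    The underlying geometry, a bundle of capillary tubes with a fractal (power-law) pore-size distribution, is easy to sketch generically. The code below is not the paper's derivation: it samples radii from a power-law distribution and computes bundle porosity and Hagen-Poiseuille permeability, with all parameter values hypothetical:

```python
import numpy as np

def bundle_properties(r_min=1e-6, r_max=1e-4, D=1.6,
                      n_tubes=5000, area=1e-4, seed=0):
    """Porosity and permeability of a bundle of parallel capillaries whose
    radii follow a fractal number distribution N(>r) ~ r^(-D)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_tubes)
    # inverse-CDF sampling of the truncated power-law radius distribution
    r = (r_min**-D + u * (r_max**-D - r_min**-D)) ** (-1.0 / D)
    porosity = np.pi * np.sum(r**2) / area
    # Hagen-Poiseuille: each tube conducts ~ r^4
    permeability = np.pi * np.sum(r**4) / (8.0 * area)
    return porosity, permeability

phi, k = bundle_properties()
```

    In such models the largest pores dominate permeability (the r⁴ weighting) while contributing much less to porosity, which is the geometric root of excess-charge/permeability relationships of the kind the paper derives.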

  15. Calculation of phonon dispersion relation using new correlation functional

    NASA Astrophysics Data System (ADS)

    Jitropas, Ukrit; Hsu, Chung-Hao

    2017-06-01

    To extend the use of the Local Density Approximation (LDA), a new analytical correlation functional is introduced. Correlation energy is an essential ingredient within density functional theory and is used to determine the ground state energy and other properties, including the phonon dispersion relation. Except in the high- and low-density limits, the general expression of the correlation energy is unknown, so an approximation approach is required. The accuracy of the modelled system depends on the quality of the correlation energy approximation. Typical correlation functionals used in LDA, such as Vosko-Wilk-Nusair (VWN) and Perdew-Wang (PW), were obtained by parameterizing the near-exact quantum Monte Carlo data of Ceperley and Alder. These functionals are presented in complex forms and are inconvenient to implement. Alternatively, the recently published Chachiyo correlation functional provides results comparable to those much more complicated functionals. In addition, it provides more predictive power, being based on a first-principles approach rather than fitted functionals. Nevertheless, the performance of the Chachiyo formula for calculating the phonon dispersion relation (a key to the thermal properties of materials) has not yet been tested. Here, the implementation of the new correlation functional to calculate the phonon dispersion relation is initiated. Its accuracy and validity will be explored.
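    The simplicity claimed above is concrete: the Chachiyo correlation energy per electron has, in the spin-unpolarized case, the closed form ε_c(r_s) = a·ln(1 + b/r_s + b/r_s²), where a = (ln 2 − 1)/(2π²) is fixed by the high-density limit. The value of b below is as commonly quoted; verify against the original paper before production use:

```python
import numpy as np

A = (np.log(2.0) - 1.0) / (2.0 * np.pi**2)   # ~ -0.01554535 hartree
B = 20.4562557                               # single parameter (verify vs. paper)

def chachiyo_ec(rs):
    """Correlation energy per electron (hartree) vs. Wigner-Seitz radius rs,
    spin-unpolarized uniform electron gas."""
    rs = np.asarray(rs, dtype=float)
    return A * np.log1p(B / rs + B / rs**2)
```

    Compare this one-liner with the multi-branch VWN or PW parameterizations; the ease of taking analytical derivatives of ln(1 + …) is part of what makes the functional attractive for phonon work.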

  16. Macroscopic dielectric function within time-dependent density functional theory—Real time evolution versus the Casida approach

    NASA Astrophysics Data System (ADS)

    Sander, Tobias; Kresse, Georg

    2017-02-01

    Linear optical properties can be calculated by solving the time-dependent density functional theory equations. Linearization of the equation of motion around the ground state orbitals results in the so-called Casida equation, which is formally very similar to the Bethe-Salpeter equation. Alternatively one can determine the spectral functions by applying an infinitely short electric field in time and then following the evolution of the electron orbitals and the evolution of the dipole moments. The long wavelength response function is then given by the Fourier transformation of the evolution of the dipole moments in time. In this work, we compare the results and performance of these two approaches for the projector augmented wave method. To allow for large time steps and still rely on a simple difference scheme to solve the differential equation, we correct for the errors in the frequency domain, using a simple analytic equation. In general, we find that both approaches yield virtually indistinguishable results. For standard density functionals, the time evolution approach is clearly superior to the solution of the Casida equation with respect to computational performance. However, for functionals including nonlocal exchange, the direct solution of the Casida equation is usually much more efficient, even though it scales less favorably with system size. We relate this to the large computational prefactors in evaluating the nonlocal exchange, which render the time evolution algorithm fairly inefficient.
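    The real-time route described above (kick, propagate, Fourier-transform the dipole) can be illustrated with a toy signal in place of the propagated orbitals; the transition frequency and damping below are made up:

```python
import numpy as np

omega = 1.5                      # hypothetical transition frequency (a.u.)
dt, nsteps = 0.05, 4096
t = dt * np.arange(nsteps)
# After a delta-kick, the dipole of a two-level toy system oscillates at
# the transition frequency; damping mimics finite spectral broadening.
dipole = np.sin(omega * t) * np.exp(-t / 40.0)

spectrum = np.abs(np.fft.rfft(dipole))
freqs = 2.0 * np.pi * np.fft.rfftfreq(nsteps, d=dt)   # angular frequencies
peak = freqs[np.argmax(spectrum)]                     # recovered excitation
```

    The frequency resolution is 2π/(N·dt), which is why real-time spectra need long propagations, the practical cost the abstract weighs against solving the Casida equation directly.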

  17. Exact mean-energy expansion of Ginibre's gas for coupling constants Γ = 2 × (odd integer)

    NASA Astrophysics Data System (ADS)

    Salazar, R.; Téllez, G.

    2017-12-01

    Using the approach of a Vandermonde determinant to the power Γ = Q²/(k_B T) expanded on monomial functions, a way to find the excess energy U_exc of the two-dimensional one-component plasma (2DOCP) on hard and soft disks (or a Dyson gas) for odd values of Γ/2 is provided. At Γ = 2, the present study not only corroborates the result for the particle-particle energy contribution of the Dyson gas found by Shakirov [Shakirov, Phys. Lett. A 375, 984 (2011), 10.1016/j.physleta.2011.01.004] by using an alternative approach, but also provides the exact N-finite expansion of the excess energy of the 2DOCP on the hard disk. The excess energy is fitted to an ansatz of the form U_exc = K1·N + K2·√N + K3 + K4/N + O(1/N²) to study the finite-size correction, with Ki coefficients and N the number of particles. In particular, the bulk term of the excess energy is in agreement with the well known result of Jancovici for the hard disk in the thermodynamic limit [Jancovici, Phys. Rev. Lett. 46, 386 (1981), 10.1103/PhysRevLett.46.386]. Finally, an expression is found for the pair correlation function which still keeps a link with random matrix theory via the kernel of the Ginibre ensemble [Ginibre, J. Math. Phys. 6, 440 (1965), 10.1063/1.1704292] for odd values of Γ/2. A comparison between the analytical two-body density function and histograms obtained with Monte Carlo simulations for small systems and Γ = 2, 6, 10, ... shows that the approach described in this paper may be used to study analytically the crossover behavior from systems in the fluid phase to small crystals.
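    The finite-size ansatz quoted above is linear in the coefficients Ki, so fitting it is ordinary least squares on the basis {N, √N, 1, 1/N}. A sketch with synthetic data (the coefficient values are made up; real input would be the computed excess energies per N):

```python
import numpy as np

N = np.array([4, 8, 16, 32, 64, 128, 256], dtype=float)
K_true = np.array([-1.1, 0.4, 0.2, -0.3])           # hypothetical K1..K4

# Design matrix for U = K1*N + K2*sqrt(N) + K3 + K4/N
design = np.column_stack([N, np.sqrt(N), np.ones_like(N), 1.0 / N])
U = design @ K_true                                 # synthetic "data"

K_fit, *_ = np.linalg.lstsq(design, U, rcond=None)  # recover the K_i
```

    With noisy data from Monte Carlo, the same solve yields the coefficients and, via the residuals, a check on whether the O(1/N²) truncation is adequate.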

  18. Magnetic exchange couplings from constrained density functional theory: an efficient approach utilizing analytic derivatives.

    PubMed

    Phillips, Jordan J; Peralta, Juan E

    2011-11-14

    We introduce a method for evaluating magnetic exchange couplings based on the constrained density functional theory (C-DFT) approach of Rudra, Wu, and Van Voorhis [J. Chem. Phys. 124, 024103 (2006)]. Our method shares the same physical principles as C-DFT but makes use of the fact that the electronic energy changes quadratically and bilinearly with respect to the constraints in the range of interest. This allows us to use coupled perturbed Kohn-Sham spin density functional theory to determine approximately the corrections to the energy of the different spin configurations and construct a priori the relevant energy landscapes obtained from constrained spin density functional theory. We assess this methodology in a set of binuclear transition-metal complexes and show that it reproduces very closely the results of C-DFT. This demonstrates a proof-of-concept for this method as a potential tool for studying a number of other molecular phenomena. Additionally, routes to improving upon the limitations of this method are discussed. © 2011 American Institute of Physics.

  19. Cotinine analytical workshop report: consideration of analytical methods for determining cotinine in human body fluids as a measure of passive exposure to tobacco smoke.

    PubMed Central

    Watts, R R; Langone, J J; Knight, G J; Lewtas, J

    1990-01-01

    A two-day technical workshop was convened November 10-11, 1986, to discuss analytical approaches for determining trace amounts of cotinine in human body fluids resulting from passive exposure to environmental tobacco smoke (ETS). The workshop, jointly sponsored by the U.S. Environmental Protection Agency and Centers for Disease Control, was attended by scientists with expertise in cotinine analytical methodology and/or conduct of human monitoring studies related to ETS. The workshop format included technical presentations, separate panel discussions on chromatography and immunoassay analytical approaches, and group discussions related to the quality assurance/quality control aspects of future monitoring programs. This report presents a consensus of opinion on general issues before the workshop panel participants and also a detailed comparison of several analytical approaches being used by the various represented laboratories. The salient features of the chromatography and immunoassay analytical methods are discussed separately. PMID:2190812

  20. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. 
Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  1. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. 
IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.

  2. Who do we think we are? Analysing the content and form of identity work in the English National Health Service.

    PubMed

    McDermott, Imelda; Checkland, Kath; Harrison, Stephen; Snow, Stephanie; Coleman, Anna

    2013-01-01

    The language used by National Health Service (NHS) "commissioning" managers when discussing their roles and responsibilities can be seen as a manifestation of "identity work", defined as a process of identifying. This paper aims to offer a novel approach to analysing "identity work" through the triangulation of multiple analytical methods, combining analysis of the content of text with analysis of its form. Fairclough's discourse analytic methodology is used as a framework. Following Fairclough, the authors use analytical methods associated with Halliday's systemic functional linguistics. While analysis of the content of interviews provides some information about NHS commissioners' perceptions of their roles and responsibilities, analysis of the form of discourse that they use provides a more detailed and nuanced view. Overall, the authors found that commissioning managers are more certain about what commissioning is not than about what it is; GP managers are more certain of their identity as GPs than as managers; and both GP managers and non-GP managers oscillate between multiple identities depending on the situations they are in. This paper offers a novel approach to triangulation, based not on the usual comparison of multiple data sources but on the application of multiple analytical methods to a single source of data. It also reveals a latent uncertainty about the nature of the commissioning enterprise in the English NHS.

  3. Morphometry, geometry, function, and the future.

    PubMed

    Mcnulty, Kieran P; Vinyard, Christopher J

    2015-01-01

    The proliferation of geometric morphometrics (GM) in biological anthropology and more broadly throughout the biological sciences has resulted in a multitude of studies that adopt landmark-based approaches for addressing a variety of questions in evolutionary morphology. In some cases, particularly in the realm of systematics, the fit between research question and analytical design is quite good. Functional-adaptive studies, however, do not readily conform to the methods available in the GM toolkit. The symposium organized by Terhune and Cooke entitled "Assessing function via shape: What is the place of GM in functional morphology?" held at the 2013 meetings of the American Association of Physical Anthropologists was designed specifically to explore this relationship between landmark-based methods and analyses of functional morphology, and the articles in this special issue, which stem in large part from this symposium, provide numerous examples of how the two approaches can complement and contrast each other. Here, we underscore some of the major difficulties in interpreting GM results within a functional regime. In combination with other contributions in this issue, we identify emerging areas of research that will help bridge the gap between multivariate morphometry and functional-adaptive analysis. Ultimately, neither the geometric nor the functional morphometric approach is sufficient to elaborate the adaptive pathways that explain morphological evolution through natural selection. These perspectives must be further integrated with research from physiology, developmental biology, genomics, and ecology. © 2014 Wiley Periodicals, Inc.

  4. Multiple Interactive Pollutants in Water Quality Trading

    NASA Astrophysics Data System (ADS)

    Sarang, Amin; Lence, Barbara J.; Shamsai, Abolfazl

    2008-10-01

    Efficient environmental management calls for the consideration of multiple pollutants, for which two main types of transferable discharge permit (TDP) program have been described: separate permits that manage each pollutant individually in separate markets, with each permit based on the quantity of the pollutant or its environmental effects, and weighted-sum permits that aggregate several pollutants as a single commodity to be traded in a single market. In this paper, we perform a mathematical analysis of TDP programs for multiple pollutants that jointly affect the environment (i.e., interactive pollutants) and demonstrate the practicality of this approach for cost-efficient maintenance of river water quality. For interactive pollutants, the relative weighting factors are functions of the water quality impacts, marginal damage function, and marginal treatment costs at optimality. We derive the optimal set of weighting factors required by this approach for important scenarios for multiple interactive pollutants and propose using an analytical elasticity of substitution function to estimate damage functions for these scenarios. We evaluate the applicability of this approach using a hypothetical example that considers two interactive pollutants. We compare the weighted-sum permit approach for interactive pollutants with individual permit systems and TDP programs for multiple additive pollutants. We conclude by discussing practical considerations and implementation issues that result from the application of weighted-sum permit programs.
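The weighted-sum permit scheme described above aggregates several pollutants into a single tradable commodity using relative weighting factors. A minimal accounting sketch follows; the pollutant names, weights, and discharge quantities are hypothetical, and at optimality the weights would come from the water quality impacts and marginal costs discussed in the record:

```python
# Weighted-sum permit accounting: each source's discharges of several
# pollutants are aggregated into a single tradable quantity using
# relative weighting factors (hypothetical values throughout).
def permit_units(discharges, weights):
    """discharges: {pollutant: quantity}; weights: {pollutant: factor}."""
    return sum(weights[p] * q for p, q in discharges.items())

weights = {"BOD": 1.0, "NH3": 2.5}        # hypothetical relative weights
source_a = {"BOD": 40.0, "NH3": 8.0}
source_b = {"BOD": 25.0, "NH3": 14.0}

# Aggregate demand traded against a single market cap.
total = permit_units(source_a, weights) + permit_units(source_b, weights)
print(total)  # 60.0 + 60.0 = 120.0
```

Under this scheme only the aggregate units are traded, so a source can offset an increase in one pollutant with a weighted decrease in another.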

  5. Flat-plate solar array project. Volume 6: Engineering sciences and reliability

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.; Smokler, M. I.

    1986-01-01

    The Flat-Plate Solar Array (FSA) Project activities directed at developing the engineering technology base required to achieve modules that meet the functional, safety, and reliability requirements of large scale terrestrial photovoltaic systems applications are reported. These activities included: (1) development of functional, safety, and reliability requirements for such applications; (2) development of the engineering analytical approaches, test techniques, and design solutions required to meet the requirements; (3) synthesis and procurement of candidate designs for test and evaluation; and (4) performance of extensive testing, evaluation, and failure analysis to define design shortfalls and, thus, areas requiring additional research and development. A summary of the approach and technical outcome of these activities is provided along with a complete bibliography of the published documentation covering the detailed accomplishments and technologies developed.

  6. QFD-ANP Approach for the Conceptual Design of Research Vessels: A Case Study

    NASA Astrophysics Data System (ADS)

    Venkata Subbaiah, Kambagowni; Yeshwanth Sai, Koneru; Suresh, Challa

    2016-10-01

    Conceptual design is a subset of concept art wherein a new product idea is created rather than a visual representation to be used directly in a final product. The purpose here is to understand the needs that conceptual design serves in engineering design and to clarify current conceptual design practice. Quality function deployment (QFD) is a customer-oriented design approach for developing new or improved products and services to enhance customer satisfaction. The house of quality (HOQ) has traditionally been used as the planning tool of QFD, translating customer requirements (CRs) into design requirements (DRs). Factor analysis is carried out in order to reduce the CR portion of the HOQ. The analytic hierarchy process is employed to obtain the priority ratings of the CRs, which are used in constructing the HOQ. This paper mainly discusses the conceptual design of an oceanographic research vessel using the analytic network process (ANP) technique. Finally, the integrated QFD-ANP methodology helps to establish the importance ratings of the DRs.
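The priority ratings mentioned above are conventionally obtained in AHP as the normalized principal eigenvector of a pairwise comparison matrix. A minimal sketch follows; the 3×3 comparison matrix is hypothetical, not taken from the research vessel case study:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Normalized principal eigenvector of a reciprocal comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    k = int(np.argmax(vals.real))          # Perron (largest) eigenvalue
    w = np.abs(vecs[:, k].real)            # eigenvector, sign-normalized
    return w / w.sum()

# Hypothetical reciprocal comparison matrix for three customer requirements:
# CR1 is moderately more important than CR2, strongly more than CR3.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
w = ahp_priorities(A)
```

The resulting weights sum to one and rank the CRs for use in the HOQ; consistency of the judgments would additionally be checked via the consistency ratio.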

  7. Linguistic Sources of Skinner's Verbal Behavior

    PubMed Central

    Matos, Maria Amelia; da F. Passos, Maria de Lourdes R.

    2006-01-01

    Formal and functional analyses of verbal behavior have often been considered divergent and incompatible. Yet, an examination of the history of part of the analytical approach used in Verbal Behavior (Skinner, 1957/1992) for the identification and conceptualization of verbal operant units discloses that it corresponds well with formal analyses of languages. Formal analyses have been carried out since the invention of writing and fall within the scope of traditional grammar and structural linguistics, particularly in analyses made by the linguist Leonard Bloomfield. The relevance of analytical instruments originated from linguistic studies (which examine and describe the practices of verbal communities) to the analysis of verbal behavior, as proposed by Skinner, relates to the conception of a verbal community as a prerequisite for the acquisition of verbal behavior. A deliberately interdisciplinary approach is advocated in this paper, with the systematic adoption of linguistic analyses and descriptions adding relevant knowledge to the design of experimental research in verbal behavior. PMID:22478454

  8. Different coupled atmosphere-recharge oscillator Low Order Models for ENSO: a projection approach.

    NASA Astrophysics Data System (ADS)

    Bianucci, Marco; Mannella, Riccardo; Merlino, Silvia; Olivieri, Andrea

    2016-04-01

    El Niño-Southern Oscillation (ENSO) is a large-scale geophysical phenomenon where, according to the celebrated recharge oscillator model (ROM), the slow ocean variables, given by the East Pacific sea surface temperature (SST) and the average thermocline depth (h), interact with some fast "irrelevant" ones, representing mostly the atmosphere (the westerly wind bursts and the Madden-Julian Oscillation). The fast variables are usually inserted in the model as an external stochastic forcing. In a recent work (M. Bianucci, "Analytical probability density function for the statistics of the ENSO phenomenon: asymmetry and power law tail," Geophysical Research Letters, in press), the author, using a projection approach applied to general deterministic coupled systems, gives a physically reasonable explanation for the use of stochastic models to mimic the apparent random features of the ENSO phenomenon. Moreover, in the same paper, assuming that the interaction between the ROM and the fast atmosphere is of multiplicative type, i.e., that it depends on the SST variable, an analytical expression for the equilibrium density function of the SST anomaly is obtained. This expression fits the observational data well, reproducing the asymmetry and the power-law tail of the histograms of the NIÑO3 index. Here, using the same theoretical approach, we consider and discuss different kinds of interactions between the ROM and the other perturbing variables, and we also take into account nonlinear ROMs as low-order models for ENSO. The theoretical and numerical results are then compared with observational data.

  9. SU-G-206-06: Analytic Dose Function for CT Scans in Infinite Cylinders as a Function of Scan Length and Cylinder Radius

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakalyar, D; Feng, W; McKenney, S

    Purpose: The radiation dose absorbed at a particular radius ρ within the central plane of a long cylinder following a CT scan is a function of the length of the scan L and the cylinder radius R, along with kVp and cylinder composition. An analytic function was created that not only expresses these dependencies but is integrable in closed form over the area of the central plane. This feature facilitates explicit calculation of the planar average dose. The "approach to equilibrium" h(L) discussed in the TG111 report is seamlessly included in this function. Methods: For a cylindrically symmetric radiation field, Monte Carlo calculations were performed to compute the dose distribution to long polyethylene cylinders for scans of varying L, for cylinders ranging in radius from 5 to 20 cm. The function was developed from the resultant Monte Carlo data. In addition, the function was successfully fit to data taken from measurements on the 30 cm diameter ICRU/TG200 phantom using a real-time dosimeter. Results: Symmetry and continuity dictate a local extremum at the center, which is a minimum for the larger sizes. There are competing effects as the beam penetrates the cylinder from the outside: attenuation, resulting in a decrease; and scatter, abruptly increasing at the circumference. This competition may result in an absolute maximum between the center and the outer edge, leading to a "gull wing" shape for the radial dependence. For the smallest cylinders, scatter may dominate to the extent that there is an absolute maximum at the center. Conclusion: An integrable, analytic function has been developed that provides the radial dependency of dose for the central plane of a scan of length L for cylinders of varying diameter. Equivalently, we have developed h(L,R,ρ).

  10. Problems of the Synthesis of Radar Signals,

    DTIC Science & Technology

    1981-05-14

    Unfortunately, the more complete work of the same authors on the theme indicated (reference of 6 articles [43]) was not published. In the number of research on...conditions of one or the other interferences. Some research of this type is [15]. But here, as when selecting of general/common/total approach to the...this approach more rarely. The set Y, characterized by the desired property, frequently contains continuous, analytic functions. Therefore research of

  11. Response of space shuttle insulation panels to acoustic noise pressure

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1976-01-01

    The response of reusable space shuttle insulation panels to random acoustic pressure fields is studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both the frequency and time domains.

  12. An Analytical Approach for Relating Boiling Points of Monofunctional Organic Compounds to Intermolecular Forces

    ERIC Educational Resources Information Center

    Struyf, Jef

    2011-01-01

    The boiling point of a monofunctional organic compound is expressed as the sum of two parts: a contribution to the boiling point due to the R group and a contribution due to the functional group. The boiling point in absolute temperature of the corresponding RH hydrocarbon is chosen for the contribution to the boiling point of the R group and is a…

  13. An analytic modeling and system identification study of rotor/fuselage dynamics at hover

    NASA Technical Reports Server (NTRS)

    Hong, Steven W.; Curtiss, H. C., Jr.

    1993-01-01

    A combination of analytic modeling and system identification methods has been used to develop an improved dynamic model describing the response of articulated rotor helicopters to control inputs. A high-order linearized model of coupled rotor/body dynamics including flap and lag degrees of freedom and inflow dynamics with literal coefficients is compared to flight test data from single rotor helicopters in the near hover trim condition. The identification problem was formulated using the maximum likelihood function in the time domain. The dynamic model with literal coefficients was used to generate the model states, and the model was parametrized in terms of physical constants of the aircraft rather than the stability derivatives, resulting in a significant reduction in the number of quantities to be identified. The likelihood function was optimized using the genetic algorithm approach. This method proved highly effective in producing an estimated model from flight test data which included coupled fuselage/rotor dynamics. Using this approach it has been shown that blade flexibility is a significant contributing factor to the discrepancies between theory and experiment shown in previous studies. Addition of flexible modes, properly incorporating the constraint due to the lag dampers, results in excellent agreement between flight test and theory, especially in the high frequency range.
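The identification strategy above optimizes a likelihood function with a genetic algorithm rather than gradient methods. The following is a minimal GA sketch on a toy log-likelihood peaked at an assumed optimum of (1.5, -0.5); the population sizes, operators, and objective are all illustrative, not the rotorcraft model from the record:

```python
import random

def ga_maximize(loglik, bounds, pop=40, gens=60, seed=1):
    """Tiny elitist genetic algorithm maximizing loglik over a box."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(*bounds[i]) for i in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=loglik, reverse=True)
        elite = P[: pop // 4]              # keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)    # crossover: midpoint of two elites
            child = [(x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        P = elite + children
    return max(P, key=loglik)

# Toy log-likelihood with its maximum at theta = (1.5, -0.5).
def loglik(theta):
    return -((theta[0] - 1.5) ** 2 + (theta[1] + 0.5) ** 2)

best = ga_maximize(loglik, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Because the GA only needs function evaluations, the same loop applies unchanged when the likelihood comes from simulating a dynamic model against flight test data.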

  14. A semi-analytical method for near-trapped mode and fictitious frequencies of multiple scattering by an array of elliptical cylinders in water waves

    NASA Astrophysics Data System (ADS)

    Chen, Jeng-Tzong; Lee, Jia-Wei

    2013-09-01

    In this paper, we focus on the water wave scattering by an array of four elliptical cylinders. The null-field boundary integral equation method (BIEM) is used in conjunction with degenerate kernels and eigenfunction expansions. The closed-form fundamental solution is expressed in terms of the degenerate kernel containing the Mathieu and the modified Mathieu functions in elliptical coordinates. Boundary densities are represented by using the eigenfunction expansion. By introducing an adaptive observer system, the present approach can solve the water wave problem containing multiple elliptical cylinders in a semi-analytical manner without using the addition theorem to translate the Mathieu functions. Regarding water wave problems, the phenomenon of numerical instability due to fictitious frequencies may appear when the BIEM/boundary element method (BEM) is used. Besides, the near-trapped mode for an array of four identical elliptical cylinders is observed in a special layout. Both physical (near-trapped mode) and mathematical (fictitious frequency) resonances simultaneously appear in the present paper for a water wave problem with an array of four identical elliptical cylinders. Two regularization techniques, the combined Helmholtz interior integral equation formulation (CHIEF) method and the Burton and Miller approach, are adopted to alleviate the numerical resonance due to fictitious frequencies.

  15. Analytical representation of dynamical quantities in GW from a matrix resolvent

    NASA Astrophysics Data System (ADS)

    Gesenhues, J.; Nabok, D.; Rohlfing, M.; Draxl, C.

    2017-12-01

    The power of the GW formalism is, to a large extent, based on the explicit treatment of dynamical correlations in the self-energy. This dynamics is taken into account by calculating the energy dependence of the screened Coulomb interaction W, followed by a convolution with the Green's function G. To obtain the energy dependence of W, the prevalent methods are plasmon-pole models and numerical integration techniques. In this paper, we discuss an alternative approach, in which the energy-dependent screening is calculated by determining the resolvent, which is set up from a matrix representation of the dielectric function. On the one hand, this avoids a numerical energy convolution and allows one to write down the energy dependence of W explicitly (as in the plasmon-pole models). On the other hand, the method is at least as accurate as the numerical approaches due to its multipole nature. We discuss the theoretical setup in some detail, give insight into the computational aspects, and present results for Si, C, GaAs, and LiF. Finally, we argue that the analytic representability is not only useful for educational purposes but may also be of use for the development of theory that goes beyond GW.
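The multipole character of a matrix resolvent can be seen in a generic linear-algebra sketch: once a Hermitian matrix is diagonalized, its resolvent is an explicit sum over poles, so the full frequency dependence is available analytically. The matrix below is a toy stand-in, not an actual dielectric matrix:

```python
import numpy as np

# For a Hermitian matrix H with eigenpairs (E_k, v_k), the resolvent is
#   G(w) = (w*I - H)^{-1} = sum_k v_k v_k^T / (w - E_k),
# an explicit multipole representation in the frequency w.
H = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])
E, V = np.linalg.eigh(H)

def resolvent(w):
    # Columns of V scaled by 1/(w - E_k), then recombined: the pole sum.
    return (V / (w - E)) @ V.T

w = 3.0 + 0.1j
G_direct = np.linalg.inv(w * np.eye(3) - H)   # brute-force inverse
G_poles = resolvent(w)                        # multipole representation
```

Both routes agree at every frequency, but the pole sum is computed once from a single diagonalization, which is the advantage the record attributes to the resolvent approach.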

  17. Electrolyte solutions at curved electrodes. I. Mesoscopic approach

    NASA Astrophysics Data System (ADS)

    Reindl, Andreas; Bier, Markus; Dietrich, S.

    2017-04-01

    Within the Poisson-Boltzmann approach, electrolytes in contact with planar, spherical, and cylindrical electrodes are analyzed systematically. The dependences of their capacitance C on the surface charge density σ and the ionic strength I are examined as a function of the wall curvature. The surface charge density has a strong effect on the capacitance for small curvatures, whereas for large curvatures the behavior becomes independent of σ. An expansion for small curvatures gives rise to capacitance coefficients which depend only on a single parameter, allowing for a convenient analysis. The universal behavior at large curvatures can be captured by an analytic expression.
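In the flat-wall limit of the curvature expansion discussed above, the Poisson-Boltzmann capacitance of a planar electrode in a 1:1 electrolyte reduces to the classical Gouy-Chapman result, C(σ) = εκ·sqrt(1 + (σ/σ₀)²) with σ₀ = sqrt(8 n₀ ε k_B T) from the Grahame equation. The sketch below evaluates this textbook formula (it is not the curved-electrode result of the paper itself); the concentration and temperature are illustrative:

```python
import math

def gouy_chapman_capacitance(sigma, n0, T=298.15, eps_r=78.5):
    """Diffuse-layer capacitance [F/m^2] of a planar wall, 1:1 electrolyte.

    sigma: surface charge density [C/m^2]; n0: ion number density [1/m^3].
    """
    e = 1.602176634e-19        # elementary charge [C]
    kB = 1.380649e-23          # Boltzmann constant [J/K]
    eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
    eps = eps_r * eps0
    kappa = math.sqrt(2.0 * n0 * e * e / (eps * kB * T))  # inverse Debye length
    s0 = math.sqrt(8.0 * n0 * eps * kB * T)               # Grahame scale
    return eps * kappa * math.hypot(1.0, sigma / s0)      # eps*kappa*cosh term

n0 = 6.022e25  # number density for a 0.1 M solution [1/m^3]
C0 = gouy_chapman_capacitance(0.0, n0)   # Debye capacitance eps*kappa
C1 = gouy_chapman_capacitance(0.1, n0)
```

At small σ the capacitance reduces to the Debye value εκ (about 0.72 F/m² at 0.1 M), and it grows with σ, matching the strong σ-dependence the record reports for small curvatures.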

  18. Advancing Risk Assessment through the Application of Systems Toxicology

    PubMed Central

    Sauer, John Michael; Kleensang, André; Peitsch, Manuel C.; Hayes, A. Wallace

    2016-01-01

    Risk assessment is the process of quantifying the probability of a harmful effect to individuals or populations from human activities. Mechanistic approaches to risk assessment have been generally referred to as systems toxicology. Systems toxicology makes use of advanced analytical and computational tools to integrate classical toxicology and quantitative analysis of large networks of molecular and functional changes occurring across multiple levels of biological organization. Three presentations including two case studies involving both in vitro and in vivo approaches described the current state of systems toxicology and the potential for its future application in chemical risk assessment. PMID:26977253

  19. A General Model for Performance Evaluation in DS-CDMA Systems with Variable Spreading Factors

    NASA Astrophysics Data System (ADS)

    Chiaraluce, Franco; Gambi, Ennio; Righi, Giorgia

    This paper extends previous analytical approaches for the study of CDMA systems to the relevant case of multipath environments where users can operate at different bit rates. This scenario is of interest for the wideband CDMA strategy employed in UMTS, and the model permits a performance comparison of classic and more innovative spreading signals. The method is based on the characteristic function approach, which allows the various kinds of interference to be modeled accurately. Some numerical examples are given with reference to the ITU-R M.1225 Recommendation, but the analysis can be extended to different channel descriptions.

  20. A Novel Capacity Analysis for Wireless Backhaul Mesh Networks

    NASA Astrophysics Data System (ADS)

    Chung, Tein-Yaw; Lee, Kuan-Chun; Lee, Hsiao-Chih

    This paper derives a closed-form expression for the inter-flow capacity of a backhaul wireless mesh network (WMN) with centralized scheduling by employing a ring-based approach. Through the definition of an interference area, we are able to accurately describe the bottleneck collision area of a WMN and calculate an upper bound on the inter-flow capacity. The closed-form expression shows that the upper bound is a function of the ratio between the transmission range and the network radius. Simulations and numerical analysis show that our analytic solution estimates the inter-flow capacity of WMNs better than the previous approach.

  1. Analyte-Responsive Hydrogels: Intelligent Materials for Biosensing and Drug Delivery.

    PubMed

    Culver, Heidi R; Clegg, John R; Peppas, Nicholas A

    2017-02-21

    Nature has mastered the art of molecular recognition. For example, using synergistic non-covalent interactions, proteins can distinguish between molecules and bind a partner with incredible affinity and specificity. Scientists have developed, and continue to develop, techniques to investigate and better understand molecular recognition. As a consequence, analyte-responsive hydrogels that mimic these recognitive processes have emerged as a class of intelligent materials. These materials are unique not only in the type of analyte to which they respond but also in how molecular recognition is achieved and how the hydrogel responds to the analyte. Traditional intelligent hydrogels can respond to environmental cues such as pH, temperature, and ionic strength. The functional monomers used to make these hydrogels can be varied to achieve responsive behavior. For analyte-responsive hydrogels, molecular recognition can also be achieved by incorporating biomolecules with inherent molecular recognition properties (e.g., nucleic acids, peptides, enzymes, etc.) into the polymer network. Furthermore, in addition to typical swelling/syneresis responses, these materials exhibit unique responsive behaviors, such as gel assembly or disassembly, upon interaction with the target analyte. With the diverse tools available for molecular recognition and the ability to generate unique responsive behaviors, analyte-responsive hydrogels have found great utility in a wide range of applications. In this Account, we discuss strategies for making four different classes of analyte-responsive hydrogels, specifically, non-imprinted, molecularly imprinted, biomolecule-containing, and enzymatically responsive hydrogels. Then we explore how these materials have been incorporated into sensors and drug delivery systems, highlighting examples that demonstrate the versatility of these materials. 
For example, in addition to the molecular recognition properties of analyte-responsive hydrogels, the physicochemical changes that are induced upon analyte binding can be exploited to generate a detectable signal for sensing applications. As research in this area has grown, a number of creative approaches for improving the selectivity and sensitivity (i.e., detection limit) of these sensors have emerged. For applications in drug delivery systems, therapeutic release can be triggered by competitive molecular interactions or physicochemical changes in the network. Additionally, including degradable units within the network can enable sustained and responsive therapeutic release. Several exciting examples exploiting the analyte-responsive behavior of hydrogels for the treatment of cancer, diabetes, and irritable bowel syndrome are discussed in detail. We expect that creative and combinatorial approaches used in the design of analyte-responsive hydrogels will continue to yield materials with great potential in the fields of sensing and drug delivery.

  2. Analytical Calculation of the Lower Bound on Timing Resolution for PET Scintillation Detectors Comprising High-Aspect-Ratio Crystal Elements

    PubMed Central

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-01-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so do SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and to develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559

  3. Magnetic solid-phase extraction using carbon nanotubes as sorbents: a review.

    PubMed

    Herrero-Latorre, C; Barciela-García, J; García-Martín, S; Peña-Crecente, R M; Otárola-Jiménez, J

    2015-09-10

    Magnetic solid-phase extraction (M-SPE) is a procedure based on the use of magnetic sorbents for the separation and preconcentration of different organic and inorganic analytes from large sample volumes. The magnetic sorbent is added to the sample solution and the target analyte is adsorbed onto the surface of the magnetic sorbent particles (M-SPs). Analyte-M-SPs are separated from the sample solution by applying an external magnetic field and, after elution with the appropriate solvent, the recovered analyte is analyzed. This approach has several advantages over traditional solid phase extraction as it avoids time-consuming and tedious on-column SPE procedures and it provides a rapid and simple analyte separation that avoids the need for centrifugation or filtration steps. As a consequence, in the past few years a great deal of research has been focused on M-SPE, including the development of new sorbents and novel automation strategies. In recent years, the use of magnetic carbon nanotubes (M-CNTs) as a sorption substrate in M-SPE has become an active area of research. These materials have exceptional mechanical, electrical, optical and magnetic properties and they also have an extremely large surface area and varied possibilities for functionalization. This review covers the synthesis of M-CNTs and the different approaches for the use of these compounds in M-SPE. The performance, general characteristics and applications of M-SPE based on magnetic carbon nanotubes for organic and inorganic analysis have been evaluated on the basis of more than 110 references. Finally, some important challenges with respect the use of magnetic carbon nanotubes in M-SPE are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Analytical calculation of the lower bound on timing resolution for PET scintillation detectors comprising high-aspect-ratio crystal elements

    NASA Astrophysics Data System (ADS)

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-07-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so do SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and to develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%.
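    The bound described above can be made concrete with a generic numerical recipe: for any single-photon detection-time density f(t; θ) with arrival-time parameter θ, the per-photon Fisher information is I = ∫ (∂f/∂θ)²/f dt, and the variance of any unbiased timing estimator built from N independent photons is bounded below by 1/(N·I). The sketch below uses an illustrative exponentially modified Gaussian pulse shape with assumed decay and jitter constants, not the paper's analytical transit-time PDF:

    ```python
    import numpy as np
    from scipy.stats import exponnorm

    def fisher_info(pdf, theta, t, eps=1e-6):
        """Numerical Fisher information for an arrival-time (location)
        parameter: I = integral (df/dtheta)^2 / f dt, via central differences
        on a uniform time grid."""
        f = pdf(t, theta)
        df = (pdf(t, theta + eps) - pdf(t, theta - eps)) / (2.0 * eps)
        mask = f > 1e-12                      # skip near-zero tails
        dt = t[1] - t[0]
        return np.sum(df[mask] ** 2 / f[mask]) * dt

    def pulse_pdf(t, theta, tau=0.040, sigma=0.070):
        """Illustrative single-photon arrival-time PDF (times in ns): an
        exponential scintillation decay (tau) smeared by Gaussian transit/
        jitter spread (sigma), i.e. an exponentially modified Gaussian
        shifted by theta. The constants are assumptions, not fitted values."""
        return exponnorm.pdf(t, tau / sigma, loc=theta, scale=sigma)

    t_grid = np.linspace(-2.0, 10.0, 20001)            # ns
    info_per_photon = fisher_info(pulse_pdf, 0.0, t_grid)
    for n_photons in (100, 1000, 5000):
        crlb_ps = 1e3 * np.sqrt(1.0 / (n_photons * info_per_photon))
        print(f"N = {n_photons:5d}  timing std >= {crlb_ps:6.1f} ps")
    ```

    The bound tightens as 1/√N with the number of detected photons, which is why light collection efficiency matters as much as pulse shape in ToF detector design.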

  5. A self-consistent first-principle based approach to model carrier mobility in organic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meded, Velimir; Friederich, Pascal; Symalla, Franz

    2015-12-31

    Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow and explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC.
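    A common choice for the site-to-site hopping rates in multi-scale mobility workflows of this kind is the Marcus rate, which is built from exactly the quantities named above: the electronic coupling, the site-energy difference set by disorder, and the reorganization energy. A minimal sketch, with illustrative parameter values that are assumptions rather than numbers from the paper:

    ```python
    import math

    HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
    KB   = 8.617333262e-5    # Boltzmann constant, eV/K

    def marcus_rate(J, dE, lam, T=300.0):
        """Marcus hopping rate (1/s) between two localized sites:
        J   -- electronic coupling (eV)
        dE  -- site-energy difference, final minus initial (eV)
        lam -- reorganization energy (eV)
        """
        kbt = KB * T
        prefac = (J ** 2 / HBAR) * math.sqrt(math.pi / (lam * kbt))
        return prefac * math.exp(-(dE + lam) ** 2 / (4.0 * lam * kbt))

    # Illustrative values (not from the paper): 5 meV coupling,
    # 0.2 eV reorganization energy; energetic disorder raises dE
    # and suppresses the rate exponentially.
    for dE in (0.0, 0.05, 0.10):
        print(f"dE = {dE:4.2f} eV -> k = {marcus_rate(5e-3, dE, 0.2):.3e} 1/s")
    ```

    Feeding such rates into a percolation or kinetic Monte Carlo picture is what links the molecular-scale electronic structure to the macroscopic mobility.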

  6. Pair mobility functions for rigid spheres in concentrated colloidal dispersions: Stresslet and straining motion couplings

    NASA Astrophysics Data System (ADS)

    Su, Yu; Swan, James W.; Zia, Roseanna N.

    2017-03-01

    Accurate modeling of particle interactions arising from hydrodynamic, entropic, and other microscopic forces is essential to understanding and predicting particle motion and suspension behavior in complex and biological fluids. The long-range nature of hydrodynamic interactions can be particularly challenging to capture. In dilute dispersions, pair-level interactions are sufficient and can be modeled in detail by analytical relations derived by Jeffrey and Onishi [J. Fluid Mech. 139, 261-290 (1984)] and Jeffrey [Phys. Fluids A 4, 16-29 (1992)]. In more concentrated dispersions, analytical modeling of many-body hydrodynamic interactions quickly becomes intractable, leading to the development of simplified models. These include mean-field approaches that smear out particle-scale structure and essentially assume that long-range hydrodynamic interactions are screened by crowding, as particle mobility decays at high concentrations. Toward the development of an accurate and simplified model for the hydrodynamic interactions in concentrated suspensions, we recently computed a set of effective pair hydrodynamic functions coupling particle motion to a hydrodynamic force and torque at volume fractions up to 50% utilizing accelerated Stokesian dynamics and a fast stochastic sampling technique [Zia et al., J. Chem. Phys. 143, 224901 (2015)]. We showed that the hydrodynamic mobility in suspensions of colloidal spheres is not screened, and the power law decay of the hydrodynamic functions persists at all concentrations studied. In the present work, we extend these mobility functions to include the couplings of particle motion and straining flow to the hydrodynamic stresslet. The couplings computed in these two articles constitute a set of orthogonal coupling functions that can be utilized to compute equilibrium properties in suspensions at arbitrary concentration and are readily applied to solve many-body hydrodynamic interactions analytically.

  7. Assessing the Importance of Treatment Goals in Patients with Psoriasis: Analytic Hierarchy Process vs. Likert Scales.

    PubMed

    Gutknecht, Mandy; Danner, Marion; Schaarschmidt, Marthe-Lisa; Gross, Christian; Augustin, Matthias

    2018-02-15

    To define treatment benefit, the Patient Benefit Index contains a weighting of patient-relevant treatment goals using the Patient Needs Questionnaire, which includes a 5-point Likert scale ranging from 0 ("not important at all") to 4 ("very important"). These treatment goals have been assigned to five health dimensions. The importance of each dimension can be derived by averaging the importance ratings on the Likert scales of associated treatment goals. As the use of a Likert scale does not allow for a relative assessment of importance, the objective of this study was to estimate relative importance weights for health dimensions and associated treatment goals in patients with psoriasis by using the analytic hierarchy process and to compare these weights with the weights resulting from the Patient Needs Questionnaire. Furthermore, patients' judgments on the difficulty of the methods were investigated. Dimensions of the Patient Benefit Index and their treatment goals were mapped into a hierarchy of criteria and sub-criteria to develop the analytic hierarchy process questionnaire. Adult patients with psoriasis starting a new anti-psoriatic therapy in the outpatient clinic of the Institute for Health Services Research in Dermatology and Nursing at the University Medical Center Hamburg (Germany) were recruited and completed both methods (analytic hierarchy process, Patient Needs Questionnaire). Ratings of treatment goals on the Likert scales (Patient Needs Questionnaire) were summarized within each dimension to assess the importance of the respective health dimension/criterion. Following the analytic hierarchy process approach, consistency in judgments was assessed using a standardized measurement (consistency ratio). At the analytic hierarchy process level of criteria, 78 of 140 patients achieved the accepted consistency. 
Using the analytic hierarchy process, the dimension "improvement of physical functioning" was most important, followed by "improvement of social functioning". Concerning the Patient Needs Questionnaire results, these dimensions were ranked in second and fifth position, whereas "strengthening of confidence in the therapy and in a possible healing" was ranked most important, which was least important in the analytic hierarchy process ranking. In both methods, "improvement of psychological well-being" and "reduction of impairments due to therapy" were equally ranked in positions three and four. At the level of sub-criteria, in contrast, the analytic hierarchy process and the Patient Needs Questionnaire yielded largely similar rankings of treatment goals. From the patients' point of view, the Likert scales (Patient Needs Questionnaire) were easier to complete than the analytic hierarchy process pairwise comparisons. Patients with psoriasis assign different importance to health dimensions and associated treatment goals. In choosing a method to assess the importance of health dimensions and/or treatment goals, it needs to be considered that the resulting importance weights may differ depending on the method used. However, in this study, the observed discrepancies in the importance weights of the health dimensions were most likely caused by the different methodological approaches: the Patient Needs Questionnaire assesses the importance of health dimensions through their associated treatment goals, whereas the analytic hierarchy process assesses the health dimensions directly.
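The pairwise-comparison machinery behind the analytic hierarchy process is compact enough to sketch: priority weights are the normalized principal eigenvector of the reciprocal comparison matrix, and the consistency ratio compares the consistency index against Saaty's random index (CR < 0.1 is the usual acceptance threshold the study refers to). The matrix below is a hypothetical three-criterion example, not data from the study:

```python
import numpy as np

def ahp_weights(M):
    """Priority weights and consistency ratio from a pairwise-comparison
    matrix M (reciprocal: M[i][j] = importance of criterion i vs j)."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)              # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # normalized priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri                        # weights, consistency ratio

# Hypothetical judgments on Saaty's 1-9 scale (illustrative only):
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(M)
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```

A respondent whose CR exceeds 0.1 is judged inconsistent, which is why only 78 of the 140 patients above reached the accepted consistency at the criteria level.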

  8. Quantifying Acoustic Impacts on Marine Mammals and Sea Turtles: Methods and Analytical Approach for Phase III Training and Testing

    DTIC Science & Technology

    2017-06-16

    Blackstock, Sarah A.; Joseph O…; December 2017. This document covers the Navy's Phase III Study Areas as described in each Environmental Impact Statement/Overseas Environmental Impact Statement and describes the methods…

  9. An analytical approach to γ-ray self-shielding effects for radioactive bodies encountered in nuclear decommissioning scenarios.

    PubMed

    Gamage, K A A; Joyce, M J

    2011-10-01

    A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results and the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Metabolomics as a Hypothesis-Generating Functional Genomics Tool for the Annotation of Arabidopsis thaliana Genes of “Unknown Function”

    PubMed Central

    Quanbeck, Stephanie M.; Brachova, Libuse; Campbell, Alexis A.; Guan, Xin; Perera, Ann; He, Kun; Rhee, Seung Y.; Bais, Preeti; Dickerson, Julie A.; Dixon, Philip; Wohlgemuth, Gert; Fiehn, Oliver; Barkan, Lenore; Lange, Iris; Lange, B. Markus; Lee, Insuk; Cortes, Diego; Salazar, Carolina; Shuman, Joel; Shulaev, Vladimir; Huhman, David V.; Sumner, Lloyd W.; Roth, Mary R.; Welti, Ruth; Ilarslan, Hilal; Wurtele, Eve S.; Nikolau, Basil J.

    2012-01-01

    Metabolomics is the methodology that identifies and measures global pools of small molecules (of less than about 1,000 Da) of a biological sample, which are collectively called the metabolome. Metabolomics can therefore reveal the metabolic outcome of a genetic or environmental perturbation of a metabolic regulatory network, and thus provide insights into the structure and regulation of that network. Because of the chemical complexity of the metabolome and limitations associated with individual analytical platforms for determining the metabolome, it is currently difficult to capture the complete metabolome of an organism or tissue, in contrast to genomics and transcriptomics. This paper describes the analysis of Arabidopsis metabolomics data sets acquired by a consortium that includes five analytical laboratories, bioinformaticists, and biostatisticians, which aims to develop and validate metabolomics as a hypothesis-generating functional genomics tool. The consortium is determining the metabolomes of Arabidopsis T-DNA mutant stocks, grown in a standardized, controlled environment optimized to minimize environmental impacts on the metabolomes. Metabolomics data were generated with seven analytical platforms, and the combined data are being provided to the research community to formulate initial hypotheses about genes of unknown function (GUFs). A public database (www.PlantMetabolomics.org) has been developed to provide the scientific community with access to the data along with tools to allow for its interactive analysis. Exemplary datasets are discussed to validate the approach, illustrating how initial hypotheses can be generated from the consortium-produced metabolomics data and integrated with prior knowledge to provide a testable hypothesis concerning the functionality of GUFs. PMID:22645570

  11. Heterogeneous fractionation profiles of meta-analytic coactivation networks.

    PubMed

    Laird, Angela R; Riedel, Michael C; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L; Eickhoff, Simon B; Smith, Stephen M; Fox, Peter T; Sutherland, Matthew T

    2017-04-01

    Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d=20-300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how "parent" functional brain systems decompose into constituent "child" sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Heterogeneous fractionation profiles of meta-analytic coactivation networks

    PubMed Central

    Laird, Angela R.; Riedel, Michael C.; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L.; Eickhoff, Simon B.; Smith, Stephen M.; Fox, Peter T.; Sutherland, Matthew T.

    2017-01-01

    Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d = 20 to 300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how “parent” functional brain systems decompose into constituent “child” sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. PMID:28222386

  13. Meta-connectomics: human brain network and connectivity meta-analyses.

    PubMed

    Crossley, N A; Fox, P T; Bullmore, E T

    2016-04-01

    Abnormal brain connectivity or network dysfunction has been suggested as a paradigm to understand several psychiatric disorders. We here review the use of novel meta-analytic approaches in neuroscience that go beyond a summary description of existing results by applying network analysis methods to previously published studies and/or publicly accessible databases. We define this strategy of combining connectivity with other brain characteristics as 'meta-connectomics'. For example, we show how network analysis of task-based neuroimaging studies has been used to infer functional co-activation from primary data on regional activations. This approach has been able to relate cognition to functional network topology, demonstrating that the brain is composed of cognitively specialized functional subnetworks or modules, linked by a rich club of cognitively generalized regions that mediate many inter-modular connections. Another major application of meta-connectomics has been efforts to link meta-analytic maps of disorder-related abnormalities or MRI 'lesions' to the complex topology of the normative connectome. This work has highlighted the general importance of network hubs as hotspots for concentration of cortical grey-matter deficits in schizophrenia, Alzheimer's disease and other disorders. Finally, we show how by incorporating cellular and transcriptional data on individual nodes with network models of the connectome, studies have begun to elucidate the microscopic mechanisms underpinning the macroscopic organization of whole-brain networks. We argue that meta-connectomics is an exciting field, providing robust and integrative insights into brain organization that will likely play an important future role in consolidating network models of psychiatric disorders.

  14. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
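    The stochastic technique itself is simple to sketch: each particle remembers its personal best position, the swarm shares a global best, and velocities are nudged toward both. Below is a minimal global-best PSO with a toy objective (the Rosenbrock function) standing in for the model-versus-observables likelihood; the inertia and acceleration coefficients are common textbook defaults, and nothing here reproduces the SAG calibration itself:

    ```python
    import numpy as np

    def pso_minimize(f, bounds, n_particles=30, iters=200,
                     w=0.72, c1=1.49, c2=1.49, seed=0):
        """Minimal global-best particle swarm optimizer over box bounds."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        d = lo.size
        x = rng.uniform(lo, hi, (n_particles, d))      # positions
        v = np.zeros((n_particles, d))                 # velocities
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()           # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, d))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)                 # keep inside bounds
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()

    # Toy stand-in for the SAM likelihood surface (assumed, not SAG):
    rosenbrock = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
    best, fbest = pso_minimize(rosenbrock, [(-2, 2), (-1, 3)])
    print(best, fbest)   # should land near the minimum at (1, 1)
    ```

    The appeal for SAM calibration is that each objective evaluation is an expensive model run, so an optimizer needing an order of magnitude fewer evaluations than MCMC translates directly into compute savings.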

  15. Universal surface-enhanced Raman scattering amplification detector for ultrasensitive detection of multiple target analytes.

    PubMed

    Zheng, Jing; Hu, Yaping; Bai, Junhui; Ma, Cheng; Li, Jishan; Li, Yinhui; Shi, Muling; Tan, Weihong; Yang, Ronghua

    2014-02-18

    Up to now, the successful fabrication of efficient hot-spot substrates for surface-enhanced Raman scattering (SERS) remains an unsolved problem. To address this issue, we describe herein a universal aptamer-based SERS biodetection approach that uses a single-stranded DNA as a universal trigger (UT) to induce SERS-active hot-spot formation, allowing, in turn, detection of a broad range of targets. More specifically, interaction between the aptamer probe and its target perturbs a triple-helix aptamer/UT structure in a manner that activates a hybridization chain reaction (HCR) among three short DNA building blocks that self-assemble into a long DNA polymer. The SERS-active hot-spots are formed by conjugating 4-aminobenzenethiol (4-ABT)-encoded gold nanoparticles with the DNA polymer through a specific Au-S bond. As proof-of-principle, we used this approach to quantify multiple target analytes, including thrombin, adenosine, and CEM cancer cells, achieving limits of detection of 18 pM, 1.5 nM, and 10 cells/mL, respectively. As a universal SERS detector, this prototype can be applied to many other target analytes through the use of suitable DNA-functional partners, thus inspiring new designs and applications of SERS for bioanalysis.

  16. Monte-Carlo simulation of OCT structural images of human skin using experimental B-scans and voxel based approach to optical properties distribution

    NASA Astrophysics Data System (ADS)

    Frolov, S. V.; Potlov, A. Yu.; Petrov, D. A.; Proskurin, S. G.

    2017-03-01

    A method for reconstructing optical coherence tomography (OCT) structural images using Monte Carlo simulations is described. The biological object is treated as a set of 3D volume elements (voxels), which allows the simulation of media whose structure cannot be described analytically. Each voxel is characterized by its refractive index, anisotropy parameter, and scattering and absorption coefficients. B-scans of the inner structure are used to reconstruct a simulated image instead of an analytical representation of the boundary geometry. The Henyey-Greenstein scattering function, the Beer-Lambert-Bouguer law and the Fresnel equations are used to describe photon transport. The efficiency of the described technique is checked by comparing simulated and experimentally acquired A-scans.
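    Two of the photon-transport ingredients named above are easy to sketch in a few lines: sampling the scattering angle from the Henyey-Greenstein phase function via its standard inverse-CDF formula, and drawing free path lengths from Beer-Lambert exponential attenuation. The anisotropy and attenuation values below are illustrative, not the paper's tissue parameters:

    ```python
    import numpy as np

    def sample_hg_cos_theta(g, rng, n=1):
        """Sample cos(theta) from the Henyey-Greenstein phase function with
        anisotropy g, using the standard inverse-CDF formula."""
        u = rng.random(n)
        if abs(g) < 1e-6:                    # isotropic limit
            return 2.0 * u - 1.0
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
        return (1.0 + g * g - frac * frac) / (2.0 * g)

    def sample_free_path(mu_t, rng, n=1):
        """Free path lengths from Beer-Lambert attenuation p(s) ~ exp(-mu_t*s);
        1 - u lies in (0, 1], so the log never sees zero."""
        return -np.log(1.0 - rng.random(n)) / mu_t

    rng = np.random.default_rng(1)
    g = 0.9                                  # forward-peaked, typical for tissue
    ct = sample_hg_cos_theta(g, rng, 100_000)
    print("mean cos(theta) ~", ct.mean())    # converges to g by construction
    paths = sample_free_path(mu_t=10.0, rng=rng, n=100_000)   # mu_t in 1/mm
    print("mean free path ~", paths.mean(), "mm")             # converges to 1/mu_t
    ```

    In a full voxel-based simulation these draws are interleaved with Fresnel reflection/refraction tests at voxel boundaries where the refractive index changes.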

  17. Rayleigh approximation to ground state of the Bose and Coulomb glasses

    PubMed Central

    Ryan, S. D.; Mityushev, V.; Vinokur, V. M.; Berlyand, L.

    2015-01-01

    Glasses are rigid systems in which competing interactions prevent simultaneous minimization of local energies. This leads to frustration and highly degenerate ground states the nature and properties of which are still far from being thoroughly understood. We report an analytical approach based on the method of functional equations that allows us to construct the Rayleigh approximation to the ground state of a two-dimensional (2D) random Coulomb system with logarithmic interactions. We realize a model for 2D Coulomb glass as a cylindrical type II superconductor containing randomly located columnar defects (CD) which trap superconducting vortices induced by applied magnetic field. Our findings break ground for analytical studies of glassy systems, marking an important step towards understanding their properties. PMID:25592417

  18. Biosensors for the determination of environmental inhibitors of enzymes

    NASA Astrophysics Data System (ADS)

    Evtugyn, Gennadii A.; Budnikov, Herman C.; Nikolskaya, Elena B.

    1999-12-01

    Characteristic features of functioning and practical application of enzyme-based biosensors for the determination of environmental pollutants as enzyme inhibitors are considered with special emphasis on the influence of the methods used for the measurement of the rates of enzymic reactions, of enzyme immobilisation procedure and of the composition of the reaction medium on the analytical characteristics of inhibitor assays. The published data on the development of biosensors for detecting pesticides and heavy metals are surveyed. Special attention is given to the use of cholinesterase-based biosensors in environmental and analytical monitoring. The approaches to the estimation of kinetic parameters of inhibition are reviewed and the factors determining the selectivity and sensitivity of inhibitor assays in environmental objects are analysed. The bibliography includes 195 references.
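
    As a minimal illustration of the quantity such inhibitor assays report, the degree of inhibition is conventionally computed from the enzymatic reaction rate measured without and with the pollutant; the function name and normalization below follow the generic textbook convention and are not taken from this review.

```python
def percent_inhibition(rate_uninhibited, rate_inhibited):
    """Degree of inhibition I% = 100 * (v0 - vi) / v0, where v0 and vi
    are the enzymatic reaction rates without and with the inhibitor;
    a calibration curve then maps I% to inhibitor concentration."""
    return 100.0 * (rate_uninhibited - rate_inhibited) / rate_uninhibited
```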

  19. Mass and Momentum Transport in Microcavities for Diffusion-Dominant Cell Culture Applications

    NASA Technical Reports Server (NTRS)

    Yew, Alvin G.; Pinero, Daniel; Hsieh, Adam H.; Atencia, Javier

    2012-01-01

    For the informed design of microfluidic devices, it is important to understand transport phenomena at the microscale. This letter outlines an analytically-driven approach to the design of rectangular microcavities extending perpendicular to a perfusion microchannel for microfluidic cell culture devices. We present equations to estimate the spatial transition from advection- to diffusion-dominant transport inside cavities as a function of the geometry and flow conditions. We also estimate the time required for molecules, such as nutrients or drugs, to travel from the microchannel to a given depth into the cavity. These analytical predictions can facilitate the rational design of microfluidic devices to optimize and maintain long-term, physiologically-based culture conditions with low fluid shear stress.
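
    The kind of estimate described can be sketched with two standard scaling relations. The 1-D diffusion time t ~ d^2/(2D), the Péclet number, and the glucose diffusivity used in the example are illustrative textbook quantities and assumptions, not the letter's exact equations.

```python
def diffusion_time(depth_m, diffusivity_m2_s):
    """Order-of-magnitude time for a molecule to diffuse a given depth
    into a dead-end cavity: t ~ depth^2 / (2 D) (1-D random walk)."""
    return depth_m ** 2 / (2.0 * diffusivity_m2_s)

def peclet(velocity_m_s, length_m, diffusivity_m2_s):
    """Peclet number Pe = U L / D: Pe >> 1 means advection-dominant
    transport, Pe << 1 means diffusion-dominant."""
    return velocity_m_s * length_m / diffusivity_m2_s

# Example: glucose (D ~ 6.7e-10 m^2/s, an assumed literature value)
# reaching 500 um into a cavity takes on the order of minutes.
t_glucose = diffusion_time(500e-6, 6.7e-10)
```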

  20. Visual analytics for aviation safety: A collaborative approach to sensemaking

    NASA Astrophysics Data System (ADS)

    Wade, Andrew

    Visual analytics, the "science of analytical reasoning facilitated by interactive visual interfaces", is more than just visualization. Understanding the human reasoning process is essential for designing effective visualization tools and providing correct analyses. This thesis describes the evolution, application, and evaluation of a new method for studying analytical reasoning that we have labeled paired analysis. Paired analysis pairs a subject matter expert (SME) with a tool expert (TE) in an analytic dyad, here used to investigate aircraft maintenance and safety data. The method was developed and evaluated using interviews, pilot studies, and analytic sessions during an internship at the Boeing Company. By enabling a collaborative approach to sensemaking that can be captured by researchers, paired analysis yielded rich data on human analytical reasoning that can be used to support analytic tool development and analyst training. Keywords: visual analytics, paired analysis, sensemaking, Boeing, collaborative analysis.

  1. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States

    NASA Astrophysics Data System (ADS)

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-01

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.

  2. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States.

    PubMed

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-03

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.

  3. Crossing Fibers Detection with an Analytical High Order Tensor Decomposition

    PubMed Central

    Megherbi, T.; Kachouane, M.; Oulebsir-Boumghar, F.; Deriche, R.

    2014-01-01

    Diffusion magnetic resonance imaging (dMRI) is the only technique to probe in vivo and noninvasively the fiber structure of human brain white matter. Detecting the crossing of neuronal fibers remains an exciting challenge with an important impact in tractography. In this work, we tackle this challenging problem and propose an original and efficient technique to extract all crossing fibers from diffusion signals. To this end, we start by estimating, from the dMRI signal, the so-called Cartesian tensor fiber orientation distribution (CT-FOD) function, whose maxima correspond exactly to the orientations of the fibers. The fourth order symmetric positive definite tensor that represents the CT-FOD is then analytically decomposed via the application of a new theoretical approach and this decomposition is used to accurately extract all the fibers orientations. Our proposed high order tensor decomposition based approach is minimal and allows recovering all the crossing fibers without any a priori information on the total number of fibers. Various experiments performed on noisy synthetic data, on phantom diffusion data, and on human brain data validate our approach and clearly demonstrate that it is efficient, robust to noise and performs favorably in terms of angular resolution and accuracy when compared to some classical and state-of-the-art approaches. PMID:25246940

  4. Ionization Suppression and Recovery in Direct Biofluid Analysis Using Paper Spray Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Vega, Carolina; Spence, Corina; Zhang, Chengsen; Bills, Brandon J.; Manicke, Nicholas E.

    2016-04-01

    Paper spray mass spectrometry is a method for the direct analysis of biofluid samples in which extraction of analytes from dried biofluid spots and electrospray ionization occur from the paper on which the dried sample is stored. We examined matrix effects in the analysis of small molecule drugs from urine, plasma, and whole blood. The general method was to spike stable isotope labeled analogs of each analyte into the spray solvent, while the analyte itself was in the dried biofluid. Intensity of the labeled analog is proportional to ionization efficiency, whereas the ratio of the analyte intensity to the labeled analog in the spray solvent is proportional to recovery. Ion suppression and recovery were found to be compound- and matrix-dependent. Highest levels of ion suppression were obtained for poor ionizers (e.g., analytes lacking basic aliphatic amine groups) in urine and approached -90%. Ion suppression was much lower or even absent for good ionizers (analytes with aliphatic amines) in dried blood spots. Recovery was generally highest in urine and lowest in blood. We also examined the effect of two experimental parameters on ion suppression and recovery: the spray solvent and the sample position (how far away from the paper tip the dried sample was spotted). Finally, the change in ion suppression and analyte elution as a function of time was examined by carrying out a paper spray analysis of dried plasma spots for 5 min by continually replenishing the spray solvent.
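
    The two proxies described above reduce to simple intensity ratios. The sketch below uses hypothetical function names, and the paper's exact normalization may differ.

```python
def ion_suppression_pct(labeled_intensity_in_matrix, labeled_intensity_neat):
    """Percent ionization suppression, from the stable-isotope-labeled
    standard's intensity with the biofluid matrix present versus a neat
    (matrix-free) reference; negative values indicate suppression."""
    return 100.0 * (labeled_intensity_in_matrix - labeled_intensity_neat) / labeled_intensity_neat

def relative_recovery(analyte_intensity, labeled_intensity):
    """Ratio of the analyte signal (extracted from the dried spot) to the
    labeled analog (spiked into the spray solvent), proportional to the
    extraction recovery of the analyte."""
    return analyte_intensity / labeled_intensity
```

    For example, a labeled standard that drops to a tenth of its neat intensity in urine corresponds to the -90% suppression figure quoted in the abstract.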

  5. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert the vibrational energy to electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, have not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.

  6. A Bayesian deconvolution strategy for immunoprecipitation-based DNA methylome analysis

    PubMed Central

    Down, Thomas A.; Rakyan, Vardhman K.; Turner, Daniel J.; Flicek, Paul; Li, Heng; Kulesha, Eugene; Gräf, Stefan; Johnson, Nathan; Herrero, Javier; Tomazou, Eleni M.; Thorne, Natalie P.; Bäckdahl, Liselotte; Herberth, Marlis; Howe, Kevin L.; Jackson, David K.; Miretti, Marcos M.; Marioni, John C.; Birney, Ewan; Hubbard, Tim J. P.; Durbin, Richard; Tavaré, Simon; Beck, Stephan

    2009-01-01

    DNA methylation is an indispensable epigenetic modification of mammalian genomes. Consequently, there is great interest in strategies for genome-wide/whole-genome DNA methylation analysis, and immunoprecipitation-based methods have proven to be a powerful option. Such methods are rapidly shifting the bottleneck from data generation to data analysis, necessitating the development of better analytical tools. Until now, a major analytical difficulty associated with immunoprecipitation-based DNA methylation profiling has been the inability to estimate absolute methylation levels. Here we report the development of a novel cross-platform algorithm – Bayesian Tool for Methylation Analysis (Batman) – for analyzing Methylated DNA Immunoprecipitation (MeDIP) profiles generated using arrays (MeDIP-chip) or next-generation sequencing (MeDIP-seq). The latter is an approach we have developed to elucidate the first high-resolution whole-genome DNA methylation profile (DNA methylome) of any mammalian genome. MeDIP-seq/MeDIP-chip combined with Batman represent robust, quantitative, and cost-effective functional genomic strategies for elucidating the function of DNA methylation. PMID:18612301

  7. Identification of complex stiffness tensor from waveform reconstruction

    NASA Astrophysics Data System (ADS)

    Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.

    2002-03-01

    An inverse method is proposed in order to determine the viscoelastic properties of composite-material plates from the plane-wave transmitted acoustic field. Analytical formulations of both the plate transmission coefficient and its first and second derivatives are established, and included in a two-step inversion scheme. Two objective functions to be minimized are then designed by considering the well-known maximum-likelihood principle and by using an analytic signal formulation. Through these innovative objective functions, the robustness of the inversion process against high levels of noise in waveforms is improved and the method can be applied to a very thin specimen. The suitability of the inversion process for viscoelastic property identification is demonstrated using simulated data for composite materials with different anisotropy and damping degrees. A study of the effect of the rheologic model choice on the elastic property identification emphasizes the relevance of using a phenomenological description considering viscosity. Experimental characterizations then show the good reliability of the proposed approach. Difficulties arise experimentally for particular anisotropic media.

  8. Three-Dimensional Field Solutions for Multi-Pole Cylindrical Halbach Arrays in an Axial Orientation

    NASA Technical Reports Server (NTRS)

    Thompson, William K.

    2006-01-01

    This article presents three-dimensional B field solutions for the cylindrical Halbach array in an axial orientation. This arrangement has applications in the design of axial motors and passive axial magnetic bearings and couplers. The analytical model described here assumes ideal magnets with fixed and uniform magnetization. The field component functions are expressed as sums of 2-D definite integrals that are easily computed by a number of mathematical analysis software packages. The analysis is verified with sample calculations and the results are compared to equivalent results from traditional finite-element analysis (FEA). The field solutions are then approximated for use in flux linkage and induced EMF calculations in nearby stator windings by expressing the field variance with angular displacement as a pure sinusoidal function whose amplitude depends on radial and axial position. The primary advantage of numerical implementation of the analytical approach presented in the article is that it lends itself more readily to parametric analysis and design tradeoffs than traditional FEA models.
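
    The final approximation step can be illustrated simply: if the axial field linked by a winding varies sinusoidally with angle, the induced EMF follows from Faraday's law. This is a generic sketch of that modeling idea under assumed parameter names, not the article's field solution.

```python
import math

def induced_emf(flux_peak_wb, pole_pairs, omega_mech_rad_s, t_s, turns=1):
    """EMF in an N-turn winding when the flux linkage varies sinusoidally
    with rotor angle: lambda(t) = N * Phi_pk * cos(p * w * t), so
    e(t) = -d(lambda)/dt = N * Phi_pk * p * w * sin(p * w * t)."""
    p, w = pole_pairs, omega_mech_rad_s
    return turns * flux_peak_wb * p * w * math.sin(p * w * t_s)
```

    The peak flux Phi_pk plays the role of the position-dependent amplitude from the analytical field solution, evaluated at the winding's radial and axial location.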

  9. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytical standard errors and on bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
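
    In the normal-distribution setting, the decision-cost part of this optimization can be sketched with a simple grid search. The authors optimize analytically and additionally account for sampling uncertainty; the code below only illustrates the cost function being minimized, with assumed parameter names.

```python
import math

def norm_cdf(x, mu, sd):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def expected_cost(c, mu0, sd0, mu1, sd1, prevalence, cost_fp, cost_fn):
    """Expected decision cost at threshold c: false positives among
    non-diseased subjects (marker > c) plus false negatives among
    diseased subjects (marker < c), weighted by prevalence and costs."""
    fp = (1.0 - prevalence) * (1.0 - norm_cdf(c, mu0, sd0))
    fn = prevalence * norm_cdf(c, mu1, sd1)
    return cost_fp * fp + cost_fn * fn

def optimal_threshold(mu0, sd0, mu1, sd1, prevalence=0.5,
                      cost_fp=1.0, cost_fn=1.0, n=10001):
    """Grid-search minimizer of the expected cost over a range
    spanning both marker distributions."""
    lo = min(mu0, mu1) - 4.0 * max(sd0, sd1)
    hi = max(mu0, mu1) + 4.0 * max(sd0, sd1)
    grid = (lo + i * (hi - lo) / (n - 1) for i in range(n))
    return min(grid, key=lambda c: expected_cost(
        c, mu0, sd0, mu1, sd1, prevalence, cost_fp, cost_fn))
```

    With equal prevalence, equal unit costs, and equal variances the optimum sits midway between the two means; raising the false-negative cost pushes the threshold down, classifying more subjects as diseased.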

  10. The generation of criteria for selecting analytical tools for landscape management

    Treesearch

    Marilyn Duffey-Armstrong

    1979-01-01

    This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by first establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...

  11. Time-dependent density functional theory description of total photoabsorption cross sections

    NASA Astrophysics Data System (ADS)

    Tenorio, Bruno Nunes Cabral; Nascimento, Marco Antonio Chaer; Rocha, Alexandre Braga

    2018-02-01

    The time-dependent version of the density functional theory (TDDFT) has been used to calculate the total photoabsorption cross section of a number of molecules, namely, benzene, pyridine, furan, pyrrole, thiophene, phenol, naphthalene, and anthracene. The discrete electronic pseudo-spectra, obtained in an L2 basis-set calculation, were used in an analytic continuation procedure to obtain the photoabsorption cross sections. The ammonia molecule was chosen as a model system to compare the results obtained with TDDFT to those obtained with the linear response coupled cluster approach in order to make a link with our previous work and establish benchmarks.

  12. Study of diatomic molecules. 2: Intensities. [optical emission spectroscopy of ScO

    NASA Technical Reports Server (NTRS)

    Femenias, J. L.

    1978-01-01

    The theory of perturbations, giving the diatomic effective Hamiltonian, is used for calculating actual molecular wave functions and intensity factors involved in transitions between states arising from Hund's coupling cases a, b, intermediate a-b, and case-c tendency. The Herman and Wallis corrections are derived, without any knowledge of the analytical expressions of the wave functions, and generalized to transitions between electronic states of whatever symmetry and multiplicity. A general method for studying perturbed intensities is presented using primarily modern spectroscopic numerical approaches. The method is used in the study of the ScO optical emission spectrum.

  13. Functional recognition imaging using artificial neural networks: applications to rapid cellular identification via broadband electromechanical response

    NASA Astrophysics Data System (ADS)

    Nikiforov, M. P.; Reukov, V. V.; Thompson, G. L.; Vertegel, A. A.; Guo, S.; Kalinin, S. V.; Jesse, S.

    2009-10-01

    Functional recognition imaging in scanning probe microscopy (SPM) using artificial neural network identification is demonstrated. This approach utilizes statistical analysis of complex SPM responses at a single spatial location to identify the target behavior, which is reminiscent of associative thinking in the human brain, obviating the need for analytical models. We demonstrate, as an example of recognition imaging, rapid identification of cellular organisms using the difference in electromechanical activity over a broad frequency range. Single-pixel identification of model Micrococcus lysodeikticus and Pseudomonas fluorescens bacteria is achieved, demonstrating the viability of the method.

  14. Combining Model-Based and Feature-Driven Diagnosis Approaches - A Case Study on Electromechanical Actuators

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav

    2010-01-01

    Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However, this approach does not work very well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data, which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults, and then features are chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.
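
    The second stage of the combined approach (choosing features that discriminate among the faults surviving the model-based stage) can be sketched abstractly. The data structures below are hypothetical stand-ins, not the FLEA implementation.

```python
def select_features(candidate_faults, signatures):
    """Keep only the features whose expected signature differs across
    the candidate fault set, so costly feature extraction is spent
    only where it can actually discriminate. `signatures` maps
    feature name -> {fault: expected qualitative value}."""
    useful = []
    for feature, sig in signatures.items():
        if len({sig[f] for f in candidate_faults}) > 1:
            useful.append(feature)
    return useful
```

    A feature whose expected value is the same for every remaining fault candidate carries no diagnostic information at that point and is skipped.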

  15. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure developments to follow a similar approach. While development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, a control strategy can be set.
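
    One common form of the "probabilistic statement" mentioned above is the probability that a future measurement falls within acceptance limits. Below is a Monte Carlo sketch under an assumed Gaussian error model, with hypothetical parameter names (not from this article).

```python
import random

def prob_within_limits(bias_pct, precision_sd_pct, limit_pct,
                       n=100_000, seed=1):
    """Estimate P(|relative error| <= acceptance limit) for a procedure
    with a given %bias and intermediate-precision %RSD, assuming
    Gaussian errors; a high probability supports a 'fit for purpose'
    claim for the validated procedure."""
    rng = random.Random(seed)
    hits = sum(abs(rng.gauss(bias_pct, precision_sd_pct)) <= limit_pct
               for _ in range(n))
    return hits / n
```

    Mapping this probability over candidate operating conditions is one way a design space and control strategy can be derived from validation data.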

  16. Single conducting polymer nanowire based conductometric sensors

    NASA Astrophysics Data System (ADS)

    Bangar, Mangesh Ashok

    The detection of toxic chemicals, gases, or biological agents at very low concentrations with high sensitivity and selectivity has been a subject of immense interest. Sensors employing an electrical signal readout as the transduction mechanism offer easy, label-free detection of the target analyte in real time. Traditional thin-film sensors inherently suffer a loss of sensitivity because their large cross-sectional area lets current shunt around the charge-depleted (or charge-added) region that forms when analyte binds to the sensor surface. This limitation is overcome by using a nanostructure such as a nanowire or nanotube as the transducer, where current shunting during sensing is almost eliminated. Owing to their benign chemical/electrochemical fabrication route, excellent electrical properties, and biocompatibility, conducting polymers offer a cost-effective alternative to other nanostructures. The biggest obstacle to using these nanostructures is the lack of an easy, scalable, and cost-effective way of assembling them on prefabricated micropatterns for device fabrication. In this dissertation, three different approaches were taken to fabricate individual or arrayed single conducting polymer (and metal) nanowire devices, which were applied to gas and biochemical detection either as-is or after functionalization with an appropriate recognition molecule. In the first approach, the electrochemical fabrication of multisegmented nanowires was studied, with a middle functional Ppy segment, a ferromagnetic nickel (Ni) segment, and end gold segments for better electrical contact. These multi-layered nanowires were used together with a ferromagnetic contact electrode for controlled magnetic assembly into devices and were applied to ammonia gas sensing.
    The second approach used polypyrrole (Ppy) nanowires assembled by simple electrophoretic alignment and anchored by maskless electrodeposition; the nanowires were then functionalized by covalent immobilization of antibodies against the cancer marker protein CA 125 (Cancer Antigen 125) for its detection in buffer and in human blood plasma. The third approach combined the electrochemical deposition of the conducting polymer and the assembly step into a single-step fabrication and functionalization using e-beam lithographically patterned nano-channels. With this method, an array of Ppy nanowires was fabricated, and biofunctionalization was achieved by entrapping the recognition molecule (avidin) during the fabrication step. These sensors were subsequently used to detect biotinylated single-stranded DNA.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report was prepared at the request of the Lawrence Livermore Laboratory (LLL) to provide background information for analyzing soil-structure interaction by the frequency-independent impedance function approach. LLL is conducting such analyses as part of its seismic review of selected operating plants under the Systematic Evaluation Program for the US Nuclear Regulatory Commission. The analytical background and basic assumptions of the impedance function theory are briefly reviewed, and the role of radiation damping in soil-structure interaction analysis is discussed. The validity of modeling soil-structure interaction by using frequency-independent functions is evaluated based on data from several field tests. Finally, the recommended procedures for performing soil-structure interaction analyses are discussed with emphasis on the modal superposition method.

  18. Prediction and redesign of protein–protein interactions

    PubMed Central

    Lua, Rhonald C.; Marciano, David C.; Katsonis, Panagiotis; Adikesavan, Anbu K.; Wilkins, Angela D.; Lichtarge, Olivier

    2014-01-01

    Understanding the molecular basis of protein function remains a central goal of biology, with the hope to elucidate the role of human genes in health and in disease, and to rationally design therapies through targeted molecular perturbations. We review here some of the computational techniques and resources available for characterizing a critical aspect of protein function – those mediated by protein–protein interactions (PPI). We describe several applications and recent successes of the Evolutionary Trace (ET) in identifying molecular events and shapes that underlie protein function and specificity in both eukaryotes and prokaryotes. ET is a part of analytical approaches based on the successes and failures of evolution that enable the rational control of PPI. PMID:24878423

  19. Proteoglycomics: Recent Progress and Future Challenges

    PubMed Central

    Ly, Mellisa; Laremore, Tatiana N.

    2010-01-01

    Proteoglycomics is a systematic study of the structure, expression, and function of proteoglycans, a posttranslationally modified subset of a proteome. Although it relies on the established technologies of proteomics and glycomics, proteoglycomics research requires unique approaches for elucidating structure–function relationships of both proteoglycan components: the glycosaminoglycan chain and the core protein. This review discusses our current understanding of the structure and function of proteoglycans, major players in development, normal physiology, and disease. A brief outline of proteoglycomic sample preparation and analysis is provided, along with examples of several recent proteoglycomic studies. Unique challenges in the characterization of the glycosaminoglycan component of proteoglycans are discussed, with emphasis on the many analytical tools used and the types of information they provide. PMID:20450439

  20. Applying Knowledge of Enzyme Biochemistry to the Prediction of Functional Sites for Aiding Drug Discovery.

    PubMed

    Pai, Priyadarshini P; Mondal, Sukanta

    2017-01-01

    Enzymes are biological catalysts that play an important role in determining the patterns of chemical transformations pertaining to life. Many milestones have been achieved in unraveling the mechanisms by which enzymes orchestrate various cellular processes, using experimental and computational approaches. Experimental studies generating nearly all possible mutations of target enzymes have been aided by rapid computational approaches aimed at enzyme functional classification, understanding domain organization, and functional site identification. The functional architecture, essentially, is involved in binding or interaction with ligands, including substrates, products, cofactors, and inhibitors, providing for functions such as catalysis, ligand-mediated cell signaling, allosteric regulation, and post-translational modification. With the increasing availability of enzyme information and advances in algorithm development, computational approaches have become more capable of providing precise inputs for enzyme engineering, in the process also making it more efficient. This has led to interesting findings, especially in aberrant enzyme interactions, such as host-pathogen interactions in infection, neurodegenerative diseases, cancer, and diabetes. This review aims to summarize, in retrospect, the mined knowledge, vivid perspectives, and challenging strides in using available experimentally validated enzyme information for characterization. An analytical outlook is presented on the scope of exploring future directions.

  1. Nanoporous membranes enable concentration and transport in fully wet paper-based assays.

    PubMed

    Gong, Max M; Zhang, Pei; MacDonald, Brendan D; Sinton, David

    2014-08-19

    Low-cost paper-based assays are emerging as the platform for diagnostics worldwide. Paper does not, however, readily enable advanced functionality required for complex diagnostics, such as analyte concentration and controlled analyte transport. That is, after the initial wetting, no further analyte manipulation is possible. Here, we demonstrate active concentration and transport of analytes in fully wet paper-based assays by leveraging nanoporous material (mean pore diameter ≈ 4 nm) and ion concentration polarization. Two classes of devices are developed, an external stamp-like device with the nanoporous material separate from the paper-based assay, and an in-paper device patterned with the nanoporous material. Experimental results demonstrate up to 40-fold concentration of a fluorescent tracer in fully wet paper, and directional transport of the tracer over centimeters with efficiencies up to 96%. In-paper devices are applied to concentrate protein and colored dye, extending their limits of detection from ∼10 to ∼2 pmol/mL and from ∼40 to ∼10 μM, respectively. This approach is demonstrated in nitrocellulose membrane as well as paper, and the added cost of the nanoporous material is very low at ∼0.015 USD per device. The result is a major advance in analyte concentration and manipulation for the growing field of low-cost paper-based assays.

  2. Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.

    PubMed

    Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim

    2016-04-01

    Studies on intervertebral disk (IVD) response to various loads and postures are essential to understand the disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well-suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validation of FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation and composite materials theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method, and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. The results demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF. It also demonstrates that anisotropy reduces stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs, and represents a distinctive groundwork able to sustain future refinements. This paper suggests important features that may be included to improve model realism.

  3. A dynamic mechanical analysis technique for porous media

    PubMed Central

    Pattison, Adam J; McGarry, Matthew; Weaver, John B; Paulsen, Keith D

    2015-01-01

Dynamic mechanical analysis (DMA) is a common way to measure the mechanical properties of materials as functions of frequency. Traditionally, a viscoelastic mechanical model is applied, and current DMA techniques fit an analytical approximation to measured dynamic motion data, neglecting inertial forces and adding empirical correction factors to account for transverse boundary displacements. Here, a finite-element (FE) approach to processing DMA data was developed to estimate poroelastic material properties. Frequency-dependent inertial forces, which are significant in soft media and often neglected in DMA, were included in the FE model. The technique applies a constitutive relation to the DMA measurements and exploits a non-linear inversion to estimate the material properties that best fit the model response to the DMA data. A viscoelastic version of this approach was developed to validate the method by comparing complex modulus estimates to the direct DMA results. Both analytical and FE poroelastic models were also developed to explore their behavior in the DMA testing environment. All of the models were applied to tofu, a representative soft poroelastic material that is a common phantom in elastography imaging studies. Five samples of three different stiffnesses were tested from 1 to 14 Hz, with rough platens placed on the top and bottom surfaces of the specimen to restrict transverse displacements and promote fluid-solid interaction. The viscoelastic models were identical in the static case and nearly the same at frequency, with inertial forces accounting for some of the discrepancy. The poroelastic analytical method was not sufficient when the relevant physical boundary constraints were applied, whereas the poroelastic FE approach produced high-quality estimates of shear modulus and hydraulic conductivity. These results illustrated appropriate shear-modulus contrast between tofu samples and yielded a consistent contrast in hydraulic conductivity as well. PMID:25248170
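The core of a viscoelastic DMA measurement can be sketched with a toy calculation: projecting a sinusoidal stress response onto the in-phase and quadrature components of the applied strain recovers the storage and loss moduli. The material values and signals below are hypothetical, and this sketch ignores the inertial forces and boundary effects that motivate the authors' FE approach.

```python
import numpy as np

# Hypothetical viscoelastic material driven at a single DMA test frequency.
f = 5.0                  # test frequency, Hz
E_storage_true = 12e3    # storage modulus, Pa (assumed)
E_loss_true = 3e3        # loss modulus, Pa (assumed)
eps0 = 0.01              # strain amplitude

t = np.linspace(0.0, 2.0, 4000, endpoint=False)   # exactly 10 full periods
strain = eps0 * np.sin(2 * np.pi * f * t)
# Stress response has an in-phase (storage) and a quadrature (loss) part.
stress = eps0 * (E_storage_true * np.sin(2 * np.pi * f * t)
                 + E_loss_true * np.cos(2 * np.pi * f * t))

# Fourier projection onto the strain's in-phase/quadrature components.
s = np.sin(2 * np.pi * f * t)
c = np.cos(2 * np.pi * f * t)
E_storage = np.dot(stress, s) / np.dot(s, s) / eps0
E_loss = np.dot(stress, c) / np.dot(c, c) / eps0
tan_delta = E_loss / E_storage
```

Because the record spans an integer number of periods, the sine and cosine projections are orthogonal and the moduli are recovered essentially exactly.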

  4. On the optimal design of molecular sensing interfaces with lipid bilayer assemblies - A knowledge based approach

    NASA Astrophysics Data System (ADS)

    Siontorou, Christina G.

    2012-12-01

Biosensors are analytic devices that incorporate a biochemical recognition system (biological, biologically derived, or biomimetic: enzyme, antibody, DNA, receptor, etc.) in close contact with a physicochemical transducer (electrochemical, optical, piezoelectric, conductimetric, etc.) that converts the biochemical information, produced by the specific biological recognition reaction (analyte-biomolecule binding), into a chemical or physical output signal related to the concentration of the analyte in the measured sample. The biosensing concept is based on natural chemoreception mechanisms, which operate over, within, or by means of a biological membrane, i.e., a structured lipid bilayer, incorporating or attached to proteinaceous moieties that regulate the molecular recognition events triggering ion flux changes (facilitated or passive) through the bilayer. Creating functional structures similar to natural signal-transduction systems is far from trivial: the physicochemical transducer must be compatibly and successfully correlated with the lipid film self-assembled on its surface, which embeds the reconstituted biological recognition system, while at the same time satisfying the basic conditions for measuring-device development (simplicity, easy handling, ease of fabrication). The aim of the present work is to present a methodological framework for designing such molecular sensing interfaces, functioning within a knowledge-based system built on an ontological platform that supplies sub-system options, compatibilities, and optimization parameters.

  5. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.

    PubMed

    Bergeron, Dominic; Tremblay, A-M S

    2016-08-01

Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general-purpose, and user-friendly software package dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever the precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as open-source, user-friendly software.
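The information- versus noise-fitting behavior of χ² under a scanned regularization weight α can be illustrated with a toy continuation problem. The sketch below substitutes simple ridge (Tikhonov) regularization for the authors' entropy functional, and all kernel, spectrum, and noise choices are assumptions; it only shows how χ²(α) separates the two regimes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: "imaginary-time" data G(tau) related to a spectrum A(w) by a
# Laplace-like kernel, with Gaussian noise of known amplitude (all assumed).
w = np.linspace(0.01, 10.0, 80)
tau = np.linspace(0.0, 5.0, 40)
K = np.exp(-np.outer(tau, w)) * (w[1] - w[0])
A_true = np.exp(-0.5 * ((w - 3.0) / 0.7) ** 2)
sigma = 1e-4
G = K @ A_true + sigma * rng.normal(size=tau.size)

# Scan the regularization weight alpha and record chi^2(alpha).
alphas = np.logspace(0, 12, 30)
chi2 = []
for alpha in alphas:
    # Ridge stand-in: A(alpha) = argmin ||G - K A||^2 / sigma^2 + alpha ||A||^2
    lhs = K.T @ K / sigma**2 + alpha * np.eye(w.size)
    A = np.linalg.solve(lhs, K.T @ G / sigma**2)
    chi2.append(np.sum((G - K @ A) ** 2) / sigma**2)
chi2 = np.array(chi2)

# Small alpha over-fits the noise (small chi^2); large alpha under-fits the
# data (chi^2 saturates near ||G||^2/sigma^2); a consistent choice of alpha
# sits near the crossover between the two regimes.
```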

  6. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  7. A New Analytic Framework for Moderation Analysis --- Moving Beyond Analytic Interactions

    PubMed Central

    Tang, Wan; Yu, Qin; Crits-Christoph, Paul; Tu, Xin M.

    2009-01-01

    Conceptually, a moderator is a variable that modifies the effect of a predictor on a response. Analytically, a common approach as used in most moderation analyses is to add analytic interactions involving the predictor and moderator in the form of cross-variable products and test the significance of such terms. The narrow scope of such a procedure is inconsistent with the broader conceptual definition of moderation, leading to confusion in interpretation of study findings. In this paper, we develop a new approach to the analytic procedure that is consistent with the concept of moderation. The proposed framework defines moderation as a process that modifies an existing relationship between the predictor and the outcome, rather than simply a test of a predictor by moderator interaction. The approach is illustrated with data from a real study. PMID:20161453
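The conventional cross-product procedure that this paper moves beyond can be sketched in a few lines: add the predictor-by-moderator product to the design matrix and test its coefficient. The data below are simulated with an assumed true interaction effect, purely to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)          # predictor
m = rng.normal(size=n)          # moderator
# Simulated outcome: the effect of x on y grows with m (true moderation,
# interaction coefficient 0.8 assumed for illustration).
y = 1.0 + 0.5 * x + 0.3 * m + 0.8 * x * m + rng.normal(size=n)

# Conventional moderation test: include the cross-variable product x*m.
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic of the interaction term under ordinary least squares.
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_interaction = beta[3] / se[3]   # large |t| flags a significant interaction
```

A significant product term is then read as evidence of moderation; the paper's point is that this test captures only one narrow form of the broader conceptual definition.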

  8. A new frequency approach for light flicker evaluation in electric power systems

    NASA Astrophysics Data System (ADS)

    Feola, Luigi; Langella, Roberto; Testa, Alfredo

    2015-12-01

This paper proposes a new analytical estimator for light flicker in the frequency domain that also accounts for the frequency components neglected by the classical methods proposed in the literature. The analytical solutions apply to any stationary signal affected by interharmonic distortion. The proposed estimator is applied to numerous numerical case studies with the goal of showing (i) the correctness and the improvements of the analytical approach with respect to the other methods proposed in the literature and (ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The proposed analytical approach is useful because it can be included in signal-processing tools for interharmonic penetration studies supporting the integration of renewable energy sources in future smart grids.

  9. Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.

    2017-05-01

This work deals with the determination of the transverse shear moduli of a Nomex® honeycomb core for sandwich panels, on which the panels' out-of-plane shear characteristics depend. These moduli were determined experimentally, numerically, and analytically: numerical simulations were performed using a unit-cell model, and three analytical approaches were evaluated. Two of the analytical approaches provided reasonable predictions of the transverse shear modulus compared with experimental results, whereas the approach based upon classical lamination theory showed large deviations from the experimental data. The numerical simulations showed a trend similar to that of the analytical models.

  10. Understanding Rasch Measurement: Rasch Techniques for Detecting Bias in Performance Assessments: An Example Comparing the Performance of Native and Non-native Speakers on a Test of Academic English.

    ERIC Educational Resources Information Center

    Elder, Catherine; McNamara, Tim; Congdon, Peter

    2003-01-01

    Used Rasch analytic procedures to study item bias or differential item functioning in both dichotomous and scalar items on a test of English for academic purposes. Results for 139 college students on a pilot English language test model the approach and illustrate the measurement challenges posed by a diagnostic instrument to measure English…

  11. Analytical and Experimental Research on Large Angle Maneuvers of Flexible Structures

    DTIC Science & Technology

    1990-05-04

and achieved a much higher level of technical maturity than could be expected based upon the proposal and contractual requirements. This...would expect, for example, that a very smooth, small reference torque input should result in the flexible structure motion approaching the rigid body...alleviated on sound physical grounds by using the forced response data (e.g., the frequency response function) to impose the proper scaling on the system

  12. Langevin dynamics for ramified structures

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Iomin, Alexander; Horsthemke, Werner; Campos, Daniel

    2017-06-01

    We propose a generalized Langevin formalism to describe transport in combs and similar ramified structures. Our approach consists of a Langevin equation without drift for the motion along the backbone. The motion along the secondary branches may be described either by a Langevin equation or by other types of random processes. The mean square displacement (MSD) along the backbone characterizes the transport through the ramified structure. We derive a general analytical expression for this observable in terms of the probability distribution function of the motion along the secondary branches. We apply our result to various types of motion along the secondary branches of finite or infinite length, such as subdiffusion, superdiffusion, and Langevin dynamics with colored Gaussian noise and with non-Gaussian white noise. Monte Carlo simulations show excellent agreement with the analytical results. The MSD for the case of Gaussian noise is shown to be independent of the noise color. We conclude by generalizing our analytical expression for the MSD to the case where each secondary branch is n dimensional.
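The comb geometry described above lends itself to a compact Monte Carlo check: a walker moves along the backbone only while it sits on the backbone, and otherwise diffuses along an infinite secondary branch, which is expected to make the backbone MSD grow subdiffusively (roughly as t^(1/2) for simple diffusion in infinite branches). The sketch below is a discrete random-walk illustration, not the authors' generalized Langevin formalism.

```python
import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps = 4000, 2000
x = np.zeros(n_walkers, dtype=np.int64)   # position along the backbone
y = np.zeros(n_walkers, dtype=np.int64)   # position along a secondary branch
msd = np.empty(n_steps)

for t in range(n_steps):
    step = rng.choice([-1, 1], size=n_walkers)
    # Backbone motion is only possible at y == 0; there, the walker moves in
    # x or enters a branch with equal probability. Off the backbone it can
    # only diffuse along the branch.
    move_x = (y == 0) & (rng.random(n_walkers) < 0.5)
    x = np.where(move_x, x + step, x)
    y = np.where(move_x, y, y + step)
    msd[t] = np.mean(x.astype(float) ** 2)

# Anomalous exponent from the late-time log-log slope of the backbone MSD.
times = np.arange(1, n_steps + 1)
slope = np.polyfit(np.log(times[200:]), np.log(msd[200:]), 1)[0]
```

The fitted exponent comes out near 1/2, the classic subdiffusive signature of transport on a comb with infinite teeth.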

  13. The Identification and Significance of Intuitive and Analytic Problem Solving Approaches Among College Physics Students

    ERIC Educational Resources Information Center

    Thorsland, Martin N.; Novak, Joseph D.

    1974-01-01

    Described is an approach to assessment of intuitive and analytic modes of thinking in physics. These modes of thinking are associated with Ausubel's theory of learning. High ability in either intuitive or analytic thinking was associated with success in college physics, with high learning efficiency following a pattern expected on the basis of…

  14. Analytics that Inform the University: Using Data You Already Have

    ERIC Educational Resources Information Center

    Dziuban, Charles; Moskal, Patsy; Cavanagh, Thomas; Watts, Andre

    2012-01-01

    The authors describe the University of Central Florida's top-down/bottom-up action analytics approach to using data to inform decision-making at the University of Central Florida. The top-down approach utilizes information about programs, modalities, and college implementation of Web initiatives. The bottom-up approach continuously monitors…

  15. Streamwise Versus Spanwise Spacing of Obstacle Arrays: Parametrization of the Effects on Drag and Turbulence

    NASA Astrophysics Data System (ADS)

    Simón-Moral, Andres; Santiago, Jose Luis; Krayenhoff, E. Scott; Martilli, Alberto

    2014-06-01

A Reynolds-averaged Navier-Stokes model is used to investigate how the sectional drag coefficient and turbulent length scales evolve with the layout of aligned arrays of cubes. Results show that the sectional drag coefficient is determined by the non-dimensional streamwise distance (sheltering parameter) and the non-dimensional spanwise distance (channelling parameter) between obstacles. This differs from previous approaches, which consider only the plan area density. Turbulent length scales, on the other hand, behave similarly to the staggered case (e.g., they are a function of the plan area density only). Analytical formulae are proposed for the length scales and for the sectional drag coefficient as functions of the sheltering and channelling parameters, and implemented in a column model. This approach demonstrates good skill in predicting vertical profiles of the spatially averaged horizontal wind speed.

  16. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    PubMed

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD)-based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and pharmaceutical analytical technology (PAT).

  17. Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics

    PubMed Central

    Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD)-based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and pharmaceutical analytical technology (PAT). PMID:25722723

  18. A Cameron-Storvick Theorem for Analytic Feynman Integrals on Product Abstract Wiener Space and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jae Gil, E-mail: jgchoi@dankook.ac.kr; Chang, Seung Jun, E-mail: sejchang@dankook.ac.kr

In this paper we derive a Cameron-Storvick theorem for the analytic Feynman integral of functionals on the product abstract Wiener space B². We then apply our result to obtain an evaluation formula for the analytic Feynman integral of unbounded functionals on B². We also present meaningful examples involving functionals which arise naturally in quantum mechanics.

  19. Functionalized magnetic nanoparticle analyte sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yantasee, Wassana; Warner, Maryin G; Warner, Cynthia L

    2014-03-25

A method and system for simply and efficiently determining the quantity of a preselected material in a solution. At least one superparamagnetic nanoparticle bearing a specified functionalized organic material is placed into a sample solution, where preselected analytes attach to the functionalized organic groups; the superparamagnetic nanoparticles are then collected at a collection site and analyzed for the presence of a particular analyte.

  20. Engineering of Surface Chemistry for Enhanced Sensitivity in Nanoporous Interferometric Sensing Platforms.

    PubMed

    Law, Cheryl Suwen; Sylvia, Georgina M; Nemati, Madieh; Yu, Jingxian; Losic, Dusan; Abell, Andrew D; Santos, Abel

    2017-03-15

We explore new approaches to engineering the surface chemistry of interferometric sensing platforms based on nanoporous anodic alumina (NAA) and reflectometric interference spectroscopy (RIfS). Two surface engineering strategies are presented, namely (i) selective chemical functionalization of the inner surface of NAA pores with amine-terminated thiol molecules and (ii) selective chemical functionalization of the top surface of NAA with dithiol molecules. The strong molecular interaction of Au³⁺ ions with thiol-containing functional molecules of alkane chain or peptide character provides a model sensing system with which to assess the sensitivity of these NAA platforms by both molecular feature and surface engineering. Changes in the effective optical thickness of the functionalized NAA photonic films (i.e., the sensing principle), in response to gold ions, are monitored in real time by RIfS. 6-Amino-1-hexanethiol (inner surface) and 1,6-hexanedithiol (top surface), the most sensitive functional molecules from approaches (i) and (ii), respectively, were combined into a third sensing strategy whereby the NAA platforms are functionalized on both the top and inner surfaces concurrently. Engineering of the surface according to this approach resulted in an additive enhancement in sensitivity of up to 5-fold compared to previously reported systems. This study advances the rational engineering of surface chemistry for interferometric sensing on nanoporous platforms with potential applications for real-time monitoring of multiple analytes in dynamic environments.

  1. Two-dimensional analytic weighting functions for limb scattering

    NASA Astrophysics Data System (ADS)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
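The speed advantage of analytic over perturbation-based weighting functions can be seen even in a toy model. The sketch below uses a one-dimensional Beer-Lambert attenuation through absorbing layers as a stand-in for a full limb-scatter radiative transfer model (it is not SASKTRAN-HR): the analytic derivative of the radiance with respect to each layer concentration comes from one evaluation, while the perturbation method needs one extra model run per grid cell.

```python
import numpy as np

# Toy "radiative transfer": direct-beam Beer-Lambert attenuation through
# three homogeneous layers (all constants hypothetical).
k = 2.0e-3                      # absorption cross-section, arbitrary units
dz = 1.0                        # layer thickness
n = np.array([5.0, 3.0, 1.0])   # layer concentrations

def radiance(n):
    return np.exp(-k * dz * n.sum())

# Analytic weighting functions: dI/dn_j = -k * dz * I for every layer,
# obtained from a single model evaluation.
I0 = radiance(n)
w_analytic = -k * dz * I0 * np.ones_like(n)

# Numerical perturbation: one additional model run per layer.
eps = 1e-6
w_numeric = np.array([
    (radiance(n + eps * np.eye(3)[j]) - I0) / eps for j in range(3)
])
```

The two agree to finite-difference accuracy, while the perturbation cost scales with the number of grid cells, which is what makes analytic two-dimensional weighting functions so much faster in practice.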

  2. Derived Transformation of Children's Pregambling Game Playing

    PubMed Central

    Dymond, Simon; Bateman, Helena; Dixon, Mark R

    2010-01-01

    Contemporary behavior-analytic perspectives on gambling emphasize the impact of verbal relations, or derived relational responding and the transformation of stimulus functions, on the initiation and maintenance of gambling. Approached in this way, it is possible to undertake experimental analysis of the role of verbal/mediational variables in gambling behavior. The present study therefore sought to demonstrate the ways new stimuli could come to have functions relevant to gambling without those functions being trained directly. Following a successful derived-equivalence-relations test, a simulated board game established high- and low-roll functions for two concurrently presented dice labelled with members of the derived relations. During the test for derived transformation, children were reexposed to the board game with dice labelled with indirectly related stimuli. All participants except 1 who passed the equivalence relations test selected the die that was indirectly related to the trained high-roll die more often than the die that was indirectly related to low-roll die, despite the absence of differential outcomes. All participants except 3 also gave the derived high-roll die higher liking ratings than the derived low-roll die. The implications of the findings for behavior-analytic research on gambling and the development of verbally-based interventions for disordered gambling are discussed. PMID:21541176

  3. Derived transformation of children's pregambling game playing.

    PubMed

    Dymond, Simon; Bateman, Helena; Dixon, Mark R

    2010-11-01

    Contemporary behavior-analytic perspectives on gambling emphasize the impact of verbal relations, or derived relational responding and the transformation of stimulus functions, on the initiation and maintenance of gambling. Approached in this way, it is possible to undertake experimental analysis of the role of verbal/mediational variables in gambling behavior. The present study therefore sought to demonstrate the ways new stimuli could come to have functions relevant to gambling without those functions being trained directly. Following a successful derived-equivalence-relations test, a simulated board game established high- and low-roll functions for two concurrently presented dice labelled with members of the derived relations. During the test for derived transformation, children were reexposed to the board game with dice labelled with indirectly related stimuli. All participants except 1 who passed the equivalence relations test selected the die that was indirectly related to the trained high-roll die more often than the die that was indirectly related to low-roll die, despite the absence of differential outcomes. All participants except 3 also gave the derived high-roll die higher liking ratings than the derived low-roll die. The implications of the findings for behavior-analytic research on gambling and the development of verbally-based interventions for disordered gambling are discussed.

  4. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    PubMed

    Liu, Jian; Miller, William H

    2008-09-28

The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems: a one-dimensional pure quartic potential, and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  5. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
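The trade-off between a closed-form range model and numerical integration of the stopping power can be illustrated with a deliberately crude example. The sketch below uses a 1/E stopping-power law with a hypothetical constant (not the authors' 6-parameter model); with S(E) = c/E the range integral has the closed Bragg-Kleeman-type form R = E₀²/(2c), which can be checked against trapezoidal integration of dR/dE = 1/S(E).

```python
import numpy as np

c = 250.0                       # MeV^2/cm, assumed constant of the toy law

def stopping_power(E):
    # Crude Bethe-like trend S(E) = c / E, in MeV/cm (illustration only).
    return c / E

E0 = 100.0                      # initial ion energy, MeV
E = np.linspace(1e-3, E0, 100000)
f = 1.0 / stopping_power(E)     # integrand dR/dE = 1 / S(E)

# Numerical range via the trapezoid rule (stand-in for the "widely-used
# numerical integration technique" the abstract benchmarks against).
R_numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))

# Closed-form range for this stopping-power law.
R_analytic = E0**2 / (2.0 * c)
```

The closed form reproduces the integral essentially exactly here while replacing a 100,000-point quadrature with one arithmetic expression, which is the kind of speed-up the abstract reports for its analytic range model.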

  6. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u(-1) to 450 MeV u(-1) or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  7. Reduction of Free Edge Peeling Stress of Laminated Composites Using Active Piezoelectric Layers

    PubMed Central

    Huang, Bin; Kim, Heung Soo

    2014-01-01

An analytical approach is proposed for reducing the free-edge peeling stresses of laminated composites using active piezoelectric layers. The approach is the extended Kantorovich method, an iterative method. Multiple terms of the trial function are employed, and the governing equations are derived from the principle of complementary virtual work. The solutions are obtained by solving a generalized eigenvalue problem. By this approach, the stresses automatically satisfy not only the traction-free boundary conditions but also the free-edge boundary conditions. Through the iteration process, the free-edge stresses converge very quickly. It is found that the peeling stresses generated by mechanical loadings are significantly reduced by applying a proper electric field to the piezoelectric actuators. PMID:25025088

  8. "Green's function" approach & low-mode asymmetries

    NASA Astrophysics Data System (ADS)

    Masse, Laurent; Clark, Dan; Salmonson, Jay; MacLaren, Steve; Ma, Tammy; Khan, Shahab; Pino, Jesse; Ralph, Jo; Czajka, C.; Tipton, Robert; Landen, Otto; Kyrala, Georges; 2 Team; 1 Team

    2017-10-01

    Long wavelength, low mode asymmetries are believed to play a leading role in limiting the performance of current ICF implosions on NIF. These long wavelength modes are initiated and driven by asymmetries in the x-ray flux from the hohlraum; however, the underlying hydrodynamics of the implosion also act to amplify these asymmetries. The work presented here aims to deepen our understanding of the interplay of the drive asymmetries and the underlying implosion hydrodynamics in determining the final imploded configuration. This is accomplished through a synthesis of numerical modeling, analytic theory, and experimental data. Specifically, we use a Green's function approach to connect the drive asymmetry seen by the capsule to the measured in-flight and hot spot symmetries. The approach has been validated against a suite of numerical simulations. Ultimately, we hope this work will identify additional measurements to further constrain the asymmetries and increase hohlraum illumination design flexibility on the NIF. The technique and the derivation of the associated error bars will be presented. Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Security, LLC (LLNS), Contract No. DE-AC52-07NA27344.

  9. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE PAGES

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    2017-10-13

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. Here we address the third component, inferring functional forms based on information encoded in the functions, and present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis: that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  10. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. Here we address the third component, inferring functional forms based on information encoded in the functions, and present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis: that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  11. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.

    PubMed

    Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J

    2017-12-01

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis: that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
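
As an illustration of the kind of information measure involved, the sketch below computes the output entropy of a discrete function z = f(x, y) over a ternary alphabet, i.e. the three-variable setting of the paper; the functions shown are toy examples, and the paper's full classification uses richer multivariate measures.

```python
from collections import Counter
from itertools import product
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a discrete distribution given by counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def output_entropy(f, alphabet=(0, 1, 2)):
    """Entropy of z = f(x, y) when (x, y) is uniform over alphabet^2.

    One ingredient of 'how much information a function carries'."""
    outputs = Counter(f(x, y) for x, y in product(alphabet, repeat=2))
    return entropy(outputs.values())

xor3 = lambda x, y: (x + y) % 3   # maximally informative on a ternary alphabet
const = lambda x, y: 0            # carries no information about its inputs
```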

  12. Superhydrophobic Analyte Concentration Utilizing Colloid-Pillar Array SERS Substrates

    DOE PAGES

    Wallace, Ryan A.; Charlton, Jennifer J.; Kirchner, Teresa B.; ...

    2014-11-04

    Detecting trace components in medicinal and environmental samples requires the ability to sense a few molecules present in a large sample. Surface enhanced Raman spectroscopy (SERS) is a technique that can be utilized to detect molecules at very low absolute numbers. However, detection at trace concentration levels in real samples requires properly designed delivery and detection systems. The present work involves superhydrophobic surfaces that include silicon pillar arrays formed by lithographic and dewetting protocols. In order to generate the necessary plasmonic substrate for SERS detection, a simple and flow-stable Ag colloid was added to the functionalized pillar array system via soaking. The pillars are used native and with hydrophobic modification. The pillars provide a means to concentrate analyte via superhydrophobic droplet evaporation effects. A 100-fold concentration of analyte was estimated, with a limit of detection of 2.9 × 10(-12) M for mitoxantrone dihydrochloride. Additionally, analytes were delivered to the surface via a multiplex approach in order to demonstrate an ability to control droplet size and placement for scaled-up use in real-world applications. Finally, a concentration process involving transport and sequestration based on surface-treatment-selective wicking is demonstrated.

  13. A Requirements-Driven Optimization Method for Acoustic Liners Using Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Lopes, Leonard V.

    2017-01-01

    More than ever, there is flexibility and freedom in acoustic liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. In a previous paper on this subject, a method deriving the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground was described. A simple code-wrapping approach was used to evaluate a community noise objective function for an external optimizer. Gradients were evaluated using a finite difference formula. The subject of this paper is an application of analytic derivatives that supply precise gradients to an optimization process. Analytic derivatives improve the efficiency and accuracy of gradient-based optimization methods and allow consideration of more design variables. In addition, the benefit of variable impedance liners is explored using a multi-objective optimization.

  14. Superhydrophobic Analyte Concentration Utilizing Colloid-Pillar Array SERS Substrates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Ryan A.; Charlton, Jennifer J.; Kirchner, Teresa B.

    Detecting trace components in medicinal and environmental samples requires the ability to sense a few molecules present in a large sample. Surface enhanced Raman spectroscopy (SERS) is a technique that can be utilized to detect molecules at very low absolute numbers. However, detection at trace concentration levels in real samples requires properly designed delivery and detection systems. The present work involves superhydrophobic surfaces that include silicon pillar arrays formed by lithographic and dewetting protocols. In order to generate the necessary plasmonic substrate for SERS detection, a simple and flow-stable Ag colloid was added to the functionalized pillar array system via soaking. The pillars are used native and with hydrophobic modification. The pillars provide a means to concentrate analyte via superhydrophobic droplet evaporation effects. A 100-fold concentration of analyte was estimated, with a limit of detection of 2.9 × 10(-12) M for mitoxantrone dihydrochloride. Additionally, analytes were delivered to the surface via a multiplex approach in order to demonstrate an ability to control droplet size and placement for scaled-up use in real-world applications. Finally, a concentration process involving transport and sequestration based on surface-treatment-selective wicking is demonstrated.

  15. Helios: Understanding Solar Evolution Through Text Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randazzese, Lucien

    This proof-of-concept project focused on developing, testing, and validating a range of bibliometric, text analytic, and machine-learning based methods to explore the evolution of three photovoltaic (PV) technologies: Cadmium Telluride (CdTe), Dye-Sensitized solar cells (DSSC), and Multi-junction solar cells. The analytical approach to the work was inspired by previous work by the same team to measure and predict the scientific prominence of terms and entities within specific research domains. The goal was to create tools that could assist domain-knowledgeable analysts in investigating the history and path of technological developments in general, with a focus on analyzing step-function changes in performance, or "breakthroughs," in particular. The text-analytics platform developed during this project was dubbed Helios. The project relied on computational methods for analyzing large corpora of technical documents. For this project we ingested technical documents from the following sources into Helios: Thomson Scientific Web of Science (papers), the U.S. Patent & Trademark Office (patents), the U.S. Department of Energy (technical documents), the U.S. National Science Foundation (project funding summaries), and a hand-curated set of full-text documents from Thomson Scientific and other sources.

  16. Analytical and numerical solutions for heat transfer and effective thermal conductivity of cracked media

    NASA Astrophysics Data System (ADS)

    Tran, A. B.; Vu, M. N.; Nguyen, S. T.; Dong, T. Q.; Le-Nguyen, K.

    2018-02-01

    This paper presents analytical solutions to heat transfer problems around a crack and derives an adaptive model for the effective thermal conductivity of cracked materials based on a singular integral equation approach. The potential solution of heat diffusion through two-dimensional cracked media, where cracks filled with air behave as insulators to heat flow, is obtained in singular integral equation form. It is demonstrated that the temperature field can be described as a function of the temperature and the rate of heat flow on the boundary and the temperature jump across the cracks. Numerical resolution of this boundary integral equation allows determining heat conduction and the effective thermal conductivity of cracked media. Moreover, writing this boundary integral equation for an infinite medium embedding a single crack under a far-field condition allows deriving the closed-form solution of the temperature discontinuity on the crack and, in particular, the closed-form solution of the temperature field around the crack. These formulas are then used to establish analytical effective medium estimates. Finally, the comparison between the developed numerical and analytical solutions allows developing an adaptive model for the effective thermal conductivity of cracked media. This model takes into account both the interaction between cracks and the percolation threshold.
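
The authors' adaptive model is not reproduced in the abstract. As a point of reference, the sketch below implements two textbook effective-medium estimates for randomly oriented insulating cracks in 2D: a non-interacting (dilute) estimate and a Mori-Tanaka-type estimate. These are standard literature results under the usual 2D crack-density definition, not the paper's model.

```python
from math import pi

def k_eff_dilute(k0, rho):
    """Non-interacting (dilute) estimate for a 2D body with randomly
    oriented insulating (air-filled) slit cracks: k_eff = k0 * (1 - pi*rho/2),
    where rho = N * a^2 / A is the standard 2D crack density (half-length a).
    Valid only for small rho; goes unphysically negative for rho > 2/pi."""
    return k0 * (1.0 - pi * rho / 2.0)

def k_eff_mori_tanaka(k0, rho):
    """Mori-Tanaka-type estimate, partially accounting for crack interaction
    and positive for all rho: k_eff = k0 / (1 + pi*rho/2)."""
    return k0 / (1.0 + pi * rho / 2.0)
```

The two estimates agree to first order in rho and diverge as the crack density grows, which is exactly the regime where an adaptive, interaction-aware model such as the paper's becomes necessary.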

  17. A general framework for parametric survival analysis.

    PubMed

    Crowther, Michael J; Lambert, Paul C

    2014-12-30

    Parametric survival models are being increasingly used as an alternative to the Cox model in biomedical research. Through direct modelling of the baseline hazard function, we can gain greater understanding of the risk profile of patients over time, obtaining absolute measures of risk. Commonly used parametric survival models, such as the Weibull, make restrictive assumptions of the baseline hazard function, such as monotonicity, which is often violated in clinical datasets. In this article, we extend the general framework of parametric survival models proposed by Crowther and Lambert (Journal of Statistical Software 53:12, 2013), to incorporate relative survival, and robust and cluster robust standard errors. We describe the general framework through three applications to clinical datasets, in particular, illustrating the use of restricted cubic splines, modelled on the log hazard scale, to provide a highly flexible survival modelling framework. Through the use of restricted cubic splines, we can derive the cumulative hazard function analytically beyond the boundary knots, resulting in a combined analytic/numerical approach, which substantially improves the estimation process compared with only using numerical integration. User-friendly Stata software is provided, which significantly extends parametric survival models available in standard software. Copyright © 2014 John Wiley & Sons, Ltd.
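
The combined analytic/numerical idea can be sketched with a Weibull baseline, where the cumulative hazard has a closed form that a generic numerical integrator must otherwise approximate; the parameter values below are arbitrary illustrations, not from the paper's applications.

```python
import numpy as np

def weibull_hazard(t, lam=0.1, gamma=1.5):
    """Weibull hazard h(t) = lam * gamma * t^(gamma - 1)."""
    return lam * gamma * t ** (gamma - 1.0)

def cumulative_hazard_analytic(t, lam=0.1, gamma=1.5):
    """Closed form H(t) = lam * t^gamma; survival is S(t) = exp(-H(t))."""
    return lam * t ** gamma

def cumulative_hazard_numeric(t, lam=0.1, gamma=1.5, n=20001):
    """Trapezoidal integration of the hazard: the generic fallback a
    flexible (e.g. spline-based) model needs where no closed form exists."""
    grid = np.linspace(1e-12, t, n)
    vals = weibull_hazard(grid, lam, gamma)
    step = grid[1] - grid[0]
    return float(step * (vals.sum() - 0.5 * (vals[0] + vals[-1])))
```

Replacing the inner numerical integral with an analytic expression, as the spline framework does beyond the boundary knots, is where the reported estimation speed-up comes from.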

  18. Analytical methodology for determination of helicopter IFR precision approach requirements. [pilot workload and acceptance level

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.

    1980-01-01

    A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach to landing task incorporating appropriate models for UH-1H aircraft, the environmental disturbances and the human pilot was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well known performance-work-load relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.

  19. Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.

    PubMed

    Sun, Dong; Zhao, Daomu

    2005-08-01

    By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.

  20. On the Gibbs phenomenon 1: Recovering exponential accuracy from the Fourier partial sum of a non-periodic analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve

    1992-01-01

    It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. If the function is analytic but not periodic, however, the truncated series converges only slowly, and not at all in the maximum norm, although the function is still analytic. This is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients contain enough information about the function that an exponentially convergent approximation (in the maximum norm) can be constructed.
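
A minimal numerical illustration of the phenomenon: for the non-periodic analytic function f(x) = x on [-pi, pi), the maximum-norm error of the Fourier partial sum stalls near the boundary jump while interior errors shrink with N.

```python
import numpy as np

def fourier_partial_sum(x, n_terms):
    """Partial Fourier sum of f(x) = x on [-pi, pi):
    S_N(x) = sum_{n=1}^{N} 2 * (-1)^(n+1) * sin(n x) / n."""
    n = np.arange(1, n_terms + 1)
    return (2.0 * (-1.0) ** (n + 1) / n * np.sin(np.outer(x, n))).sum(axis=1)

x_full = np.linspace(-np.pi, np.pi, 4001, endpoint=False)   # includes points near the jump
x_int = np.linspace(-2.0, 2.0, 1001)                        # away from the jump at +-pi

boundary_err = {N: np.abs(fourier_partial_sum(x_full, N) - x_full).max()
                for N in (8, 32, 128)}
interior_err = {N: np.abs(fourier_partial_sum(x_int, N) - x_int).max()
                for N in (8, 32, 128)}
```

The Gegenbauer-type reconstruction of the paper recovers exponential max-norm accuracy from exactly these slowly converging coefficients.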

  1. Some elements of a theory of multidimensional complex variables. I - General theory. II - Expansions of analytic functions and application to fluid flows

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1989-01-01

    The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable were defined and the definition of the derivative then led to analytic functions.

  2. Analyte species and concentration identification using differentially functionalized microcantilever arrays and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senesac, Larry R; Datskos, Panos G; Sepaniak, Michael J

    2006-01-01

    In the present work, we have performed analyte species and concentration identification using an array of ten differentially functionalized microcantilevers coupled with a back-propagation artificial neural network pattern recognition algorithm. The array consists of ten nanostructured silicon microcantilevers functionalized by polymeric and gas chromatography phases and macrocyclic receptors as spatially dense, differentially responding sensing layers for identification and quantitation of individual analyte(s) and their binary mixtures. The array response (i.e. cantilever bending) to analyte vapor was measured by an optical readout scheme and the responses were recorded for a selection of individual analytes as well as several binary mixtures. An artificial neural network (ANN) was designed and trained to recognize not only the individual analytes and binary mixtures, but also to determine the concentration of individual components in a mixture. To the best of our knowledge, ANNs have not previously been applied to microcantilever array responses to determine concentrations of individual analytes. The trained ANN correctly identified the eleven test analyte(s) as individual components, most with probabilities greater than 97%, whereas it did not misidentify an unknown (untrained) analyte. Demonstrated unique aspects of this work include an ability to measure binary mixtures and provide both qualitative (identification) and quantitative (concentration) information with array-ANN-based sensor methodologies.
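
A toy version of the array-plus-ANN idea: synthetic 10-element cantilever response signatures for three analytes, classified by a small backpropagation-trained network. The data, architecture, and training details are illustrative stand-ins, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response signatures of 3 analytes across a 10-cantilever array
signatures = rng.uniform(0.2, 1.0, size=(3, 10))
X = np.vstack([s + 0.05 * rng.standard_normal((50, 10)) for s in signatures])
y = np.repeat(np.arange(3), 50)
T = np.eye(3)[y]                                  # one-hot targets

# One hidden layer, trained by plain backpropagation (softmax cross-entropy)
W1 = 0.1 * rng.standard_normal((10, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal((8, 3));  b2 = np.zeros(3)
lr = 0.2
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                      # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)             # softmax probabilities
    G = (P - T) / len(X)                          # output-layer error signal
    GH = (G @ W2.T) * (1.0 - H ** 2)              # backpropagated to hidden layer
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
accuracy = float((pred == y).mean())
```

With well-separated signatures and modest noise, even this tiny network separates the analytes reliably; the paper's harder task is quantifying components within binary mixtures.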

  3. Dynamic Modeling and Testing of MSRR-1 for Use in Microgravity Environments Analysis

    NASA Technical Reports Server (NTRS)

    Gattis, Christy; LaVerde, Bruce; Howell, Mike; Phelps, Lisa H. (Technical Monitor)

    2001-01-01

    Delicate microgravity science is unlikely to succeed on the International Space Station if vibratory and transient disturbers corrupt the environment. An analytical approach to compute the on-orbit acceleration environment at science experiment locations within a standard payload rack resulting from these disturbers is presented. This approach has been grounded by correlation and comparison to test verified transfer functions. The method combines the results of finite element and statistical energy analysis using tested damping and modal characteristics to provide a reasonable approximation of the total root-mean-square (RMS) acceleration spectra at the interface to microgravity science experiment hardware.

  4. Visualizing Molecular Diffusion through Passive Permeability Barriers in Cells: Conventional and Novel Approaches

    PubMed Central

    Lin, Yu-Chun; Phua, Siew Cheng; Lin, Benjamin; Inoue, Takanari

    2013-01-01

    Diffusion barriers are universal solutions for cells to achieve distinct organizations, compositions, and activities within a limited space. The influence of diffusion barriers on the spatiotemporal dynamics of signaling molecules often determines cellular physiology and functions. Over the years, the passive permeability barriers in various subcellular locales have been characterized using elaborate analytical techniques. In this review, we will summarize the current state of knowledge on the various passive permeability barriers present in mammalian cells. We will conclude with a description of several conventional techniques and one new approach based on chemically-inducible diffusion trap (C-IDT) for probing permeable barriers. PMID:23731778

  5. Electronic properties of a molecular system with Platinum

    NASA Astrophysics Data System (ADS)

    Ojeda, J. H.; Medina, F. G.; Becerra-Alonso, David

    2017-10-01

    The electronic properties of a finite homogeneous molecule, Trans-platinum-linked oligo(tetraethenylethenes), are studied. This system is composed of units such as benzene rings and platinum, phosphorus, and sulfur atoms. The mechanism for the study of electron transport through this system is based on placing the molecule between metal contacts to control the current through the molecular system. We study this molecule within the tight-binding approach, calculating the transport properties using the Landauer-Büttiker formalism and the Fisher-Lee relationship, based on a semi-analytic Green's function method within a real-space renormalization approach. Our results show significant agreement with experimental measurements.
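
The Landauer-Büttiker/Fisher-Lee machinery can be sketched for a generic tight-binding chain coupled to wide-band leads; the Hamiltonian, hopping, and coupling below are illustrative stand-ins, not the Pt-linked oligomer model of the paper.

```python
import numpy as np

def transmission(energy, n_sites=4, eps=0.0, t=-1.0, gamma=0.5):
    """Landauer transmission via the Fisher-Lee relation,
    T(E) = Tr[Gamma_L G Gamma_R G^dagger], for a uniform tight-binding
    chain (on-site energy eps, hopping t) whose end sites couple to
    left/right leads in the wide-band limit (self-energy -i*gamma/2)."""
    H = eps * np.eye(n_sites) + t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    sigma_L = np.zeros((n_sites, n_sites), dtype=complex)
    sigma_R = np.zeros((n_sites, n_sites), dtype=complex)
    sigma_L[0, 0] = sigma_R[-1, -1] = -0.5j * gamma
    G = np.linalg.inv(energy * np.eye(n_sites) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)   # level-broadening matrices
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return float(np.real(np.trace(gamma_L @ G @ gamma_R @ G.conj().T)))
```

Transmission peaks near the eigenenergies of the isolated chain and decays off resonance, the qualitative behavior such semi-analytic Green's function treatments capture.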

  6. Screening Vaccine Formulations in Fresh Human Whole Blood.

    PubMed

    Hakimi, Jalil; Aboutorabian, Sepideh; To, Frederick; Ausar, Salvador F; Rahman, Nausheen; Brookes, Roger H

    2017-01-01

    Monitoring the immunological functionality of vaccine formulations is critical for vaccine development. While the traditional approach using established animal models has been relatively effective, the use of animals is costly and cumbersome, and animal models are not always reflective of a human response. The development of a human-based approach would be a major step forward in understanding how vaccine formulations might behave in humans. Here, we describe a platform methodology using fresh human whole blood (hWB) to monitor adjuvant-modulated, antigen-specific responses to vaccine formulations, which is amenable to analysis by standard immunoassays as well as a variety of other analytical techniques.

  7. Psychodynamic psychotherapy for complex trauma: targets, focus, applications, and outcomes

    PubMed Central

    Spermon, Deborah; Darlington, Yvonne; Gibney, Paul

    2010-01-01

    Complex trauma describes that category of severe, chronic interpersonal trauma usually originating in the formative years of a child. In the adult, this can result in global dissociative difficulties across areas of cognitive, affective, somatic, and behavioral functions. Targeting this field of traumatic pathology, this article reviews the contributions and developments within one broad approach: psychodynamic theory and practice. Brief descriptions of aspects of analytical, Jungian, relational, object relations, and attachment therapeutic approaches are given, along with understandings of pathology and the formulation of therapeutic goals. Major practices within client sessions are canvassed and the issues of researching treatment outcomes are discussed. PMID:22110335

  8. Unifying Approach to Analytical Chemistry and Chemical Analysis: Problem-Oriented Role of Chemical Analysis.

    ERIC Educational Resources Information Center

    Pardue, Harry L.; Woo, Jannie

    1984-01-01

    Proposes an approach to teaching analytical chemistry and chemical analysis in which a problem to be resolved is the focus of a course. Indicates that this problem-oriented approach is intended to complement detailed discussions of fundamental and applied aspects of chemical determinations and not replace such discussions. (JN)

  9. Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.

    PubMed

    Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis

    2016-07-01

    Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description on the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. At the end, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scope of decision-analytic models used in the economic evaluation of pharmacological treatments of AD. 
The major areas for improvement in future models of AD are to include domains of cognition, function, and behavior, rather than cognition alone; include a detailed description of how data used to model the natural course of disease progression were derived; state and justify the economic model selected and structural assumptions and limitations; provide a detailed (rather than high-level) description of the cost components included in the model; and report on the face-, internal-, and cross-validity of the model to strengthen the credibility and confidence in model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, there remain several areas of improvement that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.
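
For readers unfamiliar with the modeling approach being reviewed, a minimal decision-analytic (Markov cohort) sketch is given below; the states, transition probabilities, costs, and utilities are hypothetical placeholders, not values from any included study.

```python
import numpy as np

# States: 0 = mild AD, 1 = moderate/severe AD, 2 = dead.
# All numbers are hypothetical placeholders for illustration only.
P_base = np.array([[0.80, 0.15, 0.05],   # annual transition probabilities
                   [0.00, 0.85, 0.15],
                   [0.00, 0.00, 1.00]])
annual_cost = np.array([10_000.0, 40_000.0, 0.0])   # cost per state-year
annual_qaly = np.array([0.70, 0.40, 0.0])           # utility per state-year

def run_cohort(P, n_years=20, discount=0.03):
    """Discounted total cost and QALYs for a cohort starting in the mild state."""
    dist = np.array([1.0, 0.0, 0.0])
    total_cost = total_qaly = 0.0
    for year in range(n_years):
        d = (1.0 + discount) ** -year
        total_cost += d * (dist @ annual_cost)
        total_qaly += d * (dist @ annual_qaly)
        dist = dist @ P                   # advance the cohort one cycle
    return total_cost, total_qaly

# A hypothetical treatment that slows progression out of the mild state
P_treat = P_base.copy()
P_treat[0] = [0.88, 0.09, 0.03]

cost_base, qaly_base = run_cohort(P_base)
cost_treat, qaly_treat = run_cohort(P_treat)
```

Comparing the two arms yields the incremental cost per QALY gained (the ICER), which is the typical output of the AD models reviewed here; richer models add more states and the cognition/function/behavior domains the review calls for.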

  10. Rayleigh approximation to ground state of the Bose and Coulomb glasses

    DOE PAGES

    Ryan, S. D.; Mityushev, V.; Vinokur, V. M.; ...

    2015-01-16

    Glasses are rigid systems in which competing interactions prevent simultaneous minimization of local energies. This leads to frustration and highly degenerate ground states, the nature and properties of which are still far from being thoroughly understood. We report an analytical approach based on the method of functional equations that allows us to construct the Rayleigh approximation to the ground state of a two-dimensional (2D) random Coulomb system with logarithmic interactions. We realize a model for 2D Coulomb glass as a cylindrical type II superconductor containing randomly located columnar defects (CD) which trap superconducting vortices induced by applied magnetic field. Our findings break ground for analytical studies of glassy systems, marking an important step towards understanding their properties.

  11. Adiabatic invariant analysis of dark and dark-bright soliton stripes in two-dimensional Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Kevrekidis, P. G.; Wang, Wenlong; Carretero-González, R.; Frantzeskakis, D. J.

    2018-06-01

    In the present work, we develop an adiabatic invariant approach for the evolution of quasi-one-dimensional (stripe) solitons embedded in a two-dimensional Bose-Einstein condensate. The results of the theory are obtained both for the one-component case of dark soliton stripes, as well as for the considerably more involved case of the two-component dark-bright (alias "filled dark") soliton stripes. In both cases, analytical predictions regarding the stability and dynamics of these structures are obtained. One of our main findings is the determination of the instability modes of the waves as a function of the parameters of the system (such as the trap strength and the chemical potential). Our analytical predictions are favorably compared with results of direct numerical simulations.

  12. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    NASA Astrophysics Data System (ADS)

    Trombetti, Tomaso

    This thesis presents an experimental/analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g's, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by an MTS 407 controller, which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDTs (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and the dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer function between target and actual table accelerations was identified using experimental results and spectral estimation techniques.
The power spectral density of the system input and the cross power spectral density of the table input and output were estimated using Bartlett's spectral estimation method. The experimentally estimated table acceleration transfer functions obtained for different working conditions are correlated with their analytical counterparts. As a result of this comprehensive correlation study, a thorough understanding of the shaking table dynamics and its sensitivities to control and payload parameters is obtained. Moreover, the correlation study leads to a calibrated analytical model of the shaking table with high predictive ability. It is concluded that, in its present condition, the Rice shaking table is able to reproduce, with a high degree of accuracy, model earthquake acceleration time histories in the frequency bandwidth from 0 to 75 Hz. Furthermore, the exhaustive analysis performed indicates that the table transfer function is not significantly affected by the presence of a large (in terms of weight) payload with a fundamental frequency up to 20 Hz. Payloads having a higher fundamental frequency do significantly affect the shaking table performance and require a modification of the table control gain settings, which can be easily obtained using the predictive analytical model of the shaking table. The complete description of a structural dynamic experiment performed using the Rice shaking table facility is also reported herein. The object of this experimentation was twofold: (1) to verify the testing capability of the shaking table and (2) to experimentally validate a simplified theory developed by the author, which predicts the maximum rotational response developed by seismically isolated building structures characterized by non-coincident centers of mass and rigidity when subjected to strong earthquake ground motions.
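    The transfer-function identification step described here (input power spectral density plus input/output cross power spectral density) can be sketched in a few lines. The example below is illustrative only: an assumed second-order low-pass filter stands in for the table dynamics, and Welch-style averaged periodograms are used in place of the thesis's exact Bartlett implementation.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 200.0                                # sampling rate, Hz (illustrative)
t = np.arange(0, 200, 1 / fs)
x = rng.standard_normal(t.size)           # broadband "command" input

# Known test system: a simple IIR low-pass standing in for table dynamics
b, a = signal.butter(2, 20.0, fs=fs)      # 20 Hz cutoff
y = signal.lfilter(b, a, x)               # measured "table" output

# Averaged-periodogram estimates of the input PSD and input/output CSD
f, pxx = signal.welch(x, fs=fs, nperseg=1024)
f, pxy = signal.csd(x, y, fs=fs, nperseg=1024)

h_est = pxy / pxx                         # H1 transfer-function estimate

# Compare with the exact frequency response of the test filter
_, h_true = signal.freqz(b, a, worN=f, fs=fs)
err = np.max(np.abs(np.abs(h_est) - np.abs(h_true))[f < 40])
print(f"max magnitude error below 40 Hz: {err:.3f}")
```

With enough averaged segments the estimated magnitude tracks the true response closely over the band of interest.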

  13. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and memory usage at no additional cost, compared with the numerical method that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
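    As a generic illustration of a Bessel function expansion (not the paper's specific kernel), the Jacobi-Anger identity e^{iz sin θ} = Σ_n J_n(z) e^{inθ} can be verified numerically with a truncated series:

```python
import numpy as np
from scipy.special import jv

z, theta = 2.5, 0.7          # arbitrary test point
exact = np.exp(1j * z * np.sin(theta))

# Truncated Jacobi-Anger series: sum over n from -N to N of J_n(z) e^{i n theta}
N = 20
n = np.arange(-N, N + 1)
series = np.sum(jv(n, z) * np.exp(1j * n * theta))

print(abs(series - exact))   # truncation error
```

Because J_n(z) decays super-exponentially for |n| >> z, a modest truncation already reproduces the exponential to near machine precision.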

  14. Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.

    PubMed

    Goh, Wilson Wen Bin; Wong, Limsoon

    2016-09-02

    Despite advances in proteomic technologies, idiosyncratic data issues such as incomplete coverage and inconsistency persist, resulting in large data holes. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance in small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts.
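    The over-representation analysis (ORA) paradigm mentioned above typically reduces to a hypergeometric (Fisher-type) enrichment test. A minimal sketch, with hypothetical counts rather than data from the paper:

```python
from scipy.stats import hypergeom

# Hypothetical universe: 2000 quantified proteins, 100 of them "differential".
# A complex of 20 members has 8 hits among the differential proteins.
M, n_diff, K, k = 2000, 100, 20, 8

# P(X >= k): probability of observing at least k hits by chance
p_value = hypergeom.sf(k - 1, M, n_diff, K)
print(f"ORA enrichment p-value: {p_value:.2e}")
```

The expected number of hits by chance is K * n_diff / M = 1, so 8 hits yields a very small p-value.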

  15. Generalized Langevin dynamics of a nanoparticle using a finite element approach: Thermostating with correlated noise

    NASA Astrophysics Data System (ADS)

    Uma, B.; Swaminathan, T. N.; Ayyaswamy, P. S.; Eckmann, D. M.; Radhakrishnan, R.

    2011-09-01

    A direct numerical simulation (DNS) procedure is employed to study the thermal motion of a nanoparticle in an incompressible Newtonian stationary fluid medium with the generalized Langevin approach. We consider both the Markovian (white noise) and non-Markovian (Ornstein-Uhlenbeck noise and Mittag-Leffler noise) processes. Initial locations of the particle are at various distances from the bounding wall to delineate wall effects. At thermal equilibrium, the numerical results are validated by comparing the calculated translational and rotational temperatures of the particle with those obtained from the equipartition theorem. The nature of the hydrodynamic interactions is verified by comparing the velocity autocorrelation functions and mean square displacements with analytical results. Numerical predictions of wall interactions with the particle in terms of mean square displacements are compared with analytical results. In the non-Markovian Langevin approach, an appropriate choice of colored noise is required to satisfy the power-law decay in the velocity autocorrelation function at long times. The results obtained by using non-Markovian Mittag-Leffler noise simultaneously satisfy the equipartition theorem and the long-time behavior of the hydrodynamic correlations for a range of memory correlation times. The Ornstein-Uhlenbeck process does not provide the appropriate hydrodynamic correlations. Comparing our DNS results to the solution of a one-dimensional generalized Langevin equation, it is observed that when the thermostat adheres to the equipartition theorem, the characteristic memory time in the noise is consistent with the inherent time scale of the memory kernel. The performance of the thermostat with respect to equilibrium and dynamic properties for various noise schemes is discussed.
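    One ingredient of the non-Markovian thermostat discussed above is exponentially correlated (Ornstein-Uhlenbeck) noise. A minimal sketch, with hypothetical parameters, generates such noise via its exact discrete-time update and checks its stationary variance and autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, sigma2, dt, nsteps = 0.5, 1.0, 0.01, 200_000  # illustrative parameters

# Exact update for a stationary OU process with correlation
# <R(t) R(0)> = sigma2 * exp(-|t| / tau)
a = np.exp(-dt / tau)
r = np.empty(nsteps)
r[0] = rng.normal(0.0, np.sqrt(sigma2))
noise = rng.normal(0.0, np.sqrt(sigma2 * (1 - a**2)), nsteps - 1)
for i in range(1, nsteps):
    r[i] = a * r[i - 1] + noise[i - 1]

var = r.var()                      # should be close to sigma2
lag = int(tau / dt)
corr = np.mean(r[:-lag] * r[lag:])  # should be close to sigma2 / e
print(var, corr)
```

The exact discretization avoids the time-step bias an Euler update would introduce in the correlation function.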

  16. Cocontraction of pairs of antagonistic muscles: analytical solution for planar static nonlinear optimization approaches.

    PubMed

    Herzog, W; Binding, P

    1993-11-01

    It has been stated in the literature that static, nonlinear optimization approaches cannot predict coactivation of pairs of antagonistic muscles; however, numerical solutions of such approaches have predicted coactivation of pairs of one-joint and multijoint antagonists. Analytical support for either finding is not available in the literature for systems containing more than one degree of freedom. The purpose of this study was to investigate analytically the possibility of cocontraction of pairs of antagonistic muscles using a static nonlinear optimization approach for a multidegree-of-freedom, two-dimensional system. Analytical solutions were found using the Karush-Kuhn-Tucker conditions, which were necessary and sufficient for optimality in this problem. The results show that cocontraction of pairs of one-joint antagonistic muscles is not possible, whereas cocontraction of pairs of multijoint antagonists is. These findings suggest that cocontraction of pairs of antagonistic muscles may be an "efficient" way to accomplish many movement tasks.
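    The one-joint result can be reproduced numerically with a toy version of such a static optimization (the cost function and moment arms below are hypothetical, not taken from the paper): minimizing the sum of squared muscle forces subject to a single net joint moment, with non-negative forces, drives the one-joint antagonist to zero.

```python
import numpy as np
from scipy.optimize import minimize

# One joint, one agonist (moment arm r1) and one antagonist (moment arm -r2).
r1, r2, M = 0.04, 0.03, 10.0      # metres, metres, N*m (hypothetical values)

def cost(F):                       # e.g. sum of squared muscle forces
    return np.sum(F**2)

# Equilibrium: net joint moment must equal M, forces must be non-negative
cons = {"type": "eq", "fun": lambda F: r1 * F[0] - r2 * F[1] - M}
res = minimize(cost, x0=[100.0, 100.0], bounds=[(0, None), (0, None)],
               constraints=cons, method="SLSQP")
F_agonist, F_antagonist = res.x
print(F_agonist, F_antagonist)    # antagonist force driven to ~0
```

This matches the KKT reasoning in the abstract: for a one-joint antagonist pair, any antagonist activity only raises the required agonist force and hence the cost.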

  17. Clustering of galaxies with f(R) gravity

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; Faizal, Mir; Hameeda, Mir; Pourhassan, Behnam; Salzano, Vincenzo; Upadhyay, Sudhaker

    2018-02-01

    Based on thermodynamics, we discuss the galactic clustering of the expanding Universe by assuming gravitational interaction through the modified Newtonian potential given by f(R) gravity. We compute the corrected N-particle partition function analytically. The corrected partition function leads to more exact equations of state of the system. By assuming that the system is in quasi-equilibrium, we derive the exact distribution function that exhibits the f(R) correction. Moreover, we evaluate the critical temperature and discuss the stability of the system. We also examine the effects of the f(R) correction on the power-law behaviour of the particle-particle correlation function. In order to check the feasibility of an f(R) gravity approach to the clustering of galaxies, we compare our results with an observational galaxy cluster catalogue.

  18. Continuum-kinetic approach to sheath simulations

    NASA Astrophysics Data System (ADS)

    Cagas, Petr; Hakim, Ammar; Srinivasan, Bhuvana

    2016-10-01

    Simulations of sheaths are performed using a novel continuum-kinetic model with collisions including ionization/recombination. A discontinuous Galerkin method is used to directly solve the Boltzmann-Poisson system to obtain a particle distribution function. Direct discretization of the distribution function has advantages of being noise-free compared to particle-in-cell methods. The distribution function, which is available at each node of the configuration space, can be readily used to calculate the collision integrals in order to get ionization and recombination operators. Analytical models are used to obtain the cross-sections as a function of energy. Results will be presented incorporating surface physics with a classical sheath in Hall thruster-relevant geometry. This work was sponsored by the Air Force Office of Scientific Research under Grant Number FA9550-15-1-0193.

  19. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lifetimes of many systems and has a simple statistical form. Its characteristic feature is a constant hazard rate, and it is the special case of the Weibull distribution with shape parameter equal to one. In this paper, our effort is to introduce the basic notions that constitute an exponential competing-risks model in reliability analysis using a Bayesian approach and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior distribution and the estimates of the point values, intervals, hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
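    For a single risk with the non-informative prior p(λ) ∝ 1/λ, the posterior of the exponential rate given n observed failures with total time on test T is Gamma(n, scale 1/T). A minimal sketch using simulated lifetimes (not data from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_rate = 0.05
times = rng.exponential(1 / true_rate, size=40)   # simulated lifetimes

n, T = times.size, times.sum()
# With the non-informative prior p(rate) ~ 1/rate, the posterior is Gamma(n, scale=1/T)
posterior = stats.gamma(a=n, scale=1.0 / T)

rate_hat = posterior.mean()                       # posterior mean = n / T
lo, hi = posterior.interval(0.95)                 # 95% credible interval
rel_at_t = np.exp(-rate_hat * 10.0)               # plug-in reliability at t = 10
print(rate_hat, (lo, hi), rel_at_t)
```

The point, interval, hazard (constant, equal to the rate), and reliability estimates mentioned in the abstract all follow from this posterior.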

  20. Process-based network decomposition reveals backbone motif structure

    PubMed Central

    Wang, Guanyu; Du, Chenghang; Chen, Hao; Simha, Rahul; Rong, Yongwu; Xiao, Yi; Zeng, Chen

    2010-01-01

    A central challenge in systems biology today is to understand the network of interactions among biomolecules and, especially, the organizing principles underlying such networks. Recent analysis of known networks has identified small motifs that occur ubiquitously, suggesting that larger networks might be constructed in the manner of electronic circuits by assembling groups of these smaller modules. Using a unique process-based approach to analyzing such networks, we show for two cell-cycle networks that each of these networks contains a giant backbone motif spanning all the network nodes that provides the main functional response. The backbone is in fact the smallest network capable of providing the desired functionality. Furthermore, the remaining edges in the network form smaller motifs whose role is to confer stability properties rather than provide function. The process-based approach used in the above analysis has additional benefits: It is scalable, analytic (resulting in a single analyzable expression that describes the behavior), and computationally efficient (all possible minimal networks for a biological process can be identified and enumerated). PMID:20498084

  1. Large-deviation theory for diluted Wishart random matrices

    NASA Astrophysics Data System (ADS)

    Castillo, Isaac Pérez; Metz, Fernando L.

    2018-03-01

    Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economy. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number I_N(x) of eigenvalues smaller than x ∈ R^+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-N Ψ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
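    The counting observable I_N(x) is easy to sample numerically; the sketch below builds one diluted Wishart-type matrix and counts its eigenvalues below x (the dilution scheme and parameters are illustrative, not the paper's ensemble):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, c = 200, 400, 0.05          # matrix size, samples, dilution (illustrative)

# Diluted data matrix: Gaussian entries kept with probability c
X = rng.standard_normal((N, M)) * (rng.random((N, M)) < c)
W = X @ X.T / M                    # Wishart-type covariance matrix

eigs = np.linalg.eigvalsh(W)       # W is symmetric positive semidefinite
x = 0.1
I_N = int(np.sum(eigs < x))        # number of eigenvalues below x
print(I_N, "of", N)
```

Repeating this over many realizations gives the empirical distribution of I_N(x) against which the analytical rate function can be compared.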

  2. Analytic Steering: Inserting Context into the Information Dialog

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohn, Shawn J.; Calapristi, Augustin J.; Brown, Shyretha D.

    2011-10-23

    An analyst’s intrinsic domain knowledge is a primary asset in almost any analysis task. Unstructured text analysis systems that apply unsupervised content analysis approaches can be more effective if they can leverage this domain knowledge in a manner that augments the information discovery process without obfuscating new or unexpected content. Current unsupervised approaches rely upon the prowess of the analyst to submit the right queries or observe generalized document and term relationships from ranked or visual results. We propose a new approach which allows the user to control or steer the analytic view within the unsupervised space. This process is controlled through the data characterization process via user-supplied context in the form of a collection of key terms. We show that steering with an appropriate choice of key terms can provide better relevance to the analytic domain and still enable the analyst to uncover unexpected relationships; this paper discusses cases where various analytic steering approaches can provide enhanced analysis results and cases where analytic steering can have a negative impact on the analysis process.

  3. Understanding the Scalability of Bayesian Network Inference using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob

    2009-01-01

    Bayesian networks (BNs) are used to represent and efficiently compute with multi-variate probability distributions in a wide range of disciplines. One of the main approaches to perform computation in BNs is clique tree clustering and propagation. In this approach, BN computation consists of propagation in a clique tree compiled from a Bayesian network. There is a lack of understanding of how clique tree computation time, and BN computation time more generally, depends on variations in BN size and structure. On the one hand, complexity results tell us that many interesting BN queries are NP-hard or worse to answer, and it is not hard to find application BNs where the clique tree approach in practice cannot be used. On the other hand, it is well-known that tree-structured BNs can be used to answer probabilistic queries in polynomial time. In this article, we develop an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, or (ii) the expected number of moral edges in their moral graphs. Our approach is based on combining analytical and experimental results. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for each set. For the special case of bipartite BNs, we consequently have two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, we systematically increase the degree of the root nodes in bipartite Bayesian networks, and find that root clique growth is well-approximated by Gompertz growth curves. 
We believe that this research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference and machine-learning algorithms, and offers an aid for analytical trade-off studies of clique tree clustering using growth curves.
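    A Gompertz growth curve of the kind fitted to the root clique counts can be fitted with standard nonlinear least squares; the following sketch uses synthetic data with hypothetical parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(x, a, b, c):
    # a: asymptote, b: displacement along x, c: growth rate
    return a * np.exp(-b * np.exp(-c * x))

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 50)
y = gompertz(x, 5.0, 3.0, 0.8) + rng.normal(0, 0.05, x.size)  # noisy samples

popt, _ = curve_fit(gompertz, x, y, p0=[4.0, 2.0, 1.0])
print(popt)   # recovered (a, b, c), close to (5.0, 3.0, 0.8)
```

The fitted asymptote a then characterizes the saturation level of clique growth as the root-node degree increases.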

  4. Prediction of brain maturity in infants using machine-learning algorithms.

    PubMed

    Smyser, Christopher D; Dosenbach, Nico U F; Smyser, Tara A; Snyder, Abraham Z; Rogers, Cynthia E; Inder, Terrie E; Schlaggar, Bradley L; Neil, Jeffrey J

    2016-08-01

    Recent resting-state functional MRI investigations have demonstrated that much of the large-scale functional network architecture supporting motor, sensory and cognitive functions in older pediatric and adult populations is present in term- and prematurely-born infants. Application of new analytical approaches can help translate the improved understanding of early functional connectivity provided through these studies into predictive models of neurodevelopmental outcome. One approach to achieving this goal is multivariate pattern analysis, a machine-learning, pattern classification approach well-suited for high-dimensional neuroimaging data. It has previously been adapted to predict brain maturity in children and adolescents using structural and resting state-functional MRI data. In this study, we evaluated resting state-functional MRI data from 50 preterm-born infants (born at 23-29weeks of gestation and without moderate-severe brain injury) scanned at term equivalent postmenstrual age compared with data from 50 term-born control infants studied within the first week of life. Using 214 regions of interest, binary support vector machines distinguished term from preterm infants with 84% accuracy (p<0.0001). Inter- and intra-hemispheric connections throughout the brain were important for group categorization, indicating that widespread changes in the brain's functional network architecture associated with preterm birth are detectable by term equivalent age. Support vector regression enabled quantitative estimation of birth gestational age in single subjects using only term equivalent resting state-functional MRI data, indicating that the present approach is sensitive to the degree of disruption of brain development associated with preterm birth (using gestational age as a surrogate for the extent of disruption). This suggests that support vector regression may provide a means for predicting neurodevelopmental outcome in individual infants. 
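    The classification/regression machinery described above (binary support vector machines plus support vector regression) can be sketched on synthetic data; the feature construction below is purely illustrative and is not the study's connectivity pipeline:

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(5)
n, d = 100, 214                       # subjects x regions of interest (shape from the abstract)

# Synthetic stand-in for connectivity features: noise plus a weak age-dependent shift
age = np.r_[rng.uniform(23, 29, n // 2), rng.uniform(38, 42, n // 2)]  # gestational age, weeks
X = rng.standard_normal((n, d)) + 0.05 * age[:, None]
y = (age >= 37).astype(int)                                            # term vs preterm label

clf = SVC(kernel="linear").fit(X, y)          # binary classification (term vs preterm)
acc = clf.score(X, y)

reg = SVR(kernel="linear").fit(X, age)        # continuous estimate of gestational age
age_hat = reg.predict(X)
print(acc, np.corrcoef(age, age_hat)[0, 1])
```

In a real analysis the scores would of course come from cross-validation rather than training-set performance.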


  6. Implementing Operational Analytics using Big Data Technologies to Detect and Predict Sensor Anomalies

    NASA Astrophysics Data System (ADS)

    Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.

    2016-09-01

    Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real-time and presenting it in a manner that facilitates decision making. It provides cost savings by being able to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and provide insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor are simulated, and we use Big Data technologies, predictive algorithms, and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site and builds on a big data architecture that has previously been proven valuable in detecting anomalies. This paper outlines our methodology for implementing an operational analytic solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determining practical analytic, visualization, and predictive technologies.
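    As a minimal illustration of threshold-based sensor degradation alerting (not the architecture used in the study), a rolling z-score against a trailing baseline flags the onset of an injected fault:

```python
import numpy as np

rng = np.random.default_rng(6)
n, fault_at = 2000, 1500
reading = rng.normal(0.0, 1.0, n)          # nominal sensor noise
reading[fault_at:] += 8.0                  # injected degradation (step offset)

window, threshold = 200, 5.0
# z-score of each sample against its trailing-window baseline
mu = np.array([reading[i - window:i].mean() for i in range(window, n)])
sd = np.array([reading[i - window:i].std() for i in range(window, n)])
z = (reading[window:] - mu) / sd

first_alert = window + int(np.argmax(np.abs(z) > threshold))
print("first alert at sample", first_alert)   # at or just after the fault
```

A production pipeline would stream this computation and tune the window and threshold against acceptable false-alarm rates.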

  7. Poly(3,4-ethylenedioxypyrrole) Modified Emitter Electrode for Substitution of Homogeneous Redox Buffer Agent Hydroquinone in Electrospray Ionization Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peintler-Krivan, Emese; Van Berkel, Gary J; Kertesz, Vilmos

    2010-01-01

    The electrolysis inherent to the operation of the electrospray ionization (ESI) source used with mass spectrometry (MS) is a well-known attendant effect of generating unipolar spray droplets and may affect the analysis of the analyte of interest. Undesirable electrolysis of an analyte may be prevented by limiting the emitter electrode current and/or the mass transport characteristics of the system. However, these ways of avoiding analyte electrolysis may not be applicable in all ESI-MS experiments. For example, in the case of specific nanospray systems (e.g., the wire-in-a-capillary bulk-loaded or chip-based tip-loaded nanospray configurations), the solution flow rate is fixed in the 50-500 nL/min range and the electrode surface-to-volume ratio is large, presenting a very efficient analyte-to-electrode mass transport configuration. In these situations, control over the interfacial potential of the working electrode via homogeneous or traditional heterogeneous (sacrificial metal) redox buffering is a possible way to prevent analyte electrolysis. However, byproducts of these redox buffering approaches can appear in the mass spectra and/or they can chemically alter the analyte. For example, the main reason for using hydroquinone as a homogeneous redox buffer, in addition to its relatively low oxidation potential, is that neither the original compound nor its oxidation product benzoquinone can be detected directly by ESI-MS. However, benzoquinone can alter analytes with thiol functional groups by reacting with those groups via a 1,4-Michael addition.

  8. Magnetothermoelectric effects in graphene and their dependence on scatterer concentration, magnetic field, and band gap

    NASA Astrophysics Data System (ADS)

    Kundu, Arpan; Alrefae, Majed A.; Fisher, Timothy S.

    2017-03-01

    Using a semiclassical Boltzmann transport equation approach, we derive analytical expressions for electric and thermoelectric transport coefficients of graphene in the presence and absence of a magnetic field. Scattering due to acoustic phonons, charged impurities, and vacancies is considered in the model. Seebeck (Sxx) and Nernst (N) coefficients are evaluated as functions of carrier density, temperature, scatterer concentration, magnetic field, and induced band gap, and the results are compared to experimental data. Sxx is an odd function of Fermi energy, while N is an even function, as observed in experiments. The peak values of both coefficients are found to increase with the decreasing scatterer concentration and increasing temperature. Furthermore, opening a band gap decreases N but increases Sxx. Applying a magnetic field introduces an asymmetry in the variation of Sxx with Fermi energy across the Dirac point. The formalism is more accurate and computationally efficient than the conventional Green's function approach used to model transport coefficients and can be used to explore transport properties of other materials with Dirac cones such as Weyl semimetals.

  9. A partial entropic lattice Boltzmann MHD simulation of the Orszag-Tang vortex

    NASA Astrophysics Data System (ADS)

    Flint, Christopher; Vahala, George

    2018-02-01

    Karlin has introduced an analytically determined entropic lattice Boltzmann (LB) algorithm for Navier-Stokes turbulence. Here, this is partially extended to an LB model of magnetohydrodynamics, on using the vector distribution function approach of Dellar for the magnetic field (which is permitted to have field reversal). The partial entropic algorithm is benchmarked successfully against standard simulations of the Orszag-Tang vortex [Orszag, S.A.; Tang, C.M. J. Fluid Mech. 1979, 90 (1), 129-143].

  10. Material design and structural color inspired by biomimetic approach

    PubMed Central

    Saito, Akira

    2011-01-01

    Generation of structural color is one of the essential functions realized by living organisms, and its industrial reproduction can result in numerous applications. From this viewpoint, the mechanisms, materials, analytical methods and fabrication technologies of the structural color are reviewed in this paper. In particular, the basic principles of natural photonic materials, the ideas developed from these principles, the directions of applications and practical industrial realizations are presented by summarizing the recent research results. PMID:27877459

  11. An Analytical Framework for Soft and Hard Data Fusion: A Dempster-Shafer Belief Theoretic Approach

    DTIC Science & Technology

    2012-08-01

    fusion. Therefore, we provide a detailed discussion on uncertain data types, their origins and three uncertainty processing formalisms that are popular...suitable membership functions corresponding to the fuzzy sets. 3.2.3 DS Theory The DS belief theory, originally proposed by Dempster, can be thought of as... originated and various imperfections of the source. Uncertainty handling formalisms provide techniques for modeling and working with these uncertain data types
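    Dempster's rule of combination, the core operation of the DS belief theory mentioned in this report, can be implemented in a few lines; the frame and mass assignments below are purely illustrative:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to disjoint hypotheses
    # Normalize by the non-conflicting mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sources over a frame {red, green}; the numbers are illustrative
A, B = frozenset({"red"}), frozenset({"green"})
theta = A | B                            # the full frame (total ignorance)
m1 = {A: 0.6, theta: 0.4}
m2 = {A: 0.5, B: 0.3, theta: 0.2}
m = dempster_combine(m1, m2)
print(m)
```

The combined masses sum to one, and agreement between the sources concentrates belief on the hypothesis they share.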

  12. Measurement of nanoscale three-dimensional diffusion in the interior of living cells by STED-FCS.

    PubMed

    Lanzanò, Luca; Scipioni, Lorenzo; Di Bona, Melody; Bianchini, Paolo; Bizzarri, Ranieri; Cardarelli, Francesco; Diaspro, Alberto; Vicidomini, Giuseppe

    2017-07-06

The observation of molecular diffusion at different spatial scales, and in particular below the optical diffraction limit (<200 nm), can reveal details of the subcellular topology and its functional organization. Stimulated-emission depletion microscopy (STED) has been previously combined with fluorescence correlation spectroscopy (FCS) to investigate nanoscale diffusion (STED-FCS). However, STED-FCS has only been used successfully to reveal functional organization in two-dimensional space, such as the plasma membrane, while an efficient implementation for measurements in three-dimensional space, such as the cellular interior, is still lacking. Here we integrate the STED-FCS method with two analytical approaches, the recent separation of photons by lifetime tuning and fluorescence lifetime correlation spectroscopy, to simultaneously probe diffusion in three dimensions at different sub-diffraction scales. We demonstrate that this method efficiently provides measurement of the diffusion of EGFP at spatial scales tunable from the diffraction size down to ∼80 nm in the cytoplasm of living cells. The measurement of molecular diffusion at sub-diffraction scales has been achieved in 2D space using STED-FCS, but an implementation for 3D diffusion is lacking. Here the authors present an analytical approach to probe diffusion in 3D space using STED-FCS and measure the diffusion of EGFP at different spatial scales.

  13. Of cuts and cracks: data analytics on constrained graphs for early prediction of failure in cementitious materials

    NASA Astrophysics Data System (ADS)

    Kahagalage, Sanath; Tordesillas, Antoinette; Nitka, Michał; Tejchman, Jacek

    2017-06-01

    Using data from discrete element simulations, we develop a data analytics approach using network flow theory to study force transmission and failure in a `dog-bone' concrete specimen submitted to uniaxial tension. With this approach, we establish the extent to which the bottlenecks, i.e., a subset of contacts that impedes flow and are prone to becoming overloaded, can predict the location of the ultimate macro-crack. At the heart of this analysis is a capacity function that quantifies, in relative terms, the maximum force that can be transmitted through the different contacts or edges in the network. Here we set this function to be solely governed by the size of the contact area between the deformable spherical grains. During all the initial stages of the loading history, when no bonds are broken, we find the bottlenecks coincide consistently with, and therefore predict, the location of the crack that later forms in the failure regime after peak force. When bonds do start to break, they are spread throughout the specimen: in, near, and far from, the bottlenecks. In one stage leading up to peak force, bonds collectively break in the lower portion of the specimen, momentarily shifting the bottlenecks to this location. Just before and around peak force, however, the bottlenecks return to their original location and remain there until the macro-crack emerges right along the bottlenecks.
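The bottleneck notion above, a subset of contacts whose capacities limit how much force can pass, corresponds to a minimum cut in network flow theory. A minimal sketch of that idea (not the authors' code; the toy graph and capacities are invented) finds the maximum flow with Edmonds-Karp and then reads off the saturated cut edges:

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along shortest residual paths.
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # smallest residual capacity along the augmenting path
        push, v = float('inf'), t
        while v != s:
            u = parent[v]
            push = min(push, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += push
            flow[v][u] -= push
            v = u
        total += push
    # min-cut (bottleneck) edges: from the residual-reachable side out
    reachable = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reachable and cap[u][v] - flow[u][v] > 0:
                reachable.add(v)
                q.append(v)
    cut = [(u, v) for u in reachable for v in range(n)
           if v not in reachable and cap[u][v] > 0]
    return total, cut

# Toy contact network: node 0 = loading boundary (source), node 3 = sink.
# Edge capacities stand in for the contact-area-based capacity function.
C = [[0, 10, 10, 0],
     [0, 0, 1, 8],
     [0, 0, 0, 2],
     [0, 0, 0, 0]]
value, bottleneck = max_flow(C, 0, 3)
```

Here the cut edges (1, 3) and (2, 3) are the "bottlenecks": they carry the full flow of 10 and would be the first contacts to become overloaded.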

  14. Aquifer response to stream-stage and recharge variations. II. Convolution method and applications

    USGS Publications Warehouse

    Barlow, P.M.; DeSimone, L.A.; Moench, A.F.

    2000-01-01

    In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to streamstage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. 
The streambank-leakance parameter may be considered to be a general (or lumped) parameter that accounts not only for the resistance of flow at the river-aquifer boundary, but also for the effects of partial penetration of the river and other near-stream flow phenomena not included in the theoretical development of the step-response functions.
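The convolution method amounts to superposing step responses: the head change at time t is the sum of all past stage increments, each scaled by the step-response function evaluated over the elapsed time. A minimal sketch (the exponential step response and the stage increments are invented for illustration, not taken from the paper):

```python
import math

# Hypothetical unit step response of aquifer head to a unit rise in
# stream stage (dimensionless): an exponential approach to equilibrium.
def step_response(t, tau=5.0):
    return 1.0 - math.exp(-t / tau) if t >= 0 else 0.0

# Stream-stage record as increments dH[k] (metres) applied at day k.
dH = [0.5, 0.0, -0.2, 0.0, 0.3]

# Superposition (discrete convolution): sum every past increment
# weighted by the step response for the time elapsed since it began.
def head_change(t):
    return sum(dh * step_response(t - k) for k, dh in enumerate(dH))

h10 = head_change(10.0)  # aquifer head change at day 10
```

As t grows, head_change(t) approaches the net stage change (0.6 m here), mirroring the long-time equilibration of the aquifer with the stream.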

  15. Dissociable meta-analytic brain networks contribute to coordinated emotional processing.

    PubMed

    Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R

    2018-06-01

Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli is integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.
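Hierarchical clustering of experiment-wise activation maps can be sketched with SciPy; the two well-separated synthetic "maps" below (toy 6-voxel vectors) merely stand in for the BrainMap modeled activation data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-ins for modeled activation maps: each row is one
# experiment's voxel-wise activation vector (toy 6-voxel space).
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[1, 1, 1, 0, 0, 0], scale=0.1, size=(10, 6))
group_b = rng.normal(loc=[0, 0, 0, 1, 1, 1], scale=0.1, size=(10, 6))
maps = np.vstack([group_a, group_b])

# Ward linkage on the experiment-by-voxel matrix, then cut the
# dendrogram into two meta-analytic groupings of experiments.
Z = linkage(maps, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

In practice the number of groupings (five in the study) is chosen by inspecting the dendrogram and cluster-quality metrics rather than fixed in advance.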

  16. Chemical clocks, oscillations, and other temporal effects in analytical chemistry: oddity or viable approach?

    PubMed

    Prabhu, Gurpur Rakesh D; Witek, Henryk A; Urban, Pawel L

    2018-05-31

Most analytical methods are based on "analogue" inputs from sensors of light, electric potentials, or currents. The signals obtained by such sensors are processed using certain calibration functions to determine concentrations of the target analytes. The signal readouts are normally done after an optimised and fixed time period, during which an assay mixture is incubated. This minireview covers another, and somewhat unusual, analytical strategy, which relies on the measurement of the time interval between the occurrences of two distinguishable states in the assay reaction. These states manifest themselves via abrupt changes in the properties of the assay mixture (e.g. change of colour, appearance or disappearance of luminescence, change in pH, variations in optical activity or mechanical properties). In some cases, a correlation between the time of appearance/disappearance of a given property and the analyte concentration can also be observed. An example of an assay based on time measurement is an oscillating reaction, in which the period of oscillations is linked to the concentration of the target analyte. A number of chemo-chronometric assays, relying on existing (bio)transformations or artificially designed reactions, were disclosed in the past few years. They are very attractive from the fundamental point of view but, so far, only a few of them have been validated and used to address real-world problems. Then, can chemo-chronometric assays become a practical tool for chemical analysis? Is there a need for further development of such assays? We are aiming to answer these questions.
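The time-measurement strategy can be sketched with a hypothetical first-order colour-change assay whose rate is proportional to analyte concentration; all constants and the functional form below are invented for illustration:

```python
import math

# Hypothetical chemo-chronometric assay: a first-order colour change
# whose rate scales with analyte concentration C, S(t) = 1 - exp(-k*C*t).
K = 0.2          # assumed rate constant per (concentration * time) unit
THRESHOLD = 0.5  # signal level at which the colour change is "seen"

def time_to_threshold(conc, dt=0.001):
    # Simulated stopwatch: step time until the signal crosses threshold.
    t = 0.0
    while 1.0 - math.exp(-K * conc * t) < THRESHOLD:
        t += dt
    return t

def concentration_from_time(t_star):
    # Inverted calibration function: C = -ln(1 - S*) / (k * t*).
    return -math.log(1.0 - THRESHOLD) / (K * t_star)

t_star = time_to_threshold(2.0)     # measured time interval
c_est = concentration_from_time(t_star)  # recovered concentration
```

The key feature of such assays is visible here: the measured quantity is a time interval, and the calibration curve maps time (inversely) to concentration.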

  17. Single-analyte to multianalyte fluorescence sensors

    NASA Astrophysics Data System (ADS)

    Lavigne, John J.; Metzger, Axel; Niikura, Kenichi; Cabell, Larry A.; Savoy, Steven M.; Yoo, J. S.; McDevitt, John T.; Neikirk, Dean P.; Shear, Jason B.; Anslyn, Eric V.

    1999-05-01

The rational design of small molecules for the selective complexation of analytes has reached a level of sophistication such that there exists a high degree of predictability. An effective strategy for transforming these hosts into sensors involves covalently attaching a fluorophore to the receptor which displays some fluorescence modulation when analyte is bound. Competition methods, such as those used with antibodies, are also amenable to these synthetic receptors, yet there are few examples. In our laboratories, the use of common dyes in competition assays with small molecules has proven very effective. For example, an assay for citrate in beverages and an assay for the secondary messenger IP3 in cells have been developed. Another approach we have explored focuses on multi-analyte sensor arrays in an attempt to mimic the mammalian sense of taste. Our system utilizes polymer resin beads with the desired sensors covalently attached. These functionalized microspheres are then immobilized into micromachined wells on a silicon chip, thereby creating our "taste buds". Exposure of the resin to analyte causes a change in the transmittance of the bead. This change can be fluorescent or colorimetric. Optical interrogation of the microspheres, by illuminating from one side of the wafer and collecting the signal on the other, results in an image. These data streams are collected using a CCD camera, which creates red, green and blue (RGB) patterns that are distinct and reproducible for their environments. Analysis of this data can identify and quantify the analytes present.

  18. The right-hand side of the Jacobi identity: to be naught or not to be ?

    NASA Astrophysics Data System (ADS)

    Kiselev, Arthemy V.

    2016-01-01

The geometric approach to iterated variations of local functionals, e.g. of the (master-)action functional, resulted in an extension of the deformation quantisation technique to the set-up of Poisson models of field theory. It also allowed for a rigorous proof of the main inter-relations between the Batalin-Vilkovisky (BV) Laplacian Δ and the variational Schouten bracket [,]. The ad hoc use of these relations had been a known analytic difficulty in the BV-formalism for quantisation of gauge systems; now achieved, the proof does actually not require the assumption of graded-commutativity. Explained in our previous work, geometry's self-regularisation is rendered by Gel'fand's calculus of singular linear integral operators supported on the diagonal. We now illustrate that analytic technique by inspecting the validity mechanism for the graded Jacobi identity which the variational Schouten bracket does satisfy (whence Δ² = 0, i.e., the BV-Laplacian is a differential acting in the algebra of local functionals). By using one tuple of three variational multi-vectors twice, we contrast the new logic of iterated variations, in which the right-hand side of Jacobi's identity vanishes altogether, with the old method: interlacing its steps and stops, it could produce some non-zero representative of the trivial class in the top-degree horizontal cohomology. But we then show at once by an elementary counterexample why, in the frames of the old approach that did not rely on Gel'fand's calculus, the BV-Laplacian failed to be a graded derivation of the variational Schouten bracket.

  19. Kramers problem: Numerical Wiener-Hopf-like model characteristics

    NASA Astrophysics Data System (ADS)

    Ezin, A. N.; Samgin, A. L.

    2010-11-01

Since the Kramers problem cannot be, in general, solved in terms of elementary functions, various numerical techniques or approximate methods must be employed. We present a study of characteristics for a particle in a damped well, which can be considered as a discretized version of the Melnikov turnover theory [Phys. Rev. E 48, 3271 (1993)]. The main goal is to justify the direct computational scheme for the basic Wiener-Hopf model. In contrast to the Melnikov approach, which implements factorization through a Cauchy-theorem-based formulation, we employ the Wiener-Levy theorem to reduce the Kramers problem to a Wiener-Hopf sum equation written in terms of Toeplitz matrices. This can provide a stringent test of the reliability of analytic approximations for energy distribution functions occurring in the Kramers problem at arbitrary damping. Under certain conditions, the simulated characteristics compare well with those determined using the conventional Fourier-integral formulas, but they may differ slightly depending on the value of a dissipation parameter. Another important feature is that, with our method, we can avoid some complications inherent to the Melnikov method. The calculational technique reported in the present paper may gain particular importance in situations where the energy losses of the particle to the bath are a complex-shaped function of the particle energy and analytic solutions of desired accuracy are not at hand. In order to appreciate more readily the significance and scope of the present numerical approach, we also discuss concrete aspects relating to the field of superionic conductors.
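A Toeplitz-structured linear system of the kind a Wiener-Hopf sum equation reduces to (the kernel depends only on the index difference) can be solved efficiently with SciPy's Levinson-recursion-based solver; the kernel values below are placeholders, not the paper's kernel:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Toy Toeplitz system T x = b, standing in for a discretized
# Wiener-Hopf sum equation.
c = np.array([4.0, 1.0, 0.5, 0.25])   # first column of T
r = np.array([4.0, 0.8, 0.4, 0.2])    # first row of T
b = np.array([1.0, 2.0, 3.0, 4.0])

x = solve_toeplitz((c, r), b)          # fast Levinson-type solve
residual = np.linalg.norm(toeplitz(c, r) @ x - b)
```

The Levinson solve costs O(n²) rather than the O(n³) of a generic dense solve, which matters when the discretization of the energy axis is fine.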

  20. Aptamer-based microfluidic beads array sensor for simultaneous detection of multiple analytes employing multienzyme-linked nanoparticle amplification and quantum dots labels.

    PubMed

    Zhang, He; Hu, Xinjiang; Fu, Xin

    2014-07-15

This study reports the development of an aptamer-mediated microfluidic beads-based sensor for the detection and quantification of multiple analytes using multienzyme-linked nanoparticle amplification and quantum dots labels. Adenosine and cocaine were selected as the model analytes to validate the assay design based on strand displacement induced by target-aptamer complex formation. Microbeads functionalized with the aptamers and modified electron-rich proteins were arrayed within a microfluidic channel and were connected with the horseradish peroxidase (HRP) and capture DNA probe derivative gold nanoparticles (AuNPs) via hybridization. The conformational transition of the aptamer induced by the target-aptamer complex contributes to the displacement of functionalized AuNPs and decreases the fluorescence signal of the microbeads. In this approach, increased binding events of HRP on each nanosphere and the enhanced mass transport capability inherent to microfluidics are integrated to enhance the detection sensitivity for the analytes. Based on the dual signal amplification strategy, the developed aptamer-based microfluidic bead array sensor could detect as little as 0.1 pM adenosine and 0.5 pM cocaine, and showed a 500-fold improvement in the detection limit for adenosine compared to the off-chip test. The results proved that the microfluidic-based method was a rapid and efficient system for aptamer-based target assays (adenosine (0.1 pM) and cocaine (0.5 pM)), requiring only minimal (microliter) reagent use. This work demonstrated the successful application of an aptamer-based microfluidic beads array sensor for detection of important molecules in biomedical fields. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. An Analytical Calibration Approach for the Polarimetric Airborne C Band Radiometer

    NASA Technical Reports Server (NTRS)

    Pham, Hanh; Kim, Edward J.

    2004-01-01

    Passive microwave remote sensing is sensitive to the quantity and distribution of water in soil and vegetation. During summer 2000, the Microwave Geophysics Group at the University of Michigan conducted the 7th Radiobrightness Energy Balance Experiment (REBEX-7) over a corn canopy in Michigan. Long time series of brightness temperatures, soil moisture and micrometeorology on the plot scale were taken. This paper addresses the calibration of the NASA GSFC polarimetric airborne C band microwave radiometer (ACMR) that participated in REBEX-7. Passive polarimeters are typically calibrated using an end-to-end approach based upon a standard artificial target or a well-known geophysical target. Analyzing the major internal functional subsystems offers a different perspective. The primary goal of this approach is to provide a transfer function that not only describes the system in its entirety but also accounts for the contributions of each subsystem toward the final modified Stokes parameters. This approach also serves as a realistic instrument simulator, a useful tool for future designs. The ACMR architecture can be partitioned into several functional subsystems. Each subsystem was extensively measured and the estimated parameters were imported into the overall system model. We will present the results of polarimetric antenna measurements, the instrument model as well as four Stokes observations from REBEX-7 using a first order inversion.

  2. Path integral approach to the Wigner representation of canonical density operators for discrete systems coupled to harmonic baths.

    PubMed

    Montoya-Castillo, Andrés; Reichman, David R

    2017-01-14

We derive a semi-analytical form for the Wigner transform for the canonical density operator of a discrete system coupled to a harmonic bath based on the path integral expansion of the Boltzmann factor. The introduction of this simple and controllable approach allows for the exact rendering of the canonical distribution and permits systematic convergence of static properties with respect to the number of path integral steps. In addition, the expressions derived here provide an exact and facile interface with quasi- and semi-classical dynamical methods, which enables the direct calculation of equilibrium time correlation functions within a wide array of approaches. We demonstrate that the present method represents a practical path for the calculation of thermodynamic data for the spin-boson and related systems. We illustrate the power of the present approach by detailing the improvement of the quality of Ehrenfest theory for the correlation function C_zz(t) = Re⟨σ_z(0)σ_z(t)⟩ for the spin-boson model with systematic convergence to the exact sampling function. Importantly, the numerically exact nature of the scheme presented here and its compatibility with semiclassical methods allows for the systematic testing of commonly used approximations for the Wigner-transformed canonical density.

3. Grade 8 students' capability of analytical thinking and attitude toward science through teaching and learning about soil and its pollution based on the science, technology and society (STS) approach

    NASA Astrophysics Data System (ADS)

    Boonprasert, Lapisarin; Tupsai, Jiraporn; Yuenyong, Chokchai

    2018-01-01

This study reported Grade 8 students' analytical thinking and attitude toward science in teaching and learning about soil and its pollution through the science, technology and society (STS) approach. The participants were 36 Grade 8 students in Naklang, Nongbualumphu, Thailand. The teaching and learning about soil and its pollution through the STS approach was carried out over 6 weeks. The soil and its pollution unit was developed based on the framework of Yuenyong (2006), which consists of five stages: (1) identification of social issues, (2) identification of potential solutions, (3) need for knowledge, (4) decision-making, and (5) socialization. Students' analytical thinking and attitude toward science were assessed during their learning through participant observation, an analytical thinking test, students' tasks, and journal writing. The findings revealed that students could improve their capability of analytical thinking. They could generate ideas and exhibit characteristics of analytical thinking such as classifying, comparing and contrasting, reasoning, interpreting, collecting data, and decision making. Students' journal writing reflected that the STS class on soil and its pollution motivated them. The paper discusses the implications for science teaching and learning through STS in Thailand.

  4. Characterization and prediction of the backscattered form function of an immersed cylindrical shell using hybrid fuzzy clustering and bio-inspired algorithms.

    PubMed

    Agounad, Said; Aassif, El Houcein; Khandouch, Younes; Maze, Gérard; Décultot, Dominique

    2018-02-01

The acoustic scattering of a plane wave by an elastic cylindrical shell is studied. A new approach is developed to predict the form function of an immersed cylindrical shell of radius ratio b/a ('b' is the inner radius and 'a' is the outer radius). The prediction of the backscattered form function is investigated by a combined approach between fuzzy clustering algorithms and bio-inspired algorithms. Four well-known fuzzy clustering algorithms, the fuzzy c-means (FCM), the Gustafson-Kessel algorithm (GK), the fuzzy c-regression model (FCRM) and the Gath-Geva algorithm (GG), are combined with particle swarm optimization and a genetic algorithm. The symmetric and antisymmetric circumferential waves A, S_0, A_1, S_1 and S_2 are investigated in a reduced frequency (k_1a) range that extends over 0.1

  5. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
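One standard construction of the Padua points (there are four rotated/reflected families; this sketch uses one of them, so treat the parity convention as an assumption) makes the claimed measurement saving concrete: for degree n there are (n+1)(n+2)/2 points, roughly half the (n+1)² points of an equidistant grid of comparable resolution:

```python
import math

def padua_points(n):
    # One standard family: points of the Chebyshev-Lobatto product grid
    # (cos(j*pi/n), cos(k*pi/(n+1))) whose index sum j + k is even.
    return [(math.cos(j * math.pi / n), math.cos(k * math.pi / (n + 1)))
            for j in range(n + 1)
            for k in range(n + 2)
            if (j + k) % 2 == 0]

n = 6
pts = padua_points(n)
n_padua = len(pts)          # (n+1)(n+2)/2 sampling points
n_grid = (n + 1) * (n + 1)  # regular grid at the same degree
```

All points lie in [-1, 1]², the natural domain for the bivariate Chebyshev interpolation underlying the tomographic reconstruction.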

  6. Electron momentum density and Compton profile by a semi-empirical approach

    NASA Astrophysics Data System (ADS)

    Aguiar, Julio C.; Mitnik, Darío; Di Rocco, Héctor O.

    2015-08-01

    Here we propose a semi-empirical approach to describe with good accuracy the electron momentum densities and Compton profiles for a wide range of pure crystalline metals. In the present approach, we use an experimental Compton profile to fit an analytical expression for the momentum densities of the valence electrons. This expression is similar to a Fermi-Dirac distribution function with two parameters, one of which coincides with the ground state kinetic energy of the free-electron gas and the other resembles the electron-electron interaction energy. In the proposed scheme conduction electrons are neither completely free nor completely bound to the atomic nucleus. This procedure allows us to include correlation effects. We tested the approach for all metals with Z=3-50 and showed the results for three representative elements: Li, Be and Al from high-resolution experiments.
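The two-parameter fit can be sketched with a Fermi-Dirac-like trial form; the precise functional form, parameter names, and values below are illustrative assumptions, not the authors' exact expression or data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Fermi-Dirac-like form for the valence-electron momentum
# density: rho(p) = 1 / (exp((p**2/2 - mu)/w) + 1), where mu plays the
# role of the free-electron kinetic-energy scale and w mimics an
# electron-electron interaction broadening.
def rho(p, mu, w):
    return 1.0 / (np.exp((p**2 / 2.0 - mu) / w) + 1.0)

# Synthetic "experimental" momentum density with known parameters.
p = np.linspace(0.0, 2.0, 80)
data = rho(p, 0.5, 0.05)

# Least-squares fit of the two parameters to the measured density.
(mu_fit, w_fit), _ = curve_fit(rho, p, data, p0=[0.4, 0.1])
```

In the semi-empirical scheme the fitted density would then be convolved or integrated to reproduce the Compton profile; here the fit alone is shown.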

7. Modeling Renewable Penetration Using a Network Economic Model

    NASA Astrophysics Data System (ADS)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging prices and quantity information between the nodes of the system. This formulation is very flexible and takes very little time to build and modify models. This paper reports an experiment designing a system with photovoltaic and base and peak fossil generators. The level of PV penetration as a function of its price and the capacities of the fossil generators were determined using the network approach and using an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
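The price-and-quantity exchange between nodes can be caricatured by a simple price-adjustment (tâtonnement) loop with linear demand and supply curves; the curves and step size below are invented for illustration and are not the paper's model:

```python
# A market node adjusts price until quantity demanded matches quantity
# supplied; demand and generator nodes respond with their quantities.
def demand(p):
    return 100.0 - 2.0 * p   # assumed demand-node response

def supply(p):
    return 10.0 + 1.0 * p    # assumed generator-node response

price, step = 1.0, 0.1
for _ in range(2000):
    gap = demand(price) - supply(price)
    price += step * gap       # raise price when demand exceeds supply
    if abs(gap) < 1e-9:
        break

equilibrium_q = supply(price)
```

With these curves the iteration is a contraction and converges to the analytic equilibrium (price 30, quantity 40), mirroring the paper's finding that the network iteration and the exact solution agree closely.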

  8. Path-integral approach to the Wigner-Kirkwood expansion.

    PubMed

    Jizba, Petr; Zatloukal, Václav

    2014-01-01

    We study the high-temperature behavior of quantum-mechanical path integrals. Starting from the Feynman-Kac formula, we derive a functional representation of the Wigner-Kirkwood perturbation expansion for quantum Boltzmann densities. As shown by its applications to different potentials, the presented expansion turns out to be quite efficient in generating analytic form of the higher-order expansion coefficients. To put some flesh on the bare bones, we apply the expansion to obtain basic thermodynamic functions of the one-dimensional anharmonic oscillator. Further salient issues, such as generalization to the Bloch density matrix and comparison with the more customary world-line formulation, are discussed.

  9. A meshless method using radial basis functions for numerical solution of the two-dimensional KdV-Burgers equation

    NASA Astrophysics Data System (ADS)

    Zabihi, F.; Saffarian, M.

    2016-07-01

The aim of this article is to obtain the numerical solution of the two-dimensional KdV-Burgers equation. We construct the solution using a different approach, based on collocation points. The solution uses the thin plate spline radial basis function, which builds an approximate solution by discretizing time and space into small steps. We use a predictor-corrector scheme to avoid solving the nonlinear system. The results of numerical experiments are compared with analytical solutions to confirm the accuracy and efficiency of the presented scheme.
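The building block of such a collocation scheme, a thin-plate-spline radial-basis representation of a field from scattered points, can be sketched with SciPy (the test function and point sets are invented; the paper's time stepping and PDE terms are omitted):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered collocation points in the unit square (2-D domain).
rng = np.random.default_rng(1)
centres = rng.uniform(-1.0, 1.0, size=(200, 2))

def field(xy):
    # Smooth test function standing in for one time level of the solution.
    return np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])

# Thin-plate-spline RBF interpolant through the collocation values.
interp = RBFInterpolator(centres, field(centres),
                         kernel="thin_plate_spline")

# Evaluate away from the collocation points to check the approximation.
test_pts = rng.uniform(-0.8, 0.8, size=(50, 2))
err = np.max(np.abs(interp(test_pts) - field(test_pts)))
```

In the actual method, spatial derivatives of the KdV-Burgers equation would be applied to this RBF representation at each predictor-corrector time step.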

  10. Development of an efficient signal amplification strategy for label-free enzyme immunoassay using two site-specific biotinylated recombinant proteins.

    PubMed

    Tang, Jin-Bao; Tang, Ying; Yang, Hong-Ming

    2015-02-15

Constructing a recombinant protein between a reporter enzyme and a detector protein to produce a homogeneous immunological reagent is advantageous over random chemical conjugation. However, the approach can hardly combine multiple enzymes in a difunctional fusion protein, which results in insufficient amplification of the enzymatic signal and thereby limits further enhancement of the analytical signal. In this study, two site-specific biotinylated recombinant proteins, namely, divalent biotinylated alkaline phosphatase (AP) and monovalent biotinylated ZZ domain, were produced by employing the Avitag-BirA system. Through the high-affinity streptavidin (SA)-biotin interaction, the divalent biotinylated APs were clustered in the SA-biotin complex and then incorporated with the biotinylated ZZ. This incorporation results in the formation of a functional macromolecule that involves numerous APs, thereby enhancing the enzymatic signal, and in the production of several ZZ molecules for the interaction with immunoglobulin G (IgG) antibody. The advantage of this signal amplification strategy is demonstrated through ELISA, in which the analytical signal was substantially enhanced, with a 32-fold increase in detection sensitivity compared with the ZZ-AP fusion protein approach. The proposed immunoassay without chemical modification can be an alternative strategy to enhance the analytical signals in various applications involving immunosensors and diagnostic chips, given that the label-free IgG antibody is suitable for the ZZ protein. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. First-principles studies of PETN molecular crystal vibrational frequencies under high pressure

    NASA Astrophysics Data System (ADS)

    Perger, Warren; Zhao, Jijun

    2005-07-01

The vibrational frequencies of the PETN molecular crystal were calculated using the first-principles CRYSTAL03 program, which employs an all-electron LCAO approach and calculates analytic first derivatives of the total energy with respect to atomic displacements. Numerical second derivatives were used to enable calculation of the vibrational frequencies at ambient pressure and under various states of compression. Three different density functionals, B3LYP, PW91, and X3LYP, were used to examine the effect of the exchange-correlation functional on the vibrational frequencies. The pressure-induced shift of the vibrational frequencies will be presented and compared with experiment. The average deviation from experimental results is shown to be on the order of 2-3%, depending on the functional used.

  12. Determination of localization accuracy based on experimentally acquired image sets: applications to single molecule microscopy

    PubMed Central

    Tahmasbi, Amir; Ward, E. Sally; Ober, Raimund J.

    2015-01-01

    Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function. PMID:25837101
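
    The spline-based route to the Cramér-Rao lower bound can be sketched in one dimension (the simulated Gaussian profile, pixel grid, and photon count below are illustrative assumptions standing in for an experimentally acquired PSF): fit a spline to the sampled image, then use its derivative in the Poisson-noise Fisher information for the source location.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Stand-in for an experimentally collected PSF profile: a sampled Gaussian
x = np.linspace(-3.0, 3.0, 61)                 # pixel centers
sigma = 1.0
psf_samples = np.exp(-x**2 / (2 * sigma**2))
psf = CubicSpline(x, psf_samples)              # continuous spline model

def crlb_location(x0, n_photons=1000.0):
    # Expected pixel counts and their derivative w.r.t. source position
    norm = psf_samples.sum()
    mu = n_photons * psf(x - x0) / norm
    dmu = -n_photons * psf(x - x0, 1) / norm   # spline first derivative
    info = np.sum(dmu**2 / mu)                 # Fisher information (Poisson)
    return 1.0 / np.sqrt(info)                 # best possible accuracy

print(crlb_location(0.0))
```

    For a Gaussian PSF this reproduces the familiar sigma/sqrt(N) scaling; the point of the spline is that the same machinery works when no analytical image model is available.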

  13. Equivalent-circuit models for electret-based vibration energy harvesters

    NASA Astrophysics Data System (ADS)

    Phu Le, Cuong; Halvorsen, Einar

    2017-08-01

    This paper presents a complete analysis for building a tool to model electret-based vibration energy harvesters. The computational approach includes all possible effects of fringing fields that may have a significant impact on output power. The transducer configuration consists of two sets of metal strip electrodes on a top substrate that faces electret strips deposited on a bottom movable substrate functioning as a proof mass. The charge distribution on each metal strip is expressed by a series expansion in Chebyshev polynomials multiplied by a reciprocal square-root factor. The Galerkin method is then applied to extract all charge induction coefficients. The approach is validated by finite element calculations. With the analytic tool, a variety of connection schemes for power extraction in slot-effect and cross-wafer configurations can be lumped into a standard equivalent circuit that includes parasitic capacitance. Fast calculation of the coefficients is also obtained by a proposed closed-form solution based on the leading terms of the series expansions. This analytical result is an important step towards further optimising the transducer geometry and maximising harvester performance.
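
    The charge-density ansatz can be sketched numerically (the expansion coefficients below are hypothetical; in the paper they come from the Galerkin solve of the electrostatic problem): on a normalized strip x in (-1, 1), sigma(x) = sum_n a_n T_n(x) / sqrt(1 - x^2), where the reciprocal square-root factor captures the edge singularity of the charge on a conducting strip.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

a = np.array([0.8, 0.0, -0.15, 0.05])   # hypothetical coefficients a_n

def sigma(x):
    # Chebyshev series times the reciprocal square-root edge factor
    return chebval(x, a) / np.sqrt(1.0 - x**2)

# Total charge on the strip: Gauss-Chebyshev quadrature absorbs the
# 1/sqrt(1-x^2) weight exactly. By orthogonality only the T_0 term
# carries net charge, so Q = pi * a_0.
n = 64
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
Q = (np.pi / n) * np.sum(chebval(nodes, a))
print(Q, np.pi * a[0])   # the two values agree
```

    The same weighted-polynomial machinery is what makes the Galerkin projections, and hence the charge induction coefficients, fast to evaluate.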

  14. Joint nonlinearity effects in the design of a flexible truss structure control system

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1986-01-01

    Nonlinear effects are introduced into the dynamics of large space truss structures by the connecting joints, which are designed with relatively large tolerances to facilitate assembly of the structures in space. The purpose was to develop means of investigating the nonlinear dynamics of the structures, particularly the limit cycles that might occur when active control is applied. An analytical method was sought and derived to predict the occurrence of limit cycles and to determine their stability. This method is based mainly on the quasi-linearization of every joint using describing functions. The approach proved successful when simple dynamical systems were tested. Its applicability to larger systems depends on the amount of computation it requires, and estimates of the computational task indicate that the number of individual sources of nonlinearity should be limited. Alternate analytical approaches that do not account for every single nonlinearity, or simulation of a simplified model of the dynamical system, should therefore be investigated to determine a more effective way of predicting limit cycles in large dynamical systems with many distributed nonlinearities.
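
    The quasi-linearization step can be sketched for a generic memoryless joint nonlinearity. A dead-zone (free-play) model is used here as an illustrative assumption, not the report's actual joint model: the describing function is the first-harmonic gain of the nonlinearity for a sinusoidal input of amplitude A.

```python
import numpy as np

def deadzone(x, delta=0.2):
    # Joint free-play model: no output inside the +/-delta band
    return np.where(np.abs(x) <= delta, 0.0, x - np.sign(x) * delta)

def describing_function(f, A, n=4096):
    # N(A) = (1/(pi*A)) * integral over 0..2pi of f(A sin t) sin t dt,
    # evaluated with the rectangle rule on a periodic grid
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return (2.0 / (n * A)) * np.sum(f(A * np.sin(t)) * np.sin(t))

def deadzone_df_exact(A, delta=0.2):
    # Closed-form describing function of the dead-zone (valid for A > delta)
    r = delta / A
    return 1.0 - (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r**2))

A = 1.0
print(describing_function(deadzone, A), deadzone_df_exact(A))
```

    Replacing each joint by its amplitude-dependent gain N(A) is what turns the limit-cycle search into a harmonic-balance condition on the linearized loop.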

  15. Data analytics using canonical correlation analysis and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles

    2017-07-01

    A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional-reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively small number of combinations of variables that are maximally correlated. One shortcoming of canonical correlation analysis, however, is that it provides only linear combinations of variables that maximize these correlations. With this in mind, we describe here a versatile, Monte Carlo-based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
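
    The strategy can be sketched as follows (the synthetic data and the power-transform family below are illustrative assumptions, not the paper's materials data): compute the first canonical correlation between inputs and outputs, then use Monte Carlo sampling over candidate nonlinear transforms of the inputs to search for combinations that correlate more strongly.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_canonical_corr(X, Y):
    # Largest singular value of the whitened cross-covariance matrix
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Lx = np.linalg.cholesky(Xc.T @ Xc)
    Ly = np.linalg.cholesky(Yc.T @ Yc)
    M = np.linalg.inv(Lx) @ (Xc.T @ Yc) @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False)[0]

# Synthetic data: the response depends on the SQUARE of the first input,
# so the linear canonical correlation is weak
n = 500
X = rng.normal(size=(n, 2))
Y = X[:, :1] ** 2 + 0.1 * rng.normal(size=(n, 1))
linear_r = first_canonical_corr(X, Y)

# Monte Carlo over candidate exponents applied element-wise to |X|
best_r = linear_r
for _ in range(200):
    p = rng.uniform(0.5, 3.0)
    r = first_canonical_corr(np.abs(X) ** p, Y)
    best_r = max(best_r, r)

print(linear_r, best_r)   # the sampled transforms recover a strong correlation
```

    The sampled transform family here is a one-parameter power law for brevity; the same loop applies to any parameterized library of nonlinear functions of the inputs.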

  16. Reducing post analytical error: perspectives on new formats for the blood sciences pathology report.

    PubMed

    O'Connor, John D

    2015-02-01

    Little has changed in the way we report pathology results from blood sciences over the last 50 years, other than the move from paper to electronic display. In part this reflects an aspiration to preserve the format of the paper report in electronic form; it is also due to the limitations of electronic media in displaying the data. The advancement of web-based technologies and the functionality of hand-held devices, together with wireless and other technologies, afford the opportunity to rethink data presentation with the aim of emphasising the message in the data, thereby modifying clinical behaviours and potentially reducing post-analytical error. This article takes the form of a commentary which explores new developments in the field of infographics and, together with examples, suggests some new approaches to turning what is currently just data into information. The combination of graphics and a new approach to provocative interpretative commenting offers a powerful tool for improving pathology utilisation. An additional challenge is the requirement to consider how pathology reports may be issued directly to patients.

  18. Chemical imaging of drug delivery systems with structured surfaces-a combined analytical approach of confocal raman microscopy and optical profilometry.

    PubMed

    Kann, Birthe; Windbergs, Maike

    2013-04-01

    Confocal Raman microscopy is an analytical technique with a steadily increasing impact in the field of pharmaceutics, as its instrumental setup allows nondestructive visualization of component distributions within drug delivery systems. Here, attention is mainly focused on classic solid carrier systems such as tablets, pellets, or extrudates. Owing to the opacity of these systems, Raman analysis is restricted either to exterior surfaces or to cross sections. As Raman spectra are recorded from only one focal plane at a time, the sample is usually altered to create a smooth, even surface. However, this manipulation can lead to misinterpretation of the analytical results. Here, we present a trendsetting approach to overcome these analytical pitfalls by combining confocal Raman microscopy with optical profilometry. By acquiring a topography profile of the sample area of interest prior to Raman spectroscopy, the profile height information allows the focal plane to be levelled to the sample surface for each spectrum acquisition. We first demonstrated the basic principle of this complementary approach in a case study using a tilted silica wafer. In a second step, we successfully adapted the two techniques to investigate an extrudate and a lyophilisate as two exemplary solid drug carrier systems. Component distribution analysis with the novel analytical approach was hampered neither by the curvature of the cylindrical extrudate nor by the highly structured surface of the lyophilisate. Therefore, the combined analytical approach bears great potential for implementation in diversified fields of pharmaceutical sciences.
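
    The leveling step can be sketched as a small data-handling exercise (the height map, grid, and depth offset below are illustrative assumptions): the profilometry topography supplies a per-point z-position so that each Raman spectrum is acquired at a fixed depth relative to the actual, possibly curved or structured, surface.

```python
import numpy as np

# Hypothetical surface height map from optical profilometry (micrometers),
# one value per planned Raman acquisition point
topography = np.array([[0.0, 1.2, 2.5],
                       [0.3, 1.5, 2.9]])

def focal_positions(height_map, depth_offset=0.0):
    # Stage z-position for each (x, y) point: follow the surface profile,
    # optionally probing a constant depth below it
    return height_map - depth_offset

z_plan = focal_positions(topography, depth_offset=0.5)
print(z_plan)
```

    Without this correction, a single fixed focal plane would intersect a tilted or structured sample at varying depths, which is exactly the artifact the combined approach avoids.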

  19. Electron scattering from excited states of hydrogen: Implications for the ionization threshold law

    NASA Astrophysics Data System (ADS)

    Temkin, A.; Shertzer, J.

    2013-05-01

    The elastic scattering wave function for electrons scattered from the Nth excited state of hydrogen is the final state of the matrix element for excitation of that state. This paper deals with the solution of that problem primarily in the context of the Temkin-Poet (TP) model [A. Temkin, Phys. Rev. 126, 130 (1962); R. Poet, J. Phys. B 11, 3081 (1978)], wherein only the radial parts of the interaction are included. The relevant potential for the outer electron is dominated by the Hartree potential, VNH(r). In the first part of the paper, VNH(r) is approximated by a potential WN(r), for which the scattering equation can be analytically solved. The results allow formal analytic continuation of N into the continuum, so that the ionization threshold law can be deduced. Because the analytic continuation involves going from N to an imaginary function of the momentum of the inner electron, the threshold law turns out to be an exponentially damped function of the available energy E, in qualitative accord with the result of Macek and Ihra [J. H. Macek and W. Ihra, Phys. Rev. A 55, 2024 (1997)] for the TP model. Thereafter, the scattering equation for the Hartree potential VNH(r) is solved numerically. The numerical aspects of these calculations have proven challenging and required several developments before the difficulties could be overcome. The results for VNH(r) show only a simple energy-dependent shift from the approximate potential WN(r), which therefore does not change the analytic continuation or the form of the threshold law. It is concluded that the relevant optical potential must be included in order to compare directly with the analytic result of Macek and Ihra. The paper concludes with discussions of (a) a quantum mechanical interpretation of the result and (b) the outlook of this approach for the complete problem.

  20. The Challenge of Understanding Process in Clinical Behavior Analysis: The Case of Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Follette, William C.; Bonow, Jordan T.

    2009-01-01

    Whether explicitly acknowledged or not, behavior-analytic principles are at the heart of most, if not all, empirically supported therapies. However, the change process in psychotherapy is only now being rigorously studied. Functional analytic psychotherapy (FAP; Kohlenberg & Tsai, 1991; Tsai et al., 2009) explicitly identifies behavioral-change…
