Culture-Sensitive Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Vandenberghe, L.
2008-01-01
Functional analytic psychotherapy (FAP) is defined as behavior-analytically conceptualized talk therapy. In contrast to the technique-oriented educational format of cognitive behavior therapy and the use of structural mediational models, FAP depends on the functional analysis of the moment-to-moment stream of interactions between client and…
NASA Astrophysics Data System (ADS)
García, Isaac A.; Llibre, Jaume; Maza, Susanna
2018-06-01
In this work we consider real analytic functions f(x, λ, ε), where x ranges over a bounded open subset Ω of ℝⁿ, λ are parameters, and ε is a small parameter taking values in an interval containing the origin. We study the branching of the zero-set of f at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations, using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of the displacement function in powers of ε the averaged functions. The main contribution consists in analyzing the role played by the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros fall into two different classes depending on whether or not they belong to the analytic set defined by the real variety associated to the ideal generated by the averaged functions in the Noetherian ring of all the real analytic functions at z₀. We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z₀. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results; they are compared with the classical theory and branching theory, and also considered in the light of the singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.
Analytic complexity of functions of two variables
NASA Astrophysics Data System (ADS)
Beloshapka, V. K.
2007-09-01
The definition of analytic complexity of an analytic function of two variables is given. It is proved that the class of functions of a chosen complexity is a differential-algebraic set. A differential polynomial defining the functions of first class is constructed. An algorithm for obtaining relations defining an arbitrary class is described. Examples of functions are given whose order of complexity is equal to zero, one, two, and infinity. It is shown that the formal order of complexity of the Cardano and Ferrari formulas is significantly higher than their analytic complexity. The complexity classes turn out to be invariant with respect to a certain infinite-dimensional transformation pseudogroup. In this connection, we describe the orbits of the action of this pseudogroup in the jets of orders one, two, and three. The notion of complexity order is extended to plane (or "planar") 3-webs. It is discovered that webs of complexity order one are the hexagonal webs. Some problems are posed.
NASA Technical Reports Server (NTRS)
Martin, E. Dale
1989-01-01
The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable were defined and the definition of the derivative then led to analytic functions.
Some subclasses of multivalent functions involving a certain linear operator
NASA Astrophysics Data System (ADS)
Srivastava, H. M.; Patel, J.
2005-10-01
The authors investigate various inclusion and other properties of several subclasses of the class of normalized p-valent analytic functions in the open unit disk, which are defined here by means of a certain linear operator. Problems involving generalized neighborhoods of analytic functions in the class are investigated. Finally, some applications of fractional calculus operators are considered.
NASA Astrophysics Data System (ADS)
Asai, Kazuto
2009-02-01
We determine essentially all partial differential equations satisfied by superpositions of tree type and of a further special type. These equations represent necessary and sufficient conditions for an analytic function to be locally expressible as an analytic superposition of the type indicated. The representability of a real analytic function by a superposition of this type is independent of whether that superposition involves real-analytic functions or C^ρ-functions, where the constant ρ is determined by the structure of the superposition. We also prove that the function u defined by uⁿ = xuᵃ + yuᵇ + zuᶜ + 1 is generally non-representable in any real (resp. complex) domain as f(g(x, y), h(y, z)) with twice differentiable f and differentiable g, h (resp. analytic f, g, h).
A system of three-dimensional complex variables
NASA Technical Reports Server (NTRS)
Martin, E. Dale
1986-01-01
Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.
Some classes of analytic functions involving Noor integral operator
NASA Astrophysics Data System (ADS)
Patel, J.; Cho, N. E.
2005-12-01
The object of the present paper is to investigate some inclusion properties of certain subclasses of analytic functions defined by using the Noor integral operator. The integral preserving properties in connection with the operator are also considered. Relevant connections of the results presented here with those obtained in earlier works are pointed out.
NASA Astrophysics Data System (ADS)
Vjačeslavov, N. S.
1980-02-01
In this paper estimates are found for R_n(f)_{L_p} — the least deviation in the L_p-metric, 0 < p ≤ ∞, of a piecewise analytic function f from the rational functions of degree at most n. It is shown that these estimates are sharp in a well-defined sense. Bibliography: 12 titles.
"Analytical" vector-functions I
NASA Astrophysics Data System (ADS)
Todorov, Vladimir Todorov
2017-12-01
In this note we give a new (or different) approach to the investigation of analytic vector functions. More precisely, a notion of a power xⁿ, n ∈ ℕ⁺, of a vector x ∈ ℝ³ is introduced, which allows one to define an "analytical" function f : ℝ³ → ℝ³. Let furthermore f(ξ) = Σ_{n=0}^∞ aₙξⁿ be an analytic function of the real variable ξ. Here we replace the power ξⁿ of the number ξ with the power of a vector x ∈ ℝ³ to obtain a vector "power series" f(x) = Σ_{n=0}^∞ aₙxⁿ. We study some properties of this vector series as well as some applications of the idea. Note that an "analytical" vector function does not depend on any basis, which may be useful in research into some problems in physics.
ERIC Educational Resources Information Center
Kjeldsen, Tinne Hoff; Lützen, Jesper
2015-01-01
In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another…
Gómez Rioja, Rubén; Martínez Espartosa, Débora; Segovia, Marta; Ibarz, Mercedes; Llopis, María Antonia; Bauça, Josep Miquel; Marzana, Itziar; Barba, Nuria; Ventura, Monserrat; García Del Pino, Isabel; Puente, Juan José; Caballero, Andrea; Gómez, Carolina; García Álvarez, Ana; Alsina, María Jesús; Álvarez, Virtudes
2018-05-05
The stability limit of an analyte in a biological sample can be defined as the time required until a measured property acquires a bias higher than a defined specification. Many studies assessing stability and presenting recommendations of stability limits are available, but differences among them are frequent. The aim of this study was to classify and to grade a set of bibliographic studies on the stability of five common blood measurands and subsequently generate a consensus stability function. First, a bibliographic search was made for stability studies for five analytes in blood: alanine aminotransferase (ALT), glucose, phosphorus, potassium and prostate-specific antigen (PSA). The quality of every study was evaluated using an in-house grading tool. Second, the different conditions of stability were uniformly defined, and the percent deviations (PD%) over time for each analyte and condition were plotted together, pooling studies with similar conditions. From the 37 articles considered as valid, up to 130 experiments were evaluated and 629 PD% data were included (106 for ALT, 180 for glucose, 113 for phosphorus, 145 for potassium and 85 for PSA). Consensus stability equations were established for glucose, potassium, phosphorus and PSA, but not for ALT. Time is the main variable affecting stability in medical laboratory samples. Bibliographic studies differ in recommendations of stability limits mainly because of different specifications for maximum allowable error. Definition of a consensus stability function in specific conditions can help laboratories define stability limits using their own quality specifications.
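The stability-limit definition above lends itself to a one-line computation: given a consensus drift function (here assumed linear in time, with a hypothetical drift coefficient, not a value from the study) and a laboratory's own maximum allowable error, the limit is the time at which |PD%| crosses the specification. A minimal sketch:

```python
def stability_limit(drift_pct_per_hour, max_allowable_error_pct):
    """Hours until |PD%(t)| = |drift| * t exceeds the allowable error,
    assuming a linear consensus drift PD%(t) = drift * t (hypothetical
    coefficient, not one of the paper's fitted equations)."""
    return max_allowable_error_pct / abs(drift_pct_per_hour)

# e.g. a drift of -0.5 %/h against a 3 % allowable-error specification
print(stability_limit(-0.5, 3.0))  # -> 6.0 hours
```

A laboratory with a stricter 1.5 % specification would simply halve the limit under the same drift.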
An Analytic Approach to Projectile Motion in a Linear Resisting Medium
ERIC Educational Resources Information Center
Stewart, Sean M.
2006-01-01
The time of flight, range and the angle which maximizes the range of a projectile in a linear resisting medium are expressed in analytic form in terms of the recently defined Lambert W function. From the closed-form solutions a number of results characteristic to the motion of the projectile in a linear resisting medium are analytically confirmed,…
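The closed-form time of flight can be checked numerically. The sketch below solves y(T) = 0 for motion with drag acceleration proportional to velocity (coefficient k), using the standard Lambert-W expression for linear resistance; the Newton-iteration evaluation of W and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def lambert_w(x, tol=1e-12):
    # principal branch of W (w * exp(w) = x) via Newton iteration,
    # valid for x >= -1/e
    w = 0.0 if x < 1.0 else math.log(x)
    for _ in range(100):
        ew = math.exp(w)
        w_next = w - (w * ew - x) / (ew * (w + 1.0))
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w

def flight_time(v0y, k, g=9.81):
    """Time of flight for launch speed component v0y upward, with linear
    drag a = -k v: T = (alpha + W(-alpha e^{-alpha}))/k, alpha = (k v0y + g)/g."""
    alpha = (k * v0y + g) / g
    return (alpha + lambert_w(-alpha * math.exp(-alpha))) / k

# verify the closed form against the trajectory equation y(T) = 0
v0y, k, g = 30.0, 0.2, 9.81
T = flight_time(v0y, k, g)
B = v0y + g / k
y = (B / k) * (1.0 - math.exp(-k * T)) - g * T / k
print(abs(y) < 1e-6)  # -> True
```

Setting k → 0 recovers the familiar vacuum result T = 2·v0y/g in the limit.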
An Analytic Solution for Surface Source Sigma Z Calculations.
1981-01-01
Discusses the diabatic influence function (ψ) and the dimensionless temperature gradient as functions of stability under inversion conditions. For the unstable regime, the wind profile takes the form u = u*(ln(z/z₀) − ψ)/k, with the diabatic influence function as defined by Paulson (1970).
Analytic properties for the honeycomb lattice Green function at the origin
NASA Astrophysics Data System (ADS)
Joyce, G. S.
2018-05-01
The analytic properties of the honeycomb lattice Green function G(w) are investigated, where w is a complex variable which lies in a plane. This double integral defines a single-valued analytic function provided that a cut is made along the real axis starting at w = −3. In order to analyse the behaviour of G(w) along the edges of the cut it is convenient to define the associated limit functions. It is shown that these limit functions can be evaluated exactly in terms of various hypergeometric functions, where the argument function is always real-valued and rational. The second-order linear Fuchsian differential equation satisfied by G(w) is also used to derive series expansions which are valid in the neighbourhood of the regular singular points. Integral representations are established; in particular, an exact formula is proved involving J₀(z) and Y₀(z), the Bessel functions of the first and second kind, respectively. The results derived in the paper are utilized to evaluate the associated logarithmic integral, where w lies in the cut plane. A new set of orthogonal polynomials connected with the honeycomb lattice Green function is also briefly discussed. Finally, a link between G(w) and the theory of Pearson random walks in a plane is established.
On B-type Open-Closed Landau-Ginzburg Theories Defined on Calabi-Yau Stein Manifolds
NASA Astrophysics Data System (ADS)
Babalic, Elena Mirela; Doryn, Dmitry; Lazaroiu, Calin Iuliu; Tavakol, Mehdi
2018-05-01
We consider the bulk algebra and topological D-brane category arising from the differential model of the open-closed B-type topological Landau-Ginzburg theory defined by a pair (X,W), where X is a non-compact Calabi-Yau manifold and W is a complex-valued holomorphic function. When X is a Stein manifold (but not restricted to be a domain of holomorphy), we extract equivalent descriptions of the bulk algebra and of the category of topological D-branes which are constructed using only the analytic space associated to X. In particular, we show that the D-brane category is described by projective factorizations defined over the ring of holomorphic functions of X. We also discuss simplifications of the analytic models which arise when X is holomorphically parallelizable and illustrate these in a few classes of examples.
Student Career Decisions: The Limits of Rationality.
ERIC Educational Resources Information Center
Baumgardner, Steve R.; Rappoport, Leon
This study compares modes of cognitive functioning revealed in student selection of a college major. Students were interviewed in-depth concerning reasons for their choice of majors. Protocol data suggested two distinct modes of thinking were evident on an analytic-intuitive dimension. For operational purposes analytic thinking was defined by…
Wang, Huai-Song; Song, Min; Hang, Tai-Jun
2016-02-10
The high-value applications of functional polymers in analytical science generally require well-defined interfaces, including precisely synthesized molecular architectures and compositions. Controlled/living radical polymerization (CRP) has been developed as a versatile and powerful tool for the preparation of polymers with narrow molecular weight distributions and predetermined molecular weights. Among CRP systems, atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain transfer (RAFT) are widely used to develop new materials for analytical science, such as surface-modified core-shell particles, monoliths, MIP micro- or nanospheres, fluorescent nanoparticles, and multifunctional materials. In this review, we summarize the emerging functional interfaces constructed by RAFT and ATRP for applications in analytical science. Various polymers with precisely controlled architectures, including homopolymers, block copolymers, molecularly imprinted copolymers, and grafted copolymers, were synthesized by CRP methods for molecular separation, retention, or sensing. We expect that CRP methods will become the most popular technique for preparing functional polymers that can be broadly applied in analytical chemistry.
Transfer function concept for ultrasonic characterization of material microstructures
NASA Technical Reports Server (NTRS)
Vary, A.; Kautz, H. E.
1986-01-01
The approach given depends on treating material microstructures as elastomechanical filters that have analytically definable transfer functions. These transfer functions can be defined in terms of the frequency dependence of the ultrasonic attenuation coefficient. The transfer function concept provides a basis for synthesizing expressions that characterize polycrystalline materials relative to microstructural factors such as mean grain size, grain-size distribution functions, and grain boundary energy transmission. Although the approach is nonrigorous, it leads to a rational basis for combining the previously mentioned diverse and fragmented equations for ultrasonic attenuation coefficients.
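The filter analogy above can be made concrete: treat the microstructure as a system whose magnitude response over a path follows from a frequency-dependent attenuation coefficient. The power law below (an absorption term plus a Rayleigh-scattering term) and its coefficients are assumptions for the sketch, not relations from the paper:

```python
import math

def transfer_function(f_mhz, path_cm, a2=1e-3, a4=1e-5):
    """Magnitude response exp(-alpha(f) * x) of a material treated as an
    elastomechanical filter, with an assumed attenuation law
    alpha(f) = a2*f^2 + a4*f^4 (absorption + Rayleigh scattering)."""
    alpha = a2 * f_mhz**2 + a4 * f_mhz**4
    return math.exp(-alpha * path_cm)

# higher frequencies are attenuated more strongly over the same path,
# which is what makes the response diagnostic of grain size
print(transfer_function(5.0, 10.0) < transfer_function(1.0, 10.0))  # -> True
```

In practice the exponents and coefficients would be fitted per material, which is precisely the microstructural characterization the abstract describes.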
A simple, analytical, axisymmetric microburst model for downdraft estimation
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1991-01-01
A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high fidelity analytical model and measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.
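The role of the shaping functions can be illustrated with a toy axisymmetric model: choose a horizontal wind profile u(r, z) peaking at a given radius, then recover the vertical wind from the mass continuity equation (1/r)∂(ru)/∂r + ∂w/∂z = 0. The shaping functions and parameters below are assumptions for illustration, not Vicroy's published ones:

```python
import math

R_M, Z_M, U_MAX = 1000.0, 200.0, 20.0  # assumed peak radius, decay scale, peak wind

def u(r, z):
    """Assumed radial outflow: peaks at r = R_M, decays with altitude."""
    return U_MAX * (r / R_M) * math.exp(1.0 - r / R_M) * math.exp(-z / Z_M)

def w(r, z, n=200):
    """Vertical wind from continuity: w(r,z) = -int_0^z (1/r) d(ru)/dr dz'
    (central difference in r, trapezoidal rule in z)."""
    dr = 1e-3
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zi = i * dz
        div = ((r + dr) * u(r + dr, zi)
               - (r - dr) * u(r - dr, zi)) / (2.0 * dr * r)
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * div * dz
    return -total

# inside the outflow peak the horizontal flow diverges, so continuity
# forces a downdraft (w < 0) aloft
print(w(100.0, 300.0) < 0.0)  # -> True
```

This is exactly the estimation direction the abstract describes: horizontal wind measurements in, vertical wind out.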
ERIC Educational Resources Information Center
Ranga, Marina; Etzkowitz, Henry
2013-01-01
This paper introduces the concept of Triple Helix systems as an analytical construct that synthesizes the key features of university--industry--government (Triple Helix) interactions into an "innovation system" format, defined according to systems theory as a set of components, relationships and functions. Among the components of Triple…
Analyzing Security Breaches in the U.S.: A Business Analytics Case-Study
ERIC Educational Resources Information Center
Parks, Rachida F.; Adams, Lascelles
2016-01-01
This is a real-world applicable case-study and includes background information, functional organization requirements, and real data. Business analytics has been defined as the technologies, skills, and practices needed to iteratively investigate historical performance to gain insight or spot trends. You are asked to utilize/apply critical thinking…
Some properties for integro-differential operator defined by a fractional formal.
Abdulnaby, Zainab E; Ibrahim, Rabha W; Kılıçman, Adem
2016-01-01
Recently, the study of the fractional formal (operators, polynomials and classes of special functions) has increased. This study has grown not only within mathematics but has extended to other topics. In this effort, we investigate a generalized integro-differential operator [Formula: see text] defined by a fractional formal (fractional differential operator) and study some of its geometric properties by employing it in new subclasses of analytic univalent functions.
Polynomial Asymptotes of the Second Kind
ERIC Educational Resources Information Center
Dobbs, David E.
2011-01-01
This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and…
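For rational functions the polynomial asymptote is simply the quotient in polynomial long division, since the remainder term tends to zero at infinity. A small sketch (the coefficient lists are illustrative):

```python
def poly_divmod(num, den):
    """Divide polynomials given as coefficient lists (highest degree first);
    returns (quotient, remainder)."""
    num = num[:]  # work on a copy
    q = []
    while len(num) >= len(den):
        coeff = num[0] / den[0]
        q.append(coeff)
        for i, d in enumerate(den):
            num[i] -= coeff * d
        num.pop(0)
    return q, num

# f(x) = (x^3 + 2x^2 + 5) / (x + 1): the quotient x^2 + x - 1 is the
# polynomial asymptote, since the leftover 6/(x+1) -> 0 as x -> infinity
q, r = poly_divmod([1.0, 2.0, 0.0, 5.0], [1.0, 1.0])
print(q, r)  # -> [1.0, 1.0, -1.0] [6.0]
```

The same quotient-plus-vanishing-remainder decomposition is what the geometric definition of an asymptote captures for these functions.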
NASA Astrophysics Data System (ADS)
Yildiz, Ismet; Uyanik, Neslihan; Albayrak, Hilal; Ay, Hilal
2017-09-01
The Weierstrass zeta function is not elliptic, but it is of great use in developing the theory of elliptic functions. It is defined by the double series ζ(z) = Σ′ₘΣ″ₙ {1/(z − W_mn) + 1/W_mn + z/W_mn²}, where W_mn = 2mω₁ + 2nω₂ and m, n are integers, not simultaneously zero; the summation extends over all such integers, and the W_mn are lattice points. Evidently the W_mn are simple poles of ζ(z), and hence the function is meromorphic on W = {mω₁ + nω₂ : (m, n) ≠ (0, 0), m, n ∈ ℤ}, with Im τ > 0, D* = {z : |z| > 1, |Re z| < 1/2}, and z ∈ ℂ. ζ(z) is a uniformly convergent series of analytic functions, so the series can be differentiated term by term. ζ(z) is an odd function; hence the coefficients of the terms z^{2k} are evidently zero when k is a positive integer. Let A be the class of functions f(z) which are analytic and normalized with f(0) = 0 and f′(0) = 1, and let S be the subclass of A consisting of functions f(z) which are univalent in D. The theory is largely concerned with this family S of functions f analytic and univalent in the unit disk D and satisfying the conditions f(0) = 0 and f′(0) = 1. One of its basic results is the growth theorem, which asserts in part a sharp bound for each f ∈ S; in particular, the functions f ∈ S are uniformly bounded on each compact subset of D. Thus the family S is locally bounded, and so by Montel's theorem it is a normal family. A relation is established between the class S and the Weierstrass function, which is analytic and meromorphic, and the close-to-P class in the unit disk.
Analytical approach for the fractional differential equations by using the extended tanh method
NASA Astrophysics Data System (ADS)
Pandir, Yusuf; Yildirim, Ayse
2018-07-01
In this study, we consider analytical solutions of the space-time fractional foam drainage equation, the nonlinear Korteweg-de Vries equation with time- and space-fractional derivatives, and the time-fractional reaction-diffusion equation by using the extended tanh method. The fractional derivatives are defined in the modified Riemann-Liouville context. As a result, various exact analytical solutions consisting of trigonometric function solutions, kink-shaped soliton solutions and new exact solitary wave solutions are obtained.
On the Relativistic Separable Functions for the Breakup Reactions
NASA Astrophysics Data System (ADS)
Bondarenko, Serge G.; Burov, Valery V.; Rogochaya, Elena P.
2018-02-01
In the paper the so-called modified Yamaguchi function for the Bethe-Salpeter equation with a separable kernel is discussed. The type of the functions is defined by the analytic structure of the hadron current with breakup, i.e., reactions with an interacting nucleon-nucleon pair in the final state (electro-, photo-, and nucleon-disintegration of the deuteron).
Analytical study of fractional equations describing anomalous diffusion of energetic particles
NASA Astrophysics Data System (ADS)
Tawfik, A. M.; Fichtner, H.; Schlickeiser, R.; Elhanbaly, A.
2017-06-01
To present the main influence of anomalous diffusion on the energetic particle propagation, the fractional derivative model of transport is developed by deriving the fractional modified Telegraph and Rayleigh equations. Analytical solutions of the fractional modified Telegraph and the fractional Rayleigh equations, which are defined in terms of Caputo fractional derivatives, are obtained by using the Laplace transform and the Mittag-Leffler function method. The solutions of these fractional equations are given in terms of special functions like Fox’s H, Mittag-Leffler, Hermite and Hyper-geometric functions. The predicted travelling pulse solutions are discussed in each case for different values of fractional order.
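The Mittag-Leffler function appearing in these solutions is straightforward to evaluate from its defining series E_α(z) = Σ_{k≥0} z^k / Γ(αk + 1). A truncated-series sketch with two classical sanity checks (E₁(z) = e^z and E₂(−z²) = cos z):

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """Truncated series E_alpha(z) = sum_{k=0}^{terms-1} z^k / Gamma(alpha*k + 1)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

# classical special cases: alpha = 1 gives exp, alpha = 2 gives cosh(sqrt(z)),
# so E_2(-z^2) = cos(z)
print(abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12)          # -> True
print(abs(mittag_leffler(2.0, -4.0) - math.cos(2.0)) < 1e-12)  # -> True
```

For fractional α between 0 and 1, E_α(−t^α) interpolates between these familiar kernels and the stretched-exponential relaxation typical of anomalous diffusion.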
BLUES function method in computational physics
NASA Astrophysics Data System (ADS)
Indekeu, Joseph O.; Müller-Nedebock, Kristian K.
2018-04-01
We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
A new method for constructing analytic elements for groundwater flow.
NASA Astrophysics Data System (ADS)
Strack, O. D.
2007-12-01
The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain, and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41, 2/1005, 2003, by O.D.L. Strack). The majority of these analytic elements consists of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons: first, it requires the existence of the elementary solutions, and, second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented generalizes this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach will be applied, along with numerical examples, to the modified Helmholtz equation and to the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
Free and Forced Vibrations of Thick-Walled Anisotropic Cylindrical Shells
NASA Astrophysics Data System (ADS)
Marchuk, A. V.; Gnedash, S. V.; Levkovskii, S. A.
2017-03-01
Two approaches to studying the free and forced axisymmetric vibrations of cylindrical shells are proposed. They are based on the three-dimensional theory of elasticity and division of the original cylindrical shell, by concentric cross-sectional circles, into several coaxial cylindrical shells. One approach uses linear polynomials to approximate functions defined in plan and across the thickness. The other approach also uses linear polynomials to approximate functions defined in plan, but their variation with thickness is described by the analytical solution of a system of differential equations. Both approaches have approximation and arithmetic errors. When determining the natural frequencies by the semi-analytical finite-element method in combination with the divide-and-conquer method, it is convenient to find the initial frequencies by the finite-element method. The behavior of the shell during free and forced vibrations is analyzed in the case where the loading area is half the shell thickness.
Chandrasekhar equations for infinite dimensional systems
NASA Technical Reports Server (NTRS)
Ito, K.; Powers, R. K.
1985-01-01
Chandrasekhar equations are derived for linear time invariant systems defined on Hilbert spaces using a functional analytic technique. An important consequence of this is that the solution to the evolutional Riccati equation is strongly differentiable in time and one can define a strong solution of the Riccati differential equation. A detailed discussion on the linear quadratic optimal control problem for hereditary differential systems is also included.
Direct Shear Failure in Reinforced Concrete Beams under Impulsive Loading
1983-09-01
Notation includes differentiable functions of time, an elastic modulus enhancement function, constants for a given mode, and the first thickness-shear frequency; the governing relations are defined by linear partial differential equations. The analytic results are compared to data gathered on one-way slabs loaded with impulsive blast.
NASA Technical Reports Server (NTRS)
Mcnulty, J. F.
1974-01-01
An analysis of the history and background of the Mars Project Viking is presented. The organization and functions of the engineering group responsible for the project are defined. The design and configuration of the proposed space vehicle are examined. Illustrations and tables of data are provided to complete the coverage of the project.
dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2017-01-01
This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely, the 'Hannover Polynomial' (or "X-expansion") function and the 'Tang-Toennies' and Scoles-Aziz 'HFD' exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling, or 2Σ splitting radial strength functions for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the "sensitivity" of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment.
It will also, if requested, apply a "sequential rounding and refitting" procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.
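Of the potential forms listed, the Expanded Morse Oscillator is the simplest to sketch: a Morse-type curve whose exponent coefficient β is a polynomial in the reduced radial variable y_p(r). The parameter values below are illustrative stand-ins, not fitted constants from dPotFit:

```python
import math

def emo_potential(r, De, re, betas, p=3):
    """Expanded Morse Oscillator: V(r) = De * (1 - exp(-beta(r)*(r - re)))^2,
    with beta(r) = sum_i betas[i] * y_p(r)^i a polynomial in the reduced
    variable y_p = (r^p - re^p) / (r^p + re^p)."""
    y = (r**p - re**p) / (r**p + re**p)
    beta = sum(b * y**i for i, b in enumerate(betas))
    return De * (1.0 - math.exp(-beta * (r - re)))**2

# with a single expansion coefficient the EMO reduces to an ordinary
# Morse curve: zero at r_e, approaching the well depth De at large r
De, re = 4.75, 0.741  # illustrative H2-like well depth (eV) and bond length (Angstrom)
print(emo_potential(re, De, re, [1.9]))              # -> 0.0
print(round(emo_potential(50.0, De, re, [1.9]), 6))  # -> 4.75
```

Adding higher-order betas[i] terms lets the fitted exponent vary smoothly between the short-range and long-range limits, which is the point of the expansion.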
ERIC Educational Resources Information Center
Micceri, Theodore; Brigman, Leellen; Spatig, Robert
2009-01-01
An extensive, internally cross-validated analytical study using nested (within academic disciplines) Multilevel Modeling (MLM) on 4,560 students identified functional criteria for defining high school curriculum rigor and further determined which measures could best be used to help guide decision making for marginal applicants. The key outcome…
The shape parameter and its modification for defining coastal profiles
NASA Astrophysics Data System (ADS)
Türker, Umut; Kabdaşli, M. Sedat
2009-03-01
The shape parameter is important for the theoretical description of sandy coastal profiles. This parameter has previously been defined as a function of the sediment-settling velocity. However, the settling velocity cannot be characterized over a wide range of sediment grains, which in turn limits the calculation of the shape parameter over a wide range. This paper provides a simpler and faster analytical equation for the shape parameter. The validity of the equation has been tested and compared with previously estimated values given in both graphical and tabular forms. The results of this study indicate that the analytical solution of the shape parameter improves the usability of the profile description relative to graphical solutions, predicting better results both in the surf zone and offshore.
NASA Astrophysics Data System (ADS)
Moylan, Andrew; Scott, Susan M.; Searle, Anthony C.
2006-02-01
The software tool GRworkbench is an ongoing project in visual, numerical General Relativity at The Australian National University. Recently, GRworkbench has been significantly extended to facilitate numerical experimentation in analytically-defined space-times. The numerical differential geometric engine has been rewritten using functional programming techniques, enabling objects which are normally defined as functions in the formalism of differential geometry and General Relativity to be directly represented as function variables in the C++ code of GRworkbench. The new functional differential geometric engine allows for more accurate and efficient visualisation of objects in space-times and makes new, efficient computational techniques available. Motivated by the desire to investigate a recent scientific claim using GRworkbench, new tools for numerical experimentation have been implemented, allowing for the simulation of complex physical situations.
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors that define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns with linear transformations derived from the metric tensor yields a dataset that can be classified more efficiently. We test our methods on publicly available datasets with several standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without further preprocessing, our results improve on their performance.
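The preprocessing step described here is generic: given any positive-definite metric tensor G, factoring G = LᵀL lets plain Euclidean distance on the transformed patterns Lx reproduce the G-distance. A minimal sketch, where the choice of G (inverse pooled within-class covariance) is an illustrative stand-in, not Suvrel's analytic solution:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy two-class data: class 1 is class 0 shifted along the first axis.
X0 = rng.normal(0.0, 1.0, size=(50, 3))
X1 = X0 + np.array([3.0, 0.0, 0.0])
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Stand-in metric tensor: inverse pooled within-class covariance
# (an assumption for illustration -- Suvrel derives its own tensor).
Sw = sum(np.cov(X[y == c].T) for c in (0, 1)) / 2.0
G = np.linalg.inv(Sw)

# Factor G = L^T L so that ||L(a - b)||^2 = (a - b)^T G (a - b).
L = np.linalg.cholesky(G).T
Xt = X @ L.T  # preprocessed patterns; Euclidean distance now realizes G
```

After this transformation any off-the-shelf Euclidean-distance classifier implicitly uses the learned metric.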
Defining and Assessing Public Health Functions: A Global Analysis.
Martin-Moreno, Jose M; Harris, Meggan; Jakubowski, Elke; Kluge, Hans
2016-01-01
Given the broad scope and intersectoral nature of public health structures and practices, there are inherent difficulties in defining which services fall under the public health remit and in assessing their capacity and performance. The aim of this study is to analyze how public health functions and practice have been defined and operationalized in different countries and regions around the world, with a specific focus on assessment tools that have been developed to evaluate the performance of essential public health functions, services, and operations. Our review has identified nearly 100 countries that have carried out assessments, using diverse analytical and methodological approaches. The assessment processes have evolved quite differently according to administrative arrangements and resource availability, but some key contextual factors emerge that seem to favor policy-oriented follow-up. These include local ownership of the assessment process, policymakers' commitment to reform, and expert technical advice for implementation.
A Study of the U.S. Coast Guard Aviator Training Requirements.
ERIC Educational Resources Information Center
Hall, Eugene R.; And Others
An analytical study was conducted to define the functional characteristics of modern, synthetic flight training equipment, with the purpose of producing potentially better qualified aviators through a combination of aircraft and simulator training. Relevant training which aviators receive in preparation for specific aircraft duties and training requirements…
Keeping the Focus on Clinically Relevant Behavior: Supervision for Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Vandenberghe, Luc
2009-01-01
The challenges in supervising an experiential-interpersonal treatment like FAP are complex. The present paper addresses this complexity by describing three different supervision contexts. Each of these is defined in relation to specific supervisee needs: skills development, therapist difficulties, and skills integration. Each context supports…
On the distribution of local dissipation scales in turbulent flows
NASA Astrophysics Data System (ADS)
May, Ian; Morshed, Khandakar; Venayagamoorthy, Karan; Dasi, Lakshmi
2014-11-01
Universality of dissipation scales in turbulence relies on self-similar scaling and large-scale independence. We show that the probability density function of dissipation scales, Q(η), is analytically defined by the two-point correlation function and the Reynolds number (Re). We also present a new analytical form for the two-point correlation function for the dissipation scales through a generalized definition of a directional Taylor microscale. Comparison of Q(η) predicted within this framework with published DNS data shows excellent agreement. It is shown that for finite Re no single similarity law exists, even for the case of homogeneous isotropic turbulence. Instead, a family of scalings is presented, defined by Re and a dimensionless local inhomogeneity parameter based on the spatial gradient of the rms velocity. For moderate-Re inhomogeneous flows, we note a strong directional dependence of Q(η) dictated by the principal Reynolds stresses. It is shown that the mode of the distribution Q(η) shifts significantly to sub-Kolmogorov scales along the inhomogeneous directions, as in wall-bounded turbulence. This work extends the classical Kolmogorov theory to finite-Re homogeneous isotropic turbulence as well as to the case of inhomogeneous anisotropic turbulence.
SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, D; Spaans, J; Kumaraswamy, L
Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software (“MapCHECK” or “SNC Patient”). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with Measurement Uncertainty turned on. Results: For 3%/3 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8–1.1% on average, depending on plan type and calculation technique, for an average pass rate increase of 1.0–3.5% (maximum +8.7%). For 2%/2 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7–1.2% on average, for an average pass rate increase of 3.5–8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly matched planar dose comparisons; the Measurement Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user’s manual may not accurately estimate realistic measurement uncertainty for the user’s measurement conditions. Pass rates listed in published reports or otherwise compared to the results of other users or vendors should clearly indicate whether the Measurement Uncertainty function is used.
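The paper's "effective %Diff" idea — search for the tolerance that reproduces a given pass rate — can be sketched with a bisection. Everything below is a toy: the dose-difference-only pass criterion and the synthetic data are illustrative assumptions, not SNC's gamma/DTA analysis:

```python
import numpy as np

def pass_rate(measured, calculated, pct_tol):
    """Fraction of points whose percent difference (relative to the global
    max calculated dose) is within pct_tol. A simplified dose-difference-only
    criterion, not the vendor's full gamma/DTA analysis."""
    ref = np.max(calculated)
    diff_pct = 100.0 * np.abs(measured - calculated) / ref
    return np.mean(diff_pct <= pct_tol)

def effective_tolerance(measured, calculated, target_rate, lo=0.0, hi=10.0):
    """Bisect for the %Diff tolerance reproducing target_rate, mirroring
    the paper's 'effective %Diff' procedure (pass_rate is monotone in tol)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pass_rate(measured, calculated, mid) < target_rate:
            lo = mid
        else:
            hi = mid
    return hi

rng = np.random.default_rng(1)
calc = rng.uniform(50.0, 200.0, size=1000)
meas = calc * (1.0 + rng.normal(0.0, 0.02, size=1000))  # ~2% noise
tol = effective_tolerance(meas, calc, target_rate=0.95)
```

Comparing `tol` with the nominal user tolerance quantifies how much extra slack a pass-rate-inflating option effectively grants.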
Bounds of the error of Gauss-Turan-type quadratures
NASA Astrophysics Data System (ADS)
Milovanovic, Gradimir V.; Spalevic, Miodrag M.
2005-06-01
We consider the remainder term of the Gauss-Turan quadrature formulae for analytic functions in some region of the complex plane containing the interval [-1,1] in its interior. The remainder term is presented in the form of a contour integral over confocal ellipses or circles. A strong error analysis is given for the case of a generalized class of weight functions introduced recently by Gori and Micchelli. Also, we discuss the general case of an even weight function defined on [-1,1]. Numerical results are included.
Polynomial asymptotes of the second kind
NASA Astrophysics Data System (ADS)
Dobbs, David E.
2011-03-01
This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and conics. Prerequisites include the division algorithm for polynomials with coefficients in the field of real numbers and elementary facts about limits from calculus. This note could be used as enrichment material in courses ranging from Calculus to Real Analysis to Abstract Algebra.
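The note's central prerequisite, the division algorithm, already produces the polynomial asymptote of a rational function: writing p = quo·q + rem with deg(rem) < deg(q) gives f − quo = rem/q → 0 as |x| → ∞. A minimal sketch with a hypothetical example function:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# f(x) = (x^3 + 1) / (x - 1); coefficients in ascending degree order.
p = [1.0, 0.0, 0.0, 1.0]  # 1 + x^3
q = [-1.0, 1.0]           # -1 + x

# Division algorithm: p = quo * q + rem with deg(rem) < deg(q), so
# f(x) - quo(x) = rem(x)/q(x) -> 0 as |x| -> inf: quo is the
# polynomial asymptote of f.
quo, rem = P.polydiv(p, q)

x = 1.0e4
f_val = P.polyval(x, p) / P.polyval(x, q)
asym = P.polyval(x, quo)
gap = f_val - asym  # ~ rem/q, small for large |x|
```

Here the asymptote is x² + x + 1 with remainder 2, so the gap decays like 2/(x − 1).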
Optimum design of structures subject to general periodic loads
NASA Technical Reports Server (NTRS)
Reiss, Robert; Qian, B.
1989-01-01
A simplified version of Icerman's problem regarding the design of structures subject to a single harmonic load is discussed. The nature of the restrictive conditions that must be placed on the design space in order to ensure an analytic optimum is discussed in detail. Icerman's problem is then extended to include multiple forcing functions with different driving frequencies, and the conditions that must now be placed upon the design space to ensure an analytic optimum are again discussed. An important finding is that all solutions to the optimality condition (analytic stationary designs) are local optima, but the global optimum may well be non-analytic. The more general problem of distributing the fixed mass of a linear elastic structure subject to general periodic loads in order to minimize some measure of the steady-state deflection is also considered. This response is explicitly expressed in terms of the Green's function and the abstract operators defining the structure. The optimality criterion is derived by differentiating the response with respect to the design parameters. The theory is applicable to finite element as well as distributed parameter models.
Cutting Solid Figures by Plane--Analytical Solution and Spreadsheet Implementation
ERIC Educational Resources Information Center
Benacka, Jan
2012-01-01
In some secondary mathematics curricula, there is a topic called Stereometry that deals with investigating the position and finding the intersection, angle, and distance of lines and planes defined within a prism or pyramid. Coordinate system is not used. The metric tasks are solved using Pythagoras' theorem, trigonometric functions, and sine and…
Construction of RFIF using VVSFs with application
NASA Astrophysics Data System (ADS)
Katiyar, Kuldip; Prasad, Bhagwati
2017-10-01
A method of variable vertical scaling factors (VVSFs) is proposed to define the recurrent fractal interpolation function (RFIF) for fitting data sets. A generalization of one of the recent methods, using an analytic approach, is presented for finding variable vertical scaling factors. An application to the reconstruction of an EEG signal is also given.
Analytic solution of magnetic induction distribution of ideal hollow spherical field sources
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-12-01
The Halbach-type hollow spherical permanent magnet arrays (HSPMA) are volume-compacted, energy-efficient field sources capable of producing a multi-Tesla field in the cavity of the array, which have attracted intense interest for many practical applications. Here, we present analytical solutions of the magnetic induction of the ideal HSPMA in the entire space: outside the array, within the cavity of the array, and in the interior of the magnet. We obtain the solutions using the concept of magnetic charge to solve the Poisson and Laplace equations for the HSPMA. Using these analytical field expressions inside the material, a scalar demagnetization function is defined to approximately indicate the regions of magnetization reversal, partial demagnetization, and inverse magnetic saturation. The analytical field solution provides deeper insight into the nature of the HSPMA and offers guidance in designing an optimized one.
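The multi-Tesla claim is consistent with the standard textbook result for an ideal Halbach sphere, B = (4/3)·Br·ln(r_outer/r_inner) in the cavity. A minimal sketch assuming that well-known formula (stated independently of this paper's derivation):

```python
import math

def halbach_sphere_cavity_field(Br: float, r_outer: float, r_inner: float) -> float:
    """Uniform flux density (T) in the cavity of an ideal Halbach sphere:
    B = (4/3) * Br * ln(r_outer / r_inner). Standard ideal-sphere result,
    not taken from this paper's solutions."""
    return (4.0 / 3.0) * Br * math.log(r_outer / r_inner)

# NdFeB remanence ~1.4 T; a 10:1 radius ratio already exceeds 4 T.
B = halbach_sphere_cavity_field(1.4, 0.10, 0.01)
```

The logarithmic growth with radius ratio is why large cavity fields demand disproportionately bulky magnets, motivating the optimization guidance the paper offers.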
Module Architecture for in Situ Space Laboratories
NASA Technical Reports Server (NTRS)
Sherwood, Brent
2010-01-01
The paper analyzes internal outfitting architectures for space exploration laboratory modules. ISS laboratory architecture is examined as a baseline for comparison, and applicable insights are derived. Laboratory functional programs are defined for seven planet-surface knowledge domains. Necessary and value-added departures from the ISS architecture standard are defined, and three sectional interior architecture options are assessed for practicality and potential performance. Contemporary guidelines for terrestrial analytical laboratory design are found to be applicable to the in-space functional program. Dense-packed racks of system equipment, and high module volume packing ratios, should not be assumed as the default solution for exploration laboratories whose primary activities include un-scriptable investigations and experimentation on the system equipment itself.
Role of man in flight experiment payloads, phase 1. [Spacelab mission planning
NASA Technical Reports Server (NTRS)
Malone, T. B.; Kirkpatrick, M.
1974-01-01
The identification of required data for studies of Spacelab experiment functional allocation, the development of an approach to collecting these data from the payload community, and the specification of analytical methods necessary to quantitatively determine the role of man in specific Spacelab experiments are presented. A generalized Spacelab experiment operation sequence was developed, and the parameters necessary to describe each single function in the sequence were identified. A set of functional descriptor worksheets was also drawn up. The methodological approach to defining the role of man was defined as a series of trade studies using a digital simulation technique. The tradeoff variables identified include scientific crew size, skill mix, and location. An existing digital simulation program suitable for the required analyses was identified and obtained.
Differential memory in the trilinear model magnetotail
NASA Technical Reports Server (NTRS)
Chen, James; Mitchell, Horace G.; Palmadesso, Peter J.
1990-01-01
The previously proposed concept of 'differential memory' is quantitatively demonstrated using an idealized analytical model of particle dynamics in the magnetotail geometry. In this model (the 'trilinear' tail model) the magnetotail is divided into three regions. The particle orbits are solved exactly in each region, thus reducing the orbit integration to an analytical mapping. It is shown that the trilinear model reproduces the essential phase space features of the earlier model (Chen and Palmadesso, 1986), possessing well-defined entry and exit regions, and stochastic, integrable (regular), and transient orbits, occupying disjoint phase space regions. Different regions have widely separated characteristic time scales corresponding to different types of particle motion. Using the analytical model, the evolution of single-particle distribution functions is calculated.
Structure-function analysis of genetically defined neuronal populations.
Groh, Alexander; Krieger, Patrik
2013-10-01
Morphological and functional classification of individual neurons is a crucial aspect of the characterization of neuronal networks. Systematic structural and functional analysis of individual neurons is now possible using transgenic mice with genetically defined neurons that can be visualized in vivo or in brain slice preparations. Genetically defined neurons are useful for studying a particular class of neurons and also for more comprehensive studies of the neuronal content of a network. Specific subsets of neurons can be identified by fluorescence imaging of enhanced green fluorescent protein (eGFP) or another fluorophore expressed under the control of a cell-type-specific promoter. The advantages of such genetically defined neurons are not only their homogeneity and suitability for systematic descriptions of networks, but also their tremendous potential for cell-type-specific manipulation of neuronal networks in vivo. This article describes a selection of procedures for visualizing and studying the anatomy and physiology of genetically defined neurons in transgenic mice. We provide information about basic equipment, reagents, procedures, and analytical approaches for obtaining three-dimensional (3D) cell morphologies and determining the axonal input and output of genetically defined neurons. We exemplify with genetically labeled cortical neurons, but the procedures are applicable to other brain regions with little or no alterations.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
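The paper's piecewise treatment of divided differences near removable singularities is the same issue seen in the simplest exponential divided difference, φ₁(z) = (e^z − 1)/z = exp[0, z]: the closed form cancels catastrophically as z → 0, so a series is used in a small neighbourhood. A minimal sketch of that idea (the tolerance and series order are illustrative choices, not the paper's optimal approximation):

```python
import math

def phi1_naive(z: float) -> float:
    """First divided difference of exp at {0, z}: (e^z - 1)/z.
    Suffers catastrophic cancellation as z -> 0."""
    return (math.exp(z) - 1.0) / z if z != 0.0 else 1.0

def phi1_stable(z: float, tol: float = 1e-3) -> float:
    """Piecewise definition in the spirit of the paper: a truncated Taylor
    series inside a prescribed neighbourhood of the removable singularity
    at z = 0, the closed form outside it."""
    if abs(z) < tol:
        # 1 + z/2 + z^2/6 + z^3/24; truncation error O(z^4) is far below
        # double-precision roundoff for |z| < 1e-3.
        return 1.0 + z * (0.5 + z * (1.0 / 6.0 + z / 24.0))
    return (math.exp(z) - 1.0) / z

value = phi1_stable(1e-9)  # accurate where phi1_naive loses ~7 digits
```

The same pattern generalizes to the higher-order divided differences of matrix exponentials that appear in the X-IVAS solution.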
Defining Delayed Consequences as Reinforcers: Some Do, Some Don't, and Nothing Changes
ERIC Educational Resources Information Center
Bradley, Kelly P.; Poling, Alan
2010-01-01
Results of a survey sent to members of the editorial boards of five behavior-analytic journals in 1990 indicated that there was no consensus among respondents with respect to whether delayed events can function as reinforcers (Schlinger, Blakely, Fillhard, & Poling, 1991). Since that time, several studies with nonhuman animals have demonstrated…
7 CFR 90.2 - General terms defined.
Code of Federal Regulations, 2011 CFR
2011-01-01
... agency, or other agency, organization or person that defines in the general terms the basis on which the... analytical data using proficiency check sample or analyte recovery techniques. In addition, the certainty.... Quality control. The system of close examination of the critical details of an analytical procedure in...
NASA Astrophysics Data System (ADS)
Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.
2018-01-01
We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to the order O(αs²). The problem of scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.
NASA Astrophysics Data System (ADS)
Mahdavi, Ali; Seyyedian, Hamid
2014-05-01
This study presents a semi-analytical solution for steady groundwater flow in trapezoidal-shaped aquifers in response to areal diffusive recharge. The aquifer is homogeneous and anisotropic and interacts with four surrounding constant-head streams. The flow field in this laterally bounded aquifer system is efficiently constructed by means of variational calculus, accomplished by minimizing a properly defined penalty function for the associated boundary value problem. Simple yet demonstrative scenarios are defined to investigate anisotropy effects on the water table variation. Qualitative examination of the resulting equipotential contour maps and velocity vector field illustrates the validity of the method, especially in the vicinity of the boundary lines. Extension to the case of a triangular-shaped aquifer, with or without an impervious boundary line, is also demonstrated through a hypothetical example problem. The present solution benefits from an extremely simple mathematical expression and exhibits close agreement with numerical results obtained from MODFLOW. Overall, the solution may be used to conduct sensitivity analyses on the various hydrogeological parameters that affect water table variation in aquifers defined on trapezoidal or triangular domains.
A convergent functional architecture of the insula emerges across imaging modalities.
Kelly, Clare; Toro, Roberto; Di Martino, Adriana; Cox, Christine L; Bellec, Pierre; Castellanos, F Xavier; Milham, Michael P
2012-07-16
Empirical evidence increasingly supports the hypothesis that patterns of intrinsic functional connectivity (iFC) are sculpted by a history of evoked coactivation within distinct neuronal networks. This, together with evidence of strong correspondence among the networks defined by iFC and those delineated using a variety of other neuroimaging techniques, suggests a fundamental brain architecture detectable across multiple functional and structural imaging modalities. Here, we leverage this insight to examine the functional organization of the human insula. We parcellated the insula on the basis of three distinct neuroimaging modalities: task-evoked coactivation, intrinsic (i.e., task-independent) functional connectivity, and gray matter structural covariance. Clustering of these three different covariance-based measures revealed a convergent elemental organization of the insula that likely reflects a fundamental brain architecture governing both brain structure and function at multiple spatial scales. While not constrained to be hierarchical, our parcellation revealed a pseudo-hierarchical, multiscale organization that was consistent with previous clustering and meta-analytic studies of the insula. Finally, meta-analytic examination of the cognitive and behavioral domains associated with each of the insular clusters elucidated the broad functional dissociations likely underlying the observed topography. To facilitate future investigations of insula function across healthy and pathological states, the insular parcels have been made freely available for download via http://fcon_1000.projects.nitrc.org, along with the analytic scripts used to perform the parcellations. Copyright © 2012 Elsevier Inc. All rights reserved.
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
Multi-crosswell profile 3D imaging and method
Washbourne, John K.; Rector, III, James W.; Bube, Kenneth P.
2002-01-01
Characterizing the value of a particular property, for example, seismic velocity, of a subsurface region of ground is described. In one aspect, the value of the particular property is represented using at least one continuous analytic function such as a Chebychev polynomial. The seismic data may include data derived from at least one crosswell dataset for the subsurface region of interest and may also include other data. In either instance, data may simultaneously be used from a first crosswell dataset in conjunction with one or more other crosswell datasets and/or with the other data. In another aspect, the value of the property is characterized in three dimensions throughout the region of interest using crosswell and/or other data. In still another aspect, crosswell datasets for highly deviated or horizontal boreholes are inherently useful. The method is performed, in part, by fitting a set of vertically spaced layer boundaries, represented by an analytic function such as a Chebychev polynomial, within and across the region encompassing the boreholes such that a series of layers is defined between the layer boundaries. Initial values of the particular property are then established between the layer boundaries and across the subterranean region using a series of continuous analytic functions. The continuous analytic functions are then adjusted to more closely match the value of the particular property across the subterranean region of ground to determine the value of the particular property for any selected point within the region.
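The core representational choice — one continuous analytic function (a Chebyshev polynomial) standing in for a property over a region — can be sketched in a few lines. The velocity samples below are hypothetical; only the fit-and-evaluate pattern reflects the patent's description:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical velocity samples along a crosswell profile (km/s vs. depth in km).
depth = np.linspace(0.0, 2.0, 50)
velocity = 1.5 + 0.8 * depth + 0.1 * np.sin(3.0 * depth)

# Represent the property with one continuous analytic function
# (a degree-6 Chebyshev polynomial), as the patent describes.
coeffs = C.chebfit(depth, velocity, deg=6)
smooth = C.chebval(depth, coeffs)  # evaluable at any point in the range
```

Because the representation is a handful of coefficients rather than a grid, it can be adjusted during inversion and evaluated at any selected point in the region.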
Vapor ingestion in a cylindrical tank with a concave elliptical bottom
NASA Technical Reports Server (NTRS)
Klavins, A.
1974-01-01
An approximate analytical technique is presented for estimating the liquid residual in a tank of arbitrary geometry due to vapor ingestion at any drain rate and acceleration level. The bulk liquid depth at incipient pull-through is defined in terms of the Weber and Bond numbers and two functions that describe the fluid velocity field and free surface shape appropriate to the tank geometry. Numerical results are obtained for the Centaur LH2 tank using limiting approximations to these functions.
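The pull-through depth correlation rests on the Weber and Bond numbers, which compare inertial and body forces against capillarity. A minimal sketch using the conventional definitions (We = ρv²L/σ, Bo = ρaL²/σ; the report's exact conventions and the illustrative LH2 numbers below are assumptions):

```python
def weber_number(rho: float, v: float, L: float, sigma: float) -> float:
    """We = rho * v^2 * L / sigma: inertial vs. capillary forces.
    Conventional definition; the report's convention may differ."""
    return rho * v ** 2 * L / sigma

def bond_number(rho: float, a: float, L: float, sigma: float) -> float:
    """Bo = rho * a * L^2 / sigma: body (acceleration) vs. capillary forces."""
    return rho * a * L ** 2 / sigma

# Liquid hydrogen draining in low gravity (illustrative values only).
rho, sigma = 70.8, 1.9e-3  # kg/m^3, N/m
We = weber_number(rho, v=0.05, L=0.5, sigma=sigma)
Bo = bond_number(rho, a=1e-4 * 9.81, L=0.5, sigma=sigma)
```

Large We or Bo means capillary forces are negligible at the drain, which is why both numbers enter the incipient pull-through correlation.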
Distribution functions of probabilistic automata
NASA Technical Reports Server (NTRS)
Vatan, F.
2001-01-01
Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1, ..., k-1} and, hence, consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M { w: X(w) < x }. Utilizing the fixed-point (denotational) semantics, extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much easier method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
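The construction X(w) = radix-k value of an emitted word is easy to simulate. A minimal sketch with an invented two-state automaton (not one from the paper), estimating its distribution function empirically:

```python
import random

def sample_X(n_digits: int, k: int, rng: random.Random) -> float:
    """Draw a word from a toy probabilistic automaton over {0,...,k-1}
    and return X(w), its radix-k expansion in [0, 1]. The two-state
    machine here is purely illustrative."""
    state = 0
    x, scale = 0.0, 1.0
    for _ in range(n_digits):
        # State 0 favours the smallest digit, state 1 the largest.
        if state == 0:
            digit = 0 if rng.random() < 0.7 else k - 1
        else:
            digit = k - 1 if rng.random() < 0.7 else 0
        state = 1 - state  # deterministic state alternation
        scale /= k
        x += digit * scale
    return x

rng = random.Random(42)
samples = sorted(sample_X(30, 2, rng) for _ in range(2000))
# Empirical distribution function: F(x) ~ fraction of samples below x.
F_half = sum(1 for s in samples if s < 0.5) / len(samples)
```

For this machine the first binary digit is 0 with probability 0.7, so F(1/2) ≈ 0.7; the empirical estimate converges to the automaton's true distribution function.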
2017-01-01
In an ideal plasmonic surface sensor, the bioactive area, where analytes are recognized by specific biomolecules, is surrounded by an area that is generally composed of a different material. The latter, often the surface of the supporting chip, is generally hard to functionalize selectively with respect to the active area. As a result, cross-talk between the active area and the surrounding one may occur. In designing a plasmonic sensor, various issues must be addressed: the specificity of analyte recognition, the orientation of the immobilized biomolecule that acts as the analyte receptor, and the selectivity of surface coverage. The objective of this tutorial review is to introduce the main rational tools required for a correct and complete approach to chemically functionalizing plasmonic surface biosensors. After a short introduction, the review discusses, in detail, the most common strategies for achieving effective surface functionalization. The most important issues, such as the orientation of active molecules and spatial and chemical selectivity, are considered. A list of well-defined protocols is suggested for the most common practical situations. Importantly, for the reported protocols, we also present direct comparisons in terms of cost, labor demand, and risk-benefit balance. In addition, a survey of the most used characterization techniques necessary to validate the chemical protocols is reported. PMID:28796479
Viscoplastic Model Development with an Eye Toward Characterization
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Walker, Kevin P.
1995-01-01
A viscoplastic theory is developed that reduces analytically to creep theory under steady-state conditions. A viscoplastic model is constructed within this theoretical framework by defining material functions that have close ties to the physics of inelasticity. As a consequence, this model is easily characterized: only steady-state creep data, monotonic stress-strain curves, and saturated stress-strain hysteresis loops are required.
ERIC Educational Resources Information Center
Granena, Gisela
2012-01-01
Very high-level, functional ability in foreign languages is increasingly important in many walks of life. It is also very rare, and likely requires an early start and/or a special aptitude. This study investigated the extent to which aptitude for explicit learning, defined as "analytic ability" and aptitude for implicit learning, defined…
A theoretical study on pure bending of hexagonal close-packed metal sheet
NASA Astrophysics Data System (ADS)
Mehrabi, Hamed; Yang, Chunhui
2018-05-01
Hexagonal close-packed (HCP) metals have quite different mechanical behaviours compared to conventional cubic metals such as steels and aluminum alloys [1, 2]. They exhibit a significant tension-compression asymmetry in initial yielding and subsequent plastic hardening. This unique behaviour can be attributed to their limited crystal symmetry, which leads to twinning deformation [3-5]. It strongly influences sheet metal forming of such metals, especially roll forming, in which bending is dominant. Hence, it is crucial to represent the constitutive relations of HCP metals for accurate estimation of bending moment-curvature behaviour. In this paper, an analytical model for asymmetric elastoplastic pure bending employing the Cazacu-Barlat asymmetric yield function [6] is presented. This yield function captures the asymmetric tension-compression behaviour of HCP metals by using the second and third invariants of the stress deviator tensor and a specified constant, which can be expressed in terms of the uniaxial yield stresses in tension and compression. As a case study, the analytical model is applied to predict the moment-curvature behaviour of AZ31B magnesium alloy sheets under uniaxial loading conditions. Furthermore, the analytical model is implemented as a user-defined material through the UMAT interface in Abaqus [7, 8] for conducting pure bending simulations. The results show that the analytical model can reasonably capture the asymmetric tension-compression behaviour of the magnesium alloy, and the predicted moment-curvature behaviour is in good agreement with the experimental results. Numerical results also show better accuracy with the Cazacu-Barlat yield function than with the von Mises yield function, whose predictions are more conservative than the analytical results.
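The "specified constant" of the isotropic Cazacu-Barlat (2004) criterion f = J₂^(3/2) − c·J₃ is c = 3√3(σT³ − σC³) / (2(σT³ + σC³)), fixed so uniaxial tension at σT and compression at σC yield together. A minimal sketch of that published form (the AZ31B-like yield stresses below are illustrative, not values from this paper):

```python
import numpy as np

def cazacu_barlat_c(sigT: float, sigC: float) -> float:
    """Constant c of the isotropic Cazacu-Barlat (2004) criterion
    f = J2^(3/2) - c*J3, from uniaxial tensile/compressive yield stresses.
    Standard form of the criterion, stated independently of this paper."""
    t3, c3 = sigT ** 3, sigC ** 3
    return 3.0 * np.sqrt(3.0) * (t3 - c3) / (2.0 * (t3 + c3))

def yield_function(stress: np.ndarray, c: float) -> float:
    """f = J2^(3/2) - c*J3 evaluated on a 3x3 stress tensor."""
    s = stress - np.trace(stress) / 3.0 * np.eye(3)  # deviatoric part
    J2 = 0.5 * np.trace(s @ s)
    J3 = np.linalg.det(s)
    return J2 ** 1.5 - c * J3

# Illustrative tension/compression asymmetry (MPa):
c = cazacu_barlat_c(sigT=165.0, sigC=110.0)
f_tension = yield_function(np.diag([165.0, 0.0, 0.0]), c)
f_compression = yield_function(np.diag([-110.0, 0.0, 0.0]), c)
```

By construction the two uniaxial states sit on the same yield surface, and c = 0 recovers the symmetric von Mises case.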
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
On the Analytical and Numerical Properties of the Truncated Laplace Transform II
2015-05-29
(L_{a,b}^* ∘ L_{a,b})(u_n)(t) = ∫_a^b u_n(s)/(t + s) ds = α_n^2 u_n(t). (32) Similarly, the left singular functions v_n of L_{a,b} are eigenfunctions of the ... odd in the sense that U_n(s) = (−1)^n U_n(−s). (83) 3.5 Decay of the coefficients. Since the left singular function v_n (defined in (27)) is a smooth ... is associated with the right singular function u_n via (41) and (42) and is studied in [12]. Lemma 3.13. Suppose that u_n is the (n+1)-th right ...
Scalar curvature in conformal geometry of Connes-Landi noncommutative manifolds
NASA Astrophysics Data System (ADS)
Liu, Yang
2017-11-01
We first propose a conformal geometry for Connes-Landi noncommutative manifolds and study the associated scalar curvature. The new scalar curvature contains its Riemannian counterpart as the commutative limit. Similar to the results on noncommutative two tori, the quantum part of the curvature consists of actions of the modular derivation through two local curvature functions. Explicit expressions for those functions are obtained for all even dimensions (greater than two). In dimension four, the one-variable function shows striking similarity to the analytic functions of the characteristic classes that appear in the Atiyah-Singer local index formula; namely, it is roughly a product of the j-function (which defines the Â-class of a manifold) and an exponential function (which defines the Chern character of a bundle). By performing two different computations for the variation of the Einstein-Hilbert action, we obtain deep internal relations between the two local curvature functions. Straightforward verification of those relations gives a strong conceptual confirmation for the whole computational machinery we have developed so far, especially the Mathematica code hidden behind the paper.
A two-parameter design storm for Mediterranean convective rainfall
NASA Astrophysics Data System (ADS)
García-Bartual, Rafael; Andrés-Doménech, Ignacio
2017-05-01
The following research explores the feasibility of building effective design storms for extreme hydrological regimes, such as the one which characterizes the rainfall regime of the east and south-east of the Iberian Peninsula, without employing intensity-duration-frequency (IDF) curves as a starting point. Nowadays, after decades of operation of automatic hydrological networks, there is an abundance of high-resolution rainfall data with reasonable statistical representativeness, which enables direct investigation of the temporal patterns and inner structures of rainfall events at a given geographic location, with the aim of establishing a statistical synthesis directly based on those observed patterns. The authors propose a temporal design storm defined in analytical terms, through a two-parameter gamma-type function. The two parameters are directly estimated from 73 independent storms identified from rainfall records of high temporal resolution in Valencia (Spain). All the relevant analytical properties derived from that function are developed in order to use this storm in real applications. In particular, in order to assign a probability (return period) to the design storm, an auxiliary variable combining maximum intensity and total cumulated rainfall is introduced. As a result, for a given return period, a set of three storms with different duration, depth and peak intensity is defined. The consistency of the results is verified by comparison with the classic method of alternating blocks based on an IDF curve, for the study case mentioned above.
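A gamma-type analytical design storm admits closed-form properties such as peak intensity and total depth. The sketch below uses a generic two-parameter gamma-type hyetograph i(t) = a*t*exp(-b*t); this is an assumed illustrative form and the parameter values are invented, not the exact function or estimates of García-Bartual and Andrés-Doménech.

```python
import numpy as np

def hyetograph(t, a, b):
    """Gamma-type design-storm intensity i(t) = a * t * exp(-b * t) (assumed form).

    Units here: t in minutes, i in mm/h; a and b are the two shape parameters."""
    return a * t * np.exp(-b * t)

def peak(a, b):
    """Peak time t_p = 1/b (where di/dt = 0) and peak intensity i(t_p) = a/(b*e)."""
    return 1.0 / b, a / (b * np.e)

def total_depth(a, b):
    """Closed-form cumulative depth: integral of i(t) over [0, inf) = a / b**2."""
    return a / b**2

a, b = 12.0, 0.1                       # illustrative parameters only
tp, ip = peak(a, b)                    # peak at 10 min, about 44 mm/h
t = np.linspace(0.0, 120.0, 120001)
dt = t[1] - t[0]
depth_num = hyetograph(t, a, b).sum() * dt / 60.0   # numeric depth (mm) over 2 h
```

The closed-form depth a/b**2 (converted to mm) matches the numerical integral, which is the kind of analytical property the paper develops for practical use of the storm.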
Global stability and exact solution of an arbitrary-solute nonlinear cellular mass transport system.
Benson, James D
2014-12-01
The prediction of the cellular state as a function of extracellular concentrations and temperatures has been of interest to physiologists for nearly a century. One of the most widely used models in the field is one where mass flux is linearly proportional to the concentration difference across the membrane. These fluxes define a nonlinear differential equation system for the intracellular state, which, when coupled with appropriate initial conditions, defines the intracellular state as a function of the extracellular concentrations of both permeating and nonpermeating solutes. Here we take advantage of a reparametrization scheme to extend existing stability results to a more general setting and to develop analytical solutions to this model for an arbitrary number of extracellular solutes. Copyright © 2014 Elsevier Inc. All rights reserved.
Generation of three-dimensional delaunay meshes from weakly structured and inconsistent data
NASA Astrophysics Data System (ADS)
Garanzha, V. A.; Kudryavtseva, L. N.
2012-03-01
A method is proposed for the generation of three-dimensional tetrahedral meshes from incomplete, weakly structured, and inconsistent data describing a geometric model. The method is based on the construction of a piecewise smooth scalar function defining the body so that its boundary is the zero isosurface of the function. Such an implicit description of three-dimensional domains can be defined analytically or can be constructed from a cloud of points, a set of cross sections, or a "soup" of individual vertices, edges, and faces. By applying Boolean operations over domains, simple primitives can be combined with reconstruction results to produce complex geometric models without resorting to specialized software. Sharp edges and conical vertices on the domain boundary are reproduced automatically without using special algorithms. Refs. 42. Figs. 25.
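The Boolean operations over implicitly defined domains mentioned above reduce to pointwise min/max of the defining functions. A minimal sketch, assuming the sign convention "phi > 0 inside, phi = 0 on the boundary" (the paper's construction uses piecewise smooth functions; plain min/max is the simplest variant):

```python
import numpy as np

def sphere(center, r):
    """Implicit function of a ball: positive inside, zero on the boundary."""
    c = np.asarray(center, dtype=float)
    return lambda p: r - np.linalg.norm(np.asarray(p, dtype=float) - c)

# With "inside positive", Boolean operations are pointwise min/max:
def union(f, g):        return lambda p: max(f(p), g(p))
def intersection(f, g): return lambda p: min(f(p), g(p))
def difference(f, g):   return lambda p: min(f(p), -g(p))

# A unit ball with a smaller ball carved out of its side:
body = difference(sphere((0.0, 0.0, 0.0), 1.0), sphere((1.0, 0.0, 0.0), 0.5))
print(body((0.0, 0.0, 0.0)) > 0)   # deep inside the big ball -> True
print(body((0.9, 0.0, 0.0)) > 0)   # inside the carved-out region -> False
```

Note that min/max composites are exactly the kind of piecewise smooth functions the method meshes: the creases of the composite function produce the sharp edges reproduced automatically on the domain boundary.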
What Is Spatio-Temporal Data Warehousing?
NASA Astrophysics Data System (ADS)
Vaisman, Alejandro; Zimányi, Esteban
In recent years, extending OLAP (On-Line Analytical Processing) systems with spatial and temporal features has attracted the attention of the GIS (Geographic Information Systems) and database communities. However, there is no commonly agreed definition of what a spatio-temporal data warehouse is and what functionality such a data warehouse should support. Further, the solutions proposed in the literature vary considerably in the kind of data that can be represented as well as the kind of queries that can be expressed. In this paper we present a conceptual framework for defining spatio-temporal data warehouses using an extensible data type system. We also define a taxonomy of classes of queries of increasing expressive power, and show how to express such queries using an extension of the tuple relational calculus with aggregate functions.
Impulsive Choice and Workplace Safety: A New Area of Inquiry for Research in Occupational Settings
ERIC Educational Resources Information Center
Reynolds, Brady; Schiffbauer, Ryan M.
2004-01-01
A conceptual argument is presented for the relevance of behavior-analytic research on impulsive choice to issues of occupational safety and health. Impulsive choice is defined in terms of discounting, which is the tendency for the value of a commodity to decrease as a function of various parameters (e.g., having to wait or expend energy to receive…
Padé Approximant and Minimax Rational Approximation in Standard Cosmology
NASA Astrophysics Data System (ADS)
Zaninetti, Lorenzo
2016-02-01
The luminosity distance in the standard cosmology as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% at redshift 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies, derived from the truncated gamma probability density function, models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is performed from a statistical point of view.
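The advantage of Padé over Taylor outside the radius of convergence, which drives the result above, is easy to demonstrate. A small sketch using ln(1+x) (a stand-in test function, not the luminosity distance itself) and SciPy's `pade` helper:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of ln(1+x) about 0, ascending order, up to x^4.
an = [0.0, 1.0, -1.0 / 2.0, 1.0 / 3.0, -1.0 / 4.0]

# [2/2] Pade approximant: numerator p and denominator q as np.poly1d objects.
p, q = pade(an, 2)

x = 3.0                                  # well outside |x| < 1, where Taylor diverges
exact = np.log(1.0 + x)
pade_val = p(x) / q(x)                   # rational approximation stays close
taylor_val = np.polyval(an[::-1], x)     # truncated series is wildly off

print(abs(pade_val - exact), abs(taylor_val - exact))
```

The same mechanism explains the abstract's numbers: the rational form extends the useful range of the series (redshift 10 versus 0.7 at the same 4% error).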
Kosek, Margaret; Guerrant, Richard L.; Kang, Gagandeep; Bhutta, Zulfiqar; Yori, Pablo Peñataro; Gratz, Jean; Gottlieb, Michael; Lang, Dennis; Lee, Gwenyth; Haque, Rashidul; Mason, Carl J.; Ahmed, Tahmeed; Lima, Aldo; Petri, William A.; Houpt, Eric; Olortegui, Maribel Paredes; Seidman, Jessica C.; Mduma, Estomih; Samie, Amidou; Babji, Sudhir
2014-01-01
Individuals in the developing world live in conditions of intense exposure to enteric pathogens due to suboptimal water and sanitation. These environmental conditions lead to alterations in intestinal structure, function, and local and systemic immune activation that are collectively referred to as environmental enteropathy (EE). This condition, although poorly defined, is likely to be exacerbated by undernutrition as well as being responsible for permanent growth deficits acquired in early childhood, vaccine failure, and loss of human potential. This article addresses the underlying theoretical and analytical frameworks informing the methodology proposed by the Etiology, Risk Factors and Interactions of Enteric Infections and Malnutrition and the Consequences for Child Health and Development (MAL-ED) cohort study to define and quantify the burden of disease caused by EE within a multisite cohort. Additionally, we will discuss efforts to improve, standardize, and harmonize laboratory practices within the MAL-ED Network. These efforts will address current limitations in the understanding of EE and its burden on children in the developing world. PMID:25305293
Practical deviations from Henry's law for water/air partitioning of volatile organic compounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schabron, J.F.; Rovani, J.F. Jr.
A study was conducted to define parameters relating to the use of a downhole submersible photoionization detector (PID) probe to measure volatile organic compounds (VOCs) in an artificial headspace. The partitioning of toluene and trichloroethylene between water and air was studied as a function of analyte concentration and water temperature. The Henry's law constant governing this partitioning represents an ideal condition at infinite dilution for a particular temperature. The results show that in practice, this partitioning is far from ideal. Conditions resulting in apparent, practical deviations from Henry's law include temperature and VOC concentration. Thus, a single value of Henry's law constant for a particular VOC such as toluene can provide only an approximation of concentration in the field. Detector response in saturated-humidity environments as a function of water temperature and analyte concentration was also studied.
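The ideal partitioning that the study measures deviations from can be written as a simple closed-vial mass balance. A minimal sketch; the dimensionless Henry constant used here (about 0.27 for toluene near 25 °C) is an approximate literature-style value for illustration, not a number from this report:

```python
def headspace_concentration(c_w0, h_cc, v_water, v_air):
    """Equilibrium gas-phase concentration in a closed vial under ideal Henry's law.

    Mass balance: c_w0*v_water = c_w*v_water + c_air*v_air, with
    c_air = h_cc * c_w (dimensionless Henry constant h_cc = C_air / C_water)."""
    c_w = c_w0 / (1.0 + h_cc * v_air / v_water)
    return h_cc * c_w

# Illustrative: 1.0 (arbitrary units) of toluene in 40 mL water, 20 mL headspace.
c_air = headspace_concentration(c_w0=1.0, h_cc=0.27, v_water=40.0, v_air=20.0)
# Temperature and concentration shift the effective h_cc in practice,
# which is exactly the non-ideality the study quantifies.
```

Field calibration against a single tabulated h_cc therefore only approximates the true aqueous concentration, as the abstract concludes.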
On pseudo-hyperkähler prepotentials
NASA Astrophysics Data System (ADS)
Devchand, Chandrashekar; Spiro, Andrea
2016-10-01
An explicit surjection from a set of (locally defined) unconstrained holomorphic functions on a certain submanifold of Sp1(ℂ) × ℂ4n onto the set HKp,q of local isometry classes of real analytic pseudo-hyperkähler metrics of signature (4p, 4q) in dimension 4n is constructed. The holomorphic functions, called prepotentials, are analogues of Kähler potentials for Kähler metrics and provide a complete parameterisation of HKp,q. In particular, there exists a bijection between HKp,q and the set of equivalence classes of prepotentials. This affords the explicit construction of pseudo-hyperkähler metrics from specified prepotentials. The construction generalises one due to Galperin, Ivanov, Ogievetsky, and Sokatchev. Their work is given a coordinate-free formulation and complete, self-contained proofs are provided. The Appendix provides a vital tool for this construction: a reformulation of real analytic G-structures in terms of holomorphic frame fields on complex manifolds.
Future Roles for Autonomous Vertical Lift in Disaster Relief and Emergency Response
NASA Technical Reports Server (NTRS)
Young, Larry A.
2006-01-01
System analysis concepts are applied to the assessment of potential collaborative contributions of autonomous system and vertical lift (a.k.a. rotorcraft, VTOL, powered-lift, etc.) technologies to the important, and perhaps underemphasized, application domain of disaster relief and emergency response. In particular, an analytic framework is outlined whereby system design functional requirements for an application domain can be derived from defined societal good goals and objectives.
The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.
NASA Astrophysics Data System (ADS)
Loridan, V.; Ripoll, J. F.; De Vuyst, F.
2017-12-01
Much work has been done over the past 40 years to obtain analytical solutions of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron losses with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show an excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL ∝ L^6. We also give two expressions for the required number of eigenmodes N to get an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/√t0, where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to Earth and also to Jupiter and Saturn are discussed.
U(1) current from the AdS/CFT: diffusion, conductivity and causality
NASA Astrophysics Data System (ADS)
Bu, Yanyan; Lublinsky, Michael; Sharon, Amir
2016-04-01
For a holographically defined finite temperature theory, we derive an off-shell constitutive relation for a global U(1) current driven by a weak external non-dynamical electromagnetic field. The constitutive relation involves an all order gradient expansion resummed into three momenta-dependent transport coefficient functions: diffusion, electric conductivity, and "magnetic" conductivity. These transport functions are first computed analytically in the hydrodynamic limit, up to third order in the derivative expansion, and then numerically for generic values of momenta. We also compute a diffusion memory function, which, as a result of all order gradient resummation, is found to be causal.
Impact of uncertainty in expected return estimation on stock price volatility
NASA Astrophysics Data System (ADS)
Kostanjcar, Zvonko; Jeren, Branko; Juretic, Zeljan
2012-11-01
We investigate the origin of volatility in financial markets by defining an analytical model for time evolution of stock share prices. The defined model is similar to the GARCH class of models, but can additionally exhibit bimodal behaviour in the supply-demand structure of the market. Moreover, it differs from existing Ising-type models. It turns out that the constructed model is a solution of a thermodynamic limit of a Gibbs probability measure when the number of traders and the number of stock shares approaches infinity. The energy functional of the Gibbs probability measure is derived from the Nash equilibrium of the underlying game.
NASA Astrophysics Data System (ADS)
Kjeldsen, Tinne Hoff; Lützen, Jesper
2015-07-01
In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another variable. The change was required when mathematicians discovered that analytic expressions were not sufficient to represent physical phenomena such as the vibration of a string (Euler) and heat conduction (Fourier and Dirichlet). The introduction of generalized functions or distributions is shown to stem partly from the development of new theories of physics such as electrical engineering and quantum mechanics that led to the use of improper functions such as the delta function that demanded a proper foundation. We argue that the development of student understanding of mathematics and its nature is enhanced by embedding mathematical concepts and theories, within an explicit-reflective framework, into a rich historical context emphasizing its interaction with other disciplines such as physics. Students recognize and become engaged with meta-discursive rules governing mathematics. Mathematics teachers can thereby teach inquiry in mathematics as it occurs in the sciences, as mathematical practice aimed at obtaining new mathematical knowledge. We illustrate such a historical teaching and learning of mathematics within an explicit and reflective framework by two examples of student-directed, problem-oriented project work following the Roskilde Model, in which the connection to physics is explicit and provides a learning space where the nature of mathematics and mathematical practices are linked to natural science.
A Boltzmann machine for the organization of intelligent machines
NASA Technical Reports Server (NTRS)
Moed, Michael C.; Saridis, George N.
1989-01-01
In the present technological society, there is a major need to build machines that would execute intelligent tasks operating in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots utilizing heuristic ideas, there is no systematic approach to design such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems, AI, and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski, 1985). Then this machine will serve to search for the optimal design of the organization level of an intelligent machine. In order to accomplish this, some mathematical theory of the intelligent machines will be first outlined. Then some definitions of the variables associated with the principle, like machine intelligence, machine knowledge, and precision, will be made (Saridis, Valavanis 1988). Then a procedure to establish the Boltzmann machine on an analytic basis will be presented and illustrated by an example in designing the organization level of an Intelligent Machine.
A new search technique, the Modified Genetic Algorithm, is presented and proved to converge to the minimum of a cost function. Finally, simulations will show the effectiveness of a variety of search techniques for the intelligent machine.
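The core of a Boltzmann-machine search is stochastic acceptance of worse states with probability exp(-delta/T) under a cooling temperature. A minimal generic sketch of that mechanism on binary node states, with an invented toy cost; this illustrates the acceptance rule only, not Saridis's specific formulation or the Modified Genetic Algorithm:

```python
import math
import random

def boltzmann_search(cost, n_nodes, t0=2.0, cooling=0.995, steps=4000, seed=1):
    """Minimize `cost` over binary state vectors using Boltzmann acceptance:
    a cost increase `delta` is accepted with probability exp(-delta / T)."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_nodes)]
    current_cost = cost(state)
    best, best_cost, temp = state[:], current_cost, t0
    for _ in range(steps):
        i = rng.randrange(n_nodes)
        state[i] ^= 1                        # propose: flip one node
        new_cost = cost(state)
        delta = new_cost - current_cost
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current_cost = new_cost          # accept
            if new_cost < best_cost:
                best, best_cost = state[:], new_cost
        else:
            state[i] ^= 1                    # reject: flip back
        temp *= cooling                      # geometric cooling schedule
    return best, best_cost

# Invented toy cost: Hamming distance to a target activation pattern.
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda s: sum(a != b for a, b in zip(s, target))
best, best_cost = boltzmann_search(cost, len(target))
```

High early temperature lets the search escape poor configurations; as the temperature decays the dynamics become greedy, which is the trade-off the IPDI-based design exploits when searching for an optimal organization level.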
Lessons Learned from Deploying an Analytical Task Management Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen
2007-01-01
Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.
Algebraic approach to small-world network models
NASA Astrophysics Data System (ADS)
Rudolph-Lilith, Michelle; Muller, Lyle E.
2014-01-01
We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
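The object the paper works with analytically, the adjacency matrix of a directed Watts-Strogatz graph, is easy to construct numerically for comparison. A minimal sketch of the directed ring-lattice substrate plus random rewiring; the "asymmetry measure" below is a simple stand-in, not necessarily the index defined in the paper, and this sketch does not exclude self-loops or merged duplicate edges:

```python
import numpy as np

def ws_ring_adjacency(n, k):
    """Adjacency matrix of a directed ring lattice: node i points to its
    k nearest clockwise neighbours (the Watts-Strogatz substrate)."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(1, k + 1):
            A[i, (i + j) % n] = 1
    return A

def rewire(A, p, seed=0):
    """Redirect each existing edge with probability p to a random target node."""
    rng = np.random.default_rng(seed)
    A = A.copy()
    n = A.shape[0]
    for i, j in zip(*np.nonzero(A)):
        if rng.random() < p:
            A[i, j] = 0
            A[i, rng.integers(0, n)] = 1
    return A

A = rewire(ws_ring_adjacency(100, 3), p=0.1)
# Stand-in asymmetry measure: fraction of unreciprocated adjacency entries.
asymmetry = np.abs(A - A.T).sum() / max(A.sum(), 1)
```

Monte Carlo estimates from such ensembles are what the paper's nonasymptotic closed-form expressions replace, exactly, for all finite graph sizes.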
Roll-Axis Hydrofluidic Stability Augmentation System Development
1975-09-01
Recordings of the simulated aircraft performance (left trace for zero time delay, right for the other) ... DESIGN: The analytical effort defined the gains and shaping networks required for the roll-axis damper system for the OH-58A helicopter, and the ... Shaping networks: usually a combination of resistors and capacitors (bellows) is designed to provide the following functions ... Lag
An exactly solvable coarse-grained model for species diversity
NASA Astrophysics Data System (ADS)
Suweis, Samir; Rinaldo, Andrea; Maritan, Amos
2012-07-01
We present novel analytical results concerning ecosystem species diversity that stem from a proposed coarse-grained neutral model based on birth-death processes. The relevance of the problem lies in the urgency for understanding and synthesizing both theoretical results from ecological neutral theory and empirical evidence on species diversity preservation. The neutral model of biodiversity deals with ecosystems at the same trophic level, where per capita vital rates are assumed to be species independent. Closed-form analytical solutions for the neutral theory are obtained within a coarse-grained model, where the only input is the species persistence time distribution. Our results pertain to: the probability distribution function of the number of species in the ecosystem, both in transient and in stationary states; the n-point connected time correlation function; and the survival probability, defined as the distribution of time spans to local extinction for a species randomly sampled from the community. Analytical predictions are also tested on empirical data from an estuarine fish ecosystem. We find that emerging properties of the ecosystem are very robust and do not depend on specific details of the model, with implications for biodiversity and conservation biology.
Innovative analytical tools to characterize prebiotic carbohydrates of functional food interest.
Corradini, Claudio; Lantano, Claudia; Cavazza, Antonella
2013-05-01
Functional foods are one of the most interesting areas of research and innovation in the food industry. A functional food or functional ingredient is considered to be any food or food component that provides health benefits beyond basic nutrition. Recently, consumers have shown interest in natural bioactive compounds as functional ingredients in the diet owing to their various beneficial effects on health. Water-soluble fibers and nondigestible oligosaccharides and polysaccharides can be defined as functional food ingredients. Fructooligosaccharides (FOS) and inulin are resistant to direct metabolism by the host and reach the caecocolon, where they are used by selected groups of beneficial bacteria. Furthermore, they are able to improve physical and structural properties of food, such as hydration, oil-holding capacity, viscosity, texture, sensory characteristics, and shelf-life. This article reviews major innovative analytical developments to screen and identify FOS, inulins, and the most employed nonstarch carbohydrates added or naturally present in functional food formulations. High-performance anion-exchange chromatography with pulsed electrochemical detection (HPAEC-PED) is one of the most employed analytical techniques for the characterization of those molecules. Mass spectrometry is also of great help, in particular matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS), which is able to provide extensive information regarding the molecular weight and length profiles of oligosaccharides and polysaccharides. Moreover, MALDI-TOF-MS in combination with HPAEC-PED has been shown to be of great value for the complementary information it can provide. Some other techniques, such as NMR spectroscopy, are also discussed, with relevant examples of recent applications.
A number of articles have appeared in the literature in recent years regarding the analysis of inulin, FOS, and other carbohydrates of interest in the field and they are critically reviewed.
2010-11-01
defined herein as terrain whose surface deformation due to a single vehicle traversing the surface is negligible, such as paved roads (both asphalt ... ground vehicle reliability predictions. Current application of this work is limited to the analysis of U.S. Highways, comprised of both asphalt and ... Highways that are consistent between asphalt and concrete roads. b. The principal terrain characteristics are defined with analytic basis vectors
NASA Astrophysics Data System (ADS)
Rahman, M. Muzibur; Ahmad, S. Reaz
2017-12-01
An analytical investigation of the elastic fields in a guided deep beam of orthotropic composite material under symmetric three-point bending is carried out using a displacement-potential boundary-modeling approach. Here, the formulation is developed as a single function of the space variables, defined in terms of the displacement components, which has to satisfy mixed boundary conditions. The relevant displacement and stress components are expanded into infinite series using a Fourier integral along with suitable polynomials consistent with the boundary conditions. The results are presented mainly in the form of graphs and verified against finite element solutions using ANSYS. This study shows that the analytical and numerical solutions are in good agreement, which enhances the reliability of the displacement potential approach.
Space station structures and dynamics test program
NASA Technical Reports Server (NTRS)
Moore, Carleton J.; Townsend, John S.; Ivey, Edward W.
1987-01-01
The design, construction, and operation of a low-Earth orbit space station poses unique challenges for development and implementation of new technology. The technology arises from the special requirement that the station be built and constructed to function in a weightless environment, where static loads are minimal and secondary to system dynamics and control problems. One specific challenge confronting NASA is the development of a dynamics test program for: (1) defining space station design requirements, and (2) identifying the characterizing phenomena affecting the station's design and development. A general definition of the space station dynamic test program, as proposed by MSFC, forms the subject of this report. The test proposal is a comprehensive structural dynamics program to be launched in support of the space station. The test program will help to define the key issues and/or problems inherent to large space structure analysis, design, and testing. Development of a parametric data base and verification of the math models and analytical analysis tools necessary for engineering support of the station's design, construction, and operation provide the impetus for the dynamics test program. The philosophy is to integrate dynamics into the design phase through extensive ground testing and analytical ground simulations of generic systems, prototype elements, and subassemblies. On-orbit testing of the station will also be used to define its capability.
Extended Rindler spacetime and a new multiverse structure
NASA Astrophysics Data System (ADS)
Araya, Ignacio J.; Bars, Itzhak
2018-04-01
This is the first of a series of papers in which we use analyticity properties of quantum fields propagating on a spacetime to uncover a new multiverse geometry when the classical geometry has horizons and/or singularities. The nature and origin of the "multiverse" idea presented in this paper, that is shared by the fields in the standard model coupled to gravity, are different from other notions of a multiverse. Via analyticity we are able to establish definite relations among the universes. In this paper we illustrate these properties for the extended Rindler space, while black hole spacetime and the cosmological geometry of mini-superspace (see Appendix B) will appear in later papers. In classical general relativity, extended Rindler space is equivalent to flat Minkowski space; it consists of the union of the four wedges in (u ,v ) light-cone coordinates as in Fig. 1. In quantum mechanics, the wavefunction is an analytic function of (u ,v ) that is sensitive to branch points at the horizons u =0 or v =0 , with branch cuts attached to them. The wave function is uniquely defined by analyticity on an infinite number of sheets in the cut analytic (u ,v ) spacetime. This structure is naturally interpreted as an infinite stack of identical Minkowski geometries, or "universes", connected to each other by analyticity across branch cuts, such that each sheet represents a different Minkowski universe when (u ,v ) are analytically continued to the real axis on any sheet. We show in this paper that, in the absence of interactions, information does not flow from one Rindler sheet to another. By contrast, for an eternal black hole spacetime, which may be viewed as a modification of Rindler that includes gravitational interactions, analyticity shows how information is "lost" due to a flow to other universes, enabled by an additional branch point and cut due to the black hole singularity.
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2016-09-01
Hydrologists and engineers may choose from a range of semidistributed rainfall-runoff models such as VIC, PDM, and TOPMODEL, all of which predict runoff from a distribution of watershed properties. However, these models are not easily compared to event-based data and are missing ready-to-use analytical expressions that are analogous to the SCS-CN method. The SCS-CN method is an event-based model that describes the runoff response with a rainfall-runoff curve that is a function of the cumulative storm rainfall and antecedent wetness condition. Here we develop an event-based probabilistic storage framework and distill semidistributed models into analytical, event-based expressions for describing the rainfall-runoff response. The event-based versions called VICx, PDMx, and TOPMODELx also are extended with a spatial description of the runoff concept of "prethreshold" and "threshold-excess" runoff, which occur, respectively, before and after infiltration exceeds a storage capacity threshold. For total storm rainfall and antecedent wetness conditions, the resulting ready-to-use analytical expressions define the source areas (fraction of the watershed) that produce runoff by each mechanism. They also define the probability density function (PDF) representing the spatial variability of runoff depths that are cumulative values for the storm duration, and the average unit area runoff, which describes the so-called runoff curve. These new event-based semidistributed models and the traditional SCS-CN method are unified by the same general expression for the runoff curve. Since the general runoff curve may incorporate different model distributions, it may ease the way for relating such distributions to land use, climate, topography, ecology, geology, and other characteristics.
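The SCS-CN runoff curve that these event-based models generalize can be written down directly. The sketch below (the function and parameter names are ours, not from the paper) computes event runoff depth from cumulative storm rainfall and a curve number, using the conventional initial-abstraction ratio of 0.2:

```python
def scs_cn_runoff(P_mm, CN, lambda_ia=0.2):
    """Event runoff depth Q (mm) from storm rainfall P (mm) via the SCS-CN method.

    S is the potential maximum retention implied by the curve number CN;
    Ia = lambda_ia * S is the initial abstraction (0.2 is the traditional ratio).
    """
    S = 25400.0 / CN - 254.0       # potential maximum retention, mm
    Ia = lambda_ia * S             # initial abstraction, mm
    if P_mm <= Ia:                 # all rainfall abstracted: no runoff yet
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)
```

Antecedent wetness enters through the choice of CN; the semidistributed variants in the paper replace the single storage S with a distribution of storage capacities.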
From the fundamental rule to the analysing situation.
Donnet, J L
2001-02-01
The analytic method relies on the mental capacity to produce an associative sequence and, afterwards, to discern its unconscious logic; within the social practice of the analytic cure, the method presents itself as the mastered enactment of the conditions through which free association proves to be possible, interpretable and beneficial. There is a contradiction between the necessity of relying on a prior theorisation and that of willingly suspending a knowledge that might serve the authenticity of the experience. The author reminds us of the structural links between the fundamental rule and the defined situations within which the analytic process of transformative investigation can take place. He raises the problems that are suggested to arise with the initial objectivation of the method, acknowledging the transference as the created-found object of interpretation. He shows how the transformation of the patient into analysand implies the functional introjection of the various elements contained by the analytic site. The meaning given to the expression 'analysing situation' is made explicit. The crucial value of the process of enunciation is illustrated by a brief example.
System engineering toolbox for design-oriented engineers
NASA Technical Reports Server (NTRS)
Goldberg, B. E.; Everhart, K.; Stevens, R.; Babbitt, N., III; Clemens, P.; Stout, L.
1994-01-01
This system engineering toolbox is designed to provide tools and methodologies to the design-oriented systems engineer. A tool is defined as a set of procedures to accomplish a specific function. A methodology is defined as a collection of tools, rules, and postulates to accomplish a purpose. For each concept addressed in the toolbox, the following information is provided: (1) description, (2) application, (3) procedures, (4) examples, if practical, (5) advantages, (6) limitations, and (7) bibliography and/or references. The scope of the document includes concept development tools, system safety and reliability tools, design-related analytical tools, graphical data interpretation tools, a brief description of common statistical tools and methodologies, so-called total quality management tools, and trend analysis tools. Both relationship to project phase and primary functional usage of the tools are also delineated. The toolbox also includes a case study for illustrative purposes. Fifty-five tools are delineated in the text.
Risk and utility in portfolio optimization
NASA Astrophysics Data System (ADS)
Cohen, Morrel H.; Natoli, Vincent D.
2003-06-01
Modern portfolio theory (MPT) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets. In the original mean-variance approach of Markowitz, volatility is taken as a proxy for risk, conflating uncertainty with risk. There have been many subsequent attempts to alleviate that weakness which, typically, combine utility and risk. We present here a modification of MPT based on the inclusion of separate risk and utility criteria. We define risk as the probability of failure to meet a pre-established investment goal. We define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield. The emphasis throughout is on long investment horizons for which risk-free assets do not exist. Analytic results are presented for a Gaussian probability distribution. Risk-utility relations are explored via empirical stock-price data, and an illustrative portfolio is optimized using the empirical data.
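The two criteria can be made concrete for the Gaussian case mentioned in the abstract. The sketch below is our own illustration, not the authors' code: risk is evaluated as the probability of falling short of the goal, and expected utility by direct quadrature against the Gaussian density.

```python
import math

def shortfall_risk(goal, mean, std):
    """Risk as defined in the abstract: P(yield < goal) for a Gaussian
    yield distribution with the given mean and standard deviation."""
    z = (goal - mean) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_utility(mean, std, utility, n=10001):
    """Expected utility E[u(Y)] by trapezoid quadrature over the Gaussian PDF.
    `utility` should have positive, decreasing marginal value (e.g. log)."""
    lo, hi = mean - 8 * std, mean + 8 * std
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        y = lo + i * h
        pdf = math.exp(-0.5 * ((y - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoid endpoint weights
        total += w * utility(y) * pdf * h
    return total
```

A portfolio optimizer would then trade `shortfall_risk` against `expected_utility` over candidate asset weights; that search step is not shown here.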
ERIC Educational Resources Information Center
Azevedo, Roger
2015-01-01
Engagement is one of the most widely misused and overgeneralized constructs found in the educational, learning, instructional, and psychological sciences. The articles in this special issue represent a wide range of traditions and highlight several key conceptual, theoretical, methodological, and analytical issues related to defining and measuring…
The primer vector in linear, relative-motion equations. [spacecraft trajectory optimization
NASA Technical Reports Server (NTRS)
1980-01-01
Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
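The closed-form solution of the Clohessy-Wiltshire equations that makes this elementary-function analysis possible can be sketched as follows. The function name and state ordering are illustrative; x is the radial offset, y the along-track offset, and n the target orbit's mean motion.

```python
import math

def cw_propagate(x0, y0, vx0, vy0, n, t):
    """Closed-form in-plane Clohessy-Wiltshire state propagation.

    Returns (x, y, vx, vy) at time t for initial state (x0, y0, vx0, vy0),
    with n the mean motion (rad/s) of the reference circular orbit.
    """
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 - (2 / n) * (1 - c) * vx0 + ((4 * s - 3 * n * t) / n) * vy0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    return x, y, vx, vy
```

A pure along-track offset (x0 = vx0 = vy0 = 0) is an equilibrium of these equations, which provides a quick sanity check on any implementation.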
The Geoinformatica free and open source software stack
NASA Astrophysics Data System (ADS)
Jolma, A.
2012-04-01
The Geoinformatica free and open source software (FOSS) stack is based mainly on three established FOSS components, namely GDAL, GTK+, and Perl. GDAL provides access to a very large selection of geospatial data formats and data sources, a generic geospatial data model, and a large collection of geospatial analytical and processing functionality. GTK+ and the Cairo graphics library provide generic graphics and graphical user interface capabilities. Perl is a programming language for which there is a very large set of FOSS modules for a wide range of purposes and which can be used as an integrative tool for building applications. In the Geoinformatica stack, data storages such as the FOSS RDBMS PostgreSQL with its geospatial extension PostGIS can be used below the three above-mentioned components. The top layer of Geoinformatica consists of a C library and several Perl modules. The C library comprises a general-purpose raster algebra library, hydrological terrain analysis functions, and visualization code. The Perl modules define a generic visualized geospatial data layer and subclasses for raster and vector data and graphs. The hydrological terrain functions are already rather old and suffer, for example, from the requirement of in-memory rasters. Newer research conducted using the platform includes basic geospatial simulation modeling, visualization of ecological data, linking with a Bayesian network engine for spatial risk assessment in coastal areas, and developing standards-based distributed water resources information systems on the Internet. The Geoinformatica stack constitutes a platform for geospatial research, which is targeted towards custom analytical tools, prototyping, and linking with external libraries. Writing custom analytical tools is supported by the Perl language and the large collection of tools that are available especially in GDAL and Perl modules.
Prototyping is supported by the GTK+ library, the GUI tools, and the support for object-oriented programming in Perl. New feature types, geospatial layer classes, and tools as extensions with specific features can be defined, used, and studied. Linking with external libraries is possible using the Perl foreign function interface tools or with generic tools such as SWIG. We are interested in implementing and testing the linking of Geoinformatica with existing or new, more specific hydrological FOSS.
2016-02-15
…do not quote them here. A sequel details a yet more efficient analytic technique based on holomorphic functions of the internal-state Markov chain… required, though, when synchronizing over a quantum channel? Recent work demonstrated that representing causal similarity as quantum state… minimal, unifilar predictor. The ε-machine's causal states σ are defined by the equivalence relation that groups all histories x(−∞:0) that…
Defining a Cancer Dependency Map | Office of Cancer Genomics
Most human epithelial tumors harbor numerous alterations, making it difficult to predict which genes are required for tumor survival. To systematically identify cancer dependencies, we analyzed 501 genome-scale loss-of-function screens performed in diverse human cancer cell lines. We developed DEMETER, an analytical framework that segregates on- from off-target effects of RNAi. 769 genes were differentially required in subsets of these cell lines at a threshold of six SDs from the mean.
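The final thresholding step described above (six SDs from the mean) can be illustrated independently of the DEMETER model itself. In the sketch below, lower scores indicate stronger dependency, which is our reading rather than a detail given in the summary, and all names are illustrative:

```python
def differentially_required(scores_by_gene, n_sd=6.0):
    """Flag genes whose dependency score in some cell line lies more than
    n_sd standard deviations below that gene's mean across cell lines.

    `scores_by_gene` maps gene -> list of per-cell-line scores. This is a
    simplified sketch of the thresholding step only, not DEMETER itself.
    """
    flagged = []
    for gene, scores in scores_by_gene.items():
        m = sum(scores) / len(scores)
        var = sum((s - m) ** 2 for s in scores) / len(scores)
        sd = var ** 0.5
        # a gene is "differentially required" if at least one line is an
        # extreme outlier relative to the gene's own distribution
        if sd > 0 and min(scores) < m - n_sd * sd:
            flagged.append(gene)
    return flagged
```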
Crude Oil Remote Sensing, Characterization and Cleaning with Continuous-Wave and Pulsed Lasers
2015-01-23
…explained by strong pressure spikes during cavitation in liquid jets. These experiments were not directly tested for the pipe cleaning, but their results… analytical functions (like circular, elliptical and similar shapes). In our case of cylindrical symmetry, the oil film shape is defined by two… the high-pressure (50–100 atm) oil and water jets (with cavitation in narrow tubes) revealed a new potential for a more efficient cleaning of…
Acoustic Rectification in Dispersive Media
NASA Technical Reports Server (NTRS)
Cantrell, John H.
2008-01-01
It is shown that the shapes of acoustic radiation-induced static strain and displacement pulses (rectified acoustic pulses) are defined locally by the energy density of the generating waveform. Dispersive properties are introduced analytically by assuming that the rectified pulses are functionally dependent on a phase factor that includes both dispersive and nonlinear terms. The dispersion causes an evolutionary change in the shape of the energy density profile that leads to the generation of solitons experimentally observed in fused silica.
Variational Trajectory Optimization Tool Set: Technical description and user's manual
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.
1993-01-01
The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
Chemoselective synthesis and analysis of naturally occurring phosphorylated cysteine peptides
Bertran-Vicente, Jordi; Penkert, Martin; Nieto-Garcia, Olaia; Jeckelmann, Jean-Marc; Schmieder, Peter; Krause, Eberhard; Hackenberger, Christian P. R.
2016-01-01
In contrast to protein O-phosphorylation, studies of the function of the less frequent N- and S-phosphorylation events have lagged behind because these modifications have chemical features that prevent their manipulation through standard synthetic and analytical methods. Here we report on the development of a chemoselective synthetic method to phosphorylate Cys side-chains in unprotected peptides. This approach makes use of a reaction between nucleophilic phosphites and electrophilic disulfides accessible by standard methods. We achieve the stereochemically defined phosphorylation of a Cys residue and verify the modification using electron-transfer higher-energy dissociation (EThcD) mass spectrometry. To demonstrate the use of the approach in resolving biological questions, we identify an endogenous Cys phosphorylation site in IICBGlc, which is known to be involved in carbohydrate uptake from the bacterial phosphotransferase system (PTS). This new chemical and analytical approach finally allows further investigation of the functions and significance of Cys phosphorylation in a wide range of crucial cellular processes. PMID:27586301
van de Geijn, J; Fraass, B A
1984-01-01
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
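The defining correction can be sketched in a few lines. The reference depth and argument names below are our assumptions for illustration; the paper's seven-parameter analytical description of the resulting NFD curve is not reproduced here.

```python
def net_fractional_depth_dose(fdd, depth_cm, ssd_cm, d_ref_cm):
    """NFD as defined in the abstract: the fractional depth dose with the
    inverse-square falloff divided out, referenced to depth d_ref
    (typically the depth of maximum dose, where FDD is normalized).
    """
    # inverse-square factor embedded in the measured FDD
    isl = ((ssd_cm + d_ref_cm) / (ssd_cm + depth_cm)) ** 2
    return fdd / isl
```

Dividing out the geometric falloff is what makes the remaining depth dependence nearly SSD-independent, which is why one analytical function can then generate FDD, TAR, TMR, and TPR tables.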
Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR
DOE Office of Scientific and Technical Information (OSTI.GOV)
van de Geijn, J.; Fraass, B.A.
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
A Variational Approach to the Analysis of Dissipative Electromechanical Systems
Allison, Andrew; Pearce, Charles E. M.; Abbott, Derek
2014-01-01
We develop a method for systematically constructing Lagrangian functions for dissipative mechanical, electrical, and electromechanical systems. We derive the equations of motion for some typical electromechanical systems using deterministic principles that are strictly variational. We do not use any ad hoc features that are added on after the analysis has been completed, such as the Rayleigh dissipation function. We generalise the concept of potential, and define generalised potentials for dissipative lumped system elements. Our innovation offers a unified approach to the analysis of electromechanical systems where there are energy and power terms in both the mechanical and electrical parts of the system. Using our novel technique, we can take advantage of the analytic approach from mechanics, and we can apply these powerful analytical methods to electrical and to electromechanical systems. We can analyse systems that include non-conservative forces. Our methodology is deterministic, does not require any special intuition, and is thus suitable for automation via a computer-based algebra package. PMID:24586221
Dual nozzle aerodynamic and cooling analysis study
NASA Technical Reports Server (NTRS)
Meagher, G. M.
1981-01-01
Analytical models to predict performance and operating characteristics of dual nozzle concepts were developed and improved. Aerodynamic models are available to define flow characteristics and bleed requirements for both the dual throat and dual expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow, boundary layer, and shock effects within dual nozzle engines. Thermal analyses were performed to define cooling requirements for baseline configurations, and special studies of unique dual nozzle cooling problems defined feasible means of achieving adequate cooling.
Feigenbaum, A; Scholler, D; Bouquant, J; Brigot, G; Ferrier, D; Franzl, R; Lillemarktt, L; Riquet, A M; Petersen, J H; van Lierop, B; Yagoubi, N
2002-02-01
The results of a research project (EU AIR Research Programme CT94-1025) aimed to introduce control of migration into good manufacturing practice and into enforcement work are reported. Representative polymer classes were defined on the basis of chemical structure, technological function, migration behaviour and market share. These classes were characterized by analytical methods. Analytical techniques were investigated for identification of potential migrants. High-temperature gas chromatography was shown to be a powerful method and 1H-magnetic resonance provided a convenient fingerprint of plastic materials. Volatile compounds were characterized by headspace techniques, where it was shown to be essential to differentiate volatile compounds desorbed from those generated during the thermal desorption itself. For metal trace analysis, microwave mineralization followed by atomic absorption was employed. These different techniques were introduced into a systematic testing scheme that is envisaged as being suitable both for industrial control and for enforcement laboratories. Guidelines will be proposed in the second part of this paper.
The isolation limits of stochastic vibration
NASA Technical Reports Server (NTRS)
Knopse, C. R.; Allaire, P. E.
1993-01-01
The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.
On differential transformations between Cartesian and curvilinear (geodetic) coordinates
NASA Technical Reports Server (NTRS)
Soler, T.
1976-01-01
Differential transformations are developed between Cartesian and curvilinear orthogonal coordinates. Only matrix algebra is used for the presentation of the basic concepts. After defining the reference systems used, the rotation (R), metric (H), and Jacobian (J) matrices of the transformations between Cartesian and curvilinear coordinate systems are introduced. A value of R as a function of H and J is presented. Likewise, an analytical expression for J(-1) as a function of H(-2) and R is obtained. Emphasis is placed on showing that differential transformations are equivalent to conventional similarity transformations. Scaling methods are discussed along with ellipsoidal coordinates. Differential transformations between ellipsoidal and geodetic coordinates are established.
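A concrete instance of the transformations discussed is the closed-form mapping from geodetic (curvilinear) to Cartesian coordinates on a reference ellipsoid. The sketch below uses WGS 84 constants for definiteness; the paper itself is ellipsoid-agnostic, and the function name is ours.

```python
import math

def geodetic_to_cartesian(lat_deg, lon_deg, h, a=6378137.0, f=1 / 298.257223563):
    """Geodetic latitude/longitude/height to Cartesian (ECEF) coordinates.

    a is the ellipsoid semi-major axis (m) and f its flattening; the
    defaults are the WGS 84 values, used here purely for illustration.
    """
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    e2 = f * (2 - f)                                  # first eccentricity squared
    N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)    # prime-vertical radius
    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = (N * (1 - e2) + h) * math.sin(lat)
    return x, y, z
```

The differential (Jacobian) relations in the paper are obtained by differentiating exactly this mapping with respect to latitude, longitude, and height.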
Barmpoutis, Angelos
2010-01-01
Registration of Diffusion-Weighted MR Images (DW-MRI) can be achieved by registering the corresponding 2nd-order Diffusion Tensor Images (DTI). However, it has been shown that higher-order diffusion tensors (e.g. order-4) outperform the traditional DTI in approximating complex fiber structures such as fiber crossings. In this paper we present a novel method for unbiased group-wise non-rigid registration and atlas construction of 4th-order diffusion tensor fields. To the best of our knowledge there is no other existing method that achieves this task. First we define a metric on the space of positive-valued functions based on the Riemannian metric of real positive numbers (denoted by ℝ+). Then, we use this metric in a novel functional minimization method for non-rigid 4th-order tensor field registration. We define a cost function that accounts for the 4th-order tensor re-orientation during the registration process and has analytic derivatives with respect to the transformation parameters. Finally, the tensor field atlas is computed as the minimizer of the variance defined using the Riemannian metric. We quantitatively compare the proposed method with other techniques that register scalar-valued or diffusion tensor (rank-2) representations of the DW-MRI. PMID:20436782
Demonstration of automated proximity and docking technologies
NASA Astrophysics Data System (ADS)
Anderson, Robert L.; Tsugawa, Roy K.; Bryan, Thomas C.
Automated docking was demonstrated using straightforward techniques and real sensor hardware. A simulation testbed was established and validated. The sensor design was refined with improved optical performance and image-processing noise mitigation techniques, and the sensor is ready for production from off-the-shelf components. The autonomous spacecraft architecture is defined. The areas of sensors, docking hardware, propulsion, and avionics are included in the design. The Guidance, Navigation and Control architecture and requirements are developed. Modular structures suitable for automated control are used. The spacecraft system manager functions, including configuration, resource, and redundancy management, are defined. The requirements for an autonomous spacecraft executive are defined. High-level decision making, mission planning, and mission contingency recovery are a part of this. The next step is to do flight demonstrations. After the presentation, the following question was asked: How do you define validation? There are two components to the validation definition: software simulation with formal and rigorous validation, and hardware and facility performance validated with respect to software already validated against an analytical profile.
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
Quality control of the tribological coating PS212
NASA Technical Reports Server (NTRS)
Sliney, Harold E.; Dellacorte, Christopher; Deadmore, Daniel L.
1989-01-01
PS212 is a self-lubricating, composite coating that is applied by the plasma spray process. It is a functional lubricating coating from 25 C (or lower) to 900 C. The coating is prepared from a blend of three different powders with very dissimilar properties. Therefore, the final chemical composition and lubricating effectiveness of the coatings are very sensitive to the process variables used in their preparation. Defined here are the relevant variables. The process and analytical procedures that will result in satisfactory tribological coatings are discussed.
Characterization of nutraceuticals and functional foods by innovative HPLC methods.
Corradini, Claudio; Galanti, Roberta; Nicoletti, Isabella
2002-04-01
In recent years there has been growing interest in foods and food ingredients that may provide health benefits. Foods, as well as food ingredients, containing health-preserving components are not considered conventional foods but can be defined as functional foods. To characterise such foods, as well as nutraceuticals, specific, highly sensitive and reproducible analytical methodologies are needed. In light of this importance, we set out to develop innovative HPLC methods employing reversed-phase narrow-bore columns and high-performance anion-exchange chromatographic methods coupled with pulsed amperometric detection (HPAEC-PAD), which are specific for carbohydrate analysis. The developed methods were applied to the separation and quantification of citrus flavonoids and to characterise fructooligosaccharides (FOS) and fructans added to functional foods and nutraceuticals.
Definition and properties of the libera operator on mixed norm spaces.
Pavlovic, Miroslav
2014-01-01
We consider the action of the operator ℒg(z) = (1 − z)^(−1) ∫_z^1 g(ζ) dζ on a class of "mixed norm" spaces of analytic functions on the unit disk, X = H^(p,q)_(α,ν), defined by the requirement g ∈ X ⇔ r ↦ (1 − r)^α M_p(r, g^(ν)) ∈ L^q([0, 1], dr/(1 − r)), where 1 ≤ p ≤ ∞, 0 < q ≤ ∞, α > 0, and ν is a nonnegative integer. This class contains Besov spaces, weighted Bergman spaces, Dirichlet type spaces, Hardy-Sobolev spaces, and so forth. The expression ℒg need not be defined for g analytic in the unit disk, even for g ∈ X. A sufficient, but not necessary, condition is that Σ_(n=0)^∞ |ĝ(n)|/(n + 1) < ∞. We identify the indices p, q, α, and ν for which 1° ℒ is well defined on X, 2° ℒ acts from X to X, 3° the implication g ∈ X ⇒ Σ_(n=0)^∞ |ĝ(n)|/(n + 1) < ∞ holds. Assertion 2° extends some known results, due to Siskakis and others, and contains some new ones. As an application of 3° we have a generalization of Bernstein's theorem on absolute convergence of power series that belong to a Hölder class.
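The defining integral can be checked numerically for real z in [0, 1). The helper below is our own illustration (names and quadrature are ours): it evaluates ℒg by the trapezoid rule, reproducing, e.g., that ℒ of the constant 1 is 1 and that for g(ζ) = ζ one gets (ℒg)(z) = (1 + z)/2.

```python
def libera(g, z, n=20000):
    """Numerically evaluate (L g)(z) = (1 - z)**(-1) * integral from z to 1
    of g, for real z in [0, 1), by the trapezoid rule. Illustration only."""
    h = (1.0 - z) / n
    total = 0.5 * (g(z) + g(1.0))      # trapezoid endpoint terms
    for i in range(1, n):
        total += g(z + i * h)
    return total * h / (1.0 - z)
```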
NASA Astrophysics Data System (ADS)
Ben Torkia, Yosra; Ben Yahia, Manel; Khalfaoui, Mohamed; Al-Muhtaseb, Shaheen A.; Ben Lamine, Abdelmottaleb
2014-01-01
The adsorption energy distribution (AED) function of a commercial activated carbon (BDH-activated carbon) was investigated. For this purpose, the integral equation is derived by using a purely analytical statistical physics treatment. The description of the heterogeneity of the adsorbent is significantly clarified by defining the parameter N(E). This parameter represents the energetic density of the spatial density of the effectively occupied sites. To solve the integral equation, a numerical method was used based on an adequate algorithm. The Langmuir model was adopted as the local adsorption isotherm. This model is developed by using the grand canonical ensemble, which allows the physico-chemical parameters involved in the adsorption process to be defined. The AED function is estimated by a normal Gaussian function. This method is applied to the adsorption isotherms of nitrogen, methane and ethane at different temperatures. The development of the AED using a statistical physics treatment provides an explanation of the behaviour of the gas molecules during the adsorption process and gives new physical interpretations at microscopic levels.
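The construction described (a Langmuir local isotherm weighted by a Gaussian AED) can be sketched as a direct quadrature. All parameter values and names below are illustrative assumptions, not the paper's fitted values:

```python
import math

def gaussian_aed(E, E_mean, E_sigma):
    """Normal (Gaussian) adsorption-energy distribution, as in the abstract."""
    return math.exp(-0.5 * ((E - E_mean) / E_sigma) ** 2) / (E_sigma * math.sqrt(2 * math.pi))

def overall_isotherm(p, E_mean, E_sigma, T, p0=1.0e9, R=8.314, nE=2001, span=6.0):
    """Overall coverage theta(p): integral of a local Langmuir isotherm
    weighted by a Gaussian AED, by trapezoid rule over +/- span sigma.

    p0 (pre-exponential pressure) and all names are illustrative.
    """
    lo, hi = E_mean - span * E_sigma, E_mean + span * E_sigma
    h = (hi - lo) / (nE - 1)
    total = 0.0
    for i in range(nE):
        E = lo + i * h
        p_half = p0 * math.exp(-E / (R * T))   # local Langmuir half-coverage pressure
        local = p / (p + p_half)               # Langmuir local isotherm
        w = 0.5 if i in (0, nE - 1) else 1.0   # trapezoid weights
        total += w * gaussian_aed(E, E_mean, E_sigma) * local * h
    return total
```

By symmetry of the Gaussian weight, coverage is 0.5 at the pressure corresponding to the mean energy, which serves as a useful check.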
A note on φ-analytic conformal vector fields
NASA Astrophysics Data System (ADS)
Deshmukh, Sharief; Bin Turki, Nasser
2017-09-01
Taking a clue from the analytic vector fields on a complex manifold, φ-analytic conformal vector fields are defined on a Riemannian manifold (Deshmukh and Al-Solamy in Colloq. Math. 112(1):157-161, 2008). In this paper, we use φ-analytic conformal vector fields to find new characterizations of the n-sphere Sn(c) and the Euclidean space (Rn, <,>).
Guo, Shaojun; Wang, Erkang
2011-07-19
In order to develop new, high-technology devices for a variety of applications, researchers would like to better control the structure and function of micro/nanomaterials through an understanding of the role of size, shape, architecture, composition, hybridization, molecular engineering, assembly, and microstructure. However, researchers continue to face great challenges in the construction of well-defined micro/nanomaterials with diverse morphologies. At the same time, the research interface where micro/nanomaterials meet electrochemistry, analytical chemistry, biomedicine, and other fields provides rich opportunities to reveal new chemical, physical, and biological properties of micro/nanomaterials and to uncover many new functions and applications of these materials. In this Account, we describe our recent progress in the construction of novel inorganic and polymer nanostructures formed through different simple strategies. Our synthetic strategies include wet-chemical and electrochemical methods for the controlled production of inorganic and polymer nanomaterials with well-defined morphologies. These methods are both facile and reliable, allowing us to produce high-quality micro/nanostructures, such as nanoplates, micro/nanoflowers, monodisperse micro/nanoparticles, nanowires, nanobelts, and polyhedra, and even diverse hybrid structures. We implemented a series of approaches to address the challenges in the preparation of new functional micro/nanomaterials for a variety of important applications. This Account also highlights new or enhanced applications of certain micro/nanomaterials in sensing. We singled out analytical techniques that take advantage of particular properties of micro/nanomaterials.
Then by rationally tailoring experimental parameters, we readily and selectively obtained different types of micro/nanomaterials with novel morphologies with high performance in applications such as electrochemical sensors, electrochemiluminescent sensors, gene delivery agents, and fuel cell catalysts. We expect that micro/nanomaterials with unique structural characteristics, properties, and functions will attract increasing research interest and will lead to new opportunities in various fields of research.
NASA Astrophysics Data System (ADS)
Wang, Ge; Berk, H. L.
2011-10-01
The frequency chirping signal arising from a toroidal Alfvén eigenmode (TAE) spontaneously excited by energetic particles is studied in both numerical and analytic models. The time-dependent numerical model is based on the 1D Vlasov equation. We use a sophisticated tracking method to lock onto the resonant structure to enable the chirping frequency to be nearly constant in the calculation frame. The accuracy of the adiabatic approximation is tested during the simulation, which justifies the appropriateness of our analytic model. The analytic model uses the adiabatic approximation, which allows us to solve the wave evolution equation in frequency space. Then, the resonant interactions between energetic particles and the TAE yield predictions for the chirping rate, wave frequency and amplitude vs. time. Here, an adiabatic invariant J is defined on the separatrix of a chirping mode to determine the region of confinement of the wave-trapped distribution function. We examine the asymptotic behavior of the chirping signal for its long-time evolution and find agreement in essential features with the results of the simulation. Work supported by Department of Energy contract DE-FC02-08ER54988.
Correlated scattering states of N-body Coulomb systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berakdar, J.
1997-03-01
For N charged particles of equal masses moving in the field of a heavy residual charge, an approximate analytical solution of the many-body time-independent Schrödinger equation is derived at a total energy above the complete fragmentation threshold. All continuum particles are treated on equal footing. The proposed correlated wave function represents, to leading order, an exact solution of the many-body Schrödinger equation in the asymptotic region defined by large interparticle separations. Thus, in this asymptotic region the N-body Coulomb modifications to the plane-wave motion of free particles are rigorously estimated. It is shown that the Kato cusp conditions are satisfied by the derived wave function at all two-body coalescence points. An expression for the normalization of this wave function is also given. To render possible the calculations of scattering amplitudes for transitions leading to a four-body scattering state, an effective-charge method is suggested in which the correlations between the continuum particles are completely subsumed into effective interactions with the residual charge. Analytical expressions for these effective interactions are derived and discussed for physical situations. © 1997 The American Physical Society
Expressions Module for the Satellite Orbit Analysis Program
NASA Technical Reports Server (NTRS)
Edmonds, Karina
2008-01-01
The Expressions Module is a software module that has been incorporated into the Satellite Orbit Analysis Program (SOAP). The module includes an expressions-parser submodule built on top of an analytical system, enabling the user to define logical and numerical variables and constants. The variables can capture output from SOAP orbital-prediction and geometric-engine computations. The module can combine variables and constants with built-in logical operators (such as Boolean AND, OR, and NOT), relational operators (such as >, <, or =), and mathematical operators (such as addition, subtraction, multiplication, division, modulus, exponentiation, differentiation, and integration). Parentheses can be used to specify precedence of operations. The module contains a library of mathematical functions and operations, including logarithms, trigonometric functions, Bessel functions, minimum/maximum operations, and floating-point-to-integer conversions. The module supports combinations of time, distance, and angular units and has a dimensional-analysis component that checks for correct usage of units. A parser based on the Flex language and the Bison program looks for and indicates errors in syntax. SOAP expressions can be built using other expressions as arguments, thus enabling the user to build analytical trees. A graphical user interface facilitates use.
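The dimensional-analysis component described above can be illustrated with a minimal sketch. This is not the SOAP implementation; the `Quantity` class and the (time, distance, angle) exponent tuple are invented here to show the idea of unit checking during expression evaluation:

```python
# Minimal sketch of dimensional analysis in an expression evaluator (assumed
# design, not SOAP code): each value carries unit exponents for
# (time, distance, angle); addition demands matching units, multiplication
# adds exponents.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    units: tuple  # exponents of (time, distance, angle)

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Quantity(self.value + other.value, self.units)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.units, other.units)))

distance = Quantity(7.5, (0, 1, 0))    # 7.5 distance units
inv_time = Quantity(1.0, (-1, 0, 0))   # per time unit
speed = distance * inv_time
print(speed.units)                     # → (-1, 1, 0)
```

Adding `speed` to `distance` raises `TypeError`, which is how a unit-checking evaluator can flag incorrect usage of units at expression level.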
Sampling and Reconstruction of the Pupil and Electric Field for Phase Retrieval
NASA Technical Reports Server (NTRS)
Dean, Bruce; Smith, Jeffrey; Aronstein, David
2012-01-01
This technology is based on sampling considerations for a band-limited function, which has application to optical estimation generally, and to phase retrieval specifically. The analysis begins with the observation that the Fourier transform of an optical aperture function (pupil) can be implemented with minimal aliasing for Q values down to Q = 1. The sampling ratio, Q, is defined as the ratio of the sampling frequency to the band-limited cut-off frequency. The analytical results are given using a 1-d aperture function, and with the electric field defined by the band-limited sinc(x) function. Perfect reconstruction of the Fourier transform (electric field) is derived using the Whittaker-Shannon sampling theorem for Q = 1.
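The reconstruction step can be sketched numerically. The code below is an illustrative check rather than the paper's analysis; the choice of a sinc test field follows the abstract, while the grid sizes, the oversampled ratio Q = 2, and the truncation of the Shannon series are our own assumptions:

```python
# Whittaker-Shannon interpolation of a band-limited test field (illustrative).
import numpy as np

def shannon_reconstruct(samples, tn, dt, t):
    """f(t) = sum_n f(tn) * sinc((t - tn) / dt)  (truncated series)."""
    return np.sum(samples[None, :] * np.sinc((t[:, None] - tn[None, :]) / dt),
                  axis=1)

Q = 2.0                         # sampling ratio (Q = 1 is the minimal case)
dt = 1.0 / Q                    # spacing for a field band-limited to |f| <= 1/2
tn = np.arange(-200, 201) * dt  # finite sample grid (truncation assumption)
samples = np.sinc(tn)           # band-limited sinc test field
t = np.linspace(-3.0, 3.0, 61)
err = np.max(np.abs(shannon_reconstruct(samples, tn, dt, t) - np.sinc(t)))
print(err)                      # small; only series truncation contributes
```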
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, X; Bues, M
2015-06-15
Purpose: To present an analytical formula for deriving the mechanical isocenter (MIC) of a rotational gantry treatment unit. The input data to the formula are obtained by a custom-made device. The formula has been implemented and used in an operational proton therapy facility since 2005. Methods: The custom-made device consisted of 3 mutually perpendicular dial indicators and 5 clinometers to obtain displacement data and gantry angle data simultaneously. During measurement, a steel sphere was affixed to the patient couch, and the device was attached to the snout rotating with the gantry. The displacement data and angle data were obtained simultaneously at angular increments of less than 1 degree. The analytical formula took the displacement and angle as input and derived the dial indicator tip (DIT) positions in the room-fixed coordinate system. The formula derivation uses trigonometry and 3-dimensional coordinate transformations. Due to the symmetry properties of the defining equations, the DIT position can be solved for analytically without using mathematical approximations. We define the mean of all points in the DIT trajectory as the MIC. The formula was implemented in computer code, which has been employed during acceptance testing, commissioning, and routine QA practice in an operational proton facility since 2005. Results: It took one minute for the custom-made device to acquire the measurement data for a full gantry rotation. The DIT trajectory and MIC are available immediately after the measurement. The MIC result agrees well with the vendor's result, which came from a different measurement setup and a different data analysis algorithm. Conclusion: An analytical formula for deriving the mechanical isocenter was developed and validated. The formula is mathematically exact.
By analyzing measured data of radial displacements as a function of gantry angle, the formula calculates the MIC position in the room-fixed coordinate system.
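The final averaging step, taking the MIC as the mean of the DIT trajectory, can be sketched on synthetic data. The circular trajectory, its center, and the run-out radius below are invented for illustration; they stand in for real dial-indicator measurements:

```python
# MIC := mean of the dial-indicator-tip (DIT) trajectory (illustrative data).
import numpy as np

angles = np.radians(np.arange(0.0, 360.0, 1.0))   # < 1 degree increments
center = np.array([0.3, -0.2, 0.1])               # assumed true MIC (mm)
r = 1.5                                           # assumed run-out radius (mm)
traj = np.stack([center[0] + r * np.cos(angles),
                 center[1] + r * np.sin(angles),
                 np.full_like(angles, center[2])], axis=1)

mic = traj.mean(axis=0)       # the mean of all trajectory points is the MIC
print(np.round(mic, 3))       # close to [0.3, -0.2, 0.1]
```

With a full uniform rotation the circular run-out averages out exactly, so the mean recovers the trajectory's center.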
NASA Astrophysics Data System (ADS)
Letzel, Alexander; Gökce, Bilal; Menzel, Andreas; Plech, Anton; Barcikowski, Stephan
2018-03-01
For a known material, the size distribution of a nanoparticle colloid is a crucial parameter that defines its properties. However, measured size distributions are not easy to interpret, as one has to consider weighting (e.g. by light absorption, scattering intensity, volume, surface, number) and the way size information was gained. The radius of a suspended nanoparticle can be given as e.g. sphere-equivalent, hydrodynamic, Feret or radius of gyration. In this study, gold nanoparticles in water are synthesized by pulsed laser ablation (LAL) and fragmentation (LFL) in liquids and characterized by various techniques (scanning transmission electron microscopy (STEM), small-angle X-ray scattering (SAXS), analytical disc centrifugation (ADC), dynamic light scattering (DLS) and UV-vis spectroscopy with Mie-Gans theory) to study the comparability of different analytical techniques and determine the method that is preferable for a given task related to laser-generated nanoparticles. In particular, laser-generated colloids are known to be bimodal and/or polydisperse, but bimodality is sometimes not analytically resolved in the literature. In addition, the frequently reported small size shifts of the primary particle mode around 10 nm need evaluation of their statistical significance related to the analytical method. Closely related to earlier studies on SAXS, different colloids in defined proportions are mixed and their size as a function of the nominal mixing ratio is analyzed. It is found that the derived particle size is independent of the nominal mixing ratio if the colloid size fractions do not overlap considerably. Conversely, the obtained size for colloids with overlapping size fractions strongly depends on the nominal mixing ratio since most methods cannot distinguish between such fractions. Overall, SAXS and ADC are very accurate methods for particle size analysis.
Further, the ability of different methods to determine the nominal mixing ratio of size fractions is studied experimentally.
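The weighting issue can be made concrete with a toy calculation; the two radii and the mixing ratios below are invented, not the paper's data. Number-weighted and volume-weighted mean radii of a bimodal mixture respond very differently to the nominal mixing ratio:

```python
# Number- vs volume-weighted mean radius of a two-mode mixture (toy values).
import numpy as np

r = np.array([5.0, 25.0])    # radii of the two monomodal fractions (nm)

def mean_radii(x):
    """x = nominal number fraction of the small-particle mode."""
    n = np.array([x, 1.0 - x])      # number weighting
    v = n * r**3                    # volume weighting scales as r^3
    return float(np.sum(n * r)), float(np.sum(v * r) / np.sum(v))

for x in (0.5, 0.9, 0.99):
    print(x, mean_radii(x))   # even at 99% small particles by number, the
                              # volume-weighted mean stays dominated by the
                              # large mode
```

This is why two techniques with different intrinsic weightings can report very different "mean sizes" for the same bimodal colloid.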
Kosek, Margaret; Guerrant, Richard L; Kang, Gagandeep; Bhutta, Zulfiqar; Yori, Pablo Peñataro; Gratz, Jean; Gottlieb, Michael; Lang, Dennis; Lee, Gwenyth; Haque, Rashidul; Mason, Carl J; Ahmed, Tahmeed; Lima, Aldo; Petri, William A; Houpt, Eric; Olortegui, Maribel Paredes; Seidman, Jessica C; Mduma, Estomih; Samie, Amidou; Babji, Sudhir
2014-11-01
Individuals in the developing world live in conditions of intense exposure to enteric pathogens due to suboptimal water and sanitation. These environmental conditions lead to alterations in intestinal structure, function, and local and systemic immune activation that are collectively referred to as environmental enteropathy (EE). This condition, although poorly defined, is likely to be exacerbated by undernutrition as well as being responsible for permanent growth deficits acquired in early childhood, vaccine failure, and loss of human potential. This article addresses the underlying theoretical and analytical frameworks informing the methodology proposed by the Etiology, Risk Factors and Interactions of Enteric Infections and Malnutrition and the Consequences for Child Health and Development (MAL-ED) cohort study to define and quantify the burden of disease caused by EE within a multisite cohort. Additionally, we will discuss efforts to improve, standardize, and harmonize laboratory practices within the MAL-ED Network. These efforts will address current limitations in the understanding of EE and its burden on children in the developing world. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America.
Template based rotation: A method for functional connectivity analysis with a priori templates☆
Schultz, Aaron P.; Chhatwal, Jasmeer P.; Huijbers, Willem; Hedden, Trey; van Dijk, Koene R.A.; McLaren, Donald G.; Ward, Andrew M.; Wigman, Sarah; Sperling, Reisa A.
2014-01-01
Functional connectivity magnetic resonance imaging (fcMRI) is a powerful tool for understanding the network level organization of the brain in research settings and is increasingly being used to study large-scale neuronal network degeneration in clinical trial settings. Presently, a variety of techniques, including seed-based correlation analysis and group independent components analysis (with either dual regression or back projection) are commonly employed to compute functional connectivity metrics. In the present report, we introduce template based rotation, a novel analytic approach optimized for use with a priori network parcellations, which may be particularly useful in clinical trial settings. Template based rotation was designed to leverage the stable spatial patterns of intrinsic connectivity derived from out-of-sample datasets by mapping data from novel sessions onto the previously defined a priori templates. We first demonstrate the feasibility of using previously defined a priori templates in connectivity analyses, and then compare the performance of template based rotation to seed based and dual regression methods by applying these analytic approaches to an fMRI dataset of normal young and elderly subjects. We observed that template based rotation and dual regression are approximately equivalent in detecting fcMRI differences between young and old subjects, demonstrating similar effect sizes for group differences and similar reliability metrics across 12 cortical networks. Both template based rotation and dual-regression demonstrated larger effect sizes and comparable reliabilities as compared to seed based correlation analysis, though all three methods yielded similar patterns of network differences.
When performing inter-network and sub-network connectivity analyses, we observed that template based rotation offered greater flexibility, larger group differences, and more stable connectivity estimates as compared to dual regression and seed based analyses. This flexibility owes to the reduced spatial and temporal orthogonality constraints of template based rotation as compared to dual regression. These results suggest that template based rotation can provide a useful alternative to existing fcMRI analytic methods, particularly in clinical trial settings where predefined outcome measures and conserved network descriptions across groups are at a premium. PMID:25150630
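For orientation, the first (spatial) stage of dual regression, one of the baseline methods compared above, can be sketched in a few lines of numpy on synthetic data. The array shapes, noise level, and synthetic templates are our assumptions; this is not the authors' code, and template based rotation itself differs in its orthogonality constraints:

```python
# Stage 1 of dual regression: regress a priori spatial templates against each
# volume to recover network time courses (synthetic illustration).
import numpy as np

rng = np.random.default_rng(0)
n_time, n_vox, n_net = 120, 500, 3
T = rng.normal(size=(n_vox, n_net))        # a priori templates (voxels x nets)
A_true = rng.normal(size=(n_time, n_net))  # latent network time courses
Y = A_true @ T.T + 0.1 * rng.normal(size=(n_time, n_vox))  # synthetic data

A_hat = Y @ T @ np.linalg.inv(T.T @ T)     # least-squares time courses
corr = float(np.corrcoef(A_hat[:, 0], A_true[:, 0])[0, 1])
print(corr)                                # close to 1
```

A second stage would regress the recovered time courses back against the data to obtain subject-specific spatial maps.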
Mode selection and frequency tuning by injection in pulsed TEA-CO2 lasers
NASA Technical Reports Server (NTRS)
Flamant, P. H.; Menzies, R. T.
1983-01-01
An analytical model characterizing pulsed-TEA-CO2-laser injection locking by tunable CW-laser radiation is presented and used to explore the requirements for SLM pulse generation. Photon-density-rate equations describing the laser mechanism are analyzed in terms of the mode competition between photon densities emitted at two frequencies. The expression derived for pulsed dye lasers is extended to homogeneously broadened CO2 lasers, and locking time is defined as a function of laser parameters. The extent to which injected radiation can be detuned from the CO2 line center and continue to produce SLM pulses is investigated experimentally in terms of the analytical framework. The dependence of locking time on the detuning/pressure-broadened-halfwidth ratio is seen as important for spectroscopic applications requiring tuning within the TEA-laser line-gain bandwidth.
Measure and dimension functions: measurability and densities
NASA Astrophysics Data System (ADS)
Mattila, Pertti; Mauldin, R. Daniel
1997-01-01
During the past several years, new types of geometric measure and dimension have been introduced; the packing measure and dimension, see [Su], [Tr] and [TT1]. These notions are playing an increasingly prevalent role in various aspects of dynamics and measure theory. Packing measure is a sort of dual of Hausdorff measure in that it is defined in terms of packings rather than coverings. However, in contrast to Hausdorff measure, the usual definition of packing measure requires two limiting procedures, first the construction of a premeasure and then a second standard limiting process to obtain the measure. This makes packing measure somewhat delicate to deal with. The question arises as to whether there is some simpler method for defining packing measure and dimension. In this paper, we find a basic limitation on this possibility. We do this by determining the descriptive set-theoretic complexity of the packing functions. Whereas the Hausdorff dimension function on the space of compact sets is Borel measurable, the packing dimension function is not. On the other hand, we show that the packing dimension functions are measurable with respect to the σ-algebra generated by the analytic sets. Thus, the usual sorts of measurability properties used in connection with Hausdorff measure, for example measures of sections and projections, remain true for packing measure.
Closed-loop, pilot/vehicle analysis of the approach and landing task
NASA Technical Reports Server (NTRS)
Anderson, M. R.; Schmidt, D. K.
1986-01-01
In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions, once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R. E. Smith are also presented.
Development of methodologies and procedures for identifying STS users and uses
NASA Technical Reports Server (NTRS)
Archer, J. L.; Beauchamp, N. A.; Macmichael, D. C.
1974-01-01
A study was conducted to identify new uses and users of the new Space Transportation System (STS) within the domestic government sector. The study develops a series of analytical techniques and well-defined functions structured as an integrated planning process to assure efficient and meaningful use of the STS. The purpose of the study is to provide NASA with the following functions: (1) to realize efficient and economic use of the STS and other NASA capabilities, (2) to identify new users and uses of the STS, (3) to contribute to organized planning activities for both current and future programs, and (4) to aid in analyzing uses of NASA's overall capabilities.
Unsteady non-Newtonian hydrodynamics in granular gases.
Astillero, Antonio; Santos, Andrés
2012-02-01
The temporal evolution of a dilute granular gas, both in a compressible flow (uniform longitudinal flow) and in an incompressible flow (uniform shear flow), is investigated by means of the direct simulation Monte Carlo method to solve the Boltzmann equation. Emphasis is laid on the identification of a first "kinetic" stage (where the physical properties are strongly dependent on the initial state) subsequently followed by an unsteady "hydrodynamic" stage (where the momentum fluxes are well-defined non-Newtonian functions of the rate of strain). The simulation data are seen to support this two-stage scenario. Furthermore, the rheological functions obtained from simulation are well described by an approximate analytical solution of a model kinetic equation. © 2012 American Physical Society
Response of a rigid aircraft to nonstationary atmospheric turbulence.
NASA Technical Reports Server (NTRS)
Verdon, J. M.; Steiner, R.
1973-01-01
The plunging response of an aircraft to a type of nonstationary turbulent excitation is considered. The latter consists of stationary Gaussian noise modulated by a well-defined envelope function. The intent of the investigation is to model the excitation experienced by an airplane flying through turbulence of varying intensity and to examine the influence of intensity variations on exceedance frequencies of the gust velocity and the airplane's plunging velocity and acceleration. One analytical advantage of the proposed model is that the Gaussian assumption for the gust excitation is retained. The analysis described herein is developed in terms of an envelope function of arbitrary form; however, numerical calculations are limited to the case of harmonic modulation.
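The excitation model, stationary Gaussian noise multiplied by a deterministic envelope, can be sketched as follows. The harmonic envelope matches the modulation case used in the numerical calculations, but its amplitude and period, the sample counts, and the empirical exceedance estimator are our own illustrative choices:

```python
# Nonstationary gust model: envelope(t) * stationary Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 20000)
w = rng.normal(size=t.size)                       # stationary Gaussian noise
env = 1.0 + 0.5 * np.cos(2 * np.pi * t / 25.0)    # harmonic envelope (assumed)
gust = env * w                                    # modulated excitation

def exceedance(y):
    """Empirical fraction of samples with |gust| above level y."""
    return float(np.mean(np.abs(gust) > y))

print(exceedance(1.0), exceedance(2.0))   # exceedance falls with level
```

Because the modulation is deterministic and the noise Gaussian, the process stays Gaussian at each instant, which is the analytical advantage noted in the abstract.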
Insight and Action Analytics: Three Case Studies to Consider
ERIC Educational Resources Information Center
Milliron, Mark David; Malcolm, Laura; Kil, David
2014-01-01
Civitas Learning was conceived as a community of practice, bringing together forward-thinking leaders from diverse higher education institutions to leverage insight and action analytics in their ongoing efforts to help students learn well and finish strong. We define insight and action analytics as drawing, federating, and analyzing data from…
Learning Analytics: Potential for Enhancing School Library Programs
ERIC Educational Resources Information Center
Boulden, Danielle Cadieux
2015-01-01
Learning analytics has been defined as the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. The potential use of data and learning analytics in educational contexts has caught the attention of educators and…
NASA Astrophysics Data System (ADS)
Thompson, Rodger I.
2018-04-01
This investigation explores using the beta function formalism to calculate analytic solutions for the observable parameters in rolling scalar field cosmologies. The beta function in this case is the derivative of the scalar ϕ with respect to the natural log of the scale factor a, β (φ )=d φ /d ln (a). Once the beta function is specified, modulo a boundary condition, the evolution of the scalar ϕ as a function of the scale factor is completely determined. A rolling scalar field cosmology is defined by its action which can contain a range of physically motivated dark energy potentials. The beta function is chosen so that the associated "beta potential" is an accurate, but not exact, representation of the appropriate dark energy model potential. The basic concept is that the action with the beta potential is so similar to the action with the model potential that solutions using the beta action are accurate representations of solutions using the model action. The beta function provides an extra equation to calculate analytic functions of the cosmology's parameters as a function of the scale factor that are not calculable using only the model action. As an example, this investigation uses a quintessence cosmology to demonstrate the method for power and inverse power law dark energy potentials. An interesting result of the investigation is that the Hubble parameter H is almost completely insensitive to the power of the potentials and that ΛCDM is part of the family of quintessence cosmology power law potentials with a power of zero.
NASA Astrophysics Data System (ADS)
Thompson, Rodger I.
2018-07-01
This investigation explores using the beta function formalism to calculate analytic solutions for the observable parameters in rolling scalar field cosmologies. The beta function in this case is the derivative of the scalar φ with respect to the natural log of the scale factor a, β (φ)=d φ/d ln (a). Once the beta function is specified, modulo a boundary condition, the evolution of the scalar φ as a function of the scale factor is completely determined. A rolling scalar field cosmology is defined by its action which can contain a range of physically motivated dark energy potentials. The beta function is chosen so that the associated `beta potential' is an accurate, but not exact, representation of the appropriate dark energy model potential. The basic concept is that the action with the beta potential is so similar to the action with the model potential that solutions using the beta action are accurate representations of solutions using the model action. The beta function provides an extra equation to calculate analytic functions of the cosmology's parameters as a function of the scale factor that are not calculable using only the model action. As an example, this investigation uses a quintessence cosmology to demonstrate the method for power and inverse power law dark energy potentials. An interesting result of the investigation is that the Hubble parameter H is almost completely insensitive to the power of the potentials and that Λ cold dark matter is part of the family of quintessence cosmology power-law potentials with a power of zero.
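The defining relation β(φ) = dφ/d ln(a) can be sketched numerically: given a beta function and a boundary condition, φ(a) follows by integration. The linear β below is a toy choice with the closed form φ(a) = φ0 a^b, used only to check the integrator; it is not one of the paper's dark energy potentials:

```python
# Integrate d(phi)/d(ln a) = beta(phi) with forward Euler (toy beta function).
import numpy as np

def evolve_phi(beta, phi0, lna):
    """phi on a grid of ln(a), from beta(phi) and boundary value phi0."""
    phi = np.empty_like(lna)
    phi[0] = phi0
    for i in range(1, len(lna)):
        phi[i] = phi[i - 1] + (lna[i] - lna[i - 1]) * beta(phi[i - 1])
    return phi

b, phi0 = 0.5, 1.0
lna = np.linspace(0.0, 1.0, 100001)
phi = evolve_phi(lambda p: b * p, phi0, lna)      # beta(phi) = b*phi (toy)
err = abs(phi[-1] - phi0 * np.exp(b * lna[-1]))   # exact: phi = phi0 * a**b
print(err)                                        # small integration error
```

The same integrator would accept any β(φ) tailored to approximate a model potential, which is the role the "beta potential" plays above.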
NASA Astrophysics Data System (ADS)
Dewar, R. L.; Mills, R.; Hole, M. J.
2009-05-01
The celebration of Allan Kaufman's 80th birthday was an occasion to reflect on a career that has stimulated the mutual exchange of ideas (or memes in the terminology of Richard Dawkins) between many researchers. This paper will revisit a meme Allan encountered in his early career in magnetohydrodynamics, the continuation of a magnetohydrodynamic mode through a singularity, and will also mention other problems where Allan's work has had a powerful cross-fertilizing effect in plasma physics and other areas of physics and mathematics. To resolve the continuation problem we regularize the Newcomb equation, solve it in terms of Legendre functions of imaginary argument, and define the small weak solutions of the Newcomb equation as generalized functions in the manner of Lighthill, i.e. via a limiting sequence of analytic functions that connect smoothly across the singularity.
NASA Astrophysics Data System (ADS)
Zainudin, W. N. R. A.; Ramli, N. A.
2017-09-01
In 2010, the Energy Commission (EC) introduced Incentive Based Regulation (IBR) to ensure a sustainable Malaysian Electricity Supply Industry (MESI), promote transparent and fair returns, encourage maximum efficiency and maintain a policy-driven end-user tariff. To cater for such a revolutionary transformation, a sophisticated system to generate a policy-driven electricity tariff structure is greatly needed. Hence, this study presents a data analytics framework that generates an altered revenue function based on varying power consumption distribution and tariff charge function. For the purpose of this study, the power consumption distribution is proxied by the proportion of household consumption and electricity consumed in kWh, and the tariff charge function is proxied by a three-tiered increasing block tariff (IBT). The altered revenue function is useful to indicate whether any changes in the power consumption distribution and tariff charges will have a positive or negative impact on the economy. The methodology used for this framework begins by defining the revenue to be a function of the power consumption distribution and the tariff charge function. Then, the proportion of household consumption and the tariff charge function are derived within certain intervals of electricity power. Any changes in those proportions are conjectured to contribute towards changes in the revenue function. Thus, these changes can potentially indicate whether the changes in power consumption distribution and tariff charge function have a positive or negative impact on TNB revenue. Based on the findings of this study, major changes to the tariff charge function seem to affect the altered revenue function more than the power consumption distribution does. However, the paper concludes that both the power consumption distribution and the tariff charge function can influence TNB revenue to a great extent.
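The revenue construction the framework uses can be sketched directly; the three block widths and rates and the consumption distribution below are invented for illustration and are not EC or TNB figures:

```python
# Revenue as a function of a consumption distribution and a three-tiered
# increasing block tariff (IBT). All numbers are illustrative assumptions.
def ibt_charge(kwh, blocks=((200, 0.218), (100, 0.334), (float("inf"), 0.516))):
    """Charge for one household: each block is (width in kWh, rate per kWh)."""
    total, remaining = 0.0, kwh
    for width, rate in blocks:
        used = min(remaining, width)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return total

def revenue(consumption_dist):
    """consumption_dist: list of (number of households, kWh per household)."""
    return sum(n * ibt_charge(kwh) for n, kwh in consumption_dist)

dist = [(1000, 150), (500, 280), (200, 600)]   # assumed distribution
print(round(revenue(dist), 2))                 # → 114220.0
```

Altering either the block structure (the tariff charge function) or `dist` (the consumption distribution) and re-evaluating `revenue` gives the sign of the impact, which is the comparison the framework performs.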
Weng, Naidong; Needham, Shane; Lee, Mike
2015-01-01
The 17th Annual Symposium on Clinical and Pharmaceutical Solutions through Analysis (CPSA), 29 September-2 October 2014, was held at the Sheraton Bucks County Hotel, Langhorne, PA, USA. CPSA USA 2014 brought together the various analytical fields defining the challenges of the modern analytical laboratory. Ongoing discussions focused on the future application of bioanalysis and other disciplines to support investigational new drug (IND) and new drug application (NDA) submissions, clinical diagnostics and pathology laboratory personnel that support patient sample analysis, and the clinical researchers that provide insights into new biomarkers within the context of the modern laboratory and personalized medicine.
1985-09-01
whose solution is obtained from an integral equation for functions defined along the boundaries of the fluid. … If the waves are assumed to be steady, then analytical work (see e.g. Longuet-Higgins and Fox (1977) and (1978)) … This paper makes some progress towards the …
NASA Technical Reports Server (NTRS)
Madnia, C. K.; Frankel, S. H.; Givi, P.
1992-01-01
Closed form analytical expressions are obtained for predicting the limiting rate of reactant conversion in a binary reaction of the type F + rO yields (1 + r) Product in unpremixed homogeneous turbulence. These relations are obtained by means of a single point Probability Density Function (PDF) method based on the Amplitude Mapping Closure. It is demonstrated that with this model, the maximum rate of the reactants' decay can be conveniently expressed in terms of definite integrals of the Parabolic Cylinder Functions. For the cases with complete initial segregation, it is shown that the results agree very closely with those predicted by employing a Beta density of the first kind for an appropriately defined Shvab-Zeldovich scalar variable. With this assumption, the final results can also be expressed in terms of closed form analytical expressions which are based on the Incomplete Beta Functions. With both models, the dependence of the results on the stoichiometric coefficient and the equivalence ratio can be expressed in an explicit manner. For a stoichiometric mixture, the analytical results simplify significantly. In the mapping closure, these results are expressed in terms of simple trigonometric functions. For the Beta density model, they are in the form of Gamma Functions. In all the cases considered, the results are shown to agree well with data generated by Direct Numerical Simulations (DNS). Due to the simplicity of these expressions and because of nice mathematical features of the Parabolic Cylinder and the Incomplete Beta Functions, these models are recommended for estimating the limiting rate of reactant conversion in homogeneous reacting flows. These results also provide useful insights in assessing the extent of validity of turbulence closures in the modeling of unpremixed reacting flows.
Some discussions are provided on the extension of the model for treating more complicated reacting systems including realistic kinetics schemes and multi-scalar mixing with finite rate chemical reactions in more complex configurations.
A strategy to determine operating parameters in tissue engineering hollow fiber bioreactors
Shipley, RJ; Davidson, AJ; Chan, K; Chaudhuri, JB; Waters, SL; Ellis, MJ
2011-01-01
The development of tissue engineering hollow fiber bioreactors (HFB) requires the optimal design of the geometry and operation parameters of the system. This article provides a strategy for specifying operating conditions for the system based on mathematical models of oxygen delivery to the cell population. Analytical and numerical solutions of these models are developed based on Michaelis–Menten kinetics. Depending on the minimum oxygen concentration required to culture a functional cell population, together with the oxygen uptake kinetics, the strategy dictates the model needed to describe mass transport so that the operating conditions can be defined. If cmin ≫ Km we capture oxygen uptake using zero-order kinetics and proceed analytically. This enables operating equations to be developed that allow the user to choose the medium flow rate, lumen length, and ECS depth to provide a prescribed value of cmin. Otherwise, we use numerical techniques to solve the full Michaelis–Menten kinetics and present operating data for the bioreactor. The strategy presented utilizes both analytical and numerical approaches and can be applied to any cell type with known oxygen transport properties and uptake kinetics. PMID:21370228
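The zero-order branch can be illustrated with a simplified slab model of the ECS: steady 1-D diffusion with uniform uptake Q0 gives c(x) = c0 - (Q0/D)(Lx - x²/2), hence cmin = c(L) = c0 - Q0·L²/(2D), and the operating depth follows as L = sqrt(2D(c0 - cmin)/Q0). The slab geometry and every parameter value below are our own assumptions, not the paper's operating equations:

```python
# Operating ECS depth from zero-order oxygen uptake (assumed slab model).
import math

D = 2.0e-9     # oxygen diffusivity in the ECS, m^2/s (assumed)
Q0 = 1.0e-2    # zero-order volumetric uptake rate, mol m^-3 s^-1 (assumed)
c0 = 0.2       # oxygen concentration at the lumen/ECS boundary, mol/m^3
cmin = 0.05    # minimum concentration for a functional cell population

L = math.sqrt(2 * D * (c0 - cmin) / Q0)   # deepest admissible ECS layer
c_at_L = c0 - Q0 * L**2 / (2 * D)         # check: concentration at depth L
print(L, c_at_L)                          # L ~ 2.4e-4 m; c_at_L equals cmin
```

The same inversion, solving the analytic concentration profile for a geometric or flow parameter at a prescribed cmin, is what the strategy's operating equations provide for the full bioreactor geometry.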
NASA Astrophysics Data System (ADS)
Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.
2015-06-01
Most practical acoustics problems require solving boundary-value problems in non-canonical domains. Constructing analytical solutions of mathematical-physics boundary problems for non-canonical domains is therefore both valuable from the academic viewpoint and instrumental for developing efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution strategies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems whose domains can be constructed as unions of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. This approach, however, entails difficulties in constructing computational algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
König, Dirk, E-mail: dirk.koenig@unsw.edu.au
2016-08-15
Semiconductor nanocrystals (NCs) experience stress and charge transfer by embedding materials or ligands and impurity atoms. In return, the environment of NCs experiences a NC stress response which may lead to matrix deformation and propagated strain. Up to now, there is no universal gauge to evaluate the stress impact on NCs and their response as a function of NC size d_NC. I deduce geometrical number series as analytical tools to obtain the number of NC atoms N_NC(d_NC[i]), bonds between NC atoms N_bnd(d_NC[i]) and interface bonds N_IF(d_NC[i]) for seven high-symmetry zinc-blende (zb) NCs with low-index faceting: {001} cubes, {111} octahedra, {110} dodecahedra, {001}-{111} pyramids, {111} tetrahedra, {111}-{001} quatrodecahedra and {001}-{111} quadrodecahedra. The fundamental insights into NC structures revealed here allow for major advancements in data interpretation and understanding of zb- and diamond-lattice based nanomaterials. The analytical number series can serve as a standard procedure for stress evaluation in solid state spectroscopy due to their deterministic nature, easy use and general applicability over a wide range of spectroscopy methods as well as NC sizes, forms and materials.
ERIC Educational Resources Information Center
Watagodakumbura, Chandana
2014-01-01
In this paper, the authentic education system defined with multidisciplinary perspectives (Watagodakumbura, 2013a, 2013b) is viewed from the additional perspective of analytical psychology. Analytical psychology provides insights into human development and has become increasingly popular among practicing psychologists in recent years. In…
42 CFR 493.803 - Condition: Successful participation.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., subspecialty, and analyte or test in which the laboratory is certified under CLIA. (b) Except as specified in... a given specialty, subspecialty, analyte or test, as defined in this section, or fails to take...
(U) An Analytic Study of Piezoelectric Ejecta Mass Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, Ian Lee
2017-02-16
We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.
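A discrete-particle toy version of this argument (not the paper's continuum derivation) is easy to state in code: each ballistic particle deposits impulse m·v at arrival time T = t_c + L/v, and the instantaneous-creation analysis assigns it apparent velocity L/T, so the inferred mass is m·v·T/L ≥ m, with equality exactly when t_c = 0:

```python
def inferred_mass(particles, L):
    """Toy model of the instantaneous-creation assumption.

    particles: list of (mass, velocity, creation_time) for ballistic ejecta.
    A particle reaches the sensor plane at T = t_c + L/v and deposits
    impulse m*v. The standard analysis assumes t_c = 0, assigns apparent
    velocity u = L/T, and infers mass = impulse/u = m*v*T/L.
    """
    return sum(m * v * (tc + L / v) / L for m, v, tc in particles)

def true_mass(particles):
    return sum(m for m, _, _ in particles)

L = 10.0  # sensor standoff (arbitrary units)
instant = [(1.0, 2.0, 0.0), (2.0, 1.0, 0.0)]   # created at the shock, t_c = 0
delayed = [(1.0, 2.0, 0.5), (2.0, 1.0, 0.5)]   # finite production timescale

print(inferred_mass(instant, L))                    # equals the true mass, 3.0
print(inferred_mass(delayed, L) > true_mass(delayed))  # overestimate: True
```

This reproduces, in miniature, both findings quoted above: the inference is exact for delta-function creation and biased high otherwise.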
Analytical approximations to seawater optical phase functions of scattering
NASA Astrophysics Data System (ADS)
Haltrin, Vladimir I.
2004-11-01
This paper proposes a number of analytical approximations to the classic and recently measured seawater light scattering phase functions. Three types of analytical phase functions are derived: individual representations for 15 Petzold, 41 Mankovsky, and 91 Gulf of Mexico phase functions; collective fits to the Petzold phase functions; and analytical representations that take into account dependencies between the inherent optical properties of seawater. The proposed phase functions may be used for problems of radiative transfer, remote sensing, visibility, and image propagation in natural waters of various turbidity.
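The Petzold, Mankovsky, and Gulf of Mexico fits themselves are not given in the abstract; as a generic example of an analytical scattering phase function used in ocean optics, the sketch below implements the one-parameter Henyey-Greenstein form and numerically checks its normalization over solid angle (the asymmetry value g ≈ 0.924 is an assumed, typical figure for turbid water, not a value from this paper):

```python
import math

def henyey_greenstein(theta, g):
    """Henyey-Greenstein phase function, per steradian."""
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * math.cos(theta)) ** 1.5)

def normalization(g, n=20000):
    """Midpoint-rule integral of p(theta) over the full solid angle.

    Should return ~1 for any |g| < 1, since the HG form is normalized.
    """
    total = 0.0
    d = math.pi / n
    for i in range(n):
        th = (i + 0.5) * d
        total += henyey_greenstein(th, g) * 2.0 * math.pi * math.sin(th) * d
    return total

print(normalization(0.924))  # close to 1 despite the strong forward peak
```

The strongly forward-peaked shape at large g is what makes purely numerical handling of measured phase functions awkward, and is one motivation for analytic fits like those in the paper.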
Strehl ratio: a tool for optimizing optical nulls and singularities.
Hénault, François
2015-07-01
In this paper we review a set of radial and azimuthal phase functions that have a null Strehl ratio, which is equivalent to generating a central extinction in the image plane of an optical system. The study is conducted in the framework of Fraunhofer scalar diffraction, and is oriented toward practical cases where optical nulls or singularities are produced by deformable mirrors or phase plates. The identified solutions reveal unexpected links with the zeros of type-J Bessel functions of integer order. They include linear azimuthal phase ramps giving birth to an optical vortex, azimuthally modulated phase functions, and circular phase gratings (CPGs). It is found in particular that the CPG radiometric efficiency could be significantly improved by the null Strehl ratio condition. Simple design rules for rescaling and combining the different phase functions are also defined. Finally, the described analytical solutions could also serve as starting points for an automated searching software tool.
Slushy weightings for the optimal pilot model. [considering visual tracking task
NASA Technical Reports Server (NTRS)
Dillow, J. D.; Picha, D. G.; Anderson, R. O.
1975-01-01
A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics, and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
An Advanced Buffet Load Alleviation System
NASA Technical Reports Server (NTRS)
Burnham, Jay K.; Pitt, Dale M.; White, Edward V.; Henderson, Douglas A.; Moses, Robert W.
2001-01-01
This paper describes the development of an advanced buffet load alleviation (BLA) system that utilizes distributed piezoelectric actuators in conjunction with an active rudder to reduce the structural dynamic response of the F/A-18 aircraft vertical tails to buffet loads. The BLA system was defined analytically with a detailed finite-element-model of the tail structure and piezoelectric actuators. Oscillatory aerodynamics were included along with a buffet forcing function to complete the aeroservoelastic model of the tail with rudder control surface. Two single-input-single-output (SISO) controllers were designed, one for the active rudder and one for the active piezoelectric actuators. The results from the analytical open and closed loop simulations were used to predict the system performance. The objective of this BLA system is to extend the life of vertical tail structures and decrease their life-cycle costs. This system can be applied to other aircraft designs to address suppression of structural vibrations on military and commercial aircraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassanein, A.; Konkashbaev, I.
1999-03-15
The structure of a collisionless scrape-off-layer (SOL) plasma in tokamak reactors is being studied to define the electron distribution function and the corresponding sheath potential between the divertor plate and the edge plasma. The collisionless model is shown to be valid during the thermal phase of a plasma disruption, as well as during the newly desired low-recycling normal phase of operation with low-density, high-temperature edge plasma conditions. An analytical solution is developed by solving the Fokker-Planck equation for electron distribution and balance in the SOL. The solution is in good agreement with numerical studies using Monte-Carlo methods. The analytical solutions provide insight into the role of different physical and geometrical processes in a collisionless SOL during disruptions and during the enhanced phase of normal operation over a wide range of parameters.
Classical Dynamics of Fullerenes
NASA Astrophysics Data System (ADS)
Sławianowski, Jan J.; Kotowski, Romuald K.
2017-06-01
The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. An important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in the classical dynamics of large molecules and fullerenes enables us to formulate their dynamics in terms of polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
NASA Astrophysics Data System (ADS)
Park, DaeKil
2018-06-01
The dynamics of entanglement and of the uncertainty relation are explored by solving the time-dependent Schrödinger equation analytically for a coupled harmonic oscillator system whose angular frequencies and coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using these decompositions, we derive analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple considerations in the toy models, the dynamics in the realistic quenched model is somewhat different from that in the toy models. In particular, the dynamics of entanglement exhibits a pattern similar to the dynamics of the uncertainty parameter in the realistic quenched model.
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Escudero, C.
2016-09-01
We present a classical, mesoscopic derivation of the Fokker-Planck equation for diffusion in an expanding medium. To this end, we take a conveniently generalized Chapman-Kolmogorov equation as the starting point. We obtain an analytical expression for the Green's function (propagator) and investigate both analytically and numerically how this function and the associated moments behave. We also study first-passage properties in expanding hyperspherical geometries. We show that in all cases the behavior is determined to a great extent by the so-called Brownian conformal time τ(t), which we define via the relation τ̇ = 1/a², where a(t) is the expansion scale factor. If the medium expansion is driven by a power law [a(t) ∝ t^γ with γ > 0], then we find interesting crossover effects in the mixing effectiveness of the diffusion process when the characteristic exponent γ is varied. Crossover effects are also found at the level of the survival probability and of the moments of the first-passage-time distribution, with two different regimes separated by the critical value γ = 1/2. The case of an exponential scale factor is analyzed separately for both expanding and contracting media. In the latter situation, a stationary probability distribution arises in the long-time limit.
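The role of the conformal time is easy to make concrete. For a power-law scale factor a(t) = (t/t0)^γ, the definition τ̇ = 1/a² integrates in closed form, and the γ = 1/2 crossover appears as the difference between a divergent and a saturating τ(t). The sketch below (with t0 = 1 as an assumed normalization) computes it:

```python
import math

def conformal_time(t, t0=1.0, gamma=0.5):
    """Brownian conformal time tau(t) = int_{t0}^{t} dt'/a(t')^2
    for a power-law scale factor a(t) = (t/t0)^gamma.

    Closed form: tau = t0*((t/t0)**(1 - 2*gamma) - 1)/(1 - 2*gamma)
    for gamma != 1/2, and tau = t0*log(t/t0) at the critical exponent.
    """
    if abs(gamma - 0.5) < 1e-12:
        return t0 * math.log(t / t0)
    return t0 * ((t / t0) ** (1.0 - 2.0 * gamma) - 1.0) / (1.0 - 2.0 * gamma)

# gamma < 1/2: tau grows without bound, so the walker keeps mixing.
# gamma > 1/2: tau saturates at t0/(2*gamma - 1), so diffusion freezes out.
for g in (0.25, 0.5, 0.75):
    print(g, conformal_time(1e6, gamma=g))
```

For γ = 0.75 the conformal time never exceeds t0/(2γ − 1) = 2, however large t becomes, which is the "frozen" regime; for γ = 0.25 it grows like t^{1/2}.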
NASA Astrophysics Data System (ADS)
Spitoni, E.; Vincenzo, F.; Matteucci, F.
2017-03-01
Context. Analytical models of chemical evolution, including inflow and outflow of gas, are important tools for studying how the metal content in galaxies evolves as a function of time. Aims: We present new analytical solutions for the evolution of the gas mass, total mass, and metallicity of a galactic system when a decaying exponential infall rate of gas and galactic winds are assumed. We apply our model to characterize a sample of local star-forming and passive galaxies from the Sloan Digital Sky Survey data, with the aim of reproducing their observed mass-metallicity relation. Methods: We derived how the two populations of star-forming and passive galaxies differ in their particular distribution of ages, formation timescales, infall masses, and mass loading factors. Results: We find that the local passive galaxies are, on average, older and assembled on shorter typical timescales than the local star-forming galaxies; on the other hand, the star-forming galaxies with higher masses generally show older ages and longer typical formation timescales than star-forming galaxies with lower masses. The local star-forming galaxies experience stronger galactic winds than the passive galaxy population. Exploring the effect of assuming different initial mass functions in our model, we show that to reproduce the observed mass-metallicity relation, stronger winds are required if the initial mass function is top-heavy. Finally, our analytical models predict the assumed sample of local galaxies to lie on a tight surface in the 3D space defined by stellar metallicity, star formation rate, and stellar mass, in agreement with the well-known fundamental relation obtained by adopting the gas-phase metallicity. Conclusions: By using a new analytical model of chemical evolution, we characterize an ensemble of SDSS galaxies in terms of their infall timescales, infall masses, and mass loading factors. 
Local passive galaxies are, on average, older and assembled on shorter typical timescales than the local star-forming galaxies. Moreover, the local star-forming galaxies show stronger galactic winds than the passive galaxy population. Finally, we find that the fundamental relation between metallicity, mass, and star formation rate for these local galaxies is still valid when adopting the average galaxy stellar metallicity.
The generation of criteria for selecting analytical tools for landscape management
Marilyn Duffey-Armstrong
1979-01-01
This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by first establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...
Nontrivial thermodynamics in 't Hooft's large-N limit
NASA Astrophysics Data System (ADS)
Cubero, Axel Cortés
2015-05-01
We study the finite volume/temperature correlation functions of the (1 +1 )-dimensional SU (N ) principal chiral sigma model in the planar limit. The exact S-matrix of the sigma model is known to simplify drastically at large N , and this leads to trivial thermodynamic Bethe ansatz (TBA) equations. The partition function, if derived using the TBA, can be shown to be that of free particles. We show that the correlation functions and expectation values of operators at finite volume/temperature are not those of the free theory, and that the TBA does not give enough information to calculate them. Our analysis is done using the Leclair-Mussardo formula for finite-volume correlators, and knowledge of the exact infinite-volume form factors. We present analytical results for the one-point function of the energy-momentum tensor, and the two-point function of the renormalized field operator. The results for the energy-momentum tensor can be used to define a nontrivial partition function.
Sarma, Dominik; Gawlitza, Kornelia; Rurack, Knut
2016-04-19
The need for rapid and high-throughput screening in analytical laboratories has led to significant growth in interest in suspension array technologies (SATs), especially with regard to cytometric assays targeting a low to medium number of analytes. Such SAT or bead-based assays rely on spherical objects that constitute the analytical platform. Usually, functionalized polymer or silica (SiO2) microbeads are used, each of which has distinct advantages and drawbacks. In this paper, we present a straightforward synthetic route to highly monodisperse SiO2-coated polystyrene core-shell (CS) beads for SAT with controllable architectures from smooth to raspberry- and multilayer-like shells by varying the molecular weight of poly(vinylpyrrolidone) (PVP), which was used as the stabilizer of the cores. The combination of an organic polymer core and a structurally controlled inorganic SiO2 shell in one hybrid particle holds great promise for flexible next-generation design of the spherical platform. The particles were characterized by electron microscopy (SEM, T-SEM, and TEM), thermogravimetry, flow cytometry, and nitrogen adsorption/desorption, offering comprehensive information on the composition, size, structure, and surface area. All particles show ideal cytometric detection patterns and facile handling due to the hybrid structure. The beads are endowed with straightforward modification possibilities through the defined SiO2 shells. We successfully implemented the particles in fluorometric SAT model assays, illustrating the benefits of a tailored surface area which is readily available for small-molecule anchoring. Very promising assay performance was shown for DNA hybridization assays, with quantification limits down to 8 fmol.
Gene Ontology-Based Analysis of Zebrafish Omics Data Using the Web Tool Comparative Gene Ontology.
Ebrahimie, Esmaeil; Fruzangohar, Mario; Moussavi Nik, Seyyed Hani; Newman, Morgan
2017-10-01
Gene Ontology (GO) analysis is a powerful tool in systems biology, which uses a defined nomenclature to annotate genes/proteins within three categories: "Molecular Function," "Biological Process," and "Cellular Component." GO analysis can assist in revealing functional mechanisms underlying observed patterns in transcriptomic, genomic, and proteomic data. The already extensive and increasing use of zebrafish for modeling genetic and other diseases highlights the need to develop a GO analytical tool for this organism. The web tool Comparative GO was originally developed for GO analysis of bacterial data in 2013 (www.comparativego.com). We have now upgraded and elaborated this web tool for analysis of zebrafish genetic data using GOs and annotations from the Gene Ontology Consortium.
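The statistical test used by the web tool is not specified in the abstract; a common choice for GO-term enrichment, shown here purely as an assumed illustration, is the one-sided hypergeometric (Fisher) test on term counts:

```python
from math import comb

def hypergeom_enrichment_p(n_genome, n_term, n_sample, n_hit):
    """One-sided hypergeometric p-value for GO-term enrichment.

    n_genome: annotated genes in the background set
    n_term:   background genes carrying the GO term
    n_sample: genes in the study list
    n_hit:    study-list genes carrying the term
    Returns P(X >= n_hit), summed over the hypergeometric tail.
    """
    total = comb(n_genome, n_sample)
    p = 0.0
    for k in range(n_hit, min(n_term, n_sample) + 1):
        p += comb(n_term, k) * comb(n_genome - n_term, n_sample - k) / total
    return p

# Hypothetical counts: 20000 background genes, 200 annotated with the term,
# and a 100-gene study list containing 10 of them (10x over expectation).
p = hypergeom_enrichment_p(20000, 200, 100, 10)
print(p < 0.001)  # strong enrichment: True
```

With an expected count of one hit under the null, observing ten is overwhelmingly significant; in practice a multiple-testing correction across all GO terms would follow.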
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno-Bote, Ruben; Parga, Nestor; Center for Theoretical Neuroscience, Center for Neurobiology and Behavior, Columbia University, New York 10032-2695
2006-01-20
An analytical description of the response properties of simple but realistic neuron models in the presence of noise is still lacking. We determine completely up to the second order the firing statistics of a single and a pair of leaky integrate-and-fire neurons receiving some common slowly filtered white noise. In particular, the auto- and cross-correlation functions of the output spike trains of pairs of cells are obtained from an improvement of the adiabatic approximation introduced previously by Moreno-Bote and Parga [Phys. Rev. Lett. 92, 028102 (2004)]. These two functions define the firing variability and firing synchronization between neurons, and are of much importance for understanding neuron communication.
Optimal consensus algorithm integrated with obstacle avoidance
NASA Astrophysics Data System (ADS)
Wang, Jianan; Xin, Ming
2013-01-01
This article proposes a new consensus algorithm for the networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.
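The paper's inverse-optimal construction is not reproduced here, but the behavior it targets can be sketched with a generic single-integrator consensus law plus a simple repulsive obstacle term; all gains and geometry below are illustrative assumptions, not the article's control law:

```python
import math

def simulate(positions, obstacle, neighbors, steps=4000, dt=0.01,
             k_c=1.0, k_o=0.5, r_safe=0.5):
    """Single-integrator consensus with a repulsive obstacle term.

    Each agent i moves along k_c * sum_j (x_j - x_i) over its neighbors
    (consensus), plus a gradient-type repulsion that activates inside
    radius r_safe of the obstacle. A generic sketch only.
    """
    pts = [list(p) for p in positions]
    for _ in range(steps):
        new = []
        for i, p in enumerate(pts):
            vx = vy = 0.0
            for j in neighbors[i]:          # consensus over the topology
                vx += k_c * (pts[j][0] - p[0])
                vy += k_c * (pts[j][1] - p[1])
            dx, dy = p[0] - obstacle[0], p[1] - obstacle[1]
            d = math.hypot(dx, dy)
            if 1e-9 < d < r_safe:           # repulsion near the obstacle
                push = k_o * (1.0 / d - 1.0 / r_safe) / d ** 2
                vx += push * dx
                vy += push * dy
            new.append([p[0] + vx * dt, p[1] + vy * dt])
        pts = new                           # synchronous update
    return pts

# Two agents approach each other past an obstacle at the origin.
final = simulate([(-2.0, 0.3), (2.0, 0.3)], (0.0, 0.0), {0: [1], 1: [0]})
gap = math.hypot(final[0][0] - final[1][0], final[0][1] - final[1][1])
print(gap < 0.1)  # consensus is (approximately) reached: True
```

The repulsion deflects the agents around the obstacle, after which the consensus term closes the remaining gap exponentially, mirroring the combined objectives described above.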
New hybrid voxelized/analytical primitive in Monte Carlo simulations for medical applications
NASA Astrophysics Data System (ADS)
Bert, Julien; Lemaréchal, Yannick; Visvikis, Dimitris
2016-05-01
Monte Carlo simulations (MCS) applied in particle physics play a key role in medical imaging and particle therapy. In such simulations, particles are transported through voxelized phantoms derived predominantly from patient CT images. However, such a voxelized object representation limits the incorporation of fine elements, such as artificial implants from CAD modeling or anatomical and functional details extracted from other imaging modalities. In this work we propose a new hYbrid Voxelized/ANalytical primitive (YVAN) that combines both voxelized and analytical object descriptions within the same MCS, without the need to simultaneously run two parallel simulations, which is the current gold-standard methodology. Given that YVAN is simply a new primitive object, it does not require any modifications of the underlying MC navigation code. The new proposed primitive was assessed through a first simple MCS. Results from the YVAN primitive were compared against an MCS using a pure analytical geometry and the layered mass geometry concept. A perfect agreement was found between these simulations, leading to the conclusion that the new hybrid primitive is able to accurately and efficiently handle phantoms defined by a mixture of voxelized and analytical objects. In addition, two application-based evaluation studies in coronary angiography and intra-operative radiotherapy showed that the use of YVAN was 6.5% and 12.2% faster than the layered mass geometry method, respectively, without any associated loss of accuracy. However, the simplification advantages and differences in computational time improvements obtained with YVAN depend on the relative proportion of the analytical and voxelized structures used in the simulation as well as the size and number of triangles used in the description of the analytical object meshes.
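A minimal sketch of the hybrid idea, assuming a simple material-lookup interface rather than the actual MC navigation code, is a primitive in which an analytic object overrides the voxelized background wherever the two overlap:

```python
import math

class HybridPrimitive:
    """Voxel-grid background with an analytic sphere overriding the voxels
    it overlaps -- an illustrative sketch of the hybrid concept, not the
    authors' YVAN implementation."""

    def __init__(self, grid, voxel_size, center, radius, material):
        self.grid = grid          # nested lists: grid[i][j][k] = material id
        self.h = voxel_size
        self.center = center
        self.radius = radius
        self.material = material

    def material_at(self, p):
        # The analytic object takes precedence over the voxelized background,
        # so fine implants need not be rasterized into the voxel grid.
        if math.dist(p, self.center) <= self.radius:
            return self.material
        i, j, k = (int(p[axis] // self.h) for axis in range(3))
        return self.grid[i][j][k]

# A 2x2x2 water phantom (material 0) with a radius-0.3 analytic implant (7).
phantom = HybridPrimitive([[[0, 0], [0, 0]], [[0, 0], [0, 0]]],
                          1.0, (1.0, 1.0, 1.0), 0.3, 7)
print(phantom.material_at((1.0, 1.0, 1.0)), phantom.material_at((0.2, 0.2, 0.2)))
```

A real transport code would also need distance-to-boundary queries against both representations, but the lookup above captures why no second parallel simulation is required.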
NASA Astrophysics Data System (ADS)
Wollocko, Arthur; Danczyk, Jennifer; Farry, Michael; Jenkins, Michael; Voshell, Martin
2015-05-01
The proliferation of sensor technologies continues to impact Intelligence Analysis (IA) work domains. A historical procurement focus on sensor platform development and acquisition has resulted in increasingly advanced collection systems; however, such systems often produce classic data-overload conditions by placing increased burdens on already overtaxed human operators and analysts. Support technologies and improved interfaces have begun to emerge to ease that burden, but these often focus on single modalities or sensor platforms rather than underlying operator and analyst support needs, resulting in systems that do not adequately leverage users' natural attentional competencies, unique skills, and training. One particular reason why emerging support tools often fail is the gap between the functions military applications provide and the functions and capabilities afforded by the cutting-edge technology employed daily by modern knowledge workers, who are increasingly "digitally native." With the entry of Generation Y into these workplaces, "net generation" analysts, who are familiar with socially driven platforms that excel at giving users insight into large data sets while keeping cognitive burdens at a minimum, are creating opportunities for enhanced workflows. By using these ubiquitous platforms, net generation analysts have trained skills in discovering new information socially, tracking trends among affinity groups, and disseminating information. However, these functions are currently under-supported by existing tools. In this paper, we describe how socially driven techniques can be contextualized to frame complex analytical threads throughout the IA process. This paper focuses specifically on collaborative support technology development efforts for a team of operators and analysts. 
Our work focuses on under-supported functions in current working environments, and identifies opportunities to improve a team's ability to discover new information and disseminate insightful analytic findings. We describe our Cognitive Systems Engineering approach to developing a novel collaborative enterprise IA system that combines modern collaboration tools with familiar contemporary social technologies. Our current findings detail specific cognitive and collaborative work support functions that defined the design requirements for a prototype analyst collaborative support environment.
Comparison of specificity and information for fuzzy domains
NASA Technical Reports Server (NTRS)
Ramer, Arthur
1992-01-01
This paper demonstrates how an integrated theory can be built on the foundation of possibility theory. Information and uncertainty have been considered in the 'fuzzy' literature since 1982. Our point of departure is the model proposed by Klir for the discrete case. It was elaborated axiomatically by Ramer, who also introduced the continuous model. Specificity as a numerical function was considered mostly within Dempster-Shafer evidence theory. An explicit definition was first given by Yager, who also introduced it in the context of possibility theory. The axiomatic approach and the continuous model have been developed very recently by Ramer and Yager, who also establish a close analytical correspondence between specificity and information. In the literature to date, specificity and uncertainty are defined only for discrete finite domains, with a sole exception. Our presentation removes these limitations: we define specificity measures for arbitrary measurable domains.
Quantitative Analysis of Fullerene Nanomaterials in Environmental Systems: A Critical Review
Isaacson, Carl W.; Kleber, Markus; Field, Jennifer A.
2009-01-01
The increasing production and use of fullerene nanomaterials has led to calls for more information regarding the potential impacts that releases of these materials may have on human and environmental health. Fullerene nanomaterials, which comprise both fullerenes and surface-functionalized fullerenes, are used in electronic, optic, medical, and cosmetic applications. Measuring fullerene nanomaterial concentrations in natural environments is difficult because they exhibit a duality of physical and chemical characteristics as they transition from hydrophobic to polar forms upon exposure to water. In aqueous environments, this is expressed as their tendency to initially (i) self-assemble into aggregates of appreciable size and hydrophobicity, and subsequently (ii) interact with the surrounding water molecules and other chemical constituents in natural environments, thereby acquiring negative surface charge. Fullerene nanomaterials may therefore defeat any single analytical method that is applied with the assumption that fullerenes have but one defining characteristic (e.g., hydrophobicity). First, we find that analytical procedures are needed to account for the potentially transitory nature of fullerenes in natural environments through the use of approaches that provide chemically explicit information, including molecular weight and the number and identity of surface functional groups. Second, we suggest that sensitive and mass-selective detection, such as that offered by mass spectrometry when combined with optimized extraction procedures, offers the greatest potential to achieve this goal. Third, we show that significant improvements in analytical rigor would result from an increased availability of well-characterized authentic standards, reference materials, and isotopically labeled internal standards. 
Finally, the benefits of quantitative and validated analytical methods for advancing the knowledge on fullerene occurrence, fate, and behavior are indicated. PMID:19764203
Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.
1990-01-01
The characteristics of a biological fluid sample containing an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of their absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte-containing sample so that there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the sample. The differential absorption causes intensity variations of the infrared energy as a function of wavelength, and the concentration of the unknown analyte is determined from the thus-derived intensity variations of the infrared energy as a function of wavelength from the model absorption-versus-wavelength function.
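The patent's calibration procedure is not spelled out in the abstract; as a generic stand-in, the sketch below fits a two-wavelength linear (Beer's-law-style) calibration by least squares from known samples and applies it to an unknown. All numbers are synthetic:

```python
def fit_two_wavelength(samples):
    """Least-squares calibration c ~ w1*A1 + w2*A2 from known samples.

    samples: list of ((A1, A2), c) pairs of absorbances and concentration.
    Solves the 2x2 normal equations directly (a generic linear-calibration
    sketch in the spirit of the patent, not its actual procedure).
    """
    s11 = s12 = s22 = b1 = b2 = 0.0
    for (a1, a2), c in samples:
        s11 += a1 * a1
        s12 += a1 * a2
        s22 += a2 * a2
        b1 += a1 * c
        b2 += a2 * c
    det = s11 * s22 - s12 * s12
    w1 = (b1 * s22 - b2 * s12) / det
    w2 = (b2 * s11 - b1 * s12) / det
    return w1, w2

# Synthetic data generated exactly from c = 2*A1 + 0.5*A2.
known = [((0.1, 0.4), 0.4), ((0.3, 0.2), 0.7), ((0.5, 0.1), 1.05)]
w1, w2 = fit_two_wavelength(known)

unknown_abs = (0.2, 0.3)  # absorbances of the "unknown" sample
print(round(w1 * unknown_abs[0] + w2 * unknown_abs[1], 3))  # 0.55
```

Real calibrations of this kind use many wavelengths and multivariate methods (e.g., PLS) rather than two-channel normal equations, but the model-then-predict structure is the same.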
Specialized data analysis of SSME and advanced propulsion system vibration measurements
NASA Technical Reports Server (NTRS)
Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi
1993-01-01
The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. This study was to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.
Decomposing Oncogenic Transcriptional Signatures to Generate Maps of Divergent Cellular States.
Kim, Jong Wook; Abudayyeh, Omar O; Yeerna, Huwate; Yeang, Chen-Hsiang; Stewart, Michelle; Jenkins, Russell W; Kitajima, Shunsuke; Konieczkowski, David J; Medetgul-Ernar, Kate; Cavazos, Taylor; Mah, Clarence; Ting, Stephanie; Van Allen, Eliezer M; Cohen, Ofir; Mcdermott, John; Damato, Emily; Aguirre, Andrew J; Liang, Jonathan; Liberzon, Arthur; Alexe, Gabriella; Doench, John; Ghandi, Mahmoud; Vazquez, Francisca; Weir, Barbara A; Tsherniak, Aviad; Subramanian, Aravind; Meneses-Cime, Karina; Park, Jason; Clemons, Paul; Garraway, Levi A; Thomas, David; Boehm, Jesse S; Barbie, David A; Hahn, William C; Mesirov, Jill P; Tamayo, Pablo
2017-08-23
The systematic sequencing of the cancer genome has led to the identification of numerous genetic alterations in cancer. However, a deeper understanding of the functional consequences of these alterations is necessary to guide appropriate therapeutic strategies. Here, we describe Onco-GPS (OncoGenic Positioning System), a data-driven analysis framework to organize individual tumor samples with shared oncogenic alterations onto a reference map defined by their underlying cellular states. We applied the methodology to the RAS pathway and identified nine distinct components that reflect transcriptional activities downstream of RAS, and defined several functional states whose patterns of transcriptional component activation associate with genomic hallmarks and response to genetic and pharmacological perturbations. These results show that the Onco-GPS is an effective approach to explore the complex landscape of oncogenic cellular states across cancers, and an analytic framework to summarize knowledge, establish relationships, and generate more effective disease models for research or as part of individualized precision medicine paradigms. Copyright © 2017 Elsevier Inc. All rights reserved.
Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.
Lo, Y C; Armbruster, David A
2012-04-01
Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Historical literature citation intervals are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP evaluator software program, which is based on the CLSI/IFCC C28-A guideline, and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for genders combined. Gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adapt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique based on a method comparison study.
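The 95% central range can be estimated non-parametrically from ranked donor results. A minimal sketch follows; the rank formula is the common CLSI-style estimate with linear interpolation, and the evenly spread data stand in for real donor measurements:

```python
def reference_interval(values, central=0.95):
    """Non-parametric central reference interval from ranked results.

    Uses the rank estimate r = p * (n + 1) with linear interpolation
    between order statistics (the common CLSI-style approach, which is
    why a minimum of ~120 reference individuals is recommended).
    """
    xs = sorted(values)
    n = len(xs)

    def percentile(p):
        r = p * (n + 1)               # 1-based fractional rank
        k = int(r)
        if k < 1:
            return xs[0]
        if k >= n:
            return xs[-1]
        return xs[k - 1] + (r - k) * (xs[k] - xs[k - 1])

    tail = (1.0 - central) / 2.0
    return percentile(tail), percentile(1.0 - tail)

# illustrative stand-in for 120 donor results
data = [i / 119 for i in range(120)]
ri = reference_interval(data)
print(ri)
```

With exactly 120 reference individuals, the 2.5th and 97.5th percentile ranks fall near the 3rd and 118th ordered observations, which is why the IFCC minimum of 120 is the smallest sample that brackets both limits with observed data.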
NASA Astrophysics Data System (ADS)
Obracaj, Piotr; Fabianowski, Dariusz
2017-10-01
Adapting historic buildings for public use requires resolving many complex and often conflicting expectations of future users. This mainly concerns the building's function, which encompasses construction, technology and aesthetic issues, and is completed in each case by the appropriate protection of its particular historic values. The procedure leading to the expected solution is a multicriteria one, usually difficult to define precisely and requiring considerable design experience. An innovative approach was used for the analysis: the modified EA FAHP (Extent Analysis Fuzzy Analytic Hierarchy Process) method of Chang, a multicriteria analysis for the assessment of complex functional and spatial issues. The selection of the optimal spatial form of an adapted historic building intended as a multi-functional public facility was analysed. The assumed functional flexibility covered education, conferences, and chamber performances such as drama and concerts in different stage-audience layouts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas
The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. Additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.
A differential delay equation arising from the sieve of Eratosthenes
NASA Astrophysics Data System (ADS)
Cheer, A. Y.; Goldston, D. A.
1990-07-01
The differential delay equation defined by ω (u) = 1/u for 1 ≤ u ≤ 2 and (uω (u))' = ω (u - 1) for u ≥ 2 was introduced by Buchstab in connection with an asymptotic formula for the number of uncanceled terms in the sieve of Eratosthenes. Maier has recently used this result to show there is unexpected irregularity in the distribution of primes in short intervals. The function ω (u) is studied in this paper using numerical and analytical techniques. The results are applied to give some numerical constants in Maier's theorem.
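The initial-value structure above, ω(u) = 1/u on [1, 2] and (uω(u))' = ω(u − 1) for u ≥ 2, can be integrated numerically by marching forward, since the lagged value ω(u − 1) is always already known. A minimal sketch (the step size and trapezoidal quadrature are implementation choices, not part of Buchstab's formulation):

```python
import math

def buchstab_omega(u_max=10.0, h=1e-3):
    """March Buchstab's delay equation forward on a uniform grid.

    omega(u) = 1/u on [1, 2]; for u >= 2, (u*omega(u))' = omega(u - 1),
    so u*omega(u) = 2*omega(2) + integral_2^u omega(t - 1) dt.
    """
    n = int(round((u_max - 1.0) / h))
    u = [1.0 + i * h for i in range(n + 1)]
    w = [1.0 / ui if ui <= 2.0 + 1e-12 else 0.0 for ui in u]
    i2 = int(round(1.0 / h))      # grid index of u = 2
    lag = i2                      # index shift corresponding to u - 1
    F = 2.0 * w[i2]               # running value of u*omega(u); equals 1 at u = 2
    for i in range(i2 + 1, n + 1):
        F += 0.5 * h * (w[i - 1 - lag] + w[i - lag])   # trapezoid step
        w[i] = F / u[i]
    return u, w

u, w = buchstab_omega()
# omega(u) converges rapidly to e^(-gamma) ~ 0.5615 (gamma: Euler's constant)
print(round(w[-1], 4))
```

The damped oscillation of ω(u) about e^(−γ) is precisely the behavior Maier exploited to exhibit irregularities in the distribution of primes in short intervals.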
The technological raw material heating furnaces operation efficiency improving issue
NASA Astrophysics Data System (ADS)
Paramonov, A. M.
2017-08-01
This paper considers improving the efficiency of fuel oil use in technological raw-material heating furnaces by intensifying its combustion. The technical and economic optimization problem of heating the fuel oil before combustion is solved. A method and algorithm were developed for analytically defining the optimal fuel oil preheating temperature, accounting for the correlation of the thermal and operating parameters with the discounted costs of the heating furnace. The resulting optimization functional achieves the required thermal performance of the furnace at minimum discounted cost. The research results confirm the expediency of the proposed solutions.
Beyond the bulk: disclosing the life of single microbial cells
Rosenthal, Katrin; Oehling, Verena
2017-01-01
Abstract Microbial single cell analysis has led to discoveries that are beyond what can be resolved with population-based studies. It provides a pristine view of the mechanisms that organize cellular physiology, unbiased by population heterogeneity or uncontrollable environmental impacts. A holistic description of cellular functions at the single cell level requires analytical concepts beyond the miniaturization of existing technologies, defined but uncontrolled by the biological system itself. This review provides an overview of the latest advances in single cell technologies and demonstrates their potential. Opportunities and limitations of single cell microbiology are discussed using selected application-related examples. PMID:29029257
Diffraction of Nondiverging Bessel Beams by Fork-Shaped and Rectilinear Grating
NASA Astrophysics Data System (ADS)
Janicijevic, Ljiljana; Topuzoski, Suzana
2007-04-01
We present an investigation of the Fresnel diffraction of Bessel beams, propagating as nondiverging within a distance Ln, with or without phase singularities, by rectilinear and fork-shaped gratings. The common general transmission function of these gratings is defined and specialized for three different cases: binary amplitude gratings, amplitude holograms and their phase versions. Solving the Fresnel diffraction integral in cylindrical coordinates, we obtain analytical expressions for the diffracted wave amplitude for all types of proposed gratings, and draw conclusions about the existence of phase singularities and the corresponding topological charges in the beams of different diffraction orders created by the gratings.
The path integral on the pseudosphere
NASA Astrophysics Data System (ADS)
Grosche, C.; Steiner, F.
1988-02-01
A rigorous path integral treatment for the d-dimensional pseudosphere Λd-1, a Riemannian manifold of constant negative curvature, is presented. The path integral formulation is based on a canonical approach using Weyl-ordering and the Hamiltonian path integral defined on midpoints. The time-dependent and energy-dependent Feynman kernels take different forms in the even- and odd-dimensional cases. The special case of the three-dimensional pseudosphere, which is analytically equivalent to the Poincaré upper half plane, the Poincaré disc, and the hyperbolic strip, is discussed in detail including the energy spectrum and the normalised wave-functions.
DOT National Transportation Integrated Search
1983-09-01
This report describes analytical studies carried out to define the relationship between track parameters and safety from derailment. Problematic track scenarios are identified reflecting known accident data. Vehicle response is investigated in the 10...
NASA Astrophysics Data System (ADS)
Nagar, Alessandro; Akcay, Sarp
2012-02-01
We propose, within the effective-one-body approach, a new, resummed analytical representation of the gravitational-wave energy flux absorbed by a system of two circularized (nonspinning) black holes. This expression is well-behaved in the strong-field, fast-motion regime, notably up to the effective-one-body-defined last unstable orbit. Building conceptually upon the procedure adopted to resum the multipolar asymptotic energy flux, we introduce a multiplicative decomposition of the multipolar absorbed flux composed of three factors: (i) the leading-order contribution, (ii) an “effective source” and (iii) a new residual amplitude correction (ρ˜ℓmH)2ℓ. In the test-mass limit, we use a frequency-domain perturbative approach to accurately compute numerically the horizon-absorbed fluxes along a sequence of stable and unstable circular orbits, and we extract from them the functions ρ˜ℓmH. These quantities are then fitted via rational functions. The resulting analytically represented test-mass knowledge is then suitably hybridized with lower-order analytical information that is valid for any mass ratio. This yields a resummed representation of the absorbed flux for a generic, circularized, nonspinning black-hole binary. Our result adds new information to the state-of-the-art calculation of the absorbed flux at fractional 5 post-Newtonian order [S. Taylor and E. Poisson, Phys. Rev. D 78, 084016 (2008)], which is recovered in the weak-field limit approximation by construction.
Wood, Paul L
2014-01-01
Metabolomics research has the potential to provide biomarkers for the detection of disease, for subtyping complex disease populations, for monitoring disease progression and therapy, and for defining new molecular targets for therapeutic intervention. These potentials are far from being realized because of a number of technical, conceptual, financial, and bioinformatics issues. Mass spectrometry provides analytical platforms that address the technical barriers to success in metabolomics research; however, the limited commercial availability of analytical and stable isotope standards has created a bottleneck for the absolute quantitation of a number of metabolites. Conceptual and financial factors contribute to the generation of statistically under-powered clinical studies, whereas bioinformatics issues result in the publication of a large number of unidentified metabolites. The path forward in this field involves targeted metabolomics analyses of large control and patient populations to define both the normal range of a defined metabolite and the potential heterogeneity (eg, bimodal) in complex patient populations. This approach requires that metabolomics research groups, in addition to developing a number of analytical platforms, build sufficient chemistry resources to supply the analytical standards required for absolute metabolite quantitation. Examples of metabolomics evaluations of sulfur amino-acid metabolism in psychiatry, neurology, and neuro-oncology and of lipidomics in neurology will be reviewed. PMID:23842599
NASA Technical Reports Server (NTRS)
Sawdy, D. T.; Beckemeyer, R. J.; Patterson, J. D.
1976-01-01
Results are presented from detailed analytical studies made to define methods for obtaining improved multisegment lining performance by taking advantage of relative placement of each lining segment. Properly phased liner segments reflect and spatially redistribute the incident acoustic energy and thus provide additional attenuation. A mathematical model was developed for rectangular ducts with uniform mean flow. Segmented acoustic fields were represented by duct eigenfunction expansions, and mode-matching was used to ensure continuity of the total field. Parametric studies were performed to identify attenuation mechanisms and define preliminary liner configurations. An optimization procedure was used to determine optimum liner impedance values for a given total lining length, Mach number, and incident modal distribution. Optimal segmented liners are presented and it is shown that, provided the sound source is well-defined and flow environment is known, conventional infinite duct optimum attenuation rates can be improved. To confirm these results, an experimental program was conducted in a laboratory test facility. The measured data are presented in the form of analytical-experimental correlations. Excellent agreement between theory and experiment verifies and substantiates the analytical prediction techniques. The results indicate that phased liners may be of immediate benefit in the development of improved aircraft exhaust duct noise suppressors.
Tensor Minkowski Functionals for random fields on the sphere
NASA Astrophysics Data System (ADS)
Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom
2017-12-01
We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 PLANCK data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.
Nano-Enabled Approaches to Chemical Imaging in Biosystems
Retterer, Scott T.; Morrell-Falvey, Jennifer L.; Doktycz, Mitchel John
2018-02-28
Understanding and predicting how biosystems function require knowledge about the dynamic physicochemical environments with which they interact and alter by their presence. Yet, identifying specific components, tracking the dynamics of the system, and monitoring local environmental conditions without disrupting biosystem function present significant challenges for analytical measurements. Nanomaterials, by their very size and nature, can act as probes and interfaces to biosystems and offer solutions to some of these challenges. At the nanoscale, material properties emerge that can be exploited for localizing biomolecules and making chemical measurements at cellular and subcellular scales. Here, we review advances in chemical imaging enabled by nanoscale structures, in the use of nanoparticles as chemical and environmental probes, and in the development of micro- and nanoscale fluidic devices to define and manipulate local environments and facilitate chemical measurements of complex biosystems. As a result, integration of these nano-enabled methods will lead to an unprecedented understanding of biosystem function.
NASA Astrophysics Data System (ADS)
Ahmadov, A. I.; Naeem, Maria; Qocayeva, M. V.; Tarverdiyeva, V. A.
2018-01-01
In this paper, the bound-state solution of the modified radial Schrödinger equation is obtained for the Manning-Rosen plus Hulthén potential by using a newly developed scheme to overcome the centrifugal part. The energy eigenvalues and corresponding radial wave functions are defined for any l≠0 angular momentum case via the Nikiforov-Uvarov (NU) and supersymmetric quantum mechanics (SUSY QM) methods. Thanks to both methods, equivalent expressions are obtained for the energy eigenvalues, and the transformation between the two forms of the radial wave functions is presented. The energy levels and the corresponding normalized eigenfunctions are represented in terms of the Jacobi polynomials for arbitrary l states. A closed form of the normalization constant of the wave functions is also found. It is shown that the energy eigenvalues and eigenfunctions are sensitive to the radial quantum number nr and the orbital quantum number l.
Analyticity without Differentiability
ERIC Educational Resources Information Center
Kirillova, Evgenia; Spindler, Karlheinz
2008-01-01
In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…
NASA Astrophysics Data System (ADS)
Donohue, Randall; Yang, Yuting; McVicar, Tim; Roderick, Michael
2016-04-01
A fundamental question in climate and ecosystem science is "how does climate regulate the land surface carbon budget?" To better answer that question, here we develop an analytical model for estimating mean annual terrestrial gross primary productivity (GPP), which is the largest carbon flux over land, based on a rate-limitation framework. Actual GPP (climatological mean from 1982 to 2010) is calculated as a function of the balance between two GPP potentials defined by the climate (i.e., precipitation and solar radiation) and a third parameter that encodes other environmental variables and modifies the GPP-climate relationship. The developed model was tested at three spatial scales using different GPP sources, i.e., (1) observed GPP from 94 flux-sites, (2) modelled GPP (using the model-tree-ensemble approach) at 48654 (0.5 degree) grid-cells and (3) at 32 large catchments across the globe. Results show that the proposed model could account for the spatial GPP patterns, with a root-mean-square error of 0.70, 0.65 and 0.3 g C m-2 d-1 and R2 of 0.79, 0.92 and 0.97 for the flux-site, grid-cell and catchment scales, respectively. This analytical GPP model shares a similar form with the Budyko hydroclimatological model, which opens the possibility of a general analytical framework to analyze the linked carbon-water-energy cycles.
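The abstract notes that the model shares a form with the Budyko hydroclimatological framework but does not state the equation. Purely as an illustration, a Budyko-type (Choudhury-style) balance of two limiting potentials is sketched below; the functional form and the shape parameter n are assumptions, not the paper's model:

```python
def budyko_type_gpp(gpp_p, gpp_r, n=2.0):
    """Illustrative Budyko-type balance of two rate limits.

    gpp_p: precipitation-limited GPP potential
    gpp_r: radiation-limited GPP potential
    n:     assumed shape parameter standing in for other
           environmental modifiers of the GPP-climate relationship
    (Choudhury-style form, assumed here for illustration only.)
    """
    return gpp_p * gpp_r / (gpp_p ** n + gpp_r ** n) ** (1.0 / n)

# the estimate never exceeds either limiting potential
print(budyko_type_gpp(3.0, 5.0))
```

A mean function of this kind approaches the smaller potential when the two limits are very unequal and sits below both when they are comparable, which is the qualitative behavior a rate-limitation framework requires.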
Psychosocial functioning in the context of diagnosis: assessment and theoretical issues.
Ro, Eunyoe; Clark, Lee Anna
2009-09-01
Psychosocial functioning is an important focus of attention in the revision of the Diagnostic and Statistical Manual of Mental Disorders. Researchers and clinicians are converging upon the opinion that psychometrically strong, comprehensive assessment of individuals' functioning is needed to characterize disorder fully. Also shared is the realization that existing theory and research in this domain have critical shortcomings. The authors urge that the field reexamine the empirical evidence and address theoretical issues to guide future development of the construct and its measurement. The authors first discuss several theoretical issues relevant to the conceptualization and assessment of functioning: (a) definitions of functioning, (b) the role of functioning in defining disorder, and (c) understanding functioning within environmental contexts. The authors then present data regarding empirical domains of psychosocial functioning and their interrelations. Self-reported data on multiple domains of psychosocial functioning were collected from 429 participants. Factor-analytic results (promax rotation) suggest a 4-factor structure of psychosocial functioning: Well-Being, Basic Functioning, Self-Mastery, and Interpersonal and Social Relationships. Finally, the authors propose an integration of theory and empirical findings, which they believe will better incorporate psychosocial functioning into future diagnostic systems. Copyright 2009 APA, all rights reserved.
Retrieving cirrus microphysical properties from stellar aureoles
NASA Astrophysics Data System (ADS)
DeVore, J. G.; Kristl, J. A.; Rappaport, S. A.
2013-06-01
The aureoles around stars caused by thin cirrus limit nighttime measurement opportunities for ground-based astronomy, but can provide information on high-altitude ice crystals for climate research. In this paper we attempt to demonstrate quantitatively how this works. Aureole profiles can be followed out to ~0.2° from stars and ~0.5° from Jupiter. Interpretation of diffracted starlight is similar to that for sunlight, but emphasizes larger particles. Stellar diffraction profiles are very distinctive, typically being approximately flat out to a critical angle followed by gradually steepening power-law falloff with slope less steep than -3. Using the relationship between the phase function for diffraction and the average Fourier transform of the projected area of complex ice crystals, we show that defining particle size in terms of average projected area normal to the propagation direction of the starlight leads to a simple, analytic approximation representing large-particle diffraction that is nearly independent of crystal habit. A similar analytic approximation for the diffraction aureole allows it to be separated from the point spread function and the sky background. Multiple scattering is deconvolved using the Hankel transform leading to the diffraction phase function. Application of constrained numerical inversion to the phase function then yields a solution for the particle size distribution in the range between ~50 μm and ~400 μm. Stellar aureole measurements can provide one of the very few, as well as least expensive, methods for retrieving cirrus microphysical properties from ground-based observations.
NASA Technical Reports Server (NTRS)
Kempler, Steve; Mathews, Tiffany
2016-01-01
The continuum of ever-evolving data management systems affords great opportunities for enhancing knowledge and facilitating science research. To take advantage of these opportunities, it is essential to understand and develop methods that enable data relationships to be examined and the information to be manipulated. This presentation describes the efforts of the Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster to understand, define, and facilitate the implementation of ESDA to advance science research. Given the scarcity of published material on Earth science data analytics, the cluster has defined ESDA along with 10 goals to set the framework for a common understanding of the tools and techniques that are available, and those still needed, to support ESDA.
Flat-plate solar array project. Volume 6: Engineering sciences and reliability
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.; Smokler, M. I.
1986-01-01
The Flat-Plate Solar Array (FSA) Project activities directed at developing the engineering technology base required to achieve modules that meet the functional, safety, and reliability requirements of large scale terrestrial photovoltaic systems applications are reported. These activities included: (1) development of functional, safety, and reliability requirements for such applications; (2) development of the engineering analytical approaches, test techniques, and design solutions required to meet the requirements; (3) synthesis and procurement of candidate designs for test and evaluation; and (4) performance of extensive testing, evaluation, and failure analysis to define design shortfalls and, thus, areas requiring additional research and development. A summary of the approach and technical outcome of these activities is provided along with a complete bibliography of the published documentation covering the detailed accomplishments and technologies developed.
The analytical design of spectral measurements for multispectral remote sensor systems
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Landgrebe, D. A. (Principal Investigator)
1979-01-01
The author has identified the following significant results. In order to choose a design which will be optimal for the largest class of remote sensing problems, a method was developed which attempted to represent the spectral response function from a scene as accurately as possible. The performance of the overall recognition system was studied relative to the accuracy of the spectral representation. The spectral representation was only one of a set of five interrelated parameter categories which also included the spatial representation parameter, the signal to noise ratio, ancillary data, and information classes. The spectral response functions observed from a stratum were modeled as a stochastic process with a Gaussian probability measure. The criterion for spectral representation was defined by the minimum expected mean-square error.
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
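A NORMINV-style calculation can be reproduced outside Excel with Python's `statistics.NormalDist`. The sketch below assumes a simple model in which analytical error shifts and widens a standard-normal reference distribution; this linkage is our assumption for illustration, since the paper's exact formulas are not given in the abstract:

```python
from statistics import NormalDist

def fraction_outside(bias, imprecision):
    """Fraction of results falling outside a common 95% reference interval.

    Assumed model (ours, for illustration): the reference population is
    standard normal; analytical error shifts it by `bias` and inflates the
    spread to sqrt(1 + imprecision^2), both in units of the reference SD.
    NormalDist.inv_cdf plays the role of Excel's NORMINV.
    """
    nd = NormalDist()
    lo, hi = nd.inv_cdf(0.025), nd.inv_cdf(0.975)   # 95% central range
    shifted = NormalDist(bias, (1.0 + imprecision ** 2) ** 0.5)
    return shifted.cdf(lo) + (1.0 - shifted.cdf(hi))

# with no analytical error, 5% lie outside the limits by construction
print(fraction_outside(0.0, 0.0))
```

Scanning `bias` and `imprecision` for combinations that hold the outside fraction at a fixed target is the same kind of search the paper performs to map maximum allowable combinations.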
Cutting solid figures by plane - analytical solution and spreadsheet implementation
NASA Astrophysics Data System (ADS)
Benacka, Jan
2012-07-01
In some secondary mathematics curricula, there is a topic called Stereometry that deals with investigating the position and finding the intersection, angle, and distance of lines and planes defined within a prism or pyramid. A coordinate system is not used. The metric tasks are solved using Pythagoras' theorem, trigonometric functions, and the sine and cosine rules. The basic problem is to find the section of the figure by a plane that is defined by three points related to the figure. In this article, a formula is derived that gives the positions of the intersection points of such a plane and the figure edges, that is, the vertices of the section polygon. Spreadsheet implementations of the formula for cuboids and right rectangular pyramids are presented. The user can check his/her graphical solution, or proceed if he/she is not able to complete the section.
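A hedged sketch of such a section formula (our own parametrization, not the article's spreadsheet layout): each edge (a, b) meets the plane through p0, p1, p2 at parameter t = n·(p0 − a) / n·(b − a), where n is the plane normal, and only 0 ≤ t ≤ 1 yields a section-polygon vertex.

```python
def sub(u, v): return [u[i] - v[i] for i in range(3)]
def dot(u, v): return sum(u[i] * v[i] for i in range(3))
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def section_points(vertices, edges, p0, p1, p2):
    """Vertices of the section polygon: intersections of the plane through
    p0, p1, p2 with the listed edges, kept when 0 <= t <= 1."""
    n = cross(sub(p1, p0), sub(p2, p0))  # plane normal
    pts = []
    for i, j in edges:
        a, b = vertices[i], vertices[j]
        d = dot(n, sub(b, a))
        if abs(d) < 1e-12:
            continue                      # edge parallel to the plane
        t = dot(n, sub(p0, a)) / d
        if 0.0 <= t <= 1.0:
            pts.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return pts

# unit cube: 8 vertices, 12 edges (vertex pairs differing in one coordinate)
V = [[x, y, z] for z in (0, 1) for y in (0, 1) for x in (0, 1)]
E = [(i, j) for i in range(8) for j in range(i + 1, 8)
     if sum(abs(V[i][k] - V[j][k]) for k in range(3)) == 1]
# a horizontal plane at mid-height cuts exactly the 4 vertical edges
cut = section_points(V, E, [0, 0, 0.5], [1, 0, 0.5], [0, 1, 0.5])
print(len(cut))  # -> 4
```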
Advanced Video Analysis Needs for Human Performance Evaluation
NASA Technical Reports Server (NTRS)
Campbell, Paul D.
1994-01-01
Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.
NASA Astrophysics Data System (ADS)
Wang, Dong
2018-05-01
Thanks to the great efforts made by Antoni (2006), spectral kurtosis has been recognized as a milestone for characterizing non-stationary signals, especially bearing fault signals. The main idea of spectral kurtosis is to use the fourth standardized moment, namely kurtosis, as a function of spectral frequency so as to indicate how repetitive transients caused by a bearing defect vary with frequency. Moreover, spectral kurtosis is defined based on an analytic bearing fault signal constructed from either a complex filter or Hilbert transform. On the other hand, another attractive work was reported by Borghesani et al. (2014) to mathematically reveal the relationship between the kurtosis of an analytical bearing fault signal and the square of the squared envelope spectrum of the analytical bearing fault signal for explaining spectral correlation for quantification of bearing fault signals. More interestingly, it was discovered that the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum corresponds to the raw 4th order moment. Inspired by the aforementioned works, in this paper, we mathematically show that: (1) spectral kurtosis can be decomposed into squared envelope and squared L2/L1 norm so that spectral kurtosis can be explained as spectral squared L2/L1 norm; (2) spectral L2/L1 norm is formally defined for characterizing bearing fault signals and its two geometrical explanations are made; (3) spectral L2/L1 norm is proportional to the square root of the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum; (4) some extensions of spectral L2/L1 norm for characterizing bearing fault signals are pointed out.
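The decomposition in point (1) can be illustrated numerically. Under the normalization kurtosis = N·Σe² / (Σe)² for the squared envelope e[n] = |x[n]|² (our convention for the sketch; the paper's exact normalization may differ), the statistic is literally the squared L2/L1 norm of the squared envelope scaled by the signal length:

```python
def spectral_stat(env2):
    """Kurtosis-style statistic of an analytic signal expressed through its
    squared envelope e[n] = |x[n]|^2:
        N * sum(e^2) / (sum(e))^2  =  N * (||e||_2 / ||e||_1)^2,
    i.e. the squared L2/L1 norm of the squared envelope, scaled by N."""
    n = len(env2)
    l1 = sum(env2)
    l2sq = sum(v * v for v in env2)
    return n * l2sq / (l1 * l1)

flat = [1.0] * 8           # constant envelope: no repetitive transients
spiky = [0.0] * 7 + [8.0]  # energy concentrated in a single transient
print(spectral_stat(flat), spectral_stat(spiky))  # -> 1.0 8.0
```

The constant envelope gives the baseline value 1, while the transient-dominated envelope scores much higher, which is the behavior spectral kurtosis exploits for bearing-fault detection.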
Contemporary Privacy Theory Contributions to Learning Analytics
ERIC Educational Resources Information Center
Heath, Jennifer
2014-01-01
With the continued adoption of learning analytics in higher education institutions, vast volumes of data are generated and "big data" related issues, including privacy, emerge. Privacy is an ill-defined concept and subject to various interpretations and perspectives, including those of philosophers, lawyers, and information systems…
USDA-ARS?s Scientific Manuscript database
The antibody is central to the performance of an ELISA, providing the basis of analyte selection and detection. It is the interaction of antibody with analyte under defined conditions that dictates the outcome of the ELISA, and deviations in those conditions will impact assay performance. The aim of...
Mining functionally relevant gene sets for analyzing physiologically novel clinical expression data.
Turcan, Sevin; Vetter, Douglas E; Maron, Jill L; Wei, Xintao; Slonim, Donna K
2011-01-01
Gene set analyses have become a standard approach for increasing the sensitivity of transcriptomic studies. However, analytical methods incorporating gene sets require the availability of pre-defined gene sets relevant to the underlying physiology being studied. For novel physiological problems, relevant gene sets may be unavailable or existing gene set databases may bias the results towards only the best-studied of the relevant biological processes. We describe a successful attempt to mine novel functional gene sets for translational projects where the underlying physiology is not necessarily well characterized in existing annotation databases. We choose targeted training data from public expression data repositories and define new criteria for selecting biclusters to serve as candidate gene sets. Many of the discovered gene sets show little or no enrichment for informative Gene Ontology terms or other functional annotation. However, we observe that such gene sets show coherent differential expression in new clinical test data sets, even if derived from different species, tissues, and disease states. We demonstrate the efficacy of this method on a human metabolic data set, where we discover novel, uncharacterized gene sets that are diagnostic of diabetes, and on additional data sets related to neuronal processes and human development. Our results suggest that our approach may be an efficient way to generate a collection of gene sets relevant to the analysis of data for novel clinical applications where existing functional annotation is relatively incomplete.
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed by clearly defined standard operating procedures. During evaluation of the methods, the major interest was in determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Howell, D. T.; Collins, I. K.; Hayes, C.
1979-01-01
Tabulated surface pressure data for a series of four forebodies which have analytically defined cross sections and which are based on a parabolic arc profile having a 20 deg half angle at the nose are presented without analysis. The first forebody has a circular cross section, and the second has a cross section which is an ellipse with an axis ratio of 2/1. The third has a cross section defined by a lobed analytic curve. The fourth forebody has cross sections which develop smoothly from circular at the pointed nose through the lobed analytic curve and back to circular at the aft end. The data generally cover angles of attack from -5 deg to 20 deg at angles of sideslip from 0 deg to 5 deg for Mach numbers of 1.70, 2.50, 3.95, and 4.50 at a constant Reynolds number.
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.
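A minimal sketch of the thresholding step the authors describe, assuming standard bivariate normal traits (the function name and the Monte Carlo approach are ours; the paper evaluates the corresponding probabilities analytically and conditions on genotype):

```python
import random

def joint_exceedance(rho, t1, t2, n=200_000, seed=1):
    """P(T1 > t1 and T2 > t2) for standard bivariate normal traits with
    correlation rho, estimated by Monte Carlo via the Cholesky
    construction y = rho*x + sqrt(1 - rho^2)*z."""
    rng = random.Random(seed)
    k = (1.0 - rho * rho) ** 0.5
    hits = 0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + k * rng.gauss(0.0, 1.0)
        if x > t1 and y > t2:
            hits += 1
    return hits / n

# independent traits, thresholds at the median: P = 0.5 * 0.5 = 0.25;
# positive pleiotropic correlation raises the joint "affected" probability
print(joint_exceedance(0.0, 0.0, 0.0), joint_exceedance(0.8, 0.0, 0.0))
```

Applying genotype-specific trait means before thresholding would turn this joint probability into a penetrance, from which the genotype-test and LTT power calculations proceed.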
Extremal Correlators in the Ads/cft Correspondence
NASA Astrophysics Data System (ADS)
D'Hoker, Eric; Freedman, Daniel Z.; Mathur, Samir D.; Matusis, Alec; Rastelli, Leonardo
The non-renormalization of the 3-point functions
NASA Technical Reports Server (NTRS)
Farassat, Fereidoun; Myers, Michael K.
2011-01-01
This paper is the first part of a three part tutorial on multidimensional generalized functions (GFs) and their applications in aeroacoustics and fluid mechanics. The subject is highly fascinating and essential in many areas of science and, in particular, wave propagation problems. In this tutorial, we strive to present rigorously and clearly the basic concepts and the tools that are needed to use GFs in applications effectively and with ease. We give many examples to help the readers in understanding the mathematical ideas presented here. The first part of the tutorial is on the basic concepts of GFs. Here we define GFs, their properties and some common operations on them. We define the important concept of generalized differentiation and then give some interesting elementary and advanced examples on Green's functions and wave propagation problems. Here, the analytic power of GFs in applications is demonstrated with ease and elegance. Part 2 of this tutorial is on the diverse applications of generalized derivatives (GDs). Part 3 is on generalized Fourier transformations and some more advanced topics. One goal of writing this tutorial is to convince readers that, because of their powerful operational properties, GFs are absolutely essential and useful in engineering and physics, particularly in aeroacoustics and fluid mechanics.
Natural learning in NLDA networks.
González, Ana; Dorronsoro, José R
2007-07-01
Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we will define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we will follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X,W)] of a certain random vector Z, and then defining I = E[Z(X,W)Z(X,W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA J criterion, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher per-iteration cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we will see analytically and numerically that the Hessian and information matrices are different.
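A toy sketch of the update just described, for a two-parameter model (names and the ridge term are ours): the information matrix is estimated as the mean outer product of per-sample gradient vectors z, and the natural step solves I·δ = ∇J with a hand-coded 2×2 inverse.

```python
def natural_gradient_step(z_samples, eta=0.1, eps=1e-6):
    """One natural-gradient step for a 2-parameter model: the plain gradient
    is the sample mean of per-example vectors z, the information matrix is
    I = E[z z^T] (a small ridge eps keeps it invertible), and the step is
    eta * I^{-1} * grad."""
    n = len(z_samples)
    g0 = sum(z[0] for z in z_samples) / n
    g1 = sum(z[1] for z in z_samples) / n
    a = sum(z[0] * z[0] for z in z_samples) / n + eps
    b = sum(z[0] * z[1] for z in z_samples) / n
    d = sum(z[1] * z[1] for z in z_samples) / n + eps
    det = a * d - b * b
    # delta = I^{-1} g via the closed-form 2x2 inverse, scaled by eta
    return (eta * ( d * g0 - b * g1) / det,
            eta * (-b * g0 + a * g1) / det)

# identical per-sample gradients along the first axis: the natural step
# rescales the raw gradient by the inverse information matrix
step = natural_gradient_step([(1.0, 0.0)] * 8)
```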
FAPRS Manual: Manual for the Functional Analytic Psychotherapy Rating Scale
ERIC Educational Resources Information Center
Callaghan, Glenn M.; Follette, William C.
2008-01-01
The Functional Analytic Psychotherapy Rating Scale (FAPRS) is a behavioral coding system designed to capture those essential client and therapist behaviors that occur during Functional Analytic Psychotherapy (FAP). The FAPRS manual presents the purpose and rules for documenting essential aspects of FAP. The FAPRS codes are exclusive and exhaustive…
The Nutritional Phenotype in the Age of Metabolomics
Zeisel, S. H.; Freake, H. C.; Bauman, D. E.; Bier, D. M.; Burrin, D. G.; German, J. B.; Klein, S.; Marquis, G. S.; Milner, J. A.; Pelto, G. H.; Rasmussen, K. M.
2008-01-01
The concept of the nutritional phenotype is proposed as a defined and integrated set of genetic, proteomic, metabolomic, functional, and behavioral factors that, when measured, form the basis for assessment of human nutritional status. The nutritional phenotype integrates the effects of diet on disease/wellness and is the quantitative indication of the paths by which genes and environment exert their effects on health. Advances in technology and in fundamental biological knowledge make it possible to define and measure the nutritional phenotype accurately in a cross section of individuals with various states of health and disease. This growing base of data and knowledge could serve as a resource for all scientific disciplines involved in human health. Nutritional sciences should be a prime mover in making key decisions that include: what environmental inputs (in addition to diet) are needed; what genes/proteins/metabolites should be measured; what end-point phenotypes should be included; and what informatics tools are available to ask nutritionally relevant questions. Nutrition should be the major discipline establishing how the elements of the nutritional phenotype vary as a function of diet. Nutritional sciences should also be instrumental in linking the elements that are responsive to diet with the functional outcomes in organisms that derive from them. As the first step in this initiative, a prioritized list of genomic, proteomic, and metabolomic as well as functional and behavioral measures that defines a practically useful subset of the nutritional phenotype for use in clinical and epidemiological investigations must be developed. From this list, analytic platforms must then be identified that are capable of delivering highly quantitative data on these endpoints. 
This conceptualization of a nutritional phenotype provides a concrete form and substance to the recognized future of nutritional sciences as a field addressing diet, integrated metabolism, and health. PMID:15987837
Multiplicity distributions of gluon and quark jets and a test of QCD analytic calculations
NASA Astrophysics Data System (ADS)
Gary, J. William
1999-03-01
Gluon jets are identified in e+e− hadronic annihilation events by tagging two quark jets in the same hemisphere of an event. The gluon jet is defined inclusively as all the particles in the opposite hemisphere. Gluon jets defined in this manner have a close correspondence to gluon jets as they are defined for analytic calculations, and are almost independent of a jet finding algorithm. The mean and first few higher moments of the gluon jet charged particle multiplicity distribution are compared to the analogous results found for light quark (uds) jets, also defined inclusively. Large differences are observed between the mean, skew and kurtosis values of the gluon and quark jets, but not between their dispersions. The cumulant factorial moments of the distributions are also measured, and are used to test the predictions of QCD analytic calculations. A calculation which includes next-to-next-to-leading order corrections and energy conservation is observed to provide a much improved description of the separated gluon and quark jet cumulant moments compared to a next-to-leading order calculation without energy conservation. There is good quantitative agreement between the data and calculations for the ratios of the cumulant moments between gluon and quark jets. The data sample used is the LEP-1 sample of the OPAL experiment at LEP.
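The normalized factorial moments underlying such cumulant analyses can be sketched generically (this is an illustration, not the experiment's analysis code). A convenient sanity check: for a Poisson multiplicity distribution, every normalized factorial moment F_q = ⟨n(n−1)...(n−q+1)⟩ / ⟨n⟩^q equals exactly 1, so deviations from 1 measure multiplicity correlations.

```python
import math

def normalized_factorial_moment(pmf, q, nmax=60):
    """F_q = <n(n-1)...(n-q+1)> / <n>^q for a discrete multiplicity
    distribution given by its probability mass function pmf(n)."""
    mean = sum(n * pmf(n) for n in range(nmax))
    raw = sum(pmf(n) * math.prod(n - i for i in range(q)) for n in range(nmax))
    return raw / mean ** q

lam = 5.0
poisson = lambda n: math.exp(-lam) * lam ** n / math.factorial(n)
for q in (2, 3, 4):
    print(q, round(normalized_factorial_moment(poisson, q), 6))  # -> 1.0 each
```

Cumulant moments would then be obtained from the F_q by the standard recursive relations; measured gluon- and quark-jet distributions give F_q ≠ 1, which is what the QCD calculations are tested against.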
Multiplicity distributions of gluon and quark jets and tests of QCD analytic predictions
NASA Astrophysics Data System (ADS)
OPAL Collaboration; Ackerstaff, K.; et al.
Gluon jets are identified in e+e− hadronic annihilation events by tagging two quark jets in the same hemisphere of an event. The gluon jet is defined inclusively as all the particles in the opposite hemisphere. Gluon jets defined in this manner have a close correspondence to gluon jets as they are defined for analytic calculations, and are almost independent of a jet finding algorithm. The charged particle multiplicity distribution of the gluon jets is presented, and is analyzed for its mean, dispersion, skew, and kurtosis values, and for its factorial and cumulant moments. The results are compared to the analogous results found for a sample of light quark (uds) jets, also defined inclusively. We observe differences between the mean, skew and kurtosis values of gluon and quark jets, but not between their dispersions. The cumulant moment results are compared to the predictions of QCD analytic calculations. A calculation which includes next-to-next-to-leading order corrections and energy conservation is observed to provide a much improved description of the data compared to a next-to-leading order calculation without energy conservation. There is agreement between the data and calculations for the ratios of the cumulant moments between gluon and quark jets.
CEDS Addresses: Rubric Elements
ERIC Educational Resources Information Center
US Department of Education, 2015
2015-01-01
Common Education Data Standards (CEDS) Version 4 introduced a common data vocabulary for defining rubrics in a data system. The CEDS elements support digital representations of both holistic and analytic rubrics. This document shares examples of holistic and analytic project rubrics, available CEDS Connections, and a logical model showing the…
BPS/CFT Correspondence III: Gauge Origami Partition Function and qq-Characters
NASA Astrophysics Data System (ADS)
Nekrasov, Nikita
2018-03-01
We study generalized gauge theories engineered by taking the low energy limit of the Dp branes wrapping X × T^{p−3}, with X a possibly singular surface in a Calabi-Yau fourfold Z. For toric Z and X the partition function can be computed by localization, making it a statistical mechanical model, called the gauge origami. The random variables are the ensembles of Young diagrams. The building block of the gauge origami is associated with a tetrahedron, whose edges are colored by vector spaces. We show the properly normalized partition function is an entire function of the Coulomb moduli, for generic values of the Ω-background parameters. The orbifold version of the theory defines the qq-character operators, with and without the surface defects. The analytic properties are the consequence of a relative compactness of the moduli spaces M(n⃗, k) of crossed and spiked instantons, demonstrated in "BPS/CFT correspondence II: instantons at crossroads, moduli and compactness theorem".
Does boundary quantum mechanics imply quantum mechanics in the bulk?
NASA Astrophysics Data System (ADS)
Kabat, Daniel; Lifschytz, Gilad
2018-03-01
Perturbative bulk reconstruction in AdS/CFT starts by representing a free bulk field ϕ^(0) as a smeared operator in the CFT. A series of 1/N corrections must be added to ϕ^(0) to represent an interacting bulk field ϕ. These corrections have been determined in the literature from several points of view. Here we develop a new perspective. We show that correlation functions involving ϕ^(0) suffer from ambiguities due to analytic continuation. As a result ϕ^(0) fails to be a well-defined linear operator in the CFT. This means bulk reconstruction can be understood as a procedure for building up well-defined operators in the CFT which thereby singles out the interacting field ϕ. We further propose that the difficulty with defining ϕ^(0) as a linear operator can be re-interpreted as a breakdown of associativity. Presumably ϕ^(0) can only be corrected to become an associative operator in perturbation theory. This suggests that quantum mechanics in the bulk is only valid in perturbation theory around a semiclassical bulk geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.
This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend and learn how their skills can contribute to advancing the state of the art of visual analytics.
Enactment controversies: a critical review of current debates.
Ivey, Gavin
2008-02-01
This critical review of the current disputes concerning countertransference enactment systematically outlines the various issues and the perspectives adopted by the relevant psychoanalytic authors. In the light of this, the 'common ground' hypothesis concerning the unifying influence of contemporary countertransference theory is challenged. While the existence of enactments, minimally defined as the analyst's inadvertent actualization of the patient's transference fantasies, is widely accepted, controversies regarding the specific scope, nature, prevalence, relationship to countertransference experience, impact on the analytic process, role played by the analyst's subjectivity, and the correct handling of enactments abound. Rather than taking a stand based on ideological allegiance to any particular psychoanalytic school or philosophical position, the author argues that the relative merits of contending perspectives are best evaluated with reference to close process scrutiny of the context, manifestation and impact of specific enactments on patients' intrapsychic functioning and the analytic relationship. A detailed account of an interpretative enactment provides a context for the author's position on these debates.
Subtracting infrared renormalons from Wilson coefficients: Uniqueness and power dependences on ΛQCD
NASA Astrophysics Data System (ADS)
Mishima, Go; Sumino, Yukinari; Takaura, Hiromasa
2017-06-01
In the context of operator product expansion (OPE) and using the large-β₀ approximation, we propose a method to define Wilson coefficients free from uncertainties due to IR renormalons. We first introduce a general observable X(Q²) with an explicit IR cutoff, and then we extract a genuine UV contribution X_UV as a cutoff-independent part. X_UV includes power corrections ∼(Λ_QCD²/Q²)ⁿ which are independent of renormalons. Using the integration-by-regions method, we observe that X_UV coincides with the leading Wilson coefficient in OPE and also clarify that the power corrections originate from the UV region. We examine the scheme dependence of X_UV and single out a specific scheme favorable in terms of analytical properties. Our method would be optimal with respect to systematicity, analyticity and stability. We test our formulation with the examples of the Adler function, the QCD force between Q and Q̄, and the R-ratio in e⁺e⁻ collision.
A Functional Analytic Approach to Group Psychotherapy
ERIC Educational Resources Information Center
Vandenberghe, Luc
2009-01-01
This article provides a particular view on the use of Functional Analytical Psychotherapy (FAP) in a group therapy format. This view is based on the author's experiences as a supervisor of Functional Analytical Psychotherapy Groups, including groups for women with depression and groups for chronic pain patients. The contexts in which this approach…
STABILITY OF FMRI STRIATAL RESPONSE TO ALCOHOL CUES: A HIERARCHICAL LINEAR MODELING APPROACH
Schacht, Joseph P.; Anton, Raymond F.; Randall, Patrick K.; Li, Xingbao; Henderson, Scott; Myrick, Hugh
2011-01-01
In functional magnetic resonance imaging (fMRI) studies of alcohol-dependent individuals, alcohol cues elicit activation of the ventral and dorsal aspects of the striatum (VS and DS), which are believed to underlie aspects of reward learning critical to the initiation and maintenance of alcohol dependence. Cue-elicited striatal activation may represent a biological substrate through which treatment efficacy may be measured. However, to be useful for this purpose, VS or DS activation must first demonstrate stability across time. Using hierarchical linear modeling (HLM), this study tested the stability of cue-elicited activation in anatomically and functionally defined regions of interest in bilateral VS and DS. Nine non-treatment-seeking alcohol-dependent participants twice completed an alcohol cue reactivity task during two fMRI scans separated by 14 days. HLM analyses demonstrated that, across all participants, alcohol cues elicited significant activation in each of the regions of interest. At the group level, these activations attenuated slightly between scans, but session-wise differences were not significant. Within-participants stability was best in the anatomically defined right VS and DS and in a functionally defined region that encompassed right caudate and putamen (intraclass correlation coefficients of .75, .81, and .76, respectively). Thus, within this small sample, alcohol cue-elicited fMRI activation had good reliability in the right striatum, though a larger sample is necessary to ensure generalizability and further evaluate stability. This study also demonstrates the utility of HLM analytic techniques for serial fMRI studies, in which separating within-participants variance (individual changes in activation) from between-participants factors (time or treatment) is critical. PMID:21316465
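The reliability statistic reported here can be sketched as a consistency-type intraclass correlation, ICC(3,1) = (BMS − EMS) / (BMS + (k−1)·EMS) from a two-way subject × session mean-squares decomposition (this formula choice is our assumption; the abstract does not specify the exact ICC variant used):

```python
def icc_consistency(scores):
    """Two-way mixed, single-measure, consistency ICC(3,1) for a table of
    scores[subject][session]: (BMS - EMS) / (BMS + (k-1)*EMS), where BMS is
    the between-subjects mean square and EMS the residual mean square."""
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(r) / k for r in scores]
    col_means = [sum(s[j] for s in scores) / n for j in range(k)]
    bms = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    sse = sum((scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    ems = sse / ((n - 1) * (k - 1))
    return (bms - ems) / (bms + (k - 1) * ems)

# perfectly reproducible activation across two scans gives ICC = 1
print(icc_consistency([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # -> 1.0
```

Session-to-session inconsistency inflates EMS and pulls the ICC below 1, which is how values such as the reported .75-.81 arise.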
Transonic buffet behavior of Northrop F-5A aircraft
NASA Technical Reports Server (NTRS)
Hwang, C.; Pi, W. S.
1974-01-01
Flight tests were performed on an F-5A aircraft to investigate the dynamic buffet pressure distribution on the wing surfaces and the responses during a series of transonic maneuvers called wind-up turns. The conditions under which the tests were conducted are defined. The fluctuating buffet pressure data on the right wing of the aircraft were acquired by miniaturized semiconductor-type pressure transducers flush mounted on the wing. Processing of the fluctuating pressures and responses included the generation of the auto- and cross-power spectra, and of the spatial correlation functions. An analytical correlation procedure was introduced to compute the aircraft response spectra based on the measured buffet pressures.
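Auto- and cross-power spectra of the kind generated here can be sketched with a plain DFT (an illustration only; actual flight-data processing would use windowed, averaged estimates, and the function names are ours):

```python
import cmath
import math

def dft(x):
    """Direct O(N^2) discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def cross_power(x, y):
    """Cross-power spectrum S_xy[k] = X[k] * conj(Y[k]); with x == y this
    reduces to the auto-power spectrum |X[k]|^2."""
    X, Y = dft(x), dft(y)
    return [a * b.conjugate() for a, b in zip(X, Y)]

# a pure tone's auto-power spectrum concentrates at its frequency bin
n = 32
sine = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
auto = [abs(s) for s in cross_power(sine, sine)]
print(auto.index(max(auto)))  # -> 4
```

The phase of the cross-power spectrum between two pressure transducers is what yields the spatial correlation information mentioned in the abstract.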
The Thick Level-Set model for dynamic fragmentation
Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas
2017-01-04
The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. Finally, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.
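One simple way to realize "damage as a function of the level set, with a length scale limiting the gradient" is a linear ramp profile (the linear choice is our assumption for illustration; TLS implementations select the damage profile as a model ingredient):

```python
def damage(phi, lc):
    """Damage as a function of the level-set value phi: zero ahead of the
    damage front (phi <= 0), ramping to full damage over the length scale
    lc, which caps the damage gradient at 1/lc."""
    return min(max(phi / lc, 0.0), 1.0)

# sampling the profile across the front shows the bounded transition zone
profile = [damage(x, 0.5) for x in (-0.25, 0.0, 0.25, 0.5, 1.0)]
print(profile)  # -> [0.0, 0.0, 0.5, 1.0, 1.0]
```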
Analytical evaluation of ILM sensors, volume 1
NASA Technical Reports Server (NTRS)
Kirk, R. J.
1975-01-01
The functional requirements and operating environment constraints are defined for an independent landing monitor (ILM) which provides the flight crew with an independent assessment of the operation of the primary automatic landing system. The capabilities of radars, TV, forward looking infrared radiometers, multilateration, microwave radiometers, interferometers, and nuclear sensing concepts to meet the ILM conditions are analyzed. The most critical need for the ILM appears in the landing sequence from 1000 to 2000 meters from threshold through rollout. Of the sensing concepts analyzed, the following show potential of becoming feasible ILMs: redundant microwave landing systems, precision approach radar, airborne triangulation radar, multilateration with radar altimetry, and nuclear sensing.
NASA Technical Reports Server (NTRS)
Wahls, Richard A.
1990-01-01
The method presented is designed to improve the accuracy and computational efficiency of existing numerical methods for the solution of flows with compressible turbulent boundary layers. A compressible defect stream function formulation of the governing equations assuming an arbitrary turbulence model is derived. This formulation is advantageous because it has a constrained zero-order approximation with respect to the wall shear stress and the tangential momentum equation has a first integral. Previous problems with this type of formulation near the wall are eliminated by using empirically based analytic expressions to define the flow near the wall. The van Driest law of the wall for velocity and the modified Crocco temperature-velocity relationship are used. The associated compressible law of the wake is determined and it extends the valid range of the analytical expressions beyond the logarithmic region of the boundary layer. The need for an inner-region eddy viscosity model is completely avoided. The near-wall analytic expressions are patched to numerically computed outer region solutions at a point determined during the computation. A new boundary condition on the normal derivative of the tangential velocity at the surface is presented; this condition replaces the no-slip condition and enables numerical integration to the surface with a relatively coarse grid using only an outer region turbulence model. The method was evaluated for incompressible and compressible equilibrium flows and was implemented into an existing Navier-Stokes code using the assumption of local equilibrium flow with respect to the patching. The method has proven to be accurate and efficient.
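The patching idea, an analytic near-wall profile joined to a numerically computed outer solution, can be sketched in simplified incompressible form (the constants κ = 0.41, B = 5.0 and the patch point y⁺ ≈ 11 are standard textbook values, not the paper's van Driest/Crocco expressions):

```python
import math

KAPPA, B = 0.41, 5.0  # standard log-law constants

def u_plus(y_plus):
    """Composite near-wall velocity profile: the linear viscous sublayer
    u+ = y+ patched to the logarithmic law u+ = ln(y+)/kappa + B near
    their crossover at y+ ~ 11 (a small mismatch at the patch point is
    accepted in this simplified sketch)."""
    if y_plus < 11.0:
        return y_plus
    return math.log(y_plus) / KAPPA + B

print(round(u_plus(5.0), 2), round(u_plus(100.0), 2))  # -> 5.0 16.23
```

In the paper's method the analogous analytic expression supplies the wall boundary condition, so the numerical grid only needs to resolve the outer region.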
Nutrition economics: towards comprehensive understanding of the benefits of nutrition
Koponen, Aki; Sandell, Mari; Salminen, Seppo; Lenoir-Wijnkoop, Irene
2012-01-01
There has been an increase in knowledge of and interest in nutrition, and functional foods have gained popularity over the last few decades; the trend is increasing. Probiotics and prebiotics are among the most studied functional foods. Nutrition economics has been defined as the discipline dedicated to researching and characterising health and economic outcomes in nutrition for the benefit of society. The concept and its application to probiotics and prebiotics will be discussed in terms of health and economic benefits and their evaluation. Health economics and concrete applications showing how to maximise long-term nutritional benefits will help motivate consumers to make food choices based on a rational understanding of their own interest. We present a model showing that nutrition economics can be used as an analytical tool for product and service network development. PMID:23990809
Electron-Beam Lithographic Grafting of Functional Polymer Structures from Fluoropolymer Substrates.
Gajos, Katarzyna; Guzenko, Vitaliy A; Dübner, Matthias; Haberko, Jakub; Budkowski, Andrzej; Padeste, Celestino
2016-10-07
Well-defined submicrometer structures of poly(dimethylaminoethyl methacrylate) (PDMAEMA) were grafted from 100 μm thick films of poly(ethene-alt-tetrafluoroethene) after electron-beam lithographic exposure. To explore the possibilities and limits of the method under different exposure conditions, two different acceleration voltages (2.5 and 100 kV) were employed. First, the influence of electron energy and dose on the extent of grafting and on the structures' morphology was determined via atomic force microscopy. The surface grafting with PDMAEMA was confirmed by advanced surface analytical techniques such as time-of-flight secondary ion mass spectrometry and X-ray photoelectron spectroscopy. Additionally, the possibility of effective postpolymerization modification of grafted structures was demonstrated by quaternization of the grafted PDMAEMA to the polycationic QPDMAEMA form and by exploiting electrostatic interactions to bind charged organic dyes and functional proteins.
Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments
Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria
2015-01-01
Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous media, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
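A minimal sketch of the kind of closed-form FCS model the abstract refers to is the autocorrelation function for free 3-D diffusion of a single species through a Gaussian confocal volume. The parameter values below are arbitrary illustrations, not taken from the paper:

```python
import numpy as np

# FCS autocorrelation for free 3-D diffusion through a Gaussian volume:
#   G(tau) = (1/N) * (1 + tau/tau_D)^-1 * (1 + tau/(s^2 * tau_D))^-1/2,
# where N is the mean particle number in the volume, tau_D the diffusion
# time, and s = w_z / w_xy the axial-to-lateral beam waist ratio.
# Values below (N=5, tau_D=1 ms, s=5) are illustrative assumptions.

def fcs_3d(tau, N=5.0, tau_D=1e-3, s=5.0):
    return (1.0 / N) / ((1.0 + tau / tau_D) * np.sqrt(1.0 + tau / (s**2 * tau_D)))

tau = np.logspace(-6, 1, 200)  # lag times, seconds
G = fcs_3d(tau)
print(f"G(0) = 1/N = {fcs_3d(0.0):.3f}; G(tau_D) = {fcs_3d(1e-3):.4f}")
```

Fitting this analytic form to measured correlation curves recovers N and tau_D for the simple homogeneous case; the simulation approach described above is needed precisely when no such closed form exists.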
Cooling of solar flares plasmas. 1: Theoretical considerations
NASA Technical Reports Server (NTRS)
Cargill, Peter J.; Mariska, John T.; Antiochos, Spiro K.
1995-01-01
Theoretical models of the cooling of flare plasma are reexamined. By assuming that the cooling occurs in two separate phases, in which conduction and radiation, respectively, dominate, a simple analytic formula for the cooling time of a flare plasma is derived. Unlike earlier order-of-magnitude scalings, this result accounts for the effect of the evolution of the loop plasma parameters on the cooling time. When the conductive cooling leads to an 'evaporation' of chromospheric material, the cooling time scales as L^(5/6)/p^(1/6), where L and p are evaluated at the end of the coronal phase (defined as the time of maximum temperature). When the conductive cooling is static, the cooling time scales as L^(3/4)n^(1/4). In deriving these results, use was made of an important scaling law (T proportional to n^2) during the radiative cooling phase that was first noted in one-dimensional hydrodynamic numerical simulations (Serio et al. 1991; Jakimiec et al. 1992). Our own simulations show that this result is restricted to approximately the radiative loss function of Rosner, Tucker, & Vaiana (1978). For different radiative loss functions, other scalings result, with T and n scaling almost linearly when the radiative loss falls off as T^(-2). It is shown that these scaling laws are part of a class of analytic solutions developed by Antiochos (1980).
User-Centered Evaluation of Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean C.
Visual analytics systems are becoming very popular. More domains now use interactive visualizations to analyze the ever-increasing amount and heterogeneity of data. More novel visualizations are being developed for more tasks and users. We need to ensure that these systems can be evaluated to determine that they are both useful and usable. A user-centered evaluation for visual analytics needs to be developed for these systems. While many of the typical human-computer interaction (HCI) evaluation methodologies can be applied as is, others will need modification. Additionally, new functionality in visual analytics systems needs new evaluation methodologies. There is a difference between usability evaluations and user-centered evaluations. Usability looks at the efficiency, effectiveness, and user satisfaction of users carrying out tasks with software applications. User-centered evaluation looks more specifically at the utility provided to the users by the software. This is reflected in the evaluations done and in the metrics used. In the visual analytics domain this is very challenging as users are most likely experts in a particular domain, the tasks they do are often not well defined, the software they use needs to support large amounts of different kinds of data, and often the tasks last for months. These difficulties are discussed more in the section on User-centered Evaluation. Our goal is to provide a discussion of user-centered evaluation practices for visual analytics, including existing practices that can be carried out and new methodologies and metrics that need to be developed and agreed upon by the visual analytics community. The material provided here should be of use for both researchers and practitioners in the field of visual analytics.
Researchers and practitioners in HCI who are interested in visual analytics will find this information useful, as well as a discussion of changes that need to be made to current HCI practices to make them more suitable to visual analytics. A history of analysis and analysis techniques and problems is provided, as well as an introduction to user-centered evaluation and various evaluation techniques for readers from different disciplines. The understanding of these techniques is imperative if we wish to support analysis in the visual analytics software we develop. Currently the evaluations that are conducted and published for visual analytics software are very informal and consist mainly of comments from users or potential users. Our goal is to help researchers in visual analytics conduct more formal user-centered evaluations. While these are time-consuming and expensive to carry out, the outcomes of these studies will have a defining impact on the field of visual analytics and help point the direction for future features and visualizations to incorporate. While many researchers view user-centered evaluation as a less-than-exciting area in which to work, the opposite is true. First of all, the goal of user-centered evaluation is to help visual analytics software developers, researchers, and designers improve their solutions and discover creative ways to better accommodate their users. Working with the users is extremely rewarding as well. While we use the term "users" in almost all situations, there are a wide variety of users that all need to be accommodated. Moreover, the domains that use visual analytics are varied and expanding. Just understanding the complexities of a number of these domains is exciting. Researchers are trying out different visualizations and interactions as well. And of course, the size and variety of data are expanding rapidly. User-centered evaluation in this context is rapidly changing.
There are no standard processes and metrics and thus those of us working on user-centered evaluation must be creative in our work with both the users and with the researchers and developers.
Distributed data networks: a blueprint for Big Data sharing and healthcare analytics.
Popovic, Jennifer R
2017-01-01
This paper defines the attributes of distributed data networks and outlines the data and analytic infrastructure needed to build and maintain a successful network. We use examples from one successful implementation of a large-scale, multisite, healthcare-related distributed data network, the U.S. Food and Drug Administration-sponsored Sentinel Initiative. Analytic infrastructure-development concepts are discussed from the perspective of promoting six pillars of analytic infrastructure: consistency, reusability, flexibility, scalability, transparency, and reproducibility. This paper also introduces one use case for machine learning algorithm development to fully utilize and advance the portfolio of population health analytics, particularly those using multisite administrative data sources. © 2016 New York Academy of Sciences.
Pursuing Information: A Conversation Analytic Perspective on Communication Strategies
ERIC Educational Resources Information Center
Burch, Alfred R.
2014-01-01
Research on second language (L2) communication strategies over the past three decades has concerned itself broadly with defining their usage in terms of planning and compensation, as well as with the use of taxonomies for coding different types of strategies. Taking a Conversation Analytic (CA) perspective, this article examines the fine-grained…
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
ERIC Educational Resources Information Center
Lu, Owen H. T.; Huang, Anna Y. Q.; Huang, Jeff C. H.; Lin, Albert J. Q.; Ogata, Hiroaki; Yang, Stephen J. H.
2018-01-01
Blended learning combines online digital resources with traditional classroom activities and enables students to attain higher learning performance through well-defined interactive strategies involving online and traditional learning activities. Learning analytics is a conceptual framework and is a part of our Precision education used to analyze…
A MDMP for All Seasons: Modifying the MDMP for Success
2004-05-26
Rational decision-making theory … limited rationality … making instead of using the MDMP, which is an analytical decision-making process. Limited rationality and analytical decision-making will be discussed … limited rationality decision-making theories. FM 5.0 defines fundamentals of planning, such as commander's involvement and developing creative plans.
NASA Astrophysics Data System (ADS)
Qin, Yuxiang; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Geil, Paul M.; Mesinger, Andrei; Wyithe, J. Stuart B.
2018-06-01
We study dwarf galaxy formation at high redshift (z ≥ 5) using a suite of high-resolution, cosmological hydrodynamic simulations and a semi-analytic model (SAM). We focus on gas accretion, cooling, and star formation in this work by isolating the relevant process from reionization and supernova feedback, which will be further discussed in a companion paper. We apply the SAM to halo merger trees constructed from a collisionless N-body simulation sharing identical initial conditions to the hydrodynamic suite, and calibrate the free parameters against the stellar mass function predicted by the hydrodynamic simulations at z = 5. By making comparisons of the star formation history and gas components calculated by the two modelling techniques, we find that semi-analytic prescriptions that are commonly adopted in the literature of low-redshift galaxy formation do not accurately represent dwarf galaxy properties in the hydrodynamic simulation at earlier times. We propose three modifications to SAMs that will provide more accurate high-redshift simulations. These include (1) the halo mass and baryon fraction which are overestimated by collisionless N-body simulations; (2) the star formation efficiency which follows a different cosmic evolutionary path from the hydrodynamic simulation; and (3) the cooling rate which is not well defined for dwarf galaxies at high redshift. Accurate semi-analytic modelling of dwarf galaxy formation informed by detailed hydrodynamical modelling will facilitate reliable semi-analytic predictions over the large volumes needed for the study of reionization.
McDermott, Imelda; Checkland, Kath; Harrison, Stephen; Snow, Stephanie; Coleman, Anna
2013-01-01
The language used by National Health Service (NHS) "commissioning" managers when discussing their roles and responsibilities can be seen as a manifestation of "identity work", defined as a process of identifying. This paper aims to offer a novel approach to analysing "identity work" by triangulation of multiple analytical methods, combining analysis of the content of text with analysis of its form. Fairclough's discourse analytic methodology is used as a framework. Following Fairclough, the authors use analytical methods associated with Halliday's systemic functional linguistics. While analysis of the content of interviews provides some information about NHS Commissioners' perceptions of their roles and responsibilities, analysis of the form of discourse that they use provides a more detailed and nuanced view. Overall, the authors found that commissioning managers have a higher level of certainty about what commissioning is not rather than what commissioning is; GP managers have a high level of certainty of their identity as a GP rather than as a manager; and both GP managers and non-GP managers oscillate between multiple identities depending on the different situations they are in. This paper offers a novel approach to triangulation, based not on the usual comparison of multiple data sources, but rather based on the application of multiple analytical methods to a single source of data. This paper also shows the latent uncertainty about the nature of commissioning enterprise in the English NHS.
(U) An Analytic Examination of Piezoelectric Ejecta Mass Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, Ian Lee
2017-02-02
Ongoing efforts to validate a Richtmyer-Meshkov instability (RMI) based ejecta source model [1, 2, 3] in LANL ASC codes use ejecta areal masses derived from piezoelectric sensor data [4, 5, 6]. However, the standard technique for inferring masses from sensor voltages implicitly assumes instantaneous ejecta creation [7], which is not a feature of the RMI source model. To investigate the impact of this discrepancy, we define separate "areal mass functions" (AMFs) at the source and sensor in terms of typically unknown distribution functions for the ejecta particles, and derive an analytic relationship between them. Then, for the case of single-shock ejection into vacuum, we use the AMFs to compare the analytic (or "true") accumulated mass at the sensor with the value that would be inferred from piezoelectric voltage measurements. We confirm the inferred mass is correct when creation is instantaneous, and furthermore prove that when creation is not instantaneous, the inferred values will always overestimate the true mass. Finally, we derive an upper bound for the error imposed on a perfect system by the assumption of instantaneous ejecta creation. When applied to shots in the published literature, this bound is frequently less than several percent. Errors exceeding 15% may require velocities or timescales at odds with experimental observations.
Fibrinolysis standards: a review of the current status.
Thelwell, C
2010-07-01
Biological standards are used to calibrate measurements of components of the fibrinolytic system, either for assigning potency values to therapeutic products, or to determine levels in human plasma as an indicator of thrombotic risk. Traditionally WHO International Standards are calibrated in International Units based on consensus values from collaborative studies. The International Unit is defined by the response activity of a given amount of the standard in a bioassay, independent of the method used. Assay validity is based on the assumption that both standard and test preparation contain the same analyte, and the response in an assay is a true function of this analyte. This principle is reflected in the diversity of source materials used to prepare fibrinolysis standards, which has depended on the contemporary preparations they were employed to measure. With advancing recombinant technology, and improved analytical techniques, a reference system based on reference materials and associated reference methods has been recommended for future fibrinolysis standards. Careful consideration and scientific judgement must however be applied when deciding on an approach to develop a new standard, with decisions based on the suitability of a standard to serve its purpose, and not just to satisfy a metrological ideal. 2010 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lohmann, R. A.; Riecke, G. T.
1977-01-01
An analytical screening study was conducted to identify duct burner concepts capable of providing low emissions and high performance in advanced supersonic engines. Duct burner configurations ranging from current augmenter technology to advanced concepts such as premix-prevaporized burners were defined. Aerothermal and mechanical design studies provided the basis for screening these configurations using the criteria of emissions, performance, engine compatibility, cost, weight and relative risk. Technology levels derived from recently defined experimental low emissions main burners are required to achieve both low emissions and high performance goals. A configuration based on the Vorbix (Vortex burning and mixing) combustor concept was analytically determined to meet the performance goals and is consistent with the fan duct envelope of a variable cycle engine. The duct burner configuration has a moderate risk level compatible with the schedule of anticipated experimental programs.
Orbiter middeck/payload standard interfaces control document
NASA Technical Reports Server (NTRS)
1984-01-01
The interfaces which shall be provided by the baseline shuttle mid-deck for payload use within the mid-deck area are defined, as well as all constraints which shall be observed by all the users of the defined interfaces. Commonality was established with respect to analytical approaches, analytical models, technical data and definitions for integrated analyses by all the interfacing parties. Any payload interfaces that are out of scope with the standard interfaces defined shall be defined in a Payload Unique Interface Control Document (ICD) for a given payload. Each Payload Unique ICD will have comparable paragraphs to this ICD and will have a corresponding notation of A, for applicable; N/A, for not applicable; N, for note added for explanation; and E, for exception. On any flight, the STS reserves the right to assign locations to both payloads mounted on an adapter plate(s) and payloads stored within standard lockers. Specific location requests and/or requirements exceeding standard mid-deck payload requirements may result in a reduction in manifesting opportunities.
Advancing Collaboration through Hydrologic Data and Model Sharing
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.
2015-12-01
HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis is supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.
Robust estimation for ordinary differential equation models.
Cao, J; Wang, L; Xu, J
2011-12-01
Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
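The paper's full method uses nonparametric penalized smoothing with two nested optimization levels; as a much simpler hedged sketch of the robustness idea alone, one can fit a single ODE parameter to outlier-contaminated data with a Huber-type loss (here SciPy's `soft_l1`), which downweights the outliers that would bias an ordinary least-squares fit:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Simplified sketch (not the paper's two-level method): robust estimation
# of the decay rate theta in dx/dt = -theta * x from noisy data with
# outliers, using a robust "soft_l1" loss on the fit residuals.

rng = np.random.default_rng(0)
theta_true, x0 = 0.5, 2.0
t = np.linspace(0.0, 10.0, 50)
y = x0 * np.exp(-theta_true * t) + rng.normal(0.0, 0.05, t.size)
y[::10] += 1.0  # inject gross outliers at every 10th point

def residuals(theta):
    # Numerically solve the ODE for the candidate parameter value.
    sol = solve_ivp(lambda s, x: -theta[0] * x, (t[0], t[-1]), [x0], t_eval=t)
    return sol.y[0] - y

# Robust loss keeps the outliers from dominating the fit.
fit = least_squares(residuals, x0=[1.0], loss="soft_l1", f_scale=0.1)
print(f"estimated theta: {fit.x[0]:.3f} (true value {theta_true})")
```

With an ordinary squared-error loss the same outliers pull the estimate away from the true value; the robust loss recovers it closely. The paper's approach avoids repeated ODE solves by working with a spline representation instead, which is why the implicit function theorem is needed for the gradients.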
The Mass Function of Cosmic Structures
NASA Astrophysics Data System (ADS)
Audit, E.; Teyssier, R.; Alimi, J.-M.
We investigate some modifications to the Press and Schechter (1974) (PS) prescription resulting from shear and tidal effects. These modifications rely on more realistic treatments of the collapse process than the standard approach based on the spherical model. First, we show that the mass function resulting from a new approximate Lagrangian dynamics (Audit & Alimi, A&A 1996) contains more objects at high mass than the classical PS mass function and is well fitted by a PS-like function with a threshold density of δ_c ≈ 1.4. However, such a Lagrangian description can underestimate the epoch of structure formation since it defines it as the collapse of the first principal axis. We therefore suggest some analytical prescriptions for computing the collapse time along the second and third principal axes, and we deduce the corresponding mass functions. The collapse along the third axis is delayed by the shear, and the number of objects of high mass then decreases. Finally, we show that the shear also strongly affects the formation of low-mass halos. This dynamical effect implies a modification of the low-mass slope of the mass function and allows the reproduction of the observed luminosity function of field galaxies.
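For reference, the baseline against which these modifications are compared is the standard Press–Schechter mass function; in conventional notation, with δ_c the collapse threshold, σ(M) the rms density fluctuation smoothed on mass scale M, and ρ̄ the mean density:

```latex
\frac{dn}{dM} \, dM
  = \sqrt{\frac{2}{\pi}} \, \frac{\bar{\rho}}{M^{2}} \,
    \frac{\delta_{c}}{\sigma(M)} \,
    \left| \frac{d\ln\sigma}{d\ln M} \right| \,
    \exp\!\left( -\frac{\delta_{c}^{2}}{2\,\sigma^{2}(M)} \right) dM
```

The spherical-collapse model gives δ_c ≃ 1.69; the δ_c ≈ 1.4 quoted above plays the role of this threshold in the PS-like fit to the Lagrangian dynamics.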
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jae Gil, E-mail: jgchoi@dankook.ac.kr; Chang, Seung Jun, E-mail: sejchang@dankook.ac.kr
In this paper we derive a Cameron-Storvick theorem for the analytic Feynman integral of functionals on the product abstract Wiener space B². We then apply our result to obtain an evaluation formula for the analytic Feynman integral of unbounded functionals on B². We also present meaningful examples involving functionals which arise naturally in quantum mechanics.
Functionalized magnetic nanoparticle analyte sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yantasee, Wassana; Warner, Maryin G; Warner, Cynthia L
2014-03-25
A method and system for simply and efficiently determining quantities of a preselected material in a particular solution: at least one superparamagnetic nanoparticle having a specified functionalized organic material connected thereto is placed into a sample solution, where preselected analytes attach to the functionalized organic groups. These superparamagnetic nanoparticles are then collected at a collection site and analyzed for the presence of a particular analyte.
Two-dimensional analytic weighting functions for limb scattering
NASA Astrophysics Data System (ADS)
Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.
2017-10-01
Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
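The core idea, that a weighting function is the derivative of radiance with respect to a constituent's concentration, and that analytic forms beat numerical perturbation, can be illustrated with a toy model. The following sketch is not SASKTRAN-HR; it uses a simple Beer-Lambert attenuation along a layered path with assumed, arbitrary values, and checks the analytic derivative against a finite-difference perturbation:

```python
import numpy as np

# Toy illustration: for I = I0 * exp(-sum_j sigma * n_j * ds_j), the
# analytic "weighting function" dI/dn_j = -sigma * ds_j * I for every
# layer comes from one model evaluation, whereas numerical perturbation
# needs one evaluation per layer. All values here are assumptions.

I0, sigma = 1.0, 3e-23            # source radiance, cross-section (cm^2)
ds = np.full(10, 1e5)             # path length per layer (cm)
n = np.full(10, 1e17)             # number density per layer (cm^-3)

def radiance(n):
    return I0 * np.exp(-np.sum(sigma * n * ds))

I = radiance(n)
wf_analytic = -sigma * ds * I     # all 10 weighting functions at once

eps = 1e12                        # perturbation size
wf_numeric = np.array([
    (radiance(n + eps * np.eye(10)[j]) - I) / eps for j in range(10)
])

print(np.allclose(wf_analytic, wf_numeric, rtol=1e-3, atol=0.0))
```

In a real multiple-scattering radiative transfer model the analytic derivative is far more involved, but the speed advantage scales the same way: the perturbation approach multiplies the cost by the number of grid cells, which is what makes two-dimensional grids impractical without analytic weighting functions.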
3-D discrete analytical ridgelet transform.
Helbert, David; Carré, Philippe; Andres, Eric
2006-12-01
In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform using discrete analytical geometry theory, by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension to the Local-DART (with smooth windowing) to the denoising of 3-D images and color video. These experimental results show that the simple thresholding of the 3-D DART coefficients is efficient.
A Model for Axial Magnetic Bearings Including Eddy Currents
NASA Technical Reports Server (NTRS)
Kucera, Ladislav; Ahrens, Markus
1996-01-01
This paper presents an analytical method of modelling eddy currents inside axial bearings. The problem is solved by dividing an axial bearing into elementary geometric forms, solving the Maxwell equations for these simplified geometries, defining boundary conditions and combining the geometries. The final result is an analytical solution for the flux, from which the impedance and the force of an axial bearing can be derived. Several impedance measurements have shown that the analytical solution can fit the measured data with a precision of approximately 5%.
van Eijk, Ruben PA; Eijkemans, Marinus JC; Rizopoulos, Dimitris
2018-01-01
Objective Amyotrophic lateral sclerosis (ALS) clinical trials based on single end points only partially capture the full treatment effect when both function and mortality are affected, and may falsely dismiss efficacious drugs as futile. We aimed to investigate the statistical properties of several strategies for the simultaneous analysis of function and mortality in ALS clinical trials. Methods Based on the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database, we simulated longitudinal patterns of functional decline, defined by the revised amyotrophic lateral sclerosis functional rating scale (ALSFRS-R) and conditional survival time. Different treatment scenarios with varying effect sizes were simulated with follow-up ranging from 12 to 18 months. We considered the following analytical strategies: 1) Cox model; 2) linear mixed effects (LME) model; 3) omnibus test based on Cox and LME models; 4) composite time-to-6-point decrease or death; 5) combined assessment of function and survival (CAFS); and 6) test based on joint modeling framework. For each analytical strategy, we calculated the empirical power and sample size. Results Both Cox and LME models have increased false-negative rates when treatment exclusively affects either function or survival. The joint model has superior power compared to other strategies. The composite end point increases false-negative rates among all treatment scenarios. To detect a 15% reduction in ALSFRS-R decline and 34% decline in hazard with 80% power after 18 months, the Cox model requires 524 patients, the LME model 794 patients, the omnibus test 526 patients, the composite end point 1,274 patients, the CAFS 576 patients and the joint model 464 patients. Conclusion Joint models have superior statistical power to analyze simultaneous effects on survival and function and may circumvent pitfalls encountered by other end points. 
Optimizing trial end points is essential, as selecting suboptimal outcomes may disguise important treatment clues. PMID:29593436
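The abstract's central quantity, empirical power, can be made concrete with a toy simulation. The sketch below is not the paper's LME/Cox/joint-model machinery: it substitutes a plain two-sided z-test on simulated per-patient ALSFRS-R slopes, and the slope, effect-size, and noise values are illustrative assumptions only.

```python
import random
import statistics

def empirical_power(n_per_arm, slope_ctrl=-1.0, effect=0.15, sd=0.8,
                    n_sim=500, alpha_z=1.96, seed=7):
    """Fraction of simulated trials in which a two-sided z-test on mean
    per-patient functional slopes is significant (illustrative model)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        # control arm declines at slope_ctrl; treatment slows decline by `effect`
        ctrl = [rng.gauss(slope_ctrl, sd) for _ in range(n_per_arm)]
        trt = [rng.gauss(slope_ctrl * (1 - effect), sd) for _ in range(n_per_arm)]
        se = (statistics.pvariance(ctrl) / n_per_arm
              + statistics.pvariance(trt) / n_per_arm) ** 0.5
        z = (statistics.mean(trt) - statistics.mean(ctrl)) / se
        hits += abs(z) > alpha_z
    return hits / n_sim
```

Comparing `empirical_power(50)` with `empirical_power(400)` shows power rising with arm size, mirroring the kind of sample-size comparison reported above.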
Negative effective mass in acoustic metamaterial with nonlinear mass-in-mass subsystems
NASA Astrophysics Data System (ADS)
Cveticanin, L.; Zukovic, M.
2017-10-01
In this paper the dynamics of the nonlinear mass-in-mass system, the basic subsystem of an acoustic metamaterial, is investigated. The excitation of the system is in the form of a Jacobi elliptic function. The corresponding model for this forcing is the mass-in-mass system with cubic nonlinearity of the Duffing type. The mathematical model of the motion is a system of two coupled, strongly nonlinear, nonhomogeneous second-order differential equations. A particular solution to the system is obtained. The analytical solution of the problem is based on the single and double integrals of the cosine Jacobi function; in the paper these integrals are given as series of trigonometric functions. These results are new. After some modification, a simplified first-approximation solution is obtained that is convenient for discussion. Conditions for eliminating the motion of mass 1 by attaching a nonlinear dynamic absorber (a mass-spring system) are defined. An effective mass ratio is introduced for the nonlinear mass-in-mass system; a negative effective mass ratio gives absorption of vibrations at certain frequencies. The advantage of the nonlinear subunit over the linear one is that the frequency gap is significantly wider, although it has to be mentioned that the vibration amplitude differs from zero by a small value. The analytical results are compared with numerical ones and are in agreement.
Meta-connectomics: human brain network and connectivity meta-analyses.
Crossley, N A; Fox, P T; Bullmore, E T
2016-04-01
Abnormal brain connectivity or network dysfunction has been suggested as a paradigm to understand several psychiatric disorders. We here review the use of novel meta-analytic approaches in neuroscience that go beyond a summary description of existing results by applying network analysis methods to previously published studies and/or publicly accessible databases. We define this strategy of combining connectivity with other brain characteristics as 'meta-connectomics'. For example, we show how network analysis of task-based neuroimaging studies has been used to infer functional co-activation from primary data on regional activations. This approach has been able to relate cognition to functional network topology, demonstrating that the brain is composed of cognitively specialized functional subnetworks or modules, linked by a rich club of cognitively generalized regions that mediate many inter-modular connections. Another major application of meta-connectomics has been efforts to link meta-analytic maps of disorder-related abnormalities or MRI 'lesions' to the complex topology of the normative connectome. This work has highlighted the general importance of network hubs as hotspots for concentration of cortical grey-matter deficits in schizophrenia, Alzheimer's disease and other disorders. Finally, we show how by incorporating cellular and transcriptional data on individual nodes with network models of the connectome, studies have begun to elucidate the microscopic mechanisms underpinning the macroscopic organization of whole-brain networks. We argue that meta-connectomics is an exciting field, providing robust and integrative insights into brain organization that will likely play an important future role in consolidating network models of psychiatric disorders.
Rational approximations from power series of vector-valued meromorphic functions
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C yields C(sup N), which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they were akin to Pade approximations. In particular, we proved a Koenig type theorem concerning their poles and a de Montessus type theorem concerning their uniform convergence. We showed how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the procedures above and the accompanying theoretical results to functions defined in arbitrary linear spaces were also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploited the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices which may or may not be diagonalizable, and are very closely related to known Krylov subspace methods.
Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.
Sun, Dong; Zhao, Daomu
2005-08-01
By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve
1992-01-01
It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. It is also well known that if the function is analytic but not periodic, the truncated series converges only slowly away from the boundary and fails to converge in the maximum norm, although the function is still analytic. This is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients contain enough information about the function that an exponentially convergent approximation (in the maximum norm) can be constructed.
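A minimal sketch of the phenomenon (our own example, not the authors' reconstruction method): for f(x) = x, which is analytic but not 2π-periodic, the truncated Fourier series converges at interior points, yet its maximum-norm error never shrinks because of the overshoot near x = ±π.

```python
import math

def partial_sum(x, N):
    """Truncated Fourier series of f(x) = x on (-pi, pi):
    x = sum_{n>=1} 2*(-1)**(n+1)*sin(n*x)/n."""
    return sum(2 * (-1) ** (n + 1) * math.sin(n * x) / n for n in range(1, N + 1))

def max_error(N, samples=2000):
    """Maximum-norm error of the N-term series sampled on (-pi, pi)."""
    xs = (0.9999 * math.pi * (2 * j / samples - 1) for j in range(1, samples))
    return max(abs(partial_sum(x, N) - x) for x in xs)
```

Interior-point errors shrink as N grows, but `max_error` stays on the order of the jump at x = ±π, which is exactly the Gibbs behavior the abstract describes.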
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senesac, Larry R; Datskos, Panos G; Sepaniak, Michael J
2006-01-01
In the present work, we have performed analyte species and concentration identification using an array of ten differentially functionalized microcantilevers coupled with a back-propagation artificial neural network pattern recognition algorithm. The array consists of ten nanostructured silicon microcantilevers functionalized by polymeric and gas chromatography phases and macrocyclic receptors as spatially dense, differentially responding sensing layers for identification and quantitation of individual analyte(s) and their binary mixtures. The array response (i.e. cantilever bending) to analyte vapor was measured by an optical readout scheme and the responses were recorded for a selection of individual analytes as well as several binary mixtures. An artificial neural network (ANN) was designed and trained to recognize not only the individual analytes and binary mixtures, but also to determine the concentration of individual components in a mixture. To the best of our knowledge, ANNs have not been applied to microcantilever array responses previously to determine concentrations of individual analytes. The trained ANN correctly identified the eleven test analyte(s) as individual components, most with probabilities greater than 97%, whereas it did not misidentify an unknown (untrained) analyte. Demonstrated unique aspects of this work include an ability to measure binary mixtures and provide both qualitative (identification) and quantitative (concentration) information with array-ANN-based sensor methodologies.
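The identification step can be illustrated with a far simpler stand-in for the back-propagation ANN: a nearest-template classifier over 10-element response fingerprints. The analyte names and response values below are invented for illustration only.

```python
import random

random.seed(42)

# Hypothetical "fingerprints": one bending response per cantilever in the array.
TEMPLATES = {
    "ethanol": [0.9, 0.1, 0.5, 0.3, 0.7, 0.2, 0.8, 0.4, 0.6, 0.1],
    "toluene": [0.2, 0.8, 0.1, 0.9, 0.3, 0.7, 0.1, 0.6, 0.2, 0.9],
    "acetone": [0.5, 0.5, 0.9, 0.1, 0.5, 0.9, 0.3, 0.2, 0.9, 0.4],
}

def identify(response):
    """Return the analyte whose stored template is closest (squared
    Euclidean distance) to the measured 10-element response."""
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(TEMPLATES[name], response))
    return min(TEMPLATES, key=dist)

def noisy(template, sigma=0.05):
    """Simulate a measurement: the template plus Gaussian readout noise."""
    return [v + random.gauss(0, sigma) for v in template]
```

The differential functionalization of the real array plays the role of making these fingerprints well separated, which is what lets even this crude classifier succeed; the ANN additionally handles mixtures and concentrations.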
Swain, Eric D.; Chin, David A.
2003-01-01
A predominant cause of dispersion in groundwater is advective mixing due to variability in seepage rates. Hydraulic conductivity variations have been extensively researched as a cause of this seepage variability. In this paper the effect of variations in surface recharge to a shallow surficial aquifer is investigated as an important additional effect. An analytical formulation has been developed that relates aquifer parameters and the statistics of recharge variability to increases in the dispersivity. This is accomplished by solving Fourier transforms of the small perturbation forms of the groundwater flow equations. Two field studies are presented in this paper to determine the statistics of recharge variability for input to the analytical formulation. A time series of water levels at a continuous groundwater recorder is used to investigate the temporal statistics of hydraulic head caused by recharge, and a series of infiltrometer measurements are used to define the spatial variability in the recharge parameters. With these field statistics representing head fluctuations due to recharge, the analytical formulation can be used to compute the dispersivity without an explicit representation of the recharge boundary. Results from a series of numerical experiments are used to define the limits of this analytical formulation and to provide some comparison. A sophisticated model has been developed using a particle‐tracking algorithm (modified to account for temporal variations) to estimate groundwater dispersion. Dispersivity increases of 9 percent are indicated by the analytical formulation for the aquifer at the field site. A comparison with numerical model results indicates that the analytical results are reasonable for shallow surficial aquifers in which two‐dimensional flow can be assumed.
Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko
2017-07-10
This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage without any cost, compared with the numerical method using fast Fourier transform to Fourier transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
An extension of the Laplace transform to Schwartz distributions
NASA Technical Reports Server (NTRS)
Price, D. R.
1974-01-01
A characterization of the Laplace transform is developed which extends the transform to the Schwartz distributions. The class of distributions includes the impulse functions and other singular functions which occur as solutions to ordinary and partial differential equations. The standard theorems on analyticity, uniqueness, and invertibility of the transform are proved by using the characterization as the definition of the Laplace transform. The definition uses sequences of linear transformations on the space of distributions which extends the Laplace transform to another class of generalized functions, the Mikusinski operators. It is shown that the sequential definition of the transform is equivalent to Schwartz' extension of the ordinary Laplace transform to distributions but, in contrast to Schwartz' definition, does not use the distributional Fourier transform. Several theorems concerning the particular linear transformations used to define the Laplace transforms are proved. All the results proved in one dimension are extended to the n-dimensional case, but proofs are presented only for those situations that require methods different from their one-dimensional analogs.
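As a concrete instance of the singular functions such an extension must cover, the Dirac impulse has the standard distributional transform (a textbook result, not the paper's sequential construction):

```latex
\mathcal{L}\{\delta(t-a)\}(s) = e^{-as}, \qquad a \ge 0.
```

Applied to the ODE $y' + y = \delta(t)$ with $y(0^-) = 0$, this gives $(s+1)Y(s) = 1$, hence $y(t) = e^{-t}$ for $t > 0$, the kind of impulse-driven solution the abstract refers to.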
NASA Astrophysics Data System (ADS)
Ahmadov, A. I.; Naeem, Maria; Qocayeva, M. V.; Tarverdiyeva, V. A.
2018-02-01
In this paper, the bound state solution of the modified radial Schrödinger equation is obtained for the Manning-Rosen plus Hulthén potential by implementing the novel improved scheme to surmount the centrifugal term. The energy eigenvalues and corresponding radial wave functions are defined for any l ≠ 0 angular momentum case via the Nikiforov-Uvarov (NU) and supersymmetric quantum mechanics (SUSYQM) methods. These two different methods yield equivalent expressions for the energy eigenvalues, and the transformation of the radial wave-function expressions obtained by one method into those of the other is demonstrated. The energy levels are worked out and the corresponding normalized eigenfunctions are represented in terms of Jacobi polynomials for arbitrary l states. A closed form of the normalization constant of the wave functions is also found. It is shown that the energy eigenvalues and eigenfunctions are sensitive to the radial quantum number n_r and the orbital quantum number l.
Shape dependence of two-cylinder Rényi entropies for free bosons on a lattice
NASA Astrophysics Data System (ADS)
Chojnacki, Leilee; Cook, Caleb Q.; Dalidovich, Denis; Hayward Sierens, Lauren E.; Lantagne-Hurtubise, Étienne; Melko, Roger G.; Vlaar, Tiffany J.
2016-10-01
Universal scaling terms occurring in Rényi entanglement entropies have the potential to bring new understanding to quantum critical points in free and interacting systems. Quantitative comparisons between analytical continuum theories and numerical calculations on lattice models play a crucial role in advancing such studies. In this paper, we exactly calculate the universal two-cylinder shape dependence of entanglement entropies for free bosons on finite-size square lattices, and compare to approximate functions derived in the continuum using several different Ansätze. Although none of these Ansätze are exact in the thermodynamic limit, we find that numerical fits are in good agreement with continuum functions derived using the anti-de Sitter/conformal field theory correspondence, an extensive mutual information model, and a quantum Lifshitz model. We use fits of our lattice data to these functions to calculate universal scalars defined in the thin-cylinder limit, and compare to values previously obtained for the free boson field theory in the continuum.
A continuous function model for path prediction of entities
NASA Astrophysics Data System (ADS)
Nanda, S.; Pray, R.
2007-04-01
As militaries across the world continue to evolve, the roles of humans in various theatres of operation are being increasingly targeted by military planners for substitution with automation. Forward observation and direction of supporting arms to neutralize threats from dynamic adversaries is one such example. However, contemporary tracking and targeting systems are incapable of serving autonomously for they do not embody the sophisticated algorithms necessary to predict the future positions of adversaries with the accuracy offered by the cognitive and analytical abilities of human operators. The need for these systems to incorporate methods characterizing such intelligence is therefore compelling. In this paper, we present a novel technique to achieve this goal by modeling the path of an entity as a continuous polynomial function of multiple variables expressed as a Taylor series with a finite number of terms. We demonstrate the method for evaluating the coefficient of each term to define this function unambiguously for any given entity, and illustrate its use to determine the entity's position at any point in time in the future.
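A minimal sketch of the idea under stated assumptions: truncate the Taylor series at three terms (position, velocity, acceleration), determine the coefficients of one coordinate from three observed (time, position) samples by solving the Vandermonde system, and extrapolate. The quadratic truncation and all function names are our own illustrative choices, not the paper's.

```python
def fit_quadratic(samples):
    """Fit p(t) = a0 + a1*t + a2*t**2 through three (t, x) samples by
    Gauss-Jordan elimination on the 3x3 Vandermonde system."""
    A = [[1.0, t, t * t, x] for t, x in samples]    # augmented rows
    for i in range(3):
        A[i] = [v / A[i][i] for v in A[i]]          # normalize pivot row
        for j in range(3):
            if j != i:
                factor = A[j][i]
                A[j] = [vj - factor * vi for vj, vi in zip(A[j], A[i])]
    return [row[3] for row in A]                    # [a0, a1, a2]

def predict(coeffs, t):
    """Evaluate the truncated Taylor-series path model at time t."""
    a0, a1, a2 = coeffs
    return a0 + a1 * t + a2 * t * t
```

For a full 3-D track, the same fit is applied per coordinate; adding more terms to the truncated series trades robustness to noise for responsiveness to maneuvers.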
ERIC Educational Resources Information Center
Mendiburo, Maria; Williams, Laura; Segedy, James; Hasselbring, Ted
2013-01-01
In this paper, the authors explore the use of learning analytics as a method for easing the cognitive demands on teachers implementing the HALF instructional model. Learning analytics has been defined as "the measurement, collection, analysis and reporting of data about learners and their contexts for the purposes of understanding and…
NASA Astrophysics Data System (ADS)
Noah, Joyce E.
Time correlation functions of density fluctuations of liquids at equilibrium can be used to relate the microscopic dynamics of a liquid to its macroscopic transport properties. Time correlation functions are especially useful since they can be generated in a variety of ways, from scattering experiments to computer simulation to analytic theory. The kinetic theory of fluctuations in equilibrium liquids is an analytic theory for calculating correlation functions using memory functions. In this work, we use a diagrammatic formulation of the kinetic theory to develop a series of binary collision approximations for the collisional part of the memory function. We define binary collisions as collisions between two distinct density fluctuations whose identities are fixed for the duration of a collision. The first type of approximation, R approximations, is for the short time part of the memory function, and builds upon the work of Ranganathan and Andersen. These approximations have purely repulsive interactions between the fluctuations. The second type of approximation, RA approximations, is for the longer time part of the memory function, where the density fluctuations now interact via repulsive and attractive forces. Although RA approximations are a natural extension of R approximations, they permit two density fluctuations to become trapped in the wells of the interaction potential, leading to long-lived oscillatory behavior, which is unphysical. Therefore we consider S approximations, which describe binary particles that experience the random effect of the surroundings while interacting via repulsive, or repulsive and attractive, interactions. For each of these approximations for the memory function we numerically solve the kinetic equation to generate correlation functions. These results are compared to molecular dynamics results for the correlation functions.
Comparing the successes and failures of the different approximations, we conclude that R approximations give more accurate intermediate and long time results while RA and S approximations do particularly well at predicting the short time behavior. Lastly, we also develop a series of non-graphically derived approximations and use an optimization procedure to determine the underlying memory function from the simulation data. These approaches provide valuable information about the memory function that will be used in the development of future kinetic theories.
Likelihood-Based Clustering of Meta-Analytic SROC Curves
ERIC Educational Resources Information Center
Holling, Heinz; Bohning, Walailuck; Bohning, Dankmar
2012-01-01
Meta-analysis of diagnostic studies experience the common problem that different studies might not be comparable since they have been using a different cut-off value for the continuous or ordered categorical diagnostic test value defining different regions for which the diagnostic test is defined to be positive. Hence specificities and…
Basic Western Lviv Region Conversational Ukrainian
ERIC Educational Resources Information Center
Petryshyn, Ivan
2015-01-01
Purpose: To present the first complete Guide for studying the Western-Ukrainian Dialect and its scientific description of Phonology. Methodology: descriptive, contrastive and analytical methods of defining the peculiarities of the Dialect. Results: the regularities and the laws have been defined as to the specifics of the Western-Ukrainian Dialect…
Improved methods for fan sound field determination
NASA Technical Reports Server (NTRS)
Cicon, D. E.; Sofrin, T. G.; Mathews, D. C.
1981-01-01
Several methods for determining acoustic mode structure in aircraft turbofan engines using wall microphone data were studied. A method for reducing data was devised and implemented which makes the definition of discrete coherent sound fields measured in the presence of engine speed fluctuation more accurate. For the analytical methods, algorithms were developed to define the dominant circumferential modes from full and partial circumferential arrays of microphones. Axial arrays were explored to define mode structure as a function of cutoff ratio, and the use of data taken at several constant speeds was also evaluated in an attempt to reduce instrumentation requirements. Sensitivities of the various methods to microphone density, array size and measurement error were evaluated and results of these studies showed these new methods to be impractical. The data reduction method used to reduce the effects of engine speed variation consisted of an electronic circuit which windowed the data so that signal enhancement could occur only when the speed was within a narrow range.
NASA Technical Reports Server (NTRS)
Gnoffo, P. A.
1978-01-01
A coordinate transformation, which can approximate many different two-dimensional and axisymmetric body shapes with an analytic function, is used as a basis for solving the Navier-Stokes equations for the purpose of predicting 0 deg angle of attack supersonic flow fields. The transformation defines a curvilinear, orthogonal coordinate system in which coordinate lines are perpendicular to the body and the body is defined by one coordinate line. This system is mapped into a rectangular computational domain in which the governing flow field equations are solved numerically. Advantages of this technique are that the specification of boundary conditions is simplified and, most importantly, the entire flow field can be obtained, including flow in the wake. Good agreement has been obtained with experimental data for pressure distributions, density distributions, and heat transfer over spheres and cylinders in supersonic flow. Approximations to the Viking aeroshell and to a candidate Jupiter probe are presented and flow fields over these shapes are calculated.
Immunoelectron microscopy in embryos.
Sierralta, W D
2001-05-01
Immunogold labeling of proteins in sections of embryos embedded in acrylate media provides an important analytical tool when the resolving power of the electron microscope is required to define sites of protein function. The protocol presented here was established to analyze the role and dynamics of the activated protein kinase C/Rack1 regulatory system in the patterning and outgrowth of limb bud mesenchyme. With minor changes, especially in the composition of the fixative solution, the protocol should be easily adaptable for the postembedding immunogold labeling of any other antigen in tissues of embryos of diverse species. Quantification of the labeling can be achieved by using electron microscope systems capable of supporting digital image analysis. Copyright 2001 Academic Press.
Dust motions in quasi-statically charged binary asteroid systems
NASA Astrophysics Data System (ADS)
Maruskin, Jared M.; Bellerose, Julie; Wong, Macken; Mitchell, Lara; Richardson, David; Mathews, Douglas; Nguyen, Tri; Ganeshalingam, Usha; Ma, Gina
2013-03-01
In this paper, we discuss dust motion and investigate possible mass transfer of charged particles in a binary asteroid system, in which the asteroids are electrically charged due to solar radiation. The surface potential of the asteroids is assumed to be a piecewise function, with positive potential on the sunlit half and negative potential on the shadow half. We derive the nonautonomous equations of motion for charged particles and an analytic representation for their lofting conditions. Particle trajectories and temporary relative equilibria are examined in relation to their moving forbidden regions, a concept we define and discuss. Finally, we use a Monte Carlo simulation for a case study on mass transfer and loss rates between the asteroids.
Spacecraft attitude calibration/verification baseline study
NASA Technical Reports Server (NTRS)
Chen, L. C.
1981-01-01
A baseline study for a generalized spacecraft attitude calibration/verification system is presented. It can be used to define software specifications for three major functions required by a mission: the pre-launch parameter observability and data collection strategy study; the in-flight sensor calibration; and the post-calibration attitude accuracy verification. Analytical considerations are given for both single-axis and three-axis spacecrafts. The three-axis attitudes considered include the inertial-pointing attitudes, the reference-pointing attitudes, and attitudes undergoing specific maneuvers. The attitude sensors and hardware considered include the Earth horizon sensors, the plane-field Sun sensors, the coarse and fine two-axis digital Sun sensors, the three-axis magnetometers, the fixed-head star trackers, and the inertial reference gyros.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report describes two code-generating tools used to speed design and implementation of relational databases and user interfaces: CREATE-SCHEMA and BUILD-SCREEN. CREATE-SCHEMA produces the SQL commands that actually create and define the database. BUILD-SCREEN takes templates for data entry screens and generates the screen management system routine calls to display the desired screen. Both tools also generate the related FORTRAN declaration statements and precompiled SQL calls. Included with this report is the source code for a number of FORTRAN routines and functions used by the user interface. This code is broadly applicable to a number of different databases.
Linking matrices in systems with periodic boundary conditions
NASA Astrophysics Data System (ADS)
Panagiotou, Eleni; Millett, Kenneth C.
2018-06-01
We study the linking matrix, a measure of entanglement for a collection of closed or open chains in 3-space based on the Gauss linking number. Periodic boundary conditions (PBC) are often used in the simulation of physical systems of filaments. To measure entanglement of closed or open chains in systems employing PBC we use the periodic linking matrix, based on the periodic linking number, defined in Panagiotou (2015 J. Comput. Phys. 300 533–73). We study the properties of the periodic linking matrix as a function of cell size. We provide analytical results concerning the eigenvalues of the periodic linking matrix and show that some of them are independent of the cell size.
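The Gauss linking number on which the (periodic) linking matrix is built can be approximated directly from the Gauss double integral. The sketch below is our own example, not the paper's PBC machinery; it recovers Lk = ±1 for two Hopf-linked circles.

```python
import math

def linking_number(c1, d1, c2, d2, n=128):
    """Approximate Lk = (1/4*pi) * double integral of
    (c1'(s) x c2'(t)) . (c1(s) - c2(t)) / |c1(s) - c2(t)|**3
    over both parameters, by the trapezoid rule on a periodic grid."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        p1, v1 = c1(i * h), d1(i * h)
        for j in range(n):
            p2, v2 = c2(j * h), d2(j * h)
            r = (p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2])
            cx = (v1[1] * v2[2] - v1[2] * v2[1],     # tangent cross product
                  v1[2] * v2[0] - v1[0] * v2[2],
                  v1[0] * v2[1] - v1[1] * v2[0])
            dist3 = (r[0] ** 2 + r[1] ** 2 + r[2] ** 2) ** 1.5
            total += (cx[0] * r[0] + cx[1] * r[1] + cx[2] * r[2]) / dist3
    return total * h * h / (4 * math.pi)

# Hopf-linked test curves: a unit circle in the xy-plane and a unit circle
# in the xz-plane, shifted so that it threads through the first.
circle1 = lambda s: (math.cos(s), math.sin(s), 0.0)
dcircle1 = lambda s: (-math.sin(s), math.cos(s), 0.0)
circle2 = lambda t: (1.0 + math.cos(t), 0.0, math.sin(t))
dcircle2 = lambda t: (-math.sin(t), 0.0, math.cos(t))
```

A linking matrix collects such pairwise linking numbers for every pair of chains in the system; the periodic version replaces this integral with the periodic linking number over translated images.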
A New Reactive FMIPv6 Mechanism for Minimizing Packet Loss
NASA Astrophysics Data System (ADS)
Kim, Pyungsoo
This paper considers a new reactive fast handover MIPv6 (FMIPv6) mechanism to minimize the packet loss of the existing mechanism. The primary idea of the proposed reactive FMIPv6 mechanism is that the serving access router buffers packets toward the mobile node (MN) as soon as the link layer between the MN and the serving base station is disconnected. To implement the proposed mechanism, the router discovery message exchanged between the MN and the serving access router is extended. In addition, a new IEEE 802.21 Media Independent Handover Function event service message is defined. Through analytic performance evaluation and experiments, the proposed reactive FMIPv6 mechanism is shown to reduce packet loss significantly more than the existing mechanism.
Point-of-Need bioanalytics based on planar optical interferometry.
Makarona, E; Petrou, P; Kakabakos, S; Misiakos, K; Raptis, I
2016-01-01
This review provides a comprehensive presentation of the research on interferometric transducers, which have emerged as extremely promising candidates for viable, truly-marketable solutions for PoN applications due to the attested performance that has reached down to 10^-8 in terms of effective refractive index changes. The review explores the operation of the various interferometric architectures along with their design, fabrication, and analytical performance aspects. The issues of biosensor functionalization and immobilization of receptors are also addressed. In conclusion, a comparison among the architectures is attempted in order to acknowledge their current limitations and define future trends. Copyright © 2016 Elsevier Inc. All rights reserved.
Pläschke, Rachel N; Cieslik, Edna C; Müller, Veronika I; Hoffstaedter, Felix; Plachti, Anna; Varikuti, Deepthi P; Goosses, Mareike; Latz, Anne; Caspers, Svenja; Jockwitz, Christiane; Moebus, Susanne; Gruber, Oliver; Eickhoff, Claudia R; Reetz, Kathrin; Heller, Julia; Südmeyer, Martin; Mathys, Christian; Caspers, Julian; Grefkes, Christian; Kalenscher, Tobias; Langner, Robert; Eickhoff, Simon B
2017-12-01
Previous whole-brain functional connectivity studies achieved successful classifications of patients and healthy controls but only offered limited specificity as to affected brain systems. Here, we examined whether the connectivity patterns of functional systems affected in schizophrenia (SCZ), Parkinson's disease (PD), or normal aging equally translate into high classification accuracies for these conditions. We compared classification performance between pre-defined networks for each group and, for any given network, between groups. Separate support vector machine classifications of 86 SCZ patients, 80 PD patients, and 95 older adults relative to their matched healthy/young controls, respectively, were performed on functional connectivity in 12 task-based, meta-analytically defined networks using 25 replications of a nested 10-fold cross-validation scheme. Classification performance of the various networks clearly differed between conditions, as those networks that best classified one disease were usually non-informative for the other. For SCZ, but not PD, emotion-processing, empathy, and cognitive action control networks distinguished patients most accurately from controls. For PD, but not SCZ, networks subserving autobiographical or semantic memory, motor execution, and theory-of-mind cognition yielded the best classifications. In contrast, young-old classification was excellent based on all networks and outperformed both clinical classifications. Our pattern-classification approach captured associations between clinical and developmental conditions and functional network integrity with a higher level of specificity than did previous whole-brain analyses. Taken together, our results support resting-state connectivity as a marker of functional dysregulation in specific networks known to be affected by SCZ and PD, while suggesting that aging affects network integrity in a more global way. Hum Brain Mapp 38:5845-5858, 2017. © 2017 Wiley Periodicals, Inc. 
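The classification scheme above (linear SVM with nested cross-validation) can be sketched with scikit-learn. The data below are synthetic stand-ins for connectivity features, and the design is reduced to a single 10-fold outer loop without the study's 25 replications:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 66))   # 60 subjects x 66 "connectivity" features (synthetic)
y = np.repeat([0, 1], 30)       # 0 = controls, 1 = patients
X[y == 1] += 0.8                # inject an artificial group difference

# Inner loop tunes C, outer loop estimates accuracy: nested cross-validation.
inner = GridSearchCV(SVC(kernel="linear"), {"C": [0.1, 1, 10]},
                     cv=StratifiedKFold(5))
outer_scores = cross_val_score(inner, X, y, cv=StratifiedKFold(10))
print(round(float(outer_scores.mean()), 2))
```

Because hyperparameter selection happens strictly inside each outer training fold, the outer score is an unbiased accuracy estimate, which is the point of nesting.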
The tensor distribution function.
Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M
2009-01-01
Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
NASA Astrophysics Data System (ADS)
Frenken, Koen
2001-06-01
The biological evolution of complex organisms, in which the functioning of genes is interdependent, has been analyzed as "hill-climbing" on NK fitness landscapes through random mutation and natural selection. In evolutionary economics, NK fitness landscapes have been used to simulate the evolution of complex technological systems containing elements that are interdependent in their functioning. In these models, economic agents search randomly for new technological designs by trial-and-error and risk ending up in sub-optimal solutions due to interdependencies between the elements of a complex system. These models of random search are legitimate for reasons of modeling simplicity, but remain limited because they ignore the fact that agents can apply heuristics. A specific heuristic is one that sequentially optimizes functions according to their ranking by users of the system. To model this heuristic, a generalized NK-model is developed in which core elements that influence many functions can be distinguished from peripheral elements that affect few functions. The concept of paradigmatic search can then be analytically defined as search that leaves core elements intact while concentrating on improving functions by mutation of peripheral elements.
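The flavor of search on an NK landscape can be sketched as follows. This is a generic NK hill-climber with invented parameters, not Frenken's generalized core/periphery model:

```python
import itertools
import random

random.seed(1)
N, K = 8, 2
# Each element i contributes a random fitness value that depends on its own
# state and the states of its K cyclic neighbors (the interdependencies).
table = {(i, bits): random.random()
         for i in range(N)
         for bits in itertools.product((0, 1), repeat=K + 1)}

def fitness(s):
    return sum(table[(i, tuple(s[(i + j) % N] for j in range(K + 1)))]
               for i in range(N)) / N

def hill_climb(s, steps=500):
    # Random one-bit mutation, accepted only if fitness improves; this can
    # get trapped in local optima, which is the phenomenon discussed above.
    for _ in range(steps):
        i = random.randrange(N)
        t = list(s)
        t[i] ^= 1
        if fitness(t) > fitness(s):
            s = t
    return s

start = [random.randrange(2) for _ in range(N)]
local_opt = hill_climb(start)
print(fitness(local_opt) >= fitness(start))
```

Larger K makes the landscape more rugged, so the chance of stopping at a sub-optimal peak rises with the degree of interdependence.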
Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media
Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.
2009-01-01
Green's functions for radar waves propagating in heterogeneous 2.5D media can be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties may vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Green's functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions can be used for forward modeling and full waveform inversion. © 2009 Society of Exploration Geophysicists. All rights reserved.
Stochastic inference with spiking neurons in the high-conductance state
NASA Astrophysics Data System (ADS)
Petrovici, Mihai A.; Bill, Johannes; Bytschok, Ilja; Schemmel, Johannes; Meier, Karlheinz
2016-10-01
The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.
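The network-level claim, correct sampling from a well-defined target distribution, can be illustrated abstractly: binary units updated with a logistic activation perform Gibbs sampling from a Boltzmann distribution. This toy two-unit sampler stands in for the spiking dynamics; the weights and biases are invented:

```python
import itertools
import math
import random

random.seed(0)
W = [[0.0, 0.5], [0.5, 0.0]]   # symmetric coupling, zero diagonal
b = [-0.2, 0.3]                # biases

def energy(z):
    return (-sum(b[i] * z[i] for i in range(2))
            - 0.5 * sum(W[i][j] * z[i] * z[j]
                        for i in range(2) for j in range(2)))

# Exact target distribution p(z) ~ exp(-E(z)) over the four binary states.
states = list(itertools.product((0, 1), repeat=2))
Z = sum(math.exp(-energy(s)) for s in states)
target = {s: math.exp(-energy(s)) / Z for s in states}

# Gibbs sampling: each unit turns on with logistic probability of its input.
z = [0, 0]
n_samples = 200_000
counts = {s: 0 for s in states}
for t in range(n_samples):
    i = t % 2
    u = b[i] + sum(W[i][j] * z[j] for j in range(2))
    z[i] = 1 if random.random() < 1.0 / (1.0 + math.exp(-u)) else 0
    counts[tuple(z)] += 1
est = {s: c / n_samples for s, c in counts.items()}
```

The empirical state frequencies converge to the Boltzmann probabilities, which is the abstract counterpart of the firing statistics discussed in the abstract.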
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
Methods and limitations in radar target imagery
NASA Astrophysics Data System (ADS)
Bertrand, P.
An analytical examination of the reflectivity of radar targets is presented for the two-dimensional case of flat targets. A complex backscattering coefficient is defined for the amplitude and phase of the received field in comparison with the emitted field. The coefficient is dependent on the frequency of the emitted signal and the orientation of the target with respect to the transmitter. The target reflection is modeled in terms of the density of illumined, colored points independent from one another. The target therefore is represented as an infinite family of densities indexed by the observational angle. Attention is given to the reflectivity parameters and their distribution function, and to the conjunct distribution function for the color, position, and the directivity of bright points. It is shown that a fundamental ambiguity exists between the localization of the illumined points and the determination of their directivity and color.
Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko
2016-06-01
Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypotheses generation. We propose a workflow that merges network construction with gene expression data mining focusing on regulation processes in the context of transcription factor driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as a web application software (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in analysis of actual expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
Time-delayed autosynchronous swarm control.
Biggs, James D; Bennet, Derek J; Dadzie, S Kokou
2012-01-01
In this paper a general Morse potential model of self-propelling particles is considered in the presence of a time-delayed term and a spring potential. It is shown that the emergent swarm behavior is dependent on the delay term and weights of the time-delayed function, which can be set to induce a stationary swarm, a rotating swarm with uniform translation, and a rotating swarm with a stationary center of mass. An analysis of the mean field equations shows that without a spring potential the motion of the center of mass is determined explicitly by a multivalued function. For a nonzero spring potential the swarm converges to a vortex formation about a stationary center of mass, except at discrete bifurcation points where the center of mass will periodically trace an ellipse. The analytical results defining the behavior of the center of mass are shown to correspond with the numerical swarm simulations.
Lindsay, Stuart; He, Jin; Sankey, Otto; Hapala, Prokop; Jelinek, Pavel; Zhang, Peiming; Chang, Shuai; Huang, Shuo
2010-01-01
Single molecules in a tunnel junction can now be interrogated reliably using chemically-functionalized electrodes. Monitoring stochastic bonding fluctuations between a ligand bound to one electrode and its target bound to a second electrode (“tethered molecule-pair” configuration) gives insight into the nature of the intermolecular bonding at a single molecule-pair level, and defines the requirements for reproducible tunneling data. Simulations show that there is an instability in the tunnel gap at large currents, and this results in a multiplicity of contacts with a corresponding spread in the measured currents. At small currents (i.e. large gaps) the gap is stable, and functionalizing a pair of electrodes with recognition reagents (the “free analyte” configuration) can generate a distinct tunneling signal when an analyte molecule is trapped in the gap. This opens up a new interface between chemistry and electronics with immediate implications for rapid sequencing of single DNA molecules. PMID:20522930
NASA Technical Reports Server (NTRS)
Scudder, J. D.; Olbert, S.
1983-01-01
The breakdown of the classical (CBES) field-aligned transport relations for electrons in an inhomogeneous, fully ionized plasma is addressed as a mathematical issue of radius of convergence; the finite Knudsen number conditions under which CBES results are accurate are presented; and a global-local (GL) description of Coulomb-moderated conduction, more nearly appropriate for astrophysical plasmas, is defined. This paper shows the relationship to, and the points of departure of the present work from, the CBES approach. The CBES heat law in current use is shown to be an especially restrictive special case of the new, more general GL result. A preliminary evaluation of the dimensionless heat function, using analytic formulas, shows that dimensionless heat function profiles versus density of the type necessary for a conduction-supported high-speed solar wind appear possible.
Landau damping of Langmuir twisted waves with kappa distributed electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arshad, Kashif, E-mail: kashif.arshad.butt@gmail.com; Aman-ur-Rehman; Mahmood, Shahzad
2015-11-15
The kinetic theory of Landau damping of Langmuir twisted modes is investigated in the presence of orbital angular momentum of the helical (twisted) electric field in plasmas with kappa distributed electrons. The perturbed distribution function and helical electric field are decomposed in terms of Laguerre-Gaussian mode functions defined in cylindrical geometry. The Vlasov-Poisson equation is obtained and solved analytically to obtain the weak damping rates of the Langmuir twisted waves in a nonthermal plasma. The strong damping of the Langmuir twisted waves at wavelengths approaching the Debye length is also obtained by using an exact numerical method and is illustrated graphically. The damping rates of the planar Langmuir waves are found to be larger than those of the twisted Langmuir waves, which is opposite to the behavior depicted in Fig. 3 of J. T. Mendonça [Phys. Plasmas 19, 112113 (2012)].
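For orientation, the isotropic kappa (generalized Lorentzian) distribution underlying such nonthermal-electron calculations is commonly written, in one standard normalization with effective thermal speed θ (this particular form is quoted for reference, not taken from the paper), as:

```latex
f_\kappa(\mathbf{v}) \;=\; \frac{n_0}{\left(\pi \kappa \theta^2\right)^{3/2}}
\,\frac{\Gamma(\kappa+1)}{\Gamma\!\left(\kappa-\tfrac{1}{2}\right)}
\left(1+\frac{v^2}{\kappa\theta^2}\right)^{-(\kappa+1)}
```

In the limit κ → ∞ this recovers the Maxwellian, while small κ enhances the suprathermal tail that modifies the Landau damping rates.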
Chimera states in two-dimensional networks of locally coupled oscillators
NASA Astrophysics Data System (ADS)
Kundu, Srilena; Majhi, Soumen; Bera, Bidesh K.; Ghosh, Dibakar; Lakshmanan, M.
2018-02-01
A chimera state is defined as a mixed type of collective state in which synchronized and desynchronized subpopulations of a network of coupled oscillators coexist, and the appearance of such anomalous behavior has a strong connection to diverse neuronal developments. Most previous studies on chimera states have not considered two-dimensional ensembles of coupled oscillators with the nonlinear coupling functions found in neuronal systems, although such ensembles are more realistic from a neurobiological point of view. In this paper, we report the emergence and existence of chimera states in locally coupled two-dimensional networks of identical oscillators in which each node interacts through a nonlinear coupling function. This is in contrast with the existence of chimera states in two-dimensional nonlocally coupled oscillators with a rectangular kernel in the coupling function. We find that the presence of nonlinearity in the coupling function plays a key role in producing chimera states in two-dimensional locally coupled oscillators. For a two-dimensional network of coupled Stuart-Landau oscillators, we verify explicitly, using the Ott-Antonsen approach, that the analytical findings match the numerical results very well. Next, we consider another important type of nonlinear coupling function that exists in neuronal systems, namely the chemical synaptic function, through which nearest-neighbor (locally coupled) neurons interact with each other. It is shown that such a synaptic interaction function promotes the emergence of chimera states in two-dimensional lattices of locally coupled neuronal oscillators. In numerical simulations, we consider two paradigmatic neuronal oscillators that exhibit bursting dynamics, namely the Hindmarsh-Rose neuron model and the Rulkov map, for each node.
By associating various spatiotemporal behaviors and snapshots at particular times, we study the chimera states in detail over a large range of coupling parameters. The existence of chimera states is confirmed by the instantaneous angular frequency, the order parameter, and the strength of incoherence.
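Of the diagnostics listed above, the order parameter is the simplest to sketch: the Kuramoto order parameter r = |mean(exp(i*theta))| is near 1 for a synchronized patch and near 0 for a desynchronized one. The phases below are synthetic, not generated by the paper's models:

```python
import cmath
import math
import random

random.seed(2)
# A "coherent" patch (nearly equal phases) and an "incoherent" patch
# (phases uniform on the circle), mimicking the two halves of a chimera.
coherent = [0.1 * random.random() for _ in range(100)]
incoherent = [2 * math.pi * random.random() for _ in range(100)]

def order_parameter(thetas):
    # r = |<exp(i*theta)>|: 1 for identical phases, ~0 for uniform phases.
    return abs(sum(cmath.exp(1j * t) for t in thetas) / len(thetas))

print(order_parameter(coherent) > 0.9, order_parameter(incoherent) < 0.3)
```

In a chimera-state simulation one would evaluate r locally (per neighborhood) so that the coexisting coherent and incoherent regions show up as spatial plateaus of high and low r.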
Singular perturbations with boundary conditions and the Casimir effect in the half space
NASA Astrophysics Data System (ADS)
Albeverio, S.; Cognola, G.; Spreafico, M.; Zerbini, S.
2010-06-01
We study the self-adjoint extensions of a class of nonmaximal multiplication operators with boundary conditions. We show that these extensions correspond to singular rank 1 perturbations (in the sense of Albeverio and Kurasov [Singular Perturbations of Differential Operators (Cambridge University Press, Cambridge, 2000)]) of the Laplace operator, namely, the formal Laplacian with a singular delta potential, on the half space. This construction is the appropriate setting to describe the Casimir effect related to a massless scalar field in the flat space-time with an infinite conducting plate and in the presence of a pointlike "impurity." We use the relative zeta determinant (as defined in the works of Müller ["Relative zeta functions, relative determinants and scattering theory," Commun. Math. Phys. 192, 309 (1998)] and Spreafico and Zerbini ["Finite temperature quantum field theory on noncompact domains and application to delta interactions," Rep. Math. Phys. 63, 163 (2009)]) in order to regularize the partition function of this model. We study the analytic extension of the associated relative zeta function, and we present explicit results for the partition function and for the Casimir force.
Voelz, David G; Roggemann, Michael C
2009-11-10
Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
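As a minimal illustration of the transfer-function (angular spectrum) approach discussed above, a single Fresnel propagation step can be sketched as follows. The grid parameters are invented for the example and are not tuned to the ideal-sampling condition:

```python
import numpy as np

N, dx, wl, z = 256, 10e-6, 1e-6, 0.01   # samples, pitch [m], wavelength [m], distance [m]
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
u0 = ((np.abs(X) < 20 * dx) & (np.abs(Y) < 20 * dx)).astype(complex)  # square aperture

# Fresnel transfer function H(fx, fy) applied in the spatial-frequency domain.
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))
u1 = np.fft.ifft2(np.fft.fft2(u0) * H)

# H is all-pass (|H| = 1), so total field energy is conserved exactly.
print(np.allclose(np.sum(np.abs(u0)**2), np.sum(np.abs(u1)**2)))  # -> True
```

Whether this sampling is ideal, over-, or undersampled depends on how dx compares with sqrt(wl*z/N); checking that relation before trusting the output is precisely the point the abstract makes.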
Do, Thanh Nhut; Gelin, Maxim F; Tan, Howe-Siang
2017-10-14
We derive general expressions that incorporate finite pulse envelope effects into a coherent two-dimensional optical spectroscopy (2DOS) technique. These expressions are simpler and less computationally intensive than the conventional triple integral calculations needed to simulate 2DOS spectra. The simplified expressions involving multiplications of arbitrary pulse spectra with 2D spectral response function are shown to be exactly equal to the conventional triple integral calculations of 2DOS spectra if the 2D spectral response functions do not vary with population time. With minor modifications, they are also accurate for 2D spectral response functions with quantum beats and exponential decay during population time. These conditions cover a broad range of experimental 2DOS spectra. For certain analytically defined pulse spectra, we also derived expressions of 2D spectra for arbitrary population time dependent 2DOS spectral response functions. Having simpler and more efficient methods to calculate experimentally relevant 2DOS spectra with finite pulse effect considered will be important in the simulation and understanding of the complex systems routinely being studied by using 2DOS.
Fluctuating observation time ensembles in the thermodynamics of trajectories
NASA Astrophysics Data System (ADS)
Budini, Adrián A.; Turner, Robert M.; Garrahan, Juan P.
2014-03-01
The dynamics of stochastic systems, both classical and quantum, can be studied by analysing the statistical properties of dynamical trajectories. The properties of ensembles of such trajectories for long, but fixed, times are described by large-deviation (LD) rate functions. These LD functions play the role of dynamical free energies: they are cumulant generating functions for time-integrated observables, and their analytic structure encodes dynamical phase behaviour. This ‘thermodynamics of trajectories’ approach is to trajectories and dynamics what the equilibrium ensemble method of statistical mechanics is to configurations and statics. Here we show that, just like in the static case, there are a variety of alternative ensembles of trajectories, each defined by their global constraints, with that of trajectories of fixed total time being just one of these. We show how the LD functions that describe an ensemble of trajectories where some time-extensive quantity is constant (and large) but where total observation time fluctuates can be mapped to those of the fixed-time ensemble. We discuss how the correspondence between generalized ensembles can be exploited in path sampling schemes for generating rare dynamical trajectories.
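The idea that LD functions act as cumulant generating functions for time-integrated observables can be checked in the simplest possible setting, an i.i.d. coin-flip "trajectory". This toy is invented for illustration and is not one of the paper's models:

```python
import math
import random

random.seed(3)
p, s, n, trials = 0.3, 0.5, 50, 20000

# K = time-integrated observable (number of successes in n steps).
# The scaled CGF (1/n) * log E[exp(s*K)] equals log(p*e^s + 1 - p) exactly
# for i.i.d. steps; we estimate the expectation by Monte Carlo.
vals = []
for _ in range(trials):
    K = sum(random.random() < p for _ in range(n))
    vals.append(math.exp(s * K))
est = math.log(sum(vals) / trials) / n
exact = math.log(p * math.exp(s) + 1 - p)
print(abs(est - exact) < 0.02)
```

For genuinely dynamical (correlated) trajectories the scaled CGF is no longer a single-step formula, and its non-analyticities encode the dynamical phase behavior the abstract refers to.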
Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases
NASA Astrophysics Data System (ADS)
Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre
2011-12-01
Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) that are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable only by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute, and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and against the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.
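A hedged sketch of the underlying blending function: the basic Euler Beta-function spline on [0, 1] is the regularized incomplete Beta integral I_t(a, b), evaluated here by midpoint quadrature. The choice a = b = 2, which yields the smoothstep polynomial 3t^2 - 2t^3, is ours for illustration:

```python
import math

def beta_blend(t, a=2, b=2, n=10000):
    # Regularized incomplete Beta integral I_t(a, b) via midpoint quadrature.
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    xs = ((k + 0.5) * t / n for k in range(n))
    return sum(x**(a - 1) * (1 - x)**(b - 1) for x in xs) * (t / n) / norm

print(round(beta_blend(0.5), 4))  # 0.5 by symmetry when a = b
```

For integer a and b the integral is an explicit polynomial in t, which is exactly the "explicitly computable piecewise polynomial" property the abstract attributes to BFBS.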
Dynamic remapping of parallel computations with varying resource demands
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Saltz, J. H.
1986-01-01
A large class of computational problems is characterized by frequent synchronization and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that, as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
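The decision statistic can be illustrated with a deterministic toy cost model. The remapping cost C and the linear per-step degradation d(k) below are invented; the paper's models are stochastic:

```python
C = 50.0                 # one-time cost of a remapping (invented units)

def d(k):
    # Degradation per step grows with the number of steps since last remap.
    return 0.5 * k

def W(n):
    # Average cost per step over an interval of n steps, including the
    # remapping cost paid once per interval.
    return (C + sum(d(k) for k in range(1, n + 1))) / n

# W(n) = C/n + 0.25*(n+1) has a single minimum near n = sqrt(2*C/0.5).
best = min(range(1, 200), key=W)
print(best)  # -> 14
```

The unique minimum of W(n) is the optimal fixed remapping interval; the policy in the abstract remaps when its running estimate of W(n) bottoms out.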
ERIC Educational Resources Information Center
Follette, William C.; Bonow, Jordan T.
2009-01-01
Whether explicitly acknowledged or not, behavior-analytic principles are at the heart of most, if not all, empirically supported therapies. However, the change process in psychotherapy is only now being rigorously studied. Functional analytic psychotherapy (FAP; Kohlenberg & Tsai, 1991; Tsai et al., 2009) explicitly identifies behavioral-change…
Executive Function and Reading Comprehension: A Meta-Analytic Review
ERIC Educational Resources Information Center
Follmer, D. Jake
2018-01-01
This article presents a meta-analytic review of the relation between executive function and reading comprehension. Results (N = 6,673) supported a moderate positive association between executive function and reading comprehension (r = 0.36). Moderator analyses suggested that correlations between executive function and reading comprehension did not…
Analysis of gene network robustness based on saturated fixed point attractors
2014-01-01
The analysis of gene network robustness to noise and mutation is important for fundamental and practical reasons. Robustness refers to the stability of the equilibrium expression state of a gene network to variations of the initial expression state and network topology. Numerical simulation of these variations is commonly used for the assessment of robustness. Since there exists a great number of possible gene network topologies and initial states, even millions of simulations may be still too small to give reliable results. When the initial and equilibrium expression states are restricted to being saturated (i.e., their elements can only take values 1 or −1 corresponding to maximum activation and maximum repression of genes), an analytical gene network robustness assessment is possible. We present this analytical treatment based on determination of the saturated fixed point attractors for sigmoidal function models. The analysis can determine (a) for a given network, which and how many saturated equilibrium states exist and which and how many saturated initial states converge to each of these saturated equilibrium states and (b) for a given saturated equilibrium state or a given pair of saturated equilibrium and initial states, which and how many gene networks, referred to as viable, share this saturated equilibrium state or the pair of saturated equilibrium and initial states. We also show that the viable networks sharing a given saturated equilibrium state must follow certain patterns. These capabilities of the analytical treatment make it possible to properly define and accurately determine robustness to noise and mutation for gene networks. Previous network research conclusions drawn from performing millions of simulations follow directly from the results of our analytical treatment. Furthermore, the analytical results provide criteria for the identification of model validity and suggest modified models of gene network dynamics. 
The yeast cell-cycle network is used as an illustration of the practical application of this analytical treatment. PMID:24650364
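The saturated fixed-point test can be made concrete in the steep-sigmoid limit, where a state s with entries +/-1 is an equilibrium iff sign(W s) = s. The 3-gene weight matrix below is invented for illustration:

```python
import itertools

import numpy as np

# Invented 3-gene interaction matrix: genes 1 and 2 activate each other,
# gene 3 is weakly co-activated by both.
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# Enumerate all 2^3 saturated states and keep those with sign(W s) = s.
fixed = [s for s in itertools.product((-1, 1), repeat=3)
         if np.array_equal(np.sign(W @ np.array(s)), np.array(s))]
print(fixed)  # -> [(-1, -1, -1), (1, 1, 1)]
```

For this mutually activating motif, exactly two saturated equilibria survive (all genes on, all genes off); the analytical treatment in the abstract answers the same "which and how many" question without enumeration, which matters when 2^n states make brute force infeasible.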
NASA Technical Reports Server (NTRS)
Schaefer, J. W.; Tong, H.; Clark, K. J.; Suchsland, K. E.; Neuner, G. J.
1975-01-01
A detailed experimental and analytical evaluation was performed to define the response of TD nickel chromium alloy (20 percent chromium) and coated columbium (R512E on CB-752 and VH-109 on WC129Y) to shuttle orbiter reentry heating. Flight conditions important to the response of these thermal protection system (TPS) materials were calculated, and test conditions appropriate to simulation of these flight conditions in flowing air ground test facilities were defined. The response characteristics of these metallics were then evaluated for the flight and representative ground test conditions by analytical techniques employing appropriate thermochemical and thermal response computer codes and by experimental techniques employing an arc heater flowing air test facility and flat face stagnation point and wedge test models. These results were analyzed to define the ground test requirements to obtain valid TPS response characteristics for application to flight. For both material types in the range of conditions appropriate to the shuttle application, the surface thermochemical response resulted in a small rate of change of mass and a negligible energy contribution. The thermal response in terms of surface temperature was controlled by the net heat flux to the surface; this net flux was influenced significantly by the surface catalycity and surface emissivity. The surface catalycity must be accounted for in defining simulation test conditions so that proper heat flux levels to, and therefore surface temperatures of, the test samples are achieved.
Analytical model for the radio-frequency sheath
NASA Astrophysics Data System (ADS)
Czarnetzki, Uwe
2013-12-01
A simple analytical model for the planar radio-frequency (rf) sheath in capacitive discharges is developed that is based on the assumptions of a step profile for the electron front, charge exchange collisions with constant cross sections, negligible ionization within the sheath, and negligible ion dynamics. The continuity, momentum conservation, and Poisson equations are combined in a single integro-differential equation for the square of the ion drift velocity, the so called sheath equation. Starting from the kinetic Boltzmann equation, special attention is paid to the derivation and the validity of the approximate fluid equation for momentum balance. The integrals in the sheath equation appear in the screening function which considers the relative contribution of the temporal mean of the electron density to the space charge in the sheath. It is shown that the screening function is quite insensitive to variations of the effective sheath parameters. The two parameters defining the solution are the ratios of the maximum sheath extension to the ion mean free path and the Debye length, respectively. A simple general analytic expression for the screening function is introduced. By means of this expression approximate analytical solutions are obtained for the collisionless as well as the highly collisional case that compare well with the exact numerical solution. A simple transition formula allows application to all degrees of collisionality. In addition, the solutions are used to calculate all static and dynamic quantities of the sheath, e.g., the ion density, fields, and currents. Further, the rf Child-Langmuir laws for the collisionless as well as the collisional case are derived. An essential part of the model is the a priori knowledge of the wave form of the sheath voltage. 
This wave form is derived on the basis of a cubic charge-voltage relation for individual sheaths, considering both sheaths and the self-consistent self-bias in a discharge with arbitrary symmetry. The externally applied rf voltage is assumed to be sinusoidal, although the model can be extended to arbitrary wave forms, e.g., for dual-frequency discharges. The model calculates explicitly the cubic correction parameter in the charge-voltage relation for the case of highly asymmetric discharges. It is shown that the cubic correction is generally moderate but more pronounced in the collisionless case. The analytical results are compared to experimental data from the literature obtained by laser electric field measurements of the mean and dynamic fields in the capacitive sheath for various gases and pressures. Very good agreement is found throughout.
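For orientation, the textbook collisionless DC Child-Langmuir law, whose scaling the rf laws derived in the paper inherit with modified numerical coefficients, relates the ion current density J, the sheath voltage V, and the sheath width s (M is the ion mass):

```latex
J \;=\; \frac{4}{9}\,\varepsilon_0\,\sqrt{\frac{2e}{M}}\;\frac{V^{3/2}}{s^{2}}
```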
IR spectroscopic studies in microchannel structures
NASA Astrophysics Data System (ADS)
Guber, A. E.; Bier, W.
1998-06-01
Microreaction systems are among the devices that can be produced with the various microengineering methods available. These microreactors consist of microchannels in which chemical reactions take place under defined conditions. For optimum process control, continuous online analytics is envisaged in the microchannels. For this purpose, a special analytical module has been developed. It may be applied for IR spectroscopic studies at any point of the microchannel.
Multipass optical device and process for gas and analyte determination
Bernacki, Bruce E [Kennewick, WA]
2011-01-25
A torus multipass optical device and method are described that provide for trace level determination of gases and gas-phase analytes. The torus device includes an optical cavity defined by at least one ring mirror. The mirror delivers optical power in at least a radial and axial direction and propagates light in a multipass optical path of a predefined path length.
USDA-ARS?s Scientific Manuscript database
For any analytical system the population mean (mu) number of entities (e.g., cells or molecules) per tested volume, surface area, or mass also defines the population standard deviation (sigma = square root of mu ). For a preponderance of analytical methods, sigma is very small relative to mu due to...
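The Poisson relationship stated here (sigma = sqrt(mu)) can be checked with a minimal stdlib-only simulation; the sampler (Knuth's multiplicative algorithm) and the numbers below are purely illustrative:

```python
import math
import random
import statistics

def poisson_sample(mu, rng):
    """Draw one Poisson(mu) count via Knuth's multiplicative algorithm."""
    limit = math.exp(-mu)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

rng = random.Random(0)
mu = 9.0                                   # mean count per tested volume
counts = [poisson_sample(mu, rng) for _ in range(50_000)]

sigma_theory = math.sqrt(mu)               # sigma = sqrt(mu) for Poisson counts
sigma_empirical = statistics.pstdev(counts)
```

With 50,000 draws the empirical standard deviation lands close to the theoretical sqrt(9) = 3.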
ERIC Educational Resources Information Center
Kilpatrick, Sue; Field, John; Falk, Ian
The possibility of using the concept of social capital as an analytical tool for exploring lifelong learning and community development was examined. The following were among the topics considered: (1) differences between definitions of the concept of social capital that are based on collective benefit and those that define social capital as a…
Anomaly formulas for the complex-valued analytic torsion on compact bordisms
Maldonado Molina, Osmar
2013-01-01
We extend the complex-valued analytic torsion, introduced by Burghelea and Haller on closed manifolds, to compact Riemannian bordisms. We do so by considering a flat complex vector bundle over a compact Riemannian manifold, endowed with a fiberwise nondegenerate symmetric bilinear form. The Riemannian metric and the bilinear form are used to define non-selfadjoint Laplacians acting on vector-valued smooth forms under absolute and relative boundary conditions. In order to define the complex-valued analytic torsion in this situation, we study spectral properties of these generalized Laplacians. Then, as main results, we obtain so-called anomaly formulas for this torsion. Our reasoning takes into account that the coefficients in the heat trace asymptotic expansion associated to the boundary value problem under consideration are locally computable. The anomaly formulas for the complex-valued Ray–Singer torsion are derived first by using the corresponding ones for the Ray–Singer metric, obtained by Brüning and Ma on manifolds with boundary, and then an argument of analytic continuation. In odd dimensions, our anomaly formulas are in accord with the corresponding results of Su, without requiring the variations of the Riemannian metric and bilinear structures to be supported in the interior of the manifold. PMID:27087744
Limits of linearity and detection for some drugs of abuse.
Needleman, S B; Romberg, R W
1990-01-01
The limits of linearity (LOL) and detection (LOD) are important factors in establishing the reliability of an analytical procedure for accurately assaying drug concentrations in urine specimens. Multiple analyses of analyte over an extended range of concentrations provide a measure of the ability of the analytical procedure to correctly identify known quantities of drug in a biofluid matrix. Each of the seven drugs of abuse gives linear analytical responses from concentrations at or near their LOD to concentrations several-fold higher than those generally encountered in the drug screening laboratory. The upper LOL exceeds the Department of the Navy (DON) cutoff values by factors of approximately 2 to 160. The LOD varies from 0.4 to 5.0% of the DON cutoff value for each drug. The limit of quantitation (LOQ) is calculated as the LOD + 7 SD. The range for LOL is greater for drugs analyzed with deuterated internal standards than for those using conventional internal standards. For THC acid, cocaine, PCP, and morphine, LOLs are 8- to 160-fold greater than the defined cutoff concentrations. For the other drugs, the LOLs are only 2- to 4-fold greater than the defined cutoff concentrations.
End-point detection in potentiometric titration by continuous wavelet transform.
Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W
2009-10-15
The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or the type of analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. However, in the case of noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analysis, especially when random noise interferes with the analytical signal.
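The paper's dedicated mother wavelet is not reproduced in the abstract; as a hedged sketch of the general idea, the snippet below locates the end-point of a simulated sigmoidal titration curve using a generic first-derivative-of-Gaussian wavelet at a single scale (all names, scales and numbers are illustrative, not the authors' algorithm):

```python
import math

# Simulated potentiometric titration curve: a sigmoid step at v = 12.5 mL
# stands in for the measured EMF-vs-titrant-volume data.
volumes = [i * 0.1 for i in range(251)]               # 0.0 .. 25.0 mL
emf = [50 + 400 / (1 + math.exp(-(v - 12.5) / 0.4)) for v in volumes]

def dgauss(k, scale):
    """First-derivative-of-Gaussian wavelet (odd, zero-mean)."""
    x = k / scale
    return -x * math.exp(-x * x / 2)

def cwt_row(signal, scale):
    """One scale of a continuous wavelet transform by direct correlation."""
    half = int(4 * scale)
    row = []
    for i in range(len(signal)):
        acc = 0.0
        for k in range(-half, half + 1):
            j = min(max(i + k, 0), len(signal) - 1)   # clamp at the edges
            acc += signal[j] * dgauss(k, scale)
        row.append(acc)
    return row

row = cwt_row(emf, scale=5)
# The end-point is where the wavelet response is strongest (the inflection).
end_point = volumes[max(range(len(row)), key=lambda i: abs(row[i]))]
```

An odd, zero-mean wavelet responds maximally where the curve is steepest, which is why spikes and baseline offsets do little harm to this kind of detector.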
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair and mathematically sound platform for ranking them.
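The Newton-Keller building block referenced above, a player's probability of winning a game given an iid point-win probability p, is easy to reproduce; a minimal sketch comparing the closed form with a Monte Carlo estimate (p = 0.6 is an arbitrary illustrative value):

```python
import random

def sim_game(p, rng):
    """Simulate one tennis game with iid point-win probability p."""
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False

def game_win_prob(p):
    """Closed-form game-win probability for iid points (Newton-Keller style)."""
    q = 1 - p
    # win to love/15/30, plus reaching deuce (20 p^3 q^3) and winning from it
    return p**4 * (1 + 4*q + 10*q*q) + 20 * p**3 * q**3 * p*p / (1 - 2*p*q)

rng = random.Random(1)
p = 0.6
mc = sum(sim_game(p, rng) for _ in range(200_000)) / 200_000
exact = game_win_prob(p)
```

For p = 0.6 the closed form gives about 0.736, illustrating how a modest point-level edge is amplified at the game level; the simulation reproduces it to Monte Carlo accuracy.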
BETA (Bitter Electromagnet Testing Apparatus) Design and Testing
NASA Astrophysics Data System (ADS)
Bates, Evan; Birmingham, William; Rivera, William; Romero-Talamas, Carlos
2016-10-01
BETA is a 1 T water-cooled Bitter-type magnet system that has been designed and constructed at the Dusty Plasma Laboratory of the University of Maryland, Baltimore County to serve as a prototype of a scaled 10 T version. Currently the system is undergoing magnetic, thermal and mechanical testing to ensure safe operating conditions and to validate analytical design optimizations. These magnets will function as experimental tools for future dusty plasma and collaborative experiments. An overview of the design methods used for building a custom-made Bitter magnet with user-defined experimental constraints is reviewed. The three main design methods consist of minimizing the following: ohmic power, peak conductor temperatures, and stresses induced by Lorentz forces. We will also discuss the design of BETA, which includes: the magnet core, pressure vessel, cooling system, power storage bank, high-powered switching system, diagnostics with safety cutoff feedback, and data acquisition (DAQ)/magnet control Matlab code. Furthermore, we present experimental data from diagnostics for validation of our preliminary analytical design methodologies and finite element analysis calculations. BETA will contribute to the knowledge necessary to finalize the 10 T magnet design.
Semi-active control of a sandwich beam partially filled with magnetorheological elastomer
NASA Astrophysics Data System (ADS)
Dyniewicz, Bartłomiej; Bajkowski, Jacek M.; Bajer, Czesław I.
2015-08-01
The paper deals with the semi-active control of vibrations of structural elements. Elastomer composites with ferromagnetic particles that act as magnetorheological fluids are used. The damping coefficient and the shear modulus of the elastomer increase when it is exposed to an electromagnetic field. Controlling this process in time allows us to reduce vibrations more effectively than if the elastomer is permanently exposed to a magnetic field. First, the analytical solution for the vibrations of a sandwich beam filled with an elastomer is given. Then the control problem is defined and applied to the analytical formula. The numerical solution of the minimization problem results in a periodic, perfectly rectangular control function if free vibrations are considered. Such a temporarily acting magnetic field is more efficient than a constantly acting one. The surplus reaches 20-50% or more, depending on the filling ratio of the elastomer. The resulting control was verified experimentally in the vibrations of a cantilever sandwich beam. The proposed semi-active control can be directly applied to vibrating structural elements in engineering, for example helicopter rotors, aircraft wings, pads under machines, and vehicles.
Gutknecht, Mandy; Danner, Marion; Schaarschmidt, Marthe-Lisa; Gross, Christian; Augustin, Matthias
2018-02-15
To define treatment benefit, the Patient Benefit Index contains a weighting of patient-relevant treatment goals using the Patient Needs Questionnaire, which includes a 5-point Likert scale ranging from 0 ("not important at all") to 4 ("very important"). These treatment goals have been assigned to five health dimensions. The importance of each dimension can be derived by averaging the importance ratings on the Likert scales of associated treatment goals. As the use of a Likert scale does not allow for a relative assessment of importance, the objective of this study was to estimate relative importance weights for health dimensions and associated treatment goals in patients with psoriasis by using the analytic hierarchy process and to compare these weights with the weights resulting from the Patient Needs Questionnaire. Furthermore, patients' judgments on the difficulty of the methods were investigated. Dimensions of the Patient Benefit Index and their treatment goals were mapped into a hierarchy of criteria and sub-criteria to develop the analytic hierarchy process questionnaire. Adult patients with psoriasis starting a new anti-psoriatic therapy in the outpatient clinic of the Institute for Health Services Research in Dermatology and Nursing at the University Medical Center Hamburg (Germany) were recruited and completed both methods (analytic hierarchy process, Patient Needs Questionnaire). Ratings of treatment goals on the Likert scales (Patient Needs Questionnaire) were summarized within each dimension to assess the importance of the respective health dimension/criterion. Following the analytic hierarchy process approach, consistency in judgments was assessed using a standardized measurement (consistency ratio). At the analytic hierarchy process level of criteria, 78 of 140 patients achieved the accepted consistency. 
Using the analytic hierarchy process, the dimension "improvement of physical functioning" was most important, followed by "improvement of social functioning". Concerning the Patient Needs Questionnaire results, these dimensions were ranked in second and fifth position, whereas "strengthening of confidence in the therapy and in a possible healing" was ranked most important, which was least important in the analytic hierarchy process ranking. In both methods, "improvement of psychological well-being" and "reduction of impairments due to therapy" were equally ranked in positions three and four. In contrast, at the level of sub-criteria, a predominantly similar ranking of treatment goals was observed between the analytic hierarchy process and the Patient Needs Questionnaire. From the patients' point of view, the Likert scales (Patient Needs Questionnaire) were easier to complete than the analytic hierarchy process pairwise comparisons. Patients with psoriasis assign different importance to health dimensions and associated treatment goals. In choosing a method to assess the importance of health dimensions and/or treatment goals, it needs to be considered that the resulting importance weights may differ depending on the method used. However, in this study, the observed discrepancies in the importance weights of the health dimensions were most likely caused by the different methodological approaches: assessing the importance of health dimensions through treatment goals on the one hand (Patient Needs Questionnaire) or assessing the health dimensions directly on the other (analytic hierarchy process).
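The consistency screening mentioned above (78 of 140 patients meeting the accepted threshold) follows Saaty's standard procedure; a minimal sketch with a toy, hypothetical 3x3 judgment matrix (not the study's data):

```python
# Toy pairwise-comparison matrix for three hypothetical criteria on
# Saaty's 1-9 scale; A[i][j] says how much more important i is than j.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]

def ahp_weights(A, iters=100):
    """Principal-eigenvector priority weights by power iteration."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

def consistency_ratio(A, w):
    """Saaty's CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n      # lambda_max estimate
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]                # Saaty's random indices
    return ci / ri

w = ahp_weights(A)
cr = consistency_ratio(A, w)      # judgments usually accepted if cr < 0.10
```

Respondents whose pairwise judgments yield CR >= 0.10 are typically excluded, which is how a sample can shrink from 140 to 78 consistent raters.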
NASA Astrophysics Data System (ADS)
Hu, Xian-Quan; Luo, Guang; Cui, Li-Peng; Li, Fang-Yu; Niu, Lian-Bin
2009-03-01
The analytic solution of the radial Schrödinger equation is studied by using the tight coupling condition of several positive-power and inverse-power potential functions in this article. Furthermore, the precise analytic solutions, and the conditions that determine the existence of an analytic solution, are sought for the case where the potential of the radial Schrödinger equation is V(r) = α1r8 + α2r3 + α3r2 + β3r-1 + β2r-3 + β1r-4. Generally speaking, there is only an approximate solution, not an analytic solution, for the Schrödinger equation with a superposition of several potentials. However, the conditions that determine the existence of an analytic solution have been found, and the analytic solution and its energy level structure are obtained, for the Schrödinger equation with the potential mentioned above. According to the single-valued, finite and continuous standard of the wave function in a quantum system, the authors first solve for the asymptotic solutions in the limits r → ∞ and r → 0; secondly, they match the asymptotic solutions with the series solutions in the neighborhood of the irregular singularities; then they compare the power series coefficients and deduce a series of analytic solutions of the stationary state wave function, and the corresponding energy level structure, through the tight coupling among the coefficients of the potential functions for the radial Schrödinger equation; and lastly, they discuss the solutions and draw conclusions.
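For orientation, the dominant balances behind these asymptotic solutions can be sketched WKB-style (units ℏ = 2m = 1 are assumed here, not taken from the paper): the α1r8 term controls the r → ∞ limit and the β1r-4 term controls r → 0, so the reduced radial wave function behaves as

```latex
u(r) \sim \exp\!\left(-\frac{\sqrt{\alpha_1}}{5}\,r^{5}\right) \quad (r \to \infty),
\qquad
u(r) \sim \exp\!\left(-\frac{\sqrt{\beta_1}}{r}\right) \quad (r \to 0).
```

These are exactly the boundary factors that the series solutions must be matched to near the irregular singular points.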
ERIC Educational Resources Information Center
Senko, Corwin; Dawson, Blair
2017-01-01
Achievement goal theory originally defined performance-approach goals as striving to demonstrate competence to outsiders by outperforming peers. The research, however, has operationalized the goals inconsistently, emphasizing the competence demonstration element in some cases and the peer comparison element in others. A meta-analysis by Hulleman…
Inorganic chemical analysis of environmental materials—A lecture series
Crock, J.G.; Lamothe, P.J.
2011-01-01
At the request of the faculty of the Colorado School of Mines, Golden, Colorado, the authors prepared and presented a lecture series to the students of a graduate-level advanced instrumental analysis class. The slides and text presented in this report are a compilation and condensation of this series of lectures. The purpose of this report is to present the slides and notes and to emphasize the thought processes that should be used by a scientist submitting samples for analyses in order to procure analytical data to answer a research question. First and foremost, the analytical data generated can be no better than the samples submitted. The questions to be answered must first be well defined and the appropriate samples collected from the population that will answer the question. The proper methods of analysis, including proper sample preparation and digestion techniques, must then be applied. Care must be taken to achieve the required limits of detection for the critical analytes to yield detectable analyte concentrations (above "action" levels) for the majority of the study's samples, and to address which portion of those analytes answers the research question: total or partial concentrations. To guarantee a robust analytical result that answers the research question(s), a well-defined quality assurance and quality control (QA/QC) plan must be employed. This QA/QC plan must include the collection and analysis of field and laboratory blanks, sample duplicates, and matrix-matched standard reference materials (SRMs). The proper SRMs may include in-house materials and/or a selection of widely available commercial materials. A discussion of the preparation and applicability of in-house reference materials is also presented. Only when all these analytical issues are sufficiently addressed can the research questions be answered with known certainty.
Pre-analytical and analytical aspects affecting clinical reliability of plasma glucose results.
Pasqualetti, Sara; Braga, Federica; Panteghini, Mauro
2017-07-01
The measurement of plasma glucose (PG) plays a central role in recognizing disturbances in carbohydrate metabolism, with established decision limits that are globally accepted. This requires that PG results are reliable and unequivocally valid no matter where they are obtained. To control the pre-analytical variability of PG and prevent in vitro glycolysis, the use of citrate as a rapidly effective glycolysis inhibitor has been proposed. However, the commercial availability of several tubes, with studies showing different performance, has created confusion among users. Moreover, and more importantly, studies have shown that tubes promptly inhibiting glycolysis give PG results that are significantly higher than tubes containing sodium fluoride only, which were used in the majority of studies generating the current PG cut-points, leading to a different clinical classification of subjects. From the analytical point of view, to be equivalent among different measuring systems, PG results should be traceable to a recognized higher-order reference via the implementation of an unbroken metrological hierarchy. In doing this, it is important that manufacturers of measuring systems consider the uncertainty accumulated through the different steps of the selected traceability chain. In particular, PG results should fulfil analytical performance specifications defined to fit the intended clinical application. Since PG has tight homeostatic control, its biological variability may be used to define these limits. Alternatively, given the central diagnostic role of the analyte, an outcome model showing the impact of the analytical performance of the test on the clinical classification of subjects can be used. Using these specifications, performance assessment studies employing commutable control materials with values assigned by a reference procedure have shown that the quality of PG measurements is often far from desirable and that problems are exacerbated using point-of-care devices.
Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
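The biological-variability route to performance specifications mentioned above is commonly operationalized with Fraser's formulas; a sketch with assumed round-number variation estimates (illustrative values, not figures from the paper):

```python
import math

# Illustrative within-subject (CVI) and between-subject (CVG) biological
# variation for glucose, in percent; assumed round numbers for the sketch.
cv_i, cv_g = 5.0, 7.0

# Classical biological-variation-based specifications (Fraser):
imprecision_max = 0.5 * cv_i                      # desirable analytical CV, %
bias_max = 0.25 * math.sqrt(cv_i**2 + cv_g**2)    # desirable bias, %
tea = 1.65 * imprecision_max + bias_max           # total allowable error, %
```

With these inputs, the desirable imprecision is 2.5%, the desirable bias about 2.2%, and the total allowable error about 6.3%; a method or tube system whose combined error exceeds such a budget risks shifting subjects across the fixed diagnostic cut-points.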
Nasiri, Hamid; Ebrahimi, Amrollah; Zahed, Arash; Arab, Mostafa; Samouei, Rahele
2015-05-01
Functional neurological symptom disorder commonly presents with symptoms and deficits of sensory and motor function. Therefore, it is often mistaken for a medical condition. It is well known that functional neurological symptom disorder is more often caused by psychological factors. There are three main approaches, namely analytical, cognitive and biological, to managing conversion disorder. Any of these approaches can be applied through short-term treatment programs. In this case study, a 12-year-old boy diagnosed with functional neurological symptom disorder (psychogenic myopia) was put under cognitive-analytical treatment. The outcome of this treatment modality proved successful.
Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.
Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng
2014-06-01
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.
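The abstract does not reproduce the fitted mixing equations; as a purely illustrative sketch, the function below blends the complete-mixing and no-mixing limits at a balanced cross junction through a free parameter s in [0, 1], standing in for the momentum-ratio dependence the study fits from experiments (all names and the blending form are hypothetical):

```python
def cross_outlets(c_n, c_w, q_n, q_w, s):
    """
    Tracer concentrations leaving a cross junction with inlets North/West
    and outlets South/East, for the balanced case q_south = q_north and
    q_east = q_west. s = 1 is complete mixing; s = 0 is the no-mixing
    (bulk advection) limit in which each inlet feeds the opposite outlet.
    In the study the degree of mixing depends on the flow momentum ratio;
    here s is simply a free parameter standing in for that relation.
    """
    c_mix = (c_n * q_n + c_w * q_w) / (q_n + q_w)    # complete-mixing value
    c_south = s * c_mix + (1 - s) * c_n              # N flows straight to S
    c_east = s * c_mix + (1 - s) * c_w               # W flows straight to E
    return c_east, c_south

# Example: equal flows, tracer only in the north inlet, half mixing
c_e, c_s = cross_outlets(c_n=1.0, c_w=0.0, q_n=1.0, q_w=1.0, s=0.5)
```

Whatever the value of s, the linear blend conserves tracer mass at the node, which is the property a network water-quality solver needs from any incomplete-mixing closure.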
Recent Studies in Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Garcia, Rafael Ferro
2008-01-01
Functional Analytic Psychotherapy (FAP), based on the principles of radical behaviorism, emphasizes the impact of contingencies that occur during therapeutic sessions, the therapist-client interaction context, functional equivalence between environments, natural reinforcement and shaping by the therapist. This paper reviews recent studies of FAP…
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using the results of tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
Analytic study of orbiter landing profiles
NASA Technical Reports Server (NTRS)
Walker, H. J.
1981-01-01
A broad survey of possible orbiter landing configurations was made with the specific goal of defining boundaries for the landing task. The results suggest that the center of the corridor between marginal and routine conditions represents a more or less optimal preflare condition for regular operations. The various constraints used to define the boundaries are based largely on qualitative judgements from earlier flight experience with the X-15 and lifting body research aircraft. The results should serve as useful background for expanding and validating landing simulation programs. The analytic approach offers a particular advantage in identifying trends due to the systematic variation of factors such as vehicle weight, load factor, approach speed, and aim point. Limitations, such as a constant load factor during the flare and a fixed gear deployment time interval, can be removed by increasing the flexibility of the computer program. This analytic definition of orbiter landing profiles may suggest additional studies, including more configurations or more comparisons of landing profiles within and beyond the corridor boundaries.
Prowess - A Software Model for the Ooty Wide Field Array
NASA Astrophysics Data System (ADS)
Marthi, Visweshwar Ram
2017-03-01
One of the scientific objectives of the Ooty Wide Field Array (OWFA) is to observe the redshifted H i emission from z ≈ 3.35. Although predictions spell out optimistic outcomes in reasonable integration times, these studies were based purely on analytical assumptions, without accounting for limiting systematics. A complete software model for OWFA has been developed with a view to understanding the instrument-induced systematics. This model has been implemented through a suite of programs, together called Prowess, which has been conceived with the dual role of an emulator and observatory data analysis software. The programming philosophy followed in building Prowess enables a general user to define their own set of functions and add new functionality. This paper describes a co-ordinate system suitable for OWFA in which the baselines are defined. The foregrounds are simulated from their angular power spectra. The visibilities are then computed from the foregrounds. These visibilities are then used for further processing, such as calibration and power spectrum estimation. The package allows for rich visualization features in multiple output formats in an interactive fashion, giving the user an intuitive feel for the data. Prowess has been extensively used for numerical predictions of the foregrounds for the OWFA H i experiment.
Thermostatistical description of gas mixtures from space partitions
NASA Astrophysics Data System (ADS)
Rohrmann, R. D.; Zorec, J.
2006-10-01
The new mathematical framework based on the free energy of pure classical fluids presented by Rohrmann [Physica A 347, 221 (2005)] is extended to multicomponent systems to determine thermodynamic and structural properties of chemically complex fluids. Presently, the theory focuses on D-dimensional mixtures in the low-density limit (packing factor η<0.01). The formalism combines the free-energy minimization technique with space partitions that assign an available volume v to each particle. v is related to the closeness of the nearest neighbor and provides a useful tool to evaluate the perturbations experienced by particles in a fluid. The theory shows a close relationship between statistical geometry and statistical mechanics. New, unconventional thermodynamic variables and mathematical identities are derived as a result of the space division. Thermodynamic potentials μil, the conjugate variables of the populations Nil of particles of class i with nearest neighbors of class l, are defined and their relationships with the usual chemical potentials μi are established. Systems of hard spheres are treated as illustrative examples and their thermodynamic functions are derived analytically. The low-density expressions obtained agree nicely with those of scaled-particle theory and the Percus-Yevick approximation. Several pair distribution functions are introduced and evaluated. Analytical expressions are also presented for hard spheres with attractive forces due to Kac tails and square-well potentials. Finally, we derive general chemical equilibrium conditions.
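As a familiar benchmark that any low-density expansion should reproduce (a standard result, not specific to the paper), the three-dimensional hard-sphere equation of state to second virial order is

```latex
\frac{P}{\rho k_{B} T} \;=\; 1 + B_2\,\rho + O(\rho^{2}),
\qquad
B_2 = \frac{2\pi}{3}\,\sigma^{3},
```

with σ the sphere diameter: the leading correction to ideal-gas behavior is set purely by the excluded volume.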
Definition and characterization of an extended social-affective default network.
Amft, Maren; Bzdok, Danilo; Laird, Angela R; Fox, Peter T; Schilbach, Leonhard; Eickhoff, Simon B
2015-03-01
Recent evidence suggests considerable overlap between the default mode network (DMN) and regions involved in social, affective and introspective processes. We considered these overlapping regions as the social-affective part of the DMN. In this study, we established a robust mapping of the underlying brain network formed by these regions and those strongly connected to them (the extended social-affective default network). We first seeded meta-analytic connectivity modeling and resting-state analyses in the meta-analytically defined DMN regions that showed statistical overlap with regions associated with social and affective processing. Consensus connectivity of each seed was subsequently delineated by a conjunction across both connectivity analyses. We then functionally characterized the ensuing regions and performed several cluster analyses. Among the identified regions, the amygdala/hippocampus formed a cluster associated with emotional processes and memory functions. The ventral striatum, anterior cingulum, subgenual cingulum and ventromedial prefrontal cortex formed a heterogeneous subgroup associated with motivation, reward and cognitive modulation of affect. Posterior cingulum/precuneus and dorsomedial prefrontal cortex were associated with mentalizing, self-reference and autobiographic information. The cluster formed by the temporo-parietal junction and anterior middle temporal sulcus/gyrus was associated with language and social cognition. Taken together, the current work highlights a robustly interconnected network that may be central to introspective, socio-affective, that is, self- and other-related mental processes.
NASA Technical Reports Server (NTRS)
Simonson, M. R.; Smith, E. G.; Uhl, W. R.
1974-01-01
Analytical and experimental studies were performed to define the flowfield of annular jets, with and without swirling flow. The analytical model treated configurations with variations of flow angularity, radius ratio, and swirl distribution. Swirl distributions characteristic of stator vanes and rotor blade rows, where the total pressure and swirl distributions are related, were incorporated in the mathematical model. The experimental studies included tests of eleven nozzle models, both with and without swirling exhaust flow. Flowfield surveys were obtained and used for comparison with the analytical model. This comparison of experimental and analytical studies served as the basis for the evaluation of several empirical constants required for application of the analysis to the general flow configuration. The analytical model developed during these studies is applicable to the evaluation of the flowfield and overall performance of the exhaust of statorless lift fan systems that contain various levels of exhaust swirl.
Lombardi, Giovanni; Sansoni, Veronica; Banfi, Giuseppe
2017-08-01
In the last few years, a growing number of molecules have been associated with an endocrine function of the skeletal muscle. Circulating myokine levels, in turn, have been associated with several pathophysiological conditions, including cardiovascular ones. However, data from different studies are often not completely comparable, or even discordant. This may be due, at least in part, to the whole set of situations related to the preparation of the patient prior to blood sampling, the blood sampling procedure itself, and sample processing and/or storage. This entire process constitutes the pre-analytical phase. The importance of the pre-analytical phase is often underestimated; in routine diagnostics, however, about 70% of errors occur in this phase. Moreover, errors made during the pre-analytical phase carry over into the analytical phase and affect the final output. In research, for example, when samples are collected over a long time and by different laboratories, standardized procedures for sample collection and correct sample storage are essential. In this review, we discuss the pre-analytical variables potentially affecting the measurement of myokines with cardiovascular functions.
Kinetic damping in the spectra of the spherical impedance probe
NASA Astrophysics Data System (ADS)
Oberrath, J.
2018-04-01
The impedance probe is a diagnostic device used to measure plasma parameters such as the electron density. It consists of one electrode connected to a network analyzer via a coaxial cable and is immersed into a plasma. A bias potential superposed with an alternating potential is applied to the electrode and the response of the plasma is measured. The probe's dynamical interaction with the plasma in an electrostatic, kinetic description can be modeled in an abstract notation based on functional analytic methods. These methods provide the opportunity to derive a general solution, given as the response function of the probe-plasma system, which is defined by the matrix elements of the resolvent of an appropriate dynamical operator. Based on the general solution, a residual damping for vanishing pressure can be predicted that can only be explained by kinetic effects. In this paper, an explicit response function of the spherical impedance probe is derived. To this end, the resolvent is determined by its algebraic representation based on an expansion in orthogonal basis functions. This allows one to compute an approximated response function and its corresponding spectra. These spectra show additional damping due to kinetic effects and are in good agreement with former kinetically determined spectra.
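Schematically, the resolvent structure described in the abstract can be written as follows; the notation here is assumed for illustration and is not copied from the paper:

```latex
Y(\omega) \;\propto\; \left\langle e \,\middle|\, \left(i\omega - T_V\right)^{-1} e \right\rangle ,
```

where $T_V$ stands for the dynamical operator of the probe-plasma system and $|e\rangle$ encodes the excitation applied at the electrode. Expanding the resolvent in a set of orthogonal basis functions truncates it to a finite matrix, which is what makes the approximate response function and its spectra computable.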
Quantum calculus of classical vortex images, integrable models and quantum states
NASA Astrophysics Data System (ADS)
Pashaev, Oktay K.
2016-10-01
Starting from the two-circle theorem, described in terms of q-periodic functions, we derive in the limit q→1 the strip theorem and the stream function for the N-vortex problem. For the regular N-vortex polygon we find a compact expression for the velocity of uniform rotation and show that it represents a nonlinear oscillator. We describe q-dispersive extensions of the linear and nonlinear Schrödinger equations, as well as the q-semiclassical expansions in terms of Bernoulli and Euler polynomials. Different kinds of q-analytic functions are introduced, including the pq-analytic and the golden analytic functions.
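The q→1 limit invoked here is the standard classical limit of q-calculus; for instance, for the q-number and the Jackson q-derivative:

```latex
[n]_q = \frac{q^n - 1}{q - 1} \;\xrightarrow{\;q \to 1\;}\; n ,
\qquad
(D_q f)(x) = \frac{f(qx) - f(x)}{(q-1)\,x} \;\xrightarrow{\;q \to 1\;}\; f'(x) .
```

These are textbook definitions of q-calculus, given here only to make the limiting procedure concrete; the paper's specific q-periodic constructions are not reproduced.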
Dynamic response of gold nanoparticle chemiresistors to organic analytes in aqueous solution.
Müller, Karl-Heinz; Chow, Edith; Wieczorek, Lech; Raguse, Burkhard; Cooper, James S; Hubble, Lee J
2011-10-28
We investigate the response dynamics of 1-hexanethiol-functionalized gold nanoparticle chemiresistors exposed to the analyte octane in aqueous solution. The dynamic response is studied as a function of the analyte-water flow velocity, the thickness of the gold nanoparticle film and the analyte concentration. A theoretical model for analyte-limited mass transport is used to model the analyte diffusion into the film, the partitioning of the analyte into the 1-hexanethiol capping layers and the subsequent swelling of the film. The degree of swelling is then used to calculate the increase of the electron tunnel resistance between adjacent nanoparticles, which determines the resistance change of the film. In particular, the effect of the nonlinear relationship between resistance and swelling on the dynamic response is investigated at high analyte concentration. Good agreement between experiment and the theoretical model is achieved. This journal is © the Owner Societies 2011
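The qualitative mechanism described (partitioning drives swelling, and resistance depends exponentially on swelling via inter-particle tunnelling) can be sketched with a lumped toy model. All parameter values and the single-exponential uptake are assumptions for illustration, not the paper's fitted transport model:

```python
import numpy as np

# Illustrative parameters (assumed, not fitted to the paper's data)
K_part = 500.0   # analyte/film partition coefficient
beta = 8.0       # tunnelling sensitivity of resistance to fractional swelling
tau = 3.0        # film equilibration time constant, s (lumps diffusion and flow)

def response(t, c_water, R0=1.0):
    """Fractional swelling approaches equilibrium exponentially; the resistance
    depends exponentially on swelling (tunnel-gap widening), so the resistance
    transient itself becomes strongly non-exponential at high concentration."""
    swell_eq = K_part * c_water                 # equilibrium fractional swelling (dilute limit)
    swell = swell_eq * (1.0 - np.exp(-t / tau))
    return R0 * np.exp(beta * swell)

t = np.linspace(0.0, 20.0, 201)
low = response(t, 1e-5)    # low analyte concentration: near-linear regime
high = response(t, 1e-3)   # high concentration: nonlinearity dominates
```

The exponential resistance-swelling link is the reason the shape of the transient changes with concentration even when the underlying swelling kinetics do not.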
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-09-01
The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest for many practical applications. Here, using the complex variable integration method based on the Biot-Savart law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in the HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
Zhang, Chuanbao; Guo, Wei; Huang, Hengjian; Ma, Yueyun; Zhuang, Junhua; Zhang, Jie
2013-01-01
Background Reference intervals for liver function tests are very important for the screening, diagnosis, treatment, and monitoring of liver diseases. We aim to establish common reference intervals for liver function tests specifically for the Chinese adult population. Methods A total of 3210 individuals (20–79 years) were enrolled in six representative geographical regions in China. The analytes ALT, AST, GGT, ALP, total protein, albumin and total bilirubin were measured using three analytical systems mainly used in China. The newly established reference intervals were based on the results of traceability or multiple systems, and were then validated in 21 large hospitals located nationwide and qualified by the National External Quality Assessment (EQA) scheme of China. Results We established reference intervals for the seven liver function tests for the Chinese adult population and found apparent variation of reference values across partitioning variables such as gender (ALT, GGT, total bilirubin), age (ALP, albumin) and region (total protein). More than 86% of the 21 laboratories passed the validation in all subgroups of reference intervals, and overall about 95.3% to 98.8% of the 1220 validation results fell within the range of the new reference intervals for all liver function tests. In comparison with the currently recommended reference intervals in China, the observed single-sided proportions of out-of-range reference values for most of the tests deviated significantly from the nominal 2.5%, e.g. total bilirubin (15.2%), ALP (0.2%) and albumin (0.0%). Most of the reference intervals in our study also differed markedly from those reported for other ethnic groups. Conclusion The currently recommended reference intervals are no longer applicable to the present Chinese population.
We have established common reference intervals for liver function tests that are defined specifically for the Chinese population and can be universally used among EQA-approved laboratories located all over China. PMID:24058449
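The conventional way to derive such intervals is the nonparametric central-95% approach (2.5th to 97.5th percentiles of a healthy reference sample), with a transference check against each validating laboratory's results. The sketch below illustrates that generic procedure with synthetic data; the paper's exact statistical protocol, partitioning tests, and acceptance criteria are not reproduced:

```python
import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    """Nonparametric reference interval: the central 95% of a healthy
    reference sample (the conventional approach)."""
    values = np.asarray(values, dtype=float)
    return np.percentile(values, low), np.percentile(values, high)

def validate(results, interval, max_outside=0.10):
    """Simple transference check: accept the interval if no more than
    `max_outside` of a laboratory's healthy results fall outside it."""
    lo, hi = interval
    outside = np.mean((results < lo) | (results > hi))
    return outside <= max_outside

rng = np.random.default_rng(0)
alt = rng.lognormal(mean=3.0, sigma=0.4, size=3210)  # synthetic ALT-like values, U/L
ri = reference_interval(alt)
```

Partitioned intervals (by gender, age, or region, as in the study) are obtained by applying the same estimator to each subgroup once a partitioning criterion is met.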
Tran, N L; Bohrer, F I; Trogler, W C; Kummel, A C
2009-05-28
Density functional theory (DFT) simulations were used to determine the binding strength of 12 electron-donating analytes to the zinc metal center of a zinc phthalocyanine molecule (ZnPc monomer). The analyte binding strengths were compared to the analytes' enthalpies of complex formation with boron trifluoride (BF(3)), which is a direct measure of their electron-donating ability, or Lewis basicity. With the exception of the most basic analyte investigated, the ZnPc binding energies were found to correlate linearly with analyte basicities. Based on natural population analysis calculations, analyte complexation to the Zn metal of the ZnPc monomer resulted in limited charge transfer from the analyte to the ZnPc molecule, which increased with the analyte-ZnPc binding energy. The experimental analyte sensitivities from chemiresistor ZnPc sensor data were proportional to an exponential of the binding energies from the DFT calculations, consistent with sensitivity being proportional to analyte coverage and binding strength. The good correlation observed suggests that DFT is a reliable method for the prediction of chemiresistor metallophthalocyanine binding strengths and response sensitivities.
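The reported exponential sensitivity-versus-binding-energy relation is what equilibrium site occupancy predicts in the dilute limit, where coverage scales with the binding constant K ∝ exp(E_b/kT). A toy version of that arithmetic (the functional form and the numbers are assumptions for illustration, not the paper's fit):

```python
import math

kT = 0.0257  # eV, thermal energy at room temperature

def relative_sensitivity(e_bind_ev, e_ref_ev):
    """Dilute-limit coverage scales as exp(E_b / kT), so under this toy model
    the sensitivity ratio of two analytes is exp((E1 - E2) / kT)."""
    return math.exp((e_bind_ev - e_ref_ev) / kT)

# Under this model, a ~0.06 eV increase in binding energy gives
# roughly an order-of-magnitude gain in sensitivity.
ratio = relative_sensitivity(0.36, 0.30)
```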
Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H
2016-05-30
For reliable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is generally not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for the quantification of authentic matrix samples. The aim of this study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as the internal standard. As a consequence, both surrogate and authentic matrix are analyte-free with regard to the SIL analytes, which allows a comparison of the two matrices. We call this approach the Isotope Inversion Experiment. As the figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application, an LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment into the validation protocol for the steroid assay was successfully realized, and the accuracy results of the inverse quality controls were satisfactory throughout. The suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix was thus successfully demonstrated.
The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
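The figure of merit described above (accuracy of an inverse QC in authentic matrix, read off a surrogate-matrix calibration curve) reduces to simple calibration arithmetic. The sketch below is a hypothetical illustration with simulated area ratios and an assumed 3% matrix effect, not data from the steroid assay:

```python
import numpy as np

# SIL calibrators prepared in the *surrogate* matrix (ng/mL) and their
# simulated area ratios SIL analyte / nonlabelled internal standard.
cal_conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
cal_ratio = 0.02 * cal_conc + 0.001

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)  # linear calibration fit

def quantify(area_ratio):
    """Back-calculate a concentration from the surrogate-matrix curve."""
    return (area_ratio - intercept) / slope

# Inverse QC prepared in *authentic* matrix at a nominal 20 ng/mL;
# a 3% matrix effect on the measured ratio is simulated.
nominal = 20.0
qc_ratio = (0.02 * nominal + 0.001) * 1.03
accuracy = 100.0 * quantify(qc_ratio) / nominal  # percent of nominal
```

An accuracy close to 100% across QC levels is what would justify carrying the surrogate-matrix calibration over to authentic samples.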
ERIC Educational Resources Information Center
Tuomisto, Marti T.; Parkkinen, Lauri
2012-01-01
Verbal behavior, as in the use of terms, is an important part of scientific activity in general and behavior analysis in particular. Many glossaries and dictionaries of behavior analysis have been published in English, but few in any other language. Here we review the area of behavior analytic terminology, its translations, and development in…
Carroll, Annemaree; McCarthy, Molly; Houghton, Stephen; Sanders O'Connor, Emma; Zadow, Corinne
2018-04-24
Reactive and proactive aggression form a dichotomous classification of aggression in adults and children. This distinction has been supported by a number of variable-based and factor analytic studies. Due to high inter-correlations, however, the reactive-proactive distinction may not be entirely useful for understanding how group or individual aggressive behavior varies in children and adolescents. Drawing on a sample of primary school-aged children (N = 242) aged 7-12 years, this study sought to determine whether reactive and proactive aggression could be distinguished at the variable level and the person level in children. Exploratory factor analysis of data from an aggression instrument measuring both the functions and forms of aggression yielded a two-factor structure of aggression, constituted by a reactive and a proactive aggression factor. A person-based analysis was then conducted after classifying children according to the presence of reactive and/or proactive aggression. Discriminant function analysis was used to discern whether classifications on the basis of aggression function produced meaningful distinctions in terms of antisocial traits and emotional valence and intensity measures. Two functions were identified which distinguished children with different combinations of reactive and proactive aggression. Reactive-only aggressive children were defined primarily by high levels of impulsivity, while proactive-only children were defined primarily by higher levels of antisocial traits. Children high in both types of aggression exhibited both antisocial traits and impulsivity. Contrary to recent findings, this suggests that differences in aggression functions remain meaningful at the person level in children. Implications for interventions are discussed. © 2018 Wiley Periodicals, Inc.
Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.
Li, Shuai; Li, Yangming
2013-10-28
The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, however, the computational burden increases sharply as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recently proposed recurrent neural network by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. Advancements in complex-valued neural networks suggest extending the existing real-valued ZNN for the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
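The ZNN design idea, i.e. imposing dE/dt = -γΦ(E) on the error E = AX + XB - C so that E is driven to zero, can be sketched numerically. The version below is a minimal real-valued, time-invariant illustration with a sign-bi-power activation and Euler integration; it is an assumed toy, not the paper's complex-valued network or its hardware-oriented dynamics:

```python
import numpy as np

def sign_bi_power(E, r=0.5):
    """Sign-bi-power activation (elementwise), the finite-time variant
    discussed in the abstract."""
    return 0.5 * (np.abs(E) ** r + np.abs(E) ** (1.0 / r)) * np.sign(E)

def znn_sylvester(A, B, C, gamma=10.0, dt=1e-3, steps=4000):
    """Euler-integrated ZNN for the (here time-invariant) Sylvester equation
    A X + X B = C.  The design formula dE/dt = -gamma * Phi(E), with
    E = A X + X B - C, gives A Xdot + Xdot B = -gamma * Phi(E), which is
    solved at each step in its Kronecker (vectorized) form."""
    n = A.shape[0]
    # vec(A X) = (I (x) A) vec(X), vec(X B) = (B^T (x) I) vec(X), column-major vec
    M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
    X = np.zeros_like(C)
    for _ in range(steps):
        E = A @ X + X @ B - C
        rhs = -gamma * sign_bi_power(E).flatten(order='F')
        Xdot = np.linalg.solve(M, rhs).reshape(X.shape, order='F')
        X = X + dt * Xdot
    return X

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 3.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
X = znn_sylvester(A, B, C)  # residual A@X + X@B - C is driven to ~0
```

In the vectorized form the error components decouple, so the finite-time behavior of the sign-bi-power activation can be seen componentwise; for genuinely time-varying A(t), B(t), C(t) the same design formula applies with time-dependent coefficients.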
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
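The quantities involved can be sketched with back-of-envelope arithmetic. Note this only illustrates the building blocks (combined standard deviation, the 95% deviation, and the bias limit of requirement 2); the paper's ~12% figure additionally folds in the control program's detection characteristics and is not re-derived here:

```python
import math

s_i = 1.0         # intra-individual (biological) SD, arbitrary units
s_a = 0.15 * s_i  # maximum analytical SD under the paper's corollary

# SD of results about the homeostatic set point when analytical
# imprecision is added to biological variation.
s_combined = math.sqrt(s_a ** 2 + s_i ** 2)

dev_biology = 1.96 * s_i        # 95% deviation from biology alone
dev_total = 1.96 * s_combined   # with analytical imprecision added

# Requirement 2: a systematic error detected with 90% probability should
# not exceed half of the combined SD.
bias_limit = 0.5 * s_combined

# Imprecision alone inflates the 95% deviation only slightly (~1.1%);
# the bulk of the ~12% budget is consumed by potentially undetected bias.
increase_pct = 100.0 * (dev_total / dev_biology - 1.0)
```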
Buonfiglio, Marzia; Toscano, M; Puledda, F; Avanzini, G; Di Clemente, L; Di Sabato, F; Di Piero, V
2015-03-01
Habituation is considered one of the most basic mechanisms of learning. A habituation deficit in response to several kinds of sensory stimulation has been described as a trait of the migraine brain and has also been observed in other disorders. The analytic information processing style, on the other hand, is characterized by the habit of continually evaluating stimuli, and has been associated with migraine. We investigated a possible correlation between lack of habituation of visual evoked potentials and an analytic cognitive style in healthy subjects. According to the Sternberg-Wagner self-assessment inventory, 15 healthy volunteers (HV) with high analytic scores and 15 HV with high global scores were recruited. Both groups underwent visual evoked potential recordings after psychological evaluation. We observed a significant lack of habituation in analytical individuals compared to the global group. In conclusion, reduced habituation of visual evoked potentials was observed in analytic subjects. Our results suggest that further research should be undertaken on the relationship between analytic cognitive style and lack of habituation in both physiological and pathophysiological conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
A new field of research, visual analytics, has recently been introduced. This has been defined as "the science of analytical reasoning facilitated by visual interfaces." Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation and dissemination. As researchers begin to develop visual analytic environments, it will be advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work will have on the users who will work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined.
ERIC Educational Resources Information Center
Schoendorff, Benjamin; Steinwachs, Joanne
2012-01-01
How can therapists be effectively trained in clinical functional contextualism? In this conceptual article we propose a new way of training therapists in Acceptance and Commitment Therapy skills using tools from Functional Analytic Psychotherapy in a training context functionally similar to the therapeutic relationship. FAP has been successfully…
betaFIT: A computer program to fit pointwise potentials to selected analytic functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Pashov, Asen
2017-01-01
This paper describes program betaFIT, which performs least-squares fits of sets of one-dimensional (or radial) potential function values to four different types of sophisticated analytic potential energy functional forms. These families of potential energy functions are: the Expanded Morse Oscillator (EMO) potential [J Mol Spectrosc 1999;194:197], the Morse/Long-Range (MLR) potential [Mol Phys 2007;105:663], the Double Exponential/Long-Range (DELR) potential [J Chem Phys 2003;119:7398], and the "Generalized Potential Energy Function (GPEF)" form introduced by Šurkus et al. [Chem Phys Lett 1984;105:291], which includes a wide variety of polynomial potentials, such as the Dunham [Phys Rev 1932;41:713], Simons-Parr-Finlan [J Chem Phys 1973;59:3229], and Ogilvie-Tipping [Proc R Soc A 1991;378:287] polynomials, as special cases. This code will be useful for providing the realistic sets of potential function shape parameters that are required to initiate direct fits of selected analytic potential functions to experimental data, and for providing better analytical representations of sets of ab initio results.
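For concreteness, the first of these families, the EMO form, generalizes the Morse potential by letting the exponent coefficient vary with distance (standard notation from the cited literature; the radial expansion variable y_p and the polynomial order N are user choices):

```latex
V_{\mathrm{EMO}}(r) = \mathcal{D}_e\!\left[1 - e^{-\beta(r)\,(r - r_e)}\right]^2 ,
\qquad
\beta(r) = \sum_{i=0}^{N} \beta_i\, y_p(r)^i ,
\qquad
y_p(r) = \frac{r^p - r_e^p}{r^p + r_e^p} ,
```

where $\mathcal{D}_e$ is the well depth and $r_e$ the equilibrium distance. The $\beta_i$ are exactly the kind of shape parameters whose realistic initial values betaFIT is designed to provide.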
Superfund CLP National Functional Guidelines for Data Review
A collection of all the national functional guidelines for data review written and maintained by EPA OSWER OSRTI's Analytical Services Branch (ASB). Used for review of analytical data generated using CLP SOWs.
NASA Astrophysics Data System (ADS)
Aymard, François; Gulminelli, Francesca; Margueron, Jérôme
2016-08-01
The problem of the determination of the nuclear surface energy is addressed within the framework of the extended Thomas-Fermi (ETF) approximation using Skyrme functionals. We propose an analytical model for the density profiles with variationally determined diffuseness parameters. In this first paper, we consider the case of symmetric nuclei. In this situation, the ETF functional can be exactly integrated, leading to an analytical formula expressing the surface energy as a function of the couplings of the energy functional. The importance of non-local terms is stressed, and it is shown that they cannot be deduced simply from the local part of the functional, as was suggested in previous works.
2008-09-01
[Front-matter fragment from the report: a table of contents listing "B. Well-Defined Measures", "C. Essential Elements of Analysis (EEA)" and "D. EEA Process for Restoration of Essential Service - Water", followed by an acronym list: FBCB2, Force XXI Battle Command, Brigade-and-Below; FM, Army Field Manual; EEA, Essential Elements of Analysis; EPG, Electronic Proving Ground; ESS (truncated).]
Reflections on the Field of Higher Education: Time, Space and Sub-Fields
ERIC Educational Resources Information Center
Yokoyama, Keiko
2016-01-01
The objective of this study is to define the field of higher education and clarify its identity. It examines three analytical dimensions which, it proposes, shape the field: knowledge, approach and community. It argues that contextual knowledge around the issue of higher education has defined the field but has not determined techniques that are…
A New Analytic Framework for Moderation Analysis --- Moving Beyond Analytic Interactions
Tang, Wan; Yu, Qin; Crits-Christoph, Paul; Tu, Xin M.
2009-01-01
Conceptually, a moderator is a variable that modifies the effect of a predictor on a response. Analytically, the common approach used in most moderation analyses is to add analytic interactions involving the predictor and moderator in the form of cross-variable products and to test the significance of such terms. The narrow scope of such a procedure is inconsistent with the broader conceptual definition of moderation, leading to confusion in the interpretation of study findings. In this paper, we develop a new approach to the analytic procedure that is consistent with the concept of moderation. The proposed framework defines moderation as a process that modifies an existing relationship between the predictor and the outcome, rather than simply a test of a predictor by moderator interaction. The approach is illustrated with data from a real study. PMID:20161453
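The classical product-term procedure that this paper critiques takes only a few lines. The sketch below uses simulated data with assumed effect sizes purely to make the cross-variable product concrete; it does not implement the paper's proposed framework:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)  # predictor
m = rng.normal(size=n)  # putative moderator
# Simulated outcome with a true interaction coefficient of 0.4 (assumed).
y = 1.0 + 0.5 * x + 0.2 * m + 0.4 * x * m + rng.normal(scale=0.5, size=n)

# Classical moderation test: add the cross-variable product x*m as a
# regressor and examine the significance of its coefficient.
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_interaction = beta[3] / se[3]  # large |t| => "significant moderation"
```

The paper's point is that a significant `t_interaction` is a narrower claim than moderation as conceptually defined, which concerns how the x-y relationship itself is modified.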
AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodríguez-Ramírez, J. C.; Raga, A. C.; Lora, V.
2016-12-20
We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.
Propellant Readiness Level: A Methodological Approach to Propellant Characterization
NASA Technical Reports Server (NTRS)
Bossard, John A.; Rhys, Noah O.
2010-01-01
A methodological approach to defining propellant characterization is presented. The method is based on the well-established Technology Readiness Level nomenclature. This approach establishes the Propellant Readiness Level as a metric for ascertaining the readiness of a propellant or propellant combination by evaluating the following set of propellant characteristics: thermodynamic data, toxicity, applications, combustion data, heat transfer data, material compatibility, analytical prediction modeling, injector/chamber geometry, pressurization, ignition, combustion stability, system storability, qualification testing, and flight capability. The methodology is meant to be applicable to all propellants and propellant combinations: liquid, solid, and gaseous propellants, as well as monopropellants, are equally served. The functionality of the proposed approach is tested through the evaluation and comparison of an example set of hydrocarbon fuels.
Development of design and analysis methodology for composite bolted joints
NASA Astrophysics Data System (ADS)
Grant, Peter; Sawicki, Adam
1991-05-01
This paper summarizes work performed to develop composite joint design methodology for use on rotorcraft primary structure, determine joint characteristics which affect joint bearing and bypass strength, and develop analytical methods for predicting the effects of such characteristics in structural joints. Experimental results have shown that bearing-bypass interaction allowables cannot be defined using a single continuous function, due to the variation of failure modes with the bearing-bypass ratio. Hole wear effects can be significant at moderate stress levels and should be considered in the development of bearing allowables. A computer program has been developed and has successfully predicted bearing-bypass interaction effects for the (0/±45/90) family of laminates using filled-hole and unnotched test data.
Surface-admittance equivalence principle for nonradiating and cloaking problems
NASA Astrophysics Data System (ADS)
Labate, Giuseppe; Alù, Andrea; Matekovits, Ladislau
2017-06-01
In this paper, we address nonradiating and cloaking problems by exploiting the surface equivalence principle, imposing at an arbitrary boundary the control of the admittance discontinuity between the overall object (with or without cloak) and the background. After a rigorous demonstration, we apply this model to a nonradiating problem, relevant to anapole modes and metamolecule modeling, and to a cloaking problem, relevant to non-Foster metasurface design. A straightforward analytical condition is obtained for controlling the scattering of a dielectric object over a surface boundary of interest. Previous quasistatic results are confirmed, and a general closed-form solution beyond the subwavelength regime is provided. In addition, this formulation can be extended to other wave phenomena once the proper admittance function is defined (thermal, acoustic, elastomechanical, etc.).
Unusual square roots in the ghost-free theory of massive gravity
NASA Astrophysics Data System (ADS)
Golovnev, Alexey; Smirnov, Fedor
2017-06-01
A crucial building block of ghost-free massive gravity is the square root function of a matrix, a problematic entity from the viewpoint of existence and uniqueness. We accurately describe the freedom in choosing a square root of a (non-degenerate) matrix. It has discrete and (in special cases) continuous parts. When continuous freedom is present, the usual perturbation theory in terms of matrices can be critically ill-defined for some choices of the square root. We consider the new formulation of massive and bimetric gravity which deals directly with eigenvalues (in the guise of elementary symmetric polynomials) instead of matrices. It allows for a meaningful discussion of perturbation theory in such cases, even though certain non-analytic features arise.
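The discrete part of this freedom is easy to see numerically: a diagonalizable matrix with distinct nonzero eigenvalues has exactly 2^n square roots, one sign choice per eigenvalue. A quick illustration (a generic matrix, so no continuous freedom arises):

```python
import numpy as np
from itertools import product

def all_square_roots(A):
    """Enumerate the 2^n square roots of a diagonalizable matrix with
    distinct nonzero eigenvalues: one sign choice per eigenvalue."""
    w, V = np.linalg.eig(A)
    Vinv = np.linalg.inv(V)
    roots = []
    for signs in product([1, -1], repeat=len(w)):
        S = V @ np.diag(np.array(signs) * np.sqrt(w.astype(complex))) @ Vinv
        roots.append(S)
    return roots

A = np.array([[5.0, 4.0], [1.0, 2.0]])  # eigenvalues 6 and 1: distinct, positive
roots = all_square_roots(A)             # four matrices, each squaring to A
```

The continuous freedom the abstract refers to appears in the degenerate case: when eigenvalues coincide (e.g. the identity matrix), whole families of square roots exist, and a perturbative expansion around a badly chosen member can fail.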
A behavior-analytic view of psychological health
Follette, William C.; Bach, Patricia A.; Follette, Victoria M.
1993-01-01
This paper argues that a behavioral analysis of psychological health is useful and appropriate. Such an analysis will allow us to better evaluate intervention outcomes without resorting only to the assessment of pathological behavior, thus providing an alternative to the Diagnostic and Statistical Manual system of conceptualizing behavior. The goals of such an analysis are to distinguish between people and outcomes using each term of the three-term contingency as a dimension to consider. A brief review of other efforts to define psychological health is provided. Laboratory approaches to a behavioral analysis of healthy behavior are presented along with shortcomings in our science that impede our analysis. Finally, we present some of the functional characteristics of psychological health that we value. PMID:22478160
Reflections on Klein's radical notion of phantasy and its implications for analytic practice.
Blass, Rachel B
2017-06-01
Analysts may incorporate many of Melanie Klein's important contributions (e.g., on preoedipal dynamics, envy, and projective identification) without transforming their basic analytic approach. In this paper I argue that adopting the Kleinian notion of unconscious phantasy is transformative. While it is grounded in Freud's thinking and draws out something essential to his work, this notion of phantasy introduces a radical change that defines Kleinian thinking and practice and significantly impacts the analyst's basic clinical approach. This impact and its technical implications in the analytic situation are illustrated and discussed. Copyright © 2017 Institute of Psychoanalysis.
Parachute-deployment-parameter identification based on an analytical simulation of Viking BLDT AV-4
NASA Technical Reports Server (NTRS)
Talay, T. A.
1974-01-01
A six-degree-of-freedom analytical simulation of parachute deployment dynamics developed at the Langley Research Center is presented. A comparison study was made using flight results from the Viking Balloon Launched Decelerator Test (BLDT) AV-4. Since there are significant voids in the knowledge of vehicle and decelerator aerodynamics and suspension system physical properties, a set of deployment-parameter input has been defined which may be used as a basis for future studies of parachute deployment dynamics. The study indicates the analytical model is sufficiently sophisticated to investigate parachute deployment dynamics with reasonable accuracy.
Analytic integration of real-virtual counterterms in NNLO jet cross sections II
NASA Astrophysics Data System (ADS)
Bolzoni, Paolo; Moch, Sven-Olaf; Somogyi, Gábor; Trócsányi, Zoltán
2009-08-01
We present analytic expressions of all integrals required to complete the explicit evaluation of the real-virtual integrated counterterms needed to define a recently proposed subtraction scheme for jet cross sections at next-to-next-to-leading order in QCD. We use the Mellin-Barnes representation of these integrals in 4 - 2epsilon dimensions to obtain the coefficients of their Laurent expansions around epsilon = 0. These coefficients are given by linear combinations of multidimensional Mellin-Barnes integrals. We compute the coefficients of such expansions in epsilon both numerically and analytically by complex integration over the Mellin-Barnes contours.
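As a toy illustration of quadrature along a Mellin-Barnes contour (a classic textbook pair, not one of the paper's counterterm integrals), the representation 1/(1+x) = (1/2πi) ∫ Γ(s)Γ(1−s) x^(−s) ds over a vertical contour 0 < Re s < 1 can be evaluated numerically:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import trapezoid

def mb_contour(x, c=0.5, T=30.0, n=200001):
    """Evaluate (1/(2*pi*i)) * int Gamma(s)Gamma(1-s) x^(-s) ds along Re(s)=c.

    The Gamma factors decay like exp(-pi*|Im s|), so truncating the contour
    at |Im s| = T converges rapidly.  The exact value is 1/(1+x) for 0<c<1.
    """
    t = np.linspace(-T, T, n)
    s = c + 1j * t
    vals = gamma(s) * gamma(1.0 - s) * x ** (-s)
    # ds = i dt, so (1/(2*pi*i)) * int(... i dt) = (1/(2*pi)) * int(... dt)
    return (trapezoid(vals, t) / (2.0 * np.pi)).real

print(mb_contour(0.5))  # close to 1/(1 + 0.5) = 0.666...
```

The real NNLO integrals are multidimensional versions of this, with Laurent coefficients in epsilon extracted before the contour integration.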
Depth-resolved monitoring of analytes diffusion in ocular tissues
NASA Astrophysics Data System (ADS)
Larin, Kirill V.; Ghosn, Mohamad G.; Tuchin, Valery V.
2007-02-01
Optical coherence tomography (OCT) is a noninvasive imaging technique with high in-depth resolution. We employed the OCT technique for monitoring and quantification of analyte and drug diffusion in the cornea and sclera of rabbit eyes in vitro. Different analytes and drugs, such as metronidazole, dexamethasone, ciprofloxacin, mannitol, and glucose solution, were studied, and their permeability coefficients were calculated. Drug diffusion monitoring was performed as a function of time and as a function of depth. The obtained results suggest that the OCT technique might be used for analyte diffusion studies in connective and epithelial tissues.
ERIC Educational Resources Information Center
Kanter, Jonathan W.; Landes, Sara J.; Busch, Andrew M.; Rusch, Laura C.; Brown, Keri R.; Baruch, David E.; Holman, Gareth I.
2006-01-01
The current study investigated a behavior-analytic treatment, functional analytic psychotherapy (FAP), for outpatient depression utilizing two single-subject A/A+B designs. The baseline condition was cognitive behavioral therapy. Results demonstrated treatment success in 1 client after the addition of FAP and treatment failure in the 2nd. This…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parfenov, O.G.
1994-12-25
We discuss three results. The first exhibits the order of decrease of the s-values as a function of the CR-dimension of a compact set on which we approximate the class of analytic functions being studied. The second is an asymptotic formula for the case when the domain of analyticity and the compact set are Reinhardt domains. The third is the computation of the s-values of a special operator that is of interest for approximation theory on one-dimensional manifolds.
Application of conformal transformation to elliptic geometry for electric impedance tomography.
Yilmaz, Atila; Akdoğan, Kurtuluş E; Saka, Birsen
2008-03-01
Electrical impedance tomography (EIT) is a medical imaging modality used to compute the conductivity distribution from measurements on the cross-section of a body part. An elliptic geometry model, which defines a more general frame, yields more accurate reconstruction and assessment of internal inhomogeneities. This study provides a link between the analytical solutions defined in circular and elliptical geometries through the computation of a conformal mapping. The voltage distributions for the homogeneous case in elliptic and circular geometries are compared with those obtained via conformal transformation between the elliptical and the well-known circular geometry. The study also includes results from the finite element method (FEM), an alternative approach for more complex geometries, used to compare performance in scenarios with eccentric inhomogeneities. The study emphasizes that, for the elliptic case, the analytical solution with conformal transformation is a reliable and useful tool for developing insight into more complex forms, including eccentric inhomogeneities.
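The circle-to-ellipse link can be sketched with the Joukowski map w = (z + 1/z)/2, which sends the circle |z| = R (R > 1) to an ellipse with semi-axes (R + 1/R)/2 and (R − 1/R)/2 (an illustrative conformal map, not necessarily the paper's specific transformation):

```python
import numpy as np

def joukowski(z):
    # Conformal map sending the circle |z| = R (R > 1) to an ellipse
    # with semi-axes a = (R + 1/R)/2 and b = (R - 1/R)/2.
    return 0.5 * (z + 1.0 / z)

R = 2.0
theta = np.linspace(0.0, 2.0 * np.pi, 400)
w = joukowski(R * np.exp(1j * theta))

a, b = 0.5 * (R + 1.0 / R), 0.5 * (R - 1.0 / R)
# Every image point satisfies the ellipse equation (x/a)^2 + (y/b)^2 = 1.
residual = (w.real / a) ** 2 + (w.imag / b) ** 2 - 1.0
assert np.max(np.abs(residual)) < 1e-12
```

Because the map is conformal, a harmonic voltage distribution solved on the disk pulls back to a harmonic solution on the ellipse, which is the mechanism the study exploits.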
A multigroup radiation diffusion test problem: Comparison of code results with analytic solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestakov, A I; Harte, J A; Bolstad, J H
2006-12-21
We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.
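For context, a Wien emission spectrum has spectral shape proportional to ν³ e^(−hν/kT), whose maximum lies at hν = 3kT (set the derivative of ν³ e^(−aν) to zero). A quick numerical check, illustrative only and not the test problem's actual source term:

```python
import numpy as np

def wien_shape(nu, h_over_kT=1.0):
    # Wien spectral shape: nu^3 * exp(-h*nu/(k*T)).
    # Its peak sits where d/dnu = 0, i.e. at h*nu = 3*k*T.
    return nu ** 3 * np.exp(-h_over_kT * nu)

nu = np.linspace(0.01, 20.0, 200001)
nu_peak = nu[np.argmax(wien_shape(nu))]
assert abs(nu_peak - 3.0) < 1e-3  # with h/(kT) = 1, the peak is at nu = 3
```

With opacities scaling as the cube of frequency, high-frequency groups couple weakly, which is what makes an analytic solution of the multigroup system tractable.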
Double Wigner distribution function of a first-order optical system with a hard-edge aperture.
Pan, Weiqing
2008-01-01
The effect of an apertured optical system on the Wigner distribution can be expressed as a superposition integral of the input Wigner distribution function and the double Wigner distribution function of the apertured optical system. By expanding the hard aperture function into a finite sum of complex Gaussian functions, the double Wigner distribution functions of a first-order optical system with a hard aperture outside and inside it are derived. As an example of application, analytical expressions of the Wigner distribution for a Gaussian beam passing through a spatial filtering optical system with an internal hard aperture are obtained. The analytical results are compared with numerical integration results, and the comparison shows that the analytical expressions are accurate and computationally advantageous.
A results-based process for evaluation of diverse visual analytics tools
NASA Astrophysics Data System (ADS)
Rubin, Gary; Berger, David H.
2013-05-01
With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system, but when used together are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.
On the Application of Euler Deconvolution to the Analytic Signal
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.; Pasteka, R.
2005-05-01
In recent years, papers on Euler deconvolution (ED) have used formulations that account for the unknown background field, allowing the structural index (N) to be treated as an unknown to be solved for together with the source coordinates. Among them, Hsu (2002) and Fedi and Florio (2002) independently pointed out that using an adequate m-order derivative of the field, instead of the field itself, allows solving for both N and the source position. For the same reason, Keating and Pilkington (2004) proposed the ED of the analytic signal. A function analyzed by ED must be homogeneous but also harmonic, because it must be possible to compute its vertical derivative, as is well known from potential field theory. Huang et al. (1995) demonstrated that the analytic signal is a homogeneous function, but, for instance, it is rather obvious that the magnetic field modulus (corresponding to the analytic signal of a gravity field) is not a harmonic function (e.g., Grant & West, 1965). Thus, a straightforward application of ED to the analytic signal is not possible, because a vertical derivative of this function cannot be computed correctly with standard potential-field analysis tools. In this note we theoretically and empirically check what kinds of errors are caused in ED by this incorrect assumption about the harmonicity of the analytic signal. We discuss results on profile and map synthetic data, and use a simple method to compute the vertical derivative of non-harmonic functions measured on a horizontal plane. Our main conclusions are: 1. To approximate a correct evaluation of the vertical derivative of a non-harmonic function, it is useful to compute it by finite differences, using upward continuation. 2. The errors in the vertical derivative computed as if the analytic signal were harmonic are reflected mainly in the structural index estimate; these errors can mislead an interpretation even though the depth estimates are almost correct. 3. Consistent estimates of depth and S.I. are instead obtained by using a finite-difference vertical derivative of the analytic signal. 4. Analysis of a case history confirms the strong error in the estimation of the structural index if the analytic signal is treated as a harmonic function.
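The finite-difference vertical derivative via upward continuation mentioned in the conclusions can be sketched for a 1D profile (a generic FFT implementation under idealized periodic-grid assumptions, not the authors' code). The Poisson kernel P_h(x) = h/(π(x² + h²)) serves as a test field because its upward continuation by Δz is exactly P_(h+Δz):

```python
import numpy as np

def poisson_kernel(x, h):
    # Field of a unit line source at depth h, observed at height 0.
    return h / (np.pi * (x ** 2 + h ** 2))

def upward_continue(f, dx, dz):
    # Multiply the spatial spectrum by exp(-|k| dz): harmonic continuation
    # to a level dz farther from all sources.
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.fft.ifft(np.fft.fft(f) * np.exp(-np.abs(k) * dz)).real

n, L, h, dz = 16384, 2000.0, 5.0, 0.25
x = (np.arange(n) - n // 2) * (L / n)
f = poisson_kernel(x, h)
f_up = upward_continue(f, L / n, dz)

# Continuation check: lifting P_h by dz gives P_(h+dz).
assert np.max(np.abs(f_up - poisson_kernel(x, h + dz))) < 1e-4

# Finite-difference vertical derivative (positive away from the sources),
# compared with the exact derivative dP/dh.
dfdz_fd = (f_up - f) / dz
dfdz_true = (x ** 2 - h ** 2) / (np.pi * (x ** 2 + h ** 2) ** 2)
assert np.max(np.abs(dfdz_fd - dfdz_true)) < 1.5e-3
```

The same two-level finite difference applies to a non-harmonic quantity such as the analytic signal, which is precisely why the authors prefer it over a spectral |k| derivative that presumes harmonicity.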
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control, interpreted by statistical methods, are mandatory for batch clearance. Data analysis of these process-oriented measurements allows insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control; the quality control measurements act only at the batch level. Many effects and interferences associated with an individual diagnostic sample can compromise the quantitative or qualitative result for any analyte. It is obvious that a quality-control-sample-based approach to quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term, the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (a deviation from the result of a reference measurement procedure) of a test result so large that it cannot be explained by the measurement uncertainty of the routine assay operating within the accepted limitations of the associated process quality control measurements. The acceptable deviation can be defined as a linear combination of the process measurement uncertainty and the method bias with respect to the reference measurement system; deviations beyond it should be termed irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by an individual, single-sample-associated processing error in the analytical process.
Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
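The definition above, a deviation too large to be explained by the routine assay's uncertainty plus its bias against the reference system, can be expressed as a simple decision rule (an illustrative sketch; the coverage factor k = 3 and the numeric values are assumptions, not from the article):

```python
def is_irregular_analytical_error(result, reference_result,
                                  routine_uncertainty, method_bias, k=3.0):
    """Flag a single-sample result whose deviation from the reference
    measurement procedure exceeds the routine assay's error budget
    (expanded measurement uncertainty plus known method bias)."""
    deviation = abs(result - reference_result)
    budget = k * routine_uncertainty + abs(method_bias)
    return deviation > budget

# Hypothetical glucose results (mmol/L) against an ID-MS reference of 5.1:
print(is_irregular_analytical_error(9.8, 5.1, 0.15, 0.1))   # True
print(is_irregular_analytical_error(5.25, 5.1, 0.15, 0.1))  # False
```

Batch-level quality control would pass both samples; only the per-sample comparison against a reference procedure exposes the first as irregular.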
Liang, Xiaojing; Wang, Shuai; Liu, Shujuan; Liu, Xia; Jiang, Shengxiang
2012-08-01
An octadecylsilane-functionalized graphene oxide/silica stationary phase was fabricated by assembling graphene oxide onto silica particles through an amide bond and subsequently immobilizing octadecylsilane. The chromatographic properties of the stationary phase were investigated by reversed-phase chromatography with alkylbenzenes, polycyclic aromatic hydrocarbons, amines, and phenolic compounds as the analytes. All the compounds achieved good separation on the column. A comparison between a commercial C18 column and the new stationary phase indicated that the π-electron system of graphene oxide allows π-π interaction between the analytes and the octadecylsilane-functionalized graphene oxide/silica stationary phase in addition to hydrophobic interaction, whereas only hydrophobic interaction is present between the analytes and the commercial C18 column. This suggests that some analytes can be better separated on the octadecylsilane-functionalized graphene oxide/silica column. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The Challenge of Developing a Universal Case Conceptualization for Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Bonow, Jordan T.; Maragakis, Alexandros; Follette, William C.
2012-01-01
Functional Analytic Psychotherapy (FAP) targets a client's interpersonal behavior for change with the goal of improving his or her quality of life. One question guiding FAP case conceptualization is, "What interpersonal behavioral repertoires will allow a specific client to function optimally?" Previous FAP writings have suggested that a therapist…
ERIC Educational Resources Information Center
Wetterneck, Chad T.; Hart, John M.
2012-01-01
Problems with intimacy and interpersonal issues are exhibited across most psychiatric disorders. However, most of the targets in Cognitive Behavioral Therapy are primarily intrapersonal in nature, with few directly involved in interpersonal functioning and effective intimacy. Functional Analytic Psychotherapy (FAP) provides a behavioral basis for…
ERIC Educational Resources Information Center
Manduchi, Katia; Schoendorff, Benjamin
2012-01-01
Practicing Functional Analytic Psychotherapy (FAP) for the first time can seem daunting to therapists. Establishing a deep and intense therapeutic relationship, identifying FAP's therapeutic targets of clinically relevant behaviors, and using contingent reinforcement to help clients emit more functional behavior in the therapeutic relationship all…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cresti, Alessandro; Grosso, Giuseppe; Parravicini, Giuseppe Pastori
2006-05-15
We have derived closed analytic expressions for the Green's function of an electron in a two-dimensional electron gas threaded by a uniform perpendicular magnetic field, also in the presence of a uniform electric field and a parabolic spatial confinement. A workable and powerful numerical procedure for calculating the Green's functions of a large, infinitely extended quantum wire is considered, exploiting a lattice model for the wire, the tight-binding representation of the corresponding matrix Green's function, and the Peierls phase factor in the Hamiltonian hopping matrix element to account for the magnetic field. The numerical evaluation of the Green's function has been performed by means of the decimation-renormalization method and compares quite satisfactorily with the analytic results worked out in this paper. As an example of the versatility of the numerical and analytic tools presented here, the peculiar semilocal character of the magnetic Green's function is studied in detail because of its basic importance in determining magneto-transport properties in mesoscopic systems.
Solving three-body-breakup problems with outgoing-flux asymptotic conditions
NASA Astrophysics Data System (ADS)
Randazzo, J. M.; Buezas, F.; Frapiccini, A. L.; Colavecchia, F. D.; Gasaneo, G.
2011-11-01
An analytically solvable three-body collision system (s wave) model is used to test two different theoretical methods. The first is a configuration-interaction expansion of the scattering wave function using a basis set of Generalized Sturmian Functions (GSF) with purely outgoing flux (CISF), introduced recently in A. L. Frapiccini, J. M. Randazzo, G. Gasaneo, and F. D. Colavecchia [J. Phys. B: At. Mol. Opt. Phys. 43, 101001 (2010)]. The second is a finite element method (FEM) calculation performed with a commercial code. Both methods are employed to analyze different ways of modeling the asymptotic behavior of the wave function in finite computational domains. The asymptotes can be simulated very accurately by choosing hyperspherical or rectangular contours with the FEM software. In contrast, the CISF method can be defined both in an infinite domain and within a confined region of space. We found that the hyperspherical (rectangular) FEM calculation and the infinite-domain (confined) CISF evaluation are equivalent. Finally, we apply these models to the Temkin-Poet approach to hydrogen ionization.
Renormalized Energy Concentration in Random Matrices
NASA Astrophysics Data System (ADS)
Borodin, Alexei; Serfaty, Sylvia
2013-05-01
We define a "renormalized energy" as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of Sandier and Serfaty (From the Ginzburg-Landau model to vortex lattice problems, 2012; 1D log-gases and the renormalized energy, 2013). Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix β-sine processes on the real line ( β = 1,2,4), and Ginibre point process and zeros of Gaussian analytic functions process in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the β = 2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.
Consistency criteria for generalized Cuddeford systems
NASA Astrophysics Data System (ADS)
Ciotti, Luca; Morganti, Lucia
2010-01-01
General criteria to check the positivity of the distribution function (phase-space consistency) of stellar systems of assigned density and anisotropy profile are useful starting points in Jeans-based modelling. Here, we substantially extend previous results, and present the inversion formula and the analytical necessary and sufficient conditions for phase-space consistency of the family of multicomponent Cuddeford spherical systems: the distribution function of each density component of these systems is defined as the sum of an arbitrary number of Cuddeford distribution functions with arbitrary values of the anisotropy radius, but identical angular momentum exponent. The radial trend of anisotropy that can be realized by these models is therefore very general. As a surprising byproduct of our study, we found that the `central cusp-anisotropy theorem' (a necessary condition for consistency relating the values of the central density slope and of the anisotropy parameter) holds not only at the centre but also at all radii in consistent multicomponent generalized Cuddeford systems. This last result suggests that the so-called mass-anisotropy degeneracy could be less severe than what is sometimes feared.
Forbes, Thomas P.; Degertekin, F. Levent; Fedorov, Andrei G.
2010-01-01
Electrochemistry and ion transport in a planar array of mechanically driven, droplet-based ion sources are investigated using an approximate time-scale analysis and in-depth computational simulations. The ion source is modeled as a controlled-current electrolytic cell, in which the piezoelectric transducer electrode, which mechanically drives charged droplet generation using ultrasonic atomization, also acts as the oxidizing/corroding anode (positive mode). The interplay between advective and diffusive transport of electrochemically generated ions is analyzed as a function of the transducer duty cycle and electrode location. A time-scale analysis of the relative importance of advective vs. diffusive ion transport provides valuable insight into the optimality, from the ionization perspective, of alternative designs and operation modes of the ion source. A computational model based on the solution of time-averaged, quasi-steady advection-diffusion equations for electroactive species transport is used to substantiate the conclusions of the time-scale analysis. The results show that electrochemical ion generation at piezoelectric transducer electrodes located at the back side of the ion source reservoir yields poor ionization efficiency, because there is insufficient time for the charged analyte to diffuse away from the electrode surface to the ejection location, especially at near-100% duty-cycle operation. Reducing the duty cycle of droplet/analyte ejection increases the analyte residence time and, in turn, improves ionization efficiency, but at the expense of reduced device throughput. For applications where this is undesirable, i.e., multiplexed and disposable device configurations, an alternative electrode location is incorporated. By moving the charging electrode to the nozzle surface, the diffusion length scale is greatly reduced, drastically improving ionization efficiency.
The ionization efficiency of all operating conditions considered is expressed as a function of the dimensionless Peclet number, which defines the relative effect of advection as compared to diffusion. This analysis is general enough to elucidate an important role of electrochemistry in ionization efficiency of any arrayed ion sources, be they mechanically-driven or electrosprays, and is vital for determining optimal design and operation conditions. PMID:20607111
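The role of the Péclet number can be sketched with order-of-magnitude values (all numbers hypothetical, chosen only to illustrate the electrode-placement argument, not taken from the paper):

```python
def peclet(U, L, D):
    # Pe = U*L/D: ratio of advective to diffusive transport rates
    # over a characteristic length L.
    return U * L / D

U = 1e-3  # m/s, characteristic advection speed (hypothetical)
D = 1e-9  # m^2/s, typical small-ion diffusivity in water

# Back-side electrode: ions must diffuse across ~1 mm of reservoir.
pe_back = peclet(U, 1e-3, D)       # Pe = 1000
tau_back = (1e-3) ** 2 / D         # diffusion time ~1000 s

# Nozzle-side electrode: the diffusion length shrinks to ~10 um.
pe_nozzle = peclet(U, 1e-5, D)     # Pe = 10
tau_nozzle = (1e-5) ** 2 / D       # diffusion time ~0.1 s
```

Shrinking the diffusion length by two orders of magnitude cuts the diffusion time L²/D by four, which is why nozzle-side charging lets the analyte reach the ejection site within a duty cycle.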
NASA Technical Reports Server (NTRS)
Tikidjian, Raffi; Mackey, Ryan
2008-01-01
The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software (see figure) is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation includes variations in the spacecraft being tracked and changes in communication demand over up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version of this software affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes) and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling--for example, for running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.
Statistical framework and noise sensitivity of the amplitude radial correlation contrast method.
Kipervaser, Zeev Gideon; Pelled, Galit; Goelman, Gadi
2007-09-01
A statistical framework for the amplitude radial correlation contrast (RCC) method, which integrates a conventional pixel threshold approach with cluster-size statistics, is presented. The RCC method uses functional MRI (fMRI) data to group neighboring voxels in terms of their degree of temporal cross correlation and compares coherences in different brain states (e.g., stimulation OFF vs. ON). By defining the RCC correlation map as the difference between two RCC images, the map distribution of two OFF states is shown to be normal, enabling the definition of the pixel cutoff. The empirical cluster-size null distribution obtained after the application of the pixel cutoff is used to define a cluster-size cutoff that allows 5% false positives. Assuming that the fMRI signal equals the task-induced response plus noise, an analytical expression of amplitude-RCC dependency on noise is obtained and used to define the pixel threshold. In vivo and ex vivo data obtained during rat forepaw electric stimulation are used to fine-tune this threshold. Calculating the spatial coherences within in vivo and ex vivo images shows enhanced coherence in the in vivo data, but no dependency on the anesthesia method, magnetic field strength, or depth of anesthesia, strengthening the generality of the proposed cutoffs. Copyright (c) 2007 Wiley-Liss, Inc.
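The empirical cluster-size cutoff described in this abstract can be sketched generically (a toy null built from white-noise maps; the actual method derives its null from OFF-state fMRI difference maps and the amplitude-RCC pixel threshold):

```python
import numpy as np
from scipy import ndimage

def max_cluster_size(img, pixel_cutoff):
    # Size of the largest connected cluster of supra-threshold pixels.
    labels, n = ndimage.label(img > pixel_cutoff)
    return 0 if n == 0 else np.bincount(labels.ravel())[1:].max()

rng = np.random.default_rng(0)

# Null distribution of the largest cluster in pure-noise maps after
# applying the pixel cutoff (here z > 2.0 on standard-normal noise).
null = [max_cluster_size(rng.standard_normal((64, 64)), 2.0)
        for _ in range(500)]

# Cluster-size cutoff allowing ~5% family-wise false positives.
cluster_cutoff = int(np.percentile(null, 95))
```

Clusters larger than `cluster_cutoff` in a real OFF-vs-ON correlation map would then be declared significant, combining the pixel threshold with cluster-extent statistics as the framework proposes.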
Procedures For Microbial-Ecology Laboratory
NASA Technical Reports Server (NTRS)
Huff, Timothy L.
1993-01-01
The Microbial Ecology Laboratory Procedures Manual provides concise, well-defined instructions on routine technical procedures to be followed in a microbiological laboratory to ensure safety, analytical control, and validity of results.
Closing the brain-to-brain loop in laboratory testing.
Plebani, Mario; Lippi, Giuseppe
2011-07-01
The delivery of laboratory services was described 40 years ago and defined by the foremost concept of the "brain-to-brain turnaround time loop". This concept comprises several processes, including the final step, which is the action undertaken on the patient based on laboratory information. Unfortunately, the need for systematic feedback to improve the value of laboratory services has been poorly understood and, even more worryingly, poorly applied in daily laboratory practice. Currently, major problems arise from the unavailability of consensually accepted quality specifications for the extra-analytical phases of laboratory testing. This, in turn, does not allow clinical laboratories to calculate a budget for the "patient-related total error". The definition and use of the term "total error" refer only to the analytical phase, and it should be better defined as "total analytical error" to avoid confusion and misinterpretation. According to the hierarchical approach to classifying strategies for setting analytical quality specifications, the "assessment of the effect of analytical performance on specific clinical decision-making" sits at the top and therefore should be applied as much as possible to direct analytical efforts toward effective goals. In addition, an increasing number of laboratories worldwide are adopting risk management strategies such as FMEA, FRACAS, LEAN and Six Sigma, since these techniques allow identification of the most critical steps in the total testing process and reduction of the patient-related risk of error. As a matter of fact, an increasing number of laboratory professionals recognize the importance of understanding and monitoring every step in the total testing process, including the appropriateness of the test request as well as the appropriate interpretation and utilization of test results.
Analytical analysis of the temporal asymmetry between seawater intrusion and retreat
NASA Astrophysics Data System (ADS)
Rathore, Saubhagya Singh; Zhao, Yue; Lu, Chunhui; Luo, Jian
2018-01-01
The quantification of timescales associated with the movement of the seawater-freshwater interface is useful for developing effective management strategies for controlling seawater intrusion (SWI). In this study, for the first time, we derive an explicit analytical solution for the timescales of SWI and seawater retreat (SWR) in a confined, homogeneous coastal aquifer system under the quasi-steady assumption, based on a classical sharp-interface solution for approximating freshwater outflow rates into the sea. Flow continuity and hydrostatic equilibrium across the interface are identified as the two primary mechanisms governing the timescales of interface movement driven by an abrupt change in discharge rates or hydraulic heads at the inland boundary. Through theoretical analysis, we quantified the dependence of interface-movement timescales on porosity, hydraulic conductivity, aquifer thickness, aquifer length, density ratio, and boundary conditions. Predictions from the analytical solution closely agreed with those from numerical simulations. In addition, we define a temporal asymmetry index (the ratio of the SWI timescale to the SWR timescale) to represent the resilience of the coastal aquifer in response to SWI. The developed analytical solutions provide a simple tool for the quick assessment of SWI and SWR timescales and reveal that the temporal asymmetry between SWI and SWR depends mainly on the initial and final values of the freshwater flux at the inland boundary and is only weakly affected by aquifer parameters. Furthermore, we theoretically examined the log-linear relationship between the timescale and the freshwater flux at the inland boundary, and found that the relationship may be approximated by two linear functions with slopes of -2 and -1 for large changes in the boundary flux for SWI and SWR, respectively.
Kasaian, M T; Lee, J; Brennan, A; Danto, S I; Black, K E; Fitz, L; Dixon, A E
2018-04-17
A major goal of asthma therapy is to achieve disease control, with maintenance of lung function, reduced need for rescue medication, and prevention of exacerbation. Despite current standard of care, up to 70% of patients with asthma remain poorly controlled. Analysis of serum and sputum biomarkers could offer insights into parameters associated with poor asthma control. To identify signatures as determinants of asthma disease control, we performed proteomics using Olink proximity extension analysis. Up to 3 longitudinal serum samples were collected from 23 controlled and 25 poorly controlled asthmatics. Nine of the controlled and 8 of the poorly controlled subjects also provided 2 longitudinal sputum samples. The study included an additional cohort of 9 subjects whose serum was collected within 48 hours of asthma exacerbation. Two separate pre-defined Proseek Multiplex panels (INF and CVDIII) were run to quantify 181 separate protein analytes in serum and sputum. Panels consisting of 9 markers in serum (CCL19, CCL25, CDCP1, CCL11, FGF21, FGF23, Flt3L, IL-10Rβ, IL-6) and 16 markers in sputum (tPA, KLK6, RETN, ADA, MMP9, Chit1, GRN, PGLYRP1, MPO, HGF, PRTN3, DNER, PI3, Chi3L1, AZU1, and OPG) distinguished controlled and poorly controlled asthmatics. The sputum analytes were consistent with a pattern of neutrophil activation associated with poor asthma control. The serum analyte profile of the exacerbation cohort resembled that of the controlled group rather than that of the poorly controlled asthmatics, possibly reflecting a therapeutic response to systemic corticosteroids. Proteomic profiles in serum and sputum distinguished controlled and poorly controlled asthmatics, and were maintained over time. Findings support a link between sputum neutrophil markers and loss of asthma control. © 2018 John Wiley & Sons Ltd.
Analytical time-domain Green’s functions for power-law media
Kelly, James F.; McGough, Robert J.; Meerschaert, Mark M.
2008-01-01
Frequency-dependent loss and dispersion are typically modeled with a power-law attenuation coefficient, where the power-law exponent ranges from 0 to 2. To facilitate analytical solution, a fractional partial differential equation is derived that exactly describes power-law attenuation, and the Szabo wave equation [“Time domain wave-equations for lossy media obeying a frequency power-law,” J. Acoust. Soc. Am. 96, 491–500 (1994)] is an approximation to this equation. This paper derives analytical time-domain Green’s functions in power-law media for exponents in this range. To construct solutions, stable law probability distributions are utilized. For exponents equal to 0, 1∕3, 1∕2, 2∕3, 3∕2, and 2, the Green’s function is expressed in terms of Dirac delta, exponential, Airy, hypergeometric, and Gaussian functions. For exponents strictly less than 1, the Green’s functions are expressed as Fox functions and are causal. For exponents greater than or equal to 1, the Green’s functions are expressed as Fox and Wright functions and are noncausal. However, numerical computations demonstrate that for observation points only one wavelength from the radiating source, the Green’s function is effectively causal for power-law exponents greater than or equal to 1. The analytical time-domain Green’s function is numerically verified against the material impulse response function, and the results demonstrate excellent agreement. PMID:19045774
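Since the Green's functions above are built from stable law densities, the following sketch evaluates a stable density by numerically inverting its characteristic function. The parameterization (valid for α ≠ 1) and the integration grid are assumptions for illustration, not the paper's closed-form Fox/Wright expressions; the α = 2 (Gaussian) case serves as a check.

```python
import numpy as np

def stable_pdf(x, alpha, beta=1.0):
    """Density of a stable law via numerical Fourier inversion of its
    characteristic function (one standard parameterization, alpha != 1).
    Uses pdf(x) = (1/pi) * Re int_0^inf phi(t) e^{-itx} dt, which follows
    from phi(-t) = conj(phi(t))."""
    t = np.linspace(1e-9, 50.0, 200000)
    phi = np.exp(-t**alpha * (1.0 - 1j * beta * np.tan(np.pi * alpha / 2.0)))
    integrand = np.real(phi * np.exp(-1j * t * x))
    return integrand.sum() * (t[1] - t[0]) / np.pi

# alpha = 2 is the Gaussian case: phi(t) = exp(-t^2), so pdf(0) = 1/(2*sqrt(pi))
p0 = stable_pdf(0.0, 2.0)
```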
Challenges in the Development of Functional Assays of Membrane Proteins
Tiefenauer, Louis; Demarche, Sophie
2012-01-01
Lipid bilayers are natural barriers of biological cells and cellular compartments. Membrane proteins integrated in biological membranes enable vital cell functions such as signal transduction and the transport of ions or small molecules. In order to determine the activity of a protein of interest at defined conditions, the membrane protein has to be integrated into artificial lipid bilayers immobilized on a surface. For the fabrication of such biosensors expertise is required in material science, surface and analytical chemistry, molecular biology and biotechnology. Specifically, techniques are needed for structuring surfaces in the micro- and nanometer scale, chemical modification and analysis, lipid bilayer formation, protein expression, purification and solubilization, and most importantly, protein integration into engineered lipid bilayers. Electrochemical and optical methods are suitable to detect membrane activity-related signals. The importance of structural knowledge to understand membrane protein function is obvious. Presently only a few structures of membrane proteins are solved at atomic resolution. Functional assays together with known structures of individual membrane proteins will contribute to a better understanding of vital biological processes occurring at biological membranes. Such assays will be utilized in the discovery of drugs, since membrane proteins are major drug targets.
Timing variation in an analytically solvable chaotic system
NASA Astrophysics Data System (ADS)
Blakely, J. N.; Milosavljevic, M. S.; Corron, N. J.
2017-02-01
We present analytic solutions for a chaotic dynamical system that do not have the regular timing characteristic of recently reported solvable chaotic systems. The dynamical system can be viewed as a first order filter with binary feedback. The feedback state may be switched only at instants defined by an external clock signal. Generalizing from a period one clock, we show analytic solutions for period two and higher period clocks. We show that even when the clock 'ticks' randomly the chaotic system has an analytic solution. These solutions can be visualized in a stroboscopic map whose complexity increases with the complexity of the clock. We provide both analytic results as well as experimental data from an electronic circuit implementation of the system. Our findings bridge the gap between the irregular timing of well known chaotic systems such as Lorenz and Rossler and the well regulated oscillations of recently reported solvable chaotic systems.
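A minimal sketch of the kind of system described, assuming an unstable first-order filter with binary feedback latched at clock ticks. The parameter choice βT = ln 2, which reduces the stroboscopic map to the binary shift map on [-1, 1], is illustrative and not taken from the paper's circuit.

```python
import math

def stroboscopic_map(x, beta_T=math.log(2.0)):
    """One clock period of dx/dt = beta*(x - s), with the binary feedback
    s = sign(x) latched at the tick. The exact solution over a period T is
    x -> s + (x - s) * exp(beta*T); with beta*T = ln 2 this is the chaotic
    binary shift map x -> 2x - sign(x) on [-1, 1]."""
    s = 1.0 if x >= 0.0 else -1.0
    return s + (x - s) * math.exp(beta_T)

x = 0.2345
orbit = []
for _ in range(60):
    x = stroboscopic_map(x)
    orbit.append(x)
# The trajectory remains bounded while the map stretches by a factor of 2,
# the hallmark of this class of exactly solvable chaotic systems.
```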
Den Hartog, Emiel A; Havenith, George
2010-01-01
For wearers of protective clothing in radiation environments there are no quantitative guidelines available for the effect of a radiative heat load on heat exchange. Under the European Union-funded project ThermProtect, an analytical effort was defined to address the issue of radiative heat load while wearing protective clothing. As much information has become available within the ThermProtect project from thermal manikin experiments in thermal radiation environments, these sets of experimental data are used to verify the analytical approach. The analytical approach provided a good prediction of the heat loss in the manikin experiments; 95% of the variance was explained by the model. The model has not yet been validated at high radiative heat loads and neglects some physical properties of the radiation emissivity. Still, the analytical model offers a pragmatic approach and may be useful for practical implementation in protective clothing standards for moderate thermal radiation environments.
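The radiative term of such an analytical heat balance can be sketched with the Stefan-Boltzmann law. The grey-body form, unit view factor, and the emissivity, area, and temperatures below are assumptions for illustration, not the ThermProtect model itself.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_gain(t_source_k, t_surface_k, emissivity=0.95, area_m2=1.8):
    """Net radiative heat gain (W) of a clothed surface facing a radiant
    source, from the Stefan-Boltzmann law with an assumed view factor of 1.
    A deliberately simplified grey-body sketch."""
    return emissivity * SIGMA * area_m2 * (t_source_k**4 - t_surface_k**4)

# A 200 degC source versus a 35 degC clothing surface (illustrative values):
q = net_radiative_gain(473.15, 308.15)
```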
A conflict of analysis: analytical chemistry and milk adulteration in Victorian Britain.
Steere-Williams, Jacob
2014-08-01
This article centres on a particularly intense debate within British analytical chemistry in the late nineteenth century, between local public analysts and the government chemists of the Inland Revenue Service. The two groups differed both in practical methodologies and in the interpretation of analytical findings. The most striking debates in this period were related to milk analysis, highlighted especially in Victorian courtrooms. It was in protracted court cases, such as the well-known Manchester Milk Case in 1883, that analytical chemistry was contested between local public analysts and the government chemists, who were often both called as expert witnesses. Victorian courtrooms were thus important sites in the context of the uneven professionalisation of chemistry. I use this tension to highlight what Christopher Hamlin has called the defining feature of Victorian public health, namely conflicts of professional jurisdiction, which adds nuance to histories of the struggle for professionalisation and public credibility in analytical chemistry.
Analytical approximation of a distorted reflector surface defined by a discrete set of points
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Zaman, Afroz A.
1988-01-01
Reflector antennas on Earth-orbiting spacecraft generally cannot be described analytically. The reflector surface is subjected to large temperature fluctuations and gradients, and is thus warped from its true geometrical shape. Aside from distortion by thermal stresses, reflector surfaces are often purposely shaped to minimize phase aberrations and scanning losses. To analyze distorted reflector antennas defined by discrete surface points, a numerical technique must be applied to compute an interpolatory surface passing through a grid of discrete points. In this paper, the distorted reflector surface points are approximated by two analytical components: an undistorted surface component and a surface error component. The undistorted surface component is a best-fit paraboloid polynomial for the given set of points, and the surface error component is a Fourier series expansion of the deviation of the actual surface points from the best-fit paraboloid. By applying the numerical technique to approximate the surface normals of the distorted reflector surface, the induced surface current can be obtained using the physical optics technique. These surface currents are integrated to find the far-field radiation pattern.
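The two-component decomposition above can be sketched as a least-squares paraboloid fit whose residual plays the role of the surface error component. The sample points, fitting basis, and sinusoidal "thermal" distortion below are illustrative, not the paper's data.

```python
import numpy as np

# Synthetic surface: a paraboloid of focal length 2.5 plus a small distortion.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 400)
y = rng.uniform(-1.0, 1.0, 400)
f_true = 2.5
z = (x**2 + y**2) / (4.0 * f_true)          # undistorted paraboloid
z = z + 1e-3 * np.sin(3.0 * np.pi * x)      # small thermal-like distortion

# Least-squares best-fit paraboloid: z ~ c0 + c1*x + c2*y + c3*(x^2 + y^2)
A = np.column_stack([np.ones_like(x), x, y, x**2 + y**2])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
error = z - A @ coeffs                      # surface error component
f_fit = 1.0 / (4.0 * coeffs[3])             # recovered focal length
```

In the paper the error component is then expanded in a Fourier series; here it is simply the fit residual.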
Fraser, Karl; Harrison, Scott J; Lane, Geoff A; Otter, Don E; Hemar, Yacine; Quek, Siew-Young; Rasmussen, Susanne
2014-01-01
Tea is the second most consumed beverage in the world after water, and there are numerous reported health benefits as a result of consuming tea, such as reducing the risk of cardiovascular disease and many types of cancer. Thus, there is much interest in the chemical composition of teas, for example: defining components responsible for contributing to reported health benefits; defining quality characteristics such as product flavor; and monitoring for pesticide residues to comply with food safety import/export requirements. Covered in this review are some of the latest developments in mass spectrometry-based analytical techniques for measuring and characterizing low molecular weight components of tea, in particular primary and secondary metabolites. The methodology, more specifically the chromatography and detection mechanisms used in both targeted and non-targeted studies, and their main advantages and disadvantages are discussed. Finally, we comment on the latest techniques that are likely to have significant benefit to analysts in the future, not merely in the area of tea research, but in the analytical chemistry of low molecular weight compounds in general.
Fuller, Daniel; Buote, Richard; Stanley, Kevin
2017-11-01
The volume and velocity of data are growing rapidly and big data analytics are being applied to these data in many fields. Population and public health researchers may be unfamiliar with the terminology and statistical methods used in big data. This creates a barrier to the application of big data analytics. The purpose of this glossary is to define terms used in big data and big data analytics and to contextualise these terms. We define the five Vs of big data and provide definitions and distinctions for data mining, machine learning and deep learning, among other terms. We provide key distinctions between big data and statistical analysis methods applied to big data. We contextualise the glossary by providing examples where big data analysis methods have been applied to population and public health research problems and provide brief guidance on how to learn big data analysis methods. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Zacharis, Constantinos K; Vastardi, Elli
2018-02-20
In the research presented we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles - a graphical decision-making tool - were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g⁻¹ in sample) for both methyl and isopropyl p-toluenesulfonate. As proof-of-concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
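The desirability-function step mentioned above can be illustrated with the standard Derringer-Suich construction: individual responses are mapped to [0, 1] scales and combined by a geometric mean. The limits, targets, and response values below are hypothetical, not the paper's CMA settings.

```python
def desirability_larger_is_better(y, low, target, weight=1.0):
    """Derringer-Suich one-sided desirability: 0 below `low`, 1 above
    `target`, and a power ramp in between."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def overall_desirability(ds):
    """Composite desirability: geometric mean of the individual d_i."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses: resolution between critical pairs, plate count,
# and negated run time (so that "larger is better" applies to all three).
d_resolution = desirability_larger_is_better(2.1, low=1.5, target=2.5)
d_efficiency = desirability_larger_is_better(9000.0, low=5000.0, target=10000.0)
d_speed      = desirability_larger_is_better(-12.0, low=-20.0, target=-10.0)
D = overall_desirability([d_resolution, d_efficiency, d_speed])
```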
Opening the Black Box: Understanding the Science Behind Big Data and Predictive Analytics.
Hofer, Ira S; Halperin, Eran; Cannesson, Maxime
2018-05-25
Big data, smart data, predictive analytics, and other similar terms are ubiquitous in the lay and scientific literature. However, despite the frequency of usage, these terms are often poorly understood, and evidence of their disruption to clinical care is hard to find. This article aims to address these issues by first defining and elucidating the term big data, exploring the ways in which modern medical data, both inside and outside the electronic medical record, meet the established definitions of big data. We then define the term smart data and discuss the transformations necessary to make big data into smart data. Finally, we examine the ways in which this transition from big to smart data will affect what we do in research, retrospective work, and ultimately patient care.
NASA Technical Reports Server (NTRS)
Townsend, J. C.
1980-01-01
In order to provide experimental data for comparison with newly developed finite difference methods for computing supersonic flows over aircraft configurations, wind tunnel tests were conducted on four arrow wing models. The models were machined under numerical control to precisely duplicate analytically defined shapes. They were heavily instrumented with pressure orifices at several cross sections ahead of and in the region where there is a gap between the body and the wing trailing edge. The test Mach numbers were 2.36, 2.96, and 4.63. Tabulated pressure data for the complete test series are presented along with selected oil flow photographs. Comparisons of some preliminary numerical results at zero angle of attack show good to excellent agreement with the experimental pressure distributions.
Chang, Chia-Ming; Yang, Yi-Ping; Chuang, Jen-Hua; Chuang, Chi-Mu; Lin, Tzu-Wei; Wang, Peng-Hui; Yu, Mu-Hsien
2017-01-01
The clinical characteristics of clear cell carcinoma (CCC) and endometrioid carcinoma (EC) are concomitant with endometriosis (ES), which leads to the postulation of malignant transformation of ES to endometriosis-associated ovarian carcinoma (EAOC). Different deregulated functional areas have been proposed to account for the pathogenesis of EAOC transformation, but a data-driven analysis that uses the experimental data accumulated in publicly available databases to integrate the deregulated functions involved in the malignant transformation of EAOC is still lacking. We used the microarray gene expression datasets of ES, CCC and EC downloaded from the National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO) database. Then, we investigated the pathogenesis of EAOC by a data-driven, function-based analytic model with the quantified molecular functions defined by 1454 Gene Ontology (GO) term gene sets. This model converts the gene expression profiles to the functionome consisting of 1454 quantified GO functions, and then the key functions involved in the malignant transformation of EAOC can be extracted by a series of filters. Our results demonstrate that deregulated oxidoreductase activity, metabolism, hormone activity, inflammatory response, innate immune response and cell-cell signaling play key roles in the malignant transformation of EAOC. These results provide evidence supporting the involvement of specific molecular pathways in the malignant transformation of EAOC. PMID:29113136
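The conversion of a gene-level profile into a quantified "functionome" can be sketched as gene-set averaging over GO terms. The gene names, expression values, GO identifiers, and the simple mean used below are made up for illustration; the paper's actual quantification scheme is not reproduced.

```python
# Hypothetical gene-level expression profile and GO gene sets.
expression = {"TP53": 2.1, "MYC": 1.4, "IL6": 3.2, "TLR4": 2.8, "ESR1": 0.9}
go_sets = {
    "GO:0006954_inflammatory_response": ["IL6", "TLR4"],
    "GO:0009725_response_to_hormone": ["ESR1"],
}

def functionome(expr, sets):
    """Collapse a gene-level profile to quantified GO functions by
    averaging the expression of each term's measured genes."""
    return {
        term: sum(expr[g] for g in genes if g in expr)
              / max(1, sum(g in expr for g in genes))
        for term, genes in sets.items()
    }

scores = functionome(expression, go_sets)
```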
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.
1974-01-01
The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) control loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for the CMG frictional nonlinearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with, analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.
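The describing-function idea can be illustrated for an ideal Coulomb friction (relay) nonlinearity, whose closed form N(A) = 4F/(πA) is a textbook result. This is a generic sketch of the technique, not the report's CMG model.

```python
import math

def relay_describing_function(amplitude, friction_torque, n=100000):
    """Numerically compute the describing function of an ideal Coulomb
    friction (relay) nonlinearity for a sinusoidal input A*sin(theta):
    N(A) = b1/A, where b1 is the fundamental Fourier sine coefficient of
    the output F*sign(sin(theta)). Uses midpoint-rule integration."""
    d_theta = 2.0 * math.pi / n
    b1 = 0.0
    for i in range(n):
        theta = (i + 0.5) * d_theta
        out = friction_torque * math.copysign(1.0, math.sin(theta))
        b1 += out * math.sin(theta) * d_theta
    b1 /= math.pi
    return b1 / amplitude

# Matches the closed form N(A) = 4F / (pi * A):
N = relay_describing_function(amplitude=2.0, friction_torque=1.5)
```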
NASA Technical Reports Server (NTRS)
Liu, F. C.
1986-01-01
The objective of this investigation is to analytically determine the acceleration produced by crew motion in an orbiting space station and to define design parameters for the suspension system of microgravity experiments. A simple structural model for simulation of the IOC space station is proposed. Mathematical formulation of this model provides engineers with a simple and direct tool for designing an effective suspension system.
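One standard suspension design parameter is the base-excitation transmissibility of a linear spring-damper isolator. The sketch below uses the textbook formula as an assumption, not the report's LST/space-station-specific model.

```python
import math

def transmissibility(freq_ratio, damping_ratio):
    """Vibration transmissibility |X_out/X_in| of a linear spring-damper
    suspension under base excitation, at frequency ratio r = f/f_n:
    T = sqrt((1 + (2*z*r)^2) / ((1 - r^2)^2 + (2*z*r)^2))."""
    r, z = freq_ratio, damping_ratio
    num = 1.0 + (2.0 * z * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * z * r) ** 2
    return math.sqrt(num / den)

# Isolation requires r > sqrt(2): crew-motion disturbances well above the
# suspension's natural frequency are strongly attenuated.
t_high = transmissibility(5.0, 0.05)
```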
The Role of Shaping the Client's Interpretations in Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Abreu, Paulo Roberto; Hubner, Maria Martha Costa; Lucchese, Fernanda
2012-01-01
Clinical behavior analysis often targets the shaping of clients' functional interpretations of, or rules about, their own behavior. These are referred to as clinically relevant behavior 3 (CRB3) in functional analytic psychotherapy (FAP). We suggest that CRB3s should be seen as contingency-specifying stimuli (CSS), due to their ability to change…
Gudimetla, V S Rao; Holmes, Richard B; Smith, Carey; Needham, Gregory
2012-05-01
The effect of anisotropic Kolmogorov turbulence on the log-amplitude correlation function for plane-wave fields is investigated using analysis, numerical integration, and simulation. A new analytical expression for the log-amplitude correlation function is derived for anisotropic Kolmogorov turbulence. The analytic results, based on the Rytov approximation, agree well with a more general wave-optics simulation based on the Fresnel approximation as well as with numerical evaluations, for low and moderate strengths of turbulence. The new expression reduces correctly to previously published analytic expressions for isotropic turbulence. The final results indicate that, as asymmetry becomes greater, the Rytov variance deviates from that given by the standard formula. This deviation becomes greater with stronger turbulence, up to moderate turbulence strengths. The anisotropic effects on the log-amplitude correlation function are dominant when the separation of the points is within the Fresnel length. In the direction of stronger turbulence, there is an enhanced dip in the correlation function at a separation close to the Fresnel length. The dip is diminished in the weak-turbulence axis, suggesting that energy redistribution via focusing and defocusing is dominated by the strong-turbulence axis. The new analytical expression is useful when anisotropy is observed in relevant experiments. © 2012 Optical Society of America
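For context, the isotropic weak-turbulence quantities referred to above (the plane-wave Rytov variance and the Fresnel length that sets the correlation scale) can be computed directly. The constant 1.23 and the plane-wave form are the standard isotropic Kolmogorov results, which the paper's anisotropic expression generalizes; the parameter values below are illustrative.

```python
import math

def rytov_variance_plane_wave(cn2, wavelength, path_length):
    """Weak-turbulence plane-wave Rytov variance,
    sigma^2 = 1.23 * Cn^2 * k^(7/6) * L^(11/6), together with the Fresnel
    length sqrt(L/k). Standard isotropic Kolmogorov formulas."""
    k = 2.0 * math.pi / wavelength
    sigma2 = 1.23 * cn2 * k ** (7.0 / 6.0) * path_length ** (11.0 / 6.0)
    fresnel_length = math.sqrt(path_length / k)
    return sigma2, fresnel_length

# A 2 km path at 1.55 um with Cn^2 = 1e-15 m^(-2/3) (illustrative values):
s2, rf = rytov_variance_plane_wave(1e-15, 1.55e-6, 2000.0)
```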
Analytic modeling of aerosol size distributions
NASA Technical Reports Server (NTRS)
Deepack, A.; Box, G. P.
1979-01-01
Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
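A minimal sketch of parameter estimation for one commonly used size-distribution model, the lognormal, via the method of moments in log space. The synthetic radii below stand in for measured data, and least-squares curve fitting (as in the catalog approach above) would be an alternative.

```python
import numpy as np

# Synthetic aerosol radii drawn from a lognormal distribution with known
# geometric mean radius r_g and geometric standard deviation sigma_g.
rng = np.random.default_rng(1)
r_g_true, sigma_g_true = 0.1, 1.8          # microns, dimensionless
radii = np.exp(rng.normal(np.log(r_g_true), np.log(sigma_g_true), 50000))

# Method of moments in log space: the mean and standard deviation of
# log(r) recover log(r_g) and log(sigma_g).
log_r = np.log(radii)
r_g_fit = np.exp(log_r.mean())             # geometric mean radius
sigma_g_fit = np.exp(log_r.std())          # geometric standard deviation
```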
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanizaki, Yuya; Nishimura, Hiromichi; Verbaarschot, Jacobus J. M.
We propose new gradient flows that define Lefschetz thimbles and do not blow up in a finite flow time. Here, we study analytic properties of these gradient flows, and confirm them by numerical tests in simple examples.
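The defining property of a thimble-generating flow can be sketched with the standard holomorphic gradient flow dz/dt = conj(S'(z)), along which Re S increases monotonically and Im S is conserved. The toy action and forward-Euler stepping below are illustrative only; the modified flows proposed in the paper differ precisely in order to avoid finite-time blow-up.

```python
def s_action(z):
    """Toy action S(z) = z^3/3 - z, with critical points at z = +/-1."""
    return z ** 3 / 3.0 - z

def s_prime(z):
    return z * z - 1.0

# Along dz/dt = conj(S'(z)): dS/dt = |S'(z)|^2, which is real and >= 0,
# so Re S grows and Im S stays constant on exact orbits.
z0 = 1.0 + 0.4j          # start near the critical point z = 1
z, dt = z0, 1e-5
for _ in range(20000):
    z += dt * s_prime(z).conjugate()
```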
Extension-torsion coupling behavior of advanced composite tilt-rotor blades
NASA Technical Reports Server (NTRS)
Kosmatka, J. B.
1989-01-01
An analytic model was developed to study the extension-bend-twist coupling behavior of an advanced composite helicopter or tilt-rotor blade. The outer surface of the blade is defined by rotating an arbitrary cross section about an initial twist axis. The cross section can be nonhomogeneous and composed of generally anisotropic materials. The model is developed based upon a three-dimensional elasticity approach that is recast as a coupled two-dimensional boundary value problem defined in a curvilinear coordinate system. Displacement solutions are written in terms of known functions that represent extension, bending, and twisting and unknown functions for local cross section deformations. The unknown local deformation functions are determined by applying the principle of minimum potential energy to the discretized two-dimensional cross section. This is an application of the Ritz method, where the trial function family is the displacement field associated with a finite element (8-node isoparametric quadrilaterals) representation of the section. A computer program was written in which the cross section is discretized into 8-node quadrilateral subregions. Initially, the program was verified using previously published results (both three-dimensional elasticity and technical beam theory) for pretwisted isotropic bars with an elliptical cross section. In addition, solid and thin-wall multi-cell NACA-0012 airfoil sections were analyzed to illustrate the pronounced effects that pretwist, initial twist axis location, and spar location have on coupled behavior. Currently, a series of advanced composite airfoils are being modeled in order to assess how the use of laminated composite materials interacts with pretwist to alter the coupling behavior of the blade. These studies will investigate the use of different ply angle orientations and the use of symmetric versus unsymmetric laminates.
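The 8-node isoparametric quadrilateral named above has the standard serendipity shape functions, sketched here only to make the element type concrete. This is textbook finite element material, not the program described in the abstract.

```python
# Local node coordinates of the 8-node serendipity quadrilateral:
CORNERS = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
MIDSIDES = [(0, -1), (1, 0), (0, 1), (-1, 0)]

def shape_functions(xi, eta):
    """Return the 8 serendipity shape functions at local point (xi, eta)."""
    n = []
    for xi_i, eta_i in CORNERS:
        n.append(0.25 * (1 + xi * xi_i) * (1 + eta * eta_i)
                 * (xi * xi_i + eta * eta_i - 1))
    for xi_i, eta_i in MIDSIDES:
        if xi_i == 0:
            n.append(0.5 * (1 - xi * xi) * (1 + eta * eta_i))
        else:
            n.append(0.5 * (1 + xi * xi_i) * (1 - eta * eta))
    return n

# Partition of unity: the shape functions sum to 1 everywhere in the element.
vals = shape_functions(0.3, -0.7)
```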
NASA Astrophysics Data System (ADS)
Volchkov, Yu. M.
2017-09-01
This paper describes the modified bending equations of layered orthotropic plates in the first approximation. The approximation of the solution of the equations of the three-dimensional theory of elasticity by Legendre polynomial segments is used to obtain the differential equations of the elastic layer. For the approximation of the equilibrium equations and boundary conditions of the three-dimensional theory of elasticity, several approximations of each desired function (stresses and displacements) are used. The stresses at the internal points of the plate are determined from the defining equations for the orthotropic material, averaged with respect to the plate thickness. The bending equations of layered plates are constructed for each layer with the help of the elastic layer equations and the conjugation conditions on the boundaries between layers, which are conditions for the continuity of normal stresses and displacements. The numerical solution of the problem of bending of a rectangular layered plate obtained using the modified equations is compared with an analytical solution. It is found that the maximum error in determining the stresses does not exceed 3%.
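The through-thickness Legendre approximation idea can be sketched with NumPy's Legendre tools: expand a profile over the normalized thickness coordinate in the first few Legendre polynomials. The sample displacement profile below is illustrative, not a plate solution.

```python
import numpy as np
from numpy.polynomial import legendre

# Normalized thickness coordinate and a smooth sample displacement profile.
z = np.linspace(-1.0, 1.0, 201)
u = 0.3 + 0.8 * z - 0.5 * z**3

# Least-squares expansion in the first four Legendre polynomials P_0..P_3;
# a cubic profile is represented exactly by this truncation.
coeffs = legendre.legfit(z, u, deg=3)
u_approx = legendre.legval(z, coeffs)
max_err = float(np.max(np.abs(u - u_approx)))
```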
Maximal liquid bridges between horizontal cylinders
NASA Astrophysics Data System (ADS)
Cooray, Himantha; Huppert, Herbert E.; Neufeld, Jerome A.
2016-08-01
We investigate two-dimensional liquid bridges trapped between pairs of identical horizontal cylinders. The cylinders support forces owing to surface tension and hydrostatic pressure that balance the weight of the liquid. The shape of the liquid bridge is determined by analytically solving the nonlinear Laplace-Young equation. Parameters that maximize the trapping capacity (defined as the cross-sectional area of the liquid bridge) are then determined. The results show that these parameters can be approximated with simple relationships when the radius of the cylinders is small compared with the capillary length. For such small cylinders, liquid bridges with the largest cross-sectional area occur when the centre-to-centre distance between the cylinders is approximately twice the capillary length. The maximum trapping capacity for a pair of cylinders at a given separation is linearly related to the separation when it is small compared with the capillary length. The meniscus slope angle of the largest liquid bridge produced in this regime is also a linear function of the separation. We additionally derive approximate solutions for the profile of a liquid bridge, using the linearized Laplace-Young equation. These solutions analytically verify the above-mentioned relationships obtained for the maximization of the trapping capacity.
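The capillary length against which all the scales above are compared is simple to compute. The property values below are illustrative (room-temperature water/air), and the factor-of-two separation rule is the approximation quoted in the abstract, valid for thin cylinders.

```python
import math

def capillary_length(surface_tension, density, g=9.81):
    """Capillary length l_c = sqrt(gamma / (rho * g)): the scale against
    which cylinder radius and separation are compared."""
    return math.sqrt(surface_tension / (density * g))

# Water/air at room temperature (illustrative property values):
l_c = capillary_length(0.072, 1000.0)      # about 2.7 mm
optimal_separation = 2.0 * l_c             # approximate maximizer from the text
```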
Analytical effective tensor for flow-through composites
Sviercoski, Rosangela De Fatima [Los Alamos, NM
2012-06-19
A machine, method and computer-usable medium for modeling the average flow of a substance through a composite material. Such modeling includes an analytical calculation of an effective tensor K.sup.a suitable for use with a variety of media. The analytical calculation corresponds to an approximation to the tensor K, and proceeds by first computing the diagonal values and then identifying symmetries of the heterogeneity distribution. Additional calculations include determining the center of mass of the heterogeneous cell and its angle according to a defined Cartesian system, then inserting this angle into a rotation formula to compute the off-diagonal values and determine their signs.
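The rotation step described can be sketched generically in 2-D: off-diagonal entries appear when a principal-axis (diagonal) tensor is rotated by the cell's angle. This is a generic sketch of the rotation formula, not the patented algorithm itself.

```python
import numpy as np

def effective_tensor(k_principal, angle):
    """Rotate a diagonal (principal-axis) permeability/conductivity tensor
    by `angle` into the lab frame: K = R diag(k1, k2) R^T. The rotation
    populates the off-diagonal entries and fixes their signs."""
    c, s = np.cos(angle), np.sin(angle)
    r = np.array([[c, -s], [s, c]])
    return r @ np.diag(k_principal) @ r.T

# Principal values (4, 1) rotated by 30 degrees (illustrative numbers):
k = effective_tensor([4.0, 1.0], np.deg2rad(30.0))
```

Rotation preserves the eigenvalues, so the principal values can always be recovered from the lab-frame tensor.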
How can we study reasoning in the brain?
Papo, David
2015-01-01
The brain did not develop a dedicated device for reasoning. This fact bears dramatic consequences. While for perceptuo-motor functions neural activity is shaped by the input's statistical properties, and processing is carried out at high speed in hardwired spatially segregated modules, in reasoning, neural activity is driven by internal dynamics and processing times, stages, and functional brain geometry are largely unconstrained a priori. Here, it is shown that the complex properties of spontaneous activity, which can be ignored in a short-lived event-related world, become prominent at the long time scales of certain forms of reasoning. It is argued that the neural correlates of reasoning should in fact be defined in terms of non-trivial generic properties of spontaneous brain activity, and that this implies resorting to concepts, analytical tools, and ways of designing experiments that are as yet non-standard in cognitive neuroscience. The implications in terms of models of brain activity, shape of the neural correlates, methods of data analysis, observability of the phenomenon, and experimental designs are discussed. PMID:25964755
Plant metabolomics: from holistic hope, to hype, to hot topic.
Hall, Robert D
2006-01-01
In a short time, plant metabolomics has gone from being just an ambitious concept to being a rapidly growing, valuable technology applied in the stride to gain a more global picture of the molecular organization of multicellular organisms. The combination of improved analytical capabilities with newly designed, dedicated statistical, bioinformatics and data mining strategies, is beginning to broaden the horizons of our understanding of how plants are organized and how metabolism is both controlled but highly flexible. Metabolomics is predicted to play a significant, if not indispensable role in bridging the phenotype-genotype gap and thus in assisting us in our desire for full genome sequence annotation as part of the quest to link gene to function. Plants are a fabulously rich source of diverse functional biochemicals and metabolomics is also already proving valuable in an applied context. By creating unique opportunities for us to interrogate plant systems and characterize their biochemical composition, metabolomics will greatly assist in identifying and defining much of the still unexploited biodiversity available today.
The Artistic Infant Directed Performance: A Microanalysis of the Adult's Movements and Sounds.
Español, Silvia; Shifres, Favio
2015-09-01
Intersubjectivity experiences established between adults and infants are partially determined by the particular ways in which adults are active in front of babies. A substantial body of research focuses on the "musicality" of infant-directed speech (defined melodic contours, tonal and rhythm variations, etc.) and its role in linguistic enculturation. However, researchers have recently suggested that adults also bring a multimodal performance to infants. Accordingly, some scholars find indicators of the genesis of the performing arts (mainly music and dance) in such multimodal stimulation. We analyze the adult performance using analytical categories and methodologies broadly validated in the fields of music performance and movement analysis in contemporary dance. We present microanalyses of an interaction scene between an adult and a 7-month-old infant that evidence structural aspects of infant-directed multimodal performance compatible with music and dance structures, and suggest functions of adult performance similar or related to performing arts functions.
Kinetic study of ion acoustic twisted waves with kappa distributed electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arshad, Kashif, E-mail: kashif.arshad.butt@gmail.com; Aman-ur-Rehman, E-mail: amansadiq@gmail.com; Mahmood, Shahzad, E-mail: shahzadm100@gmail.com
2016-05-15
The kinetic theory of Landau damping of ion acoustic twisted modes is developed in the presence of orbital angular momentum of the helical (twisted) electric field in plasmas with kappa distributed electrons and Maxwellian ions. The perturbed distribution function and helical electric field are considered to be decomposed by the Laguerre-Gaussian mode function defined in cylindrical geometry. The Vlasov-Poisson equation is obtained and solved analytically to obtain the weak damping rates of the ion acoustic twisted waves in a non-thermal plasma. The strong damping effects of ion acoustic twisted waves at low values of the temperature ratio of electrons and ions are also obtained by using an exact numerical method and illustrated graphically, where the weak damping wave theory fails to explain the phenomenon properly. The obtained results for the Landau damping rates of the twisted ion acoustic wave are discussed at different values of the azimuthal wave number and the non-thermal parameter kappa for electrons.
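The kappa (generalized Lorentzian) distribution used here for the electrons reduces to a Maxwellian as kappa grows and carries super-thermal tails for small kappa. A minimal 1D sketch follows; the normalization below is one common convention (conventions differ in how the thermal speed theta is defined), not necessarily the exact form used in the paper:

```python
import math

def kappa_pdf(v, theta=1.0, kappa=3.0):
    """1D kappa (generalized Lorentzian) velocity distribution, one common
    normalization convention; tends to a Maxwellian as kappa -> infinity."""
    norm = math.gamma(kappa) / (math.sqrt(math.pi * kappa) * theta * math.gamma(kappa - 0.5))
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-kappa)

def maxwellian_pdf(v, theta=1.0):
    """Reference 1D Maxwellian with the same thermal speed convention."""
    return math.exp(-(v / theta) ** 2) / (math.sqrt(math.pi) * theta)

# Crude normalization check by a Riemann sum over [-50, 50).
dv = 0.01
total = sum(kappa_pdf(-50.0 + i * dv) for i in range(10000)) * dv
```

For kappa = 3 the tail at v = 5 theta is many orders of magnitude heavier than the Maxwellian one, which is the qualitative origin of the enhanced Landau damping at low electron-to-ion temperature ratio.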
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; ...
2015-06-05
The development of reliable methods for upscaling fine scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that is similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.
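For the finely layered benchmark mentioned above, the classical analytic upscaling is the Backus (1962) average, which yields the effective vertically transverse isotropic (VTI) stiffnesses of a thin-layer stack. A minimal sketch for isotropic layers described by Lamé parameters (the layer values below are illustrative):

```python
def backus_average(layers):
    """Backus-average effective VTI stiffnesses for a stack of thin isotropic
    layers, each given as (thickness, lam, mu) with Lame parameters lam, mu.
    Returns (C11, C13, C33, C44, C66); a standard analytic benchmark for
    upscaling finely layered elastic media."""
    H = sum(h for h, _, _ in layers)
    avg = lambda f: sum(h * f(lam, mu) for h, lam, mu in layers) / H
    C33 = 1.0 / avg(lambda lam, mu: 1.0 / (lam + 2.0 * mu))
    C44 = 1.0 / avg(lambda lam, mu: 1.0 / mu)
    C66 = avg(lambda lam, mu: mu)
    r = avg(lambda lam, mu: lam / (lam + 2.0 * mu))
    C13 = r * C33
    C11 = avg(lambda lam, mu: 4.0 * mu * (lam + mu) / (lam + 2.0 * mu)) + r * r * C33
    return C11, C13, C33, C44, C66

# A uniform stack must reproduce the isotropic constants exactly.
C11, C13, C33, C44, C66 = backus_average([(1.0, 2.0, 1.0)] * 3)
```

A useful sanity check on any numerical homogenization code is that a truly uniform stack returns the isotropic moduli, while alternating layers give C11 > C33 (anisotropy induced purely by layering).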
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashiwa, B. A.
2010-12-01
A thermodynamically consistent and fully general equation-of-state (EOS) for multifield applications is described. EOS functions are derived from a Helmholtz free energy expressed as the sum of thermal (fluctuational) and collisional (condensed-phase) contributions; thus the free energy is of the Mie-Grüneisen form. The phase-coexistence region is defined using a parameterized saturation curve by extending the form introduced by Guggenheim, which scales the curve relative to conditions at the critical point. We use the zero-temperature condensed-phase contribution developed by Barnes, which extends the Thomas-Fermi-Dirac equation to zero pressure. Thus, the functional form of the EOS could be called MGGB (for Mie-Grüneisen-Guggenheim-Barnes). Substance-specific parameters are obtained by fitting the low-density energy to data from the Sesame library; fitting the zero-temperature pressure to the Sesame cold curve; and fitting the saturation curve and latent heat to laboratory data, if available. When suitable coexistence data, or Sesame data, are not available, we apply the Principle of Corresponding States. Thus MGGB can be thought of as a numerical recipe for rendering the tabular Sesame EOS data in an analytic form that includes a proper coexistence region, and which permits the accurate calculation of derivatives associated with compressibility, expansivity, the Joule coefficient, and specific heat, all of which are required for multifield applications.
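The structure of any Mie-Grüneisen-form EOS is p(rho, e) = p_c(rho) + Gamma * rho * (e - e_c(rho)): a cold (condensed-phase) reference curve plus a thermal correction proportional to the energy offset from it. The sketch below illustrates only this structure; the quadratic cold curve and the Gamma*rho = const closure are toy assumptions, not the Barnes/Thomas-Fermi-Dirac and Guggenheim forms used by MGGB:

```python
def mie_gruneisen_pressure(rho, e, gamma0=2.0, rho0=1.0, K=10.0):
    """Mie-Gruneisen form p = p_c(rho) + Gamma*rho*(e - e_c(rho)).
    The cold curve here is a toy quadratic in compression (hypothetical),
    chosen so that p_c = rho**2 * de_c/drho holds exactly."""
    x = rho / rho0
    e_c = (K / (2.0 * rho0)) * (x - 1.0) ** 2   # toy cold (zero-temperature) energy
    p_c = K * x * x * (x - 1.0)                 # consistent cold pressure
    gamma = gamma0 * rho0 / rho                 # toy closure: Gamma*rho constant
    return p_c + gamma * rho * (e - e_c)
```

At the reference state (rho = rho0, e on the cold curve) the pressure vanishes; compression or heating at fixed density raises it, which is the behavior the thermal/cold split is designed to capture.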
Major advances in testing of dairy products: milk component and dairy product attribute testing.
Barbano, D M; Lynch, J M
2006-04-01
Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. There is therefore a strong incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance, combined with significant improvements in both the chemical and instrumental methods, have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value-added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.
Evaluation of Cobas Integra 800 under simulated routine conditions in six laboratories.
Redondo, Francisco L; Bermudez, Pilar; Cocco, Claudio; Colella, Francesca; Graziani, Maria Stella; Fiehn, Walter; Hierla, Thomas; Lemoël, Gisèle; Belliard, AnneMarie; Manene, Dieudonne; Meziani, Mourad; Liebel, Maryann; McQueen, Matthew J; Stockmann, Wolfgang
2003-03-01
The new selective-access analyser Cobas Integra 800 from Roche Diagnostics was evaluated in an international multicentre study at six sites. Routine simulation experiments showed good performance and full functionality of the instrument, and deliberately provoked anomalous situations generated no problems. The new features of the Cobas Integra 800, namely clot detection and dispensing control, worked according to specifications. The imprecision of the Cobas Integra 800 fulfilled the proposed quality specifications for imprecision of analytical systems in clinical chemistry, with few exceptions. Claims for linearity, drift, and carry-over were all within the defined specifications, except for urea linearity. Interference exists in some cases, as could be expected given the chemistries applied. Accuracy met the proposed quality specifications, except in some special cases. Method comparisons with the Cobas Integra 700 showed good agreement; comparisons with other analysis systems yielded explicable deviations in several cases. Practicability of the Cobas Integra 800 met or exceeded the requirements for more than 95% of all attributes rated. The strong points of the new analysis system were reagent handling, long stability of calibration curves, a high number of tests on board, compatibility of the sample carrier with other Roche systems, and the sample-integrity check for more reliable analytical results. The improved workflow offered by the 5-position rack and STAT handling makes the instrument attractive for further consolidation in the medium-sized laboratory, for dedicated use for special analytes, and/or as back-up in the large routine laboratory.
Storkey, J; Holst, N; Bøjer, O Q; Bigongiali, F; Bocci, G; Colbach, N; Dorner, Z; Riemens, M M; Sartorato, I; Sønderskov, M; Verschwele, A
2015-04-01
A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters. These relationships were used to build a generic model, operating at the level of functional traits, to simulate the impact of increasing herbicide and fertiliser use on virtual weeds along gradients of seed weight and maximum height. The model generated 'fitness contours' (defined as population growth rates) within this trait space in different scenarios, onto which two sets of weed species, defined as common or declining in the UK, were mapped. The effect of increasing inputs on the weed flora was successfully simulated; 77% of common species were predicted to have stable or increasing populations under high fertiliser and herbicide use, in contrast with only 29% of the species that have declined. Future development of the WTDB will aim to increase the number of species covered, incorporate a wider range of traits and analyse intraspecific variability under contrasting management and environments.
Visual-Motor Integration in Children With Mild Intellectual Disability: A Meta-Analysis.
Memisevic, Haris; Djordjevic, Mirjana
2018-01-01
Visual-motor integration (VMI) skills, defined as the coordination of fine motor and visual perceptual abilities, are a very good indicator of a child's overall level of functioning. Research has clearly established that children with intellectual disability (ID) have deficits in VMI skills. This article presents a meta-analytic review of 10 research studies involving 652 children with mild ID for which a VMI skills assessment was also available. We measured the standardized mean difference (Hedges' g) between scores on VMI tests of these children with mild ID and either typically developing children's VMI test scores in these studies or normative mean values on VMI tests used by the studies. While mild ID is defined in part by intelligence scores that are two to three standard deviations below those of typically developing children, the standardized mean difference of VMI differences between typically developing children and children with mild ID in this meta-analysis was 1.75 (95% CI [1.11, 2.38]). Thus, the intellectual and adaptive skill deficits of children with mild ID may be greater (perhaps especially due to their abstract and conceptual reasoning deficits) than their relative VMI deficits. We discuss the possible meaning of this relative VMI strength among children with mild ID and suggest that their stronger VMI skills may be a target for intensive academic interventions as a means of attenuating problems in adaptive functioning.
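The 1.75 effect size reported above is a Hedges' g, i.e. Cohen's d (standardized mean difference with pooled SD) multiplied by the small-sample bias correction J = 1 - 3/(4*df - 1). A minimal computation from group summary statistics:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction
    applied to Cohen's d computed from a pooled standard deviation."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (mean1 - mean2) / s_pooled
    j = 1.0 - 3.0 / (4.0 * df - 1.0)  # small-sample bias correction
    return j * d

# Illustrative group summaries (hypothetical, not data from the studies):
g = hedges_g(100.0, 15.0, 30, 73.75, 15.0, 30)  # d = 1.75 before correction
```

Since J < 1, g is always slightly smaller in magnitude than the uncorrected d, with the difference shrinking as the combined sample size grows.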
Clement, Cristina C.; Aphkhazava, David; Nieves, Edward; Callaway, Myrasol; Olszewski, Waldemar; Rotzschke, Olaf; Santambrogio, Laura
2013-01-01
In this study a proteomic approach was used to define the protein content of matched samples of afferent prenodal lymph and plasma derived from healthy volunteers. The analysis was performed using two analytical methodologies coupled with nanoliquid chromatography-tandem mass spectrometry: one-dimensional gel electrophoresis (1DEF nanoLC Orbitrap–ESI–MS/MS), and two-dimensional fluorescence difference-in-gel electrophoresis (2D-DIGE nanoLC–ESI–MS/MS). The 253 significantly identified proteins (p<0.05), obtained from the tandem mass spectrometry data, were further analyzed with pathway analysis (IPA) to define the functional signature of prenodal lymph and matched plasma. The 1DEF coupled with nanoLC–MS–MS revealed that the common proteome between the two biological fluids (144 out of 253 proteins) was dominated by complement activation and blood coagulation components, transporters and protease inhibitors. The enriched proteome of human lymph (72 proteins) consisted of products derived from the extracellular matrix, apoptosis and cellular catabolism. In contrast, the enriched proteome of human plasma (37 proteins) consisted of soluble molecules of the coagulation system and cell–cell signaling factors. The functional networks associated with both common and source-distinctive proteomes highlight the principal biological activity of these immunologically relevant body fluids. PMID:23202415
Lemasson, Elise; Bertin, Sophie; Hennig, Philippe; Boiteux, Hélène; Lesellier, Eric; West, Caroline
2015-08-21
Impurity profiling of organic products that are synthesized as possible drug candidates requires complementary analytical methods to ensure that all impurities are identified. Supercritical fluid chromatography (SFC) is a very useful tool to achieve this objective, as an adequate selection of stationary phases can provide orthogonal separations so as to maximize the chances to see all impurities. In this series of papers, we have developed a method for achiral SFC-MS profiling of drug candidates, based on a selection of 160 analytes issued from Servier Research Laboratories. In the first part of this study, focusing on mobile phase selection, a gradient elution with carbon dioxide and methanol comprising 2% water and 20 mM ammonium acetate proved to be the best in terms of chromatographic performance, while also providing good MS response [1]. The objective of this second part was the selection of an orthogonal set of ultra-high-performance stationary phases, which was carried out in two steps. Firstly, a reduced set of 20 analytes was used to screen 23 columns. The columns selected were all packed with 1.7-2.5 μm fully porous or 2.6-2.7 μm superficially porous particles, with a variety of stationary phase chemistries. Derringer desirability functions were used to rank the columns according to retention window, column efficiency evaluated with peak width of selected analytes, and the proportion of analytes successfully eluted with good peak shapes. The columns providing the worst performances were thus eliminated and a shorter selection of 11 columns was obtained. Secondly, based on the 160 tested analytes, the 11 columns were ranked again. The retention data obtained on these columns were then compared to define a reduced set of the best columns providing the greatest orthogonality, to maximize the chances to see all impurities within a limited number of runs. Two high-performance columns were thus selected: ACQUITY UPC(2) HSS C18 SB and Nucleoshell HILIC.
Copyright © 2015 Elsevier B.V. All rights reserved.
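The Derringer desirability approach used above maps each performance metric onto a [0, 1] desirability scale and combines the individual desirabilities by a geometric mean, so a column that fails badly on any one metric is vetoed. A generic sketch of the machinery (the bounds and metric values are hypothetical, not those of the study):

```python
def desirability_larger_is_better(y, low, target, s=1.0):
    """Derringer-Suich one-sided desirability: 0 at or below `low`,
    1 at or above `target`, a power ramp (exponent s) in between."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** s

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; any zero vetoes the candidate."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical column scores: retention window, efficiency, fraction well eluted.
D = overall_desirability([0.8, 0.6, 0.9])
```

Ranking candidates by D then gives a single scalar that balances all metrics while still penalizing any outright failure.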
Big data analytics workflow management for eScience
NASA Astrophysics Data System (ADS)
Fiore, Sandro; D'Anca, Alessandro; Palazzo, Cosimo; Elia, Donatello; Mariello, Andrea; Nassisi, Paola; Aloisio, Giovanni
2015-04-01
In many domains such as climate and astrophysics, scientific data is often n-dimensional and requires tools that support specialized data types and primitives if it is to be properly stored, accessed, analysed and visualized. Currently, scientific data analytics relies on domain-specific software and libraries providing a huge set of operators and functionalities. However, most of these software packages fail at large scale since they: (i) are desktop-based, rely on local computing capabilities and need the data locally; (ii) cannot benefit from available multicore/parallel machines since they are based on sequential codes; (iii) do not provide declarative languages to express scientific data analysis tasks; and (iv) do not provide newer or more scalable storage models to better support data multidimensionality. Additionally, most of them: (v) are domain-specific, which also means they support a limited set of data formats, and (vi) do not provide workflow support to enable the construction, execution and monitoring of more complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides several parallel operators to manipulate large datasets. Some relevant examples include: (i) data sub-setting (slicing and dicing), (ii) data aggregation, (iii) array-based primitives (the same operator applies to all the implemented UDF extensions), (iv) data cube duplication, (v) data cube pivoting, (vi) NetCDF import and export. Metadata operators are available too. Additionally, the Ophidia framework provides array-based primitives to perform data sub-setting, data aggregation (e.g. max, min, avg), array concatenation, algebraic expressions and predicate evaluation on large arrays of scientific data. Bit-oriented plugins have also been implemented to manage binary data cubes.
Defining processing chains and workflows with tens or hundreds of data analytics operators is the real challenge in many practical scientific use cases. This talk will specifically address the main needs, requirements and challenges regarding data analytics workflow management applied to large scientific datasets. Three real use cases concerning analytics workflows for sea situational awareness, fire danger prevention, climate change and biodiversity will be discussed in detail.
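The slicing and aggregation primitives listed above can be illustrated with a toy in-memory data cube; this is a hypothetical sketch of the operator style (subset a dimension, then reduce), not the Ophidia API:

```python
# A toy "datacube" keyed by coordinate tuples, e.g. (time, location) -> value.
class MiniCube:
    def __init__(self, data):
        self.data = dict(data)

    def subset(self, axis, value):
        """Slicing: keep only cells whose coordinate on `axis` equals `value`."""
        return MiniCube({k: v for k, v in self.data.items() if k[axis] == value})

    def aggregate(self, fn):
        """Reduce all remaining cells with fn (e.g. max, min)."""
        return fn(self.data.values())

# 3 time steps x 4 locations, synthetic values.
cube = MiniCube({(t, x): 10 * t + x for t in range(3) for x in range(4)})
snapshot_max = cube.subset(0, 2).aggregate(max)  # max over locations at t == 2
```

Chaining such operators (subset, aggregate, export) is exactly what a workflow manager must schedule and monitor once the cubes no longer fit on one machine.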
Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas
NASA Astrophysics Data System (ADS)
Izacard, Olivier
2016-08-01
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency of modeling numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms, removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; rather, we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is supported by new understandings of the experimental discrepancy in the electron temperature measured by two diagnostics at JET.
As the main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.
Towards a comprehensive city emission function (CCEF)
NASA Astrophysics Data System (ADS)
Kocifaj, Miroslav
2018-01-01
The comprehensive city emission function (CCEF) is developed for heterogeneous light-emitting or light-blocking urban environments, embracing any combination of input parameters that characterize linear dimensions in the system (size of and distances between buildings or luminaires), properties of light-emitting elements (such as luminous building façades and street lighting), ground reflectance and the total uplight fraction, all defined for an arbitrarily sized 2D area. The analytical formula obtained is not restricted to a single model class, as it can capture any specific light-emission feature for a wide range of cities. The CCEF method is numerically fast, in contrast to what can be expected of other probabilistic approaches that rely on repeated random sampling. Hence the present solution has great potential in light-pollution modeling and can be included in larger numerical models. Our theoretical findings promise great progress in light-pollution modeling, as this is the first time an analytical solution to the city emission function (CEF) has been developed that depends on the statistical mean size and height of city buildings, inter-building separation, prevailing heights of light fixtures, lighting density, and other factors such as luminaire light output and light distribution, including the amount of uplight, and representative city size. The model is validated for sensitivity and specificity pertinent to combinations of input parameters in order to test its behavior under various conditions, including those that can occur in complex urban environments. It is demonstrated that the solution succeeds in reproducing a light-emission peak at some elevated zenith angles and is consistent with reduced rather than enhanced emission in directions nearly parallel to the ground.
NASA Astrophysics Data System (ADS)
Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan; Geuder, Norbert; Habte, Aron; Schwandt, Marko; Vignola, Frank
2016-05-01
Concentrating solar power projects require accurate direct normal irradiance (DNI) data, including uncertainty specifications, for plant layout and cost calculations. Ground-measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far mostly empirical studies have been published, or decisive uncertainty influences had to be estimated from experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error, because Si-photodiode pyranometers only detect visible and near-infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytic studies did not discuss the role of correction functions or the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain were selected for the analysis. Average standard uncertainties for corrected 10 min data of 2% for global horizontal irradiance (GHI) and 2.9% for DNI (for GHI and DNI over 300 W/m²) were found for the 2012 yearly dataset when separate GHI and DHI calibration constants were used. The uncertainty at 1 min resolution was also analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with results of previous empirical studies.
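The core GUM propagation step behind such an uncertainty budget combines sensitivity-weighted input uncertainties in quadrature. An illustrative sketch for DNI derived from an RSI as (GHI - DHI)/cos(theta_z), assuming uncorrelated inputs; this shows only the propagation mechanics, not the paper's full RSI budget (which also covers spectral and shading effects):

```python
import math

def dni_uncertainty(ghi, dhi, sza_rad, u_ghi, u_dhi, u_sza):
    """Combined standard uncertainty of DNI = (GHI - DHI)/cos(sza),
    GUM-style: quadrature sum of sensitivity-coefficient-weighted
    standard uncertainties of uncorrelated inputs."""
    cos_z = math.cos(sza_rad)
    c_ghi = 1.0 / cos_z                                  # dDNI/dGHI
    c_dhi = -1.0 / cos_z                                 # dDNI/dDHI
    c_sza = (ghi - dhi) * math.sin(sza_rad) / cos_z**2   # dDNI/d(sza)
    return math.sqrt((c_ghi * u_ghi) ** 2 + (c_dhi * u_dhi) ** 2 + (c_sza * u_sza) ** 2)

# Hypothetical 10 min values: W/m² for irradiances, radians for the zenith angle.
u = dni_uncertainty(800.0, 100.0, 0.6, 10.0, 5.0, 0.002)
```

At zenith the angle term vanishes, so the combined uncertainty reduces to the quadrature sum of the irradiance contributions; it grows toward low sun as 1/cos(theta_z).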
Mowlavi, Ali Asghar; Fornasier, Maria Rossa; Mirzaei, Mohammd; Bregant, Paola; de Denaro, Mario
2014-10-01
The beta and gamma absorbed fractions in organs and tissues are key factors in radionuclide internal dosimetry based on the Medical Internal Radiation Dose (MIRD) approach. The aim of this study is to find suitable analytical functions for the beta and gamma absorbed fractions in spherical and ellipsoidal volumes with a uniform distribution of the iodine-131 radionuclide. The MCNPX code has been used to calculate the energy absorbed from beta and gamma rays of iodine-131 uniformly distributed inside different ellipsoids and spheres, and the absorbed fractions have then been evaluated. We have found the fit parameters of a suitable analytical function for the beta absorbed fraction, depending on a generalized radius for the ellipsoid based on the radius of a sphere, and a linear fit function for the gamma absorbed fraction. The analytical functions obtained by fitting the Monte Carlo data can be used to obtain the absorbed fractions of iodine-131 beta and gamma rays for any volume of the thyroid lobe. Moreover, our results for the spheres are in good agreement with the results of MIRD and other published studies.
Quantum mechanical reality according to Copenhagen 2.0
NASA Astrophysics Data System (ADS)
Din, Allan M.
2016-05-01
The long-standing conceptual controversies concerning the interpretation of nonrelativistic quantum mechanics are argued, on one hand, to be due to its incompleteness, as affirmed by Einstein. But on the other hand, it appears to be possible to complete it at least partially, as Bohr might have appreciated it, in the framework of its standard mathematical formalism with observables as appropriately defined self-adjoint operators. This completion of quantum mechanics is based on the requirement on laboratory physics to be effectively confined to a bounded space region and on the application of the von Neumann deficiency theorem to properly define a set of self-adjoint extensions of standard observables, e.g. the momenta and the Hamiltonian, in terms of certain isometries on the region boundary. This is formalized mathematically in the setting of a boundary ontology for the so-called Qbox in which the wave function acquires a supplementary dependence on a set of Additional Boundary Variables (ABV). It is argued that a certain geometric subset of the ABV parametrizing Quasi-Periodic Translational Isometries (QPTI) has a particular physical importance by allowing for the definition of an ontic wave function, which has the property of epitomizing the spatial wave function “collapse.” Concomitantly the standard wave function in an unbounded geometry is interpreted as an epistemic wave function, which together with the ontic QPTI wave function gives rise to the notion of two-wave duality, replacing the standard concept of wave-particle duality. More generally, this approach to quantum physics in a bounded geometry provides a novel analytical basis for a better understanding of several conceptual notions of quantum mechanics, including reality, nonlocality, entanglement and Heisenberg’s uncertainty relation. 
The scope of this analysis may be seen as a foundational update of the multiple versions 1.x of the Copenhagen interpretation of quantum mechanics, which is sufficiently incremental so as to be appropriately characterized as Copenhagen 2.0.
Durning, Steven J; Costanzo, Michelle E; Beckman, Thomas J; Artino, Anthony R; Roy, Michael J; van der Vleuten, Cees; Holmboe, Eric S; Lipner, Rebecca S; Schuwirth, Lambert
2016-06-01
Diagnostic reasoning involves the thinking steps up to and including arrival at a diagnosis. Dual process theory posits that a physician's thinking is based on both non-analytic thinking that is fast and subconscious, and analytic thinking that is slower, more conscious, effortful and characterized by comparing and contrasting alternatives. Expertise in clinical reasoning may relate to the two dimensions measured by the diagnostic thinking inventory (DTI): memory structure and flexibility in thinking. We explored the functional magnetic resonance imaging (fMRI) correlates of these two aspects of the DTI. Participants answered and reflected upon multiple-choice questions (MCQs) during fMRI, and the DTI was completed shortly after the scan. The brain processes associated with the two dimensions of the DTI were correlated with fMRI phases - assessing flexibility in thinking during analytical clinical reasoning, memory structure during non-analytical clinical reasoning, and the total DTI during both non-analytical and analytical reasoning in experienced physicians. Each DTI component was associated with distinct functional neuroanatomic activation patterns, particularly in the prefrontal cortex. Our findings support conceptual models of diagnostic thinking and indicate mechanisms through which cognitive demands may induce functional adaptation within the prefrontal cortex. This provides additional objective validity evidence for the use of the DTI in medical education and practice settings.
Enabling quaternion derivatives: the generalized HR calculus
Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C.; Mandic, Danilo P.
2015-01-01
Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis. PMID:26361555
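The non-commutativity that undermines naive product and chain rules is easy to demonstrate. The minimal sketch below implements the Hamilton product directly; it illustrates the obstacle the GHR calculus addresses, not the GHR derivatives themselves.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions given as arrays [w, x, y, z].
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])

print(qmul(i, j))  # i*j = k
print(qmul(j, i))  # j*i = -k, so quaternion multiplication does not commute
```

Because `qmul(p, q) != qmul(q, p)` in general, a derivative calculus over quaternions cannot simply reuse the real or complex product rule, which is the motivation for the GHR construction.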
van Heeringen, Kees; Bijttebier, Stijn; Desmyter, Stefanie; Vervaet, Myriam; Baeken, Chris
2014-01-01
Objective: We conducted meta-analyses of functional and structural neuroimaging studies comparing adolescent and adult individuals with a history of suicidal behavior and a psychiatric disorder to psychiatric controls in order to objectify changes in brain structure and function in association with a vulnerability to suicidal behavior. Methods: Magnetic resonance imaging studies published up to July 2013 investigating structural or functional brain correlates of suicidal behavior were identified through computerized and manual literature searches. Activation foci from 12 studies encompassing 475 individuals, i.e., 213 suicide attempters and 262 psychiatric controls, were subjected to meta-analytical study using anatomic or activation likelihood estimation (ALE). Results: Activation likelihood estimation revealed structural deficits and functional changes in association with a history of suicidal behavior. Structural findings included reduced volumes of the rectal gyrus, superior temporal gyrus and caudate nucleus. Functional differences between study groups included an increased reactivity of the anterior and posterior cingulate cortices. Discussion: A history of suicidal behavior appears to be associated with (probably interrelated) structural deficits and functional overactivation in brain areas, which contribute to a decision-making network. The findings suggest that a vulnerability to suicidal behavior can be defined in terms of a reduced motivational control over the intentional behavioral reaction to salient negative stimuli. PMID:25374525
Analytical Model for Thermal Elastoplastic Stresses of Functionally Graded Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, P. C.; Chen, G.; Liu, L. S.
2008-02-15
A modified analytical model is presented for the thermal elastoplastic stresses of functionally graded materials subjected to thermal loading. The presented model follows the analytical scheme of Y. L. Shen and S. Suresh [6]. In the present model, the functionally graded materials are treated as multilayered materials, each layer consisting of metal and ceramic with a different volume fraction. The ceramic layer and the FGM interlayers are treated as elastic brittle materials, while the metal layer is treated as an elastic-perfectly plastic ductile material. Closed-form solutions for the different characteristic temperatures under thermal loading are presented as functions of the structure geometries and the thermomechanical properties of the materials. A main advance of the present model is that it takes into account the possibility of the initiation and spread of plasticity from the two sides of the ductile layers. The thermal stresses and deformations from the present model are in good agreement with results from finite element analysis.
Engelmann, Brett W
2017-01-01
The Src Homology 2 (SH2) domain family primarily recognizes phosphorylated tyrosine (pY) containing peptide motifs. The relative affinity preferences among competing SH2 domains for phosphopeptide ligands define "specificity space," and underpins many functional pY mediated interactions within signaling networks. The degree of promiscuity exhibited and the dynamic range of affinities supported by individual domains or phosphopeptides is best resolved by a carefully executed and controlled quantitative high-throughput experiment. Here, I describe the fabrication and application of a cellulose-peptide conjugate microarray (CPCMA) platform to the quantitative analysis of SH2 domain specificity space. Included herein are instructions for optimal experimental design with special attention paid to common sources of systematic error, phosphopeptide SPOT synthesis, microarray fabrication, analyte titrations, data capture, and analysis.
Propagation of mechanical waves through a stochastic medium with spherical symmetry
NASA Astrophysics Data System (ADS)
Avendaño, Carlos G.; Reyes, J. Adrián
2018-01-01
We theoretically analyze the propagation of outgoing mechanical waves through an infinite isotropic elastic medium possessing spherical symmetry whose Lamé coefficients and density are spatial random functions characterized by well-defined statistical parameters. We derive the differential equation that governs the average displacement for a system whose properties depend on the radial coordinate. We show that such an equation is an extended version of the well-known Bessel differential equation whose perturbative additional terms contain coefficients that depend directly on the squared noise intensities and the autocorrelation lengths in an exponential decay fashion. We numerically solve the second order differential equation for several values of noise intensities and autocorrelation lengths and compare the corresponding displacement profiles with that of the exact analytic solution for the case of absent inhomogeneities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
SADE is a software package for rapidly assembling analytic pipelines to manipulate data. The package consists of the engine that manages the data and coordinates the movement of data between the tasks performing a function; a set of core libraries consisting of plugins that perform common tasks; and a framework to extend the system, supporting the development of new plugins. Currently, through configuration files, a pipeline can be defined that maps the routing of data through a series of plugins. Pipelines can be run in batch mode or can process streaming data; they can be executed from the command line or run through a Windows background service. There currently exist over a hundred plugins and over fifty pipeline configurations, and the software is now being used by about a half-dozen projects.
Migration of tungsten dust in tokamaks: role of dust-wall collisions
NASA Astrophysics Data System (ADS)
Ratynskaia, S.; Vignitchouk, L.; Tolias, P.; Bykov, I.; Bergsåker, H.; Litnovsky, A.; den Harder, N.; Lazzaro, E.
2013-12-01
The modelling of a controlled tungsten dust injection experiment in TEXTOR by the dust dynamics code MIGRAINe is reported. The code, in addition to the standard dust-plasma interaction processes, also encompasses major mechanical aspects of dust-surface collisions. The use of analytical expressions for the restitution coefficients as functions of the dust radius and impact velocity allows us to account for the sticking and rebound phenomena that define which parts of the dust size distribution can migrate efficiently. The experiment provided unambiguous evidence of long-distance dust migration; artificially introduced tungsten dust particles were collected 120° toroidally away from the injection point, but also a selectivity in the permissible size of transported grains was observed. The main experimental results are reproduced by modelling.
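A size-selective sticking criterion of the kind described can be sketched as follows: grains rebound above a critical impact velocity that falls with radius, so only part of the size distribution migrates. The R^(-5/6) scaling is typical of JKR-type adhesion models, but the prefactor `V0` and reference radius here are hypothetical placeholders, not values from the MIGRAINe code.

```python
import numpy as np

V0 = 0.5      # m/s, hypothetical critical sticking velocity for a 1-micron grain
R_REF = 1e-6  # m, reference radius for the scaling

def sticks(radius_m, impact_velocity_ms):
    # JKR-type scaling: smaller grains stick at higher impact velocities.
    v_crit = V0 * (radius_m / R_REF) ** (-5.0 / 6.0)
    return impact_velocity_ms < v_crit

# Small grains survive a 2 m/s impact by sticking; large grains rebound,
# which is the size selectivity seen in the collected dust.
print(sticks(0.1e-6, 2.0))
print(sticks(10e-6, 2.0))
```

Evaluating such a criterion at every wall collision is what lets the code decide which grains are removed from the migrating population.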
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
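A goal-seeking iteration of the kind described can be sketched as a Robbins-Monro stochastic approximation: adjust the controllable input until the noisy performance function hits the target. The noisy linear "simulation" below is a stand-in for a real simulation model, and the gain sequence `0.5/n` is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(42)
target = 4.0

def simulate(x):
    # Stand-in for a stochastic simulation: noisy performance measure J(x).
    return 2.0 * x + rng.normal(0.0, 0.1)

x = 0.0  # initial controllable input parameter
for n in range(1, 2001):
    y = simulate(x)
    x -= (0.5 / n) * (y - target)  # Robbins-Monro step toward the target

print(round(x, 2))  # estimate of the input achieving J(x) = target
```

The decreasing gain averages out the simulation noise, so the iterate settles near the input whose mean response equals the target (here x = 2, since J(x) = 2x on average).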
SSME main combustion chamber life prediction
NASA Technical Reports Server (NTRS)
Cook, R. T.; Fryk, E. E.; Newell, J. F.
1983-01-01
Typically, low cycle fatigue life is a function of the cyclic strain range, the material properties, and the operating temperature. The reusable life is normally defined by the number of strain cycles that can be accrued before severe material degradation occurs. Reusable life is normally signified by the initiation or propagation of surface cracks. Hot-fire testing of channel wall combustors has shown significant mid-channel wall thinning or deformation during accrued cyclic testing. This phenomenon is termed cyclic-creep and appears to be significantly accelerated at elevated surface temperatures. This failure mode was analytically modelled. The cyclic life of the baseline SSME-MCC based on measured calorimeter heat transfer data, and the life sensitivity of local hot spots caused by injector effects were determined. Four life enhanced designs were assessed.
Microcirculation and the physiome projects.
Bassingthwaighte, James B
2008-11-01
The Physiome projects comprise a loosely knit worldwide effort to define the Physiome through databases and theoretical models, with the goal of better understanding the integrative functions of cells, organs, and organisms. The projects involve developing and archiving models, providing centralized databases, and linking experimental information and models from many laboratories into self-consistent frameworks. Increasingly accurate and complete models that embody quantitative biological hypotheses, adhere to high standards, and are publicly available and reproducible, together with refined and curated data, will enable biological scientists to advance integrative, analytical, and predictive approaches to the study of medicine and physiology. This review discusses the rationale and history of the Physiome projects, the role of theoretical models in the development of the Physiome, and the current status of efforts in this area addressing the microcirculation.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
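Obtaining marginal and conditional densities from a tabulated joint density is a mechanical integration step, sketched below. A correlated Gaussian stands in for the derived non-Gaussian density; the procedure is the same either way.

```python
import numpy as np

# Grids for slope s and elevation e, and a correlated bivariate Gaussian
# as a stand-in joint density p(s, e).
s = np.linspace(-4.0, 4.0, 401)
e = np.linspace(-4.0, 4.0, 401)
ds, de = s[1] - s[0], e[1] - e[0]
S, E = np.meshgrid(s, e, indexing="ij")
rho = 0.3
norm = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho**2))
p_joint = norm * np.exp(-(S**2 - 2*rho*S*E + E**2) / (2.0 * (1.0 - rho**2)))

# Marginal density of slope: integrate out the elevation axis.
p_s = p_joint.sum(axis=1) * de
total_s = p_s.sum() * ds

# Conditional density of elevation given s = 0 (grid index 200).
p_e_given_s0 = p_joint[200, :] / p_s[200]
total_cond = p_e_given_s0.sum() * de

print(round(total_s, 4), round(total_cond, 4))  # both integrate to ~1
```

Replacing the Gaussian grid with the mapped non-Gaussian joint density gives the conditional and marginal densities discussed in the abstract.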
NASA Astrophysics Data System (ADS)
Ha, T.-K.; Günthard, H. H.
1989-07-01
Structural parameters such as bond lengths, bond angles, etc., and harmonic and anharmonic potential coefficients of molecules with internal rotation, inversion or puckering modes are generally assumed to vary with the large-amplitude internal coordinates in a concerted manner (relaxation). Taking the coordinate vectors of the nuclear configuration of semirigid molecules with relaxation (SRMRs) as functions of the relaxing structural parameters and the finite-amplitude internal coordinate, the isometric group of SRMRs is discussed and the irreducible representations of the latter are shown to classify into engendered and nonengendered ones. On this basis a concept of equivalent sets of nuclei of SRMRs is introduced and an analytical expression is derived which defines the most general functional form of relaxation increments of all common types of structural parameters compatible with isometric symmetry. This formula is shown to be a close analog of an analytical expression defining the transformations induced by the isometric group on infinitesimal internal coordinates associated with typical structural parameters. Furthermore, analogous formulae are given for the most general form of the relaxation of harmonic potential coefficients as a function of finite internal coordinates. The general relations are illustrated by ab initio calculations for 1,2-difluoroethane at the MP4/DZP//HF/4-31G* level for twelve values of the dihedral angle, including complete structure optimization. The potential to internal rotation is found to be in essential agreement with experimentally derived data. For a complete set of ab initio structural parameters the associated relaxation increments are represented as Fourier series, which are shown to confirm the form predicted by the general formula and the isometric group of 1,2-difluoroethane. Depending on the type of structural parameter (bond lengths, bond angles, etc.), the associated relaxation increments appear to follow some simple rules.
Similarly a complete set of harmonic potential coefficients derived from the ab initio calculations will be analyzed in terms of Fourier series and shown to conform to the symmetry requirements of the symmetry group. Relaxation of potential coefficients is found to amount to up to ≈5% for some types of diagonal and nondiagonal terms and to reflect certain "topological" rules similar to regularities of harmonic potential constants of quasi-rigid molecules found in empirical determinations of valence force fields.
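The Fourier-series representation of relaxation increments can be sketched as a small least-squares problem: twelve samples over the dihedral angle fitted with a low-order cosine series (cosines only, reflecting evenness in the dihedral angle). The coefficients and "data" below are synthetic illustrations, not the ab initio values.

```python
import numpy as np

# Twelve dihedral angles from 0 to 180 degrees, matching the sampling above.
tau = np.linspace(0.0, np.pi, 12)

# Synthetic relaxation increment of a bond length (angstroms), built from a
# known cosine series so the fit can be checked.
dr = 0.002 - 0.004 * np.cos(tau) + 0.001 * np.cos(2.0 * tau)

# Design matrix for dr(tau) = a0 + a1 cos(tau) + a2 cos(2 tau) + a3 cos(3 tau).
A = np.column_stack([np.cos(k * tau) for k in range(4)])
coeffs, *_ = np.linalg.lstsq(A, dr, rcond=None)

print(np.round(coeffs, 6))  # a3 vanishes for this synthetic series
```

Symmetry requirements of the isometric group restrict which Fourier terms may appear; here that restriction is imposed simply by building the design matrix from the allowed (cosine) terms only.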
A history of cepstrum analysis and its application to mechanical problems
NASA Astrophysics Data System (ADS)
Randall, Robert B.
2017-12-01
It is not widely realised that the first paper on cepstrum analysis was published two years before the FFT algorithm, despite having Tukey as a common author, and its definition was such that it was not reversible even to the log spectrum. After publication of the FFT in 1965, the cepstrum was redefined so as to be reversible to the log spectrum, and shortly afterwards Oppenheim and Schafer defined the "complex cepstrum", which was reversible to the time domain. They also derived the analytical form of the complex cepstrum of a transfer function in terms of its poles and zeros. The cepstrum had been used in speech analysis for determining voice pitch (by accurately measuring the harmonic spacing), but also for separating the formants (transfer function of the vocal tract) from voiced and unvoiced sources, and this led quite early to similar applications in mechanics. The first was to gear diagnostics (Randall), where the cepstrum greatly simplified the interpretation of the sideband families associated with local faults in gears, and the second was to extraction of diesel engine cylinder pressure signals from acoustic response measurements (Lyon and Ordubadi). Later Polydoros defined the differential cepstrum, which had an analytical form similar to the impulse response function, and Gao and Randall used this and the complex cepstrum in the application of cepstrum analysis to modal analysis of mechanical structures. Antoni proposed the mean differential cepstrum, which gave a smoothed result. The cepstrum can be applied to MIMO systems if at least one SIMO response can be separated, and a number of blind source separation techniques have been proposed for this. Most recently it has been shown that even though it is not possible to apply the complex cepstrum to stationary signals, it is possible to use the real cepstrum to edit their (log) amplitude spectrum, and combine this with the original phase to obtain edited time signals.
This has already been used for a wide range of mechanical applications. A very powerful processing tool is an exponential "lifter" (window) applied to the cepstrum, which is shown to extract the modal part of the response (with a small extra damping of each mode corresponding to the window). This can then be used to suppress or enhance the modal information in the response according to the application.
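The real cepstrum and an exponential lifter are short computations, sketched below on a synthetic signal with a single echo. The echo delay appears as a sharp cepstral peak (the same mechanism used for harmonic/sideband spacing in the gear and speech applications), and the lifter windows the low-quefrency envelope part.

```python
import numpy as np

# Synthetic base signal (smooth, decaying) plus one circular echo at lag 20.
n = np.arange(256)
x = 0.95**n * np.cos(0.3 * n)
y = x + 0.9 * np.roll(x, 20)

def real_cepstrum(sig):
    # Real cepstrum: inverse FFT of the log magnitude spectrum.
    return np.fft.irfft(np.log(np.abs(np.fft.rfft(sig))), len(sig))

c = real_cepstrum(y)

# The echo produces a peak at its delay (quefrency 20); search away from the
# low-quefrency region occupied by the smooth spectral envelope.
peak = 10 + np.argmax(c[10:128])
print(int(peak))

# Exponential lifter: window the cepstrum to retain the envelope/modal part,
# at the cost of a small extra damping of each mode.
lifter = np.exp(-n / 30.0)
envelope_cepstrum = c * lifter
```

Because the log spectrum turns the multiplicative echo term into an additive ripple, the cepstrum separates it cleanly from the envelope, which is exactly the property the liftering exploits.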
Defining Higher-Order Turbulent Moment Closures with an Artificial Neural Network and Random Forest
NASA Astrophysics Data System (ADS)
McGibbon, J.; Bretherton, C. S.
2017-12-01
Unresolved turbulent advection and clouds must be parameterized in atmospheric models. Modern higher-order closure schemes depend on analytic moment closure assumptions that diagnose higher-order moments in terms of lower-order ones. These are then tested against Large-Eddy Simulation (LES) higher-order moment relations. However, these relations may not be neatly analytic in nature. Rather than rely on an analytic higher-order moment closure, can we use machine learning on LES data itself to define a higher-order moment closure? We assess the ability of a deep artificial neural network (NN) and random forest (RF) to perform this task using a set of observationally-based LES runs from the MAGIC field campaign. By training on a subset of 12 simulations and testing on the remaining simulations, we avoid over-fitting the training data. Performance of the NN and RF will be assessed and compared to the Analytic Double Gaussian 1 (ADG1) closure assumed by Cloudy Layers Unified By Binormals (CLUBB), a higher-order turbulence closure currently used in the Community Atmosphere Model (CAM). We will show that the RF outperforms the NN and the ADG1 closure for the MAGIC cases within this diagnostic framework. Progress and challenges in using a diagnostic machine learning closure within a prognostic cloud and turbulence parameterization will also be discussed.
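The evaluation protocol above, holding out entire simulations rather than random samples, can be sketched as follows. A linear least-squares closure stands in for the random forest (with scikit-learn one would substitute `RandomForestRegressor` under the same split), and the "LES" data are synthetic.

```python
import numpy as np

def make_simulation(seed):
    # Stand-in for one LES run: lower-order moments as features, a
    # higher-order moment as the diagnosed target.
    r = np.random.default_rng(seed)
    lower = r.normal(size=(500, 3))
    higher = lower @ np.array([0.5, -1.0, 0.2]) + 0.05 * r.normal(size=500)
    return lower, higher

# Split by simulation, not by sample: 12 runs for training, the rest held out.
train_sims = [make_simulation(s) for s in range(12)]
test_sims = [make_simulation(s) for s in range(12, 16)]

X_train = np.vstack([x for x, _ in train_sims])
y_train = np.concatenate([y for _, y in train_sims])
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

X_test = np.vstack([x for x, _ in test_sims])
y_test = np.concatenate([y for _, y in test_sims])
rmse = np.sqrt(np.mean((X_test @ w - y_test) ** 2))
print(round(rmse, 3))  # close to the 0.05 noise floor on held-out simulations
```

Holding out whole simulations is what makes the skill estimate honest: a random per-sample split would leak near-duplicate columns from the same run into both sets.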
Potential energy distribution function and its application to the problem of evaporation
NASA Astrophysics Data System (ADS)
Gerasimov, D. N.; Yurin, E. I.
2017-10-01
The distribution function of potential energy in a strongly correlated system can be calculated analytically. In an equilibrium system (for instance, in the bulk of a liquid) this distribution function depends only on the temperature and the mean potential energy, which can be found from the specific heat of vaporization. At the surface of the liquid the distribution function differs significantly, but its shape still admits an analytical expression. The distribution function of potential energy near the evaporation surface can be used in place of the work function of an atom of the liquid.
NASA Astrophysics Data System (ADS)
Lee, Gibbeum; Cho, Yeunwoo
2018-01-01
A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations are considered, which are related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of the analytical Legendre functions. After substituting them into the PSWF differential equation, a much smaller matrix eigenvalue problem is obtained than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed by the functional values of the PSWF and the eigenvalues obtained in the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues in the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, with the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation and the computation time of the present method is shorter than that of the semi-analytical method based on the sinusoidal functions.
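The direct numerical K-L baseline that the PSWF method is compared against can be sketched in a few lines: discretize the covariance kernel on the time grid, solve the symmetric eigenvalue problem, and truncate. The exponential covariance below is a generic stand-in for an ocean-wave spectrum.

```python
import numpy as np

# Time grid and a stand-in covariance kernel C(t, t') = exp(-|t - t'| / L).
t = np.linspace(0.0, 1.0, 200)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)
dt = t[1] - t[0]

# Discretized K-L integral equation: (C dt) phi = lambda phi on a uniform grid.
# eigh returns ascending eigenvalues; reverse to descending K-L order.
vals, vecs = np.linalg.eigh(C * dt)
vals, vecs = vals[::-1], vecs[:, ::-1]

# Truncate at the number of modes capturing 99% of the variance.
energy = np.cumsum(vals) / np.sum(vals)
m = int(np.searchsorted(energy, 0.99)) + 1
print(m)
```

The cost of this baseline grows with the grid size (a dense N x N eigenproblem), which is the inaccuracy/memory pressure for big data that motivates the PSWF reformulation.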
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1993-01-01
The investigation of overcoming the Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points, including at the discontinuities themselves, from knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L(sub 2) function f(x) in terms of either the trigonometric polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
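The starting point can be illustrated numerically: the truncated Fourier series of a square wave converges slowly pointwise, and even a simple Lanczos sigma filter improves accuracy inside a smooth sub-interval. The sigma filter is only a low-order illustration; the reprocessing described above goes much further, recovering exponential accuracy up to the discontinuities themselves.

```python
import numpy as np

# Fourier partial sum of the square wave sign(sin x) = sum over odd k of
# (4 / (pi k)) sin(kx), evaluated at a point inside a smooth sub-interval.
N = 64
x = 0.5
k = np.arange(1, N, 2)  # odd harmonics

raw = np.sum(4.0 / np.pi * np.sin(k * x) / k)

# Lanczos sigma factors sin(pi k/N)/(pi k/N) damp the slowly decaying tail.
sigma = np.sinc(k / N)
filtered = np.sum(4.0 / np.pi * sigma * np.sin(k * x) / k)

exact = 1.0  # sign(sin x) = 1 on (0, pi)
print(abs(raw - exact), abs(filtered - exact))
```

Both approximations use only the first N expansion coefficients, which is exactly the information the exponentially accurate reconstruction also starts from.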
Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere
NASA Technical Reports Server (NTRS)
Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.
1975-01-01
The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.
Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home
The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.
Calculation of periodic flows in a continuously stratified fluid
NASA Astrophysics Data System (ADS)
Vasiliev, A.
2012-04-01
An analytic theory of disturbances generated by an oscillating compact source in a viscous, continuously stratified fluid was constructed. An exact solution of the internal-wave generation problem was constructed taking into account diffusivity effects. The analysis is based on the set of fundamental equations of incompressible flows. The linearized problem of periodic flows in a continuously stratified fluid, generated by an oscillating part of an inclined plane, was solved by methods of singular perturbation theory. A rectangle or disc placed on a sloping plane and oscillating linearly in an arbitrary direction was selected as the source of disturbances. The solutions include regularly perturbed functions describing internal waves and a family of singularly perturbed functions associated with the dissipative components. One of the functions from the singular family has an analogue in a homogeneous fluid, namely the periodic Stokes flow. Its thickness is defined by a universal microscale depending on the kinematic viscosity coefficient and the buoyancy frequency, with a factor depending on the wave slope. The other singularly perturbed functions are specific to stratified flows. Their thicknesses are defined by the diffusion coefficient, the kinematic viscosity, and an additional factor depending on the geometry of the problem. Fields of fluid density, velocity, vorticity, pressure, energy density and flux, as well as the forces acting on the source, are calculated for different types of sources. It is shown that the most effective source of waves is the bi-piston. The complete 3D problem transforms in various limiting cases into the 2D problem for a source in a stratified or homogeneous fluid and the Stokes problem for an oscillating infinite plane. The case of the "critical" angle, that is, equality of the slope angles of the emitting surface and the wave cone, requires separate investigation. In this case, the number of singular components is preserved.
Patterns of the velocity and density fields were constructed and analyzed by methods of computational mathematics. The singular components of the solution affect the flow pattern of the inhomogeneous stratified fluid not only near the source of the waves but also at a large distance. Analytical calculations of the structure of the wave beams are matched with laboratory experiments. Some deviations at large distances from the source arise from the contribution of the background wave field associated with seiches in the laboratory tank. In a number of the experiments, vortices with closed contours were observed at some distance from the disc. The work was supported by the Ministry of Education and Science RF (Goscontract No. 16.518.11.7059); experiments were performed on the set-up USU "HPC IPMec RAS".
Default Trends in Major Postsecondary Education Sectors.
ERIC Educational Resources Information Center
Merisotis, Jamie P.
1988-01-01
Information on GSL defaults in five states is reviewed: California, Illinois, Massachusetts, New Jersey, and Pennsylvania. Default rates are defined and levels of default are examined using a variety of analytical methods. (Author/MLW)
Pinnaduwage, Lal A [Knoxville, TN; Thundat, Thomas G [Knoxville, TN; Brown, Gilbert M [Knoxville, TN; Hawk, John Eric [Olive Branch, MS; Boiadjiev, Vassil I [Knoxville, TN
2007-04-24
A chemically functionalized cantilever system has a cantilever coated on one side with a reagent or biological species which binds to an analyte. The system is of particular value when the analyte is a toxic chemical, a biological warfare agent, or an explosive.
openECA Detailed Design Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Russell
This document describes the functional and non-functional requirements for the openECA platform and the included analytic systems, which will: validate the operational readiness and performance of the openECA platform, and provide out-of-box value to those that implement the openECA platform with an initial collection of analytics.
Functional Analytic Psychotherapy and Supervision
ERIC Educational Resources Information Center
Callaghan, Glenn M.
2006-01-01
The interpersonal behavior therapy, Functional Analytic Psychotherapy (FAP) has been empirically investigated and described in the literature for a little over a decade. Still, little has been written about the process of supervision in FAP. While there are many aspects of FAP supervision shared by other contemporary behavior therapies and…
Analytic Result for the Two-loop Six-point NMHV Amplitude in N = 4 Super Yang-Mills Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Lance J.; /SLAC; Drummond, James M.
2012-02-15
We provide a simple analytic formula for the two-loop six-point ratio function of planar N = 4 super Yang-Mills theory. This result extends the analytic knowledge of multi-loop six-point amplitudes beyond those with maximal helicity violation. We make a natural ansatz for the symbols of the relevant functions appearing in the two-loop amplitude, and impose various consistency conditions, including symmetry, the absence of spurious poles, the correct collinear behavior, and agreement with the operator product expansion for light-like (super) Wilson loops. This information reduces the ansatz to a small number of relatively simple functions. In order to fix these parameters uniquely, we utilize an explicit representation of the amplitude in terms of loop integrals that can be evaluated analytically in various kinematic limits. The final compact analytic result is expressed in terms of classical polylogarithms, whose arguments are rational functions of the dual conformal cross-ratios, plus precisely two functions that are not of this type. One of the functions, the loop integral {Omega}{sup (2)}, also plays a key role in a new representation of the remainder function R{sub 6}{sup (2)} in the maximally helicity violating sector. Another interesting feature at two loops is the appearance of a new (parity odd) x (parity odd) sector of the amplitude, which is absent at one loop, and which is uniquely determined in a natural way in terms of the more familiar (parity even) x (parity even) part. The second non-polylogarithmic function, the loop integral {tilde {Omega}}{sup (2)}, characterizes this sector. Both {Omega}{sup (2)} and {tilde {Omega}}{sup (2)} can be expressed as one-dimensional integrals over classical polylogarithms with rational arguments.
[Projective identification, mimesis and the analytical situation. Preliminary observations].
Ruberto, A; Lucchi, N; Senesi, P; Gaston, A
1990-01-01
In an attempt to underline the need to refer to an imaginary setting, in which the analytical relationship is acted out, the Authors have considered the possible relations between the concept of projective identification, as defined by Klein and further developed by Bion, and the idea of "Mimesis", which is inevitably involved in every story, and which confronts the imaginary at the very moment in which it is produced. The "fusion" between subject and object, which may occur in a more or less partial manner, is defined as a phenomenal demonstration of the participation of the two poles of the relationship in a "super-individual" experience which embraces them both. The mythical image of the hunter. Anyone is, in our opinion, a paradigmatic element in this form of "meeting" which takes place within an impersonal and illusionary dimension.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillespie, B.M.; Stromatt, R.W.; Ross, G.A.
This data package contains the results obtained by Pacific Northwest Laboratory (PNL) staff in the characterization of samples for the 101-SY Hydrogen Safety Project. The samples were submitted for analysis by Westinghouse Hanford Company (WHC) under the Technical Project Plan (TPP) 17667 and the Quality Assurance Plan MCS-027. They came from a core taken during Window "C" after the May 1991 gas release event. The analytical procedures required for analysis were defined in the Test Instructions (TI) prepared by the PNL 101-SY Analytical Chemistry Laboratory (ACL) Project Management Office in accordance with the TPP and the QA Plan. The requested analysis for these samples was volatile organic analysis. The quality control (QC) requirements for each sample are defined in the Test Instructions for each sample. The QC requirements outlined in the procedures and requested in the WHC statement of work were followed.
Wavelets and the Poincaré half-plane
NASA Astrophysics Data System (ADS)
Klauder, J. R.; Streater, R. F.
1994-01-01
A square-integrable signal of positive energy is transformed into an analytic function in the upper half-plane, on which SL(2,R) acts. It is shown that this analytic function is determined by its scalar products with the discrete family of functions obtained by acting with SL(2,Z) on a cyclic vector, provided that the spin of the representation is less than 3.
[Urodynamics foundations: contractile potency and urethral doppler].
Benítez Navío, Julio; Caballero Gómez, Pilar; Delgado Elipe, Ildefonso
2002-12-01
To calculate the bladder softening factor, elastic constant and contractile potency. For the analysis we modeled bladder behavior as that of a spring (see articles 1 and 2 published in this issue). Using flowmetry, Doppler ultrasound and abdominal pressure (transrectal pressure register catheter), we sought an analytical solution that permits calculation of the factors defining bladder behavior. Doppler ultrasound allows us to know urine velocity through the prostatic urethra and, therefore, to calculate bladder contractile potency. The equations are solved, reaching an analytical solution for the factors that define bladder behavior: bladder contractile potency, the detrusor elastic constant (considering that it behaves like a spring), and muscle resistance to movement, all thanks to Doppler ultrasound measurement of urine velocity. The bladder voiding phase is defined with the aforementioned factors; storage phase behavior can be indirectly inferred. Only uroflowmetry curves, Doppler ultrasound and the abdominal pressure value are used. We comply with so-called non-invasive urodynamics, although for us it is just another phase in the biomechanical study of the detrusor muscle. The main conclusion is the addition of Doppler ultrasound to the urodynamist's armamentarium as an essential instrument for the comprehension of bladder dynamics and the calculation of the factors defining bladder behavior. It is not a change in the focus but in the methods, gaining knowledge and diminishing invasiveness.
Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions
NASA Astrophysics Data System (ADS)
Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus
2017-10-01
We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.
ERIC Educational Resources Information Center
Bowen, Sarah; Haworth, Kevin; Grow, Joel; Tsai, Mavis; Kohlenberg, Robert
2012-01-01
Functional Analytic Psychotherapy (FAP; Kohlenberg & Tsai, 1991) aims to improve interpersonal relationships through skills intended to increase closeness and connection. The current trial assessed a brief mindfulness-based intervention informed by FAP, in which an interpersonal element was added to a traditional intrapersonal mindfulness…
USDA-ARS?s Scientific Manuscript database
As sample preparation and analytical techniques have improved, data handling has become the main limitation in automated high-throughput analysis of targeted chemicals in many applications. Conventional chromatographic peak integration functions rely on complex software and settings, but untrustwor...
Functional Analytic Psychotherapy for Interpersonal Process Groups: A Behavioral Application
ERIC Educational Resources Information Center
Hoekstra, Renee
2008-01-01
This paper is an adaptation of Kohlenberg and Tsai's work, Functional Analytical Psychotherapy (1991), or FAP, to group psychotherapy. This author applied a behavioral rationale for interpersonal process groups by illustrating key points with a hypothetical client. Suggestions are also provided for starting groups, identifying goals, educating…
Functional Analytic Psychotherapy with Juveniles Who Have Committed Sexual Offenses
ERIC Educational Resources Information Center
Newring, Kirk A. B.; Wheeler, Jennifer G.
2012-01-01
We have previously discussed the application of Functional Analytic Psychotherapy (FAP) with adults who have committed sexual offense behaviors (Newring & Wheeler, 2010). The present entry borrows heavily from the foundation presented in that chapter, and extends this approach to working with adolescents, youth, and juveniles with sexual offense…
Equifinality in Functional Analytic Psychotherapy: Different Strokes for Different Folks
ERIC Educational Resources Information Center
Darrow, Sabrina M.; Dalto, Georgia; Follette, William C.
2012-01-01
Functional Analytic Psychotherapy (FAP) is an interpersonal behavior therapy that relies on a therapist's ability to contingently respond to in-session client behavior. Valued behavior change in clients results from the therapist shaping more effective client interpersonal behaviors by providing effective social reinforcement when these behaviors…
Promoting Efficacy Research on Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Maitland, Daniel W. M.; Gaynor, Scott T.
2012-01-01
Functional Analytic Psychotherapy (FAP) is a form of therapy grounded in behavioral principles that utilizes therapist reactions to shape target behavior. Despite a growing literature base, there is a paucity of research to establish the efficacy of FAP. As a general approach to psychotherapy, and how the therapeutic relationship produces change,…
NASA Astrophysics Data System (ADS)
Touil, B.; Bendib, A.; Bendib-Kalache, K.
2017-02-01
The longitudinal dielectric function is derived analytically from the relativistic Vlasov equation for arbitrary values of the relevant parameter z = mc²/T, where m is the rest electron mass, c is the speed of light, and T is the electron temperature in energy units. A new analytical approach based on the Legendre polynomial expansion and continued fractions was used, and an analytical expression for the electron distribution function was derived. The real part of the dispersion relation and the damping rate of electron plasma waves are calculated both analytically and numerically over the whole range of the parameter z. The results obtained significantly improve on previous results reported in the literature. For practical purposes, explicit expressions for the real part of the dispersion relation and the damping rate in the range z > 30 and in the strongly relativistic regime are also proposed.
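The record gives no formulas beyond z = mc²/T, but the equilibrium that such relativistic dispersion calculations start from is the Maxwell-Jüttner (relativistic Maxwellian) distribution, whose normalization involves the modified Bessel function K₂(z). A minimal numerical check of that standard normalization (variable names are our own, not the paper's):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn  # modified Bessel function of the second kind, K_n

def juttner(p, z):
    """Isotropic Maxwell-Juettner momentum distribution with m = c = 1 and
    unit density:
        f(p) = z / (4*pi*K_2(z)) * exp(-z*gamma),  gamma = sqrt(1 + p^2).
    """
    gamma = np.sqrt(1.0 + p**2)
    return z / (4.0 * np.pi * kn(2, z)) * np.exp(-z * gamma)

# Check the K_2 normalization numerically for z = 10 (T = 0.1 m c^2):
# integrating f over momentum space should recover the unit density.
z = 10.0
density, _ = quad(lambda p: 4.0 * np.pi * p**2 * juttner(p, z), 0.0, np.inf)
```

As z grows this distribution approaches the non-relativistic Maxwellian, which is why the paper can give simplified explicit expressions for z > 30.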
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandon, Lav; Kuhn, Kevin J; Drake, Lawrence R
Los Alamos National Laboratory's (LANL) Actinide Analytical Chemistry (AAC) group has been in existence since the Manhattan Project. It maintains a complete set of analytical capabilities for performing complete characterization (elemental assay, isotopic, metallic and non-metallic trace impurities) of uranium and plutonium samples in different forms. For a majority of the customers there are strong quality assurance (QA) and quality control (QC) objectives, including the highest accuracy and precision with well defined uncertainties associated with the analytical results. Los Alamos participates in various international and national programs, such as the Plutonium Metal Exchange Program, New Brunswick Laboratory's (NBL's) Safeguards Measurement Evaluation Program (SME) and several other inter-laboratory round robin exercises, to monitor and evaluate the data quality generated by AAC. These programs also provide independent verification of analytical measurement capabilities, and allow any technical problems with analytical measurements to be identified and corrected. This presentation will focus on key analytical capabilities for destructive analysis in AAC and also comparative data between LANL and peer groups for Pu assay and isotopic analysis.
Assessment of analytical techniques for predicting solid propellant exhaust plumes
NASA Technical Reports Server (NTRS)
Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.
1977-01-01
The calculation of solid propellant exhaust plume flow fields is addressed. Two major areas covered are: (1) the applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size and particle size distributions, and (2) thermochemical modeling of the gaseous phase of the flow field. Comparisons of experimentally measured and analytically predicted data are made. The experimental data were obtained for subscale solid propellant motors with aluminum loadings of 2, 10 and 15%. Analytical predictions were made using a fully coupled two-phase numerical solution. Data comparisons will be presented for radial distributions at plume axial stations of 5, 12, 16 and 20 diameters.
The case for visual analytics of arsenic concentrations in foods.
Johnson, Matilda O; Cohly, Hari H P; Isokpehi, Raphael D; Awofolu, Omotayo R
2010-05-01
Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary in foods, making it impractical to provide a regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species.
The general 2-D moments via integral transform method for acoustic radiation and scattering
NASA Astrophysics Data System (ADS)
Smith, Jerry R.; Mirotznik, Mark S.
2004-05-01
The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, M.D.
Analytical Chemistry of PCBs offers a review of physical, chemical, commercial, environmental and biological properties of PCBs. It also defines and discusses six discrete steps of analysis: sampling, extraction, cleanup, determination, data reduction, and quality assurance. The final chapter provides a discussion on collaborative testing - the ultimate step in method evaluation. Dr. Erickson also provides a bibliography of over 1200 references, critical reviews of primary literature, and five appendices which present ancillary material on PCB nomenclature, physical properties, composition of commercial mixtures, mass spectra characteristics, and PGC/ECD chromatograms.
Measures and metrics for software development
NASA Technical Reports Server (NTRS)
1984-01-01
The evaluations of and recommendations for the use of software development measures based on the practical and analytical experience of the Software Engineering Laboratory are discussed. The basic concepts of measurement and system of classification for measures are described. The principal classes of measures defined are explicit, analytic, and subjective. Some of the major software measurement schemes appearing in the literature are derived. The applications of specific measures in a production environment are explained. These applications include prediction and planning, review and assessment, and evaluation and selection.
Analytical investigation of thermal barrier coatings on advanced power generation gas turbines
NASA Technical Reports Server (NTRS)
Amos, D. J.
1977-01-01
An analytical investigation of present and advanced gas turbine power generation cycles incorporating thermal barrier turbine component coatings was performed. Approximately 50 parametric points considering simple, recuperated, and combined cycles (including gasification) with gas turbine inlet temperatures from current levels through 1644K (2500 F) were evaluated. The results indicated that thermal barriers would be an attractive means to improve performance and reduce cost of electricity for these cycles. A recommended thermal barrier development program has been defined.
Analytical electron microscopic studies and positron lifetime measurements in Al-doped MgO crystals
NASA Astrophysics Data System (ADS)
Pedrosa, M. A.; Pareja, R.; González, R.; Abraham, M. M.
1987-07-01
MgO crystals intentionally doped with Al were characterized by analytical electron microscopic examinations and positron lifetime measurements. Large spinel (MgO·Al2O3) precipitates were observed in samples with high contents of Al. A well-defined crystallographic relationship between the precipitates and the matrix was found. The characteristics of the positron lifetime spectra appear to depend on the valence state of the different impurities in the MgO lattice, suggesting that positrons are trapped by vacancy-impurity complexes.
NASA Technical Reports Server (NTRS)
Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.
1977-01-01
An analysis of experimental nozzle, exhaust plume, and exhaust plume impingement data is presented. The data were obtained for subscale solid propellant motors with propellant Al loadings of 2, 10 and 15% exhausting to simulated altitudes of 50,000, 100,000 and 112,000 ft. Analytical predictions were made using a fully coupled two-phase method of characteristics numerical solution and a technique for defining thermal and pressure environments experienced by bodies immersed in two-phase exhaust plumes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiotelis, Nicos; Popolo, Antonino Del, E-mail: adelpopolo@oact.inaf.it, E-mail: hiotelis@ipta.demokritos.gr
We construct an integral equation for the first crossing distributions for fractional Brownian motion in the case of a constant barrier, and we present an exact analytical solution. Additionally, we present first crossing distributions derived by simulating paths from fractional Brownian motion. We compare the results of the analytical solutions with both those of simulations and those of some approximated solutions which have been used in the literature. Finally, we present multiplicity functions for dark matter structures resulting from our analytical approach and compare them with those resulting from N-body simulations. We show that the results of analytical solutions are in good agreement with those of path simulations but differ significantly from those derived from approximated solutions. Additionally, multiplicity functions derived from fractional Brownian motion are poor fits of those which result from N-body simulations. We also present comparisons with other models which exist in the literature and discuss different ways of improving the agreement between analytical results and N-body simulations.
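The exact integral-equation solution is not reproduced in the record, but the simulation route the authors compare against can be sketched directly: sample fractional Brownian motion paths exactly via Cholesky factorization of the fBm covariance, then record the first time each path reaches a constant barrier. All parameter choices below (H = 0.7, barrier 0.5) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fbm_paths(n_steps, n_paths, hurst, t_max=1.0, seed=None):
    """Sample fractional Brownian motion exactly via Cholesky factorization
    of the fBm covariance
        Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |s - t|^{2H}).
    Returns (times, paths) with paths of shape (n_paths, n_steps)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(t_max / n_steps, t_max, n_steps)  # exclude t = 0 (degenerate)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    chol = np.linalg.cholesky(cov)
    return t, (chol @ rng.standard_normal((n_steps, n_paths))).T

def first_crossing_times(times, paths, barrier):
    """First time each path reaches the constant barrier (np.nan if never)."""
    crossed = paths >= barrier
    hit = crossed.any(axis=1)
    idx = crossed.argmax(axis=1)
    out = np.full(paths.shape[0], np.nan)
    out[hit] = times[idx[hit]]
    return out

t, paths = fbm_paths(n_steps=200, n_paths=2000, hurst=0.7, seed=42)
tau = first_crossing_times(t, paths, barrier=0.5)
frac_crossed = np.isfinite(tau).mean()  # empirical crossing probability by t_max
```

A histogram of the finite entries of `tau` is the empirical first crossing distribution that would be compared with the analytical solution.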
Geochemical Constraints for Mercury's PCA-Derived Geochemical Terranes
NASA Astrophysics Data System (ADS)
Stockstill-Cahill, K. R.; Peplowski, P. N.
2018-05-01
PCA-derived geochemical terranes provide a robust, analytical means of defining these terranes using strictly geochemical inputs. Using the end members derived in this way, we are able to assess the geochemical implications for Mercury.
DOT National Transportation Integrated Search
2012-11-30
The objective of this project was to develop technical relationships between reliability improvement strategies and reliability performance metrics. This project defined reliability, explained the importance of travel time distributions for measuring...
Analytical Methods for Interconnection | Distributed Generation
Safety management of a complex R&D ground operating system
NASA Technical Reports Server (NTRS)
Connors, J.; Mauer, R. A.
1975-01-01
Report discusses safety program implementation for large R&D operating system. Analytical techniques are defined and suggested as tools for identifying potential hazards and determining means to effectively control or eliminate hazards.
Reconstructing metabolic flux vectors from extreme pathways: defining the alpha-spectrum.
Wiback, Sharon J; Mahadevan, Radhakrishnan; Palsson, Bernhard Ø
2003-10-07
The move towards genome-scale analysis of cellular functions has necessitated the development of analytical (in silico) methods to understand such large and complex biochemical reaction networks. One such method is extreme pathway analysis, which uses stoichiometry and thermodynamic irreversibility to define mathematically unique, systemic metabolic pathways. These extreme pathways form the edges of a high-dimensional convex cone in the flux space that contains all the attainable steady state solutions, or flux distributions, for the metabolic network. By definition, any steady state flux distribution can be described as a nonnegative linear combination of the extreme pathways. To date, much effort has been focused on calculating, defining, and understanding these extreme pathways. However, little work has been performed to determine how these extreme pathways contribute to a given steady state flux distribution. This study represents an initial effort aimed at defining how physiological steady state solutions can be reconstructed from a network's extreme pathways. In general, there is not a unique set of nonnegative weightings on the extreme pathways that produce a given steady state flux distribution but rather a range of possible values. This range can be determined using linear optimization to maximize and minimize the weightings of a particular extreme pathway in the reconstruction, resulting in what we have termed the alpha-spectrum. The alpha-spectrum defines which extreme pathways can and cannot be included in the reconstruction of a given steady state flux distribution and to what extent they individually contribute to the reconstruction. It is shown that accounting for transcriptional regulatory constraints can considerably shrink the alpha-spectrum.
The alpha-spectrum is computed and interpreted for two cases; first, optimal states of a skeleton representation of core metabolism that include transcriptional regulation, and second for human red blood cell metabolism under various physiological, non-optimal conditions.
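The maximization and minimization described above amount to a pair of linear programs per extreme pathway: optimize α_i subject to P·α = v and α ≥ 0, where the columns of P are the extreme pathways. A minimal sketch with SciPy's `linprog`; the toy matrix P and flux vector v are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Columns of P are extreme pathways; a steady state flux distribution v must
# satisfy v = P @ alpha with alpha >= 0.  Toy 2-reaction, 3-pathway network.
P = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
v = np.array([1.0, 1.0])

def alpha_spectrum(P, v):
    """Min/max admissible weighting of each extreme pathway (the alpha-spectrum),
    found by minimizing +alpha_i and -alpha_i subject to P @ alpha = v, alpha >= 0."""
    n = P.shape[1]
    spectrum = []
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo = linprog(c, A_eq=P, b_eq=v, bounds=[(0, None)] * n, method="highs")
        hi = linprog(-c, A_eq=P, b_eq=v, bounds=[(0, None)] * n, method="highs")
        spectrum.append((lo.fun, -hi.fun))
    return spectrum

spec = alpha_spectrum(P, v)  # here every pathway admits weights in [0, 1]
```

In this toy network any α = (1-t, 1-t, t) with t in [0, 1] reproduces v, so each pathway's alpha-spectrum is the full interval [0, 1]; regulatory constraints would be added as extra bounds or inequalities, shrinking these intervals.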
Falch, Ken Vidar; Detlefs, Carsten; Snigirev, Anatoly; Mathiesen, Ragnvald H
2018-01-01
Analytical expressions for the transmission cross-coefficients for x-ray microscopes based on compound refractive lenses are derived based on Gaussian approximations of the source shape and energy spectrum. The effects of partial coherence, defocus, beam convergence, as well as lateral and longitudinal chromatic aberrations are accounted for and discussed. Taking the incoherent limit of the transmission cross-coefficients, a compact analytical expression for the modulation transfer function of the system is obtained, and the resulting point, line and edge spread functions are presented. Finally, analytical expressions for optimal numerical aperture, coherence ratio, and bandwidth are given. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindberg, Michael J.
2010-09-28
Between October 14, 2009 and February 22, 2010, sediment samples were received from the 100-BC Decision Unit for geochemical studies. This is an analytical data report for sediments received from CHPRC at the 100-BC-5 OU. The analyses for this project were performed at the 325 building located in the 300 Area of the Hanford Site. The analyses were performed according to Pacific Northwest National Laboratory (PNNL) approved procedures and/or nationally recognized test procedures. The data sets include the sample identification numbers, analytical results, estimated quantification limits (EQL), and quality control data. The preparatory and analytical quality control requirements, calibration requirements, acceptance criteria, and failure actions are defined in the on-line QA plan 'Conducting Analytical Work in Support of Regulatory Programs' (CAW). This QA plan implements the Hanford Analytical Services Quality Assurance Requirements Documents (HASQARD) for PNNL.
Validation of Analytical Damping Ratio by Fatigue Stress Limit
NASA Astrophysics Data System (ADS)
Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul
2018-03-01
The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation, which describes the first mode damping ratio of a clamped-free cantilever beam under harmonic base excitation, by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, as opposed to other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was determined to be correct for cases when the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials C_k^μ(x) with the weight function (1 - x²)^(μ - 1/2) for any constant μ ≥ 0, of an L_1 function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
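To illustrate the exponential convergence that Gegenbauer expansions deliver on intervals of analyticity, the following sketch expands an analytic function in Gegenbauer polynomials with μ = 1 (the Chebyshev polynomials of the second kind, whose weighted norm is simply π/2) and measures the truncation error. This demonstrates only the convergence property for an analytic function; it is not the paper's full reconstruction from a partial sum of a discontinuous function.

```python
import numpy as np

def cheb_u(k, x):
    """Chebyshev polynomials of the second kind, U_k = C_k^1 (Gegenbauer, mu = 1),
    evaluated via the three-term recurrence U_{k+1} = 2x U_k - U_{k-1}."""
    u_prev, u = np.ones_like(x), 2.0 * x
    if k == 0:
        return u_prev
    for _ in range(k - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

def gegenbauer_coeffs(f, n_terms, n_quad=400):
    """Coefficients c_k = (2/pi) * int_{-1}^{1} f(x) U_k(x) sqrt(1-x^2) dx,
    computed with Gauss-Chebyshev (second kind) quadrature:
    nodes cos(j*pi/(n+1)), weights (pi/(n+1)) * sin^2(j*pi/(n+1))."""
    theta = np.arange(1, n_quad + 1) * np.pi / (n_quad + 1)
    x, w = np.cos(theta), (np.pi / (n_quad + 1)) * np.sin(theta) ** 2
    return [2.0 / np.pi * np.sum(w * f(x) * cheb_u(k, x)) for k in range(n_terms)]

# Expand the analytic function exp(x); the truncation error of the 20-term
# partial sum is exponentially small on the interior of [-1, 1].
c = gegenbauer_coeffs(np.exp, n_terms=20)
x = np.linspace(-0.9, 0.9, 101)
partial_sum = sum(ck * cheb_u(k, x) for k, ck in enumerate(c))
max_err = np.max(np.abs(partial_sum - np.exp(x)))
```

For a discontinuous f the same machinery is applied only on a subinterval of analyticity, after re-expanding the known spectral partial sum, which is the step the paper analyzes.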
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olesov, A V
2014-10-31
New inequalities are established for analytic functions satisfying Meiman's majorization conditions. As applications, estimates for the values of, and differential inequalities involving, rational trigonometric functions with an integer majorant on an interval of length less than the period and with prescribed poles positioned symmetrically relative to the real axis are given, as well as differential inequalities for trigonometric polynomials in some classes. These results improve several theorems due to Meiman, Genchev, Smirnov and Rusak. Bibliography: 27 titles.
Simple functionalization method for single conical pores with a polydopamine layer
NASA Astrophysics Data System (ADS)
Horiguchi, Yukichi; Goda, Tatsuro; Miyahara, Yuji
2018-04-01
Resistive pulse sensing (RPS) is an interesting analytical system in which micro- to nanosized pores are used to evaluate particles or small analytes. Recently, molecular immobilization techniques to improve the performance of RPS have been reported. The problem in functionalization for RPS is that molecular immobilization by chemical reaction is restricted by the pore material type. Herein, a simple functionalization is performed using mussel-inspired polydopamine as an intermediate layer to connect the pore material with functional molecules.
Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas
Izacard, Olivier
2016-08-02
In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET.
As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF, without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.
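The Kappa distribution invoked above differs from a Maxwellian mainly in its power-law super-thermal tail. As a rough illustration (not the paper's actual construction), the following sketch compares normalized 1-D Maxwellian and kappa shapes; the function names and the simple trapezoidal normalization are ours, and the kappa form used here, f ∝ (1 + v²/(κ v_th²))^−(κ+1), is one common convention.

```python
import math

def maxwellian_shape(v, vth):
    """Unnormalized 1-D Maxwellian shape: exp(-(v/vth)^2)."""
    return math.exp(-(v / vth) ** 2)

def kappa_shape(v, vth, kappa):
    """Unnormalized 1-D kappa shape; power-law tail ~ v^(-2(kappa+1))."""
    return (1.0 + v * v / (kappa * vth * vth)) ** (-(kappa + 1.0))

def pdf(shape, v, vmax=50.0, n=20001):
    """Normalize a symmetric velocity-space shape numerically over
    [-vmax, vmax] (trapezoidal rule) and evaluate it at velocity v."""
    dv = 2.0 * vmax / (n - 1)
    vals = [shape(-vmax + i * dv) for i in range(n)]
    norm = dv * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return shape(v) / norm
```

At five thermal speeds the kappa distribution retains orders of magnitude more density than the Maxwellian, which is the kind of super-thermal population invoked in the secondary-electron-emission and Langmuir-probe corrections; as κ → ∞ the kappa shape recovers the Maxwellian.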
Good pharmacovigilance practices: technology enabled.
Nelson, Robert C; Palsulich, Bruce; Gogolak, Victor
2002-01-01
The assessment of spontaneous reports is most effective if it is conducted within a defined and rigorous process. The framework for good pharmacovigilance process (GPVP) is proposed as a subset of good postmarketing surveillance process (GPMSP), a functional structure for both a public health and a corporate risk management strategy. GPVP comprises good practices that implement each step within a defined process. These practices are designed to efficiently and effectively detect and alert the drug safety professional to new and potentially important information on drug-associated adverse reactions. They are enabled by applied technology designed specifically for the review and assessment of spontaneous reports. Specific practices include rules-based triage, active query prompts for severe organ insults, contextual single-case evaluation, statistical proportionality and correlational checks, case-series analyses, and templates for signal work-up and interpretation. These practices and the overall GPVP are supported by state-of-the-art web-based systems with powerful analytical engines, workflow, and audit trails to allow validated systems to support valid drug safety signalling efforts. It is also important to understand that a process has a defined set of steps and that no single step can stand independently. Specifically, advanced use of technical alerting methods in isolation can mislead and cause one to misjudge priorities and relative value. In the end, pharmacovigilance is a clinical art and a component process of the science of pharmacoepidemiology and risk management.
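The "statistical proportionality checks" mentioned above are often implemented as a proportional reporting ratio (PRR) screen over a 2×2 contingency table of spontaneous reports. The sketch below is a generic illustration, not the specific algorithm of any GPVP system; the screening thresholds (PRR ≥ 2 with at least 3 cases) are a commonly cited heuristic and are assumptions here.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for a drug/event 2x2 table:
         a = reports with the drug and the event
         b = reports with the drug, other events
         c = reports with other drugs and the event
         d = reports with other drugs, other events
    PRR = [a / (a + b)] / [c / (c + d)]."""
    if a + b == 0 or c == 0:
        raise ValueError("table margins must be non-zero")
    return (a / (a + b)) / (c / (c + d))

def prr_signal(a, b, c, d, threshold=2.0, min_cases=3):
    """Common screening heuristic: flag a drug-event pair when the PRR
    meets the threshold and enough individual cases exist."""
    return a >= min_cases and prr(a, b, c, d) >= threshold
```

As the abstract cautions, such an alerting method in isolation can mislead; a flagged pair is only a prompt for contextual single-case and case-series evaluation.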
Analytic integrable systems: Analytic normalization and embedding flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang
In this paper we mainly study the existence of analytic normalizations and the normal forms of finite-dimensional complete analytic integrable dynamical systems. In more detail, we prove that any complete analytic integrable diffeomorphism F(x)=Bx+f(x) in (C^n,0), with B having no eigenvalue of modulus 1 and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. We also prove that any complete analytic integrable differential system x˙=Ax+f(x) in (C^n,0), with A having nonzero eigenvalues and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Furthermore, we prove that any complete analytic integrable diffeomorphism defined on an analytic manifold can be embedded in a complete analytic integrable flow. We note that parts of our results improve those of Moser in J. Moser, The analytic invariants of an area-preserving mapping near a hyperbolic fixed point, Comm. Pure Appl. Math. 9 (1956) 673-692, and of Poincaré in H. Poincaré, Sur l'intégration des équations différentielles du premier ordre et du premier degré, II, Rend. Circ. Mat. Palermo 11 (1897) 193-239. These results also improve those in Xiang Zhang, Analytic normalization of analytic integrable systems and the embedding flows, J. Differential Equations 244 (2008) 1080-1092, in the sense that the linear part of the systems can be nonhyperbolic, and the one in N.T. Zung, Convergence versus integrability in Poincaré-Dulac normal form, Math. Res. Lett. 9 (2002) 217-228, in that our paper presents the concrete expression of the normal form in a restricted case.
NASA Astrophysics Data System (ADS)
Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.
2017-12-01
Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient to apply, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexities, at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every time and location in the domain, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches that require calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, encompassing a broader range of both shorter and longer travel times.
We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
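The "random age sampling" limit discussed above corresponds to a uniform SAS function, for which the steady-state TTD is exponential with mean age equal to storage divided by discharge. A minimal Monte Carlo sketch of that limiting case (our own illustration, not output of ParFlow or SLIM; the function name and units are assumptions):

```python
import math
import random

def sample_travel_times(storage, discharge, n, seed=0):
    """Draw n travel times from the exponential TTD implied by a uniform
    (random-sampling) SAS function at steady state: mean age tau = S / Q."""
    rng = random.Random(seed)
    tau = storage / discharge
    # Inverse-CDF sampling of an Exponential(mean=tau) distribution
    return [-tau * math.log(1.0 - rng.random()) for _ in range(n)]
```

Preferential sampling of older storage, as found here when the vadose zone is included, skews the distribution away from this exponential shape; that deviation is exactly what the particle-tracking comparison exposes.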
Tang, Weijuan; Sheng, Huaming; Kong, John Y; Yerabolu, Ravikiran; Zhu, Hanyu; Max, Joann; Zhang, Minli; Kenttämaa, Hilkka I
2016-06-30
The oxidation of sulfur atoms is an important biotransformation pathway for many sulfur-containing drugs. In order to rapidly identify the sulfone functionality in drug metabolites, a tandem mass spectrometric method based on ion-molecule reactions was developed. A phosphorus-containing reagent, trimethyl phosphite (TMP), was allowed to react with protonated analytes with various functionalities in a linear quadrupole ion trap mass spectrometer, and the reaction products and reaction efficiencies were measured. Only protonated sulfone model compounds were found to react with TMP to form a characteristic [TMP adduct-MeOH] product ion. All other protonated compounds investigated, with functionalities such as sulfoxide, N-oxide, hydroxylamino, keto, carboxylic acid, and aliphatic and aromatic amino, reacted with TMP only via proton transfer and/or addition. The specificity of the reaction was further demonstrated using a sulfoxide-containing anti-inflammatory drug, sulindac, as well as its metabolite sulindac sulfone. A method based on functional-group-selective ion-molecule reactions in a linear quadrupole ion trap mass spectrometer has thus been demonstrated for the identification of the sulfone functionality in protonated analytes, and the applicability of the TMP reagent to identifying sulfone functionalities in drug metabolites was also demonstrated. Copyright © 2016 John Wiley & Sons, Ltd.
A Distributed Trajectory-Oriented Approach to Managing Traffic Complexity
NASA Technical Reports Server (NTRS)
Idris, Husni; Wing, David J.; Vivona, Robert; Garcia-Chico, Jose-Luis
2007-01-01
In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. While its architecture becomes more distributed, the goal of the Air Traffic Management (ATM) system remains to achieve objectives such as maintaining safety and efficiency. It is, therefore, critical to design appropriate control elements to ensure that aircraft and ground-based actions result in achieving these objectives without unduly restricting user-preferred trajectories. This paper presents a trajectory-oriented approach containing two such elements. One is a trajectory flexibility preservation function, by which aircraft plan their trajectories to preserve flexibility to accommodate unforeseen events. The other is a trajectory constraint minimization function, by which ground-based agents, in collaboration with air-based agents, impose just-enough restrictions on trajectories to achieve ATM objectives, such as separation assurance and flow management. The underlying hypothesis is that preserving the trajectory flexibility of each individual aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by minimizing constraints without jeopardizing the intended ATM objectives. The paper presents conceptually how the two functions operate in a distributed control architecture that includes self-separation, illustrates the concept through hypothetical scenarios involving conflict resolution and flow management, and presents a functional analysis of the interaction and information flow between the functions.
It also presents an analytical framework for defining metrics and developing methods to preserve trajectory flexibility and minimize constraints on it. In this framework, flexibility is defined in terms of robustness and adaptability to disturbances, and the impact of constraints is illustrated through analysis of a trajectory solution space with limited degrees of freedom and in simple constraint situations involving meeting multiple times of arrival and resolving a conflict.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2012-01-01
New first- and second-order displacement transfer functions have been developed for deformed shape calculations of nonuniform cross-sectional beam structures such as aircraft wings. The displacement transfer functions are expressed explicitly in terms of beam geometrical parameters and surface strains (uniaxial bending strains) obtained at equally spaced strain stations along the surface of the beam structure. By inputting the measured or analytically calculated surface strains into the displacement transfer functions, one can calculate local slopes, deflections, and cross-sectional twist angles of the nonuniform beam structure for mapping the overall structural deformed shape for visual display. The accuracy of deformed shape calculations by the first- and second-order displacement transfer functions is determined by comparing their results to the predictions of finite element analyses. This comparison shows that the new displacement transfer functions can quite accurately calculate the deformed shapes of tapered cantilever tubular beams with different taper angles. The accuracy of the present displacement transfer functions is also compared with that of the previously developed displacement transfer functions.
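The underlying idea of a displacement transfer function is a double integration of bending curvature, which equals the measured surface strain divided by the local half-depth of the cross section. The pure-Python sketch below uses trapezoidal integration with cantilever boundary conditions (zero slope and deflection at the root); it is a simplified stand-in for the paper's closed-form first- and second-order transfer functions, and the function name and discretization are ours.

```python
def deflection_from_strains(strains, half_depths, dx):
    """Recover beam slope and deflection from surface bending strains
    at equally spaced stations (spacing dx) on a cantilever.
    Curvature at station i: strains[i] / half_depths[i]; the curvature
    is integrated twice by the trapezoidal rule with zero root slope
    and zero root deflection."""
    n = len(strains)
    curv = [e / c for e, c in zip(strains, half_depths)]
    slope = [0.0] * n
    defl = [0.0] * n
    for i in range(1, n):
        slope[i] = slope[i - 1] + 0.5 * (curv[i - 1] + curv[i]) * dx
        defl[i] = defl[i - 1] + 0.5 * (slope[i - 1] + slope[i]) * dx
    return slope, defl
```

For a uniform beam under pure bending (constant strain, constant depth), this reproduces the textbook result exactly: tip slope kL and tip deflection kL²/2, where k is the constant curvature.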
Rocchitta, Gaia; Spanu, Angela; Babudieri, Sergio; Latte, Gavinella; Madeddu, Giordano; Galleri, Grazia; Nuvoli, Susanna; Bagella, Paola; Demartis, Maria Ilaria; Fiore, Vito; Manetti, Roberto; Serra, Pier Andrea
2016-01-01
Enzyme-based chemical biosensors rely on biological recognition. In order to operate, the enzymes must be available to catalyze a specific biochemical reaction and be stable under the normal operating conditions of the biosensor. The design of biosensors is based on knowledge about the target analyte, as well as the complexity of the matrix in which the analyte has to be quantified. This article reviews the problems resulting from the interaction of enzyme-based amperometric biosensors with complex biological matrices containing the target analyte(s). One of the most challenging disadvantages of amperometric enzyme-based biosensor detection is signal reduction from fouling agents and interference from chemicals present in the sample matrix. This article therefore examines the principles of functioning of enzymatic biosensors, their analytical performance over time, and the strategies used to optimize their performance. Moreover, the composition of biological fluids and its influence on biosensing are presented. PMID:27249001
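Amperometric enzyme biosensors of the kind reviewed here typically show a Michaelis-Menten-type response: the steady-state current rises linearly at low substrate concentration and saturates at high concentration, which bounds the usable linear range. A generic sketch under that textbook assumption (the function names, parameter values, and the 10% linearity criterion are illustrative, not drawn from this review):

```python
def biosensor_current(conc, i_max, k_m):
    """Michaelis-Menten-type amperometric response: steady-state current
    saturates at i_max as substrate concentration conc grows; k_m is the
    concentration producing half-maximal current."""
    return i_max * conc / (k_m + conc)

def linear_range_upper(k_m, tolerance=0.1):
    """Concentration up to which the response deviates from the initial
    linear slope (i_max / k_m) by at most `tolerance` (fractional).
    Ratio to the linear extrapolation is k_m / (k_m + c), so the bound
    is c <= k_m * tolerance / (1 - tolerance)."""
    return k_m * tolerance / (1.0 - tolerance)
```

At c = k_m the current is half of i_max, and a 10% linearity criterion limits the calibration range to roughly k_m/9; fouling and interferents, as discussed above, further erode both sensitivity and this effective range over time.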