Diagrams for the Free Energy and Density Weight Factors of the Ising Models.
1983-01-01
Farrell, R. A.; Morita, T.; Meijer, P. H. E.
…these diagrams are given for the cubic lattices. We employ a theorem that states that a certain sum of diagrams is zero in order to obtain the density-dependent weight factors.
How Does Sequence Structure Affect the Judgment of Time? Exploring a Weighted Sum of Segments Model
ERIC Educational Resources Information Center
Matthews, William J.
2013-01-01
This paper examines the judgment of segmented temporal intervals, using short tone sequences as a convenient test case. In four experiments, we investigate how the relative lengths, arrangement, and pitches of the tones in a sequence affect judgments of sequence duration, and ask whether the data can be described by a simple weighted sum of…
Pooling across cells to normalize single-cell RNA sequencing data with many zero counts.
Lun, Aaron T L; Bach, Karsten; Marioni, John C
2016-04-27
Normalization of single-cell RNA sequencing data is necessary to eliminate cell-specific biases prior to downstream analyses. However, this is not straightforward for noisy single-cell data where many counts are zero. We present a novel approach where expression values are summed across pools of cells, and the summed values are used for normalization. Pool-based size factors are then deconvolved to yield cell-based factors. Our deconvolution approach outperforms existing methods for accurate normalization of cell-specific biases in simulated data. Similar behavior is observed in real data, where deconvolution improves the relevance of results of downstream analyses.
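A minimal sketch of the pool-and-deconvolve idea described above (illustrative only, not the authors' implementation; the pool size, random pooling scheme, and least-squares solve are assumptions made for this sketch):

```python
# Sketch: pool-level size factors are estimated robustly from summed counts,
# then the linear system "pool factor = sum of member cell factors" is solved.
import numpy as np

def deconvolve_size_factors(counts, pool_size=5, n_pools=200, rng=None):
    """counts: genes x cells matrix of non-negative integers."""
    rng = np.random.default_rng(rng)
    n_cells = counts.shape[1]
    ref = counts.mean(axis=1)                       # pseudo-reference cell
    A = np.zeros((n_pools, n_cells))                # pool membership matrix
    b = np.zeros(n_pools)                           # pool-level size factors
    for p in range(n_pools):
        members = rng.choice(n_cells, size=pool_size, replace=False)
        pooled = counts[:, members].sum(axis=1)     # summing defeats zeros
        keep = ref > 0
        b[p] = np.median(pooled[keep] / ref[keep])  # median ratio vs reference
        A[p, members] = 1.0
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)   # deconvolve to cell factors
    theta = np.clip(theta, 1e-8, None)
    return theta / theta.mean()                     # center factors at 1

counts = np.random.default_rng(0).poisson(1.0, size=(500, 100))
print(deconvolve_size_factors(counts)[:5])
```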
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2008-10-01
We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e^{−φ(x)}, giving a unified treatment for the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) cases. In addition, we provide a new proof for the bound of the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.
A sharp lower bound for the sum of a sine series with convex coefficients
NASA Astrophysics Data System (ADS)
Solodov, A. P.
2016-12-01
The sum of a sine series g(b, x) = Σ_{k=1}^∞ b_k sin kx with coefficients forming a convex sequence b is known to be positive on the interval (0, π). Its values near zero are conventionally evaluated using the Salem function v(b, x) = x Σ_{k=1}^{m(x)} k b_k, where m(x) = [π/x]. In this paper it is proved that 2π⁻² v(b, x) is not a minorant for g(b, x). The modified Salem function v₀(b, x) = x (Σ_{k=1}^{m(x)−1} k b_k + (1/2) m(x) b_{m(x)}) is shown to satisfy the lower bound g(b, x) > 2π⁻² v₀(b, x) in some right neighbourhood of zero. This estimate is shown to be sharp on the class of convex sequences b. Moreover, the upper bound for g(b, x) is refined on the class of monotone sequences b. Bibliography: 11 titles.
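A quick numerical illustration of the bound, assuming one concrete convex sequence b_k = 1/k (chosen for this sketch; the paper's result covers the whole convex class):

```python
# Compare the sine series g(b, x) with the modified Salem minorant near zero.
import numpy as np

def g(b, x, K=200000):
    k = np.arange(1, K + 1)                # truncated series, fine near zero
    return np.sum(b(k) * np.sin(k * x))

def v0(b, x):
    m = int(np.pi / x)                     # m(x) = [pi/x]
    k = np.arange(1, m)                    # k = 1 .. m(x)-1
    return x * (np.sum(k * b(k)) + 0.5 * m * b(m))

b = lambda k: 1.0 / k                      # convex, decreasing to zero
for x in [0.1, 0.01, 0.001]:
    print(x, g(b, x), (2 / np.pi**2) * v0(b, x))   # g > (2/pi^2) v0
```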
Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei
2016-02-01
This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. According to the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, for facilitating the implementation of the ADP method. Convergence properties of the ADP method are developed to show that the system state will converge to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks are also convergent to finite neighborhoods of their optimal ones. Finally, the simulation results will show the effectiveness of the developed data-driven ADP methods.
NASA Astrophysics Data System (ADS)
Hetényi, Balázs
2014-03-01
The Drude weight, the quantity which distinguishes metals from insulators, is proportional to the second derivative of the ground state energy with respect to a flux at zero flux. The same expression also appears in the definition of the Meissner weight, the quantity which indicates superconductivity, as well as in the definition of non-classical rotational inertia of bosonic superfluids. It is shown that the difference between these quantities depends on the interpretation of the average momentum term, which can be understood as the expectation value of the total momentum (Drude weight), the sum of the expectation values of single momenta (rotational inertia of a superfluid), or the sum over expectation values of momentum pairs (Meissner weight). This distinction appears naturally when the current from which the particular transport quantity is derived is cast in terms of shift operators.
Giant quadrupole and monopole resonances in ²⁸Si
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lui, Y.; Bronson, J.D.; Youngblood, D.H.
1985-05-01
Inelastic alpha scattering measurements have been performed for ²⁸Si at small angles including zero degrees. A total of 66% of the E0 energy-weighted sum rule was identified (using a Satchler version 2 form factor) centered at E_x = 17.9 MeV having a width of 4.8 MeV, and 34% of the E2 energy-weighted sum rule was identified above E_x = 15.3 MeV, centered at 19.0 MeV with a width of 4.4 MeV. The dependence of the extracted E0 strength on form factor and optical potential was explored.
Observation weights unlock bulk RNA-seq tools for zero inflation and single-cell applications.
Van den Berge, Koen; Perraudeau, Fanny; Soneson, Charlotte; Love, Michael I; Risso, Davide; Vert, Jean-Philippe; Robinson, Mark D; Dudoit, Sandrine; Clement, Lieven
2018-02-26
Dropout events in single-cell RNA sequencing (scRNA-seq) cause many transcripts to go undetected and induce an excess of zero read counts, leading to power issues in differential expression (DE) analysis. This has triggered the development of bespoke scRNA-seq DE methods to cope with zero inflation. Recent evaluations, however, have shown that dedicated scRNA-seq tools provide no advantage compared to traditional bulk RNA-seq tools. We introduce a weighting strategy, based on a zero-inflated negative binomial model, that identifies excess zero counts and generates gene- and cell-specific weights to unlock bulk RNA-seq DE pipelines for zero-inflated data, boosting performance for scRNA-seq.
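A hedged sketch of the weighting idea: under a zero-inflated negative binomial model, each zero count is down-weighted by its posterior probability of belonging to the NB component. Parameter names (pi0, mu, theta) and the single-gene setting are illustrative, not the published pipeline:

```python
import numpy as np
from scipy.stats import nbinom

def zinb_weights(y, pi0, mu, theta):
    """y: counts; pi0: zero-inflation prob; mu, theta: NB mean and dispersion."""
    p = theta / (theta + mu)               # scipy parameterization: n=theta
    nb_zero = nbinom.pmf(0, theta, p)      # P(y = 0 | NB component)
    w = np.ones_like(y, dtype=float)       # non-zero counts keep weight 1
    post = (1 - pi0) * nb_zero / ((1 - pi0) * nb_zero + pi0)
    w[y == 0] = post                       # posterior NB membership for zeros
    return w

y = np.array([0, 0, 3, 10, 0, 1])
print(zinb_weights(y, pi0=0.3, mu=4.0, theta=2.0))
```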
Exact sum rules for inhomogeneous systems containing a zero mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amore, Paolo, E-mail: paolo.amore@gmail.com
2014-10-15
We show that the formulas for the sum rules for the eigenvalues of inhomogeneous systems that we have obtained in two recent papers are incomplete when the system contains a zero mode. We prove that there are finite contributions of the zero mode to the sum rules and we explicitly calculate the expressions for the sum rules of order one and two. The previous results for systems that do not contain a zero mode are unaffected. - Highlights: • We discuss the sum rules of the eigenvalues of inhomogeneous systems containing a zero mode. • We derive the explicit expressions for sum rules of order one and two. • We perform accurate numerical tests of these results for three examples.
Zero-Sum Bias: Perceived Competition Despite Unlimited Resources
Meegan, Daniel V.
2010-01-01
Zero-sum bias describes intuitively judging a situation to be zero-sum (i.e., resources gained by one party are matched by corresponding losses to another party) when it is actually non-zero-sum. The experimental participants were students at a university where students’ grades are determined by how the quality of their work compares to a predetermined standard of quality rather than to the quality of the work produced by other students. This creates a non-zero-sum situation in which high grades are an unlimited resource. In three experiments, participants were shown the grade distribution after a majority of the students in a course had completed an assigned presentation, and asked to predict the grade of the next presenter. When many high grades had already been given, there was a corresponding increase in low grade predictions. This suggests a zero-sum bias, in which people perceive a competition for a limited resource despite unlimited resource availability. Interestingly, when many low grades had already been given, there was not a corresponding increase in high grade predictions. This suggests that a zero-sum heuristic is only applied in response to the allocation of desirable resources. A plausible explanation for the findings is that a zero-sum heuristic evolved as a cognitive adaptation to enable successful intra-group competition for limited resources. Implications for understanding inter-group interaction are also discussed. PMID:21833251
The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.
Qu, Shaojian; Ji, Ying
2016-01-01
In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero sum game model where each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that the robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). For an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. A comparison with the existing weighted approach shows that our method is more robust and can be used more efficiently in real-world applications.
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let S_n[f] be the nth partial sum of the orthonormal polynomials expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of S_n[f] and discuss the speed of the convergence of S_n[f] in weighted L^p space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial L_n[f], whose nodal points are the zeros of orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^{−(1/2)x²} is the Hermite weight function, then we obtain sufficient conditions for the corresponding weighted inequalities to hold for k = 0, 1, 2, …, r.
Entropy Inequalities for Stable Densities and Strengthened Central Limit Theorems
NASA Astrophysics Data System (ADS)
Toscani, Giuseppe
2016-10-01
We consider the central limit theorem for stable laws in the case of the standardized sum of independent and identically distributed random variables with regular probability density function. By showing decay of different entropy functionals along the sequence we prove convergence with explicit rate in various norms to a Lévy centered density of parameter λ > 1. This introduces a new information-theoretic approach to the central limit theorem for stable laws, in which the main argument is shown to be the relative fractional Fisher information, recently introduced in Toscani (Ricerche Mat 65(1):71-91, 2016). In particular, it is proven that, with respect to the relative fractional Fisher information, the Lévy density satisfies an analogue of the logarithmic Sobolev inequality, which allows one to pass from the monotonicity and decay to zero of the relative fractional Fisher information in the standardized sum to the decay to zero in relative entropy with an explicit decay rate.
ERIC Educational Resources Information Center
Shen, Jianping; Xia, Jiangang
2012-01-01
Is the power relationship between public school teachers and principals a win-win situation or a zero-sum game? By applying hierarchical linear modeling to the 1999-2000 nationally representative Schools and Staffing Survey data, we found that both the win-win and zero-sum-game theories had empirical evidence. The decision-making areas…
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
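The upper/lower iteration structure can be illustrated on a toy finite game; everything below (states, costs, transitions, discount) is invented for the sketch and is not the paper's neural-network implementation:

```python
# Toy finite-state analogue of upper/lower value iteration for a discounted
# two-player zero-sum dynamic game: the upper value uses min_u max_w, the
# lower value max_w min_u, and the gap is non-negative everywhere.
import numpy as np

rng = np.random.default_rng(1)
nS, nU, nW, gamma = 6, 3, 3, 0.9
cost = rng.random((nS, nU, nW))                  # stage cost c(x, u, w)
nxt = rng.integers(0, nS, size=(nS, nU, nW))     # deterministic transition

V_up = np.zeros(nS)
V_lo = np.zeros(nS)
for _ in range(500):
    Q = cost + gamma * V_up[nxt]                 # Q[x, u, w]
    V_up = Q.max(axis=2).min(axis=1)             # max over w, then min over u
    Q = cost + gamma * V_lo[nxt]
    V_lo = Q.min(axis=1).max(axis=1)             # min over u, then max over w

print("upper:", V_up.round(3))
print("lower:", V_lo.round(3))
print("gap (>= 0):", (V_up - V_lo).round(3))
```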
NASA Astrophysics Data System (ADS)
Pedersen, Thomas Garm
2018-07-01
Bessel functions play an important role for quantum states in spherical and cylindrical geometries. In cases of perfect confinement, the energy of Schrödinger and massless Dirac fermions is determined by the zeros and intersections of Bessel functions, respectively. In an external electric field, standard perturbation theory therefore expresses the polarizability as a sum over these zeros or intersections. Both non-relativistic and relativistic polarizabilities can be calculated analytically, however. Hence, by equating analytical expressions to perturbation expansions, several sum rules for the zeros and intersections of Bessel functions emerge.
Construction of an Exome-Wide Risk Score for Schizophrenia Based on a Weighted Burden Test.
Curtis, David
2018-01-01
Polygenic risk scores obtained as a weighted sum of associated variants can be used to explore association in additional data sets and to assign risk scores to individuals. The methods used to derive polygenic risk scores from common SNPs are not suitable for variants detected in whole exome sequencing studies. Rare variants, which may have major effects, are seen too infrequently to judge whether they are associated and may not be shared between training and test subjects. A method is proposed whereby variants are weighted according to their frequency, their annotations and the genes they affect. A weighted sum across all variants provides an individual risk score. Scores constructed in this way are used in a weighted burden test and are shown to be significantly different between schizophrenia cases and controls using a five-way cross-validation procedure. This approach represents a first attempt to summarise exome sequence variation into a summary risk score, which could be combined with risk scores from common variants and from environmental factors. It is hoped that the method could be developed further. © 2017 John Wiley & Sons Ltd/University College London.
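A minimal sketch of a frequency- and annotation-weighted burden score; the specific weighting formula below is an assumption for illustration, not the paper's exact scheme:

```python
# Each variant gets a weight from its rarity and annotation severity; a
# subject's risk score is the weighted sum of its allele counts.
import numpy as np

def risk_scores(geno, freq, annot_weight):
    """geno: subjects x variants allele counts (0/1/2);
    freq: variant allele frequencies; annot_weight: per-variant severity."""
    rarity = 1.0 / np.sqrt(freq * (1 - freq))   # rarer variants weigh more
    w = rarity * annot_weight
    return geno @ w                             # one score per subject

rng = np.random.default_rng(0)
freq = rng.uniform(0.001, 0.05, size=50)
geno = rng.binomial(2, freq, size=(10, 50))
annot = rng.choice([1.0, 5.0, 20.0], size=50)   # e.g. silent / missense / LOF
print(risk_scores(geno, freq, annot))
```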
College Sports: The Mystery of the Zero-Sum Game
ERIC Educational Resources Information Center
Getz, Malcolm; Siegfried, John J.
2012-01-01
In recent years, when a university may earn well over $10 million per year from fees for sports-broadcast rights, half of the teams still lose. Collegiate athletic competition is a zero sum game: The number of winners equals the number of losers. So why do universities spend growing sums of scarce resources on an activity when the odds of winning…
Smithson, Michael; Sopeña, Arthur; Platow, Michael J
2015-01-01
This paper presents an investigation into marginalizing racism, a form of prejudice whereby ingroup members claim that specific individuals belong to their group, but also exclude them by not granting them all of the privileges of a full ingroup member. One manifestation of this is that perceived degree of outgroup membership will covary negatively with degree of ingroup membership. That is, group membership may be treated as a zero-sum quantity (e.g., one cannot be both Australian and Iraqi). Study 1 demonstrated that judges allocate more zero-sum membership assignments and lower combined membership in their country of origin and their adopted country to high-threat migrants than low-threat migrants. Study 2 identified a subtle type of zero-sum reasoning which holds that stronger degree of membership in one's original nationality constrains membership in a new nationality to a greater extent than stronger membership in the new nationality constrains membership in one's original nationality. This pattern is quite general, being replicated in large samples from four nations (USA, UK, India, and China). Taken together, these studies suggest that marginalizing racism is more than a belief that people retain a "stain" from membership in their original group. Marginalizing racism also manifests itself as conditional zero-sum beliefs about multiple group memberships.
Complete convergence of randomly weighted END sequences and its application.
Li, Penghua; Li, Xiaoqin; Wu, Kehan
2017-01-01
We investigate the complete convergence of partial sums of randomly weighted extended negatively dependent (END) random variables. Some results of complete moment convergence, complete convergence and the strong law of large numbers for this dependent structure are obtained. As an application, we study the convergence of the state observers of linear-time-invariant systems. Our results extend the corresponding earlier ones.
Fu, Yue; Chai, Tianyou
2016-12-01
Regarding two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics, this paper presents an online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair. First, for known systems, the simultaneous policy updating algorithm (SPUA) is reviewed. A new analytical method to prove the convergence is presented. Then, based on the SPUA, without using a priori knowledge of any system dynamics, an online algorithm is proposed to simultaneously learn in real time either the minimal nonnegative solution of the Hamilton-Jacobi-Isaacs (HJI) equation or the generalized algebraic Riccati equation for linear systems as a special case, along with the optimal policy pair. The approximate solution to the HJI equation and the admissible policy pair is reexpressed by the approximation theorem. The unknown constants or weights of each are identified simultaneously by resorting to the recursive least square method. The convergence of the online algorithm to the optimal solutions is provided. A practical online algorithm is also developed. Simulation results illustrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables with respect to an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
Dufner, Michael; Leising, Daniel; Gebauer, Jochen E
2016-05-01
How are people who generally see others positively evaluated themselves? We propose that the answer to this question crucially hinges on the content domain: We hypothesize that Agency follows a "zero-sum principle" and therefore people who see others as high in Agency are perceived as low in Agency themselves. In contrast, we hypothesize that Communion follows a "non-zero-sum principle" and therefore people who see others as high in Communion are perceived as high in Communion themselves. We tested these hypotheses in a round-robin and a half-block study. Perceiving others as agentic was indeed linked to being perceived as low in Agency. To the contrary, perceiving others as communal was linked to being perceived as high in Communion, but only when people directly interacted with each other. These results help to clarify the nature of Agency and Communion and offer explanations for divergent findings in the literature. © 2016 by the Society for Personality and Social Psychology, Inc.
Code of Federal Regulations, 2012 CFR
2012-07-01
... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean value... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean and... operation when the pollutant concentration at the time for the measurement is zero. 1.6Calibration Drift...
Code of Federal Regulations, 2014 CFR
2014-07-01
... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean value... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean and... operation when the pollutant concentration at the time for the measurement is zero. 1.6Calibration Drift...
Code of Federal Regulations, 2013 CFR
2013-07-01
... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean value... confidence interval using Equations D-1 and D-2. Report the zero drift as the sum of the absolute mean and... operation when the pollutant concentration at the time for the measurement is zero. 1.6Calibration Drift...
Calculation of Rayleigh type sums for zeros of the equation arising in spectral problem
NASA Astrophysics Data System (ADS)
Kostin, A. B.; Sherstyukov, V. B.
2017-12-01
For zeros of the equation (arising in the oblique derivative problem) μ J_n′(μ) cos α + i n J_n(μ) sin α = 0, μ ∈ ℂ, with parameters n ∈ ℤ, α ∈ [−π/2, π/2] and the Bessel function J_n(μ), special summation relationships are proved. The obtained results are consistent with the theory of the well-known Rayleigh sums calculated from zeros of the Bessel function.
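For α = ±π/2 the equation reduces to J_n(μ) = 0, whose zeros satisfy the classical Rayleigh identity Σ_k j_{n,k}^{−2} = 1/(4(n+1)). A quick numerical check of that classical special case (illustrative; the infinite sum is truncated):

```python
# Verify the Rayleigh sum over Bessel-function zeros for a few integer orders.
from scipy.special import jn_zeros

for n in range(4):
    zeros = jn_zeros(n, 5000)            # first 5000 positive zeros of J_n
    lhs = (zeros**-2).sum()              # truncation error ~ 1/(pi^2 * 5000)
    rhs = 1.0 / (4 * (n + 1))
    print(n, lhs, rhs)
```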
Life Goals Matter to Happiness: A Revision of Set-Point Theory
ERIC Educational Resources Information Center
Headey, Bruce
2008-01-01
Using data from the long-running German Socio-Economic Panel Survey (SOEP), this paper provides evidence that life goals matter substantially to subjective well-being (SWB). Non-zero sum goals, which include commitment to family, friends and social and political involvement, promote life satisfaction. Zero sum goals, including commitment to career…
Deng, Xinyang; Jiang, Wen; Zhang, Jiandong
2017-01-01
The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, sometimes the payoffs received by players may be inexact or uncertain, which requires that the model of matrix games has the ability to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, for the possible computation-intensive issue in the proposed decomposition method, as an alternative solution, a Monte Carlo simulation approach is presented, as well. Finally, the proposed zero-sum matrix games with payoffs of Dempster–Shafer belief structures is illustratively applied to the sensor selection and intrusion detection of sensor networks, which shows its effectiveness and application process. PMID:28430156
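For reference, the crisp-payoff special case: the value of an ordinary zero-sum matrix game can be computed by linear programming (a sketch of this classical baseline only; the paper's belief-structure payoffs and Monte Carlo method are not reproduced here):

```python
# Value and optimal mixed strategy of a zero-sum matrix game via the standard
# LP: shift payoffs positive, then min sum(x) s.t. B^T x >= 1, x >= 0, with
# value = 1/sum(x) and strategy = x * value.
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Row player maximizes; returns (value, row mixed strategy)."""
    A = np.asarray(A, dtype=float)
    shift = min(0.0, A.min()) - 1.0
    B = A - shift                              # strictly positive payoffs
    m, n = B.shape
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m)
    value = 1.0 / res.x.sum()
    return value + shift, res.x * value

val, strat = game_value([[3, -1], [-2, 4]])
print(val, strat)                              # value 1.0, strategy (0.6, 0.4)
```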
Asymmetries in Responses to Attitude Statements: The Example of "Zero-Sum" Beliefs.
Smithson, Michael; Shou, Yiyun
2016-01-01
While much has been written about the consequences of zero-sum (or fixed-pie) beliefs, their measurement has received almost no systematic attention. No researchers, to our awareness, have examined the question of whether the endorsement of a zero-sum-like proposition depends on how the proposition is formed. This paper focuses on this issue, which may also apply to the measurement of other attitudes. Zero-sum statements have a form such as "The more of resource X for consumer A, the less of resource Y for consumer B." X and Y may be the same resource (such as time), but they can be different (e.g., "The more people commute by bicycle, the less revenue for the city from car parking payments"). These statements have four permutations, and a strict zero-sum believer should regard these four statements as equally valid and therefore should endorse them equally. We find, however, that three asymmetric patterns routinely occur in people's endorsement levels, i.e., clear framing effects, whereby endorsement of one permutation substantially differs from endorsement of another. The patterns seem to arise from beliefs about asymmetric resource flows and power relations between rival consumers. We report three studies, with adult samples representative of populations in two Western and two non-Western cultures, demonstrating that most of the asymmetric belief patterns are consistent across these samples. We conclude with a discussion of the implications of this kind of "order-effect" for attitude measurement.
Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun
2017-03-01
H∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics is mostly unknown. Identification of dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and value are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.
Informationally Efficient Multi-User Communication
2010-01-01
…DSM algorithms, the Optimal Spectrum Balancing (OSB) algorithm and the Iterative Spectrum Balancing (ISB) algorithm, were proposed to solve the…problem of maximization of a weighted rate-sum across all users [CYM06, YL06]. OSB has an exponential complexity in the number of users. ISB only has a…the duality gap min_{λ1,λ2} D(λ1, λ2) − max_{P1,P2} f(P1, P2) is not zero. Fig. 3.3 summarizes the three key steps of a dual method, the OSB algorithm…
NASA Technical Reports Server (NTRS)
1986-01-01
All three flowmeter concepts (vortex, dual turbine, and angular momentum) were subjected to experimental and analytical investigation to determine the potential prototype performance. The three concepts were then given a comprehensive rating: eight parameters of performance were evaluated on a zero-to-ten scale, weighted, and summed. The relative ratings of the vortex, dual turbine, and angular momentum flowmeters are 0.71, 1.00, and 0.95, respectively. The dual turbine flowmeter concept was selected as the primary candidate and the angular momentum flowmeter as the secondary candidate for prototype development and evaluation.
On the Performance of the Martin Digital Filter for High- and Low-pass Applications
NASA Technical Reports Server (NTRS)
Mcclain, C. R.
1979-01-01
A nonrecursive numerical filter is described in which the weighting sequence is optimized by minimizing the excursion from the ideal rectangular filter in a least squares sense over the entire domain of normalized frequency. Additional corrections to the weights in order to reduce overshoot oscillations (Gibbs phenomenon) and to insure unity gain at zero frequency for the low pass filter are incorporated. The filter is characterized by a zero phase shift for all frequencies (due to a symmetric weighting sequence), a finite memory and stability, and it may readily be transformed to a high pass filter. Equations for the filter weights and the frequency response function are presented, and applications to high and low pass filtering are examined. A discussion of optimization of high pass filter parameters for a rather stringent response requirement is given in an application to the removal of aircraft low frequency oscillations superimposed on remotely sensed ocean surface profiles. Several frequency response functions are displayed, both in normalized frequency space and in period space. A comparison of the performance of the Martin filter with some other commonly used low pass digital filters is provided in an application to oceanographic data.
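A sketch in the same spirit, assuming scipy's generic least-squares FIR design in place of the Martin weights (band edges and tap count are invented): symmetric taps give zero phase, renormalizing the taps enforces unity gain at zero frequency, and spectral inversion yields the high-pass companion.

```python
import numpy as np
from scipy.signal import firls, freqz

numtaps = 101                              # odd length -> symmetric taps
h = firls(numtaps, [0, 0.15, 0.25, 1.0], [1, 1, 0, 0])  # least-squares low-pass
h /= h.sum()                               # enforce exactly unity gain at DC
w, H = freqz(h, worN=512)
print("low-pass DC gain:", abs(H[0]))      # ~1.0 by construction

hp = -h                                    # spectral inversion -> high-pass
hp[numtaps // 2] += 1.0
print("high-pass DC gain:", abs(hp.sum())) # ~0.0
```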
Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E
2018-02-01
In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example.
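A minimal upper-sided CUSUM sketch on zero-inflated Poisson counts; the reference value k and decision limit h below are invented for illustration and would in practice be calibrated to the in-control model:

```python
import numpy as np

def zip_rvs(n, pi0, lam, rng):
    """Zero-inflated Poisson draws: excess zeros with probability pi0."""
    y = rng.poisson(lam, n)
    y[rng.random(n) < pi0] = 0
    return y

rng = np.random.default_rng(7)
y = np.concatenate([zip_rvs(200, 0.4, 2.0, rng),    # in-control
                    zip_rvs(100, 0.4, 3.5, rng)])   # upward shift at t = 200

k, h, C = 1.6, 8.0, 0.0                    # reference value and decision limit
for t, yt in enumerate(y):
    C = max(0.0, C + yt - k)               # upper-sided CUSUM recursion
    if C > h:
        print("signal at observation", t)
        break
else:
    print("no signal")
```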
Adaptive critic designs for discrete-time zero-sum games with application to H∞ control.
Al-Tamimi, Asma; Abu-Khalaf, Murad; Lewis, Frank L
2007-02-01
In this correspondence, adaptive critic approximate dynamic programming designs are derived to solve the discrete-time zero-sum game in which the state and action spaces are continuous. This results in a forward-in-time reinforcement learning algorithm that converges to the Nash equilibrium of the corresponding zero-sum game. The results in this correspondence can be thought of as a way to solve the Riccati equation of the well-known discrete-time H∞ optimal control problem forward in time. Two schemes are presented, namely: 1) a heuristic dynamic programming and 2) a dual-heuristic dynamic programming, to solve for the value function and the costate of the game, respectively. An H∞ autopilot design for an F-16 aircraft is presented to illustrate the results.
Optimal Control Surface Layout for an Aeroservoelastic Wingbox
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2017-01-01
This paper demonstrates a technique for locating the optimal control surface layout of an aeroservoelastic Common Research Model wingbox, in the context of maneuver load alleviation and active flutter suppression. The combinatorial actuator layout design is solved using ideas borrowed from topology optimization, where the effectiveness of a given control surface is tied to a layout design variable, which varies from zero (the actuator is removed) to one (the actuator is retained). These layout design variables are optimized concurrently with a large number of structural wingbox sizing variables and control surface actuation variables, in order to minimize the sum of structural weight and actuator weight. Results are presented that demonstrate interdependencies between structural sizing patterns and optimal control surface layouts, for both static and dynamic aeroelastic physics.
Zero-Gravity Research Facility Drop Test (2/4)
NASA Technical Reports Server (NTRS)
1995-01-01
An experiment vehicle plunges into the deceleration pit at the end of a 5.18-second drop in the Zero-Gravity Research Facility at NASA's Glenn Research Center. The Zero-Gravity Research Facility was developed to support microgravity research and development programs that investigate various physical sciences, materials, fluid physics, and combustion and processing systems. Payloads up to 1 meter in diameter and 455 kg in weight can be accommodated. The facility has a 145-meter evacuated shaft to ensure a disturbance-free drop. This is No. 2 of a sequence of 4 images. (Credit: NASA/Glenn Research Center)
Zero-Gravity Research Facility Drop Test (1/4)
NASA Technical Reports Server (NTRS)
1995-01-01
An experiment vehicle plunges into the deceleration pit at the end of a 5.18-second drop in the Zero-Gravity Research Facility at NASA's Glenn Research Center. The Zero-Gravity Research Facility was developed to support microgravity research and development programs that investigate various physical sciences, materials, fluid physics, and combustion and processing systems. Payloads up to 1 meter in diameter and 455 kg in weight can be accommodated. The facility has a 145-meter evacuated shaft to ensure a disturbance-free drop. This is No.1 of a sequence of 4 images. (Credit: NASA/Glenn Research Center)
Zero-Gravity Research Facility Drop Test (3/4)
NASA Technical Reports Server (NTRS)
1995-01-01
An experiment vehicle plunges into the deceleration at the end of a 5.18-second drop in the Zero-Gravity Research Facility at NASA's Glenn Research Center. The Zero-Gravity Research Facility was developed to support microgravity research and development programs that investigate various physical sciences, materials, fluid physics, and combustion and processing systems. Payloads up to one-meter in diameter and 455 kg in weight can be accommodated. The facility has a 145-meter evacuated shaft to ensure a disturbance-free drop. This is No. 3 of a sequence of 4 images. (Credit: NASA/Glenn Research Center)
Zero-Gravity Research Facility Drop Test (4/4)
NASA Technical Reports Server (NTRS)
1995-01-01
An experiment vehicle plunges into the deceleration pit at the end of a 5.18-second drop in the Zero-Gravity Research Facility at NASA's Glenn Research Center. The Zero-Gravity Research Facility was developed to support microgravity research and development programs that investigate various physical sciences, materials, fluid physics, and combustion and processing systems. Payloads up to one meter in diameter and 455 kg in weight can be accommodated. The facility has a 145-meter evacuated shaft to ensure a disturbance-free drop. This is No. 4 of a sequence of 4 images. (Credit: NASA/Glenn Research Center)
Slack, J
2001-01-01
This study examines the dynamics of grass-roots decision-making processes involved in the implementation of the Ryan White CARE Act. The CARE Act, which provides social services to persons with HIV/AIDS, requires participation of all relevant groups, including representatives of the HIV/AIDS and gay communities. Decision-making behavior is explored by applying a political (zero-sum) model and a bureaucratic (the Herbert Thesis) model. Using qualitative research techniques, the Kern County (California) Consortium is used as a case study. Findings shed light on the decision-making behavior of social service organizations characterized by intense advocacy and structured on the basis of volunteerism and non-hierarchical relationships. Findings affirm the bureaucratic behavior predicted by the Herbert Thesis and also discern factors which seem to trigger more conflictual zero-sum behavior.
Consultation sequencing of a hospital with multiple service points using genetic programming
NASA Astrophysics Data System (ADS)
Morikawa, Katsumi; Takahashi, Katsuhiko; Nagasawa, Keisuke
2018-07-01
A hospital with one consultation room operated by a physician and several examination rooms is investigated. Scheduled patients and walk-ins arrive at the hospital; each patient goes to the consultation room first, and some of them visit other service points before consulting the physician again. The objective function consists of the sum of three weighted average waiting times. The paper focuses on the problem of sequencing patients for consultation. To alleviate the stress of waiting, the consultation sequence is displayed. A dispatching rule is used to decide the sequence, and the best rules are explored by genetic programming (GP). The simulation experiments indicate that the rules produced by GP can be reduced to simple permutations of queues, and that the best permutation depends on the weights used in the objective function. This implies that a balanced allocation of waiting times can be achieved by ordering the priority among the three queues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando
This paper deals with two-person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown to one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount-optimal pair of strategies.
Algebraic Riccati equations in zero-sum differential games
NASA Technical Reports Server (NTRS)
Johnson, T. L.; Chao, A.
1974-01-01
The procedure for finding the closed-loop Nash equilibrium solution of two-player zero-sum linear time-invariant differential games with quadratic performance criteria and classical information pattern may be reduced in most cases to the solution of an algebraic Riccati equation. Based on the results obtained by Willems, necessary and sufficient conditions for existence of solutions to these equations are derived, and explicit conditions for a scalar example are given.
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
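A small sketch of the additive picture for a binary chain: generate a sequence whose next-symbol probability is a constant plus a weighted sum of past deviations, then re-estimate a conditional probability by counting. The memory amplitudes are invented, and the agreement shown is only first-order at weak correlations:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4                                        # chain order
F = 0.08 * rng.standard_normal(N)            # weak memory-function amplitudes
p0, n = 0.5, 200000

seq = np.zeros(n, dtype=int)
seq[:N] = rng.integers(0, 2, N)
for t in range(N, n):                        # additive high-order Markov chain
    p = p0 + np.sum(F * (seq[t - N:t][::-1] - p0))
    p = float(np.clip(p, 0.0, 1.0))
    seq[t] = rng.random() < p

# Re-estimate one conditional probability from the data (order-1 view).
ctx = seq[:-1]
p_hat = seq[1:][ctx == 1].mean()
print("P(1 | prev = 1) ~", p_hat, " additive model:", p0 + F[0] * (1 - p0))
```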
Quantifying cause-related mortality by weighting multiple causes of death
Moreno-Betancur, Margarita; Lamarche-Vadel, Agathe; Rey, Grégoire
2016-01-01
Objective: To investigate a new approach to calculating cause-related standardized mortality rates that involves assigning weights to each cause of death reported on death certificates. Methods: We derived cause-related standardized mortality rates from death certificate data for France in 2010 using: (i) the classic method, which considered only the underlying cause of death; and (ii) three novel multiple-cause-of-death weighting methods, which assigned weights to multiple causes of death mentioned on death certificates: the first two multiple-cause-of-death methods assigned non-zero weights to all causes mentioned and the third assigned non-zero weights to only the underlying cause and other contributing causes that were not part of the main morbid process. As the sum of the weights for each death certificate was 1, each death had an equal influence on mortality estimates and the total number of deaths was unchanged. Mortality rates derived using the different methods were compared. Findings: On average, 3.4 causes per death were listed on each certificate. The standardized mortality rate calculated using the third multiple-cause-of-death weighting method was more than 20% higher than that calculated using the classic method for five disease categories: skin diseases, mental disorders, endocrine and nutritional diseases, blood diseases and genitourinary diseases. Moreover, this method highlighted the mortality burden associated with certain diseases in specific age groups. Conclusion: A multiple-cause-of-death weighting approach to calculating cause-related standardized mortality rates from death certificate data identified conditions that contributed more to mortality than indicated by the classic method. This new approach holds promise for identifying underrecognized contributors to mortality. PMID:27994280
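A minimal sketch of the weighting principle, using equal weights over the causes mentioned on each certificate (the paper's three weighting methods are more refined; the ICD codes below are arbitrary examples):

```python
# Each certificate distributes a total weight of 1 over its mentioned causes,
# so every death counts exactly once but can credit several causes.
from collections import Counter

certificates = [
    {"underlying": "I21", "contributing": ["E11", "N18"]},
    {"underlying": "C34", "contributing": []},
    {"underlying": "J44", "contributing": ["I50"]},
]

tally = Counter()
for cert in certificates:
    causes = [cert["underlying"]] + cert["contributing"]
    share = 1.0 / len(causes)              # weights sum to 1 per certificate
    for c in causes:
        tally[c] += share

print(dict(tally))                         # cause-weighted death counts
```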
On Nash Equilibria in Stochastic Games
2003-10-01
Traditionally, automata theory and verification have considered zero-sum or strictly competitive versions of stochastic games. In these games there are two players…zero-sum discrete-time stochastic dynamic games. SIAM J. Control and Optimization, 19(5):617-634, 1981. 18. R.J. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies. In EC 03: Electronic Commerce, pages 36-41. ACM Press, 2003. 19. A. Maitra and W. Sudderth. Finitely…
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-03-16
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
On the time-weighted quadratic sum of linear discrete systems
NASA Technical Reports Server (NTRS)
Jury, E. I.; Gutman, S.
1975-01-01
A method is proposed for obtaining the time-weighted quadratic sum for linear discrete systems. The formula of the weighted quadratic sum is obtained from matrix z-transform formulation. In addition, it is shown that this quadratic sum can be derived in a recursive form for several useful weighted functions. The discussion presented parallels that of MacFarlane (1963) for weighted quadratic integral for linear continuous systems.
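The quantity in question can be checked by brute force for a stable system; a short sketch (matrices invented) that evaluates J = Σ_k k·x_k′Q x_k by direct summation, the value the recursive formulas deliver in closed form:

```python
# Direct numerical evaluation of a time-weighted quadratic sum for a stable
# discrete system x_{k+1} = A x_k; geometric decay makes the tail negligible.
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.8]])
Q = np.eye(2)
x = np.array([1.0, -1.0])

J = 0.0
for k in range(2000):
    J += k * x @ Q @ x                     # weight w_k = k
    x = A @ x
print("J ~", J)
```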
NASA Astrophysics Data System (ADS)
Lima, Paulo C.
2016-11-01
We show that at low temperatures the d-dimensional Blume-Emery-Griffiths model in the antiquadrupolar-disordered interface has all its infinite volume correlation functions ⟨∏_{i∈A} σ_i^{n_i}⟩_τ, where A ⊂ ℤ^d is finite and Σ_{i∈A} n_i is odd, equal to zero, regardless of the boundary condition τ. In particular, the magnetization ⟨σ_i⟩_τ is zero, for all τ. We also show that the infinite volume mean magnetization lim_{Λ→∞} ⟨(1/|Λ|) Σ_{i∈Λ} σ_i⟩_{Λ,τ} is zero, for all τ.
On the Asymmetric Zero-Range in the Rarefaction Fan
NASA Astrophysics Data System (ADS)
Gonçalves, Patrícia
2014-02-01
We consider one-dimensional asymmetric zero-range processes starting from a step decreasing profile leading, in the hydrodynamic limit, to the rarefaction fan of the associated hydrodynamic equation. Under that initial condition, and for totally asymmetric jumps, we show that the weighted sum of joint probabilities for second class particles sharing the same site is convergent and we compute its limit. For partially asymmetric jumps, we derive the Law of Large Numbers for a second class particle, under the initial configuration in which all positive sites are empty, all negative sites are occupied with infinitely many first class particles and there is a single second class particle at the origin. Moreover, we prove that among the infinite characteristics emanating from the position of the second class particle it picks randomly one of them. The randomness is given in terms of the weak solution of the hydrodynamic equation, through some sort of renormalization function. By coupling the constant-rate totally asymmetric zero-range with the totally asymmetric simple exclusion, we derive limiting laws for more general initial conditions.
NASA Technical Reports Server (NTRS)
Wallace, G. R.; Weathers, G. D.; Graf, E. R.
1973-01-01
The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
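A hedged sketch of a hybrid-sum generator: XOR two maximum-length sequences produced by Fibonacci LFSRs (the tap sets below correspond to the primitive trinomials x⁵+x²+1 and x⁷+x+1; any primitive feedback polynomials would serve):

```python
import numpy as np

def m_sequence(taps, nbits, length):
    """Fibonacci LFSR: output the last stage, feed back the XOR of the taps."""
    state = [1] * nbits
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]             # taps are 1-indexed stages
        state = [fb] + state[:-1]
    return np.array(out)

L = (2**5 - 1) * (2**7 - 1)                # combined period: lcm of 31 and 127
s1 = m_sequence([5, 2], 5, L)
s2 = m_sequence([7, 1], 7, L)
hybrid = s1 ^ s2                           # modulo-two (hybrid) sum
print("balance:", hybrid.mean())           # near 1/2 for a good sequence
```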
1993-06-23
The optimal control scheme sums the cost function for all data points from time zero to infinity; however, the preview case sums only through the preview step...shaft speed that is generated by the monitor port on the servo amplifiers. Therefore, the zero frequency gain shown in the figure contains the gain...
Improving depth maps of plants by using a set of five cameras
NASA Astrophysics Data System (ADS)
Kaczmarek, Adam L.
2015-03-01
Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on the use of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps, called multiple similar areas (MSA), is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and the stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). The algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
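Most of the listed matching costs are one-liners over a pair of image patches. These are the standard definitions, sketched with NumPy (the locally scaled variants and the five-camera SSSD aggregation, which sums SSD over the center-side camera pairs, are omitted):

    import numpy as np

    def matching_costs(a, b):
        """Common stereo block-matching costs between two equal-size patches."""
        a, b = a.astype(float), b.astype(float)
        za, zb = a - a.mean(), b - b.mean()          # zero-mean versions
        return {
            "SAD":  np.abs(a - b).sum(),
            "ZSAD": np.abs(za - zb).sum(),
            "SSD":  ((a - b) ** 2).sum(),
            "ZSSD": ((za - zb) ** 2).sum(),
            "NCC":  (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()),
            "ZNCC": (za * zb).sum() / np.sqrt((za ** 2).sum() * (zb ** 2).sum()),
        }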
Direct recovery of mean gravity anomalies from satellite to satellite tracking
NASA Technical Reports Server (NTRS)
Hajela, D. P.
1974-01-01
The direct recovery of mean gravity anomalies from summed range-rate observations was investigated, the signal path being ground station to a geosynchronous relay satellite to a close satellite significantly perturbed by the short-wave features of the earth's gravitational field. To ensure realistic observations, these were simulated with the nominal orbital elements for the relay satellite corresponding to ATS-6, and for two different close satellites (one at about 250 km height, the other at about 900 km height) corresponding to the nominal values for GEOS-C. The earth's gravitational field was represented by a reference set of potential coefficients up to degree and order 12, considered as known values, and by residual gravity anomalies obtained by subtracting the anomalies implied by the potential coefficients from their terrestrial estimates. It was found that gravity anomalies could be recovered from a strong signal without using any a priori terrestrial information, i.e., considering their initial values as zero and assigning them a zero weight matrix. When recovering them from a weak signal, it was necessary to use the a priori estimate of the standard deviation of the anomalies to form their a priori diagonal weight matrix.
1995-04-06
An experiment vehicle plunges into the deceleration pit at the end of a 5.18-second drop in the Zero-Gravity Research Facility at NASA's Glenn Research Center. The Zero-Gravity Research Facility was developed to support microgravity research and development programs that investigate various physical sciences, materials, fluid physics, and combustion and processing systems. Payloads up to one meter in diameter and 455 kg in weight can be accommodated. The facility has a 145-meter evacuated shaft to ensure a disturbance-free drop. This is No. 3 of a sequence of 4 images. (Credit: NASA/Glenn Research Center)
1995-04-06
An experiment vehicle plunges into the deceleration pit at the end of a 5.18-second drop in the Zero-Gravity Research Facility at NASA's Glenn Research Center. The Zero-Gravity Research Facility was developed to support microgravity research and development programs that investigate various physical sciences, materials, fluid physics, and combustion and processing systems. Payloads up to 1 meter in diameter and 455 kg in weight can be accommodated. The facility has a 145-meter evacuated shaft to ensure a disturbance-free drop. This is No. 2 of a sequence of 4 images. (Credit: NASA/Glenn Research Center)
1995-04-06
An experiment vehicle plunges into the deceleration pit at the end of a 5.18-second drop in the Zero-Gravity Research Facility at NASA's Glenn Research Center. The Zero-Gravity Research Facility was developed to support microgravity research and development programs that investigate various physical sciences, materials, fluid physics, and combustion and processing systems. Payloads up to one meter in diameter and 455 kg in weight can be accommodated. The facility has a 145-meter evacuated shaft to ensure a disturbance-free drop. This is No. 4 of a sequence of 4 images. (Credit: NASA/Glenn Research Center)
1995-04-06
An experiment vehicle plunges into the deceleration pit at the end of a 5.18-second drop in the Zero-Gravity Research Facility at NASA's Glenn Research Center. The Zero-Gravity Research Facility was developed to support microgravity research and development programs that investigate various physical sciences, materials, fluid physics, and combustion and processing systems. Payloads up to 1 meter in diameter and 455 kg in weight can be accommodated. The facility has a 145-meter evacuated shaft to ensure a disturbance-free drop. This is No. 1 of a sequence of 4 images. (Credit: NASA/Glenn Research Center)
Measurement invariance of the Belief in a Zero-Sum Game scale across 36 countries.
Różycka-Tran, Joanna; Jurek, Paweł; Olech, Michał; Piotrowski, Jarosław; Żemojtel-Piotrowska, Magdalena
2017-11-28
In this paper, we examined the psychometric properties, cross-cultural validity, and replicability (i.e., measurement invariance) of the Belief in a Zero-Sum Game (BZSG) scale, which measures antagonistic beliefs about interpersonal relations over scarce resources. The factorial structure of the BZSG scale was investigated in student samples from 36 countries (N = 9907), using separate confirmatory factor analyses (CFAs) for each country. The cross-cultural validation of the scale was based on multigroup confirmatory factor analysis (MGCFA). The results confirmed that the scale had a one-factor structure in all countries, and configural and metric invariance between countries was confirmed. As a zero-sum belief about social relations perceived as antagonistic, BZSG is an important factor related to, for example, social and international relations, attitudes toward immigrants, or well-being. The paper proposes different uses of the BZSG scale for cross-cultural studies in different fields of psychology: social, political, or economic. © 2017 International Union of Psychological Science.
Stability of Zero-Sum Games in Evolutionary Game Theory
NASA Astrophysics Data System (ADS)
Knebel, Johannes; Krueger, Torben; Weber, Markus F.; Frey, Erwin
2014-03-01
Evolutionary game theory has evolved into a successful theoretical concept for studying mechanisms that govern the evolution of ecological communities. On a mathematical level, this theory was formalized in the framework of the celebrated replicator equations (REs) and their stochastic generalizations. In our work, we analyze the long-time behavior of the REs for zero-sum games with arbitrarily many strategies, which are generalized versions of the children's game Rock-Paper-Scissors [1]. We demonstrate how to determine the strategies that survive and those that become extinct in the long run. Our results show that extinction of strategies is exponentially fast in generic setups, and that conditions for survival can be formulated in terms of the Pfaffian of the REs' antisymmetric payoff matrix. Consequences for the stochastic dynamics, which arise in finite populations, are reflected by a generalized scaling law for the extinction time in the vicinity of critical reaction rates. Our findings underline the relevance of zero-sum games as a reference for the analysis of other models in evolutionary game theory.
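The replicator dynamics for a zero-sum game are simple to integrate numerically; a sketch for rock-paper-scissors with an antisymmetric payoff matrix (Euler stepping; illustrative, not the authors' code):

    import numpy as np

    A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # antisymmetric RPS payoffs
    x = np.array([0.5, 0.3, 0.2])              # initial strategy frequencies
    dt = 0.001
    for _ in range(100_000):
        f = A @ x                              # fitness of each strategy
        x = x + dt * x * (f - x @ f)           # replicator equation (x @ f = 0 for antisymmetric A)
        x = np.clip(x, 0.0, None); x /= x.sum()   # guard against round-off drift
    print(x)   # all three strategies survive, orbiting the interior point (1/3, 1/3, 1/3)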
Soft decoding a self-dual (48, 24; 12) code
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding that uses these properties and approximates maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code, which promises a 7.7-dB theoretical coding gain under maximum likelihood.
Design of optimally normal minimum gain controllers by continuation method
NASA Technical Reports Server (NTRS)
Lim, K. B.; Juang, J.-N.; Kim, Z. C.
1989-01-01
A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of the weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm embedded in a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.
Singular Differential Game Numerical Technique and Closed Loop Guidance and Control Strategies,
1982-03-01
In zero-sum games one player tries to minimize and the other player tries to maximize the payoff function. In such games a saddle-point solution is sought...periodically in a small time interval Δt = t_K - t_i, i = 0, 1, .... When the game starts running, the pursuer takes a measurement at t_1 = t_0 + Δt and..."Loop Solutions to Non-Linear Zero-Sum Differential Games," Int. J. Syst., Vol. 7(5), 1976. 41. Johnson, D. E., Convergence Properties of the Method
1984-05-01
Control ignored any error of 1/10th degree or less. This was done by setting the error term E and the integral sum PREINT to zero if the absolute value of...signs of two errors: jeq tdiff (if equal, jump), clr @preint (else zero integral sum), tdiff mov @diff,r1 (fetch absolute value of OAT-RAT), ci r1,25 (is...includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F
Energy-weighted sum rules connecting ΔZ = 2 nuclei within the SO(8) model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Štefánik, Dušan; Šimkovic, Fedor; Faessler, Amand
2013-12-30
Energy-weighted sum rules associated with ΔZ = 2 nuclei are obtained for the Fermi and the Gamow-Teller operators within the SO(8) model. It is found that the contribution of a single state of the intermediate nucleus dominates the sum rule. The results confirm findings obtained within the SO(5) model that the energy-weighted sum rules of ΔZ = 2 nuclei are governed by the residual interactions of the nuclear Hamiltonian. A short discussion concerning some aspects of energy-weighted sum rules in the case of realistic nuclei is included.
Code of Federal Regulations, 2011 CFR
2011-07-01
... published day-zero Critical Entry Time at origin, where the origin P&DC/F and SCF are in the same building... pair for which Periodicals are accepted before the day zero Critical Entry Time at origin and merged... day zero Critical Entry Time at origin, is the sum of 4 or 5 days, plus the number of additional days...
On the efficiency of a randomized mirror descent algorithm in online optimization problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.
2015-04-01
A randomized online version of the mirror descent method is proposed. It differs from existing versions in the randomization method: randomization is performed at the stage of projecting a subgradient of the function being optimized onto the unit simplex, rather than at the stage of computing a subgradient, as is common practice. The result is a componentwise subgradient descent with a randomly chosen component, which admits an online interpretation. This observation, for example, has made it possible to interpret results on weighting expert decisions in a uniform way and to propose the most efficient method for finding an equilibrium in a two-person zero-sum matrix game with a sparse matrix.
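For the matrix-game application, the deterministic mirror-descent baseline on the simplex is the multiplicative-weights update; a self-play sketch with NumPy (the randomized, componentwise variant described in the paper is not reproduced here):

    import numpy as np

    def mw_equilibrium(A, T=20000, eta=0.05):
        """Self-play multiplicative weights (mirror descent with entropy) for a
        zero-sum matrix game: row player maximizes x'Ay, column player minimizes."""
        m, n = A.shape
        x, y = np.ones(m) / m, np.ones(n) / n
        x_avg, y_avg = np.zeros(m), np.zeros(n)
        for _ in range(T):
            x = x * np.exp(eta * (A @ y));    x /= x.sum()
            y = y * np.exp(-eta * (A.T @ x)); y /= y.sum()
            x_avg += x; y_avg += y
        return x_avg / T, y_avg / T    # time averages approach an approximate equilibrium

    A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # rock-paper-scissors
    print(mw_equilibrium(A))           # both strategies close to (1/3, 1/3, 1/3)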
12 CFR 3.152 - Simple risk weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... applicable risk weight in this section. (1) Zero percent risk weight equity exposures. An equity exposure to... assigned a zero percent risk weight. (2) 20 percent risk weight equity exposures. An equity exposure to a... equals zero. If RVC is negative and greater than or equal to -1 (that is, between zero and -1), then E...
Force of resistance to pipeline pulling in plane and volumetrically curved wells
NASA Astrophysics Data System (ADS)
Toropov, V. S.; Toropov, S. Yu; Toropov, E. S.
2018-05-01
A method has been developed for calculating the component of the pulling force of a pipeline that arises from well curvature in one or several planes, under the assumption that the pipeline is ballasted, by filling with water or otherwise, until zero buoyancy in the drilling mud is reached. This paper shows that when calculating this force, one can neglect the effect of sections with zero curvature. Otherwise, if the buoyancy of the pipeline is nonzero, the resistance force in the curvilinear sections should be calculated taking into account the difference between the normal components of the buoyancy force and the weight. The paper proves that, neglecting resistance forces from the viscosity of the drilling mud, if the buoyancy of the pipeline is zero the total resistance force is independent of the length of the pipe and is determined by the angle equal to the sum of the entry and exit angles of the pipeline at the day surface. For the case of well curvature in several planes, it is proposed to characterize such a volumetrically curved well by the central angle of the well profile. Analytical dependences are obtained that allow calculation of the pulling force for well profiles with a variable curvature radius, i.e., at different deviation angles between the drill pipes along the well profile.
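The angle-only claim resembles capstan (belt-friction) behavior, where tension grows with the total turned angle regardless of arc length; a toy calculation under that assumption (an analogy with made-up numbers, not the paper's derived formula):

    import math

    def pulling_force(T0, mu, entry_deg, exit_deg):
        """Capstan-type estimate: tension grows with the total turned angle only."""
        theta = math.radians(entry_deg + exit_deg)
        return T0 * math.exp(mu * theta)

    # e.g. 50 kN initial tension, friction coefficient 0.3, 12 degree entry, 10 degree exit
    print(pulling_force(50e3, 0.3, 12, 10))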
Modeling the solute transport by particle-tracing method with variable weights
NASA Astrophysics Data System (ADS)
Jiang, J.
2016-12-01
Particle-tracing methods are usually used to simulate solute transport in fractured media. In such methods, the concentration at a point is proportional to the number of particles visiting that point. However, this is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillation or gives a zero value for the concentration. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies and each copy is simulated independently with weight W/Int(W/C). If the weight W of a tracked particle is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the visiting particles are distributed evenly over the whole range. Through this variable-weight scheme, we can eliminate the violent oscillation and increase the accuracy by orders of magnitude.
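The splitting/roulette rule quoted above translates almost line for line into code; a minimal sketch (estimating the relative concentration C is outside its scope):

    import random

    def adjust_weight(W, C):
        """Variable-weight control for one tracked particle at a site with
        relative concentration C. Returns the weights of the copies to keep tracking."""
        if W > C:                       # split into Int(W/C) equal-weight copies
            n = int(W / C)
            return [W / n] * n
        if W < C:                       # survive with probability W/C, weight reset to C
            return [C] if random.random() < W / C else []
        return [W]                      # W == C: continue unchanged

    # Both branches conserve the expected weight: n * (W/n) = W and (W/C) * C = W,
    # so the concentration estimate stays unbiased.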
Response of C57Bl/6 mice to a carbohydrate-free diet.
Borghjid, Saihan; Feinman, Richard David
2012-07-28
High-fat feeding in rodents generally leads to obesity and insulin resistance, whereas in humans this is seen only if dietary carbohydrate is also high, the result of the anabolic effect of poor regulation of glucose and insulin. A previous study of C57Bl/6 mice (Kennedy AR, et al.: Am J Physiol Endocrinol Metab (2007) 262 E1724-1739) appeared to show the kind of beneficial effects of calorie restriction that are seen in humans, but that diet was unusually low in protein (5%). In the current study, we tested a zero-carbohydrate diet with a higher protein content (20%). Mice on the zero-carbohydrate diet, despite similar caloric intake, consistently gained more weight than animals consuming standard chow, attaining a dramatic difference by week 16 (46.1 ± 1.38 g vs. 30.4 ± 1.00 g for the chow group). Consistent with the obese phenotype, experimental mice had fatty livers and hearts as well as large fat deposits in the abdomino-pelvic cavity, and showed impaired glucose clearance after intraperitoneal injection. In sum, the response of mice to a carbohydrate-free diet was greater weight gain and metabolic disruption, in distinction to the response in humans, where low-carbohydrate diets cause greater weight loss than isocaloric controls. The results suggest that rodent models of obesity may be most valuable for understanding how metabolic mechanisms can work in ways different from the effect in humans.
NASA Technical Reports Server (NTRS)
Herskovits, E. H.; Itoh, R.; Melhem, E. R.
2001-01-01
OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images for a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.
ERIC Educational Resources Information Center
Fay, Temple H.
1997-01-01
Presents an exercise suitable for beginning calculus students that may give insight into series representations and allow students to see some elementary applications of these representations. The Fourier series is used to approximate functions by taking sums of trigonometric functions of the form sin(nx) and cos(nx) for n greater than or equal to zero. (PVD)
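A typical instance of such an exercise is the partial Fourier sum of a square wave (an assumed example, not necessarily the one in the article):

    import math

    def square_wave_partial_sum(x, terms=25):
        """Square wave via its sine series: (4/pi) * sum over odd n of sin(n*x)/n."""
        return (4 / math.pi) * sum(math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(terms))

    print(square_wave_partial_sum(math.pi / 2))   # close to 1.0, the square wave's value there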
Sommer, Christine; Sletner, Line; Mørkrid, Kjersti; Jenum, Anne Karen; Birkeland, Kåre Inge
2015-04-03
Maternal glucose and lipid levels are associated with neonatal anthropometry of the offspring, also independently of maternal body mass index (BMI). Gestational weight gain, however, is often not accounted for. The objective was to explore whether the effects of maternal glucose and lipid levels on the offspring's birth weight and subcutaneous fat were independent of early pregnancy BMI and mid-gestational weight gain. In a population-based, multi-ethnic, prospective cohort of 699 women and their offspring, maternal anthropometrics were collected in gestational weeks 15 and 28. Maternal fasting plasma lipids, and fasting and 2-hour glucose after a 75-g glucose load, were collected in gestational week 28. Maternal risk factors were standardized using z-scores. Outcomes were neonatal birth weight and the sum of skinfolds in four different regions. Mean (standard deviation) birth weight was 3491 ± 498 g and mean sum of skinfolds was 18.2 ± 3.9 mm. Maternal fasting glucose and HDL-cholesterol were predictors of birth weight, and fasting and 2-hour glucose were predictors of neonatal sum of skinfolds, independently of weight gain as well as early pregnancy BMI, gestational week at inclusion, maternal age, parity, smoking status, ethnic origin, gestational age, and offspring's sex. However, weight gain was the strongest independent predictor of both birth weight and neonatal sum of skinfolds: a 0.21 kg/week increase in weight gain corresponded to a 110.7 g (95% confidence interval 76.6-144.9) heavier neonate and a 0.72 mm (0.38-1.06) larger sum of skinfolds. The effect size of the mother's early pregnancy BMI on birth weight was higher in non-Europeans than in Europeans. Maternal fasting glucose and HDL-cholesterol were predictors of the offspring's birth weight, and fasting and 2-hour glucose were predictors of neonatal sum of skinfolds, independently of weight gain. Mid-gestational weight gain was a stronger predictor of both birth weight and neonatal sum of skinfolds than early pregnancy BMI, maternal glucose, and lipid levels.
Robust Adaptive Dynamic Programming of Two-Player Zero-Sum Games for Continuous-Time Linear Systems.
Fu, Yue; Fu, Jun; Chai, Tianyou
2015-12-01
In this brief, an online robust adaptive dynamic programming algorithm is proposed for two-player zero-sum games of continuous-time unknown linear systems with matched uncertainties, which are functions of system outputs and states of a completely unknown exosystem. The online algorithm is developed using the policy iteration (PI) scheme with only one iteration loop. A new analytical method is proposed for convergence proof of the PI scheme. The sufficient conditions are given to guarantee globally asymptotic stability and suboptimal property of the closed-loop system. Simulation studies are conducted to illustrate the effectiveness of the proposed method.
A theorem regarding roots of the zero-order Bessel function of the first kind
NASA Technical Reports Server (NTRS)
Lin, X.-A.; Agrawal, O. P.
1993-01-01
This paper investigates a problem in the steady-state, conduction-convection heat transfer process in cylindrical porous heat exchangers. The governing partial differential equations for the system are obtained using the energy conservation law. Solution of these equations and the concept of enthalpy lead to a new approach to prove a theorem that the sum of the inverse squares of all the positive roots of the zero-order Bessel function of the first kind equals one-fourth. As a corollary, it is shown that the sum of one over the pth power (p greater than or equal to 2) of the roots converges to some constant.
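The theorem is easy to check numerically; partial sums of the inverse squared roots of J0 approach 1/4 (a sketch assuming SciPy):

    from scipy.special import jn_zeros

    roots = jn_zeros(0, 10000)     # first 10000 positive roots of the Bessel function J0
    print((roots ** -2).sum())     # ~0.24999..., approaching the theorem's value of 1/4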
12 CFR 217.52 - Simple risk-weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... greater than or equal to −1 (that is, between zero and −1), then E equals the absolute value of RVC. If... this section) by the lowest applicable risk weight in this paragraph (b). (1) Zero percent risk weight... credit exposures receive a zero percent risk weight under § 217.32 may be assigned a zero percent risk...
NASA Astrophysics Data System (ADS)
Sun, Jingliang; Liu, Chunsheng
2018-01-01
In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via adaptive dynamic programming technique. Besides, a suitable non-quadratic functional is utilised to encode the control constraints into a differential game problem. The single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the estimation weight error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated by using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.
Variable speed wind turbine generator with zero-sequence filter
Muljadi, Eduard
1998-01-01
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero sequence sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.
Variable Speed Wind Turbine Generator with Zero-sequence Filter
Muljadi, Eduard
1998-08-25
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero sequence sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.
Variable speed wind turbine generator with zero-sequence filter
Muljadi, E.
1998-08-25
A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero sequence sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility. 14 figs.
12 CFR 324.152 - Simple risk weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... (that is, between zero and -1), then E equals the absolute value of RVC. If RVC is negative and less... the lowest applicable risk weight in this section. (1) Zero percent risk weight equity exposures. An....131(d)(2) is assigned a zero percent risk weight. (2) 20 percent risk weight equity exposures. An...
12 CFR 3.52 - Simple risk-weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... this paragraph (b). (1) Zero percent risk weight equity exposures. An equity exposure to a sovereign... International Monetary Fund, an MDB, and any other entity whose credit exposures receive a zero percent risk weight under § 3.32 may be assigned a zero percent risk weight. (2) 20 percent risk weight equity...
Modelling rainfall amounts using mixed-gamma model for Kuantan district
NASA Astrophysics Data System (ADS)
Zakaria, Roslinazairimah; Moslim, Nor Hafizah
2017-05-01
An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. Formulae for the mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. First, the gamma distribution is used to model the nonzero rainfall amounts, and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae for the mean and variance of the sum of two and three independent mixed-gamma variables are tested using monthly rainfall amounts from rainfall stations within the Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the descriptive statistics of the observed sums of rainfall amounts are not significantly different at the 5% significance level from the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
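The moment formulae for sums of independent mixed-gamma variables are easy to check by simulation; a sketch assuming NumPy, with illustrative parameters rather than the Kuantan station fits. For a single variable that is zero with probability 1 - p and Gamma(k, theta) otherwise, E[X] = p*k*theta and Var[X] = p*k*theta^2 + p*(1 - p)*(k*theta)^2; for independent sums, means and variances add:

    import numpy as np

    rng = np.random.default_rng(0)

    def mixed_gamma(p, shape, scale, size):
        """Zero with probability 1 - p, Gamma(shape, scale) otherwise."""
        wet = rng.random(size) < p
        return np.where(wet, rng.gamma(shape, scale, size), 0.0)

    p, k, th = 0.6, 1.8, 40.0                  # illustrative parameters
    s = mixed_gamma(p, k, th, 10**6) + mixed_gamma(p, k, th, 10**6)
    mean = 2 * p * k * th                      # analytic mean of the sum
    var = 2 * (p * k * th**2 + p * (1 - p) * (k * th)**2)   # analytic variance
    print(s.mean(), mean)                      # empirical vs analytic: close
    print(s.var(), var)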
Code of Federal Regulations, 2010 CFR
2010-07-01
... accepted before the established and published day-zero Critical Entry Time at origin, where the origin P&DC... is the sum of the applicable (1-to-3-day) First-Class Mail service standard plus one day, for each 3-digit ZIP Code origin-destination pair for which Periodicals are accepted before the day zero Critical...
Environmentalism in the Future: Reducing its Dogmatism and Pseudo-Scientism.
ERIC Educational Resources Information Center
Maruyama, Magoroh
This paper examines fallacious assumptions from which the environmentalist movement of the future must free itself. The first is the fallacy of the zero-sum game assumption: believing that in order to protect the environment, industry must be decreased. We are beginning to see some clever, positive-sum uses of industry in relation to the environment.…
NASA Astrophysics Data System (ADS)
Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.
1993-08-01
Inspired by the time delays that occur in neurobiological signal transmission, we describe an adaptive time-delay neural network (ATNN), a powerful dynamic learning technique for spatiotemporal pattern transformation and temporal sequence identification. The dynamic properties of this network are formulated through the adaptation of time delays and synapse weights, which are adjusted on-line by gradient-descent rules according to the evolution of observed inputs and outputs. We have applied the ATNN to examples that possess spatiotemporal complexity, with temporal sequences that are completed by the network; the ATNN can thus be applied to pattern completion. Simulation results show that the ATNN learns the topology of circular and figure-eight trajectories within 500 on-line training iterations and reproduces the trajectories dynamically with very high accuracy. The ATNN was also trained to model the Fourier series expansion of a sum of different odd harmonics. The resulting network provides more flexibility and efficiency than the TDNN, allowing the network to seek optimal values for time delays as well as optimal synapse weights.
Sparse Zero-Sum Games as Stable Functional Feature Selection
Sokolovska, Nataliya; Teytaud, Olivier; Rizkalla, Salwa; Clément, Karine; Zucker, Jean-Daniel
2015-01-01
In large-scale systems biology applications, features are structured in hidden functional categories whose predictive power is identical. Feature selection, therefore, can lead not only to a problem with a reduced dimensionality, but also reveal some knowledge on functional classes of variables. In this contribution, we propose a framework based on a sparse zero-sum game which performs a stable functional feature selection. In particular, the approach is based on feature subsets ranking by a thresholding stochastic bandit. We provide a theoretical analysis of the introduced algorithm. We illustrate by experiments on both synthetic and real complex data that the proposed method is competitive from the predictive and stability viewpoints. PMID:26325268
On the sum of generalized Fibonacci sequence
NASA Astrophysics Data System (ADS)
Chong, Chin-Yoon; Ho, C. K.
2014-06-01
We consider the generalized Fibonacci sequence {U_n} defined by U_0 = 0, U_1 = 1, and U_{n+2} = pU_{n+1} + qU_n for all n ≥ 0 and p, q ∈ Z^+. In this paper, we derive various sums of the generalized Fibonacci sequence from its recursive relation.
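One identity of this kind follows directly from the recursion: (p + q - 1) * sum_{k=0}^{n} U_k = U_{n+1} + q*U_n - 1 (valid for p, q ≥ 1; whether it matches the paper's exact results is an assumption). A quick numerical verification:

    def U(n, p, q):
        """Generalized Fibonacci: U0 = 0, U1 = 1, U_{n+2} = p*U_{n+1} + q*U_n."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, p * b + q * a
        return a

    p, q, n = 2, 3, 12                      # illustrative positive integers
    lhs = (p + q - 1) * sum(U(k, p, q) for k in range(n + 1))
    rhs = U(n + 1, p, q) + q * U(n, p, q) - 1
    assert lhs == rhs                       # for p = q = 1 this reduces to F_{n+2} - 1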
Slanger, W D; Marchello, M J; Busboom, J R; Meyer, H H; Mitchell, L A; Hendrix, W F; Mills, R R; Warnock, W D
1994-06-01
Data from sixty finished, crossbred lambs were used to develop prediction equations for total weight of retail-ready cuts (SUM). These cuts were the leg, sirloin, loin, rack, shoulder, neck, riblets, shank, and lean trim (85/15). Measurements were taken on live lambs and on both hot and cold carcasses. A four-terminal bioelectrical impedance analyzer (BIA) was used to measure resistance (Rs, ohms) and reactance (Xc, ohms). Distances between detector terminals (L, centimeters) were recorded. Carcass temperatures (T, degrees C) at the time of BIA readings were also recorded. The equation predicting SUM from cold carcass measurements (n = 53, R² = 0.97) was SUM = 0.093 + 0.621 × weight − 0.0219 × Rs + 0.0248 × Xc + 0.182 × L − 0.338 × T. Resistance accounted for variability in SUM over and above weight and L (P = 0.0016). The above equation was used to rank cold carcasses in descending order of predicted SUM. An analogous ranking was obtained from a prediction equation that used weight only (R² = 0.88). These rankings were divided into five categories: top 25%, middle 50%, bottom 25%, top 50%, and bottom 50%. Within-category differences in average fat cover, yield grade, and SUM as a percentage of cold carcass weight for carcasses not placed in the same category by both prediction equations were quantified with independent t-tests. These differences were statistically significant for all categories except the middle 50%. This shows that BIA located those lambs that could more efficiently contribute to SUM because a higher portion of their weight was lean.
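The published cold-carcass equation can be applied directly; a minimal transcription (the weight unit is not stated in the abstract, so kilograms is an assumption here):

    def predict_sum(weight, Rs, Xc, L, T):
        """Cold-carcass SUM prediction from the abstract (R^2 = 0.97).
        Rs, Xc in ohms; L in cm; T in degrees C; weight unit assumed kg."""
        return 0.093 + 0.621 * weight - 0.0219 * Rs + 0.0248 * Xc + 0.182 * L - 0.338 * T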
Hand gesture recognition by analysis of codons
NASA Astrophysics Data System (ADS)
Ramachandra, Poornima; Shrikhande, Neelima
2007-09-01
The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system could substitute for mouse and keyboard completely. Humans can recognize most gestures by looking at the characteristic external shape or the silhouette of the fingers. Many previous techniques to recognize gestures dealt with motion and geometric features of hands. In this thesis, gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of a sequence of Codons. The Codons are defined in terms of the relationship between the maxima, minima, and zeros of curvature encountered as one traverses the boundary of the object. We have concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (letters J and Z are ignored as they are represented using motion) [2]. The query image given as input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for external parts of a hand gesture. We have used the Weighted Frequency Indexing Transform (WFIT) approach, also used in DNA sequence matching, to match the Codon-lists. The matching algorithm consists of two steps: (1) the query sequences are converted to short sequences and assigned weights, and (2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and corresponding interpretation are shown as output.
Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models
2002-03-01
such as the weighted sum method, the weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only the weighted sum...different groups. They can be termed deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the...weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most
Nonlinear zero-sum differential game analysis by singular perturbation methods
NASA Technical Reports Server (NTRS)
Shinar, J.; Farber, N.
1982-01-01
A class of nonlinear, zero-sum differential games exhibiting time-scale separation properties can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach is proposed. The accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.
A Groupwise Association Test for Rare Mutations Using a Weighted Sum Statistic
Madsen, Bo Eskerod; Browning, Sharon R.
2009-01-01
Resequencing is an emerging tool for the identification of rare disease-associated mutations. Rare mutations are difficult to tag with SNP genotyping, as genotyping studies are designed to detect common variants. However, studies have shown that genetic heterogeneity is a probable scenario for common diseases, in which multiple rare mutations together explain a large proportion of the genetic basis for the disease. Thus, we propose a weighted-sum method to jointly analyse a group of mutations in order to test for groupwise association with disease status. For example, such a group of mutations may result from resequencing a gene. We compare the proposed weighted-sum method to alternative methods and show that it is powerful for identifying disease-associated genes, both on simulated and ENCODE data. Using the weighted-sum method, a resequencing study can identify a disease-associated gene with an overall population attributable risk (PAR) of 2%, even when each individual mutation has a much lower PAR, using 1,000 to 7,000 affected and unaffected individuals, depending on the underlying genetic model. This study thus demonstrates that resequencing studies can identify important genetic associations, provided that specialised analysis methods, such as the weighted-sum method, are used. PMID:19214210
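The weighted-sum statistic itself is compact; a sketch of the published scoring scheme (weights from unaffected-sample allele frequencies, rank-sum statistic, significance by permutation; assumes NumPy/SciPy, and simplifies per-variant genotyping counts to the full sample size):

    import numpy as np
    from scipy.stats import rankdata

    def weighted_sum_statistic(G, affected):
        """Sketch of a Madsen-Browning-style weighted-sum test statistic.
        G: (individuals x variants) array of minor-allele counts (0, 1, 2).
        affected: boolean array marking affected individuals."""
        n = G.shape[0]
        nU = int((~affected).sum())
        mU = G[~affected].sum(axis=0)          # minor alleles among the unaffected
        q = (mU + 1) / (2 * nU + 2)            # smoothed unaffected allele frequency
        w = np.sqrt(n * q * (1 - q))           # rare variants get small w, hence large weight
        gamma = (G / w).sum(axis=1)            # per-individual genetic score
        return rankdata(gamma)[affected].sum() # rank sum; p-value via permuting the labels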
12 CFR 3.37 - Collateralized transactions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... this section: (i) A national bank or Federal savings association may assign a zero percent risk weight... qualifies for a zero percent risk weight under § 3.32. (iii) A national bank or Federal savings association may assign a zero percent risk weight to the collateralized portion of an exposure where: (A) The...
Sub-Audible Speech Recognition Based upon Electromyographic Signals
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C. (Inventor); Agabon, Shane T. (Inventor); Lee, Diana D. (Inventor)
2012-01-01
Method and system for processing and identifying a sub-audible signal formed by a source of sub-audible sounds. Sequences of samples of sub-audible sound patterns ("SASPs") for known words/phrases in a selected database are received for overlapping time intervals, and Signal Processing Transforms ("SPTs") are formed for each sample, as part of a matrix of entry values. The matrix is decomposed into contiguous, non-overlapping two-dimensional cells of entries, and neural net analysis is applied to estimate reference sets of weight coefficients that provide sums with optimal matches to reference sets of values. The reference sets of weight coefficients are used to determine a correspondence between a new (unknown) word/phrase and a word/phrase in the database.
Stephenson, Richard; Caron, Aimee M; Famina, Svetlana
2016-12-01
Sleep-wake behavior exhibits diurnal rhythmicity, rebound responses to acute total sleep deprivation (TSD), and attenuated rebounds following chronic sleep restriction (CSR). We investigated how these long-term patterns of behavior emerge from stochastic short-term dynamics of state transition. Male Sprague-Dawley rats were subjected to TSD (1 day × 24 h, N = 9) or CSR (10 days × 18 h TSD, N = 7) using a rodent walking-wheel apparatus. One baseline day and one recovery day following TSD and CSR were analyzed. The implications of the zero-sum principle were evaluated using a Markov model of sleep-wake state transition. Wake bout duration (a combined function of the probability of wake maintenance and the proportional representations of brief and long wake) was a key variable mediating the baseline diurnal rhythms and post-TSD responses of all three states, and the attenuation of the post-CSR rebounds. The post-NREM state transition trajectory was an important factor in REM rebounds. The zero-sum constraint ensures that a change in any transition probability always affects the bout frequency and cumulative time of at least two, and usually all three, of wakefulness, NREM, and REM. Neural mechanisms controlling wake maintenance may play a pivotal role in the regulation and dysregulation of all three states. Copyright © 2016 Elsevier Inc. All rights reserved.
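The zero-sum constraint (probability leaving one state must enter another, and the three time fractions sum to one) can be made concrete with a toy three-state Markov chain; the transition probabilities below are illustrative, not the fitted values from the rats:

    import numpy as np

    # Per-epoch transition matrix over (Wake, NREM, REM); each row sums to 1,
    # so raising one transition probability necessarily lowers another.
    P = np.array([[0.90, 0.09, 0.01],    # from Wake
                  [0.05, 0.90, 0.05],    # from NREM
                  [0.10, 0.05, 0.85]])   # from REM

    # Stationary distribution = long-run fraction of time in each state
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    print(pi)   # Wake/NREM/REM time fractions; they necessarily sum to 1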
NASA Technical Reports Server (NTRS)
Brunelle, Eugene J.
1994-01-01
The first few viewgraphs describe the general solution properties of linear elasticity theory, which are given by the following two statements: (1) for stress B.C. on $S_\sigma$ and zero displacement B.C. on $S_u$, the altered displacements $u_i^*$ and the actual stresses $\tau_{ij}$ are elastically dependent on Poisson's ratio $\nu$ alone; thus the actual displacements are given by $u_i = \mu^{-1} u_i^*$; and (2) for zero stress B.C. on $S_\sigma$ and displacement B.C. on $S_u$, the actual displacements $u_i$ and the altered stresses $\tau_{ij}^*$ are elastically dependent on Poisson's ratio $\nu$ alone; thus the actual stresses are given by $\tau_{ij} = E \tau_{ij}^*$. The remaining viewgraphs describe the minimum-parameter formulation of the general classical laminate theory (CLT) plate problem as follows. The general CLT plate problem is expressed as a 3 x 3 system of differential equations in the displacements u, v, and w. The eighteen (six each) $A_{ij}$, $B_{ij}$, and $D_{ij}$ system coefficients are ply-weighted sums of the transformed reduced stiffnesses $(\bar{Q}_{ij})_k$; the $(\bar{Q}_{ij})_k$ in turn depend on six reduced stiffnesses $(Q_{ij})_k$ and the material and geometry properties of the $k$th layer. This paper develops a method for redefining the system coefficients, the displacement components (u, v, w), and the position components (x, y) such that a minimum-parameter formulation is possible. The pivotal steps in this method are (1) the reduction of the $(\bar{Q}_{ij})_k$ dependencies to just two constants, $Q^* = (Q_{12} + 2Q_{66})/(Q_{11}Q_{22})^{1/2}$ and $F^* = (Q_{22}/Q_{11})^{1/2}$, in terms of ply-independent reference values $Q_{ij}$; (2) the reduction of the remaining portions of the A, B, and D coefficients to nondimensional ply-weighted sums (with 0 to 1 ranges) that are independent of $Q^*$ and $F^*$; and (3) the introduction of simple coordinate stretchings for u, v, w and x, y such that the process is neatly completed.
12 CFR 324.37 - Collateralized transactions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... institution may assign a zero percent risk weight to an exposure to an OTC derivative contract that is marked... exposure to a sovereign that qualifies for a zero percent risk weight under § 324.32. (iii) An FDIC-supervised institution may assign a zero percent risk weight to the collateralized portion of an exposure...
Two-Way Satellite Time and Frequency Transfer (TWSTFT) Calibration Constancy From Closure Sums
2008-12-01
Two-way Satellite Time and Frequency Transfer (TWSTFT) is considered to be the most accurate means of long-distance...explanations for small, but non-zero, biases observed in the closure sums of uncalibrated data are presented.
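A closure sum is the simplest internal consistency check for pairwise time-transfer links: the differences around a loop should cancel, and any residual exposes a link-dependent calibration error. A toy example with made-up numbers in nanoseconds:

    # Pairwise TWSTFT link measurements of clock differences (ns), assumed values
    AB, BC, CA = 12.4, -5.1, -7.0

    # Ideally (A-B) + (B-C) + (C-A) = 0; a non-zero residual is a bias to explain
    closure = AB + BC + CA
    print(f"closure sum = {closure:+.1f} ns")   # +0.3 ns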
Chiral symmetry and π - π scattering in the Covariant Spectator Theory
Biernat, Elmar P.; Peña, M. T.; Ribeiro, J. E.; ...
2014-11-14
The π-π scattering amplitude calculated with a model for the quark-antiquark interaction in the framework of the Covariant Spectator Theory (CST) is shown to satisfy the Adler zero constraint imposed by chiral symmetry. The CST formalism is established in Minkowski space and our calculations are performed in momentum space. We prove that the axial-vector Ward-Takahashi identity is satisfied by our model. Then we show that, similarly to what happens within the Bethe-Salpeter formalism, application of the axial-vector Ward-Takahashi identity to the CST π-π scattering amplitude allows us to sum the intermediate quark-quark interactions to all orders. Thus, the Adler self-consistency zero for π-π scattering in the chiral limit emerges as the result of this sum.
Computer Algorithms for Measurement Control and Signal Processing of Transient Scattering Signatures
1988-09-01
CURVE * C Y2 IS THE BACKGROUND CURVE * C NSHIF IS THE NUMBER OF POINTS TO SHIFT * C SET IS THE SUM OF THE POINTS TO SHIFT * C IN ORDER TO ZERO PAD...reduces the spectral content in both the low and high frequency regimes. If the threshold is set to zero, a "naive" deconvolution results. This provides...side of equation 5.2 was close to zero, so it can be neglected. As a result, the expected power is equal to the variance. The signal plus noise power
Li, Wendy; Anderson, Donald D.; Goldsworthy, Jane K.; Marsh, J. Lawrence; Brown, Thomas D.
2008-01-01
The role of altered contact mechanics in the pathogenesis of post-traumatic osteoarthritis (PTOA) following intra-articular fracture remains poorly understood. One proposed etiology is that residual incongruities lead to altered joint contact stresses that, over time, predispose to PTOA. Prevailing joint contact stresses following surgical fracture reduction were quantified in this study using patient-specific contact finite element (FE) analysis. FE models were created for 11 ankle pairs from tibial plafond fracture patients. Both the (reduced) fractured ankles and their intact contralaterals were modeled. A sequence of 13 loading instances was used to simulate the stance phase of gait. Contact stresses were summed across loadings in the simulation, weighted by resident time in the gait cycle. This chronic exposure measure, a metric of degeneration propensity, was then compared between intact and fractured ankle pairs. Intact ankles had lower peak contact stress exposures that were more uniform and centrally located. The series-average peak contact stress elevation for fractured ankles was 38% (p = 0.0015; peak elevation was 82%). Fractured ankles had less area with low contact stress exposure than intact ankles, and a greater area with high exposure. Chronic contact stress overexposures (stresses exceeding a damage threshold) ranged from near zero to a high of 18 times the matched intact value. The patient-specific FE models utilized in this study represent substantial progress towards elucidating the relationship between altered contact stresses and the outcome of patients treated for intra-articular fractures. PMID:18404662
Harel, Daphna; Hudson, Marie; Iliescu, Alexandra; Baron, Murray; Steele, Russell
2016-08-01
To develop a weighted summary score for the Medsger Disease Severity Scale (DSS) and to compare its measurement properties with those of a summed DSS score and a physician's global assessment (PGA) of severity score in systemic sclerosis (SSc). Data from 875 patients with SSc enrolled in a multisite observational research cohort were extracted from a central database. Item response theory was used to estimate weights for the DSS weighted score. Intraclass correlation coefficients (ICC) and convergent, discriminative, and predictive validity of the 3 summary measures in relation to patient-reported outcomes (PRO) and mortality were compared. Mean PGA was 2.69 (SD 2.16, range 0-10), mean DSS summed score was 8.60 (SD 4.02, range 0-36), and mean DSS weighted score was 8.11 (SD 4.05, range 0-36). ICC were similar for all 3 measures [PGA 6.9%, 95% credible intervals (CrI) 2.1-16.2; DSS summed score 2.5%, 95% CrI 0.4-6.7; DSS weighted score 2.0%, 95% CrI 0.1-5.6]. Convergent and discriminative validity of the 3 measures for PRO were largely similar. In Cox proportional hazards models adjusting for age and sex, the 3 measures had similar predictive ability for mortality (adjusted R² 13.9% for PGA, 12.3% for DSS summed score, and 10.7% for DSS weighted score). The 3 summary scores appear valid and perform similarly. However, there were some concerns with the weights computed for individual DSS scales, with unexpectedly low weights attributed to lung, heart, and kidney, leading the PGA to be the preferred measure at this time. Further work refining the DSS could improve the measurement properties of the DSS summary scores.
76 FR 16234 - Prompt Corrective Action; Amended Definition of Low-Risk Assets
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-23
... guaranteed by NCUA. Assets in this category receive a risk-weighting of zero for regulatory capital purposes... not apply a risk-weighting of zero even when an investment carries no credit risk. The "Low-risk assets" risk portfolio, in contrast, does apply a risk-weighting of zero, but the NGNs did not fall...
12 CFR 324.52 - Simple risk-weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... greater than or equal to −1 (that is, between zero and −1), then E equals the absolute value of RVC. If...) Zero percent risk weight equity exposures. An equity exposure to a sovereign, the Bank for..., an MDB, and any other entity whose credit exposures receive a zero percent risk weight under § 324.32...
Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1998-01-01
In this paper, we reinvestigate the solution of the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat but lie in a chaotic region; nevertheless, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data in order to predict future data under limited weight quantization constraints. This helps predict future information that provides better estimation in time for an intelligent control system. In our earlier work, it was shown that CEP can sufficiently learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as low as 4-bit weight quantization using round-off and truncation techniques. The results show that the generalization feature suffers less as more bits of weight quantization are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware considerations.
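The round-off versus truncation contrast is easy to demonstrate in isolation; a toy sketch (uniform quantizer on [-1, 1); the bit width and values are illustrative, not the paper's setup):

    import numpy as np

    def quantize(w, bits, mode="round"):
        """Uniform quantization of weights on [-1, 1) to 2**bits levels."""
        lsb = 2.0 ** (1 - bits)                          # step size for the signed range
        q = np.round(w / lsb) if mode == "round" else np.floor(w / lsb)
        return np.clip(q * lsb, -1.0, 1.0 - lsb)

    w = np.random.uniform(-1, 1, 100_000)
    for mode in ("round", "floor"):
        err = quantize(w, 4, mode) - w
        print(mode, err.mean())
    # Rounding error centers near zero; floor-style (two's-complement) truncation
    # introduces a systematic bias of about -LSB/2, i.e. an asymmetric error.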
Infinity and Newton's Three Laws of Motion
NASA Astrophysics Data System (ADS)
Lee, Chunghyoung
2011-12-01
It is shown that the following three common understandings of Newton's laws of motion do not hold for systems of infinitely many components. First, Newton's third law, or the law of action and reaction, is universally believed to imply that the total sum of internal forces in a system is always zero. Several examples are presented to show that this belief fails to hold for infinite systems. Second, two of these examples are of an infinitely divisible continuous body with finite mass and volume such that the sum of all the internal forces in the body is not zero and the body accelerates due to this non-null net internal force. So the two examples also demonstrate the breakdown of the common understanding that according to Newton's laws a body under no external force does not accelerate. Finally, these examples also make it clear that the expression `impressed force' in Newton's formulations of his first and second laws should be understood not as `external force' but as `exerted force' which is the sum of all the internal and external forces acting on a given body, if the body is infinitely divisible.
ERIC Educational Resources Information Center
Soh, Kaycheng
2014-01-01
World university rankings (WUR) use the weight-and-sum approach to arrive at an overall measure which is then used to rank the participating universities of the world. Although the weight-and-sum procedure seems straightforward and accords with common sense, it has hidden methodological or statistical problems which render the meaning of the…
49 CFR 393.42 - Brakes required on all wheels.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subject to this part is not required to be equipped with brakes if the axle weight of the towed vehicle does not exceed 40 percent of the sum of the axle weights of the towing vehicle. (4) Any full trailer... of the towed vehicle does not exceed 40 percent of the sum of the axle weights of the towing vehicle...
NASA Astrophysics Data System (ADS)
Qixing, Chen; Qiyu, Luo
2013-03-01
At present, digital-to-analog converter (DAC) architectures are essentially based on weighted currents, and the average D/A signal current grows geometrically as the number of digital signal bits increases, reaching 2^(n-1) times the least weight current. For a dual weight resistance chain type DAC, however, which performs D/A conversion using weighted voltages, the D/A signal current is fixed at the chain current I_cha; it is only 1/2^(n-1) of the average signal current of the weight current type DAC. Its principle is as follows: n pairs of dual weight resistances form a resistance chain, which ensures the constancy of the chain current; if the digital signals control the total weight resistance from the output point to the zero potential point, they directly control the total weight voltage at the output point, so that the digital signals turn directly into a sum of weight voltage signals. Thus the following goals are realized: (1) the total current is less than 200 μA; (2) the total power consumption is less than 2 mW; (3) an 18-bit conversion can be realized by adopting a multi-grade structure; (4) the chip area is one order of magnitude smaller than that of the subsection current-steering type DAC; (5) the error depends only on the error of the unit resistance, so it is smaller than that of the subsection current-steering type DAC; (6) the conversion time is only one switching action, so its speed is not lower than that of present DACs.
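As a rough numerical illustration of the weighted-voltage principle (a toy model, not the paper's circuit; the component values and strict binary weighting are assumptions), the output voltage can be formed as the fixed chain current times the digitally selected total weight resistance:

```python
def dac_output(bits, i_chain=100e-6, r_unit=1e3):
    """Toy weighted-voltage DAC: V_out = I_chain * (selected total weight resistance).

    bits    -- digital code, MSB first, e.g. [1, 0, 1, 1]
    i_chain -- constant chain current in amperes, fixed by the resistance chain
    r_unit  -- resistance of the least-weight element in ohms
    """
    n = len(bits)
    # bit k (MSB first) switches in a weight resistance of r_unit * 2^(n-1-k)
    r_total = sum(b * r_unit * 2 ** (n - 1 - k) for k, b in enumerate(bits))
    return i_chain * r_total

print(dac_output([1, 0, 1, 1]))   # 100 uA through 11 kOhm -> 1.1 V
```

The signal current stays at i_chain for any input code, which is the feature the abstract contrasts with weight-current DACs.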
14 CFR 129.23 - Transport category cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2013 CFR
2013-01-01
...: Increased zero fuel and landing weights. 129.23 Section 129.23 Aeronautics and Space FEDERAL AVIATION... ENGAGED IN COMMON CARRIAGE General § 129.23 Transport category cargo service airplanes: Increased zero... (certificated under part 4b of the Civil Air Regulations effective before March 13, 1956) at increased zero fuel...
14 CFR 129.23 - Transport category cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2012 CFR
2012-01-01
...: Increased zero fuel and landing weights. 129.23 Section 129.23 Aeronautics and Space FEDERAL AVIATION... ENGAGED IN COMMON CARRIAGE General § 129.23 Transport category cargo service airplanes: Increased zero... (certificated under part 4b of the Civil Air Regulations effective before March 13, 1956) at increased zero fuel...
14 CFR 129.23 - Transport category cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2014 CFR
2014-01-01
...: Increased zero fuel and landing weights. 129.23 Section 129.23 Aeronautics and Space FEDERAL AVIATION... ENGAGED IN COMMON CARRIAGE General § 129.23 Transport category cargo service airplanes: Increased zero... (certificated under part 4b of the Civil Air Regulations effective before March 13, 1956) at increased zero fuel...
14 CFR 129.23 - Transport category cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2011 CFR
2011-01-01
...: Increased zero fuel and landing weights. 129.23 Section 129.23 Aeronautics and Space FEDERAL AVIATION... ENGAGED IN COMMON CARRIAGE General § 129.23 Transport category cargo service airplanes: Increased zero... (certificated under part 4b of the Civil Air Regulations effective before March 13, 1956) at increased zero fuel...
14 CFR 129.23 - Transport category cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2010 CFR
2010-01-01
...: Increased zero fuel and landing weights. 129.23 Section 129.23 Aeronautics and Space FEDERAL AVIATION... ENGAGED IN COMMON CARRIAGE General § 129.23 Transport category cargo service airplanes: Increased zero... (certificated under part 4b of the Civil Air Regulations effective before March 13, 1956) at increased zero fuel...
Coherence analysis of a class of weighted networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; He, Jiaojiao; Zong, Yue; Ju, Tingting; Sun, Yu; Su, Weiyi
2018-04-01
This paper investigates consensus dynamics in a dynamical system with additive stochastic disturbances, characterized as network coherence using the Laplacian spectrum. We introduce a class of weighted networks based on a complete graph and investigate the first- and second-order network coherence, quantified as the sum and the sum of squares, respectively, of the reciprocals of all nonzero Laplacian eigenvalues. First, the recursive relationship between the Laplacian eigenvalues at two successive generations is deduced. Then, we compute the sum and the sum of squares of the reciprocals of all nonzero Laplacian eigenvalues. The obtained results show that the scalings of first- and second-order coherence with network size obey four and five distinct laws, respectively, depending on the range of the weight factor. Finally, it is shown that the scalings of our studied networks are smaller than those of other studied networks when 1/√d …
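Both coherence measures are straightforward to evaluate directly from the Laplacian spectrum; the following sketch (illustrative, using the common 1/(2N) normalization, which may differ from the paper's convention) computes them for a complete graph:

```python
import numpy as np

def network_coherence(L):
    """First- and second-order network coherence from the Laplacian spectrum."""
    lam = np.linalg.eigvalsh(L)
    nz = lam[lam > 1e-9]                      # discard the zero eigenvalue
    n = L.shape[0]
    first = np.sum(1.0 / nz) / (2 * n)        # sum of reciprocals
    second = np.sum(1.0 / nz**2) / (2 * n)    # sum of squared reciprocals
    return first, second

# Laplacian of the complete graph K4: eigenvalues {0, 4, 4, 4}
L = 4 * np.eye(4) - np.ones((4, 4))
print(network_coherence(L))
```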
ERIC Educational Resources Information Center
Gauthier, N.
2006-01-01
This note describes a method for evaluating the sums of the m -th powers of n consecutive terms of a general arithmetic sequence: { S[subscript m] = 0, 1, 2,...}. The method is based on the use of a differential operator that is repeatedly applied to a generating function. A known linear recurrence is then obtained and the m-th sum, S[subscript…
A Progression of Static Equilibrium Laboratory Exercises
NASA Astrophysics Data System (ADS)
Kutzner, Mickey; Kutzner, Andrew
2013-10-01
Although simple architectural structures like bridges, catwalks, cantilevers, and Stonehenge have been integral in human societies for millennia, as have levers and other simple tools, modern students of introductory physics continue to grapple with Newton's conditions for static equilibrium. As formulated in typical introductory physics textbooks, these two conditions appear as ΣF = 0 (1) and Στ = 0, (2) where each torque τ is defined as the cross product between the lever arm vector r and the corresponding applied force F, τ = r × F, (3) having magnitude τ = F r sin θ. (4) The angle θ here is between the two vectors F and r. In Eq. (1), upward (downward) forces are considered positive (negative). In Eq. (2), counterclockwise (clockwise) torques are considered positive (negative). Equation (1) holds that the vector sum of the external forces acting on an object must be zero to prevent linear accelerations; Eq. (2) states that the vector sum of torques due to external forces about any axis must be zero to prevent angular accelerations. In our view these conditions can be problematic for students because a) the equations contain the unfamiliar summation notation Σ, b) students are uncertain of the role of torques in causing rotations, and c) it is not clear why the sum of torques is zero regardless of the choice of axis. Gianino [5] describes an experiment using MBL and a force sensor to convey the meaning of torque as applied to a rigid-body lever system without exploring quantitative aspects of the conditions for static equilibrium.
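A short numerical check of the two conditions (a sketch, not from the paper; the example beam and loads are invented) also illustrates point c), since the torque sum vanishes about any axis once both conditions hold:

```python
import numpy as np

def in_equilibrium(forces, points, axis=np.zeros(3), tol=1e-9):
    """Test sum F = 0 and sum tau = 0 about `axis` for 3D forces at given points."""
    F = np.sum(forces, axis=0)
    tau = np.sum([np.cross(r - axis, f) for r, f in zip(points, forces)], axis=0)
    return bool(np.allclose(F, 0, atol=tol) and np.allclose(tau, 0, atol=tol))

# a 2 m massless beam on a central pivot with equal 10 N loads at both ends
forces = [np.array([0., -10., 0.]), np.array([0., -10., 0.]), np.array([0., 20., 0.])]
points = [np.array([-1., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 0., 0.])]
print(in_equilibrium(forces, points))                                # True
print(in_equilibrium(forces, points, axis=np.array([1., 0., 0.])))   # still True: axis choice is irrelevant
```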
Gu, Hai Ting; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Abrupt change is an important manifestation of hydrological processes undergoing dramatic variation in the context of global climate change; its accurate recognition is of great significance for understanding changes in hydrological processes and for practical hydrology and water resources work. Traditional methods are not reliable at either end of a sample series, and the results of different methods are often inconsistent. To solve this problem, we propose a comprehensive weighted recognition method for hydrological abrupt change, with weights derived by comparing 12 commonly used change-point tests. The reliability of the method was verified by Monte Carlo statistical tests. The results showed that the efficiency of the 12 methods was influenced by factors including the coefficient of variation (Cv) and deviation coefficient (Cs) before the change point, the mean value difference coefficient, the Cv difference coefficient, and the Cs difference coefficient, but had no significant relationship with the mean value of the sequence. Based on the performance of each method in the statistical tests, a weight was assigned to each test. The sliding rank sum test and the sliding run test had the highest weights, whereas the RS test had the lowest. By this means, the change point with the largest comprehensive weight can be selected as the final result when the results of the different methods are inconsistent. The method was used to analyze the maximum flow sequences (1-day, 3-day, 5-day, 7-day, and 1-month) of Jiajiu station in the lower reaches of the Lancang River. The results showed that each sequence had an obvious jump variation in 2004, which agrees with the physical causes of hydrological process change and with water conservancy construction. The rationality and reliability of the proposed method were thereby verified.
Information Security Scheme Based on Computational Temporal Ghost Imaging.
Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing
2017-08-09
An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key and multiplied with the 1D data stream. The cipher text is obtained by summing the weighted encryption key. Decryption is realized by a correlation measurement between the encrypted information and the encryption key. Due to the intrinsic high-level randomness of the key, the security of this method is strongly guaranteed. The feasibility of the method and its robustness against both occlusion and additive noise attacks are demonstrated with simulations.
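A simplified 1D analogue of the scheme (a sketch under assumed sizes; the actual protocol uses 2D patterns, and the paper's exact estimator may differ) shows the key-weighted summation and the correlation-based decryption:

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 64, 20000                                  # data length, number of key patterns
x = rng.random(T)                                 # 1D plaintext data stream
keys = rng.integers(0, 2, (K, T)).astype(float)   # independent random binary patterns

cipher = keys @ x                                 # each cipher value: key-weighted sum of the data

# decryption: correlate the cipher text with the key ensemble
recon = (cipher - cipher.mean()) @ (keys - keys.mean(axis=0)) / K

print("correlation with plaintext: %.3f" % np.corrcoef(recon, x)[0, 1])  # -> 1 as K grows
```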
Matching methods evaluation framework for stereoscopic breast x-ray images.
Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric
2016-01-01
Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images and used it to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero-mean sum of absolute differences, zero-mean sum of squared differences, and locally scaled mean sum of squared differences] performed equally well, with an average error score of 0.04 (0 being a perfect match). LSAD was selected for generating the disparity maps.
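For reference, textbook forms of two of the compared block-matching costs (a sketch; the exact windowing and scaling in the paper's framework may differ) are:

```python
import numpy as np

def zsad(a, b):
    """Zero-mean sum of absolute differences: cancels a brightness offset."""
    return np.abs((a - a.mean()) - (b - b.mean())).sum()

def lsad(a, b):
    """Locally scaled SAD: cancels a gain difference via the ratio of patch means."""
    return np.abs(a - (a.mean() / b.mean()) * b).sum()

rng = np.random.default_rng(2)
p = rng.random((9, 9))
q = 1.3 * p + 0.05                 # same structure, different gain and offset
print(np.abs(p - q).sum())         # plain SAD is large
print(zsad(p, q), lsad(p, q))      # ZSAD removes the offset, LSAD the gain
```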
Quantum circuit dynamics via path integrals: Is there a classical action for discrete-time paths?
NASA Astrophysics Data System (ADS)
Penney, Mark D.; Enshan Koh, Dax; Spekkens, Robert W.
2017-07-01
It is straightforward to compute the transition amplitudes of a quantum circuit using the sum-over-paths methodology when the gates in the circuit are balanced, where a balanced gate is one for which all non-zero transition amplitudes are of equal magnitude. Here we consider the question of whether, for such circuits, the relative phases of different discrete-time paths through the configuration space can be defined in terms of a classical action, as they are for continuous-time paths. We show how to do so for certain kinds of quantum circuits, namely, Clifford circuits where the elementary systems are continuous-variable systems or discrete systems of odd-prime dimension. These types of circuit are distinguished by having phase-space representations that serve to define their classical counterparts. For discrete systems, the phase-space coordinates are also discrete variables. We show that for each gate in the generating set, one can associate a symplectomorphism on the phase-space and to each of these one can associate a generating function, defined on two copies of the configuration space. For discrete systems, the latter association is achieved using tools from algebraic geometry. Finally, we show that if the action functional for a discrete-time path through a sequence of gates is defined using the sum of the corresponding generating functions, then it yields the correct relative phases for the path-sum expression. These results are likely to be relevant for quantizing physical theories where time is fundamentally discrete, characterizing the classical limit of discrete-time quantum dynamics, and proving complexity results for quantum circuits.
14 CFR 121.198 - Cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Cargo service airplanes: Increased zero... AND OPERATIONS OPERATING REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Airplane Performance Operating Limitations § 121.198 Cargo service airplanes: Increased zero fuel and landing weights...
14 CFR 121.198 - Cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Cargo service airplanes: Increased zero... AND OPERATIONS OPERATING REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Airplane Performance Operating Limitations § 121.198 Cargo service airplanes: Increased zero fuel and landing weights...
14 CFR 121.198 - Cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Cargo service airplanes: Increased zero... AND OPERATIONS OPERATING REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Airplane Performance Operating Limitations § 121.198 Cargo service airplanes: Increased zero fuel and landing weights...
14 CFR 121.198 - Cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Cargo service airplanes: Increased zero... AND OPERATIONS OPERATING REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Airplane Performance Operating Limitations § 121.198 Cargo service airplanes: Increased zero fuel and landing weights...
14 CFR 121.198 - Cargo service airplanes: Increased zero fuel and landing weights.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Cargo service airplanes: Increased zero... AND OPERATIONS OPERATING REQUIREMENTS: DOMESTIC, FLAG, AND SUPPLEMENTAL OPERATIONS Airplane Performance Operating Limitations § 121.198 Cargo service airplanes: Increased zero fuel and landing weights...
NASA Astrophysics Data System (ADS)
Tohara, Takashi; Liang, Haichao; Tanaka, Hirofumi; Igarashi, Makoto; Samukawa, Seiji; Endo, Kazuhiko; Takahashi, Yasuo; Morie, Takashi
2016-03-01
A nanodisk array connected with a fin field-effect transistor is fabricated and analyzed for spiking neural network applications. This nanodevice performs weighted sums in the time domain using rising slopes of responses triggered by input spike pulses. The nanodisk arrays, which act as a resistance of several giga-ohms, are fabricated using a self-assembly bio-nano-template technique. Weighted sums are achieved with an energy dissipation on the order of 1 fJ, where the number of inputs can be more than one hundred. This amount of energy is several orders of magnitude lower than that of conventional digital processors.
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough, which leads to degraded error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity check matrices makes it possible to achieve a larger minimum distance, which improves the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4 cycles, so they have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve more excellent error-correction performance and exhibit no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
Asymptotic formulae for the zeros of orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badkov, V M
2012-09-30
Let p_n(t) be an algebraic polynomial that is orthonormal with weight p(t) on the interval [-1, 1]. When p(t) is a perturbation (in certain limits) of the Chebyshev weight of the first kind, the zeros of the polynomial p_n(cos τ) and the differences between pairs of (not necessarily consecutive) zeros are shown to satisfy asymptotic formulae as n → ∞, which hold uniformly with respect to the indices of the zeros. Similar results are also obtained for perturbations of the Chebyshev weight of the second kind. First, some preliminary results on the asymptotic behaviour of the difference between two zeros of an orthogonal trigonometric polynomial, which are needed, are established. Bibliography: 15 titles.
Physical condition for elimination of ambiguity in conditionally convergent lattice sums
NASA Astrophysics Data System (ADS)
Young, K.
1987-02-01
The conditional convergence of the lattice sum defining the Madelung constant gives rise to an ambiguity in its value. It is shown that this ambiguity is related, through a simple and universal integral, to the average charge density on the crystal surface. The physically correct value is obtained by setting the charge density to zero. A simple and universally applicable formula for the Madelung constant is derived as a consequence. It consists of adding up dipole-dipole energies together with a nontrivial correction term.
Hexagonalization of correlation functions II: two-particle contributions
NASA Astrophysics Data System (ADS)
Fleury, Thiago; Komatsu, Shota
2018-02-01
In this work, we compute one-loop planar five-point functions in N=4 super-Yang-Mills using integrability. As in the previous work, we decompose the correlation functions into hexagon form factors and glue them using the weight factors which depend on the cross-ratios. The main new ingredient in the computation, as compared to the four-point functions studied in the previous paper, is the two-particle mirror contribution. We develop techniques to evaluate it and find agreement with the perturbative results in all the cases we analyzed. In addition, we consider next-to-extremal four-point functions, which are known to be protected, and show that the sum of one-particle and two-particle contributions at one loop adds up to zero as expected. The tools developed in this work would be useful for computing higher-particle contributions which would be relevant for more complicated quantities such as higher-loop corrections and non-planar correlators.
Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.
Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen
2018-05-01
In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the proposed method relaxes the requirement for knowledge of the system dynamics or an identifier. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. Explicit updating rules for these three neural networks are provided based on the data generated online along the system trajectories. The stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.
Frustration in Condensed Matter and Protein Folding
NASA Astrophysics Data System (ADS)
Lorelli, S.; Cabot, A.; Sundarprasad, N.; Boekema, C.
Using computer modeling we study frustration in condensed matter and protein folding. Frustration is due to random and/or competing interactions. One definition of frustration is the sum of squares of the differences between actual and expected distances between characters. If this sum is non-zero, then the system is said to have frustration. A simulation tracks the movement of characters to lower their frustration. Our research is conducted on frustration as a function of temperature using a logarithmic scale. At absolute zero, the relaxation for frustration is a power function for randomly assigned patterns or an exponential function for regular patterns like Thomson figures. These findings have implications for protein folding; we attempt to apply our frustration modeling to protein folding and dynamics. We use coding in Python to simulate different ways a protein can fold. An algorithm is being developed to find the lowest frustration (and thus energy) states possible. Research supported by SJSU & AFC.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
When beginning a zero accident campaign in industry, the first task is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes of the business environment after the start of the zero accident campaign through quantitative time series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program is developed to estimate the accident rate, the zero accident time, and the achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop the zero accident program. The results of this paper provide key information for industrial accident prevention and are an important part of stimulating the zero accident campaign within all industrial environments.
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code of length 48 and dimension 24, with Hamming distance essentially equal to 12, is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code; the theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, all the rest having weights greater than or equal to 16.
A note on the zeros of Freud-Sobolev orthogonal polynomials
NASA Astrophysics Data System (ADS)
Moreno-Balcazar, Juan J.
2007-10-01
We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function e^{-x^4} on ℝ are real, simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function e^{-x^4}. Some numerical examples are shown.
Expansins expression is associated with grain size dynamics in wheat (Triticum aestivum L.)
Lizana, X. Carolina; Riegel, Ricardo; Gomez, Leonardo D.; Herrera, Jaime; Isla, Adolfo; McQueen-Mason, Simon J.; Calderini, Daniel F.
2010-01-01
Grain weight is one of the most important components of cereal yield and quality. A clearer understanding of the physiological and molecular determinants of this complex trait would provide an insight into the potential benefits for plant breeding. In the present study, the dynamics of dry matter accumulation, water uptake, and grain size in parallel with the expression of expansins during grain growth in wheat were analysed. The stabilized water content of grains showed a strong association with final grain weight (r² = 0.88, P < 0.01). Grain length was found to be the trait that best correlated with final grain weight (r² = 0.98, P < 0.01) and volume (r² = 0.94, P < 0.01). The main events that defined final grain weight occurred during the first third of grain-filling, when maternal tissues (the pericarp of grains) undergo considerable expansion. Eight expansin coding sequences were isolated from pericarp RNA and the temporal profiles of accumulation of these transcripts were monitored. Sequences showing high homology with TaExpA6 were notably abundant during early grain expansion and declined as maturity was reached. RNA in situ hybridization studies revealed that the transcript for TaExpA6 was principally found in the pericarp during early growth in grain development and, subsequently, in both the endosperm and pericarp. The signal in these images is likely to be the sum of the transcript levels of all three sequences with high similarity to the TaExpA6 gene. The early part of the expression profile of this putative expansin gene correlates well with the critical periods of early grain expansion, suggesting it as a possible factor in the final determination of grain size. PMID:20080826
Ferguson, John; Wheeler, William; Fu, YiPing; Prokunina-Olsson, Ludmila; Zhao, Hongyu; Sampson, Joshua
2013-01-01
With recent advances in sequencing, genotyping arrays, and imputation, GWAS now aim to identify associations with rare and uncommon genetic variants. Here, we describe and evaluate a class of statistics, generalized score statistics (GSS), that can test for an association between a group of genetic variants and a phenotype. GSS are a simple weighted sum of single-variant statistics and their cross-products. We show that the majority of statistics currently used to detect associations with rare variants are equivalent to choosing a specific set of weights within this framework. We then evaluate the power of various weighting schemes as a function of variant characteristics, such as MAF, the proportion associated with the phenotype, and the direction of effect. Ultimately, we find that two classical tests are robust and powerful, but details are provided as to when other GSS may perform favorably. The software package CRaVe is available at our website (http://dceg.cancer.gov/bb/tools/crave). PMID:23092956
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C; Adcock, A; Azevedo, S
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and de Hoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
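A modern convenience version of the same idea (an illustration, not the paper's algorithm: SciPy's `make_smoothing_spline`, available in SciPy 1.10+, selects the smoothing level by GCV when `lam=None`, but requires strictly increasing abscissae, so redundant time samples would first have to be merged):

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
sigma = 0.05 + 0.10 * t                      # time-varying, zero-mean white noise level
y = np.sin(2 * np.pi * 3 * t) + rng.normal(0.0, sigma)

# weights = inverse noise variance; lam=None picks the smoothing level by GCV
spline = make_smoothing_spline(t, y, w=1.0 / sigma**2, lam=None)

r = y - spline(t)
print("weighted SSR:", np.sum((r / sigma) ** 2))   # should be of order n for a sound fit
print("residual mean: %.4f" % r.mean())            # zero-mean residual check
```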
D-tagatose is a bulk sweetener with zero energy determined in rats.
Livesey, G; Brown, J C
1996-06-01
The ketohexose D-tagatose is readily oxidized but contributes poorly to lipid deposition. We therefore examined whether this sugar contributes to energy requirements by determining its net metabolizable energy value in rats. All substrate-induced energy losses from D-tagatose, with sucrose as reference standard, were determined as a single value accounting for the sum of the energy losses to feces, urine, gaseous hydrogen and methane and substrate-induced thermogenesis. A randomized parallel design involving two treatment periods (adaptation to D-tagatose and subsequent energy balance) and two control groups (to control for treatment effects in each period) was used. Rats consumed 1.8 g test carbohydrate daily as a supplement to a basal diet for a 40- or 41-d balance period after prior adaptation for 21 d. Growth, protein and lipid deposition were unaffected by supplementary gross energy intake from D-tagatose compared with an unsupplemented control, but sucrose significantly (P < 0.05) increased all three. Based on the changes induced in protein and fat gain during the balance period it was calculated that D-tagatose contributed -3 +/- 14% of its heat of combustion to net metabolizable energy, and therefore this ketohexose effectively has a zero energy value. D-Tagatose would potentially be helpful in body weight control, especially in diabetic subjects because of its antidiabetogenic effects.
NASA Astrophysics Data System (ADS)
Koma, Y.
The derivative of the topological susceptibility at zero momentum is responsible for the validity of the Witten-Veneziano formula for the η′ mass, and also for the resolution of the EMC proton spin problem. We investigate the momentum dependence of the topological susceptibility and its derivative at zero momentum using lattice QCD simulations with overlap fermions within the quenched approximation. We expose the role of the low-lying Dirac eigenmodes for the topological charge density, and find a negative value for the derivative. While the sign of the derivative is consistent with the QCD sum rule in pure Yang-Mills theory, the absolute value becomes larger if only the contribution from the zero modes and the low-lying eigenmodes is taken into account.
Influence of age on adaptability of human mastication.
Peyron, Marie-Agnès; Blanc, Olivier; Lund, James P; Woda, Alain
2004-08-01
The objective of this work was to study the influence of age on the ability of subjects to adapt mastication to changes in the hardness of foods. The study was carried out on 67 volunteers aged from 25 to 75 yr (29 males, 38 females) who had complete healthy dentitions. Surface electromyograms of the left and right masseter and temporalis muscles were recorded simultaneously with jaw movements using an electromagnetic transducer. Each volunteer was asked to chew and swallow four visco-elastic model foods of different hardness, each presented three times in random order. The number of masticatory cycles, their frequency, and the sum of all electromyographic (EMG) activity in all four muscles were calculated for each masticatory sequence. Multiple linear regression analyses were used to assess the effects of hardness, age, and gender. Hardness was associated to an increase in the mean number of cycles and mean summed EMG activity per sequence. It also increased mean vertical amplitude. Mean vertical amplitude and mean summed EMG activity per sequence were higher in males. These adaptations were present at all ages. Age was associated with an increase of 0.3 cycles per sequence per year of life and with a progressive increase in mean summed EMG activity per sequence. Cycle and opening duration early in the sequence also fell with age. We concluded that although the number of cycles needed to chew a standard piece of food increases progressively with age, the capacity to adapt to changes in the hardness of food is maintained.
On optimal strategies in event-constrained differential games
NASA Technical Reports Server (NTRS)
Heymann, M.; Rajan, N.; Ardema, M.
1985-01-01
Combat games are formulated as zero-sum differential games with unilateral event constraints. An interior penalty function approach is employed to approximate optimal strategies for the players. The method is very attractive computationally and possesses suitable approximation and convergence properties.
Sum Rule for a Schiff-Like Dipole Moment
NASA Astrophysics Data System (ADS)
Raduta, A. A.; Budaca, R.
The energy-weighted sum rule for an electric dipole transition operator of a Schiff type differs from the Thomas-Reiche-Kuhn (TRK) sum rule by several corrective terms which depend on the number of system components, N. For illustration the formalism was applied to the case of Na clusters. One concludes that the random phase approximation (RPA) results for Na clusters obey the modified TRK sum rule.
A Solution to Weighted Sums of Squares as a Square
ERIC Educational Resources Information Center
Withers, Christopher S.; Nadarajah, Saralees
2012-01-01
For n = 1, 2, ... , we give a solution (x[subscript 1], ... , x[subscript n], N) to the Diophantine integer equation [image omitted]. Our solution has N of the form n!, in contrast to other solutions in the literature that are extensions of Euler's solution for N, a sum of squares. More generally, for given n and given integer weights m[subscript…
NASA Technical Reports Server (NTRS)
Hinson, E. W.
1981-01-01
The preliminary analysis and data analysis system development for the shuttle upper atmosphere mass spectrometer (SUMS) experiment are discussed. The SUMS experiment is designed to provide free stream atmospheric density, pressure, temperature, and mean molecular weight for the high altitude, high Mach number region.
Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations
NASA Astrophysics Data System (ADS)
Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.
2018-07-01
One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We consider simulations in which the number of photon packets is Poisson distributed, while the weight assigned to a single photon packet follows any distribution of choice. We show how to estimate the statistical uncertainty of the sum of weights in each bin from the output of a single radiative-transfer simulation. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for large number of packets show that we generalize existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
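In the large-count limit the recipe reduces to a familiar estimator (a sketch under that assumption; the paper's Bayesian posterior additionally handles bins with few or zero packets): the variance of a compound-Poisson sum of weights in a bin is estimated by the sum of squared weights:

```python
import numpy as np

def binned_weight_sums(bin_ids, weights, n_bins):
    """Per-bin sum of packet weights with a large-N uncertainty estimate.

    For a Poisson number of packets with independent weights, Var(sum of w)
    equals E[sum of w^2], estimated here by sum(w_i^2) per bin.
    """
    total = np.bincount(bin_ids, weights=weights, minlength=n_bins)
    var = np.bincount(bin_ids, weights=weights**2, minlength=n_bins)
    return total, np.sqrt(var)   # an empty bin gets uncertainty 0 here; the full
                                 # posterior treatment avoids that artifact

rng = np.random.default_rng(4)
n_packets = rng.poisson(5000)                  # Poisson-distributed packet count
bins = rng.integers(0, 10, n_packets)
w = rng.exponential(1.0, n_packets)            # any weight distribution of choice
total, sigma = binned_weight_sums(bins, w, 10)
print(total.round(1), sigma.round(1))
```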
Propagation of angular errors in two-axis rotation systems
NASA Astrophysics Data System (ADS)
Torrington, Geoffrey K.
2003-10-01
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
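The root-sum-of-squares budget itself is compact to express (a sketch; the tolerances and sensitivity weights below are invented placeholders, whereas the paper tabulates the real weighting factors per configuration):

```python
import numpy as np

def rss_error(tolerances, weights, k=1.0):
    """Root-sum-of-squares error budget.

    tolerances -- 1-sigma tolerance of each error source (e.g., from drawings)
    weights    -- sensitivity of the output angle to each source
    k          -- coverage factor: 1 for ~67% confidence, 2 for ~95%
    """
    t = np.asarray(tolerances, dtype=float)
    s = np.asarray(weights, dtype=float)
    return k * np.sqrt(np.sum((s * t) ** 2))

# three uncorrelated sources (arcsec): axis wobble, encoder error, non-orthogonality
print(rss_error([2.0, 1.0, 3.0], weights=[1.0, 1.0, 0.5], k=2))
```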
NASA Astrophysics Data System (ADS)
Luy, N. T.
2018-04-01
The design of distributed cooperative H∞ optimal controllers for multi-agent systems is a major challenge when the agents' models are uncertain multi-input and multi-output nonlinear systems in strict-feedback form in the presence of external disturbances. In this paper, first, the distributed cooperative H∞ optimal tracking problem is transformed into controlling the cooperative tracking error dynamics in affine form. Second, control schemes and online algorithms are proposed via adaptive dynamic programming (ADP) and the theory of zero-sum differential graphical games. The schemes use only one neural network (NN) for each agent instead of three from ADP to reduce computational complexity as well as avoid choosing initial NN weights for stabilising controllers. It is shown that despite not using knowledge of cooperative internal dynamics, the proposed algorithms not only approximate values to Nash equilibrium but also guarantee all signals, such as the NN weight approximation errors and the cooperative tracking errors in the closed-loop system, to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is shown by simulation results of an application to wheeled mobile multi-robot systems.
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Motkova, A. V.
2018-01-01
A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
12 CFR 217.152 - Simple risk weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... than or equal to -1 (that is, between zero and -1), then E equals the absolute value of RVC. If RVC is... this section. (1) Zero percent risk weight equity exposures. An equity exposure to an entity whose credit exposures are exempt from the 0.03 percent PD floor in § 217.131(d)(2) is assigned a zero percent...
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^{-M-2}), and the associated jump of the k-th derivative of f is approximated to within O(N^{-M-1+k}), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
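The failure mode that motivates the method is easy to reproduce (a sketch, not from the paper): the plain Fourier partial sums of a square wave keep a fixed overshoot of about 9% of the jump near the discontinuity, no matter how many terms are kept:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of a 2*pi-periodic unit square wave (odd harmonics)."""
    k = np.arange(1, 2 * n_terms, 2)                     # 1, 3, 5, ...
    return (4.0 / np.pi) * (np.sin(np.outer(x, k)) / k).sum(axis=1)

x = np.linspace(1e-3, np.pi - 1e-3, 4000)
for n in (8, 64, 512):
    s = square_wave_partial_sum(x, n)
    print(n, "overshoot above 1: %.3f" % (s.max() - 1.0))  # stays near 0.179
```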
Resource Management in Peace and War
1990-04-01
the relatively unconstrained use of available military forces and weapons, including nuclear, chemical, biological, or other weapons capable of... The Zero-Sum Solution, 1985 (New York: Simon & Schuster, 1985), p. 333. 50. John Naisbitt, Megatrends (New York: Warner Books, 1984), pp. 53-60. 51
Integrated optimization of planetary rover layout and exploration routes
NASA Astrophysics Data System (ADS)
Lee, Dongoo; Ahn, Jaemyung
2018-01-01
This article introduces an optimization framework for the integrated design of a planetary surface rover and its exploration route that is applicable to the initial phase of a planetary exploration campaign composed of multiple surface missions. The scientific capability and the mobility of a rover are modelled as functions of the science weight fraction, a key parameter characterizing the rover. The proposed problem is formulated as a mixed-integer nonlinear program that maximizes the sum of profits obtained through a planetary surface exploration mission by simultaneously determining the science weight fraction of the rover, the sites to visit and their visiting sequences under resource consumption constraints imposed on each route and collectively on a mission. A solution procedure for the proposed problem composed of two loops (the outer loop and the inner loop) is developed. The results of test cases demonstrating the effectiveness of the proposed framework are presented.
Transition sum rules in the shell model
NASA Astrophysics Data System (ADS)
Lu, Yi; Johnson, Calvin W.
2018-03-01
Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
On the fluctuations of sums of independent random variables.
Feller, W
1969-07-01
If X(1), X(2), ... are independent random variables with zero expectation and finite variances, the cumulative sums S(n) are, on the average, of the order of magnitude s(n), where s(n)² = E(S(n)²). The occasional maxima of the ratios S(n)/s(n) are surprisingly large, and the problem is to estimate the extent of their probable fluctuations. Specifically, let S*(n) = (S(n) - b(n))/a(n), where {a(n)} and {b(n)} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S*(n) ∈ I occurs for infinitely many n. Under mild conditions on {a(n)} and {b(n)}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S(n)/a(n), one has to set b(n) = ±εa(n), but finer results are obtained with smaller b(n). No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X(n)} affect the fluctuations, but for concrete results something about P{S(n) > a(n)} must be known. For example, a complete solution is possible when the X(n) are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.
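For the normal-type case mentioned at the end, the classical normalizer is a(n) = sqrt(2 n log log n); a quick simulation (illustrative only) shows the ratios hovering near, and rarely exceeding, 1:

```python
import numpy as np

rng = np.random.default_rng(5)
paths, steps = 50, 50000
x = rng.choice([-1.0, 1.0], size=(paths, steps))       # zero mean, unit variance
S = np.cumsum(x, axis=1)

n = np.arange(1, steps + 1)
a = np.sqrt(2 * n * np.log(np.log(np.maximum(n, 3))))  # iterated-logarithm scale

ratio = np.abs(S[:, 10:]) / a[10:]
print("largest |S(n)|/a(n) seen: %.3f" % ratio.max())
# the law of the iterated logarithm: limsup |S(n)|/a(n) = 1 almost surely
```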
Approximation of eigenvalues of some differential equations by zeros of orthogonal polynomials
NASA Astrophysics Data System (ADS)
Volkmer, Hans
2008-04-01
Sequences of polynomials p_n, orthogonal with respect to signed measures, are associated with a class of differential equations including the Mathieu, Lamé, and Whittaker-Hill equations. It is shown that the zeros of p_n form sequences which converge to the eigenvalues of the corresponding differential equations. Moreover, interlacing properties of the zeros of p_n are found. Applications to the numerical treatment of eigenvalue problems are given.
ERIC Educational Resources Information Center
Tucker, Bill
2009-01-01
Education reform often appears a zero-sum battle, one that pits crusaders demanding accountability and choice against much of the traditional education establishment, including teachers unions. The political skirmishes in Florida, including court fights over vouchers and charter schools, and ongoing struggles over a parade of different merit pay…
Stochastic Differential Games with Asymmetric Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardaliaguet, Pierre, E-mail: Pierre.Cardaliaguet@univ-brest.fr; Rainer, Catherine
2009-02-15
We investigate a two-player zero-sum stochastic differential game in which the players have an asymmetric information on the random payoff. We prove that the game has a value and characterize this value in terms of dual viscosity solutions of some second order Hamilton-Jacobi equation.
Decentralized indirect methods for learning automata games.
Tilak, Omkar; Martin, Ryan; Mukhopadhyay, Snehasis
2011-10-01
We discuss the application of indirect learning methods in zero-sum and identical payoff learning automata games. We propose a novel decentralized version of the well-known pursuit learning algorithm. Such a decentralized algorithm has significant computational advantages over its centralized counterpart. The theoretical study of such a decentralized algorithm requires the analysis to be carried out in a nonstationary environment. We use a novel bootstrapping argument to prove the convergence of the algorithm. To our knowledge, this is the first time that such analysis has been carried out for zero-sum and identical payoff games. Extensive simulation studies are reported, which demonstrate the proposed algorithm's fast and accurate convergence in a variety of game scenarios. We also introduce the framework of partial communication in the context of identical payoff games of learning automata. In such games, the automata may not communicate with each other or may communicate selectively. This comprehensive framework has the capability to model both centralized and decentralized games discussed in this paper.
Chaos in learning a simple two-person game
Sato, Yuzuru; Akiyama, Eizo; Farmer, J. Doyne
2002-01-01
We investigate the problem of learning to play the game of rock–paper–scissors. Each player attempts to improve her/his average score by adjusting the frequency of the three possible responses, using reinforcement learning. For the zero sum game the learning process displays Hamiltonian chaos. Thus, the learning trajectory can be simple or complex, depending on initial conditions. We also investigate the non-zero sum case and show that it can give rise to chaotic transients. This is, to our knowledge, the first demonstration of Hamiltonian chaos in learning a basic two-person game, extending earlier findings of chaotic attractors in dissipative systems. As we argue here, chaos provides an important self-consistency condition for determining when players will learn to behave as though they were fully rational. That chaos can occur in learning a simple game indicates one should use caution in assuming real people will learn to play a game according to a Nash equilibrium strategy. PMID:11930020
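The cycling behavior is easy to glimpse with a standard exponential-weights self-play scheme (a sketch; this is not the paper's exact reinforcement rule): instantaneous strategies keep orbiting while the time average settles near the mixed Nash equilibrium:

```python
import numpy as np

A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])            # row player's payoffs; the game is zero sum

def softmax(q):
    e = np.exp(q - q.max())
    return e / e.sum()

qx, qy = np.zeros(3), np.zeros(3)        # accumulated scores per response
avg_x, eta, T = np.zeros(3), 0.05, 20000
for _ in range(T):
    x, y = softmax(qx), softmax(qy)
    qx += eta * (A @ y)                  # reinforce each response by its expected payoff
    qy += eta * (-A.T @ x)               # the column player sees the negated payoffs
    avg_x += x

print("final strategy:", softmax(qx).round(3))   # keeps cycling, need not converge
print("time average:  ", (avg_x / T).round(3))   # close to the Nash point (1/3, 1/3, 1/3)
```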
Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun
2018-01-01
Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios, a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
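The core of the method can be sketched in a few lines (a simplified reading of the idea as stated in the abstract; the published implementation includes further safeguards): each pairwise ratio is computed only over taxa that are nonzero in both samples, so zeros never contaminate a ratio:

```python
import numpy as np

def pairwise_ratio_size_factors(counts, min_shared=1):
    """Size factors via the geometric mean of pairwise median count ratios.

    counts -- (n_samples, n_taxa) raw count matrix
    """
    n = counts.shape[0]
    log_sf = np.zeros(n)
    for i in range(n):
        logs = []
        for j in range(n):
            if i == j:
                continue
            shared = (counts[i] > 0) & (counts[j] > 0)   # zeros excluded pairwise
            if shared.sum() >= min_shared:
                logs.append(np.log(np.median(counts[i, shared] / counts[j, shared])))
        log_sf[i] = np.mean(logs)
    sf = np.exp(log_sf)
    return sf / np.exp(np.mean(np.log(sf)))              # geometric mean fixed to 1

rng = np.random.default_rng(6)
base = rng.poisson(20, size=(1, 50))
counts = rng.poisson(np.array([[1.0], [2.0], [0.5]]) * base)  # depths 1x, 2x, 0.5x
print(pairwise_ratio_size_factors(counts).round(2))           # ~ [1.0, 2.0, 0.5]
```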
Zhao, Tanfeng; Zhang, Qingyou; Long, Hailin; Xu, Lu
2014-01-01
In order to explore atomic asymmetry and molecular chirality in 2D space, benzenoids composed of 3 to 11 hexagons in 2D space were enumerated in our laboratory. These benzenoids are regarded as planar connected polyhexes and have no internal holes; that is, their internal regions are filled with hexagons. The produced dataset was composed of 357,968 benzenoids, including more than 14 million atoms. Rather than simply labeling the huge number of atoms as being either symmetric or asymmetric, this investigation aims at exploring a quantitative graph theoretical descriptor of atomic asymmetry. Based on the particular characteristics in the 2D plane, we suggested the weighted atomic sum as the descriptor of atomic asymmetry. This descriptor is measured by circulating around the molecule going in opposite directions. The investigation demonstrates that the weighted atomic sums are superior to the previously reported quantitative descriptor, atomic sums. The investigation of quantitative descriptors also reveals that the most asymmetric atom is in a structure with a spiral ring with the convex shape going in clockwise direction and concave shape going in anticlockwise direction from the atom. Based on weighted atomic sums, a weighted F index is introduced to quantitatively represent molecular chirality in the plane, rather than merely regarding benzenoids as being either chiral or achiral. By validating with enumerated benzenoids, the results indicate that the weighted F indexes were in accordance with their chiral classification (achiral or chiral) over the whole benzenoids dataset. Furthermore, weighted F indexes were superior to previously available descriptors. Benzenoids possess a variety of shapes and can be extended to practically represent any shape in 2D space—our proposed descriptor has thus the potential to be a general method to represent 2D molecular chirality based on the difference between clockwise and anticlockwise sums around a molecule. PMID:25032832
Origin and implications of zero degeneracy in networks spectra.
Yadav, Alok; Jalan, Sarika
2015-04-01
The spectra of many real-world networks exhibit properties which are different from those of random networks generated using various models. One such property is the existence of a very high degeneracy at the zero eigenvalue. In this work, we provide all the possible reasons behind the occurrence of the zero degeneracy in network spectra, namely complete and partial duplications, as well as their implications. The power-law degree sequence and preferential attachment are properties which enhance the occurrence of such duplications and hence lead to the zero degeneracy. A comparison of the zero degeneracy in protein-protein interaction networks of six different species and in their corresponding model networks indicates the importance of the degree sequence and the power-law exponent for the occurrence of zero degeneracy.
Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2009-12-01
We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights on the real semiaxis, of the form …, with γ > 0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdös (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.
NASA Astrophysics Data System (ADS)
Prihandini, Rafiantika M.; Agustin, I. H.; Dafik
2018-04-01
In this paper we use simple and nontrivial graphs. If there exists a bijective function $f: V(G) \cup E(G) \to \{1, 2, \ldots, |V(G)|+|E(G)|\}$ such that, for every subgraph of $G$ isomorphic to $P_2 \vartriangleright H$, the total $P_2 \vartriangleright H$-weights $W(P_2 \vartriangleright H) = \sum_{v \in V(P_2 \vartriangleright H)} f(v) + \sum_{e \in E(P_2 \vartriangleright H)} f(e)$ form an arithmetic sequence $\{a, a+d, a+2d, \ldots, a+(n-1)d\}$, where $a$ and $d$ are positive integers and $n$ is the number of all subgraphs isomorphic to $P_2 \vartriangleright H$, then $G$ is called an $(a, d)$-$P_2 \vartriangleright H$-antimagic total graph. Our paper establishes the existence of super $(a, d)$-$P_2 \vartriangleright H$-antimagic total labelings for the comb product operation $G = L \vartriangleright H$, where $L$ is a $(b, d^*)$-edge antimagic vertex labeling graph and $H$ is a connected graph.
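The arithmetic-progression condition on the subgraph weights is straightforward to test mechanically. A small sketch (names and data layout are hypothetical, not from the paper): given a total labeling and the vertex/edge sets of every subgraph isomorphic to $P_2 \vartriangleright H$, compute each weight and check for a common positive difference d:

    def is_antimagic_sequence(labeling, subgraphs):
        """labeling: dict mapping each vertex/edge to its label;
        subgraphs: list of (vertices, edges) pairs, one per covering subgraph."""
        weights = sorted(sum(labeling[v] for v in vs) + sum(labeling[e] for e in es)
                         for vs, es in subgraphs)
        if len(weights) < 2:
            return True
        d = weights[1] - weights[0]
        return d > 0 and all(b - a == d for a, b in zip(weights, weights[1:]))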
Zero absolute vorticity: insight from experiments in rotating laminar plane Couette flow.
Suryadi, Alexandre; Segalini, Antonio; Alfredsson, P Henrik
2014-03-01
For pressure-driven turbulent channel flows undergoing spanwise system rotation, it has been observed that the absolute vorticity, i.e., the sum of the averaged spanwise flow vorticity and system rotation, tends to zero in the central region of the channel. This observation has so far eluded a convincing theoretical explanation, despite experimental and numerical evidence reported in the literature. Here we show experimentally that three-dimensional laminar structures in plane Couette flow, which appear under anticyclonic system rotation, give the same effect, namely, that the absolute vorticity tends to zero if the rotation rate is high enough. It is shown that this is equivalent to a local Richardson number of approximately zero, which would indicate a stable condition. We also offer an explanation based on Kelvin's circulation theorem to demonstrate that the absolute vorticity should remain constant and approximately equal to zero in the central region of the channel when going from the nonrotating fully turbulent state to any state with sufficiently high rotation.
Transition sum rules in the shell model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Yi; Johnson, Calvin W.
2018-03-29
Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, in the case of the EWSR a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell-model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We then apply this simple tool to a number of nuclides, demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, and compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the $sd$-shell.
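The centroid relation the abstract turns on is simple to state in code. A hedged sketch: here the strengths are formed naively from explicit daughter states, whereas the paper's contribution is evaluating both sums as ground-state expectation values so that no daughter states are needed at all:

    import numpy as np

    def centroid(E_i, E_f, O_fi):
        """E_f: final-state energies; O_fi: transition amplitudes <f|O|i>."""
        S = np.abs(O_fi) ** 2
        newsr = S.sum()                   # non-energy-weighted sum rule (total strength)
        ewsr = ((E_f - E_i) * S).sum()    # energy-weighted sum rule
        return ewsr / newsr               # centroid: average transition energy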
Rui, Wenting; Ren, Yan; Wang, Yin; Gao, Xinyi; Xu, Xiao; Yao, Zhenwei
2017-11-15
The genetic status of 1p/19q is important for differentiating oligodendroglioma, isocitrate dehydrogenase (IDH)-mutant and 1p/19q-codeleted, from diffuse astrocytoma, IDH-mutant, according to the 2016 World Health Organization (WHO) criteria. To assess the value of magnetic resonance textural analysis (MRTA) on T2 fluid-attenuated inversion recovery (FLAIR) images for making a genetically integrated diagnosis of true oligodendroglioma by WHO guidelines. Retrospective case control. In all, there were 54 patients with a histopathological diagnosis of diffuse glioma (grade II); all were tested for IDH and 1p/19q. 3.0T, including a T2 FLAIR sequence and axial T1-weighted and T2-weighted sequences. MRTA on a representative tumor region of interest (ROI) was performed on preoperative T2 FLAIR images around the area with the largest diameter of solid tumor, using Omni Kinetics software. Differences between IDH-mutant, 1p/19q-codeleted and IDH-mutant, 1p/19q-intact gliomas were analyzed by the Mann-Whitney rank sum test. Receiver operating characteristic (ROC) curves were created to assess MRTA diagnostic performance. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated with a cutoff value according to the Youden index. Comparisons demonstrated significant differences in kurtosis (P = 0.007), energy (0.008), entropy (0.008), mean deviation (MD) (<0.001), high gray-level run emphasis (HGLRE) (0.002), cluster shade (0.025), and sum average (0.002). First-order features comprising entropy (area under the curve [AUC] = 0.718, sensitivity = 97.1%) and energy (0.719, 94.1%) had the highest sensitivity but lower specificity (both 45%). Second-order features such as HGLRE (AUC = 0.750, sensitivity = 73.5%, specificity = 80.0%) and sum average (0.751, 70.6%, 80.0%) had relatively higher specificity, and all had AUC > 0.7. MD had the highest diagnostic performance, with AUC = 0.878, sensitivity = 94.1%, specificity = 75.0%, PPV = 86.5%, and NPV = 88.2%. MRTA on T2 FLAIR images may be helpful in identifying oligodendroglioma, IDH-mutant and 1p/19q-codeleted. Level of Evidence: 3. Technical Efficacy: Stage 2.
Chamberlain, Ryan; Reyes, Denise; Curran, Geoffrey L.; Marjanska, Malgorzata; Wengenack, Thomas M.; Poduslo, Joseph F.; Garwood, Michael; Jack, Clifford R.
2009-01-01
One of the hallmark pathologies of Alzheimer's disease (AD) is amyloid plaque deposition. Plaques appear hypointense on T2- and T2*-weighted MR images, probably due to the presence of endogenous iron, but no quantitative comparison of the various imaging techniques has been reported. We estimated the T1, T2, T2*, and proton density values of cortical plaques and normal cortical tissue and analyzed the plaque contrast generated by a collection of T2-, T2*-, and susceptibility-weighted imaging (SWI) methods in ex vivo transgenic mouse specimens. The proton density and T1 values were similar for both cortical plaques and normal cortical tissue. The T2 and T2* values were similar in cortical plaques, which indicates that the iron content of cortical plaques may not be as large as previously thought. Ex vivo plaque contrast was increased compared to a previously reported spin echo sequence by summing multiple echoes and by performing SWI; however, gradient echo and susceptibility-weighted imaging were found to be impractical for in vivo imaging due to susceptibility interface-related signal loss in the cortex. PMID:19253386
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Heymann, M.; Rajan, N.
1985-01-01
A mathematical formulation of a combat game between two opponents, each with offensive capabilities and an offensive objective, is proposed. Resolution of the combat involves solving two differential games with state constraints. Depending on the game dynamics and parameters, the combat can terminate in one of four ways: the first player wins; the second player wins; a draw (neither wins); or joint capture. In the first two cases, the optimal strategies of the two players are determined from suitable zero-sum games, whereas in the latter two the relevant games are nonzero-sum. Further, to avoid certain technical difficulties, the concept of a delta-combat game is introduced.
A narrow band pattern-matching model of vowel perception
NASA Astrophysics Data System (ADS)
Hillenbrand, James M.; Houde, Robert A.
2003-02-01
The purpose of this paper is to propose and evaluate a new model of vowel perception which assumes that vowel identity is recognized by a template-matching process involving the comparison of narrow band input spectra with a set of smoothed spectral-shape templates that are learned through ordinary exposure to speech. In the present simulation of this process, the input spectra are computed over a sufficiently long window to resolve individual harmonics of voiced speech. Prior to template creation and pattern matching, the narrow band spectra are amplitude equalized by a spectrum-level normalization process, and the information-bearing spectral peaks are enhanced by a "flooring" procedure that zeroes out spectral values below a threshold function consisting of a center-weighted running average of spectral amplitudes. Templates for each vowel category are created simply by averaging the narrow band spectra of like vowels spoken by a panel of talkers. In the present implementation, separate templates are used for men, women, and children. The pattern matching is implemented with a simple city-block distance measure given by the sum of the channel-by-channel differences between the narrow band input spectrum (level-equalized and floored) and each vowel template. Spectral movement is taken into account by computing the distance measure at several points throughout the course of the vowel. The input spectrum is assigned to the vowel template that results in the smallest difference accumulated over the sequence of spectral slices. The model was evaluated using a large database consisting of 12 vowels in /h
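The core of the pattern matcher is compact. A minimal sketch, with illustrative parameters (the window width, floor offset, and mean-subtraction as the level normalization are my assumptions, not the paper's exact choices):

    import numpy as np

    def floor_spectrum(spec, win=7, offset_db=3.0):
        """Zero out values below a center-weighted running-average threshold."""
        kernel = np.hanning(win)
        kernel /= kernel.sum()
        thresh = np.convolve(spec, kernel, mode="same") + offset_db
        return np.where(spec >= thresh, spec, 0.0)

    def classify(slices, templates):
        """slices: (n_slices, n_channels) input spectra; templates: name -> same shape."""
        prepped = np.array([floor_spectrum(s - s.mean()) for s in slices])
        scores = {name: np.abs(prepped - t).sum()       # city-block distance,
                  for name, t in templates.items()}     # accumulated over slices
        return min(scores, key=scores.get)              # smallest accumulated distance wins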
Monotonic sequences related to zeros of Bessel functions
NASA Astrophysics Data System (ADS)
Lorch, Lee; Muldoon, Martin
2008-12-01
In the course of their work on Salem numbers and uniform distribution modulo 1, A. Akiyama and Y. Tanigawa proved some inequalities concerning the values of the Bessel function J_0 at multiples of π, i.e., at the zeros of J_{1/2}. This raises the question of inequalities and monotonicity properties for the sequences of values of one cylinder function at the zeros of another such function. Here we derive such results by differential equations methods.
Water and energy balances in the soil-plant atmosphere continuum
USDA-ARS?s Scientific Manuscript database
Energy fluxes at soil-atmosphere and plant-atmosphere interfaces sum to zero because the surfaces have no capacity for energy storage. The resulting energy balance equations may be written in terms of physical descriptions of these fluxes, and have been the basis for problem casting and so...
The Common Application: When Competitors Collaborate
ERIC Educational Resources Information Center
Ehrenberg, Ronald G.; Liu, Albert Yung-Hsu
2009-01-01
American colleges and universities compete with each other for undergraduate and graduate students, faculty and staff, research funding, and external contributions, as well as on athletic fields. This competition is often alleged to be a zero-sum game; what one institution wins, another must lose. However, as the authors show here, the Common…
ERIC Educational Resources Information Center
Quehl, Gary H.
1977-01-01
It is time to drop the "zero sum style" that has public and private institutions divisively arguing over private and public funds, and adopt a posture of mutual support. The president of the Council for the Advancement of Small Colleges (CASC) emphasizes that both independent and state-owned colleges and universities are needed and an educational…
Extremely low birth weight and body size in early adulthood
Doyle, L; Faber, B; Callanan, C; Ford, G; Davis, N
2004-01-01
Aims: To determine the body size of extremely low birth weight (ELBW, birth weight 500–999 g) subjects in early adulthood. Methods: Cohort study examining the height and weight of 42 ELBW survivors free of cerebral palsy between birth and 20 years of age. Weight and height measurements were converted to Z (SD) scores. Results: At birth the subjects had weight Z scores substantially below zero (mean birth weight Z score -0.90, 95% CI -1.25 to -0.54), and had been lighter than average at ages 2, 5, and 8 years. However, by 14, and again at 20 years of age their weight Z scores were not significantly different from zero. At ages 2, 5, 8, 14, and 20 years of age their height Z scores were significantly below zero. Their height at 20 years of age was, however, consistent with their parents' height. As a group they were relatively heavy for their height and their mean body mass index (BMI) Z score was almost significantly different from zero (mean difference 0.42, 95% CI -0.02 to 0.84). Their mean BMI (kg/m²) was 24.0 (SD 5.2); 14 had a BMI >25, and four had a BMI >30. Conclusions: Despite their early small size, by early adulthood the ELBW subjects had attained an average weight, and their height was consistent with their parents' height. They were, however, relatively heavy for their height. PMID:15033844
Zero Thermal Noise in Resistors at Zero Temperature
NASA Astrophysics Data System (ADS)
Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran
2016-06-01
The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
Fast single-pass alignment and variant calling using sequencing data
USDA-ARS?s Scientific Manuscript database
Sequencing research requires efficient computation. Few programs use already known information about DNA variants when aligning sequence data to the reference map. New program findmap.f90 reads the previous variant list before aligning sequence, calling variant alleles, and summing the allele counts...
Four new topological indices based on the molecular path code.
Balaban, Alexandru T; Beteringhe, Adrian; Constantinescu, Titus; Filip, Petru A; Ivanciuc, Ovidiu
2007-01-01
The sequence of all paths p_i of lengths i = 1 up to the maximum possible length in a hydrogen-depleted molecular graph (a sequence also called the molecular path code) contains significant information on the molecular topology, and as such it is a reasonable choice to be selected as the basis of topological indices (TIs). Four new (or five partly new) TIs with progressively improved performance (judged by correctly reflecting branching, centricity, and cyclicity of graphs, ordering of alkanes, and low degeneracy) have been explored. (i) Summing the squares of all numbers in the sequence gives Σ_i p_i², and dividing this sum by one plus the cyclomatic number μ yields the Quadratic index: Q = Σ_i p_i²/(μ+1). (ii) Summing the square roots of all numbers in the sequence gives Σ_i p_i^(1/2), and dividing this sum by one plus the cyclomatic number yields the index denoted by S: S = Σ_i p_i^(1/2)/(μ+1). (iii) Dividing the terms in this sum by the corresponding topological distances gives the Distance-reduced index D = Σ_i p_i^(1/2)/[i(μ+1)]. Two similar formulas define the next two indices: (iv) the distance-Attenuated index A = Σ_i p_i/[i(μ+1)], with no square roots; and (v) the Path-count index, with two square roots: P = Σ_i p_i^(1/2)/[i^(1/2)(μ+1)]. These five TIs are compared for their degeneracy, ordering of alkanes, and performance in QSPR (for all alkanes with 3-12 carbon atoms and for all possible chemical cyclic or acyclic graphs with 4-6 carbon atoms) in correlations with six physical properties and one chemical property.
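The five formulas translate directly into code. A sketch operating on a molecular path code p (where p[i-1] holds the path count p_i) with cyclomatic number mu:

    from math import sqrt

    def path_code_indices(p, mu=0):
        Q = sum(pi ** 2 for pi in p) / (mu + 1)                                # Quadratic
        S = sum(sqrt(pi) for pi in p) / (mu + 1)                               # square-root
        D = sum(sqrt(pi) / (i * (mu + 1)) for i, pi in enumerate(p, 1))        # Distance-reduced
        A = sum(pi / (i * (mu + 1)) for i, pi in enumerate(p, 1))              # distance-Attenuated
        P = sum(sqrt(pi) / (sqrt(i) * (mu + 1)) for i, pi in enumerate(p, 1))  # Path-count
        return Q, S, D, A, P

    # n-butane (hydrogen-depleted chain of 4 atoms): p1 = 3, p2 = 2, p3 = 1, mu = 0.
    print(path_code_indices([3, 2, 1]))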
Ramanujan sums for signal processing of low-frequency noise.
Planat, Michel; Rosu, Haret; Perrine, Serge
2002-11-01
An aperiodic (low-frequency) spectrum may originate from the error term in the mean value of an arithmetical function, such as the Möbius function or the Mangoldt function, which are coding sequences for prime numbers. In the discrete Fourier transform the analyzing wave is periodic and not well suited to represent the low-frequency regime. In its place we introduce a different signal processing tool based on the Ramanujan sums c_q(n), well adapted to the analysis of arithmetical sequences with many resonances p/q. The sums are quasiperiodic versus the time n and aperiodic versus the order q of the resonance. Different results arise from the use of this Ramanujan-Fourier transform in the context of arithmetical and experimental signals.
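For reference, the Ramanujan sum has a direct definition that fits in a few lines: c_q(n) = Σ exp(2πian/q) over 1 ≤ a ≤ q with gcd(a, q) = 1. The result is always an integer, so the imaginary parts cancel and we can round the cosine sum. A quick sketch:

    from math import gcd, cos, pi

    def ramanujan_sum(q, n):
        return round(sum(cos(2 * pi * a * n / q)
                         for a in range(1, q + 1) if gcd(a, q) == 1))

    # Sanity checks: c_q(1) is the Moebius function, c_q(0) is Euler's totient.
    print([ramanujan_sum(q, 1) for q in range(1, 11)])  # 1 -1 -1 0 -1 1 -1 0 0 1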
Dependence of Coulomb Sum Rule on the Short Range Correlation by Using Av18 Potential
NASA Astrophysics Data System (ADS)
Modarres, M.; Moeini, H.; Moshfegh, H. R.
The Coulomb sum rule (CSR) and structure factor are calculated for inelastic electron scattering from nuclear matter at zero and finite temperature in the nonrelativistic limit. The effect of short-range correlation (SRC) is included by using the lowest order constrained variational (LOCV) method and the Argonne Av18 and Δ-Reid soft-core potentials. The effects of different potentials as well as temperature are investigated. It is found that the nonrelativistic version of Bjorken scaling approximately sets in at a momentum transfer of about 1.1 to 1.2 GeV/c, and that increasing temperature makes it decrease. While different potentials do not significantly change the CSR, the SRC improves the Coulomb sum rule, and we obtain results reasonably close to both experimental data and other theoretical predictions.
1988-03-01
framework for acquisition management, to analyzing the Identification Friend, Foe or Neutral (IFFN) Joint Testbed, to evaluating C2 components of the...measure. The results on the worksheet were columns consisting of ones and zeroes. Every summed measure (e.g., FAIR, XMOTi, and XCSTi) received a cumulative...were networked by the gateway and through TASS to one another. c. Structural Components: The value of the structural measure remained at zero
Classifying next-generation sequencing data using a zero-inflated Poisson model.
Zhou, Yan; Wan, Xiang; Zhang, Baoxue; Tong, Tiejun
2018-04-15
With the development of high-throughput techniques, RNA sequencing (RNA-seq) is becoming increasingly popular as an alternative for gene expression analysis, such as RNA profiling and classification. Identifying which type of disease a new patient has from RNA-seq data has been recognized as a vital problem in medical research. As RNA-seq data are discrete, statistical methods developed for classifying microarray data cannot be readily applied to RNA-seq data classification. Witten proposed a Poisson linear discriminant analysis (PLDA) to classify RNA-seq data in 2011. Note, however, that count datasets are frequently characterized by excess zeros in real RNA-seq or microRNA sequencing data (i.e., when the sequencing depth is insufficient, or for small RNAs 18-30 nucleotides in length). It is therefore desirable to develop a new model to analyze RNA-seq data with an excess of zeros. In this paper, we propose a Zero-Inflated Poisson Logistic Discriminant Analysis (ZIPLDA) for RNA-seq data with an excess of zeros. The new method assumes that the data are from a mixture of two distributions: one is a point mass at zero, and the other follows a Poisson distribution. We then consider a logistic relation between the probability of observing zeros and the mean of the genes and the sequencing depth in the model. Simulation studies show that the proposed method performs better than, or at least as well as, the existing methods in a wide range of settings. Two real datasets, a breast cancer RNA-seq dataset and a microRNA-seq dataset, are also analyzed, and they coincide with the simulation results that our proposed method outperforms the existing competitors. The software is available at http://www.math.hkbu.edu.hk/~tongt. Contact: xwan@comp.hkbu.edu.hk or tongt@hkbu.edu.hk. Supplementary data are available at Bioinformatics online.
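The zero-inflated Poisson building block is easy to sketch. A simplification for illustration: the actual ZIPLDA links the zero probability to gene means and sequencing depth and fits its parameters, whereas here lam and pi are taken as given:

    import numpy as np
    from scipy.stats import poisson

    def zip_logpmf(x, lam, pi):
        """log P(x) under a point mass at 0 (weight pi) mixed with Poisson(lam)."""
        x = np.asarray(x)
        pmf = (1 - pi) * poisson.pmf(x, lam)
        pmf = np.where(x == 0, pi + pmf, pmf)   # zeros come from either component
        return np.log(pmf)

    def classify(counts, class_params):
        """counts: one sample's gene counts; class_params: name -> (lam, pi) arrays."""
        ll = {k: zip_logpmf(counts, lam, pi).sum()
              for k, (lam, pi) in class_params.items()}
        return max(ll, key=ll.get)              # class with the largest log-likelihood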
Del Grande, Filippo; Subhawong, Ty; Weber, Kristy; Aro, Michael; Mugera, Charles; Fayad, Laura M
2014-05-01
To determine the added value of functional magnetic resonance (MR) sequences (dynamic contrast material-enhanced [DCE] and quantitative diffusion-weighted [DW] imaging with apparent diffusion coefficient [ADC] mapping) for the detection of recurrent soft-tissue sarcomas following surgical resection. This retrospective study was approved by the institutional review board. The requirement to obtain informed consent was waived. Thirty-seven patients referred for postoperative surveillance after resection of soft-tissue sarcoma (35 with high-grade sarcoma) were studied. Imaging at 3.0 T included conventional (T1-weighted, fluid-sensitive, and contrast-enhanced T1-weighted imaging) and functional (DCE MR imaging, DW imaging with ADC mapping) sequences. Recurrences were confirmed with biopsy or resection. A disease-free state was determined with at least 6 months of follow-up. Two readers independently recorded the signal and morphologic characteristics with conventional sequences, the presence or absence of arterial enhancement at DCE MR imaging, and ADCs of the surgical bed. The accuracy of conventional MR imaging in the detection of recurrence was compared with that with the addition of functional sequences. The Fisher exact and Wilcoxon rank sum tests were used to define the accuracy of imaging features, the Cohen κ and Lin interclass correlation were used to define interobserver variability, and receiver operating characteristic analysis was used to define a threshold to detect recurrence and assess reader confidence after the addition of functional imaging to conventional sequences. There were six histologically proved recurrences in 37 patients. Sensitivity and specificity of MR imaging in the detection of tumor recurrence were 100% (six of six patients) and 52% (16 of 31 patients), respectively, with conventional sequences, 100% (six of six patients) and 97% (30 of 31 patients) with the addition of DCE MR imaging, and 60% (three of five patients) and 97% (30 of 31 patients) with the addition of DW imaging and ADC mapping. The average ADC of recurrence (1.08 mm²/sec ± 0.19) was significantly different from those of postoperative scarring (0.9 mm²/sec ± 0.00) and hematomas (2.34 mm²/sec ± 0.72) (P = .03 for both). The addition of functional MR sequences to a routine MR protocol, in particular DCE MR imaging, offers a specificity of more than 95% for distinguishing recurrent sarcoma from postsurgical scarring.
The Two Cultures: A Zero-Sum Game?
ERIC Educational Resources Information Center
Scheessele, Michael R.
2007-01-01
In "The two cultures and the scientific revolution," C.P. Snow (1959) described the chasm between pure and applied science, on the one hand, and the arts and humanities, on the other. Snow was concerned that the complete lack of understanding between these "two cultures" would hamper the spread of the scientific/industrial…
Compensatory dynamics are rare in natural ecological communities.
J.E. Houlahan; D.J. Currie; K. Cottenie; G.S. Cumming; S.K.M. Ernest; C.S. Findlay; S.D. Fuhlendorf; R.D. Stevens; T.J. Willis; I.P. Woiwod; S.M. Wondzell
2007-01-01
Hubbell recently presented a theoretical framework, neutral models, for explaining large-scale patterns of community structure. This theory rests on the foundation of zero-sum ecological communities, that is, the assumption that the number of individuals in a community stays constant over time. If community abundances stay relatively constant (i.e., approximating the...
A Zero Sum Game? Eliminating Course Repetition and Its Effects on Arts Education
ERIC Educational Resources Information Center
Carrigan, Ting-Pi Joyce
2014-01-01
In 2011, with ongoing concerns over state budget shortfalls and the increasing educational cost structure, California state legislators focused their attention on measures that could lead to access, added productivity, and value in order to sustain the current educational system. One of the recommendations provided by the Legislative Analyst's…
40 CFR 1065.20 - Units of measure and overview of calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... in units of degrees Celsius (°C) unless a calculation requires an absolute temperature. In that case..., formerly ppm (mass). (c) Absolute pressure. Measure absolute pressure directly or calculate it as the sum... at least one additional non-zero digit following the five, remove all the appropriate digits and...
40 CFR 1065.20 - Units of measure and overview of calculations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... in units of degrees Celsius (°C) unless a calculation requires an absolute temperature. In that case..., formerly ppm (mass). (c) Absolute pressure. Measure absolute pressure directly or calculate it as the sum... at least one additional non-zero digit following the five, remove all the appropriate digits and...
Formulating Social Policy vis-a-vis Immigrants: Win-Win or Zero-Sum Game?
ERIC Educational Resources Information Center
Kissam, Ed
This paper examines the effectiveness of social services provided to Mexican immigrants in rural California. In addition, the paper offers recommendations for service delivery models and for rethinking the objectives of immigrant social policy. At the most basic level, current social program planning and associated analyses of policy options fail…
ERIC Educational Resources Information Center
Seow, Poh-Sun; Pan, Gary
2014-01-01
Extracurricular activities (ECA) have become an important component of students' school life and many schools have invested significant resources on extracurricular activities. The authors suggest three major theoretical frameworks (zero-sum, developmental, and threshold) to explain the impact of ECA participation on students' academic…
Substitution or Symbiosis? Assessing the Relationship between Religious and Secular Giving
ERIC Educational Resources Information Center
Hill, Jonathan P.; Vaidyanathan, Brandon
2011-01-01
Research on philanthropy has not sufficiently examined whether charitable giving to religious causes impinges on giving to secular causes. Examining three waves of national panel data, we find that the relationship between religious and secular giving is generally not of a zero-sum nature; families that increase their religious giving also…
NASA Technical Reports Server (NTRS)
Varaiya, P. P.
1972-01-01
General discussion of the theory of differential games with two players and zero sum. Games starting at a fixed initial state and ending at a fixed final time are analyzed. Strategies for the games are defined. The existence of saddle values and saddle points is considered. A stochastic version of a differential game is used to examine the synthesis problem.
Push and Pull in the Classroom: Competition, Gender and the Neoliberal Subject
ERIC Educational Resources Information Center
Wilkins, Andrew
2012-01-01
In this paper I explore how learning strategies based on competition and zero-sum thinking are inscribed into the dynamics of classroom interaction shaping relations between high-achieving pupils, and link elements of these practices to market trends in British education policy discourse. A detour through the politico-historical negotiations…
Fast summation of divergent series and resurgent transseries from Meijer-G approximants
NASA Astrophysics Data System (ADS)
Mera, Héctor; Pedersen, Thomas G.; Nikolić, Branislav K.
2018-05-01
We develop a resummation approach based on Meijer-G functions and apply it to approximate the Borel sum of divergent series and the Borel-Écalle sum of resurgent transseries in quantum mechanics and quantum field theory (QFT). The proposed method is shown to vastly outperform the conventional Borel-Padé and Borel-Padé-Écalle summation methods. The resulting Meijer-G approximants are easily parametrized by means of a hypergeometric ansatz and can be thought of as a generalization to arbitrary order of the Borel-hypergeometric method [Mera et al., Phys. Rev. Lett. 115, 143001 (2015), 10.1103/PhysRevLett.115.143001]. Here we demonstrate the accuracy of this technique in various examples from quantum mechanics and QFT, traditionally employed as benchmark models for resummation, such as zero-dimensional ϕ4 theory; the quartic anharmonic oscillator; the calculation of critical exponents for the N -vector model; ϕ4 with degenerate minima; self-interacting QFT in zero dimensions; and the summation of one- and two-instanton contributions in the quantum-mechanical double-well problem.
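For contrast, the conventional Borel-Padé baseline the paper benchmarks against can be sketched compactly. The toy series below (with a known closed-form Borel sum) and all parameters are illustrative choices, not taken from the paper:

    import mpmath as mp

    def borel_pade(coeffs, x, L, M):
        """Borel-sum the divergent series sum_k coeffs[k] x^k via a Pade approximant."""
        b = [c / mp.factorial(k) for k, c in enumerate(coeffs)]  # Borel transform
        p, q = mp.pade(b, L, M)                                  # rational model of B(t)
        B = lambda t: (sum(c * t**i for i, c in enumerate(p)) /
                       sum(c * t**i for i, c in enumerate(q)))
        return mp.quad(lambda t: mp.exp(-t) * B(x * t), [0, mp.inf])  # Laplace integral

    mp.mp.dps = 30
    # c_k = (-1)^k (2k)!/(4^k k!): Borel transform 1/sqrt(1+t), Borel sum known exactly.
    coeffs = [(-1)**k * mp.factorial(2*k) / (4**k * mp.factorial(k)) for k in range(12)]
    x = mp.mpf("0.3")
    exact = mp.sqrt(mp.pi / x) * mp.exp(1 / x) * mp.erfc(1 / mp.sqrt(x))
    print(borel_pade(coeffs, x, 5, 6), exact)   # the two should agree closely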
A formulation and analysis of combat games
NASA Technical Reports Server (NTRS)
Heymann, M.; Ardema, M. D.; Rajan, N.
1985-01-01
Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. With each opponent is associated a target in the event space in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously or in neither, a joint capture or a draw, respectively, is said to occur. Resolution of the encounter is formulated as a combat game, namely, as a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game. Otherwise the optimal strategies are computed from a resulting non-zero-sum game. Since optimal combat strategies frequently may not exist, approximate delta-combat games are also formulated, leading to approximate or delta-optimal strategies. To illustrate combat games, an example, called the turret game, is considered. This game may be thought of as a highly simplified model of air combat, yet it is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.
Classification of digital affine noncommutative geometries
NASA Astrophysics Data System (ADS)
Majid, Shahn; Pachoł, Anna
2018-03-01
It is known that connected translation-invariant n-dimensional noncommutative differentials dx^i on the algebra k[x_1, …, x_n] of polynomials in n variables over a field k are classified by commutative algebras V on the vector space spanned by the coordinates. These data also apply to construct differentials on the Heisenberg algebra "spacetime" with relations [x^μ, x^ν] = λΘ^{μν}, where Θ is an antisymmetric matrix, as well as to Lie algebras with pre-Lie algebra structures. We specialise the general theory to the field k = F_2 of two elements, in which case translation-invariant metrics (i.e., with constant coefficients) are equivalent to making V a Frobenius algebra. We classify all of these and their quantum Levi-Civita bimodule connections for n = 2, 3, with partial results for n = 4. For n = 2, we find 3 inequivalent differential structures admitting 1, 2, and 3 invariant metrics, respectively. For n = 3, we find 6 differential structures admitting 0, 1, 2, 3, 4, 7 invariant metrics, respectively. We give some examples for n = 4 and general n. Surprisingly, not all our geometries for n ≥ 2 have zero quantum Riemann curvature. Quantum gravity is normally seen as a weighted "sum" over all possible metrics, but our results are a step towards a deeper approach in which we must also "sum" over differential structures. Over F_2 we construct some of our algebras and associated structures by digital gates, opening up the possibility of "digital geometry."
Quasi-Classical Asymptotics for the Pauli Operator
NASA Astrophysics Data System (ADS)
Sobolev, Alexander V.
We study the behaviour of the sums of the eigenvalues of the Pauli operator in a magnetic field and an electric field V(x) as the Planck constant ħ tends to zero and the magnetic field strength μ tends to infinity. We show that the sum obeys the natural Weyl-type formula.
Hisada, Hiromoto; Tsutsumi, Hiroko; Ishida, Hiroki; Hata, Yoji
2013-01-01
Llama variable heavy-chain antibody fragment (VHH) fused to four different reader proteins was produced and secreted in culture medium by Aspergillus oryzae. These fusion proteins consisted of N-terminal reader proteins, VHH, and a C-terminal his-tag sequence which facilitated purification using one-step his-tag affinity chromatography. SDS-PAGE analysis of the deglycosylated purified fusion proteins confirmed that the molecular weight of each corresponded to the expected sum of VHH and the respective reader proteins. The apparent high molecular weight reader protein glucoamylase (GlaB) was found to be suitable for efficient VHH production. The GlaB-VHH-His protein bound its antigen, human chorionic gonadotropin, and was detectable by a new ELISA-based method using a coupled assay with glucoamylase, glucose oxidase, peroxidase, maltose, and 3,3',5,5'-tetramethylbenzidine as substrates. Addition of potassium phosphate to the culture medium induced secretion of 0.61 mg GlaB-VHH-His protein/ml culture medium in 5 days.
Auditory alert systems with enhanced detectability
NASA Technical Reports Server (NTRS)
Begault, Durand R. (Inventor)
2008-01-01
Methods and systems for distinguishing an auditory alert signal from a background of one or more non-alert signals. In a first embodiment, a prefix signal, associated with an existing alert signal, is provided that has a signal component in each of three or more selected frequency ranges, with each signal component at a level at least 3-10 dB above an estimated background (non-alert) level in that frequency range. The alert signal may be chirped within one or more frequency bands. In another embodiment, an alert signal moves, continuously or discontinuously, from one location to another over a short time interval, introducing a perceived spatial modulation or jitter. In another embodiment, a weighted sum of background signals adjacent to each ear is formed, and the weighted sum is delivered to each ear as a uniform background; a distinguishable alert signal is presented on top of this weighted-sum signal at one ear, or distinguishable first and second alert signals are presented at the two ears of a subject.
Multiple Interactive Pollutants in Water Quality Trading
NASA Astrophysics Data System (ADS)
Sarang, Amin; Lence, Barbara J.; Shamsai, Abolfazl
2008-10-01
Efficient environmental management calls for the consideration of multiple pollutants, for which two main types of transferable discharge permit (TDP) program have been described: separate permits that manage each pollutant individually in separate markets, with each permit based on the quantity of the pollutant or its environmental effects, and weighted-sum permits that aggregate several pollutants as a single commodity to be traded in a single market. In this paper, we perform a mathematical analysis of TDP programs for multiple pollutants that jointly affect the environment (i.e., interactive pollutants) and demonstrate the practicality of this approach for cost-efficient maintenance of river water quality. For interactive pollutants, the relative weighting factors are functions of the water quality impacts, marginal damage function, and marginal treatment costs at optimality. We derive the optimal set of weighting factors required by this approach for important scenarios for multiple interactive pollutants and propose using an analytical elasticity of substitution function to estimate damage functions for these scenarios. We evaluate the applicability of this approach using a hypothetical example that considers two interactive pollutants. We compare the weighted-sum permit approach for interactive pollutants with individual permit systems and TDP programs for multiple additive pollutants. We conclude by discussing practical considerations and implementation issues that result from the application of weighted-sum permit programs.
Stochastic many-body perturbation theory for anharmonic molecular vibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hermes, Matthew R.; Hirata, So, E-mail: sohirata@illinois.edu; CREST, Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012
2014-08-28
A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES), where m is the number of vibrational degrees of freedom. Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly, but according to the weight function, via the Metropolis algorithm. In this way, the method completely avoids the cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants, essentially up to infinite order, to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method can therefore compute directly and stochastically the transition frequencies of fundamentals and overtones, as well as their relative intensities as pole strengths, without the fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm⁻¹ and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.
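The integration device itself is generic and fits in a short sketch (a 1-D toy rather than the paper's m-dimensional vibrational integrals; the integrand and weight below are arbitrary illustrations): to estimate I = ∫ f(x) dx, sample x from a normalized weight w(x) with the Metropolis algorithm and average f(x)/w(x) along the walk.

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: x**2 * np.exp(-x**2)             # integrand; exact integral = sqrt(pi)/2
    w = lambda x: np.exp(-x**2) / np.sqrt(np.pi)   # normalized weight resembling f

    x, total, nsteps = 0.0, 0.0, 200_000
    for _ in range(nsteps):
        prop = x + rng.normal()                    # random-walk proposal
        if rng.random() < min(1.0, w(prop) / w(x)):
            x = prop                               # Metropolis accept/reject on the weight
        total += f(x) / w(x)                       # integrand divided by weight, on demand

    print(total / nsteps, np.sqrt(np.pi) / 2)      # estimate vs exact (~0.886)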
Endo, Hironobu; Sekiguchi, Kenji; Shimada, Hitoshi; Ueda, Takehiro; Kowa, Hisatomo; Kanda, Fumio; Toda, Tatsushi
2018-03-01
There is no reliable objective indicator for upper motor neuron dysfunction in amyotrophic lateral sclerosis (ALS). To determine the clinical significance and potential utility of magnetic resonance (MR) signals, we investigated the relationship between clinical symptoms and susceptibility changes in the motor cortex measured using susceptibility-weighted MR imaging taken by readily available 3-T MRI in clinical practice. Twenty-four ALS patients and 14 control subjects underwent 3-T MR T1-weighted imaging and susceptibility-weighted MR imaging with the principles of echo-shifting with a train of observations (PRESTO) sequence. We analysed relationships between relative susceptibility changes in the motor cortex assessed using voxel-based analysis (VBA) and clinical scores, including upper motor neuron score, ALS functional rating scale revised score, and Medical Research Council sum score on physical examination. Patients with ALS exhibited significantly lower signal intensity in the precentral gyrus on susceptibility-weighted MR imaging compared with controls. Clinical scores were significantly correlated with susceptibility changes. Importantly, the extent of the susceptibility changes in the bilateral precentral gyri was significantly correlated with upper motor neuron scores. The results of our pilot study using VBA indicated that low signal intensity in motor cortex on susceptibility-weighted MR imaging may correspond to clinical symptoms, particularly upper motor neuron dysfunction. Susceptibility-weighted MR imaging may be a useful diagnostic tool as an objective indicator of upper motor neuron dysfunction.
Propulsion Investigation for Zero and Near-Zero Emissions Aircraft
NASA Technical Reports Server (NTRS)
Snyder, Christopher A.; Berton, Jeffrey J.; Brown, Gerald v.; Dolce, James L.; Dravid, Marayan V.; Eichenberg, Dennis J.; Freeh, Joshua E.; Gallo, Christopher A.; Jones, Scott M.; Kundu, Krishna P.;
2009-01-01
As world emissions are further scrutinized to identify areas for improvement, aviation's contribution to the problem can no longer be ignored. Previous studies for zero or near-zero emissions aircraft suggested aircraft and propulsion system sizes; the present effort performs propulsion system and subsystem layout and propellant tankage analyses to verify the weight-scaling relationships. These efforts could be used to identify and guide subsequent work on systems and subsystems to achieve viable aircraft system emissions goals. Previous work quickly focused these efforts on propulsion systems for 70- and 100-passenger aircraft. Propulsion systems modeled included hydrogen-fueled gas turbines and fuel cells; some preliminary estimates combined these two systems. Hydrogen gas-turbine engines, with advanced combustor technology, could realize significant reductions in nitrogen emissions. Hydrogen fuel cell propulsion systems were laid out in further detail, and more detailed analysis identified the systems needed and the weight goals for a viable overall system weight. Results show significant, necessary reductions in overall weight, predominantly in the fuel cell stack and the power management and distribution subsystems, to achieve reasonable overall aircraft sizes and weights. Preliminary conceptual analyses for a combination of gas-turbine and fuel cell systems were also performed, and further studies were recommended. Using gas-turbine engines combined with fuel cell systems can reduce the fuel cell propulsion system weight, but at higher fuel usage than using the fuel cell only.
Minimizing the Sum of Completion Times with Resource Dependent Times
NASA Astrophysics Data System (ADS)
Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe
2008-10-01
We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria: the sum of completion times and the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that the problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, up to this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as mobile computers, phones, and GPS devices, in order to prolong battery duration.
On the Hardness of Subset Sum Problem from Different Intervals
NASA Astrophysics Data System (ADS)
Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke
The subset sum problem, often called the knapsack problem, is known to be NP-hard, and there are several cryptosystems based on it. Assuming an oracle for the shortest vector problem in lattices, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the "density" of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals, and analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when the data size of public keys is reduced.
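For concreteness, the density parameter at issue is d = n / log2(max_i a_i) for weights a_1, ..., a_n; the classical thresholds (for example, the roughly 0.9408 bound of the Coster et al. variant) are stated for weights drawn uniformly from a single interval, which is exactly the assumption this paper relaxes. A quick sketch:

    import math
    import random

    def density(weights):
        return len(weights) / math.log2(max(weights))

    weights = [random.getrandbits(200) | 1 for _ in range(100)]  # ~200-bit weights
    print(density(weights))   # about 100/200 = 0.5, i.e., "low density"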
Warburton, William K.; Momayezi, Michael
2006-06-20
A method and apparatus for processing step-like output signals (primary signals) generated by non-ideal, for example nominally single-pole ("N-1P"), devices. An exemplary method includes creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point; summing the secondary signals reaching the signal summation point after propagating along the signal paths to provide a summed signal; performing a filtering or delaying operation in at least one of said signal paths so that the secondary signals reaching said summing point have a defined time correlation with respect to one another; applying a set of weighting coefficients to the secondary signals propagating along said signal paths; and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.
Cerebral Microbleeds: Burden Assessment by Using Quantitative Susceptibility Mapping
Liu, Tian; Surapaneni, Krishna; Lou, Min; Cheng, Liuquan; Spincemaille, Pascal
2012-01-01
Purpose: To assess quantitative susceptibility mapping (QSM) for reducing the inconsistency of standard magnetic resonance (MR) imaging sequences in measurements of cerebral microbleed burden. Materials and Methods: This retrospective study was HIPAA compliant and institutional review board approved. Ten patients (5.6%) were selected from among 178 consecutive patients suspected of having experienced a stroke who were imaged with a multiecho gradient-echo sequence at 3.0 T and who had cerebral microbleeds on T2*-weighted images. QSM was performed for various ranges of echo time by using both the magnitude and phase components in the morphology-enabled dipole inversion method. Cerebral microbleed size was measured by two neuroradiologists on QSM images, T2*-weighted images, susceptibility-weighted (SW) images, and R2* maps calculated by using different echo times. The sum of susceptibility over a region containing a cerebral microbleed was also estimated on QSM images as its total susceptibility. Measurement differences were assessed by using the Student t test and the F test; P < .05 was considered to indicate a statistically significant difference. Results: When echo time was increased from approximately 20 to 40 msec, the measured cerebral microbleed volume increased by mean factors of 1.49 ± 0.86 (standard deviation), 1.64 ± 0.84, 2.30 ± 1.20, and 2.30 ± 1.19 for QSM, R2*, T2*-weighted, and SW images, respectively (P < .01). However, the measured total susceptibility with QSM did not show significant change over echo time (P = .31), and the variation was significantly smaller than any of the volume increases (P < .01 for each). Conclusion: The total susceptibility of a cerebral microbleed measured by using QSM is a physical property that is independent of echo time. © RSNA, 2011 PMID:22056688
Finite-width Laplace sum rules for 0-+ pseudoscalar glueball in the instanton vacuum model
NASA Astrophysics Data System (ADS)
Wang, Feng; Chen, Junlong; Liu, Jueping
2015-10-01
The correlation function of the 0-+ pseudoscalar glueball current is calculated based on the semiclassical expansion for quantum chromodynamics (QCD) in the instanton liquid background. Besides taking the pure classical contribution from instantons and the perturbative one into account, we calculate the contribution arising from the interaction (or the interference) between instantons and the quantum gluon fields, which is infrared free and more important than the pure perturbative one. Instead of the usual zero-width approximation for the resonances, the Breit-Wigner form with a correct threshold behavior for the spectral function of the finite-width resonance is adopted. The properties of the 0-+ pseudoscalar glueball are investigated via a family of the QCD Laplacian sum rules. A consistency between the subtracted and unsubtracted sum rules is very well justified. The values of the mass, decay width, and coupling constants for the 0-+ resonance in which the glueball fraction is dominant are obtained.
A parallel approach of COFFEE objective function to multiple sequence alignment
NASA Astrophysics Data System (ADS)
Zafalon, G. F. D.; Visotaky, J. M. V.; Amorim, A. R.; Valêncio, C. R.; Neves, L. A.; de Souza, R. C. G.; Machado, J. M.
2015-09-01
Computational tools to assist genomic analyses are increasingly necessary due to the fast growth in the amount of available data. Given the high computational cost of deterministic algorithms for sequence alignment, many works concentrate their efforts on the development of heuristic approaches to multiple sequence alignment. However, selecting an approach that offers solutions with good biological significance and feasible execution time is a great challenge. This work presents the parallelization, using the multithreading paradigm, of the processing steps of the MSA-GA tool in the execution of the COFFEE objective function. The standard objective function implemented in the tool is the Weighted Sum of Pairs (WSP), which produces some distortions in the final alignments when sequence sets with low similarity are aligned. In previous studies we therefore implemented the COFFEE objective function in the tool to smooth these distortions. Although the nature of the COFFEE objective function implies an increase in execution time, this approach has steps that can be executed in parallel. With the improvements implemented in this work, the new approach runs 24% faster than the sequential version with COFFEE. Moreover, the multithreaded COFFEE approach is more efficient than WSP because, besides being slightly faster, its biological results are better.
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
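One way to arrive at such a rule (a sketch of the idea; the letter's exact update may differ) is to expand the explicit L1 normalization w ← (w + ηyx)/||w + ηyx||_1 to first order in η, just as Oja's original rule follows from L2 normalization; the decay term then involves sign(w) rather than w, and remains local:

    import numpy as np

    def oja_l2_step(w, x, eta=0.01):
        y = w @ x
        return w + eta * y * (x - y * w)                   # original rule: ||w||_2 -> 1

    def oja_l1_step(w, x, eta=0.01):
        y = w @ x
        return w + eta * y * (x - (np.sign(w) @ x) * w)    # first-order L1 normalization

    rng = np.random.default_rng(1)
    C = np.array([[2.0, 0.5], [0.5, 1.0]])                 # input covariance
    w2 = w1 = np.array([0.3, 0.4])
    for _ in range(20_000):
        x = rng.multivariate_normal([0.0, 0.0], C)
        w2, w1 = oja_l2_step(w2, x), oja_l1_step(w1, x)
    print(np.linalg.norm(w2), np.abs(w1).sum())            # both converge near 1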
Spacecraft intercept guidance using zero effort miss steering
NASA Astrophysics Data System (ADS)
Newman, Brett
The suitability of proportional navigation, or an equivalent zero effort miss formulation, for spacecraft intercepts during midcourse guidance, followed by a ballistic coast to the endgame, is addressed. The problem is formulated in terms of relative motion in a general 3D framework. The proposed guidance law for the commanded thrust vector orientation consists of the sum of two terms: (1) along the line of sight unit direction and (2) along the zero effort miss component perpendicular to the line of sight and proportional to the miss itself and a guidance gain. If the guidance law is to be suitable for longer range targeting applications with significant ballistic coasting after burnout, determination of the zero effort miss must account for the different gravitational accelerations experienced by each vehicle. The proposed miss determination techniques employ approximations for the true differential gravity effect. Theoretical results are applied to a numerical engagement scenario and the resulting performance is evaluated in terms of the miss distances determined from nonlinear simulation.
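A minimal sketch of the guidance law's geometry under the simplest assumption (uniform gravity, so the differential-gravity correction to the miss vanishes and the relative motion coasts linearly; the gain k and the numbers are illustrative, and k absorbs the units of the miss):

    import numpy as np

    def thrust_direction(r_rel, v_rel, t_go, k=1.0):
        """r_rel, v_rel: target-minus-interceptor position/velocity; t_go: time to go."""
        los = r_rel / np.linalg.norm(r_rel)        # line-of-sight unit vector
        zem = r_rel + v_rel * t_go                 # zero effort miss at the final time
        zem_perp = zem - (zem @ los) * los         # miss component perpendicular to LOS
        u = los + k * zem_perp                     # sum of the two commanded terms
        return u / np.linalg.norm(u)

    print(thrust_direction(np.array([1.0e5, 2.0e4, 0.0]),
                           np.array([-800.0, 10.0, 0.0]), t_go=100.0))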
Code of Federal Regulations, 2014 CFR
2014-01-01
... weights corresponding to the airplane operating conditions (such as ramp, ground or water taxi, takeoff... conditions (such as zero fuel weight, center of gravity position and weight distribution) must be established...
Code of Federal Regulations, 2012 CFR
2012-01-01
... weights corresponding to the airplane operating conditions (such as ramp, ground or water taxi, takeoff... conditions (such as zero fuel weight, center of gravity position and weight distribution) must be established...
Code of Federal Regulations, 2013 CFR
2013-01-01
... weights corresponding to the airplane operating conditions (such as ramp, ground or water taxi, takeoff... conditions (such as zero fuel weight, center of gravity position and weight distribution) must be established...
Swingle, Brian
2013-09-06
We compute the entanglement entropy of a wide class of models that may be characterized as describing matter coupled to gauge fields. Our principal result is an entanglement sum rule that states that the entropy of the full system is the sum of the entropies of the two components. In the context of the models we consider, this result applies to the full entropy, but more generally it is a statement about the additivity of universal terms in the entropy. Our proof simultaneously extends and simplifies previous arguments, with extensions including new models at zero temperature as well as the ability to treat finite-temperature crossovers. We emphasize that while the additivity is an exact statement, each term in the sum may still be difficult to compute. Our results apply to a wide variety of phases including Fermi liquids, spin liquids, and some non-Fermi liquid metals. For example, we prove that our model of an interacting Fermi liquid has exactly the log violation of the area law for entanglement entropy predicted by the Widom formula, in agreement with earlier arguments.
On the origin independence of the Verdet tensor†
NASA Astrophysics Data System (ADS)
Caputo, M. C.; Coriani, S.; Pelloni, S.; Lazzeretti, P.
2013-07-01
The condition for invariance under a translation of the coordinate system of the Verdet tensor and the Verdet constant, calculated via quantum chemical methods using gaugeless basis sets, is expressed by a vanishing sum rule involving a third-rank polar tensor. The sum rule is, in principle, satisfied only in the ideal case of optimal variational electronic wavefunctions. In general, it is not fulfilled in non-variational calculations and variational calculations allowing for the algebraic approximation, but it can be satisfied for reasons of molecular symmetry. Group-theoretical procedures have been used to determine (i) the total number of non-vanishing components and (ii) the unique components of both the polar tensor appearing in the sum rule and the axial Verdet tensor, for a series of symmetry groups. Test calculations at the random-phase approximation level of accuracy for water, hydrogen peroxide and ammonia molecules, using basis sets of increasing quality, show a smooth convergence to zero of the sum rule. Verdet tensor components calculated for the same molecules converge to limit values, estimated via large basis sets of gaugeless Gaussian functions and London orbitals.
Exploring the Sums of Powers of Consecutive q-Integers
ERIC Educational Resources Information Center
Kim, T.; Ryoo, C. S.; Jang, L. C.; Rim, S. H.
2005-01-01
The Bernoulli numbers are among the most interesting and important number sequences in mathematics. They first appeared in the posthumous work "Ars Conjectandi" (1713) by Jacob Bernoulli (1654-1705) in connection with sums of powers of consecutive integers (Bernoulli, 1713; or Smith, 1959). Bernoulli numbers are particularly important in number…
Methods for Scaling to Doubly Stochastic Form,
1981-06-26
Frobenius–König Theorem (Marcus and Minc [1964], p. 97): a nonnegative n × n matrix without support contains an s × t zero submatrix where s + t = n + 1. ...that YA(k) has row sums 1. Then normalize the columns by a diagonal similarity transform defined as follows: let x = (x_1, ..., x_n) be a left Perron vector
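The alternating row/column normalization the excerpt describes is the Sinkhorn–Knopp iteration. A minimal sketch, assuming a nonnegative input matrix with support:

```python
import numpy as np

def sinkhorn_knopp(A, max_iters=1000, tol=1e-10):
    """Alternately normalize rows and columns of a nonnegative matrix with
    support; the iterates converge to a doubly stochastic matrix D1 A D2."""
    X = np.asarray(A, dtype=float).copy()
    for _ in range(max_iters):
        X /= X.sum(axis=1, keepdims=True)   # make row sums 1
        X /= X.sum(axis=0, keepdims=True)   # make column sums 1
        if np.abs(X.sum(axis=1) - 1.0).max() < tol:
            break
    return X
```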
ERIC Educational Resources Information Center
Raths, David
2011-01-01
In this era of political rancor when everything is a zero-sum game, it is refreshing to think that two parties could actually come together to create a win-win situation. That is exactly what is happening, though, as a number of universities take on the role of application service provider (ASP) for smaller schools. It is not all confetti and…
Project Physics Programmed Instruction, Vectors 2.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
This is the second of a series of three programmed instruction booklets on vectors developed by Harvard Project Physics. It covers adding two or more vectors together, and finding a third vector that could be added to two given vectors to make a sum of zero. For other booklets in this series, see SE 015 549 and SE 015 551. (DT)
A Probabilistic-Numerical Approximation for an Obstacle Problem Arising in Game Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruen, Christine, E-mail: christine.gruen@univ-brest.fr
We investigate a two-player zero-sum stochastic differential game in which one of the players has more information on the game than his opponent. We show how to construct numerical schemes for the value function of this game, which is given by the solution of a quasilinear partial differential equation with obstacle.
TABLES OF VALUES AND SHOOTING TIMES IN NOISY DUELS.
A noisy duel is a zero-sum, two-person game with the following structure: Each player has bullets which he can fire at any time in (0, 1). If...shooting times for noisy duels are presented, which, in some cases, can be used to trace the play of the game. An additional table illustrates how
A game-theoretic model is proposed for the generalization of a discrete-fire silent duel to a silent duel with continuous firing. This zero-sum two...person game is solved in the symmetric case. It is shown that pure optimal strategies exist and hence also solve a noisy duel with continuous firing. A solution for the general non-symmetric duel is conjectured. (Author)
The Spectre of Neoliberalism: Pedagogy, Gender and the Construction of Learner Identities
ERIC Educational Resources Information Center
Wilkins, Andrew
2012-01-01
In this paper I draw on ethnographic observation data taken from a school-based study of two groups of 12-13-year-old pupils identified as high achieving and popular to explore how relations between teachers and pupils are mediated and constituted through the spectre of neoliberal values and sensibilities--zero-sum thinking, individualism and…
Good Evaluation Measures: More than Their Psychometric Properties
ERIC Educational Resources Information Center
Weitzman, Beth C.; Silver, Diana
2013-01-01
In this commentary, we examine Braverman's insights into the trade-offs between feasibility and rigor in evaluation measures and reject his assessment of the trade-off as a zero-sum game. We argue that feasibility and policy salience are, like reliability and validity, intrinsic to the definition of a good measure. To reduce the tension between…
ERIC Educational Resources Information Center
Honig, Meredith I.; Lorton, Juli Swinnerton; Copland, Michael A.
2009-01-01
Over the past 15 years, a growing number of mid-sized to large school district central offices have engaged in radical reforms to strengthen teaching and learning for all students districtwide. Such efforts mark a significant change in urban educational governance. The authors call these efforts "district central office transformation for teaching…
78 FR 63279 - Public Notice for Waiver of Aeronautical Land-Use Assurance
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-23
... Commissioners of Orange County for the construction of County Road CR 300 South/Airport Road to facilitate... property for a nominal sum of One Dollar and zero cents ($1.00) for the construction of County Road CR 300 South/Airport Road. Construction of the road will facilitate access to the airport. The aforementioned...
The Israeli-Palestinian Peace Process and Its Vicissitudes: Insights from Attitude Theory
ERIC Educational Resources Information Center
Kelman, Herbert C.
2007-01-01
The vicissitudes of the Israeli-Palestinian peace process since 1967 are analyzed using attitudes and related concepts where relevant. The 1967 war returned the two peoples' zero-sum conflict around national identity to its origin as a conflict within the land both peoples claim. Gradually, new attitudes evolved regarding the necessity and…
International STEM Achievement: Not a Zero-Sum Game
ERIC Educational Resources Information Center
Heilbronner, Nancy N.
2014-01-01
Nancy N. Heilbronner, the associate dean for academic affairs at Mercy College in Dobbs Ferry, New York, lectures internationally on topics related to gifted and science education. In this article she writes that we have not fully come to terms with two influential factors that will likely impact future STEM outcomes across the planet: (1) changes…
NASA Astrophysics Data System (ADS)
Göschl, Daniel
2018-03-01
We discuss simulation strategies for the massless lattice Schwinger model with a topological term and finite chemical potential. The simulation is done in a dual representation where the complex action problem is solved and the partition function is a sum over fermion loops, fermion dimers and plaquette-occupation numbers. We explore strategies to update the fermion loops coupled to the gauge degrees of freedom and check our results with conventional simulations (without topological term and at zero chemical potential), as well as with exact summation on small volumes. Some physical implications of the results are discussed.
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
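A compact sketch of the three-stage pipeline, assuming OpenCV with the contrib module for cv2.ximgproc.guidedFilter. The specific gradient and well-exposedness formulas below are plausible stand-ins, not necessarily the authors' definitions.

```python
import cv2
import numpy as np

def fuse_exposures(images, radius=8, eps=1e-3):
    """Guided-filter multi-exposure fusion: initial weight maps from gradient
    and well-exposedness cues, refinement with the source image as guidance,
    then a per-pixel weighted sum of the sources."""
    weights = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        grad = np.abs(cv2.Laplacian(gray, cv2.CV_32F))        # gradient cue
        wexp = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2))  # well-exposedness cue
        w0 = grad * wexp + 1e-12
        w = cv2.ximgproc.guidedFilter(gray, w0, radius, eps)  # refine with guidance
        weights.append(np.clip(w, 0, None))
    W = np.stack(weights)
    W /= W.sum(axis=0)                                        # normalize across exposures
    stack = np.stack([im.astype(np.float32) for im in images])
    fused = (W[..., None] * stack).sum(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)
```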
Real time pipelined system for forming the sum of products in the processing of video data
NASA Technical Reports Server (NTRS)
Wilcox, Brian (Inventor)
1988-01-01
A 3-by-3 convolver utilizes 9 binary arithmetic units connected in cascade for multiplying 12-bit binary pixel values P_i, which are positive or two's complement binary numbers, by 5-bit magnitude (plus sign) weights W_i, which may be positive or negative. The weights are stored in registers including the sign bits. For a negative weight, the one's complement of the pixel value to be multiplied is formed at each unit by a bank of 17 exclusive-or gates G_i under control of the sign of the corresponding weight W_i, and a correction is made by adding the sum of the absolute values of all the negative weights for each 3-by-3 kernel. Since this correction value remains constant as long as the weights are constant, it can be precomputed and stored in a register as a value to be added to the product PW of the first arithmetic unit.
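The arithmetic identity behind the trick is easy to verify in software: in two's complement, ~P = -P - 1, so P·(-|W|) = (~P)·|W| + |W|, and the per-term +|W| corrections collapse into one precomputed constant per kernel. A small check (Python integers behave like arbitrary-width two's complement here):

```python
def dot_with_ones_complement(pixels, weights):
    """Emulate the convolver's handling of negative weights: flip the pixel
    bits (one's complement) when the weight is negative, multiply by |W|,
    and add a single precomputed correction equal to sum(|W|) over the
    negative weights."""
    correction = sum(abs(w) for w in weights if w < 0)  # constant while weights are fixed
    acc = sum((~p if w < 0 else p) * abs(w) for p, w in zip(pixels, weights))
    return acc + correction

pixels  = [100, -7, 2047, 0, 55, -2048, 13, 999, 1]   # 12-bit signed pixel values
weights = [3, -5, 2, -1, 0, 7, -4, 1, 6]              # 5-bit magnitudes with signs
assert dot_with_ones_complement(pixels, weights) == \
       sum(p * w for p, w in zip(pixels, weights))
```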
NASA Astrophysics Data System (ADS)
Zhang, Ji; Li, Tao; Zheng, Shiqiang; Li, Yiyong
2015-03-01
The aim of this study was to reduce the effects of respiratory motion in quantitative analysis based on single-mode liver contrast-enhanced ultrasound (CEUS) image sequences. An image gating method and an iterative registration method using a model image were adopted to register single-mode liver CEUS image sequences. The feasibility of the proposed respiratory motion correction method was explored preliminarily using 10 hepatocellular carcinoma CEUS cases. The positions of the lesions in the time series of 2D ultrasound images after correction were visually evaluated. Before and after correction, the quality of the weighted sum of transit time (WSTT) parametric images was also compared in terms of accuracy and spatial resolution. For the corrected and uncorrected sequences, the mean deviation values (mDVs) of time-intensity curve (TIC) fitting derived from the CEUS sequences were measured. After correction, the positions of the lesions in the time series of 2D ultrasound images were almost invariant, whereas the lesions in the uncorrected images all shifted noticeably. The quality of the WSTT parametric maps derived from the liver CEUS image sequences was greatly improved. Moreover, the mDVs of TIC fitting derived from the CEUS sequences after correction decreased by an average of 48.48+/-42.15. The proposed correction method could improve the accuracy of quantitative analysis based on single-mode liver CEUS image sequences, which would help enhance the differential diagnosis efficiency of liver tumors.
Moreno, J P; Johnston, C A; Hernandez, D C; LeNoble, J; Papaioannou, M A; Foreyt, J P
2016-10-01
While overweight and obese children are more likely to have overweight or obese parents, less is known about the effect of parental weight status on children's success in weight management programmes. This study was a secondary data analysis of a randomized controlled trial and investigated the impact of having zero, one or two obese parents on children's success in a school-based weight management programme. Sixty-one Mexican-American children participated in a 24-week school-based weight management intervention which took place in 2005-2006. Children's heights and weights were measured at baseline, 3, 6 and 12 months. Parental weight status was assessed at baseline. Repeated-measures ANOVA and ANCOVA were conducted to compare changes in children's weight within and between groups, respectively. Within-group comparisons revealed that the intervention led to significant decreases in standardized body mass index (zBMI) for children with zero (F = 23.16, P < .001) or one obese (F = 4.99, P < .05) parent. Between-group comparisons indicated that children with zero and one obese parents demonstrated greater decreases in zBMI compared to children with two obese parents at every time point. The school-based weight management programme appears to be most efficacious for children with one or no obese parents compared to children with two obese parents. These results demonstrate the need to consider parental weight status when engaging in childhood weight management efforts. © 2015 World Obesity.
Shang, Song'an; Ye, Jing; Luo, Xianfu; Qu, Jianxun; Zhen, Yong; Wu, Jingtao
2017-10-01
To prospectively assess coiled intracranial aneurysms using a novel non-contrast enhanced zero echo time (zTE) MR angiography (MRA) method, and compare its image quality with time-of-flight (TOF) MRA, using digital subtraction angiography (DSA) as reference. Twenty-five patients (10 males and 15 females; age 53.96 ± 12.46 years) were enrolled in this monocentric study. MRA sequences were performed 24 h before DSA. Susceptibility artefact intensity and flow signal within the parent artery were rated on a 4-point scale. Occlusion status was assessed using the 3-grade Montreal scale. Scores of zTE were higher than TOF for both susceptibility artefact intensity (3.42 ± 0.64 vs. 2.92 ± 0.63, P = 0.01) and flow signal (3.66 ± 0.95 vs. 3.24 ± 1.24, P = 0.01). DSA revealed 17 complete occlusions, five residual neck aneurysms and two residual aneurysms. Inter-observer agreement was excellent (weighted κ: 0.89) for zTE and good (weighted κ: 0.68) for TOF. Intermodality agreement was excellent for zTE (weighted κ: 0.95) and good for TOF (weighted κ: 0.80). Correlations of both MRA sequences with DSA were high (zTE, Spearman's ρ: 0.91; TOF, Spearman's ρ: 0.81). zTE MRA showed promising results for follow-up assessment of coiled intracranial aneurysms and was superior to TOF MRA for visualizing the parent artery and evaluating occlusion status. • Various MRA sequences were applied for follow-up assessment of coiled intracranial aneurysms. • zTE MRA was less sensitive to susceptibility artefacts and haemodynamics. • In this monocentric study, zTE MRA was equivalent to DSA. • zTE MRA may be an alternative to TOF MRA for follow-up assessment.
Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function
NASA Astrophysics Data System (ADS)
Seo, Sang-Wha; Kim, Yong; Choi, Han Ho
2017-11-01
This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illuminate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitiveness to abrupt load or input voltage parameter variations.
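A behavioral sketch of the idea, not the paper's controller: the cost weight on each tracked variable grows with that variable's error magnitude via a two-rule T-S interpolation, and the finite control set of the boost converter (two switch states) is searched exhaustively at each step. The one-step model `predict` and all numeric values are assumptions.

```python
import numpy as np

def ts_fuzzy_weight(err, w_small=1.0, w_large=5.0, scale=0.1):
    """Two-rule T-S interpolation: the weight rises from w_small to w_large
    as the error magnitude grows (illustrative membership function)."""
    mu = min(abs(err) / scale, 1.0)          # degree of "error is large"
    return (1.0 - mu) * w_small + mu * w_large

def fcs_mpc_step(x, v_ref, i_ref, predict):
    """One finite-control-set MPC step for a boost converter: evaluate the
    fuzzy-weighted cost for both switch states and apply the cheapest."""
    best_u, best_J = 0, np.inf
    for u in (0, 1):                         # the two admissible switch states
        i_next, v_next = predict(x, u)       # assumed one-step converter model
        J = (ts_fuzzy_weight(v_next - v_ref) * (v_next - v_ref) ** 2 +
             ts_fuzzy_weight(i_next - i_ref) * (i_next - i_ref) ** 2)
        if J < best_J:
            best_u, best_J = u, J
    return best_u
```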
Counter-ions at single charged wall: Sum rules.
Samaj, Ladislav
2013-09-01
For inhomogeneous classical Coulomb fluids in thermal equilibrium, like the jellium or the two-component Coulomb gas, there exists a variety of exact sum rules which relate the particle one-body and two-body densities. The necessary condition for these sum rules is that the Coulomb fluid possesses good screening properties, i.e. the particle correlation functions or the averaged charge inhomogeneity, say close to a wall, exhibit a short-range (usually exponential) decay. In this work, we study equilibrium statistical mechanics of an electric double layer with counter-ions only, i.e. a globally neutral system of equally charged point-like particles in the vicinity of a plain hard wall carrying a fixed uniform surface charge density of opposite sign. At large distances from the wall, the one-body and two-body counter-ion densities go to zero slowly according to the inverse-power law. In spite of the absence of screening, all known sum rules are shown to hold for two exactly solvable cases of the present system: in the weak-coupling Poisson-Boltzmann limit (in any spatial dimension larger than one) and at a special free-fermion coupling constant in two dimensions. This fact indicates an extended validity of the sum rules and provides a consistency check for reasonable theoretical approaches.
Teaching quantum physics by the sum over paths approach and GeoGebra simulations
NASA Astrophysics Data System (ADS)
Malgieri, M.; Onorato, P.; De Ambrosis, A.
2014-09-01
We present a research-based teaching sequence in introductory quantum physics using the Feynman sum over paths approach. Our reconstruction avoids the historical pathway, and starts by reconsidering optics from the standpoint of the quantum nature of light, analysing both traditional and modern experiments. The core of our educational path lies in the treatment of conceptual and epistemological themes, peculiar of quantum theory, based on evidence from quantum optics, such as the single photon Mach-Zehnder and Zhou-Wang-Mandel experiments. The sequence is supported by a collection of interactive simulations, realized in the open source GeoGebra environment, which we used to assist students in learning the basics of the method, and help them explore the proposed experimental situations as modeled in the sum over paths perspective. We tested our approach in the context of a post-graduate training course for pre-service physics teachers; according to the data we collected, student teachers displayed a greatly improved understanding of conceptual issues, and acquired significant abilities in using the sum over path method for problem solving.
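The computational core of the sum over paths approach is small enough to show directly: each path contributes a unit phasor whose phase advances by 2π per wavelength of optical path, and the detection probability is the squared magnitude of the phasor sum. A minimal sketch (the 500 nm wavelength and two-path geometry are illustrative):

```python
import numpy as np

WAVELENGTH = 500e-9   # illustrative photon wavelength (m)

def amplitude(path_lengths):
    """Sum-over-paths amplitude: one unit phasor per path."""
    phases = 2 * np.pi * np.asarray(path_lengths) / WAVELENGTH
    return np.exp(1j * phases).sum()

# Two-arm interferometer: the detection probability oscillates with the
# path-length difference and vanishes when the two phasors oppose.
L0 = 1.0
for dL in (0.0, WAVELENGTH / 4, WAVELENGTH / 2):
    p = abs(amplitude([L0, L0 + dL])) ** 2 / 4   # normalized to 1 at dL = 0
    print(f"dL = {dL:.2e} m  ->  P = {p:.3f}")
```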
Heterodimer Binding Scaffolds Recognition via the Analysis of Kinetically Hot Residues.
Perišić, Ognjen
2018-03-16
Physical interactions between proteins are often difficult to decipher. The aim of this paper is to present an algorithm that is designed to recognize binding patches and supporting structural scaffolds of interacting heterodimer proteins using the Gaussian Network Model (GNM). The recognition is based on the (self) adjustable identification of kinetically hot residues and their connection to possible binding scaffolds. The kinetically hot residues are residues with the lowest entropy, i.e., the highest contribution to the weighted sum of the fastest modes per chain extracted via GNM. The algorithm adjusts the number of fast modes in the GNM's weighted sum calculation using the ratio of predicted and expected numbers of target residues (contact and the neighboring first-layer residues). This approach produces very good results when applied to dimers with high protein sequence length ratios. The protocol's ability to recognize near-native decoys was compared to the ability of the residue-level statistical potential of Lu and Skolnick using the Sternberg and Vakser decoy dimer sets. The statistical potential produced better overall results, but in a number of cases its predicting ability was comparable, or even inferior, to the prediction ability of the adjustable GNM approach. The results presented in this paper suggest that in heterodimers at least one protein has an interacting scaffold determined by the immovable, kinetically hot residues. In many cases, interacting proteins (especially if of noticeably different sizes) either behave as a rigid lock and key or, presumably, exhibit the opposite dynamic behavior. While the binding surface of one protein is rigid and stable, its partner's interacting scaffold is more flexible and adaptable.
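A numerical sketch of the core computation, under stated assumptions: the Kirchhoff matrix is built from Cα contacts within a 7 Å cutoff, and residues are scored by an eigenvalue-weighted sum over a fixed number of the fastest GNM modes (the paper adjusts this number self-consistently, which is not shown here).

```python
import numpy as np

def gnm_hot_residues(coords, cutoff=7.0, n_fast=10, top_frac=0.2):
    """Score residues by a weighted sum of the fastest GNM modes.

    coords : (n, 3) array of C-alpha coordinates
    Returns indices of the highest-scoring (kinetically hot) residues.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = -(d < cutoff).astype(float)          # off-diagonal contact terms
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))      # Kirchhoff (graph Laplacian) matrix
    vals, vecs = np.linalg.eigh(K)           # eigenvalues in ascending order
    fast = slice(len(vals) - n_fast, None)   # highest-frequency modes
    score = (vecs[:, fast] ** 2 * vals[fast]).sum(axis=1)
    n_top = max(1, int(top_frac * len(score)))
    return np.argsort(score)[-n_top:]
```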
Interpretation of body residues for natural resources damage assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubitz, J.A.; Markarian, R.K.; Lauren, D.J.
1995-12-31
A 28-day caged mussel study using Corbicula sp. was conducted on Sugarland Run and the Potomac River following a spill of No. 2 fuel oil. In addition, resident Corbicula sp. from the Potomac River were sampled at the beginning and end of the study. The summed body residues of 39 polycyclic aromatic hydrocarbons (PAHs) ranged from 0.56 to 41 mg/kg dry weight within the study area. The summed body residues of the 18 PAHs that are routinely measured in the National Oceanic and Atmospheric Administration Status and Trends Program (NST) ranged from 0.5 to 20 mg/kg dry weight for mussels in this study. These data were similar to summed PAH concentrations reported in the NST for mussels from a variety of US coastal waters, which ranged from 0.4 to 24.5 mg/kg dry weight. This paper will discuss interpretation of PAH residues in Corbicula sp. to determine the spatial extent of the area affected by the oil spill. The toxicological significance of the PAH residues in both resident and caged mussels will also be presented.
Methods and statistics for combining motif match scores.
Bailey, T L; Gribskov, M
1998-01-01
Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
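For method 3, the distribution of the product of k independent uniform p-values has a closed form, which makes the combined significance a one-liner. A sketch (this is the standard identity; MAST's handling of dependent or truncated scores is not reproduced):

```python
import math

def combined_p(pvals):
    """P(product of k independent U(0,1) variables <= x)
       = x * sum_{i=0}^{k-1} (-ln x)^i / i!, evaluated at the observed product."""
    x = math.prod(pvals)
    L = -math.log(x)
    return x * sum(L ** i / math.factorial(i) for i in range(len(pvals)))

print(combined_p([0.01, 0.03, 0.20]))   # combined significance of three motif matches
```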
Adopting epidemic model to optimize medication and surgical intervention of excess weight
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2017-01-01
We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
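A toy version of the setup can be simulated in a few lines. The three-compartment flows, rates, initial conditions, and cost weights below are illustrative assumptions, not the paper's calibrated model; the point is the structure of the weighted-sum objective over the ODE trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

def excess_weight_flows(t, y, u, a=0.05, b=0.03, g=0.02):
    """Toy three-compartment model (normal N, overweight S, obese O) with a
    treatment participation rate u that moves treated people one class down."""
    N, S, O = y
    dN = -a * N * (S + O) + g * u * S
    dS = a * N * (S + O) - b * S - g * u * S + g * u * O
    dO = b * S - g * u * O
    return [dN, dS, dO]

def objective(u, w=(1.0, 2.0), cost=0.5, T=50.0):
    """Weighted sum of overweight and obese populations plus intervention cost."""
    sol = solve_ivp(excess_weight_flows, (0.0, T), [0.6, 0.3, 0.1], args=(u,))
    S_end, O_end = sol.y[1, -1], sol.y[2, -1]
    return w[0] * S_end + w[1] * O_end + cost * u

rates = np.linspace(0.0, 1.0, 21)
print(min(rates, key=objective))   # grid search for the best fixed participation rate
```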
Main sequence models for massive zero-metal stars
NASA Technical Reports Server (NTRS)
Cary, N.
1974-01-01
Zero-age main-sequence models for stars of 20, 10, 5, and 2 solar masses with no heavy elements are constructed for three different possible primordial helium abundances: Y=0.00, Y=0.23, and Y=0.30. The latter two values of Y bracket the range of primordial helium abundances cited by Wagoner. With the exceptions of the two 20 solar mass models that contain helium, these models are found to be self-consistent in the sense that the formation of carbon through the triple-alpha process during premain sequence contraction is not sufficient to bring the CN cycle into competition with the proton-proton chain on the ZAMS. The zero-metal models of the present study have higher surface and central temperatures, higher central densities, smaller radii, and smaller convective cores than do the population I models with the same masses.
Particle Filtering Methods for Incorporating Intelligence Updates
2017-03-01
methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF...samples are taken with replacement from the remaining non-zero weighted particles at each iteration. With this methodology, a zero-weighted particle is...incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non
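A sketch of the resampling step the excerpt describes: particles zeroed out by an information update are excluded from the draw, so they can never re-enter the ensemble.

```python
import numpy as np

def resample_nonzero(particles, weights, rng=None):
    """Multinomial resampling with replacement, restricted to particles with
    non-zero weight; returns the new ensemble with uniform weights."""
    rng = rng or np.random.default_rng()
    keep = np.flatnonzero(weights > 0)
    p = weights[keep] / weights[keep].sum()
    idx = rng.choice(keep, size=len(particles), replace=True, p=p)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```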
ERIC Educational Resources Information Center
Donnor, Jamel K.
2016-01-01
Despite being academically unqualified for admission to the University of Texas at Austin, Abigail Fisher, a White female, argued that she was not admitted due to the university's diversity policy. In addition to framing post-secondary admissions as a zero-sum phenomenon, Ms. Fisher intentionally framed students of color who are admitted to the…
First in the Class? Age and the Education Production Function. NBER Working Paper No. 13663
ERIC Educational Resources Information Center
Cascio, Elizabeth; Schanzenbach, Diane Whitmore
2007-01-01
Older children outperform younger children in a school-entry cohort well into their school careers. The existing literature has provided little insight into the causes of this phenomenon, leaving open the possibility that school-entry age is zero-sum game, where relatively young students lose what relatively old students gain. In this paper, we…
Speed Versus Accuracy: A Zero Sum Game
2009-05-11
sacrificed to mitigate the risk to credibility. In Ongoing Crisis Communication: Planning, Managing, and Responding, Timothy W. Coombs presents speed in...Carlson and Abelson, 24. 10 Timothy W. Coombs, Crisis Management and Communication, http://www.instituteforpr.org/essential_knowledge/detail...crisi_management_and_communications (accessed 30 April 2009). 11 Timothy W. Coombs, Ongoing Crisis Communication: Planning, Managing, and Responding, 2nd
ERIC Educational Resources Information Center
Kelly, Anthony
2016-01-01
Like its 2008 predecessor, the 2014 Research Excellence Framework was a high-stakes exercise. For universities and their constituent departments, it had zero-sum implications for league table position in a way that the 2001 exercise did not, and "post facto" it is having a significant effect on investment and disinvestment as…
ERIC Educational Resources Information Center
Baldacchino, Godfrey
2006-01-01
The "brain drain" phenomenon is typically seen as a zero-sum game, where one party's gain is presumed to be another's drain. This corresponds to deep-seated assumptions about what is "home" and what is "away". This article challenges the view, driven by much "brain drain" literature, that the dynamic is an…
ERIC Educational Resources Information Center
Guiniven, John E.
2002-01-01
Notes that disputes are seen as much less confrontational, much less zero-sum games in Canada than in the United States. Interviews 15 communications and public relations practitioners and professors with experiences on both sides of the 49th parallel and reviews relevant literature. Concludes that the greater acceptance of two-way symmetrical…
Probing Majorana neutrino textures at DUNE
NASA Astrophysics Data System (ADS)
Bora, Kalpana; Borah, Debasish; Dutta, Debajyoti
2017-10-01
We study the possibility of probing different texture zero neutrino mass matrices at the long baseline neutrino experiment DUNE, particularly focusing on its sensitivity to the octant of atmospheric mixing angle θ23 and leptonic Dirac CP phase δcp. Assuming a diagonal charged lepton basis and Majorana nature of light neutrinos, we first classify the possible light neutrino mass matrices with one and two texture zeros and then numerically evaluate the parameter space which satisfies the texture zero conditions. Apart from using the latest global fit 3σ values of neutrino oscillation parameters, we also use the latest bound on the sum of absolute neutrino masses (∑_i |m_i|) from the Planck mission data and the updated bound on effective neutrino mass M_ee from neutrinoless double beta decay (0νββ) experiments to find the allowed Majorana texture zero mass matrices. For the allowed texture zero mass matrices from all these constraints, we then feed the corresponding light neutrino parameter values satisfying the texture zero conditions into the numerical analysis in order to study the capability of DUNE to allow or exclude them once it starts taking data. We find that DUNE will be able to exclude some of these texture zero mass matrices which restrict (θ23, δcp) to a very specific range of values, depending on the values of the parameters that nature has chosen.
Knutsen, Helle K; Kvalem, Helen E; Thomsen, Cathrine; Frøshaug, May; Haugen, Margaretha; Becher, Georg; Alexander, Jan; Meltzer, Helle M
2008-02-01
This study investigates dietary exposure and serum levels of polybrominated diphenyl ethers (PBDEs) and hexabromocyclododecane (HBCD) in a group of Norwegians (n = 184) with a wide range of seafood consumption (4-455 g/day). Mean dietary exposure to Sum 5 PBDEs (1.5 ng/kg body weight/day) is among the highest reported. Since concentrations in foods were similar to those found elsewhere in Europe, this may be explained by high seafood consumption among Norwegians. Oily fish was the main dietary contributor both to Sum PBDEs and to the considerably lower HBCD intake (0.3 ng/kg body weight/day). Milk products appeared to contribute most to the BDE-209 intake (1.4 ng/kg body weight/day). BDE-209 and HBCD exposures are based on few food samples and need to be confirmed. Serum levels (mean Sum 7 PBDEs = 5.2 ng/g lipid) and congener patterns (BDE-47 > BDE-153 > BDE-99) were comparable with other European reports. Correlations between individual congeners were higher for the calculated dietary exposure than for serum levels. Further, significant but weak correlations were found between dietary exposure and serum levels for Sum PBDEs, BDE-47, and BDE-28 in males. This indicates that other sources in addition to diet need to be addressed.
Delivering both sum and difference beam distributions to a planar monopulse antenna array
Strassner, II, Bernd H.
2015-12-22
A planar monopulse radar apparatus includes a planar distribution matrix coupled to a planar antenna array having a linear configuration of antenna elements. The planar distribution matrix is responsive to first and second pluralities of weights applied thereto for providing both sum and difference beam distributions across the antenna array.
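A sketch of what such a distribution produces for a uniform linear array: in-phase weights give the sum beam, an odd (sign-flipped) weighting gives the difference beam with its boresight null, and the ratio of the two steers the monopulse estimate. The element count, spacing, and weightings below are illustrative, not the patent's design.

```python
import numpy as np

def monopulse_patterns(n=16, d=0.5, angles_deg=np.linspace(-90, 90, 721)):
    """Sum and difference patterns of an n-element linear array with
    element spacing d in wavelengths."""
    u = np.sin(np.deg2rad(angles_deg))
    pos = np.arange(n) - (n - 1) / 2.0            # element positions about center
    steer = np.exp(1j * 2 * np.pi * d * np.outer(pos, u))
    w_sum = np.ones(n)                            # uniform weights -> sum beam
    w_dif = np.sign(pos)                          # odd symmetry -> difference beam
    return w_sum @ steer, w_dif @ steer

S, D = monopulse_patterns()
# Near boresight the ratio D/S is approximately linear in angle, which is
# what a monopulse tracker uses to estimate the target offset.
```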
A 2-categorical state sum model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baratin, Aristide, E-mail: abaratin@uwaterloo.ca; Freidel, Laurent, E-mail: lfreidel@perimeterinstitute.ca
It has long been argued that higher categories provide the proper algebraic structure underlying state sum invariants of 4-manifolds. This idea has been refined recently, by proposing to use 2-groups and their representations as specific examples of 2-categories. The challenge has been to make these proposals fully explicit. Here, we give a concrete realization of this program. Building upon our earlier work with Baez and Wise on the representation theory of 2-groups, we construct a four-dimensional state sum model based on a categorified version of the Euclidean group. We define and explicitly compute the simplex weights, which may be viewed as a categorified analogue of Racah-Wigner 6j-symbols. These weights solve a hexagon equation that encodes the formal invariance of the state sum under the Pachner moves of the triangulation. This result unravels the combinatorial formulation of the Feynman amplitudes of quantum field theory on flat spacetime proposed in A. Baratin and L. Freidel [Classical Quantum Gravity 24, 2027–2060 (2007)], which was shown to lead after gauge-fixing to Korepanov's invariant of 4-manifolds.
Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.
Benaliouche, Houda; Touahria, Mohamed
2014-01-01
This research investigates the comparative performance from three different approaches for multimodal recognition of combined iris and fingerprints: classical sum rule, weighted sum rule, and fuzzy logic method. The scores from the different biometric traits of iris and fingerprint are fused at the matching score and the decision levels. The scores combination approach is used after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for the matching scores combinations at the decision level is the best followed by the classical weighted sum rule and the classical sum rule in order. The performance evaluation of each method is reported in terms of matching time, error rates, and accuracy after doing exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented followed by their comparison with related works in the current literature. The fusion by fuzzy logic decision mimics the human reasoning in a soft and simple way and gives enhanced results.
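A sketch of the two classical fusion rules after min-max normalization, as in the study; the fuzzy decision stage is not reproduced, and the 0.6 weight is an arbitrary illustration.

```python
import numpy as np

def min_max(scores):
    """Min-max normalization of a batch of matching scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(iris_scores, finger_scores, w_iris=0.6):
    """Classical sum rule and weighted sum rule at the matching-score level."""
    si, sf = min_max(iris_scores), min_max(finger_scores)
    return si + sf, w_iris * si + (1.0 - w_iris) * sf
```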
Neyman-Pearson biometric score fusion as an extension of the sum rule
NASA Astrophysics Data System (ADS)
Hube, Jens Peter
2007-04-01
We define the biometric performance invariance under strictly monotonic functions on match scores as normalization symmetry. We use this symmetry to clarify the essential difference between the standard score-level fusion approaches of sum rule and Neyman-Pearson. We then express Neyman-Pearson fusion assuming match scores defined using false acceptance rates on a logarithmic scale. We show that by stating Neyman-Pearson in this form, it reduces to sum rule fusion for ROC curves with logarithmic slope. We also introduce a one parameter model of biometric performance and use it to express Neyman-Pearson fusion as a weighted sum rule.
Complex-energy approach to sum rules within nuclear density functional theory
Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...
2015-04-27
The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.
Measures with locally finite support and spectrum.
Meyer, Yves F
2016-03-22
The goal of this paper is the construction of measures μ on R^n enjoying three conflicting but fortunately compatible properties: (i) μ is a sum of weighted Dirac masses on a locally finite set, (ii) the Fourier transform μ̂ of μ is also a sum of weighted Dirac masses on a locally finite set, and (iii) μ is not a generalized Dirac comb. We give surprisingly simple examples of such measures. These unexpected patterns strongly differ from quasicrystals, they provide us with unusual Poisson's formulas, and they might give us an unconventional insight into aperiodic order.
Exact sum rules for inhomogeneous strings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amore, Paolo, E-mail: paolo.amore@gmail.com
2013-11-15
We derive explicit expressions for the sum rules of the eigenvalues of inhomogeneous strings with arbitrary density and with different boundary conditions. We show that the sum rule of order N may be obtained in terms of a diagrammatic expansion, with (N−1)!/2 independent diagrams. These sum rules are used to derive upper and lower bounds to the energy of the fundamental mode of an inhomogeneous string; we also show that it is possible to improve these approximations by taking into account the asymptotic behavior of the spectrum and applying the Shanks transformation to the sequence of approximations obtained to the different orders. We discuss three applications of these results. Highlights: • We derive an explicit expression for the sum rules of an inhomogeneous string. • We obtain a diagrammatic representation for the sum rules of a given order. • We obtain precise bounds on the lowest eigenvalue of the string.
Canonical Drude Weight for Non-integrable Quantum Spin Chains
NASA Astrophysics Data System (ADS)
Mastropietro, Vieri; Porta, Marcello
2018-03-01
The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of Drude weight is directly related to Kubo formula of conductivity. However, the difficulty in the evaluation of such expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in the past years several universality results have been proven for such quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.
Nonsymbolic, Approximate Arithmetic in Children: Abstract Addition Prior to Instruction
ERIC Educational Resources Information Center
Barth, Hilary; Beckmann, Lacey; Spelke, Elizabeth S.
2008-01-01
Do children draw upon abstract representations of number when they perform approximate arithmetic operations? In this study, kindergarten children viewed animations suggesting addition of a sequence of sounds to an array of dots, and they compared the sum to a second dot array that differed from the sum by 1 of 3 ratios. Children performed this…
Nonlinear dynamics of the rock-paper-scissors game with mutations.
Toupo, Danielle F P; Strogatz, Steven H
2015-05-01
We analyze the replicator-mutator equations for the rock-paper-scissors game. Various graph-theoretic patterns of mutation are considered, ranging from a single unidirectional mutation pathway between two of the species to global bidirectional mutation among all the species. Our main result is that the coexistence state, in which all three species exist in equilibrium, can be destabilized by arbitrarily small mutation rates. After it loses stability, the coexistence state gives birth to a stable limit cycle solution created in a supercritical Hopf bifurcation. This attracting periodic solution exists for all the mutation patterns considered, and persists arbitrarily close to the limit of zero mutation rate and a zero-sum game.
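A minimal simulation of one of the mutation patterns (uniform bidirectional mutation) makes the limit cycle easy to see. The background fitness constant and integration settings are assumptions; the payoff matrix is the standard zero-sum rock-paper-scissors game.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rps_replicator_mutator(t, x, mu):
    """Replicator-mutator dynamics for rock-paper-scissors with uniform
    bidirectional mutation rate mu between all species."""
    A = np.array([[0., -1., 1.],
                  [1., 0., -1.],
                  [-1., 1., 0.]])                      # zero-sum RPS payoffs
    f = 1.0 + A @ x                                    # background fitness keeps f >= 0
    phi = x @ f                                        # population-average fitness
    Q = np.full((3, 3), mu) + (1 - 3 * mu) * np.eye(3) # row-stochastic mutation kernel
    return Q @ (x * f) - phi * x

sol = solve_ivp(rps_replicator_mutator, (0, 500), [0.5, 0.3, 0.2],
                args=(1e-3,), max_step=1.0)
print(sol.y[:, -1])   # for small mu > 0 the state orbits a limit cycle
```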
A Nonparametric Approach For Representing Interannual Dependence In Monthly Streamflow Sequences
NASA Astrophysics Data System (ADS)
Sharma, A.; Oneill, R.
The estimation of risks associated with water management plans requires generation of synthetic streamflow sequences. The mathematical algorithms used to generate these sequences at monthly time scales are found lacking in two main respects: an inability to preserve dependence attributes, particularly at large (seasonal to interannual) time lags; and a poor representation of observed distributional characteristics, in particular strong asymmetry or multimodality in the probability density function. Proposed here is an alternative that naturally incorporates both observed dependence and distributional attributes in the generated sequences. Use of a nonparametric framework provides an effective means for representing the observed probability distribution, while the use of a "variable kernel" ensures accurate modeling of streamflow data sets that contain a substantial number of zero flow values. A careful selection of prior flows imparts the appropriate short-term memory, while use of an "aggregate" flow variable allows representation of interannual dependence. The nonparametric simulation model is applied to monthly flows from the Beaver River near Beaver, Utah, USA, and the Burrendong dam inflows, New South Wales, Australia. Results indicate that while the use of traditional simulation approaches leads to an inaccurate representation of dependence at long (annual and interannual) time scales, the proposed model can simulate both short- and long-term dependence. As a result, the proposed model ensures a significantly improved representation of reservoir storage statistics, particularly for systems influenced by long droughts. It is important to note that the proposed method offers a simpler and better alternative to conventional disaggregation models as: (a) a separate annual flow series is not required, (b) stringent assumptions relating annual and monthly flows are not needed, and (c) the method does not require the specification of a "water year", instead ensuring that the sum of any sequence of flows lasting twelve months will result in the type of dependence that is observed in the historical annual flow series.
12 CFR 217.37 - Collateralized transactions.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... Notwithstanding paragraph (b)(2)(i) of this section: (i) A Board-regulated institution may assign a zero percent... the extent that the contract is collateralized by an exposure to a sovereign that qualifies for a zero percent risk weight under § 217.32. (iii) A Board-regulated institution may assign a zero percent risk...
Nurses' short-term prediction of violence in acute psychiatric intensive care.
Björkdahl, A; Olsson, D; Palmstierna, T
2006-03-01
To evaluate the short-term predictive capacity of the Brøset Violence Checklist (BVC) when used by nurses in a psychiatric intensive care unit. Seventy-three patients were assessed according to the BVC three times daily. Violent incidents were recorded with the Staff Observation Aggression Scale, revised version. An extended Cox proportional hazards model with multiple events and time-dependent covariates was estimated to evaluate how the highest BVC sum of the last 24 h and its separate items affect the risk for severe violence within the next 24 h. With a BVC sum of one or more, hazard for severe violence was six times higher than if the sum was zero. Four of the six separate items significantly increased the risk for severe violence with hazard ratios between 3.0 and 6.3. Risk for in-patient violence in a short-term perspective can to a high degree be predicted by nurses using the BVC.
Digital second-order phase-locked loop
NASA Technical Reports Server (NTRS)
Holes, J. K.; Carl, C.; Tegnelia, C. R. (Inventor)
1973-01-01
A digital second-order phase-locked loop is disclosed in which a counter driven by a stable clock pulse source is used to generate a reference waveform of the same frequency as an incoming waveform, and to sample the incoming waveform at zero-crossover points. The samples are converted to digital form and accumulated over M cycles, reversing the sign of every second sample. After every M cycles, the accumulated value of the samples is hard-limited to a value SGN = ±1 and multiplied by a value Δ1 equal to a number n1 of fractions of a cycle. An error signal is used to advance or retard the counter according to the sign of the sum by an amount equal to the sum.
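A behavioral sketch of one correction cycle of the loop described above (not the patented hardware): accumulate the zero-crossing samples over M cycles with every second sample sign-reversed, hard-limit the sum, and step the reference counter phase by n1 fractions of a cycle in the indicated direction. The fraction size is an assumed parameter.

```python
def pll_correction(samples, phase, n1_fraction=1.0 / 256):
    """One correction of the digital second-order loop: alternating-sign
    accumulation over M cycles, hard limiting to +/-1, then a counter
    advance or retard by n1 fractions of a cycle."""
    acc = sum(s if i % 2 == 0 else -s for i, s in enumerate(samples))
    sgn = 1 if acc >= 0 else -1          # hard limiter
    return (phase + sgn * n1_fraction) % 1.0
```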
Ceelen, Manon; van Weissenbruch, Mirjam M; Prein, Janneke; Smit, Judith J; Vermeiden, Jan P W; Spreeuwenberg, Marieke; van Leeuwen, Flora E; Delemarre-van de Waal, Henriette A
2009-11-01
Little is known about post-natal growth in IVF offspring and the effects of rates of early post-natal growth on blood pressure and body fat composition during childhood and adolescence. The follow-up study comprised 233 IVF children aged 8-18 years and 233 spontaneously conceived controls born to subfertile parents. Growth data from birth to 4 years of age, available for 392 children (n = 193 IVF, n = 199 control), were used to study early post-natal growth. Furthermore, early post-natal growth velocity (weight gain) was related to blood pressure and skinfold measurements at follow-up. We found significantly lower weight, height and BMI standard deviation scores (SDSs) at 3 months, and weight SDS at 6 months of age in IVF children compared with controls. Likewise, IVF children demonstrated a greater gain in weight SDS (P < 0.001), height SDS (P = 0.013) and BMI SDS (P = 0.029) during late infancy (3 months to 1 year) versus controls. Weight gain during early childhood (1-3 years) was related to blood pressure in IVF children (P = 0.014 systolic, 0.04 diastolic) but not in controls. Growth during late infancy was not related to skinfold thickness in IVF children, unlike controls (P = 0.002 peripheral sum, 0.003 total sum). Growth during early childhood was related to skinfold thickness in both IVF and controls (P = 0.005 and 0.01 peripheral sum and P = 0.003 and 0.005 total sum, respectively). Late infancy growth velocity of IVF children was significantly higher compared with controls. Nevertheless, early childhood growth instead of infancy growth seemed to predict cardiovascular risk factors in IVF children. Further research is needed to confirm these findings and to follow-up growth and development of IVF children into adulthood.
NASA Astrophysics Data System (ADS)
Kuijlaars, A. B. J.
2001-08-01
The asymptotic behavior of polynomials that are orthogonal with respect to a slowly decaying weight is very different from the asymptotic behavior of polynomials that are orthogonal with respect to a Freud-type weight. While the latter has been extensively studied, much less is known about the former. Following an earlier investigation into the zero behavior, we study here the asymptotics of the density of states in a unitary ensemble of random matrices with a slowly decaying weight. This measure is also naturally connected with the orthogonal polynomials. It is shown that, after suitable rescaling, the weak limit is the same as the weak limit of the rescaled zeros.
Influence of a Horizontal Approach on the Mechanical Output during Drop Jumps
ERIC Educational Resources Information Center
Ruan, Mianfang; Li, Li
2008-01-01
This study investigated the influence of a horizontal approach on mechanical output during drop jumps. Participants performed drop jumps from heights of 15, 30, 45, and 60 cm with zero, one, two, and three approach steps. The peak summed power during the push-off phase changed quadratically across heights (6.2 ± 0.3, 6.7 ±…
Fisk, A T; Stern, G A; Hobson, K A; Strachan, W J; Loewen, M D; Norstrom, R J
2001-01-01
Samples of Calanus hyperboreus, a herbivorous copepod, were collected (n = 20) between April and July 1998, and water samples (n = 6) were collected in May 1998, in the Northwater Polynya (NOW) to examine persistent organic pollutants (POPs) in a high Arctic marine zooplankton. Lipid content (dry weight) doubled, water content (r2 = 0.88) and delta15N (r2 = 0.54) significantly decreased, and delta13C significantly increased (r2 = 0.30) in the C. hyperboreus over the collection period, allowing an examination of the role of these variables in POP dynamics in this small pelagic zooplankton. The rank and concentrations of POP groups in C. hyperboreus over the entire sampling were sum of PCB (30.1 +/- 4.03 ng/g, dry weight) > sum of HCH (11.8 +/- 3.23) > sum of DDT (4.74 +/- 0.74), sum of CHLOR (4.44 +/- 1.0) > sum of CIBz (2.42 +/- 0.18), although these rankings varied considerably over the summer. The alpha- and gamma-HCH and lower chlorinated PCB congeners were the most common POPs in C. hyperboreus. The relationship between bioconcentration factor (BCF) and octanol-water partition coefficient (Kow) observed for the C. hyperboreus was linear and near 1:1 (slope = 0.72) for POPs with a log Kow between 3 and 6, but curvilinear when hydrophobic POPs (log Kow > 6) were included. Concentrations of sum of HCH, sum of CHLOR and sum of CIBz increased over the sampling period, but no change in sum of PCB or sum of DDT was observed. After removing the effects of time, the variables lipid content, water content, delta15N and delta13C did not describe POP concentrations in C. hyperboreus. These results suggest that hydrophobic POP (log Kow = 3.8-6.0) concentrations in zooplankton are likely to reflect water concentrations and that POPs do not biomagnify in C. hyperboreus or likely in other small, herbivorous zooplankton.
14 CFR 23.343 - Design fuel loads.
Code of Federal Regulations, 2012 CFR
2012-01-01
... zero fuel to the selected maximum fuel load. (b) If fuel is carried in the wings, the maximum allowable weight of the airplane without any fuel in the wing tank(s) must be established as “maximum zero wing... part and— (1) The structure must be designed to withstand a condition of zero fuel in the wing at limit...
14 CFR 23.343 - Design fuel loads.
Code of Federal Regulations, 2014 CFR
2014-01-01
... zero fuel to the selected maximum fuel load. (b) If fuel is carried in the wings, the maximum allowable weight of the airplane without any fuel in the wing tank(s) must be established as “maximum zero wing... part and— (1) The structure must be designed to withstand a condition of zero fuel in the wing at limit...
14 CFR 23.343 - Design fuel loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... zero fuel to the selected maximum fuel load. (b) If fuel is carried in the wings, the maximum allowable weight of the airplane without any fuel in the wing tank(s) must be established as “maximum zero wing... part and— (1) The structure must be designed to withstand a condition of zero fuel in the wing at limit...
14 CFR 23.343 - Design fuel loads.
Code of Federal Regulations, 2013 CFR
2013-01-01
... zero fuel to the selected maximum fuel load. (b) If fuel is carried in the wings, the maximum allowable weight of the airplane without any fuel in the wing tank(s) must be established as “maximum zero wing... part and— (1) The structure must be designed to withstand a condition of zero fuel in the wing at limit...
Zhang, Huisheng; Zhang, Ying; Xu, Dongpo; Liu, Xiaodong
2015-06-01
It has been shown that, by adding a chaotic sequence to the weight update during the training of neural networks, the chaos injection-based gradient method (CIBGM) is superior to the standard backpropagation algorithm. This paper presents the theoretical convergence analysis of CIBGM for training feedforward neural networks. We consider both the case of batch learning as well as the case of online learning. Under mild conditions, we prove the weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, the strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
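A sketch of the injection scheme, under assumptions: the logistic map is a common chaos generator for this purpose, and a decaying injection strength is consistent with the convergence requirement that the perturbation vanish; the exact schedule used in the paper may differ.

```python
def logistic_chaos(x0=0.37):
    """Logistic-map generator, a common source of chaotic sequences."""
    x = x0
    while True:
        x = 4.0 * x * (1.0 - x)
        yield x - 0.5                    # centered chaotic perturbation

def train_cibgm(w, grad_fn, eta=0.1, c0=0.05, steps=1000):
    """Chaos injection-based gradient descent (sketch): each update adds a
    decaying chaotic perturbation to the plain gradient step."""
    chaos = logistic_chaos()
    for t in range(steps):
        w = w - eta * grad_fn(w) + (c0 / (1.0 + t)) * next(chaos)
    return w
```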
Explaining the harmonic sequence paradox.
Schmidt, Ulrich; Zimper, Alexander
2012-05-01
According to the harmonic sequence paradox, an expected utility decision maker's willingness to pay for a gamble whose expected payoffs evolve according to the harmonic series is finite if and only if his marginal utility of additional income becomes zero for rather low payoff levels. Since the assumption of zero marginal utility is implausible for finite payoff levels, expected utility theory - as well as its standard generalizations such as cumulative prospect theory - is apparently unable to explain a finite willingness to pay. This paper first presents an experimental study of the harmonic sequence paradox. Additionally, it demonstrates that the theoretical argument of the harmonic sequence paradox only applies to time-patient decision makers, whereas the paradox is easily avoided if time-impatience is introduced. ©2011 The British Psychological Society.
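The role of time-impatience can be made concrete with one line of algebra. Assuming geometric discounting with factor β < 1 over the sequence of payoff dates (an illustrative formalization, not the paper's exact model):

```latex
\sum_{k=1}^{\infty} \frac{1}{k} \;=\; \infty ,
\qquad\text{whereas}\qquad
\sum_{k=1}^{\infty} \beta^{k}\,\frac{1}{k} \;=\; -\ln(1-\beta) \;<\; \infty
\quad (0 < \beta < 1).
```

So even a risk-neutral but impatient decision maker assigns the harmonic gamble a finite value, without appealing to vanishing marginal utility.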
Asymptotics of quantum weighted Hurwitz numbers
NASA Astrophysics Data System (ADS)
Harnad, J.; Ortmann, Janosch
2018-06-01
This work concerns both the semiclassical and zero temperature asymptotics of quantum weighted double Hurwitz numbers. The partition function for quantum weighted double Hurwitz numbers can be interpreted in terms of the energy distribution of a quantum Bose gas with vanishing fugacity. We compute the leading semiclassical term of the partition function for three versions of the quantum weighted Hurwitz numbers, as well as lower order semiclassical corrections. The classical limit is shown to reproduce the simple single and double Hurwitz numbers studied by Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74). The KP-Toda τ-function that serves as generating function for the quantum Hurwitz numbers is shown to have the τ-function of Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74) as its leading term in the classical limit, and, with suitable scaling, the same holds for the partition function, the weights and expectations of Hurwitz numbers. We also compute the zero temperature limit of the partition function and quantum weighted Hurwitz numbers. The KP or Toda τ-function serving as generating function for the quantum Hurwitz numbers are shown to give the one for Belyi curves in the zero temperature limit and, with suitable scaling, the same holds true for the partition function, the weights and the expectations of Hurwitz numbers.
Zero entropy continuous interval maps and MMLS-MMA property
NASA Astrophysics Data System (ADS)
Jiang, Yunping
2018-06-01
We prove that the flow generated by any continuous interval map with zero topological entropy is minimally mean-attractable and minimally mean-L-stable. One of the consequences is that any oscillating sequence is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy. In particular, the Möbius function is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy (Sarnak’s conjecture for continuous interval maps). Another consequence is a non-trivial example of a flow having discrete spectrum. We also define a log-uniform oscillating sequence and show a result in ergodic theory for comparison. This material is based upon work supported by the National Science Foundation. It is also partially supported by a collaboration grant from the Simons Foundation (grant number 523341) and PSC-CUNY awards and a grant from NSFC (grant number 11571122).
Tuan, Pham Viet; Koo, Insoo
2017-10-06
In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems where a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize weighted sum harvested energy for SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For the perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that SDR always has a rank-1 solution, and is indeed tight. For the imperfect CSI with bounded channel vector errors, the upper bound of weighted sum harvested energy (WSHE) is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance as compared to the other baseline schemes.
A new enhanced index tracking model in portfolio optimization with sum weighted approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng
2017-04-01
Index tracking is a portfolio management approach which aims to construct the optimal portfolio to achieve a return similar to the benchmark index return at minimum tracking error without purchasing all the stocks that make up the index. Enhanced index tracking is an improved portfolio management approach which aims to generate a higher portfolio return than the benchmark index return besides minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate a higher mean return than the benchmark index at minimum tracking error. Besides that, the proposed model is able to outperform the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach which yields a 67% improvement in portfolio mean return as compared to the existing model.
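A minimal sketch of the kind of weighted-sum objective described above, assuming a returns matrix R for the candidate stocks and index returns y; the trade-off parameter lam, the RMS tracking-error term, and the SLSQP solver choice are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

def enhanced_tracking(R, y, lam=0.5):
    """R: (T, n) stock returns, y: (T,) index returns (hypothetical inputs).
    Minimize a weighted sum of tracking error and negative excess return."""
    T, n = R.shape
    def objective(w):
        excess = R @ w - y
        return lam * np.sqrt(np.mean(excess ** 2)) - (1 - lam) * np.mean(excess)
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},)  # fully invested
    bounds = [(0.0, 1.0)] * n                                 # long-only
    res = minimize(objective, np.ones(n) / n, method='SLSQP',
                   bounds=bounds, constraints=cons)
    return res.x
```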
Novick, Richard J; Fox, Stephanie A; Stitt, Larry W; Forbes, Thomas L; Steiner, Stefan
2006-08-01
We previously applied non-risk-adjusted cumulative sum methods to analyze coronary bypass outcomes. The objective of this study was to assess the incremental advantage of risk-adjusted cumulative sum methods in this setting. Prospective data were collected in 793 consecutive patients who underwent coronary bypass grafting performed by a single surgeon during a period of 5 years. The composite occurrence of an "adverse outcome" included mortality or any of 10 major complications. An institutional logistic regression model for adverse outcome was developed by using 2608 contemporaneous patients undergoing coronary bypass. The predicted risk of adverse outcome in each of the surgeon's 793 patients was then calculated. A risk-adjusted cumulative sum curve was then generated after specifying control limits and odds ratio. This risk-adjusted curve was compared with the non-risk-adjusted cumulative sum curve, and the clinical significance of this difference was assessed. The surgeon's adverse outcome rate was 96 of 793 (12.1%) versus 270 of 1815 (14.9%) for all the other institution's surgeons combined (P = .06). The non-risk-adjusted curve reached below the lower control limit, signifying excellent outcomes between cases 164 and 313, 323 and 407, and 667 and 793, but transgressed the upper limit between cases 461 and 478. The risk-adjusted cumulative sum curve never transgressed the upper control limit, signifying that cases preceding and including 461 to 478 were at an increased predicted risk. Furthermore, if the risk-adjusted cumulative sum curve was reset to zero whenever a control limit was reached, it still signaled a decrease in adverse outcome at 166, 653, and 782 cases. Risk-adjusted cumulative sum techniques provide incremental advantages over non-risk-adjusted methods by not signaling a decrement in performance when preoperative patient risk is high.
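For readers unfamiliar with the mechanics, here is a sketch of a risk-adjusted CUSUM in the spirit of the monitoring scheme described above: each patient contributes a log-likelihood-ratio score against a specified odds-ratio shift, weighted by the predicted risk, and the curve resets after a signal. The control limit, odds ratio, and reset behavior shown are illustrative assumptions, not the study's exact settings.

```python
import math

def risk_adjusted_cusum(outcomes, predicted_risks, odds_ratio=2.0, limit=4.5):
    """outcomes: 1 = adverse outcome, 0 = not; predicted_risks: per-patient
    predicted probability of an adverse outcome from the regression model."""
    s, curve, signals = 0.0, [], []
    for i, (y, p) in enumerate(zip(outcomes, predicted_risks)):
        denom = 1.0 - p + odds_ratio * p
        # log-likelihood ratio for detecting a shift of the given odds ratio
        w = math.log(odds_ratio / denom) if y == 1 else math.log(1.0 / denom)
        s = max(0.0, s + w)
        if s > limit:
            signals.append(i)
            s = 0.0          # reset to zero after a signal
        curve.append(s)
    return curve, signals
```

Note how high-risk patients (large p) contribute only a small positive score when an adverse outcome occurs, which is exactly why the risk-adjusted curve above does not signal on the high-risk case cluster.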
Nonlinear system guidance in the presence of transmission zero dynamics
NASA Technical Reports Server (NTRS)
Meyer, G.; Hunt, L. R.; Su, R.
1995-01-01
An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.
NASA Technical Reports Server (NTRS)
Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.
1959-01-01
A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag are evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers are comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method requires only up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution, and less time for a larger number of bodies.
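The decomposition the study recommends, total drag as the sum of each component's own zero-lift wave drag plus pairwise interference terms, can be written compactly; the container types and numbers below are our own illustrative choice.

```python
from itertools import combinations

def total_wave_drag(component_drag, interference_drag):
    """component_drag: per-component zero-lift wave drags (a list);
    interference_drag: dict keyed by component-index pairs (hypothetical)."""
    total = sum(component_drag)
    total += sum(interference_drag[pair]
                 for pair in combinations(range(len(component_drag)), 2))
    return total

print(total_wave_drag([1.2, 0.8], {(0, 1): 0.15}))  # 2.15
```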
Statistical mechanics of the international trade network.
Fronczak, Agata; Fronczak, Piotr
2012-05-01
Analyzing real data on international trade covering the time interval 1950-2000, we show that in each year over the analyzed period the network is a typical representative of the ensemble of maximally random weighted networks, whose directed connections (bilateral trade volumes) are only characterized by the product of the trading countries' GDPs. It means that time evolution of this network may be considered as a continuous sequence of equilibrium states, i.e., a quasistatic process. This, in turn, allows one to apply the linear response theory to make (and also verify) simple predictions about the network. In particular, we show that bilateral trade fulfills a fluctuation-response theorem, which states that the average relative change in imports (exports) between two countries is a sum of the relative changes in their GDPs. Yearly changes in trade volumes prove that the theorem is valid.
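A toy numerical check of the fluctuation-response statement, under the stated model that a bilateral trade volume is proportional to the product of the two countries' GDPs; the numbers are hypothetical.

```python
# T_ij ∝ G_i * G_j implies dT/T ≈ dG_i/G_i + dG_j/G_j for small changes
g1, g2 = 100.0, 250.0           # hypothetical GDPs
dg1, dg2 = 0.02, -0.01          # relative GDP changes
t_old = g1 * g2
t_new = g1 * (1 + dg1) * g2 * (1 + dg2)
print((t_new - t_old) / t_old)  # ≈ 0.0098
print(dg1 + dg2)                # 0.01, the sum of the relative changes
```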
A Biologically-Inspired Neural Network Architecture for Image Processing
1990-12-01
was organized into twelve groups of 8-by-8 node arrays. Weights were constrained for each group of nodes, with each node "viewing" a 5-by-5 pixel... single window */ sum = 0; for(j=0; j<64; j++){ sum = sum + nbor[i][j]*rfarray[j]; } /* Finished calculating one block, one position (first layer
Approximation algorithm for the problem of partitioning a sequence into clusters
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Mikhailova, L. V.; Khamidullin, S. A.; Khandeev, V. I.
2017-08-01
We consider the problem of partitioning a finite sequence of Euclidean points into a given number of clusters (subsequences) using the criterion of the minimal sum (over all clusters) of intracluster sums of squared distances from the elements of the clusters to their centers. It is assumed that the center of one of the desired clusters is at the origin, while the center of each of the other clusters is unknown and determined as the mean value over all elements in this cluster. Additionally, the partition obeys two structural constraints on the indices of sequence elements contained in the clusters with unknown centers: (1) the concatenation of the indices of elements in these clusters is an increasing sequence, and (2) the difference between an index and the preceding one is bounded above and below by prescribed constants. It is shown that this problem is strongly NP-hard. A 2-approximation algorithm is constructed that is polynomial-time for a fixed number of clusters.
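A direct transcription of the partition criterion just described (one cluster centered at the origin, the rest at their means); the function only evaluates the objective for a given partition and is a sketch, not the approximation algorithm itself.

```python
import numpy as np

def partition_cost(points, clusters):
    """points: (n, d) array; clusters: list of index arrays, with
    clusters[0] the origin-centered cluster (illustrative encoding)."""
    cost = 0.0
    for k, idx in enumerate(clusters):
        pts = points[idx]
        center = np.zeros(points.shape[1]) if k == 0 else pts.mean(axis=0)
        cost += ((pts - center) ** 2).sum()  # within-cluster squared distances
    return cost
```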
40 CFR 86.094-2 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... methane. Non-Methane Hydrocarbon Equivalent means the sum of the carbon mass emissions of non-oxygenated... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Loaded Vehicle Weight means the numerical average of vehicle curb weight and GVWR. Bi-directional control...
[Hydrostatic weighing, skinfold thickness, body mass index relationships in high school girls].
Tahara, Y; Yukawa, K; Tsunawake, N; Saeki, S; Nishiyama, K; Urata, H; Katsuno, K; Fukuyama, Y; Michimukou, R; Uekata, M
1995-12-01
A study was conducted to evaluate body composition by hydrostatic weighing, skinfold thickness, and body mass index (BMI) in 102 senior high school girls, aged 15 to 18, in Nagasaki City. Body density, measured by the underwater weighing method, was used to determine fat weight (Fat) and lean body mass (LBM, or fat-free weight: FFW) using the formulas of Brozek et al. The results were as follows: 1. Mean values of body density were 1.04428 in the first-grade girls, 1.04182 in the second grade, and 1.04185 in the third grade. 2. Mean values of percentage body fat (%Fat) were 23.5% in the first grade, 24.5% in the second and 24.5% in the third. 3. Percentage body fat (%Fat), lean body mass (LBM) and LBM/Height did not differ significantly with advance of grade from the first to the third. 4. The correlation coefficients between percent body fat and the sum of two, three, and seven skinfold thicknesses were 0.78, 0.79, and 0.80, respectively, all statistically significant (p < 0.001). 5. The correlation coefficients between BMI and the sum of two, three, and seven skinfold thicknesses were 0.74, 0.74, and 0.74, respectively, all statistically significant (p < 0.001). 6. Mean values of BMI, Rohrer index and waist-hip ratio (WHR) in all subjects (n = 102) were 20.3, 128.2 and 0.72, respectively.
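As a plausible reading of "the formulas of Brozek et al.", the widely cited Brozek density-to-fat conversion reproduces the reported means; treat the exact constants here as an assumption.

```python
# Brozek conversion from body density D (g/cm^3) to percentage body fat
def percent_fat_brozek(density):
    return (4.570 / density - 4.142) * 100.0

print(percent_fat_brozek(1.04428))  # ~23.4%, close to the first-grade mean
```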
Heterodimer Binding Scaffolds Recognition via the Analysis of Kinetically Hot Residues
Perišić, Ognjen
2018-01-01
Physical interactions between proteins are often difficult to decipher. The aim of this paper is to present an algorithm that is designed to recognize binding patches and supporting structural scaffolds of interacting heterodimer proteins using the Gaussian Network Model (GNM). The recognition is based on the (self) adjustable identification of kinetically hot residues and their connection to possible binding scaffolds. The kinetically hot residues are residues with the lowest entropy, i.e., the highest contribution to the weighted sum of the fastest modes per chain extracted via GNM. The algorithm adjusts the number of fast modes in the GNM's weighted sum calculation using the ratio of predicted and expected numbers of target residues (contact and the neighboring first-layer residues). This approach produces very good results when applied to dimers with high protein sequence length ratios. The protocol's ability to recognize near-native decoys was compared to the ability of the residue-level statistical potential of Lu and Skolnick using the Sternberg and Vakser decoy dimer sets. The statistical potential produced better overall results, but in a number of cases its predictive ability was comparable, or even inferior, to that of the adjustable GNM approach. The results presented in this paper suggest that in heterodimers at least one protein has an interacting scaffold determined by the immovable, kinetically hot residues. In many cases, interacting proteins (especially when of noticeably different sizes) either behave as a rigid lock and key or, presumably, exhibit the opposite dynamic behavior: while the binding surface of one protein is rigid and stable, its partner's interacting scaffold is more flexible and adaptable. PMID:29547506
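A rough sketch of how kinetically hot residues can be extracted from GNM fast modes: build the Kirchhoff matrix from a contact cutoff, eigendecompose, and rank residues by a weighted sum of the squared fastest-mode shapes. The cutoff, number of modes, eigenvalue weighting, and quantile threshold are all illustrative assumptions, not the paper's self-adjusting protocol.

```python
import numpy as np

def gnm_hot_residues(coords, cutoff=7.3, n_fast=5, frac=0.2):
    """coords: (N, 3) array of C-alpha coordinates (hypothetical input)."""
    # Kirchhoff (connectivity) matrix from the residue contact map
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
    # eigenvalues ascending, so the largest ones index the fastest modes
    vals, vecs = np.linalg.eigh(kirchhoff)
    fast = vecs[:, -n_fast:]
    weights = vals[-n_fast:] / vals[-n_fast:].sum()   # assumed weighting
    mobility = (fast ** 2) @ weights   # weighted sum over the fastest modes
    threshold = np.quantile(mobility, 1.0 - frac)
    return np.where(mobility >= threshold)[0]
```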
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as a first attempt, the extended Kalman filter (EKF) provides sufficient solutions for handling issues arising from nonlinear and non-Gaussian estimation problems, but these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf, but the filter suffers at the update step in the selection of the individual component weights. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they will theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decreasing the complexity of the system to one linear and one nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows for smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined.
Next, any other information that depends on the supporting statistic is found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7]. In the following sections a modified filtering approach is presented, based on a special state-space model within nonlinear systems, to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained, and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
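For orientation, the basic object all of this manipulates is simply a weighted sum of Gaussian densities; a one-dimensional evaluation sketch (all inputs hypothetical):

```python
import numpy as np

def gaussian_sum_pdf(x, means, variances, weights):
    """Evaluate a weighted sum of 1-D Gaussian densities at points x
    (the pdf representation used by Gaussian sum filters; weights sum to 1)."""
    comps = np.exp(-0.5 * (x[:, None] - means) ** 2 / variances) \
            / np.sqrt(2 * np.pi * variances)
    return comps @ weights
```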
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on the use of CLS and WLS in quantitative analysis. The results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS was better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proved to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene mixture gas spectra, as measured using FT-IR spectrometry and processed with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In the methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In the methane/toluene mixture gas analysis, a modification of the SWLS is presented to tackle the bias error from other components. The SWLS without modification gives the lowest SEP in all cases, but not the lowest bias and RSS. The modification of SWLS reduced the bias and gave a lower RSS than CLS, especially for small components.
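A compact sketch of the selection idea: channels below the absorbance threshold keep unit (CLS-like) weights, channels above it get inverse-variance (WLS) weights, and a single weighted least-squares solve follows. Solving once with mixed weights is our simplification of the per-channel CLS/WLS split; all variable names are illustrative.

```python
import numpy as np

def swls_fit(A, y, noise_var, threshold):
    """A: (m, n) pure-component reference spectra (columns); y: measured
    spectrum; noise_var: per-channel noise variance estimate; threshold:
    the absorbance cutoff playing the role of the OTV (all hypothetical)."""
    w = np.ones_like(y)                   # CLS weights on low-absorbance channels
    high = y > threshold
    w[high] = 1.0 / noise_var[high]       # WLS weights where noise dominates
    # weighted normal equations: minimize sum_i w_i (y_i - (A c)_i)^2
    Aw = A * w[:, None]
    return np.linalg.solve(A.T @ Aw, Aw.T @ y)
```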
Quantum Hurwitz numbers and Macdonald polynomials
NASA Astrophysics Data System (ADS)
Harnad, J.
2016-11-01
Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.
Pipeline active filter utilizing a booth type multiplier
NASA Technical Reports Server (NTRS)
Nathan, Robert (Inventor)
1987-01-01
Multiplier units combining a modified Booth decoder with a carry-save adder/full adder stage are used to implement a pipeline active filter in which pixel data are processed sequentially; each pixel need only be accessed once and is multiplied by a predetermined number of weights simultaneously, one multiplier unit for each weight. Each multiplier unit uses only one row of carry-save adders; the results are shifted to less significant multiplier positions, and one row of full adders adds the carry to the sum in order to provide the correct binary number for the product Wp. The full adder is also used to add this product Wp to the sum of products ΣWp from the preceding multiply units. If m×m multiplier units are pipelined, the system is capable of processing a kernel array of m×m weighting factors.
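A behavioral sketch of the weighted-sum pipeline (not the Booth hardware itself): each pixel enters once, is multiplied by all weights in parallel, and the products accumulate into a moving set of partial sums, exactly the FIR structure the invention implements. The encoding below is our own illustration.

```python
def pipeline_fir(pixels, weights):
    """Each pixel is read once and multiplied by every weight; stage k adds
    its product to the partial sum handed down from the previous stage."""
    m = len(weights)
    partial = [0] * m                    # partial sums in flight
    out = []
    for p in pixels:
        partial = [s + w * p for s, w in zip(partial, weights)]
        out.append(partial.pop(0))       # oldest partial sum is complete
        partial.append(0)
    return out

print(pipeline_fir([1, 2, 3, 4], [2, 1]))  # [2, 5, 8, 11] after the start-up
```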
Neutrino Masses, Cosmological Bound and Four Zero Yukawa Textures
NASA Astrophysics Data System (ADS)
Adhikary, Biswajit; Ghosal, Ambar; Roy, Probir
Four-zero neutrino Yukawa textures in a specified weak basis, combined with μτ symmetry and type-I seesaw, yield a highly constrained and predictive scheme. Two alternative viable 3×3 light neutrino Majorana mass matrices m_ν^A/m_ν^B result, with inverted/normal mass ordering. Neutrino masses, Majorana in character and predicted within definite ranges with laboratory and cosmological inputs, will have their sum probed cosmologically. The rate for 0νββ decay, though generally below the reach of planned experiments, could approach it in some parameter regions. Departure from μτ symmetry due to RG evolution from a high scale, and the consequent CP violation, with a Jarlskog invariant whose magnitude could almost reach 6×10⁻³, are explored.
Spectral properties of the massless relativistic quartic oscillator
NASA Astrophysics Data System (ADS)
Durugo, Samuel O.; Lőrinczi, József
2018-03-01
An explicit solution of the spectral problem of the non-local Schrödinger operator obtained as the sum of the square root of the Laplacian and a quartic potential in one dimension is presented. The eigenvalues are obtained as zeroes of special functions related to the fourth-order Airy function, and closed formulae for the Fourier transform of the eigenfunctions are derived. These representations allow one to derive further spectral properties such as estimates of spectral gaps, the heat trace and the asymptotic distribution of eigenvalues, as well as a detailed analysis of the eigenfunctions. A subtle spectral effect is observed, which manifests itself in an exponentially tight approximation of the spectrum by the zeroes of the dominating term in the Fourier representation of the eigenfunctions and its derivative.
A Model for Comparing Game Theory and Artificial Intelligence Decision Making Processes
1989-12-01
IV. Revisions to PALANTIR Program... 4.1 Rewritten in the C Programming Language... The PALANTIR, created by Capt Robert Palmer, had two uses: to train personnel on the use of satellites and to provide insight into the movements...zero-sum game theory approach to reach decisions (11:23). This approach is discussed further in Chapter 2. Although PALANTIR used game theory players
Network Support for Group Coordination
2000-01-01
telecommuting and ubiquitous computing [40], the advent of networked multimedia, and less expensive technology have shifted telecollaboration into... participants A and B, the payoff structure for choosing two actions i and j is P = A_ij + B_ij. If P = 0, then the interaction is called a zero-sum game, and
Decision and Game Theory for Security
NASA Astrophysics Data System (ADS)
Alpcan, Tansu; Buttyán, Levente; Baras, John S.
Attack--defense trees are used to describe security weaknesses of a system and possible countermeasures. In this paper, the connection between attack--defense trees and game theory is made explicit. We show that attack--defense trees and binary zero-sum two-player extensive form games have equivalent expressive power when considering satisfiability, in the sense that they can be converted into each other while preserving their outcome and their internal structure.
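A minimal evaluator for a toy attack-defense tree, viewed through the satisfiability lens the abstract mentions: the attacker tries to make the root succeed, the defender to make it fail, and the tree's value is the backward-induction outcome of the corresponding binary zero-sum game. The tuple encoding is our own illustration, not the paper's formalism.

```python
def value(node):
    kind = node[0]
    if kind == 'leaf':        # ('leaf', attacker_succeeds)
        return node[1]
    if kind == 'and':         # the attack needs all subgoals to succeed
        return all(value(c) for c in node[1])
    if kind == 'or':          # any successful branch suffices
        return any(value(c) for c in node[1])
    if kind == 'counter':     # ('counter', attack, defense): defense negates
        return value(node[1]) and not value(node[2])

tree = ('or', [('and', [('leaf', True), ('leaf', True)]),
               ('counter', ('leaf', True), ('leaf', True))])
print(value(tree))  # True: the left AND-branch succeeds uncountered
```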
Dynamic Asset Allocation Approaches for Counter-Piracy Operations
2012-07-01
problem, has attracted much interest due to an increase in the number of pirate activities in recent years. Marsh [26] provided a game theoretic...model, where one interdiction asset and one surveillance asset are utilized for a counter-piracy mission. Due to the two-person zero sum game structure...that policy using online learning and simulation. The attractive aspects of rollout algorithms are its simplicity, broad applicability, and
Colliding droplets: A short film presentation
NASA Astrophysics Data System (ADS)
Hendricks, C. D.
1981-12-01
A series of experiments were performed in which liquid droplets were caused to collide. Impact velocities to several meters per second and droplet diameters up to 600 micrometers were used. The impact parameters in the collisions vary from zero to greater than the sum of the droplet radii. Photographs of the collisions were taken with a high speed framing camera in order to study the impacts and subsequent behavior of the droplets.
Supertranslations and Superrotations at the Black Hole Horizon.
Donnay, Laura; Giribet, Gaston; González, Hernán A; Pino, Miguel
2016-03-04
We show that the asymptotic symmetries close to nonextremal black hole horizons are generated by an extension of supertranslations. This group is generated by a semidirect sum of Virasoro and Abelian currents. The charges associated with the asymptotic Killing symmetries satisfy the same algebra. When considering the special case of a stationary black hole, the zero mode charges correspond to the angular momentum and the entropy at the horizon.
Stochastic Game Approach to Guidance Design
1988-09-09
maneuverable aircraft, which can also employ electronic countermeasures, is formulated as an imperfect-information zero-sum pursuit-evasion game played...on Differential Game Applications [36]. However, some other examples, which included "electronic jinking", indicated that in an ECM environment a mixed...radius, b) missile/target maneuver ratio, c) nonlinear maneuver similarity parameter (a_E T²_max), d) normalized end-game duration, e) initial end-game
Ultrasonic Imaging in Solids Using Wave Mode Beamforming.
di Scalea, Francesco Lanza; Sternini, Simone; Nguyen, Thompson Vu
2017-03-01
This paper discusses some improvements to ultrasonic synthetic imaging in solids with primary applications to nondestructive testing of materials and structures. Specifically, the study proposes new adaptive weights applied to the beamforming array that are based on the physics of the propagating waves, specifically the displacement structure of the propagating longitudinal (L) mode and shear (S) mode that are naturally coexisting in a solid. The wave mode structures can be combined with the wave geometrical spreading to better filter the array (in a matched filter approach) and improve its focusing ability compared to static array weights. This paper also proposes compounding, or summing, images obtained from the different wave modes to further improve the array gain without increasing its physical aperture. The wave mode compounding can be performed either incoherently or coherently, in analogy with compounding multiple frequencies or multiple excitations. Numerical simulations and experimental testing demonstrate the potential improvements obtainable by the wave structure adaptive weights compared to either static weights in conventional delay-and-sum focusing, or adaptive weights based on geometrical spreading alone in minimum-variance distortionless response focusing.
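A minimal delay-and-sum focusing kernel for one image point, to make the role of the (possibly wave-structure-adaptive) element weights concrete; the input shapes and the linear-interpolation resampling are assumptions of the sketch, not the paper's beamformer.

```python
import numpy as np

def delay_and_sum(signals, delays, weights, fs):
    """signals: (n_elem, n_t) element traces; delays: per-element round-trip
    focusing delays in seconds; weights: apodization (e.g., static, or
    adapted to the L/S mode structure); fs: sampling rate in Hz."""
    n_elem, n_t = signals.shape
    t = np.arange(n_t) / fs
    out = 0.0
    for e in range(n_elem):
        # resample each trace at its focusing delay and weight it
        out += weights[e] * np.interp(delays[e], t, signals[e])
    return out
```

Compounding images from the L and S modes then amounts to running this with each mode's delays/weights and summing the resulting images, either coherently (before envelope detection) or incoherently (after).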
Research on Parallel Three Phase PWM Converters base on RTDS
NASA Astrophysics Data System (ADS)
Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun
2018-01-01
Parallel operation of converters can increase the capacity of a system, but it may lead to a potential zero-sequence circulating current, so suppressing this circulating current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the parallel converter system in real time and to study circulating current suppression. The equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) were established and analyzed, and a strategy using variable zero-vector control was then proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were implemented; the results prove that the proposed control strategy is feasible and effective.
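The quantity being suppressed is just the zero-sequence component of the three phase currents; a tiny check that balanced currents carry (numerically) no zero-sequence part:

```python
import numpy as np

def zero_sequence(ia, ib, ic):
    """Zero-sequence component: the part of the phase currents that can
    circulate between paralleled converters instead of summing to zero."""
    return (ia + ib + ic) / 3.0

theta = np.linspace(0, 2 * np.pi, 1000)
i0 = zero_sequence(np.sin(theta),
                   np.sin(theta - 2 * np.pi / 3),
                   np.sin(theta + 2 * np.pi / 3))
print(np.max(np.abs(i0)))   # ~1e-16 for balanced currents
```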
A novel beamformer design method for medical ultrasound. Part I: Theory.
Ranganathan, Karthik; Walker, William F
2003-01-01
The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
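If the psf is modeled as linear in the aperture weightings, psf = H w, then minimizing the sum squared error against a goal psf is an ordinary least-squares problem; a sketch, with the system matrix H and psf_goal assumed precomputed for the geometry (they are not given in the abstract).

```python
import numpy as np

def msse_weights(H, psf_goal):
    """Minimum sum squared error aperture design: solve
    min_w ||H w - psf_goal||^2 by least squares."""
    w, *_ = np.linalg.lstsq(H, psf_goal, rcond=None)
    return w
```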
Design of an automatic weight scale for an isolette
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Griffin, W.
1974-01-01
The design of an infant weight scale is reported that fits into an isolette without disturbing its controlled atmosphere. The scale platform uses strain gauges to electronically measure the deflections of cantilever beams positioned at its four corners. The weight of the infant is proportional to the sum of the output voltages produced by the gauges on each beam of the scale.

Soh, Nerissa L; Touyz, Stephen; Dobbins, Timothy A; Clarke, Simon; Kohn, Michael R; Lee, Ee Lian; Leow, Vincent; Ung, Ken E K; Walter, Garry
2009-01-01
To investigate the relationship between skinfold thickness and body mass index (BMI) in North European Caucasian and East Asian young women with and without anorexia nervosa (AN) in two countries. Height, weight and skinfold thicknesses were assessed in 137 young women with and without AN, in Australia and Singapore. The relationship between BMI and the sum of triceps, biceps, subscapular and iliac crest skinfolds was analysed with clinical status, ethnicity, age and country of residence as covariates. For the same BMI, women with AN had significantly smaller sums of skinfolds than women without AN. East Asian women both with and without AN had significantly greater skinfold sums than their North European Caucasian counterparts after adjusting for BMI. Lower BMI goals may be appropriate when managing AN patients of East Asian ancestry and the weight for height diagnostic criterion should be reconsidered for this group.
Remote sensing of earth terrain
NASA Technical Reports Server (NTRS)
Kong, J. A.
1988-01-01
Two monographs and 85 journal and conference papers on remote sensing of earth terrain have been published, sponsored by NASA Contract NAG5-270. A multivariate K-distribution is proposed to model the statistics of fully polarimetric data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with a parameter alpha and mean N̄. Subsequently, an n-dimensional K-distribution, with either zero or non-zero mean, is developed in the limit of infinite N̄ or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector. The above model is well supported by experimental data provided by MIT Lincoln Laboratory and the Jet Propulsion Laboratory in the form of polarimetric measurements.
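A simulation sketch of the compound model just described: draw N from a negative binomial with shape alpha and mean N̄, then sum N zero-mean Gaussian vectors with the given covariance. The mapping from (alpha, mean) to numpy's parameterization is our own.

```python
import numpy as np

def k_distributed_samples(cov, alpha, n_mean, size, rng=None):
    """Sums of N zero-mean Gaussian vectors with covariance `cov`, where
    N ~ negative binomial with shape alpha and mean n_mean."""
    rng = np.random.default_rng() if rng is None else rng
    p = alpha / (alpha + n_mean)     # success prob. giving the desired mean
    n_draws = rng.negative_binomial(alpha, p, size=size)
    dim = cov.shape[0]
    out = np.zeros((size, dim))
    for i, n in enumerate(n_draws):
        if n > 0:
            out[i] = rng.multivariate_normal(np.zeros(dim), cov, size=n).sum(axis=0)
    return out
```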
Monyeki, Kotsedi; Kemper, Han; Mogale, Alfred; Hay, Leon; Sekgala, Machoene; Mashiane, Tshephang; Monyeki, Suzan; Sebati, Betty
2017-08-29
The aim of this cross-sectional study was to investigate the association between birth weight, underweight, and blood pressure (BP) among Ellisras rural children aged between 5 and 15 years. Data were collected from 528 respondents who participated in the Ellisras Longitudinal Study (ELS) and had their birth weight recorded on their health clinic card. Standard procedures were used to take the anthropometric measurements and BP. Linear regression was used to assess BP, underweight variables, and birth weight. Logistic regression was used to assess the association of hypertension risks, low birth weight, and underweight. The association between birth weight and BP was not statistically significant. There was a significant (p < 0.05) association between mean BP and the sum of four skinfolds (β = 0.26, 95% CI 0.15-0.23), even after adjusting for age (β = 0.18, 95% CI 0.01-0.22). Hypertension was significantly associated with weight-for-age z-scores (OR = 5.13, 95% CI 1.89-13.92), even after adjusting for age and sex (OR = 5.26, 95% CI 1.93-14.34). BP was significantly associated with the sum of four skinfolds, but not with birth weight. Hypertension was significantly associated with underweight. Longitudinal studies should confirm whether the changes in body weight we found can influence the risk of cardiovascular diseases.
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4×4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm achieves a significant reduction in computation, but the gain is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two suitable decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in the P frame. The enhanced zero-block decision algorithm yields an average reduction of 27% in total encoding time compared to the original zero-block decision algorithm.
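To make the counted quantity concrete, here is a sketch that counts 4×4 blocks whose DCT coefficients all quantize to zero. The orthonormal DCT-II basis and the round-to-nearest quantizer are illustrative stand-ins for the H.264 integer transform and dead-zone quantizer.

```python
import numpy as np

def dct4():
    """4x4 orthonormal DCT-II basis matrix."""
    n = 4
    c = np.array([[np.cos(np.pi * (2 * j + 1) * k / (2 * n))
                   for j in range(n)] for k in range(n)])
    c[0] *= np.sqrt(1.0 / n)
    c[1:] *= np.sqrt(2.0 / n)
    return c

def count_zero_blocks(residual, qstep):
    """residual: 2-D array with sides divisible by 4; qstep: quantizer step."""
    C, zeros = dct4(), 0
    h, w = residual.shape
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            coeffs = C @ residual[i:i+4, j:j+4] @ C.T
            if np.all(np.round(coeffs / qstep) == 0):
                zeros += 1
    return zeros
```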
NASA Technical Reports Server (NTRS)
Wood, C. A.
1974-01-01
For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
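The core of the G.C.D. idea can be shown in a few lines: divide p by gcd(p, p') to strip multiplicities, then apply any root finder to the resulting square-free polynomial. The floating-point Euclidean algorithm and tolerances below are illustrative, not the report's FORTRAN implementation.

```python
import numpy as np

def poly_gcd(a, b, tol=1e-8):
    """Euclid's algorithm on coefficient arrays (highest degree first)."""
    while True:
        _, r = np.polydiv(a, b)
        while r.size and abs(r[0]) < tol:   # strip tiny leading coefficients
            r = r[1:]
        if r.size == 0 or np.max(np.abs(r)) < tol:
            return b / b[0]                 # make the gcd monic
        a, b = b, r

def square_free_part(p):
    """Divide out gcd(p, p') so every zero of p becomes simple."""
    g = poly_gcd(p, np.polyder(p))
    q, _ = np.polydiv(p, g)
    return q

p = np.poly([2.0, 2.0, 2.0, -1.0])    # (x-2)^3 (x+1): zero 2 has multiplicity 3
print(np.roots(square_free_part(p)))  # ≈ [2, -1], now simple zeros
```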
Barat, Maxime; Soyer, Philippe; Dautry, Raphael; Pocard, Marc; Lo-Dico, Rea; Najah, Haythem; Eveno, Clarisse; Cassinotto, Christophe; Dohan, Anthony
2018-03-01
To assess the performance of three-dimensional (3D) T2-weighted sequences compared to standard T2-weighted turbo spin echo (T2-TSE), T2 half-Fourier acquisition single-shot turbo spin-echo (T2-HASTE), diffusion weighted imaging (DWI) and 3D T1-weighted VIBE sequences in the preoperative detection of malignant liver tumors. From 2012 to 2015, all patients of our institution undergoing magnetic resonance imaging (MRI) examination for suspected malignant liver tumors were prospectively included. Patients had contrast-enhanced 3D T1-weighted, DWI, 3D T2-SPACE, T2-HASTE and T2-TSE sequences. Imaging findings were compared with those obtained at follow-up, surgery and histopathological analysis. Sensitivities for the detection of malignant liver tumors were compared for each sequence using the McNemar test. A subgroup analysis was conducted for HCCs. Image artifacts were analyzed and compared using the Wilcoxon paired signed rank-test. Thirty-three patients were included: 13 patients had 40 hepatocellular carcinomas (HCC) and 20 had 54 liver metastases. 3D T2-weighted sequences had a higher sensitivity than T2-weighted TSE sequences for the detection of malignant liver tumors (79.8% versus 68.1%; P < 0.001). The difference did not reach significance for HCC. T1-weighted VIBE and DWI had a higher sensitivity than T2-weighted sequences. 3D T2-weighted SPACE sequences showed significantly fewer artifacts than T2-weighted TSE. 3D T2-weighted sequences show very promising performance for the detection of malignant liver tumors compared to T2-weighted TSE sequences. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna
2017-01-01
In this paper we present the results of a research-based teaching-learning sequence on introductory quantum physics based on Feynman's sum over paths approach in the Italian high school. Our study focuses on students' understanding of two founding ideas of quantum physics, wave particle duality and the uncertainty principle. In view of recent…
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
Comparison of two weighted integration models for the cueing task: linear and likelihood
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2003-01-01
In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited-capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited-capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, 3 observers performed a cued discrimination task with Gaussian targets and an 80% valid precue across a broad range of SNRs. Analysis of a limited-capacity attentional switching model was also included, and this model was rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
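A sketch of the two decision variables being compared, for unit-variance Gaussian internal responses and signal strength d; setting the weights equal to the cue validity, and the noise model itself, are assumptions of the sketch.

```python
import numpy as np

def decision_variables(x1, x2, d, validity=0.8):
    """x1: response at the cued location, x2: at the uncued location."""
    w1, w2 = validity, 1.0 - validity
    linear = w1 * x1 + w2 * x2          # weighted linear combination
    def lr(x):
        # likelihood ratio of signal-present N(d,1) vs signal-absent N(0,1)
        return np.exp(d * x - d ** 2 / 2.0)
    likelihood = w1 * lr(x1) + w2 * lr(x2)  # sum of weighted likelihoods
    return linear, likelihood
```

The two variables are monotonically related at a single SNR but diverge as d grows, which is what lets the psychophysical data discriminate the models.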
Exoatmospheric intercepts using zero effort miss steering for midcourse guidance
NASA Astrophysics Data System (ADS)
Newman, Brett
The suitability of proportional navigation, or an equivalent zero effort miss formulation, for exoatmospheric intercepts during midcourse guidance, followed by a ballistic coast to the endgame, is addressed. The problem is formulated in terms of relative motion in a general, three dimensional framework. The proposed guidance law for the commanded thrust vector orientation consists of the sum of two terms: (1) along the line of sight unit direction and (2) along the zero effort miss component perpendicular to the line of sight and proportional to the miss itself and a guidance gain. If the guidance law is to be suitable for longer range targeting applications with significant ballistic coasting after burnout, determination of the zero effort miss must account for the different gravitational accelerations experienced by each vehicle. The proposed miss determination techniques employ approximations for the true differential gravity effect and thus are less accurate than a direct numerical propagation of the governing equations, but more accurate than a baseline determination, which assumes equal accelerations for both vehicles. Approximations considered are constant, linear, quadratic, and linearized inverse square models. Theoretical results are applied to a numerical engagement scenario and the resulting performance is evaluated in terms of the miss distances determined from nonlinear simulation.
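A sketch of the perpendicular-ZEM term of such a guidance law, with the differential-gravity correction omitted (i.e., the "baseline" miss determination that assumes equal accelerations); the names and the navigation gain are illustrative.

```python
import numpy as np

def zem_guidance(r_rel, v_rel, t_go, gain=3.0):
    """r_rel, v_rel: relative position and velocity (3-vectors);
    t_go: time to go.  Returns a commanded acceleration along the ZEM
    component perpendicular to the line of sight."""
    zem = r_rel + v_rel * t_go                 # predicted zero-effort miss
    los = r_rel / np.linalg.norm(r_rel)        # line-of-sight unit vector
    zem_perp = zem - np.dot(zem, los) * los
    return gain * zem_perp / t_go ** 2
```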
NASA Astrophysics Data System (ADS)
Gusev, A. A.; Chuluunbaatar, O.; Popov, Yu. V.; Vinitsky, S. I.; Derbov, V. L.; Lovetskiy, K. P.
2018-04-01
An exactly soluble model of a train of zero-duration electromagnetic pulses interacting with a 1D atom, whose short-range interaction potential is modelled by a δ-function, is considered. The model is relevant to modern laser techniques providing pulse durations as short as a few attoseconds and intensities higher than 10¹⁴ W/cm².
Absence of left ventricular concentric hypertrophy: a prerequisite for zero coronary calcium score.
Ehara, Shoichi; Shirai, Nobuyuki; Okuyama, Takuhiro; Matsumoto, Kenji; Matsumura, Yoshiki; Yoshiyama, Minoru
2011-09-01
The identification of, and intervention on, factors associated with a coronary artery calcification (CAC) score of zero, which suggests the absence of significant coronary artery disease (CAD) with high probability, would be meaningful in the clinical setting. Thus far, the relationship between CAC and left ventricular (LV) hypertrophy has not been documented. We identified factors associated with a CAC score of zero and evaluated the relationship between this score and LV concentric hypertrophy in 309 consecutive patients with suspected CAD who were clinically indicated to undergo multislice computed tomography angiography for coronary artery evaluation. The quantitative CAC score was calculated according to Agatston's method. The total coronary calcium score (TCS) was defined as the sum of the scores for each lesion. Four absolute TCS categories were considered: zero, mild (0-100), moderate (100-400), and severe (>400). LV hypertrophy was classified into concentric (LV mass index >104 g/m(2) in women or >116 g/m(2) in men; LV end-diastolic volume index ≤109.2 mL/m(2)) and eccentric (LV end-diastolic volume index >109.2 mL/m(2)) patterns. In the zero-TCS group, the frequency of LV concentric hypertrophy was extremely low (zero 6%, mild 17%, moderate 26%, severe 19%). Multivariate analysis revealed that age, hypercholesterolemia, diabetes mellitus, LV concentric hypertrophy, and LV mass index, but not hypertension, were the independent factors associated with a CAC score of zero. The present study demonstrated that the absence of LV concentric hypertrophy is a prerequisite for a CAC score of zero. That is, the presence of LV concentric hypertrophy, which indicates more severe underlying hypertension, long duration, or poor control of blood pressure, implies the presence of CAC.
NASA Astrophysics Data System (ADS)
Mittal, R.; Rao, P.; Kaur, P.
2018-01-01
Elemental evaluations in scanty powdered material have been made using energy dispersive X-ray fluorescence (EDXRF) measurements, for which formulations, along with a specific procedure for sample target preparation, have been developed. Fractional amount evaluation involves a sequence of steps: (i) collection of elemental characteristic X-ray counts in EDXRF spectra recorded with different weights of material, (ii) a check for linearity between X-ray counts and material weights, (iii) calculation of elemental fractions from the linear fit, and (iv) a further linear fit of the calculated fractions against sample weight, extrapolated to zero weight. The elemental fractions at zero weight are thus free from material self-absorption effects for incident and emitted photons. The analytical procedure, after verification with known synthetic samples of the macro-nutrients potassium and calcium, was used for wheat plant/soil samples obtained from a pot experiment.
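Step (iv) is a plain linear extrapolation; a sketch with hypothetical weights and fractions:

```python
import numpy as np

# fit the fractions calculated at each sample weight to a line and
# extrapolate to zero weight, where self-absorption vanishes
weights = np.array([20.0, 40.0, 60.0, 80.0])         # mg, hypothetical
fractions = np.array([0.118, 0.112, 0.106, 0.101])   # hypothetical K fractions
slope, intercept = np.polyfit(weights, fractions, 1)
print(f"self-absorption-free fraction ≈ {intercept:.4f}")
```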
Not Congruent but Quite Complementary: U.S. and Chinese Approaches to Nontraditional Security
2012-07-01
reconnaissance (C4ISR) systems and most effective ASW capabilities, and it is specialized for sea-control missions. Destroyer 171 is one of the two surface not...of interstate conflict and zero-sum outcomes. Unfortunately, relatively few academic studies have fully explored the potential for overlap in
A Methodology for Projecting U.S.-Flag Commercial Tanker Capacity
1986-03-01
total crude supply for the total US is less than the sum of the total crude supplies of the PADDs. The algorithm generating the output shown in tables...other PADDs. Accordingly, projected receipts for PADD V are zero, and in conjunction with the values for the variables that previously were...SHIPMENTS ALGORITHM This section presents the mathematics of the algorithm that generates the shipments projections for each PADD. The notation
Dialogue-Games: Meta-Communication Structures for Natural Language Interaction
1977-01-01
Dialogue-games are only those described here. For example, they are not necessarily competitive, consciously pursued, or zero-sum. 3. THE DIALOGUE-GAME ... James A. Levin James A. Moore ARPA ORDER NO. 2930 NR 134 374 ISI/RR 77-53 January 1977 Dialogue-Games: Meta...these patterns. These patterns have been represented by a set of knowledge structures called Dialogue-games, capturing shared conventional knowledge
Freezing transition of the random bond RNA model: Statistical properties of the pairing weights
NASA Astrophysics Data System (ADS)
Monthus, Cécile; Garel, Thomas
2007-03-01
To characterize the pairing specificity of RNA secondary structures as a function of temperature, we analyze the statistics of the pairing weights as follows: for each base $i$ of the sequence of length $N$, we consider the $(N-1)$ pairing weights $w_i(j)$ with the other bases $j \ne i$ of the sequence. We numerically compute the probability distribution $P_1(w)$ of the maximal weight $w_i^{\max}=\max_j w_i(j)$, the probability distribution $\Pi(Y_2)$ of the parameter $Y_2(i)=\sum_j w_i^2(j)$, as well as the average values of the moments $Y_k(i)=\sum_j w_i^k(j)$. We find that there are two important temperatures $T_c$
Computing the Distribution of Pareto Sums Using Laplace Transformation and Stehfest Inversion
NASA Astrophysics Data System (ADS)
Harris, C. K.; Bourne, S. J.
2017-05-01
In statistical seismology, the properties of distributions of total seismic moment are important for constraining seismological models, such as the strain partitioning model (Bourne et al. J Geophys Res Solid Earth 119(12): 8991-9015, 2014). This work was motivated by the need to develop appropriate seismological models for the Groningen gas field in the northeastern Netherlands, in order to address the issue of production-induced seismicity. The total seismic moment is the sum of the moments of individual seismic events, which in common with many other natural processes, are governed by Pareto or "power law" distributions. The maximum possible moment for an induced seismic event can be constrained by geomechanical considerations, but rather poorly, and for Groningen it cannot be reliably inferred from the frequency distribution of moment magnitude pertaining to the catalogue of observed events. In such cases it is usual to work with the simplest form of the Pareto distribution without an upper bound, and we follow the same approach here. In the case of seismicity, the exponent β appearing in the power-law relation is small enough for the variance of the unbounded Pareto distribution to be infinite, which renders standard statistical methods concerning sums of statistical variables, based on the central limit theorem, inapplicable. Determinations of the properties of sums of moderate to large numbers of Pareto-distributed variables with infinite variance have traditionally been addressed using intensive Monte Carlo simulations. This paper presents a novel method for accurate determination of the properties of such sums that is accurate, fast and easily implemented, and is applicable to Pareto-distributed variables for which the power-law exponent β lies within the interval [0, 1]. It is based on shifting the original variables so that a non-zero density is obtained exclusively for non-negative values of the parameter and is identically zero elsewhere, a property that is shared by the sum of an arbitrary number of such variables. The technique involves applying the Laplace transform to the normalized sum (which is simply the product of the Laplace transforms of the densities of the individual variables, with a suitable scaling of the Laplace variable), and then inverting it numerically using the Gaver-Stehfest algorithm. After validating the method using a number of test cases, it was applied to address the distribution of total seismic moment, and the quantiles computed for various numbers of seismic events were compared with those obtained in the literature using Monte Carlo simulation. Excellent agreement was obtained. As an application, the method was applied to the evolution of total seismic moment released by tremors due to gas production in the Groningen gas field in the northeastern Netherlands. The speed, accuracy and ease of implementation of the method allows the development of accurate correlations for constraining statistical seismological models using, for example, the maximum-likelihood method. It should also be of value in other natural processes governed by Pareto distributions with exponent less than unity.
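A compact implementation of the Gaver-Stehfest inversion step of the method, validated here on a known transform pair rather than the (heavier) shifted-Pareto transform; N = 14 terms is a conventional choice, and the test function is our own.

```python
import math

def stehfest_weights(N):
    """Stehfest coefficients V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (N // 2 + k) * s)
    return V

def stehfest_invert(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_weights(N)
    a = math.log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# sanity check on a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))  # ≈ 0.3679
```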
A covariant multiple scattering series for elastic projectile-target scattering
NASA Technical Reports Server (NTRS)
Gross, Franz; Maung-Maung, Khin
1989-01-01
A covariant formulation of the multiple scattering series for the optical potential is presented. The case of a scalar nucleon interacting with a spin-zero, isospin-zero A-body target through meson exchange is considered. It is shown that a covariant equation for the projectile-target t-matrix can be obtained which sums the ladder and crossed ladder diagrams efficiently. From this equation, a multiple scattering series for the optical potential is derived, and it is shown that in the impulse approximation, the two-body t-matrix associated with the first-order optical potential is the one in which one particle is kept on mass-shell. The meaning of various terms in the multiple scattering series is given. The construction of the first-order optical potential for elastic scattering calculations is described.
Super Yang Mills, matrix models and geometric transitions
NASA Astrophysics Data System (ADS)
Ferrari, Frank
2005-03-01
I explain two applications of the relationship between four-dimensional N=1 supersymmetric gauge theories, zero-dimensional gauged matrix models, and geometric transitions in string theory. The first is related to the spectrum of BPS domain walls or BPS branes. It is shown that one can smoothly interpolate between a D-brane state, whose weak coupling tension scales as N ∼ 1/g_s, and a closed string solitonic state, whose weak coupling tension scales as N² ∼ 1/g_s². This is part of a larger theory of N=1 quantum parameter spaces. The second is a new purely geometric approach to sum exactly over planar diagrams in zero dimension. It is an example of open/closed string duality. To cite this article: F. Ferrari, C. R. Physique 6 (2005).
Parametric inference for biological sequence analysis.
Pachter, Lior; Sturmfels, Bernd
2004-11-16
One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
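As a concrete illustration of the sum-product algorithm the article builds on, here is a minimal forward pass on a chain-structured hidden Markov model; polytope propagation replaces the (sum, product) operations on numbers with (convex hull of union, Minkowski sum) operations on Newton polytopes. The matrices below are illustrative assumptions.

```python
# Sketch: sum-product message passing on a chain (forward algorithm
# for a toy two-state HMM). All probabilities are illustrative.
import numpy as np

T = np.array([[0.9, 0.1],    # T[i, j] = P(z_t = j | z_{t-1} = i)
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],    # E[i, k] = P(x_t = k | z_t = i)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])    # initial state distribution

def forward(obs):
    """Sum-product recursion along the chain: returns P(observations)."""
    msg = pi * E[:, obs[0]]            # initial message
    for x in obs[1:]:
        msg = (msg @ T) * E[:, x]      # propagate, then absorb evidence
    return msg.sum()

print(forward([0, 1, 1, 0]))
```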
Bidar, Fatemeh; Faeghi, Fariborz; Ghorbani, Askar
2016-01-01
Background: The purpose of this study is to demonstrate the advantages of gradient echo (GRE) sequences in the detection and characterization of cerebral venous sinus thrombosis compared to conventional magnetic resonance sequences. Methods: A total of 17 patients with cerebral venous thrombosis (CVT) were evaluated using different magnetic resonance imaging (MRI) sequences. The MRI sequences included T1-weighted spin echo (SE) imaging, T2-weighted turbo SE (TSE), fluid-attenuated inversion recovery (FLAIR), T2*-weighted conventional GRE, and diffusion-weighted imaging (DWI). MR venography (MRV) images were obtained as the gold standard. Results: Venous sinus thrombosis was best detectable in T2*-weighted conventional GRE sequences in all patients except in one case. Venous thrombosis was undetectable in DWI. T2*-weighted GRE sequences were superior to T2-weighted TSE, T1-weighted SE, and FLAIR. Enhanced MRV was successful in displaying the location of thrombosis. Conclusion: T2*-weighted conventional GRE sequences are probably the best method for the assessment of cerebral venous sinus thrombosis. The mentioned method is non-invasive; therefore, it can be employed in the clinical evaluation of cerebral venous sinus thrombosis. PMID:27326365
Vibration isolation using six degree-of-freedom quasi-zero stiffness magnetic levitation
NASA Astrophysics Data System (ADS)
Zhu, Tao; Cazzolato, Benjamin; Robertson, William S. P.; Zander, Anthony
2015-12-01
In laboratories and high-tech manufacturing applications, passive vibration isolators are often used to isolate vibration sensitive equipment from ground-borne vibrations. However, in traditional passive isolation devices, where the payload weight is supported by elastic structures with finite stiffness, a design trade-off between the load capacity and the vibration isolation performance is unavoidable. Low stiffness springs are often required to achieve vibration isolation, whilst high stiffness is desired for supporting payload weight. In this paper, a novel design of a six degree of freedom (six-dof) vibration isolator is presented, as well as the control algorithms necessary for stabilising the passively unstable maglev system. The system applies magnetic levitation as the payload support mechanism, which realises inherent quasi-zero stiffness levitation in the vertical direction, and zero stiffness in the other five dofs. While providing near zero stiffness in multiple dofs, the design is also able to generate static magnetic forces to support the payload weight. This negates the trade-off between load capacity and vibration isolation that often exists in traditional isolator designs. The paper firstly presents the novel design concept of the isolator and associated theories, followed by the mechanical and control system designs. Experimental results are then presented to demonstrate the vibration isolation performance of the proposed system in all six directions.
Discounting of reward sequences: a test of competing formal models of hyperbolic discounting
Zarr, Noah; Alexander, William H.; Brown, Joshua W.
2014-01-01
Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting has been elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) have relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards, whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data. PMID:24639662
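The approximation of a hyperbola by a summed distribution of exponential discounters admits a closed form: if agent discount rates are drawn from an exponential distribution with scale k, the mixture average integrates exactly to 1/(1 + kt). A short sketch with illustrative parameters (not the μAgents implementation):

```python
# Sketch: hyperbolic discounting 1/(1 + k t) as an average of many
# exponential discounters with exponentially distributed rates, since
#   integral (1/k) e^{-r/k} e^{-r t} dr = 1 / (1 + k t).
import numpy as np

k = 1.0
t = np.linspace(0.0, 10.0, 200)
hyperbolic = 1.0 / (1.0 + k * t)

rates = np.random.default_rng(0).exponential(scale=k, size=20000)
mixture = np.exp(-np.outer(t, rates)).mean(axis=1)

print(np.max(np.abs(mixture - hyperbolic)))  # small sampling error
```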
1982-01-01
second) Dia propeller diameter (expressed in inches) T°F air temperature in degrees Fahrenheit T°C air temperature in degrees Celsius T:dBA total dBA…empirical function to the absolute noise level ordinate. The term 240 log(MH) is the most sensitive and important part of the equation. The constant (240…standard day, zero wind, dry, zero-gradient runway, at a sea level airport. 2. All aircraft operate at maximum takeoff gross weight. 3. All aircraft climb
Computation of repetitions and regularities of biologically weighted sequences.
Christodoulakis, M; Iliopoulos, C; Mouchard, L; Perdikuri, K; Tsakalidis, A; Tsichlas, K
2006-01-01
Biological weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. Our algorithms can be used as basic building blocks for more sophisticated algorithms applied on weighted sequences.
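A minimal sketch of the basic object involved: a weighted sequence assigns each position a probability for every letter, and a pattern occurrence at position i is typically accepted when its occurrence probability there clears a cut-off. The toy profile, cut-off and brute-force scan below are illustrative assumptions, not the paper's algorithms, which are far more efficient building blocks.

```python
# Sketch: naive pattern matching in a weighted (profile) sequence.
weighted_seq = [
    {'A': 0.9,  'C': 0.1,  'G': 0.0,  'T': 0.0},
    {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25},
    {'A': 0.0,  'C': 0.0,  'G': 1.0,  'T': 0.0},
    {'A': 0.5,  'C': 0.5,  'G': 0.0,  'T': 0.0},
]

def occurrences(pattern, seq, cutoff):
    """Positions where the pattern's occurrence probability >= cutoff."""
    hits = []
    for i in range(len(seq) - len(pattern) + 1):
        p = 1.0
        for j, letter in enumerate(pattern):
            p *= seq[i + j].get(letter, 0.0)   # product of position weights
        if p >= cutoff:
            hits.append((i, p))
    return hits

print(occurrences("AG", weighted_seq, cutoff=0.2))  # [(0, 0.225), (1, 0.25)]
```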
Sandau, Courtney D; Ayotte, Pierre; Dewailly, Eric; Duffe, Jason; Norstrom, Ross J
2002-01-01
Concentrations of polychlorinated biphenyls (PCBs), hydroxylated metabolites of PCBs (HO-PCBs), the hydroxylated metabolite of octachlorostyrene (4-HO-HpCS), and pentachlorophenol (PCP) were determined in umbilical cord plasma samples from three different regions of Québec. The regions studied included two coastal areas where exposure to PCBs is high because of marine-food-based diets--Nunavik (Inuit people) and the Lower North Shore of the Gulf of St. Lawrence (subsistence fishermen)--and a southern Québec urban center where PCB exposure is at background levels (Québec City). The main chlorinated phenolic compound in all regions was PCP. Concentrations of PCP were not significantly different among regions (geometric mean concentration 1,670 pg/g, range 628-7,680 pg/g wet weight in plasma). The ratio of PCP to polychlorinated biphenyl congener 153 (CB153) concentration ranged from 0.72 to 42.3. Summed HO-PCB concentrations (ΣHO-PCBs) differed among regions, with geometric means of 553 (range 238-1,750), 286 (103-788), and 234 (147-464) pg/g wet weight plasma for the Lower North Shore, Nunavik, and southern Québec groups, respectively. Lower North Shore samples also had the highest geometric mean concentration of summed PCBs (sum of 49 congeners; ΣPCBs), 2,710 (525-7,720) pg/g wet weight plasma. ΣPCB concentrations for Nunavik and southern samples were 1,510 (309-6,230) and 843 (290-1,650) pg/g wet weight plasma. Concentrations (log transformed) of ΣHO-PCBs and ΣPCBs were significantly correlated (r = 0.62, p < 0.001), as were concentrations of all major individual HO-PCB congeners and individual PCB congeners. In Nunavik and Lower North Shore samples, free thyroxine (T4) concentrations (log transformed) were negatively correlated with the sum of quantitated chlorinated phenolic compounds (PCP plus ΣHO-PCBs; r = −0.47, p = 0.01, n = 20) and were not correlated with any PCB congeners or with ΣPCBs. This suggests that PCP and HO-PCBs may be altering thyroid hormone status in newborns, which could lead to neurodevelopmental effects in infants. Further studies are needed to examine the effects of chlorinated phenolic compounds on thyroid hormone status in newborns. PMID:11940460
Wiley, A S; Lubree, H G; Joshi, S M; Bhat, D S; Ramdas, L V; Rao, A S; Thuse, N V; Deshpande, V U; Yajnik, C S
2016-04-01
Indian newborns have been described as 'thin-fat' compared with European babies, but little is known about how this phenotype relates to the foetal growth factor IGF-I (insulin-like growth factor I) or its binding protein IGFBP-3. To assess cord IGF-I and IGFBP-3 concentrations in a sample of Indian newborns and evaluate their associations with neonatal adiposity and maternal factors. A prospective cohort study of 146 pregnant mothers with dietary, anthropometric and biochemical measurements at 28 and 34 weeks gestation. Neonatal weight, length, skin-folds, circumferences, and cord blood IGF-I and IGFBP-3 concentrations were measured at birth. Average cord IGF-I and IGFBP-3 concentrations were 46.6 (2.2) and 1269.4 (41) ng mL⁻¹, respectively. Girls had higher mean IGF-I than boys (51.4 vs. 42.9 ng mL⁻¹; P < 0.03), but IGFBP-3 did not differ. Cord IGF-I was positively correlated with all birth size measures except length, and most strongly with neonatal sum-of-skin-folds (r = 0.50, P < 0.001). IGFBP-3 was positively correlated with ponderal index, sum-of-skin-folds and placental weight (r = 0.21, 0.19, 0.16, respectively; P < 0.05). Of maternal demographic and anthropometric characteristics, only parity was correlated with cord IGF-I (r = 0.27, P < 0.001). Among dietary behaviours, maternal daily milk intake at 34 weeks gestation predicted higher cord IGF-I compared to no milk intake (51.8 vs. 36.5 ng mL⁻¹, P < 0.01) after controlling for maternal characteristics, placental weight, and newborn gestational age, sex, weight and sum-of-skin-folds. Sum-of-skin-folds was positively associated with cord IGF-I in this multivariate model (57.3 vs. 35.1 ng mL⁻¹ for the highest and lowest sum-of-skin-folds quartiles, P < 0.001). IGFBP-3 did not show significant relationships with these covariates. In this Indian study, cord IGF-I concentration was associated with greater adiposity among newborns. Maternal milk intake may play a role in this relationship. © 2015 World Obesity.
14 CFR 25.349 - Rolling conditions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... zero and of two-thirds of the positive maneuvering factor used in design. In determining the required... other weight concentrations outboard of the fuselage. For the angular acceleration conditions, zero... less than one-third of that in paragraph (a)(2) of this section. (b) Unsymmetrical gusts. The airplane...
Determination of total dissolved solids in water analysis
Howard, C.S.
1933-01-01
The figure for total dissolved solids, based on the weight of the residue on evaporation after heating for 1 hour at 180°C, is reasonably close to the sum of the determined constituents for most natural waters. Waters of the carbonate type that are high in magnesium may give residues that weigh less than the sum. Natural waters of the sulfate type usually give residues that are too high on account of incomplete drying.
A nonlinear merging protocol for consensus in multi-agent systems on signed and weighted graphs
NASA Astrophysics Data System (ADS)
Feng, Shasha; Wang, Li; Li, Yijia; Sun, Shiwen; Xia, Chengyi
2018-01-01
In this paper, we investigate multi-agent consensus on networks whose undirected graphs are not connected, especially signed graphs, in which some edge weights are positive and some negative, and negative-weight graphs, in which all edge weights are negative. We propose a novel nonlinear merging consensus protocol that drives the states of all agents to the common state zero, independently of the agents' initial states. If the undirected graph whose edge weights are positive is connected, then the states of all agents converge to the common state more quickly than under most other protocols. When the undirected graph, whose edge weights may be positive or negative, is disconnected, the states of all agents can still converge to the common state zero, provided the graph can be divided into several connected subgraphs with more than one node. Furthermore, we also discuss the impact of the parameter r appearing in our protocol. These results can further deepen the understanding of consensus processes for multi-agent systems.
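To make the setting concrete, here is a small simulation of a nonlinear consensus update on a signed, weighted graph that drives all states toward zero regardless of initial conditions. The cube-root coupling and merging term below are an illustrative stand-in for the authors' protocol, not their exact update law.

```python
# Sketch: nonlinear merging-style consensus on a signed, weighted graph.
import numpy as np

W = np.array([[0.0,  1.0, -0.5],
              [1.0,  0.0,  2.0],
              [-0.5, 2.0,  0.0]])       # signed, symmetric edge weights
x = np.array([3.0, -1.0, 2.0])          # initial agent states
dt = 0.01

for _ in range(5000):
    dx = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if W[i, j] != 0.0:
                # attract each agent toward its neighbours
                dx[i] += abs(W[i, j]) * np.cbrt(x[j] - x[i])
        dx[i] -= np.cbrt(x[i])           # merging term pulling states to zero
    x = x + dt * dx

print(x)  # all entries close to zero, independent of the initial states
```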
Sykes-Muskett, Bianca J; Prestwich, Andrew; Lawton, Rebecca J; Armitage, Christopher J
2015-01-01
Financial incentives to improve health have received increasing attention, but are subject to ethical concerns. Monetary Contingency Contracts (MCCs), which require individuals to deposit money that is refunded contingent on reaching a goal, are a potential alternative strategy. This review evaluates systematically the evidence for weight loss-related MCCs. Randomised controlled trials testing the effect of weight loss-related MCCs were identified in online databases. Random-effects meta-analyses were used to calculate overall effect sizes for weight loss and participant retention. The association between MCC characteristics and weight loss/participant retention effects was calculated using meta-regression. There was a significant small-to-medium effect of MCCs on weight loss during treatment when one outlier study was removed. Group refunds, deposit not paid as lump sum, participants setting their own deposit size and additional behaviour change techniques were associated with greater weight loss during treatment. Post-treatment, there was no significant effect of MCCs on weight loss. There was a significant small-to-medium effect of MCCs on participant retention during treatment. Researcher-set deposits paid as one lump sum, refunds delivered on an all-or-nothing basis and refunds contingent on attendance at classes were associated with greater retention during treatment. Post-treatment, there was no significant effect of MCCs on participant retention. The results support the use of MCCs to promote weight loss and participant retention up to the point that the incentive is removed and identifies the conditions under which MCCs work best.
40 CFR 60.562-1 - Standards: Process emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... methane and ethane) (TOC) by 98 weight percent, or to a concentration of 20 parts per million by volume (ppmv) on a dry basis, whichever is less stringent. The TOC is expressed as the sum of the actual... Polypropylene and Polyethylene Affected Facilities Procedure /a/ Applicable TOC weight percent range Control/no...
Algebraic grid adaptation method using non-uniform rational B-spline surface modeling
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, B. K.
1992-01-01
An algebraic adaptive grid system based on the equidistribution law and utilizing the Non-Uniform Rational B-Spline (NURBS) surface for redistribution is presented. A weight function utilizing a properly weighted Boolean sum of various flow-field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.
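The equidistribution law underlying the method is easy to state in one dimension: place grid points so that each cell carries an equal share of the weight-function integral. A sketch with an illustrative weight clustered near a feature (the paper's scheme works on NURBS surfaces rather than a 1-D line):

```python
# Sketch: 1-D algebraic grid adaptation by equidistribution of a weight.
import numpy as np

def equidistribute(x, w, n_new):
    """Place n_new points so each cell carries an equal share of weight w."""
    # cumulative weight integral (trapezoidal rule)
    cw = np.concatenate([[0.0],
                         np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cw[-1], n_new)
    return np.interp(targets, cw, x)     # invert the cumulative map

x = np.linspace(0.0, 1.0, 101)
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # cluster near x = 0.5
x_new = equidistribute(x, w, 41)
print(np.diff(x_new).min(), np.diff(x_new).max())  # small cells at the feature
```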
A Decision Support System for Solving Multiple Criteria Optimization Problems
ERIC Educational Resources Information Center
Filatovas, Ernestas; Kurasova, Olga
2011-01-01
In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
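The weighted-sum scalarization such a DSS implements collapses the criteria into a single objective, and sweeping the weight coefficients traces out (the convex part of) the Pareto front. A sketch with two illustrative quadratic criteria:

```python
# Sketch: weighted-sum (WS) scalarization of a two-criteria problem.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # first criterion
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # second, conflicting criterion
    return x[0] ** 2 + (x[1] - 2.0) ** 2

def weighted_sum(x, w):
    return w * f1(x) + (1.0 - w) * f2(x)

# Different weight choices yield different Pareto-optimal solutions.
for w in np.linspace(0.1, 0.9, 5):
    res = minimize(weighted_sum, x0=[0.0, 0.0], args=(w,))
    print(f"w={w:.1f}  x*={res.x.round(3)}  f1={f1(res.x):.3f}  f2={f2(res.x):.3f}")
```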
The Seven Deadly Sins of World University Ranking: A Summary from Several Papers
ERIC Educational Resources Information Center
Soh, Kaycheng
2017-01-01
World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…
Code of Federal Regulations, 2012 CFR
2012-07-01
... the sandwich. This is referred to as the 24 hr weight. (C). 13. Bake sandwich at 110 degrees Celsius... temperature. Record post bake sandwich weight. (D). Procedure B 1. Zero electronic balance. 2. Place two... the sandwich. This is referred to as the 24 hr weight. (C). 15. Bake sandwich at 110 degrees Celsius...
Code of Federal Regulations, 2014 CFR
2014-07-01
... the sandwich. This is referred to as the 24 hr weight. (C). 13. Bake sandwich at 110 degrees Celsius... temperature. Record post bake sandwich weight. (D). Procedure B 1. Zero electronic balance. 2. Place two... the sandwich. This is referred to as the 24 hr weight. (C). 15. Bake sandwich at 110 degrees Celsius...
Center for Intelligent Control Systems
1992-12-01
difficult than anyone expected 50 years ago, and it now seems that it will require inputs from such diverse fields as brain and cognitive science…9/147 Willsky, A.S. 17 Fleming, W.H. P Two-Player, Zero-Sum Differential Games 5/1/87 Souganidis, P.S. 18 Geman, S.A. P Statistical Methods for…Mansour, Y. Shavit, N. 175 Tsitsiklis, J.N. P Extremal Properties of Likelihood-Ratio Quantizers 11/1/89 176 Awerbuch, B. P Online Tracking of Mobile
The Complexity of Quantitative Concurrent Parity Games
2004-11-01
for each player. In this paper we study only zero-sum games [20, 11], where the objectives of the two players are strictly competitive. In other words…Aided Verification, volume 1102 of LNCS, pages 75–86. Springer, 1996. [14] R.J. Lipton, E. Markakis, and A. Mehta. Playing large games using simple…strategies. In EC 03: Electronic Commerce, pages 36–41. ACM Press, 2003. 28 [15] D.A. Martin. The determinacy of Blackwell games. The Journal of Symbolic
Biosensors for DNA sequence detection
NASA Technical Reports Server (NTRS)
Vercoutere, Wenonah; Akeson, Mark
2002-01-01
DNA biosensors are being developed as alternatives to conventional DNA microarrays. These devices couple signal transduction directly to sequence recognition. Some of the most sensitive and functional technologies use fibre optics or electrochemical sensors in combination with DNA hybridization. In a shift from sequence recognition by hybridization, two emerging single-molecule techniques read sequence composition using zero-mode waveguides or electrical impedance in nanoscale pores.
Social dilemma cooperation (unlike Dictator Game giving) is intuitive for men as well as women.
Rand, David G
2017-11-01
Does intuition favor prosociality, or does prosocial behavior require deliberative self-control? The Social Heuristics Hypothesis (SHH) stipulates that intuition favors typically advantageous behavior - but which behavior is typically advantageous depends on both the individual and the context. For example, non-zero-sum cooperation (e.g. in social dilemmas like the Prisoner's Dilemma) typically pays off because of the opportunity for reciprocity. Conversely, reciprocity does not promote zero-sum cash transfers (e.g. in the Dictator Game, DG). Instead, DG giving can be long-run advantageous because of reputation concerns: social norms often require such behavior of women but not men. Thus, the SHH predicts that intuition will favor social dilemma cooperation regardless of gender, but only favor DG giving among women. Here I present meta-analytic evidence in support of this prediction. In 31 studies examining social dilemma cooperation (N=13,447), I find that promoting intuition increases cooperation to a similar extent for both men and women. This stands in contrast to the results from 22 DG studies (analyzed in Rand et al., 2016) where intuition promotes giving among women but not men. Furthermore, I show using meta-regression that the interaction between gender and intuition is significantly larger in the DG compared to the cooperation games. Thus, I find clear evidence that the role of intuition and deliberation varies across both setting and individual as predicted by the SHH.
Vibrational energies for HFCO using a neural network sum of exponentials potential energy surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pradhan, Ekadashi; Brown, Alex, E-mail: alex.brown@ualberta.ca
2016-05-07
A six-dimensional potential energy surface (PES) for formyl fluoride (HFCO) is fit in a sum-of-products form using neural network exponential fitting functions. The ab initio data upon which the fit is based were computed at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations [CCSD(T)-F12]/cc-pVTZ-F12 level of theory. The PES fit is accurate (RMSE = 10 cm⁻¹) up to 10 000 cm⁻¹ above the zero-point energy and covers most of the experimentally measured IR data. The PES is validated by computing vibrational energies for both HFCO and deuterated formyl fluoride (DFCO) using block improved relaxation with the multi-configuration time-dependent Hartree approach. The frequencies of the fundamental modes, and all other vibrational states up to 5000 cm⁻¹ above the zero-point energy, are more accurate than those obtained from the previous MP2-based PES. The vibrational frequencies obtained on the PES are compared to anharmonic frequencies at the MP2/aug-cc-pVTZ and CCSD(T)/aug-cc-pVTZ levels of theory obtained using second-order vibrational perturbation theory. The new PES will be useful for quantum dynamics simulations for both HFCO and DFCO, e.g., studies of intramolecular vibrational redistribution leading to unimolecular dissociation and its laser control.
Effect of feedback mode and task difficulty on quality of timing decisions in a zero-sum game.
Tikuisis, Peter; Vartanian, Oshin; Mandel, David R
2014-09-01
The objective was to investigate the interaction between the mode of performance outcome feedback and task difficulty on timing decisions (i.e., when to act). Feedback is widely acknowledged to affect task performance. However, the extent to which the effect of feedback display mode on timing decisions is moderated by task difficulty remains largely unknown. Participants repeatedly engaged in a zero-sum game of silent duels with a computerized opponent and were given visual performance feedback after each engagement. They were sequentially tested on three different levels of task difficulty (low, intermediate, and high) in counterbalanced order. Half received relatively simple "inside view" binary outcome feedback, and the other half received complex "outside view" hit rate probability feedback. The key dependent variables were response time (i.e., time taken to make a decision) and survival outcome. When task difficulty was low to moderate, participants were more likely to learn and perform better from hit rate probability feedback than binary outcome feedback. However, better performance with hit rate feedback exacted a higher cognitive cost, manifested by higher decision response time. The beneficial effect of hit rate probability feedback on timing decisions is partially moderated by task difficulty. Performance feedback mode should be judiciously chosen in relation to task difficulty for optimal performance in tasks involving timing decisions.
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
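The zero-sequence component being optimised is the first output of the standard Fortescue (symmetrical-components) transform. A small sketch for a single-phase sag, with illustrative phasors:

```python
# Sketch: zero/positive/negative sequence components of a sagged
# three-phase voltage set via the Fortescue transform.
import numpy as np

a = np.exp(2j * np.pi / 3)                 # 120-degree rotation operator
F = np.array([[1, 1, 1],
              [1, a, a**2],
              [1, a**2, a]]) / 3.0         # symmetrical-components transform

v_abc = np.array([0.5 + 0j,                # sagged phase a (illustrative)
                  1.0 * a**2,              # healthy phase b
                  1.0 * a])                # healthy phase c
v0, v1, v2 = F @ v_abc                     # zero, positive, negative sequence
print(abs(v0), abs(v1), abs(v2))           # nonzero v0 reveals the asymmetry
```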
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
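For concreteness, a sketch of the model family: a weighted sum of inverse Gaussian densities in the mean-transit-time/relative-dispersion parametrization, serving as the drug input function. The weights and parameters below are illustrative assumptions, not fitted values from the paper.

```python
# Sketch: weighted sum of two inverse Gaussian (IG) densities as an
# empirical input function. Parameters are illustrative.
import numpy as np

def ig_density(t, mtt, cv2):
    """IG density with mean transit time mtt and relative dispersion cv2."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = np.sqrt(mtt / (2 * np.pi * cv2 * t[pos] ** 3)) * \
        np.exp(-((t[pos] - mtt) ** 2) / (2 * cv2 * mtt * t[pos]))
    return out

t = np.linspace(0.0, 24.0, 481)            # hours
w = 0.7                                     # weight of the fast component
input_rate = w * ig_density(t, mtt=1.5, cv2=0.4) + \
    (1 - w) * ig_density(t, mtt=6.0, cv2=0.8)
print(np.trapz(input_rate, t))              # ~1 up to tail truncation
```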
[Levels and distribution of short chain chlorinated paraffins in seafood from Dalian, China].
Yu, Jun-Chao; Wang, Thanh; Wang, Ya-Wei; Meng, Mei; Chen, Ru; Jiang, Gui-Bin
2014-05-01
Seafood samples were collected from Dalian, China, to study the accumulation and distribution characteristics of short chain chlorinated paraffins (SCCPs) by GC/ECNI-LRMS. Total SCCP concentrations (dry weight) were in the range of 77-8,250 ng·g⁻¹, with the lowest value in Scapharca subcrenata and the highest in Neptunea cumingi. The concentrations of total SCCPs (dry weight) in fish, shrimp/crab and shellfish were in the ranges of 100-3,510, 394-5,440, and 77-8,250 ng·g⁻¹, respectively. Overall, the C10 and C11 homologues were the most predominant carbon groups of SCCPs in seafood from this area, and a relatively higher proportion of C12-13 was observed in seafood with higher total SCCP concentrations. With regard to chlorine content, Cl7, Cl8 and Cl6 were the major groups. Significant correlations were found among concentrations of different SCCP homologue groups (except Cl7 vs. Cl10), which indicated that they might share the same sources and/or have similar accumulation, migration and transformation processes.
On the Existence of Simultaneous Edge Disjoint Realizations of Degree Sequences with ’Few’ Edges
1975-08-01
constructing graphs and digraphs with given valences and factors. Discrete Math. 6 (1973) 79-88. 3. M. Koren, Realization of a sum of sequences by a sum…appear. 5. S. Kundu, The k-factor conjecture is true. Discrete Math. 6 (1973) 367-376. 6. S. Kundu, Disjoint representation of tree realizable
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
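A sketch of one of the two charting tools, an EWMA control chart over operative times; the target, dispersion, smoothing constant and simulated improvement are illustrative assumptions rather than the authors' fitted values.

```python
# Sketch: EWMA control chart over per-case operative times.
import numpy as np

rng = np.random.default_rng(1)
times = rng.normal(loc=150.0, scale=20.0, size=60)   # minutes per case
times[40:] -= 30.0                                   # performance improves

lam, target, sigma = 0.2, 150.0, 20.0
limit = 3 * sigma * np.sqrt(lam / (2 - lam))         # steady-state 3-sigma band
ewma = target
for i, t in enumerate(times):
    ewma = lam * t + (1 - lam) * ewma                # exponential smoothing
    if abs(ewma - target) > limit:
        print(f"case {i}: EWMA {ewma:.1f} outside +/-{limit:.1f} of target")
```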
Narasimhalu, Kaavya; Lee, June; Auchus, Alexander P; Chen, Christopher P L H
2008-01-01
Previous work combining the Mini-Mental State Examination (MMSE) and Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) has been conducted in western populations. We ascertained, in an Asian population, (1) the best method of combining the tests, (2) the effects of educational level, and (3) the effect of different dementia etiologies. Data from 576 patients were analyzed (407 nondemented controls, 87 Alzheimer's disease and 82 vascular dementia patients). Sensitivity, specificity and AUC values were obtained using three methods, the 'And' rule, the 'Or' rule, and the 'weighted sum' method. The 'weighted sum' rule had statistically superior AUC and specificity results, while the 'Or' rule had the best sensitivity results. The IQCODE outperformed the MMSE in all analyses. Patients with no education benefited more from combined tests. There was no difference between Alzheimer's disease and vascular dementia populations in the predictive value of any of the combined methods. We recommend that the IQCODE be used to supplement the MMSE whenever available and that the 'weighted sum' method be used to combine the MMSE and the IQCODE, particularly in populations with low education. As the study population selected may not be representative of the general population, further studies are required before generalization to nonclinical samples. (c) 2007 S. Karger AG, Basel.
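The three combination rules compared here are simple to state in code. The cut-offs, weights and threshold below are illustrative assumptions, not the study's fitted values; a fitted 'weighted sum' would typically take its coefficients from a logistic regression.

```python
# Sketch: 'And', 'Or' and 'weighted sum' rules for combining two tests.
def and_rule(mmse, iqcode, mmse_cut=24.0, iq_cut=3.6):
    return (mmse < mmse_cut) and (iqcode > iq_cut)    # both must flag

def or_rule(mmse, iqcode, mmse_cut=24.0, iq_cut=3.6):
    return (mmse < mmse_cut) or (iqcode > iq_cut)     # either may flag

def weighted_sum_rule(mmse, iqcode, w_mmse=-0.15, w_iq=1.2, threshold=1.0):
    # linear combination of the two scores against a single cut-off
    return w_mmse * mmse + w_iq * iqcode > threshold

print(and_rule(22, 3.9), or_rule(26, 3.9), weighted_sum_rule(22, 3.9))
```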
NASA Astrophysics Data System (ADS)
Araneda, Bernardo
2018-04-01
We present weighted covariant derivatives and wave operators for perturbations of certain algebraically special Einstein spacetimes in arbitrary dimensions, under which the Teukolsky and related equations become weighted wave equations. We show that the higher dimensional generalization of the principal null directions are weighted conformal Killing vectors with respect to the modified covariant derivative. We also introduce a modified Laplace–de Rham-like operator acting on tensor-valued differential forms, and show that the wave-like equations are, at the linear level, appropriate projections off shell of this operator acting on the curvature tensor; the projection tensors being made out of weighted conformal Killing–Yano tensors. We give off shell operator identities that map the Einstein and Maxwell equations into weighted scalar equations, and using adjoint operators we construct solutions of the original field equations in a compact form from solutions of the wave-like equations. We study the extreme and zero boost weight cases; extreme boost corresponding to perturbations of Kundt spacetimes (which includes near horizon geometries of extreme black holes), and zero boost to static black holes in arbitrary dimensions. In 4D our results apply to Einstein spacetimes of Petrov type D and make use of weighted Killing spinors.
Code of Federal Regulations, 2013 CFR
2013-07-01
... sandwich. This is referred to as the 24 hr weight. (C). 13. Bake sandwich at 110 degrees Celsius for 1 hour.... Record post bake sandwich weight. (D). Procedure B 1. Zero electronic balance. 2. Place two pieces of... the sandwich. This is referred to as the 24 hr weight. (C). 15. Bake sandwich at 110 degrees Celsius...
Code of Federal Regulations, 2010 CFR
2010-07-01
... sandwich. This is referred to as the 24 hr weight. (C). 13. Bake sandwich at 110 degrees Celsius for 1 hour.... Record post bake sandwich weight. (D). Procedure B 1. Zero electronic balance. 2. Place two pieces of... the sandwich. This is referred to as the 24 hr weight. (C). 15. Bake sandwich at 110 degrees Celsius...
Code of Federal Regulations, 2011 CFR
2011-07-01
... sandwich. This is referred to as the 24 hr weight. (C). 13. Bake sandwich at 110 degrees Celsius for 1 hour.... Record post bake sandwich weight. (D). Procedure B 1. Zero electronic balance. 2. Place two pieces of... the sandwich. This is referred to as the 24 hr weight. (C). 15. Bake sandwich at 110 degrees Celsius...
Effects of Preseason Training on the Sleep Characteristics of Professional Rugby League Players.
Thornton, Heidi R; Delaney, Jace A; Duthie, Grant M; Dascombe, Ben J
2018-02-01
To investigate the influence of daily and exponentially weighted moving training loads on subsequent nighttime sleep. Sleep of 14 professional rugby league athletes competing in the National Rugby League was recorded using wristwatch actigraphy. Physical demands were quantified using GPS technology, including total distance, high-speed distance, acceleration/deceleration load (SumAccDec; AU), and session rating of perceived exertion (AU). Linear mixed models determined effects of acute (daily) and subacute (3- and 7-d) exponentially weighted moving averages (EWMA) on sleep. Higher daily SumAccDec was associated with increased sleep efficiency (effect-size correlation; ES = 0.15; ±0.09) and sleep duration (ES = 0.12; ±0.09). Greater 3-d EWMA SumAccDec was associated with increased sleep efficiency (ES = 0.14; ±0.09) and an earlier bedtime (ES = 0.14; ±0.09). An increase in 7-d EWMA SumAccDec was associated with heightened sleep efficiency (ES = 0.15; ±0.09) and earlier bedtimes (ES = 0.15; ±0.09). The direction of the associations between training loads and sleep varied, but the strongest relationships showed that higher training loads increased various measures of sleep. Practitioners should be aware of the increased requirement for sleep during intensified training periods, using this information in the planning and implementation of training and individualized recovery modalities.
Photonuclear sum rules and the tetrahedral configuration of He4
NASA Astrophysics Data System (ADS)
Gazit, Doron; Barnea, Nir; Bacca, Sonia; Leidemann, Winfried; Orlandini, Giuseppina
2006-12-01
Three well-known photonuclear sum rules (SR), i.e., the Thomas-Reiche-Kuhn, the bremsstrahlungs and the polarizability SR are calculated for He4 with the realistic nucleon-nucleon potential Argonne V18 and the three-nucleon force Urbana IX. The relation between these sum rules and the corresponding energy weighted integrals of the cross section is discussed. Two additional equivalences for the bremsstrahlungs SR are given, which connect it to the proton-neutron and neutron-neutron distances. Using them, together with our result for the bremsstrahlungs SR, we find a deviation from the tetrahedral symmetry of the spatial configuration of He4. The possibility to access this deviation experimentally is discussed.
Fixing Stellarator Magnetic Surfaces
NASA Astrophysics Data System (ADS)
Hanson, James D.
1999-11-01
Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
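The minimal-correction step can be sketched with the SVD pseudo-inverse: given a sensitivity matrix of island drives per unit correction current, the minimum-norm currents cancelling the error drives follow directly. The matrices below are illustrative stand-ins for the physics calculation.

```python
# Sketch: minimal correction currents zeroing resonant island drives.
import numpy as np

rng = np.random.default_rng(3)
S = rng.normal(size=(4, 10))   # drive per unit current: 4 surfaces x 10 knobs
b_err = rng.normal(size=4)     # island drives produced by field errors

# Minimum-norm currents cancelling the error drives: I = -pinv(S) @ b_err.
I_corr = -np.linalg.pinv(S) @ b_err
print(np.allclose(S @ I_corr + b_err, 0.0))   # residual drives ~ 0
print(np.linalg.norm(I_corr))                  # smallest such correction
```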
Racial athletic stereotype confirmation in college football recruiting.
Thomas, Grant; Good, Jessica J; Gross, Alexi R
2015-01-01
The present study tested real-world racial stereotype use in the context of college athletic recruiting. Stereotype confirmation suggests that observers use stereotypes as hypotheses and interpret relevant evidence in a biased way that confirms their stereotypes. Shifting standards suggest that the evaluative standard to which we hold a target changes as a function of their group membership. We examined whether stereotype confirmation and shifting standards effects would be seen in college football coaches during recruiting. College football coaches evaluated a Black or White player on several attributes and made both zero- and non-zero-sum allocations. Results suggested that coaches used the evidence presented to develop biased subjective evaluations of the players based on race while still maintaining equivalent objective evaluations. Coaches also allocated greater overall resources to the Black recruit than the White recruit.
A space-efficient algorithm for local similarities.
Huang, X Q; Hardison, R C; Miller, W
1990-10-01
Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
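The space saving comes from keeping only one row of the dynamic-programming matrix at a time. A sketch of the score-only computation with linear gap costs (illustrative scoring; the full method also recovers the aligned regions via a divide-and-conquer pass):

```python
# Sketch: best local-similarity score in space proportional to one
# sequence length rather than the product of both lengths.
def local_score(a, b, match=2, mismatch=-1, gap=-2):
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr                     # keep only the previous row
    return best

print(local_score("ACGGTAG", "CGTTAG"))
```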
Zero frequency modes of the Maclaurin spheroids
NASA Astrophysics Data System (ADS)
Baumgart, D.; Friedman, J. L.
1986-05-01
The locations of all zero-frequency modes of oscillation along the Maclaurin sequence are found for modes corresponding to oblate spheroidal harmonics with indices (l,m), l < 6 (equivalently, for modes described by Lagrangian displacements whose Cartesian components are polynomials of degree less than or equal to 5). These points of zero frequency mark the onset of instability in each mode in the context of general relativity, or when a gravitational radiation reaction term is adjoined to the Newtonian theory.
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
Optical implementation of inner product neural associative memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1995-01-01
An optical implementation of an inner-product neural associative memory is realized with a first spatial light modulator for entering an initial two-dimensional N-tuple vector and for entering a thresholded output vector image after each iteration until convergence is reached, and a second spatial light modulator for entering M weighted vectors of inner-product scalars multiplied with each of the M stored vectors, where the inner-product scalars are produced by multiplication of the initial input vector in the first iterative cycle (and thresholded vectors in subsequent iterative cycles) with each of the M stored vectors, and the weighted vectors are produced by multiplication of the scalars with corresponding ones of the stored vectors. A Hughes liquid crystal light valve is used for the dual function of summing the weighted vectors and thresholding the sum vector. The thresholded vector is then entered through the first spatial light modulator for reiteration of the process cycle until convergence is reached.
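In NumPy terms, the iterative loop realised optically above reads as follows; the stored patterns, threshold rule and iteration cap are illustrative assumptions.

```python
# Sketch: inner-product associative memory recall loop.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])   # M stored N-tuples

def recall(v, iters=10):
    for _ in range(iters):
        scalars = patterns @ v                  # inner products with stores
        weighted = scalars[:, None] * patterns  # weighted stored vectors
        v_new = np.sign(weighted.sum(axis=0))   # sum, then threshold
        if np.array_equal(v_new, v):            # convergence reached
            break
        v = v_new
    return v

noisy = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # corrupted first pattern
print(recall(noisy))                              # recovers patterns[0]
```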
NASA Astrophysics Data System (ADS)
Crnomarkovic, Nenad; Belosevic, Srdjan; Tomanovic, Ivan; Milicevic, Aleksandar
2017-12-01
The effects of the number of significant figures (NSF) retained in the interpolation polynomial coefficients (IPCs) of the weighted sum of gray gases model (WSGM) on the results of numerical investigations and on WSGM optimization were investigated. The investigation was conducted using numerical simulations of the processes inside a pulverized coal-fired furnace. The radiative properties of the gas phase were determined using the simple gray gas model (SG), the two-term WSGM (W2), and the three-term WSGM (W3). Ten sets of IPCs with the same NSF were formed for every weighting coefficient in both W2 and W3. The average and maximal relative differences of the flame temperatures, wall temperatures, and wall heat fluxes were determined. The investigation showed that the results of numerical investigations were affected by the NSF unless it exceeded a certain value. Increasing the NSF did not necessarily lead to WSGM optimization. The combination of NSFs (CNSF) was the necessary requirement for WSGM optimization.
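For reference, the quantity the IPCs feed into is the total emissivity ε(T, pL) = Σ_i a_i(T)(1 − exp(−κ_i pL)), with each weighting coefficient a_i(T) a polynomial in temperature. The coefficients below are illustrative, not a published WSGM set; rounding them to fewer significant figures visibly perturbs ε, which is the effect studied.

```python
# Sketch: WSGM emissivity and its sensitivity to truncated coefficients.
import numpy as np

kappa = np.array([0.4, 7.0, 120.0])       # gray-gas absorption coefficients
b = np.array([[0.13,  2.0e-4, -4.0e-8],   # IPCs: a_i(T) = b_i0 + b_i1*T + b_i2*T^2
              [0.25, -5.0e-5,  1.0e-8],
              [0.09, -2.0e-5,  4.0e-9]])

def emissivity(T, pL, coeffs):
    a = coeffs @ np.array([1.0, T, T**2])          # weights at temperature T
    return float(np.sum(a * (1.0 - np.exp(-kappa * pL))))

print(emissivity(1500.0, 1.0, b))                  # full-precision coefficients
print(emissivity(1500.0, 1.0, np.round(b, 4)))     # fewer significant figures
```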
Evolving cell models for systems and synthetic biology.
Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio
2010-03-01
This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
Bi-orthogonal Symbol Mapping and Detection in Optical CDMA Communication System
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang
2017-12-01
In this paper, the bi-orthogonal symbol mapping and detection scheme is investigated in a time-spreading wavelength-hopping optical CDMA communication system. The carrier-hopping prime code is exploited as the signature sequence, whose out-of-phase autocorrelation is zero. Based on the orthogonality of the carrier-hopping prime code, an equal-weight orthogonal signaling scheme can be constructed, and the proposed scheme using bi-orthogonal symbol mapping and detection can be developed. The transmitted binary data bits are mapped into corresponding bi-orthogonal symbols, where the orthogonal matrix code and its complement are utilized. In the receiver, the received bi-orthogonal data symbol is fed into the maximum likelihood decoder for detection. Under such symbol mapping and detection, the proposed scheme can greatly enlarge the Euclidean distance; hence, the system performance can be drastically improved.
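A sketch of bi-orthogonal mapping and maximum-likelihood detection using an orthogonal matrix and its complement; the paper's signature sequences are carrier-hopping prime codes, so the Hadamard basis, noise model and sizes here are illustrative assumptions.

```python
# Sketch: bi-orthogonal signaling with ML (max-correlation) detection.
import numpy as np

H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]])          # orthogonal rows
symbols = np.vstack([H, -H])            # bi-orthogonal set: 8 symbols = 3 bits

def detect(r):
    """ML detection: pick the symbol with the largest correlation."""
    return int(np.argmax(symbols @ r))

tx = symbols[5]
rx = tx + np.random.default_rng(0).normal(scale=0.4, size=4)  # noisy channel
print(detect(rx))   # 5 with high probability at this noise level
```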
Sequence quality analysis tool for HIV type 1 protease and reverse transcriptase.
Delong, Allison K; Wu, Mingham; Bennett, Diane; Parkin, Neil; Wu, Zhijin; Hogan, Joseph W; Kantor, Rami
2012-08-01
Access to antiretroviral therapy is increasing globally and drug resistance evolution is anticipated. Currently, protease (PR) and reverse transcriptase (RT) sequence generation is increasing, including the use of in-house sequencing assays, and quality assessment prior to sequence analysis is essential. We created a computational HIV PR/RT Sequence Quality Analysis Tool (SQUAT) that runs in the R statistical environment. Sequence quality thresholds are calculated from a large dataset (46,802 PR and 44,432 RT sequences) from the published literature (http://hivdb.Stanford.edu). Nucleic acid sequences are read into SQUAT, identified, aligned, and translated. Nucleic acid sequences are flagged if they have more than five 1-2-base insertions; more than one 3-base insertion; more than one deletion; more than six PR or more than 18 RT ambiguous bases; more than three consecutive PR or more than four RT nucleic acid mutations; any stop codons; more than three PR or more than six RT ambiguous amino acids; more than three consecutive PR or more than four RT amino acid mutations; any unique amino acids; or <0.5% or >15% genetic distance from another submitted sequence. Thresholds are user modifiable. SQUAT output includes a summary report with detailed comments for troubleshooting of flagged sequences, histograms of pairwise genetic distances, neighbor-joining phylogenetic trees, and aligned nucleic and amino acid sequences. SQUAT is a stand-alone, free, web-independent tool to ensure use of high-quality HIV PR/RT sequences in interpretation and reporting of drug resistance, while increasing awareness and expertise and facilitating troubleshooting of potentially problematic sequences.
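A few of the flags above, reimplemented as simple checks. The tool itself runs in R; this Python fragment and its helper names are an illustrative reimplementation, not SQUAT's code.

```python
# Sketch: SQUAT-style quality flags for a protease (PR) sequence.
AMBIGUOUS = set("RYSWKMBDHVN")   # IUPAC ambiguity codes

def flag_pr_sequence(nt_seq, aa_seq):
    flags = []
    if sum(base in AMBIGUOUS for base in nt_seq.upper()) > 6:
        flags.append("more than 6 ambiguous bases")
    if "*" in aa_seq:
        flags.append("stop codon present")
    if sum(aa == "X" for aa in aa_seq) > 3:
        flags.append("more than 3 ambiguous amino acids")
    return flags

print(flag_pr_sequence("CCTCARRNNATC" * 5, "PQITLWQRPX"))
```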
An iterative inversion of weighted Radon transforms along hyperplanes
NASA Astrophysics Data System (ADS)
Goncharov, F. O.
2017-12-01
We propose iterative inversion algorithms for weighted Radon transforms R_W along hyperplanes in ℝ³. More precisely, expanding the weight W = W(x, θ), x ∈ ℝ³, θ ∈ S², into the series of spherical harmonics in θ and assuming that the zero-order term w_{0,0}(x)…
Arlaud, G J; Gagnon, J; Porter, R R
1982-01-01
1. The a- and b-chains of reduced and alkylated human complement subcomponent C1r were separated by high-pressure gel-permeation chromatography and isolated in good yield and in pure form. 2. CNBr cleavage of C1r b-chain yielded eight major peptides, which were purified by gel filtration and high-pressure reversed-phase chromatography. As determined from the sum of their amino acid compositions, these peptides accounted for a minimum molecular weight of 28 000, close to the value 29 100 calculated from the whole b-chain. 3. N-Terminal sequence determinations of C1r b-chain and its CNBr-cleavage peptides allowed the identification of about two-thirds of the amino acids of C1r b-chain. From our results, and on the basis of homology with other serine proteinases, an alignment of the eight CNBr-cleavage peptides from C1r b-chain is proposed. 4. The residues forming the 'charge-relay' system of the active site of serine proteinases (His-57, Asp-102 and Ser-195 in the chymotrypsinogen numbering) are found in the corresponding regions of C1r b-chain, and the amino acid sequence around these residues has been determined. 5. The N-terminal sequence of C1r b-chain has been extended to residue 60 and reveals that C1r b-chain lacks the 'histidine loop', a disulphide bond that is present in all other known serine proteinases.
Implementation and Performance Analysis of Parallel Assignment Algorithms on a Hypercube Computer.
1987-12-01
coupled processors because of the degree of interaction between processors imposed by the global memory [HwB84]. Another sub-class of MIMD…interaction between the individual processors [MuA87]. Many of the commercial MIMD computers available today are loosely coupled [HwB84]. 2.1.3 The Hypercube…Alpha-beta is a method usually employed in the solution of two-person zero-sum games like chess and checkers [Qui87]. The basic approach of the alpha
Influence: U. S. National Interests and the Republic of the Philippines.
1981-12-01
and fear of the zero-sum phenomenon has had a tremendous impact on the nature of global economics as well as the American concept of abundance. Mass media…castigation, and a host of other variables in terms of their impact on our overall capability in the future. The result was an ouster which will soon, if not…through either a threat or a promise, on another actor. The latter two show a kind of environmental effect which the actor's choice will stimulate
On a cost functional for H2/H∞ minimization
NASA Technical Reports Server (NTRS)
Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis
1990-01-01
A cost functional is proposed and investigated which is motivated by minimizing the energy in a structure using only collocated feedback. Defined for an H∞-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency domain cost functional is shown to have a time domain representation in terms of a Stackelberg non-zero-sum differential game.
Kunhibava, Sherin
2011-03-01
Gambling and speculation that lead to zero-sum outcomes are prohibited in Islamic finance and condemned in conventional finance. This article explores the reasons for the similarity of the objections to gambling and speculation. Three probable reasons are explored: the concept of stewardship in conventional thought and the concept of khalifa in Islam; the influence of Christianity and morality on conventional law and finance; and the concepts of the ethics of sacrifice and the ethics of tolerance.
Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control
2015-11-10
the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes: 1998], is a popular…sampled values h(k), k = 0,…,2M−1, of the exponential sum: 1. Solve the linear system for the coefficients of the Prony polynomial. 2. Compute all zeros z_j ∈ D, j = 1,…,M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1,…,M, where log is the
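The two Prony steps surviving in the snippet can be reconstructed as runnable code; the signal parameters below are illustrative.

```python
# Sketch: classical Prony recovery of the nodes z_j (hence f_j = log z_j)
# from 2M samples of an exponential sum h(k) = sum_j c_j z_j**k.
import numpy as np

M = 2
z_true = np.array([0.9 * np.exp(0.5j), 0.7 * np.exp(-1.2j)])  # z_j = exp(f_j)
c_true = np.array([1.0, 0.5])
h = np.array([np.sum(c_true * z_true**k) for k in range(2 * M)])

# 1. Solve the Hankel linear system for the monic Prony polynomial
#    p(z) = z^M + a_{M-1} z^{M-1} + ... + a_0.
H = np.array([[h[i + j] for j in range(M)] for i in range(M)])
a = np.linalg.solve(H, -h[M:2 * M])

# 2. Zeros of the Prony polynomial: numpy forms the companion matrix and
#    computes its eigenvalues. Then f_j = log z_j.
z_est = np.roots(np.concatenate([[1.0], a[::-1]]))
f_est = np.log(z_est)
print(np.sort_complex(z_est), np.sort_complex(z_true))   # should agree
```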
The influence of lay-up and thickness on composite impact damage and compression strength
NASA Technical Reports Server (NTRS)
Guynn, E. G.; Obrien, T. K.
1985-01-01
The effects of composite stacking sequence, thickness, and percentage of zero-degree plies on the size, shape, and distribution of delamination through the laminate thickness, and on residual compression strength following impact, were studied. Graphite/epoxy laminates were impacted with a 0.5-inch-diameter aluminum sphere at a prescribed low or high velocity. Impact damage was measured nondestructively by ultrasonic C-scans and X-radiography, and destructively by the deply technique, and compression strength tests were performed. It was found that differences in compression failure strain due to stacking sequence were small, while laminates with very low percentages of zero-degree plies had similar failure loads but higher failure strains than laminates with higher percentages of zero-degree plies. Failure strain did not correlate with planar impact damage area, and delaminations in impact regions were associated with matrix cracking.
Enforcing realizability in explicit multi-component species transport
McDermott, Randall J.; Floyd, Jason E.
2015-01-01
We propose a strategy to guarantee realizability of species mass fractions in explicit time integration of the partial differential equations governing fire dynamics, which is a multi-component transport problem (realizability requires all mass fractions are greater than or equal to zero and that the sum equals unity). For a mixture of n species, the conventional strategy is to solve for n − 1 species mass fractions and to obtain the nth (or “background”) species mass fraction from one minus the sum of the others. The numerical difficulties inherent in the background species approach are discussed and the potential for realizability violations is illustrated. The new strategy solves all n species transport equations and obtains density from the sum of the species mass densities. To guarantee realizability the species mass densities must remain positive (semidefinite). A scalar boundedness correction is proposed that is based on a minimal diffusion operator. The overall scheme is implemented in a publicly available large-eddy simulation code called the Fire Dynamics Simulator. A set of test cases is presented to verify that the new strategy enforces realizability, does not generate spurious mass, and maintains second-order accuracy for transport. PMID:26692634
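A minimal numerical contrast between the two bookkeeping schemes (values illustrative; the simple clip below stands in for the paper's minimal-diffusion boundedness correction):

```python
# Sketch: background-species vs. sum-based species bookkeeping.
import numpy as np

# Background-species approach: transport n-1 fractions, infer the nth.
Y_others = np.array([0.65, 0.45])      # slight advection overshoot
Y_bg = 1.0 - Y_others.sum()            # = -0.10: realizability violated

# Sum-based approach: transport all n partial densities, derive density.
rho_s = np.array([0.7, 0.4, -0.02])    # one density dips negative after a step
rho_s = np.maximum(rho_s, 0.0)         # boundedness correction (simple clip
                                       # in place of the minimal-diffusion
                                       # operator proposed in the paper)
rho = rho_s.sum()                      # mixture density = sum of species
Y = rho_s / rho                        # mass fractions: >= 0 and sum to 1
print(Y_bg, Y, Y.sum())
```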
Obtaining orthotropic elasticity tensor using entries zeroing method.
NASA Astrophysics Data System (ADS)
Gierlach, Bartosz; Danek, Tomasz
2017-04-01
A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except the isotropic one, this problem is nonlinear. A common method of obtaining an effective tensor is to choose a non-trivial symmetry class and minimize the Frobenius norm between the measured and effective tensors in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor, under the assumption that it is orthotropic (or at least close in form to an orthotropic one). In orthotropic form, 24 of the 36 tensor entries are zero. The idea is to minimize the sum of squared entries that are supposed to equal zero, through a rotation calculated with an optimization algorithm - in this case, the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. To avoid settling on a local minimum, we apply PSO several times and accept a value as correct only once we have obtained similar results three times. The Monte Carlo method was used to analyze the results. After thousands of single runs of the PSO optimization, we obtained values of the quaternion components and plotted them. The points concentrate at several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Then thousands of realizations of a generally anisotropic tensor were generated: each tensor entry was replaced with a random value drawn from a normal distribution with mean equal to the measured tensor entry and standard deviation equal to that of the measurement. Each of these tensors was subjected to PSO-based optimization, delivering a quaternion for the optimal rotation. Computations were parallelized with OpenMP to decrease computational time, enabling different tensors to be processed by different threads. As a result, the distributions of the rotated tensor entries were obtained. For the entries that were to be zeroed we observe almost normal distributions with mean equal to zero, or sums of two normal distributions with means of opposite sign. The non-zero entries exhibit different distributions with two or three maxima. Analysis of the results shows that the described method produces consistent values of the quaternions used to rotate the tensors. Despite a less complex target function in the optimization than in the common approach, the entries zeroing method provides results that can be used to obtain an orthotropic tensor with good reliability. A modification of the method could also yield a tool for obtaining effective tensors belonging to other symmetry classes. This research was supported by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
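A compact sketch of the entries-zeroing objective described here, assuming Voigt notation for the 6x6 matrix and using SciPy's differential evolution as a stand-in for the authors' PSO (SciPy ships no PSO); all names are illustrative:

```python
import numpy as np
from scipy.optimize import differential_evolution  # stand-in for PSO

VOIGT = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
# Entries that vanish for an orthotropic tensor in its natural frame:
# everything outside the upper-left 3x3 block and the diagonal (24 of 36).
ZERO_MASK = np.array([[not (a < 3 and b < 3) and a != b
                       for b in range(6)] for a in range(6)])

def quat_to_rot(q):
    # Rotation matrix from a (normalized) quaternion (w, x, y, z).
    w, x, y, z = np.asarray(q, float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def rotate(C, R):
    # Rotate the 3x3x3x3 elasticity tensor: C'_ijkl = R_ia R_jb R_kc R_ld C_abcd.
    return np.einsum('ia,jb,kc,ld,abcd->ijkl', R, R, R, R, C)

def voigt(C):
    return np.array([[C[i, j, k, l] for (k, l) in VOIGT] for (i, j) in VOIGT])

def objective(q, C):
    # Sum of squared entries that should be zero in the orthotropic frame.
    M = voigt(rotate(C, quat_to_rot(q)))
    return np.sum(M[ZERO_MASK] ** 2)

# For a measured tensor C (3x3x3x3 array), a global population-based search
# over the quaternion parametrization would look like:
# result = differential_evolution(objective, [(-1, 1)] * 4, args=(C,))
```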
El Sanharawi, Imane; Tzarouchi, Loukia; Cardoen, Liesbeth; Martinerie, Laetitia; Leger, Juliane; Carel, Jean-Claude; Elmaleh-Berges, Monique; Alison, Marianne
2017-05-01
In anterior pituitary deficiency, patients with non visible pituitary stalk have more often multiple deficiencies and persistent deficiency than patients with visible pituitary stalk. To compare the diagnostic value of a high-resolution heavily T2-weighted sequence to 1.5-mm-thick unenhanced and contrast-enhanced sagittal T1-weighted sequences to assess the presence of the pituitary stalk in children with ectopic posterior pituitary gland. We retrospectively evaluated the MRI data of 14 children diagnosed with ectopic posterior pituitary gland between 2010 and 2014. We evaluated the presence of a pituitary stalk using a sagittal high-resolution heavily T2-weighted sequence and a 1.5-mm sagittal T1-weighted turbo spin-echo sequence before and after contrast medium administration. A pituitary stalk was present on at least one of the sequences in 10 of the 14 children (71%). T2-weighted sequence depicted the pituitary stalk in all 10 children, whereas the 1.5-mm-thick T1-weighted sequence depicted 2/10 (20%) before contrast injection and 8/10 (80%) after contrast injection (P=0.007). Compared with 1.5-mm-thick contrast-enhanced T1-weighted sequences, high-resolution heavily T2-weighted sequence demonstrates better sensitivity in detecting the pituitary stalk in children with ectopic posterior pituitary gland, suggesting that contrast injection is unnecessary to assess the presence of a pituitary stalk in this setting.
Comparison of Dixon Sequences for Estimation of Percent Breast Fibroglandular Tissue
Ledger, Araminta E. W.; Scurr, Erica D.; Hughes, Julie; Macdonald, Alison; Wallace, Toni; Thomas, Karen; Wilson, Robin; Leach, Martin O.; Schmidt, Maria A.
2016-01-01
Objectives To evaluate sources of error in the Magnetic Resonance Imaging (MRI) measurement of percent fibroglandular tissue (%FGT) using two-point Dixon sequences for fat-water separation. Methods Ten female volunteers (median age: 31 yrs, range: 23–50 yrs) gave informed consent following Research Ethics Committee approval. Each volunteer was scanned twice following repositioning to enable an estimation of measurement repeatability from high-resolution gradient-echo (GRE) proton-density (PD)-weighted Dixon sequences. Differences in measures of %FGT attributable to resolution, T1 weighting and sequence type were assessed by comparison of this Dixon sequence with low-resolution GRE PD-weighted Dixon data, and against gradient-echo (GRE) or spin-echo (SE) based T1-weighted Dixon datasets, respectively. Results %FGT measurement from high-resolution PD-weighted Dixon sequences had a coefficient of repeatability of ±4.3%. There was no significant difference in %FGT between high-resolution and low-resolution PD-weighted data. Values of %FGT from GRE and SE T1-weighted data were strongly correlated with that derived from PD-weighted data (r = 0.995 and 0.96, respectively). However, both sequences exhibited higher mean %FGT by 2.9% (p < 0.0001) and 12.6% (p < 0.0001), respectively, in comparison with PD-weighted data; the increase in %FGT from the SE T1-weighted sequence was significantly larger at lower breast densities. Conclusion Although measurement of %FGT at low resolution is feasible, T1 weighting and sequence type impact on the accuracy of Dixon-based %FGT measurements; Dixon MRI protocols for %FGT measurement should be carefully considered, particularly for longitudinal or multi-centre studies. PMID:27011312
Gutierrez, Shandra; Descamps, Benedicte; Vanhove, Christian
2015-01-01
Computed tomography (CT) is the standard imaging modality in radiation therapy treatment planning (RTP). However, magnetic resonance (MR) imaging provides superior soft tissue contrast, increasing the precision of target volume selection. We present MR-only based RTP for a rat brain on a small animal radiation research platform (SARRP) using probabilistic voxel classification with multiple MR sequences. Six rat heads were imaged, each with one CT and five MR sequences. The MR sequences were: T1-weighted, T2-weighted, zero-echo time (ZTE), and two ultra-short echo time sequences with 20 μs (UTE1) and 2 ms (UTE2) echo times. CT data were manually segmented into air, soft tissue, and bone to obtain the RTP reference. Bias field corrected MR images were automatically segmented into the same tissue classes using a fuzzy c-means segmentation algorithm with multiple images as input. Similarities between segmented CT and automatic segmented MR (ASMR) images were evaluated using Dice coefficient. Three ASMR images with high similarity index were used for further RTP. Three beam arrangements were investigated. Dose distributions were compared by analysing dose volume histograms. The highest Dice coefficients were obtained for the ZTE-UTE2 combination and for the T1-UTE1-T2 combination when ZTE was unavailable. Both combinations, along with UTE1-UTE2, often used to generate ASMR images, were used for further RTP. Using 1 beam, MR based RTP underestimated the dose to be delivered to the target (range: 1.4%-7.6%). When more complex beam configurations were used, the calculated dose using the ZTE-UTE2 combination was the most accurate, with 0.7% deviation from CT, compared to 0.8% for T1-UTE1-T2 and 1.7% for UTE1-UTE2. The presented MR-only based workflow for RTP on a SARRP enables both accurate organ delineation and dose calculations using multiple MR sequences. This method can be useful in longitudinal studies where CT's cumulative radiation dose might contribute to the total dose.
Bjelica, Dusko; Idrizovic, Kemal; Popovic, Stevo; Sisic, Nedim; Sekulic, Damir; Ostojic, Ljerka; Spasic, Miodrag; Zenic, Natasa
2016-01-01
Substance use and misuse (SUM) in adolescence is a significant public health problem and the extent to which adolescents exhibit SUM behaviors differs across ethnicity. This study aimed to explore the ethnicity-specific and gender-specific associations among sports factors, familial factors, and personal satisfaction with physical appearance (i.e., covariates) and SUM in a sample of adolescents from Federation of Bosnia and Herzegovina. In this cross-sectional study the participants were 1742 adolescents (17–18 years of age) from Bosnia and Herzegovina who were in their last year of high school education (high school seniors). The sample comprised 772 Croatian (558 females) and 970 Bosniak (485 females) adolescents. Variables were collected using a previously developed and validated questionnaire that included questions on SUM (alcohol drinking, cigarette smoking, and consumption of other drugs), sport factors, parental education, socioeconomic status, and satisfaction with physical appearance and body weight. The consumption of cigarettes remains high (37% of adolescents smoke cigarettes), with a higher prevalence among Croatians. Harmful drinking is also alarming (evidenced in 28.4% of adolescents). The consumption of illicit drugs remains low with 5.7% of adolescents who consume drugs, with a higher prevalence among Bosniaks. A higher likelihood of engaging in SUM is found among children who quit sports (for smoking and drinking), boys who perceive themselves to be good looking (for smoking), and girls who are not satisfied with their body weight (for smoking). Higher maternal education is systematically found to be associated with greater SUM in Bosniak girls. Information on the associations presented herein could be discretely disseminated as a part of regular school administrative functions. The results warrant future prospective studies that more precisely identify the causality among certain variables. PMID:27690078
NASA Astrophysics Data System (ADS)
Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia
2018-04-01
In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.
NASA Astrophysics Data System (ADS)
Zhao, Yumin
1997-07-01
By the techniques of the Wick theorem for coupled clusters, non-energy-weighted electromagnetic sum-rule calculations are presented in the sdg neutron-proton interacting boson model, the nuclear pair shell model, and the fermion-dynamical symmetry model. This project was supported by the Development Project Foundation of China, the National Natural Science Foundation of China, the Doctoral Education Fund of the National Education Committee, and the Fundamental Research Fund of Southeast University.
DNA unzipping phase diagram calculated via replica theory.
Roland, C Brian; Hatch, Kristi Adamson; Prentiss, Mara; Shakhnovich, Eugene I
2009-05-01
We show how single-molecule unzipping experiments can provide strong evidence that the zero-force melting transition of long molecules of natural dsDNA should be classified as a phase transition of the higher-order type (continuous). Toward this end, we study a statistical-mechanics model for the fluctuating structure of a long molecule of dsDNA, and compute the equilibrium phase diagram for the experiment in which the molecule is unzipped under applied force. We consider a perfect-matching dsDNA model, in which the loops are volume-excluding chains with arbitrary loop exponent c. We include stacking interactions, hydrogen bonds, and main-chain entropy. We include sequence heterogeneity at the level of random sequences; in particular, there is no correlation in the base-pairing (bp) energy from one sequence position to the next. We present heuristic arguments to demonstrate that the low-temperature macrostate does not exhibit degenerate ergodicity breaking. We use this claim to understand the results of our replica-theoretic calculation of the equilibrium properties of the system. As a function of temperature, we obtain the minimal force at which the molecule separates completely. This critical-force curve is a line in the temperature-force phase diagram that marks the region where the molecule exists primarily as a double helix versus the region where it exists as two separate strands. We compare our random-sequence model to magnetic tweezer experiments performed on the 48 502 bp genome of bacteriophage lambda. We find good agreement with the experimental data, which are restricted to temperatures between 24 and 50 °C. At higher temperatures, the critical-force curve of our random-sequence model is very different from that of the homogeneous-sequence version of our model. For both sequence models, the critical force falls to zero at the melting temperature T_c like |T − T_c|^α. For the homogeneous-sequence model, α = 1/2 almost exactly, while for the random-sequence model, α ≈ 0.9. Importantly, the shape of the critical-force curve is connected, via our theory, to the manner in which the helix fraction falls to zero at T_c. The helix fraction is the property that is used to classify the melting transition as a type of phase transition. In our calculation, the shape of the critical-force curve provides strong evidence that the zero-force melting transition of long natural dsDNA should be classified as a higher-order (continuous) phase transition. Specifically, the order is 3rd or greater.
Code of Federal Regulations, 2010 CFR
2010-07-01
... zero and span settings of the smokemeter. (If a recorder is used, a chart speed of approximately one... collection, it shall be run at a minimum chart speed of one inch per minute during the idle mode and... zero and full scale response may be rechecked and reset during the idle mode of each test sequence. (v...
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroid and radius of each metastasis were determined from the treatment planning system. In-house software used these, together with weighted and unweighted versions of the k-means clustering algorithm, to group the targets to be treated with a single isocenter and to position each isocenter. The algorithm results were evaluated using the within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
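The abstract does not spell out its weighting scheme beyond "weighted and unweighted versions"; a plausible minimal sketch, with the weights standing in for target size, is:

```python
import numpy as np

def weighted_kmeans(points, weights, k, iters=50, seed=0):
    # Lloyd-style iteration in which each isocenter (cluster center) is the
    # weighted centroid of its targets; unit weights recover the ordinary
    # (unweighted) k-means used for comparison in the study.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = labels == j
            if members.any():
                centers[j] = np.average(points[members], axis=0,
                                        weights=weights[members])
    return centers, labels

def wcss(points, centers, labels):
    # Within-cluster sum of squares, one of the two evaluation metrics.
    return sum(((points[labels == j] - c) ** 2).sum()
               for j, c in enumerate(centers))
```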
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2012 CFR
2012-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2010 CFR
2010-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2014 CFR
2014-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2013 CFR
2013-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2011 CFR
2011-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
Simple data-smoothing and noise-suppression technique
NASA Technical Reports Server (NTRS)
Duty, R. L.
1970-01-01
Algorithm, based on the Borel method of summing divergent sequences, is used for smoothing noisy data where knowledge of frequency content is not required. Technique's effectiveness is demonstrated by a series of graphs.
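The memo's exact formulation is not reproduced in this summary. For orientation only, one standard reading: the Borel transform of a sequence, truncated at a finite argument t, is a Poisson-weighted average of the terms, which smooths without any knowledge of frequency content. A sketch under that assumption:

```python
import numpy as np
from scipy.special import gammaln

def borel_weighted_mean(s, t):
    # Truncated Borel transform e^{-t} * sum_k s_k t^k / k!: a Poisson(t)-
    # weighted average of the sequence (t > 0); larger t emphasizes later
    # terms. Log-space weights avoid factorial overflow.
    k = np.arange(len(s))
    w = np.exp(-t + k * np.log(t) - gammaln(k + 1))
    return np.sum(w * np.asarray(s, float)) / np.sum(w)  # renormalized
```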
Analog hardware for delta-backpropagation neural networks
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P. (Inventor)
1992-01-01
This is a fully parallel analog backpropagation learning processor. It comprises a plurality of programmable resistive memory elements serving as synapse connections, whose values can be weighted during learning, together with buffer amplifiers, summing circuits, and sample-and-hold circuits arranged in a plurality of neuron layers, in accordance with delta-backpropagation algorithms modified to control weight changes due to circuit drift.
Least-Squares Analysis of Data with Uncertainty in "y" and "x": Algorithms in Excel and KaleidaGraph
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2018-01-01
For the least-squares analysis of data having multiple uncertain variables, the generally accepted best solution comes from minimizing the sum of weighted squared residuals over all uncertain variables, with, for example, the weights in x_i taken as inversely proportional to the variance δ_xi². A complication…
How Can "Weightless" Astronauts Be Weighed?
ERIC Educational Resources Information Center
Carnicer, Jesus; Reyes, Francisco; Guisasola, Jenaro
2012-01-01
In introductory physics courses, within the context of studying Newton's laws, it is common to consider the problem of a body's "weight" when it is in free fall. The solution shows that the "weight" is zero and this leads to a discussion of the concept of weight. There are permanent free-fall situations such as astronauts in a spacecraft orbiting…
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Karniel, Amir; Avraham, Guy; Peles, Bat-Chen; Levy-Tzedek, Shelly; Nisky, Ilana
2010-01-01
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test, for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG. (i) By calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) By comparing two weighted sums of human and model handshakes we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) By comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake. PMID:21206462
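A sketch of how a point of subjective equality (PSE) can be extracted from such forced-choice answers by fitting a psychometric curve; the logistic form and the data below are illustrative assumptions, not the authors' protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(w, pse, width):
    # Psychometric curve: probability of judging the mixture "more
    # human-like" as a function of the human weight w in the weighted sum.
    return 1.0 / (1.0 + np.exp(-(w - pse) / width))

# w: human weights used in the stimuli; p: observed proportion of
# "more human-like" answers at each weight (made-up illustrative data).
w = np.linspace(0.0, 1.0, 6)
p = np.array([0.1, 0.2, 0.35, 0.6, 0.8, 0.9])
(pse, width), _ = curve_fit(logistic, w, p, p0=[0.5, 0.2])
print(pse)  # the fitted 50% point of the curve
```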
Fusion of classifiers for REIS-based detection of suspicious breast lesions
NASA Astrophysics Data System (ADS)
Lederman, Dror; Wang, Xingwei; Zheng, Bin; Sumkin, Jules H.; Tublin, Mitchell; Gur, David
2011-03-01
After developing a multi-probe resonance-frequency electrical impedance spectroscopy (REIS) system aimed at detecting women with breast abnormalities that may indicate a developing breast cancer, we have been conducting a prospective clinical study to explore the feasibility of applying this REIS system to classify younger women (< 50 years old) into two groups of "higher-than-average risk" and "average risk" of having or developing breast cancer. The system comprises one central probe placed in contact with the nipple, and six additional probes uniformly distributed along an outside circle to be placed in contact with six points on the outer breast skin surface. In this preliminary study, we selected an initial set of 174 examinations on participants that have completed REIS examinations and have clinical status verification. Among these, 66 examinations were recommended for biopsy due to findings of a highly suspicious breast lesion ("positives"), and 108 were determined as negative during imaging based procedures ("negatives"). A set of REIS-based features, extracted using a mirror-matched approach, was computed and fed into five machine learning classifiers. A genetic algorithm was used to select an optimal subset of features for each of the five classifiers. Three fusion rules, namely sum rule, weighted sum rule and weighted median rule, were used to combine the results of the classifiers. Performance evaluation was performed using a leave-one-case-out cross-validation method. The results indicated that REIS may provide a new technology to identify younger women with higher than average risk of having or developing breast cancer. Furthermore, it was shown that fusion rule, such as a weighted median fusion rule and a weighted sum fusion rule may improve performance as compared with the highest performing single classifier.
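The three fusion rules named here have standard forms; a sketch under the assumption that each classifier outputs one score per case (the weights and scores below are made up):

```python
import numpy as np

def sum_rule(scores):
    # Plain sum (mean) of the classifier scores for each case.
    return scores.mean(axis=0)

def weighted_sum_rule(scores, w):
    # Convex combination of classifier scores; the weights could be tuned,
    # e.g., by the genetic algorithm mentioned above.
    w = np.asarray(w, float) / np.sum(w)
    return w @ scores

def weighted_median_rule(scores, w):
    # Per-case weighted median: sort the classifier scores and take the
    # value at which the cumulative weight first reaches half the total.
    scores = np.asarray(scores, float)
    w = np.asarray(w, float) / np.sum(w)
    fused = np.empty(scores.shape[1])
    for i in range(scores.shape[1]):
        order = np.argsort(scores[:, i])
        cum = np.cumsum(w[order])
        fused[i] = scores[order, i][np.searchsorted(cum, 0.5)]
    return fused

# scores: one row of case scores per classifier.
scores = np.array([[0.2, 0.9, 0.6],
                   [0.3, 0.8, 0.4],
                   [0.1, 0.7, 0.5]])
print(sum_rule(scores), weighted_sum_rule(scores, [3, 2, 1]),
      weighted_median_rule(scores, [3, 2, 1]))
```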
Mäkinen, Mauno; Marttunen, Mauri; Komulainen, Erkki; Terevnikov, Viacheslav; Puukko-Viertomies, Leena-Riitta; Aalberg, Veikko; Lindberg, Nina
2015-01-01
The proportion of overweight and obese youths is high. The present study aimed to investigate the development of self-image and its components during a one-year follow-up among non-referred adolescents with excess and normal weight. Furthermore, we analyzed the data separately for girls and boys. Altogether, 86 eighth-graders (41 girls and 45 boys) with a relative weight of 26% or more above the median and 91 controls (43 girls and 48 boys) with normal weight participated in the follow-up. The Offer Self-Image Questionnaire, Revised (OSIQ-R) was used to assess self-image at baseline and on follow-up. In the OSIQ-R, a low total raw score implies positive adjustment, while a high raw score implies poor adjustment and a negative self-image. The study design was doubly correlated (pairs and time), and a linear mixed model was used in the statistical analysis. In OSIQ-R total scores, a comparative improvement was observed in girls with normal weight. Among these girls, change scores significantly different from zero were seen in impulse control, social functioning, vocational attitudes, self-confidence, self-reliance, body image, sexuality, and ethical values. In girls with excess weight, none of the change scores differed significantly from zero. When the girls with normal and excess weight were compared, the difference in change scores was largest in sexuality and vocational attitudes. Change scores significantly different from zero were seen in sexuality and idealism for boys with excess weight, and in impulse control, mental health, self-reliance, and sexuality for normal-weight boys. When the boys with excess and normal weight were compared, no statistically significant differences emerged in change scores. In mid-adolescent girls, the influence of overweight and obesity on the development of self-image is substantial. Weight management programs directed at overweight adolescent girls should include psychological interventions aiming to diminish self-image distress, especially that associated with feelings, attitudes, and behavior towards the opposite sex, as well as future career plans.
A Person Stands on a Balance in an Elevator: What Happens When the Elevator Starts to Fall?
ERIC Educational Resources Information Center
Balukovic, Jasmina; Slisko, Josip; Cruz, Adrián Corona
2018-01-01
Physics textbook authors commonly introduce the concept of weightlessness (apparent or real) through a "thought experiment" in which a person weighs herself or himself in an elevator. When the elevator falls freely, the spring balance should show zero weight. There is an unresolved controversy about how this "zero reading"…
Micro-Sugar-Snap and -Wire-Cut Cookie Baking with Trans- and Zero-Trans-Fat Shortenings
USDA-ARS?s Scientific Manuscript database
The effect of trans- and zero-trans-fat shortenings on cookie-baking performance was evaluated, using the two AACC micro-cookie-baking methods. Regardless of fat type, sugar-snap cookies made with a given flour were larger in diameter, smaller in height, and greater in weight loss during baking tha...
NASA Astrophysics Data System (ADS)
Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.
2016-05-01
Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
Nodal weighting factor method for ex-core fast neutron fluence evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, R. T.
The nodal weighting factor method is developed for evaluating ex-core fast neutron flux in a nuclear reactor by utilizing the adjoint neutron flux, a fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV, the unit fission source, and relative assembly nodal powers. The method determines each nodal weighting factor for ex-core fast neutron flux evaluation by solving the steady-state adjoint neutron transport equation with the fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV as the adjoint source, by integrating the unit fission source with a typical fission spectrum against the solved adjoint flux over all energies, all angles, and the given nodal volume, and by dividing by the sum of all nodal weighting factors, which serves as a normalization factor. The fast neutron flux can then be obtained by summing, over an operating period, the relative nodal powers times the corresponding nodal weighting factors of the adjacent, significantly contributing peripheral assembly nodes, times a proper fast neutron attenuation coefficient. A generic set of nodal weighting factors can be used to evaluate neutron fluence at the same location for similar core designs and fuel cycles, but the set of nodal weighting factors needs to be re-calibrated for a transition fuel cycle. This newly developed nodal weighting factor method should be a useful and simplified tool for evaluating fast neutron fluence at selected locations of interest in ex-core components of contemporary nuclear power reactors.
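Read literally, the recipe can be transcribed schematically as follows; the notation here is assumed, not taken from the paper (φ† is the adjoint flux driven by the fictitious unit detector cross section, χ the typical fission spectrum, P_n the relative nodal power, f_n a fast-neutron attenuation coefficient):

```latex
\[
w_n \;=\; \frac{\displaystyle\int_{V_n}\!\int_{4\pi}\!\int_0^{\infty}
      \chi(E)\,\phi^{\dagger}(\mathbf{r},\boldsymbol{\Omega},E)\,
      \mathrm{d}E\,\mathrm{d}\Omega\,\mathrm{d}V}
      {\displaystyle\sum_m \int_{V_m}\!\int_{4\pi}\!\int_0^{\infty}
      \chi(E)\,\phi^{\dagger}\,
      \mathrm{d}E\,\mathrm{d}\Omega\,\mathrm{d}V},
\qquad
\Phi_{\mathrm{fast}} \;\approx\; \sum_n P_n\, w_n\, f_n .
\]
```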
Complex networks in the Euclidean space of communicability distances
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiency of spatial uses. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
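A small sketch of this distance under the standard communicability convention G = e^A (walks of length k weighted by 1/k!), which matches the description above; treating the quantity as a squared Euclidean distance is an assumption consistent with the embedding discussed:

```python
import numpy as np
from scipy.linalg import expm

def communicability_distance(A):
    # G = e^A sums all walks between nodes, a walk of length k weighted by
    # 1/k!; the squared distance is G_pp + G_qq - 2*G_pq, i.e. the
    # self-returning walks at p and q minus twice the p-q walks.
    G = expm(np.asarray(A, float))
    g = np.diag(G)
    xi2 = g[:, None] + g[None, :] - 2.0 * G
    return np.sqrt(np.maximum(xi2, 0.0))

# Path graph on three nodes: the two end nodes come out "farther apart".
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)
print(communicability_distance(A))
```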
An assessment of some non-gray global radiation models in enclosures
NASA Astrophysics Data System (ADS)
Meulemans, J.
2016-01-01
The accuracy of several non-gray global gas/soot radiation models, namely the Wide-Band Correlated-K (WBCK) model, the Spectral Line Weighted-sum-of-gray-gases model with one optimized gray gas (SLW-1), the (non-gray) Weighted-Sum-of-Gray-Gases (WSGG) model with different sets of coefficients (Smith et al., Soufiani and Djavdan, Taylor and Foster) was assessed on several test cases from the literature. Non-isothermal (or isothermal) participating media containing non-homogeneous (or homogeneous) mixtures of water vapor, carbon dioxide and soot in one-dimensional planar enclosures and multi-dimensional rectangular enclosures were investigated. For all the considered test cases, a benchmark solution (LBL or SNB) was used in order to compute the relative error of each model on the predicted radiative source term and the wall net radiative heat flux.
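For reference, the (non-gray) WSGG family evaluates total emissivity as a weighted sum over a few gray gases plus a transparent window; the sketch below uses made-up coefficients, not any of the published sets (Smith et al., Soufiani and Djavdan, Taylor and Foster) named above:

```python
import numpy as np

def wsgg_emissivity(a, k, pL):
    """Weighted-sum-of-gray-gases total emissivity:
    eps = sum_i a_i * (1 - exp(-k_i * pL)) over the gray gases, where the
    transparent window carries the remaining weight 1 - sum_i a_i and no
    absorption. a_i(T) and k_i come from published fits."""
    a, k = np.asarray(a, float), np.asarray(k, float)
    return float(np.sum(a * (1.0 - np.exp(-k * pL))))

# Illustrative (made-up) three-gray-gas coefficients at a fixed temperature:
print(wsgg_emissivity([0.4, 0.2, 0.1], [0.5, 5.0, 50.0], pL=1.0))
```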
Quantum vacuum effects from boundaries of designer potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konopka, Tomasz
2009-04-15
Vacuum energy in quantum field theory, being the sum of zero-point energies of all field modes, is formally infinite, yet after regularization or renormalization it can give rise to finite observable effects. One way of understanding how these effects arise is to compute the vacuum energy in an idealized system such as a large cavity divided into disjoint regions by pistons. In this paper, this type of calculation is carried out for situations where the potential affecting a field is not the same in all regions of the cavity. It is shown that the observable parts of the vacuum energy in such potentials do not fall off to zero as the region where the potential is nontrivial becomes large. This unusual behavior might be interesting for tests involving quantum vacuum effects and for studies on the relation between vacuum energy in quantum field theory and geometry.
Entanglement of Distillation for Lattice Gauge Theories.
Van Acoleyen, Karel; Bultinck, Nick; Haegeman, Jutho; Marien, Michael; Scholz, Volkher B; Verstraete, Frank
2016-09-23
We study the entanglement structure of lattice gauge theories from the local operational point of view, and, similar to Soni and Trivedi [J. High Energy Phys. 1 (2016) 1], we show that the usual entanglement entropy for a spatial bipartition can be written as the sum of an undistillable gauge part and another part corresponding to the local operations and classical communication (LOCC) distillable entanglement, which is obtained by depolarizing the local superselection sectors. We demonstrate that the distillable entanglement is zero for pure Abelian gauge theories at zero gauge coupling, while it is in general nonzero for the non-Abelian case. We also consider gauge theories with matter, and show in a perturbative approach how area laws, including a topological correction, emerge for the distillable entanglement. Finally, we also discuss the entanglement entropy of gauge-fixed states and show that it has no relation to the physical distillable entropy.
An evaluation of ozone exposure metrics for a seasonally drought-stressed ponderosa pine ecosystem.
Panek, Jeanne A; Kurpius, Meredith R; Goldstein, Allen H
2002-01-01
Ozone stress has become an increasingly significant factor in cases of forest decline reported throughout the world. Current metrics to estimate ozone exposure for forest trees are derived from atmospheric concentrations and assume that the forest is physiologically active at all times of the growing season. This may be inaccurate in regions with a Mediterranean climate, such as California and the Pacific Northwest, where peak physiological activity occurs early in the season to take advantage of high soil moisture and does not correspond to peak ozone concentrations. It may also misrepresent ecosystems experiencing non-average climate conditions such as drought years. We compared direct measurements of ozone flux into a ponderosa pine canopy with a suite of the most common ozone exposure metrics to determine which best correlated with actual ozone uptake by the forest. Of the metrics we assessed, SUM0 (the sum of all daytime ozone concentrations > 0) best corresponded to ozone uptake by ponderosa pine; however, the correlation was only strong at times when the stomata were unconstrained by site moisture conditions. In the early growing season (May and June), SUM0 was an adequate metric for forest ozone exposure. Later in the season, when stomatal conductance was limited by drought, SUM0 overestimated ozone uptake. A better metric for seasonally drought-stressed forests would be one that incorporates forest physiological activity, either through mechanistic modeling, by weighting ozone concentrations by stomatal conductance, or by weighting concentrations by site moisture conditions.
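SUM0 and the conductance-weighted alternative suggested in the last sentence are straightforward to state in code; the normalization below is an illustrative assumption, not the authors' specification:

```python
import numpy as np

def sum0(ozone, daytime):
    # SUM0: the sum of all daytime ozone concentrations above a zero
    # threshold, accumulated over the period of interest.
    return float(np.sum(ozone[daytime]))

def conductance_weighted_exposure(ozone, daytime, gs):
    # Weight concentrations by stomatal conductance (normalized here to its
    # seasonal maximum) so that drought-limited periods contribute less.
    w = gs / gs.max()
    return float(np.sum((ozone * w)[daytime]))
```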
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
Weight distributions for turbo codes using random and nonrandom permutations
NASA Technical Reports Server (NTRS)
Dolinar, S.; Divsalar, D.
1995-01-01
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
Wang, Zhong; Zeng, Ximin; Mo, Yiming; Smith, Katie; Guo, Yuming; Lin, Jun
2012-12-01
Antibiotic growth promoters (AGPs) have been used as feed additives to improve average body weight gain and feed efficiency in food animals for more than 5 decades. However, there is a worldwide trend to limit AGP use to protect food safety and public health, which raises an urgent need to discover effective alternatives to AGPs. The growth-promoting effect of AGPs has been shown to be highly correlated with the decreased activity of intestinal bile salt hydrolase (BSH), an enzyme that is produced by various gut microflora and involved in host lipid metabolism. Thus, BSH inhibitors are likely promising alternatives to AGPs for improving animal growth performance. In this study, the genome of Lactobacillus salivarius NRRL B-30514, a BSH-producing strain isolated from chicken, was sequenced by a 454 GS FLX sequencer. A BSH gene identified by genome analysis was cloned and expressed in an Escherichia coli expression system for enzymatic analyses. The BSH displayed efficient hydrolysis activity for both glycoconjugated and tauroconjugated bile salts, with slightly higher catalytic efficiencies (k_cat/K_m) on glycoconjugated bile salts. The optimal pH and temperature for the BSH activity were 5.5 and 41°C, respectively. Examination of a panel of dietary compounds using the purified BSH identified some potent BSH inhibitors, among which copper and zinc have recently been demonstrated to promote feed digestion and body weight gain in different food animals. In sum, this study identified and characterized a BSH with broad substrate specificity from a chicken L. salivarius strain and established a solid platform for discovering novel BSH inhibitors, promising feed additives to replace AGPs for enhancing the productivity and sustainability of food animals.
Schrempft, Stephanie; van Jaarsveld, Cornelia H M; Fisher, Abigail; Wardle, Jane
2015-01-01
The home environment is thought to play a key role in early weight trajectories, although direct evidence is limited. There is general agreement that multiple factors exert small individual effects on weight-related outcomes, so use of composite measures could demonstrate stronger effects. This study therefore examined whether composite measures reflecting the 'obesogenic' home environment are associated with diet, physical activity, TV viewing, and BMI in preschool children. Families from the Gemini cohort (n = 1096) completed a telephone interview (Home Environment Interview; HEI) when their children were 4 years old. Diet, physical activity, and TV viewing were reported at interview. Child height and weight measurements were taken by the parents (using standard scales and height charts) and reported at interview. Responses to the HEI were standardized and summed to create four composite scores representing the food (sum of 21 variables), activity (sum of 6 variables), media (sum of 5 variables), and overall (food composite/21 + activity composite/6 + media composite/5) home environments. These were categorized into 'obesogenic risk' tertiles. Children in 'higher-risk' food environments consumed less fruit (OR; 95% CI = 0.39; 0.27-0.57) and vegetables (0.47; 0.34-0.64), and more energy-dense snacks (3.48; 2.16-5.62) and sweetened drinks (3.49; 2.10-5.81) than children in 'lower-risk' food environments. Children in 'higher-risk' activity environments were less physically active (0.43; 0.32-0.59) than children in 'lower-risk' activity environments. Children in 'higher-risk' media environments watched more TV (3.51; 2.48-4.96) than children in 'lower-risk' media environments. Neither the individual nor the overall composite measures were associated with BMI. Composite measures of the obesogenic home environment were associated as expected with diet, physical activity, and TV viewing. Associations with BMI were not apparent at this age.
Optical Oversampled Analog-to-Digital Conversion
1992-06-29
hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact...optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions
Majorana spin in magnetic atomic chain systems
NASA Astrophysics Data System (ADS)
Li, Jian; Jeon, Sangjun; Xie, Yonglong; Yazdani, Ali; Bernevig, B. Andrei
2018-03-01
In this paper, we establish that Majorana zero modes emerging from a topological band structure of a chain of magnetic atoms embedded in a superconductor can be distinguished from trivial localized zero energy states that may accidentally form in this system using spin-resolved measurements. To demonstrate this key Majorana diagnostics, we study the spin composition of magnetic impurity induced in-gap Shiba states in a superconductor using a hybrid model. By examining the spin and spectral densities in the context of the Bogoliubov-de Gennes (BdG) particle-hole symmetry, we derive a sum rule that relates the spin densities of localized Shiba states with those in the normal state without superconductivity. Extending our investigations to a ferromagnetic chain of magnetic impurities, we identify key features of the spin properties of the extended Shiba state bands, as well as those associated with a localized Majorana end mode when the effect of spin-orbit interaction is included. We then formulate a phenomenological theory for the measurement of the local spin densities with spin-polarized scanning tunneling microscopy (STM) techniques. By combining the calculated spin densities and the measurement theory, we show that spin-polarized STM measurements can reveal a sharp contrast in spin polarization between an accidental-zero-energy trivial Shiba state and a Majorana zero mode in a topological superconducting phase in atomic chains. We further confirm our results with numerical simulations that address generic parameter settings.
Smisc - A collection of miscellaneous functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landon Sego, PNNL
2015-08-31
A collection of functions for statistical computing and data manipulation. These include routines for rapidly aggregating heterogeneous matrices, manipulating file names, loading R objects, sourcing multiple R files, formatting datetimes, multi-core parallel computing, stream editing, specialized plotting, etc. Selected functions:
Smisc-package - A collection of miscellaneous functions
allMissing - Identifies missing rows or columns in a data frame or matrix
as.numericSilent - Silent wrapper for coercing a vector to numeric
comboList - Produces all possible combinations of a set of linear model predictors
cumMax - Computes the maximum of the vector up to the current index
cumsumNA - Computes the cumulative sum of a vector without propagating NAs
d2binom - Probability functions for the sum of two independent binomials
dataIn - A flexible way to import data into R
dbb - The Beta-Binomial Distribution
df2list - Row-wise conversion of a data frame to a list
dfplapply - Parallelized single-row processing of a data frame
dframeEquiv - Examines the equivalence of two data frames or matrices
dkbinom - Probability functions for the sum of k independent binomials
factor2character - Converts all factor variables in a data frame to character variables
findDepMat - Identifies linearly dependent rows or columns in a matrix
formatDT - Converts date or datetime strings into alternate formats
getExtension, getPath, grabLast - Filename manipulations: remove or extract the extension or path
ifelse1 - Non-vectorized version of ifelse
integ - Simple numerical integration routine
interactionPlot - Two-way interaction plot with error bars
linearMap - Linear mapping of a numerical vector or scalar
list2df - Converts a list to a data frame
loadObject - Loads and returns the object(s) in an ".Rdata" file
more - Displays the contents of a file to the R terminal
movAvg2 - Calculates the moving average using a 2-sided window
openDevice - Opens a graphics device based on the filename extension
p2binom - Probability functions for the sum of two independent binomials
padZero - Pads a vector of numbers with zeros
parseJob - Parses a collection of elements into (almost) equal-sized groups
pbb - The Beta-Binomial Distribution
pcbinom - A continuous version of the binomial cdf
pkbinom - Probability functions for the sum of k independent binomials
plapply - Simple parallelization of lapply
plotFun - Plots one or more functions on a single plot
PowerData - An example of power data
pvar - Prints the name and value of one or more objects
qbb - The Beta-Binomial Distribution
rbb - The Beta-Binomial Distribution
...and numerous others (space limits reporting).
Kirchgesner, T; Perlepe, V; Michoux, N; Larbi, A; Vande Berg, B
2018-01-01
To compare the effectiveness of fat suppression and the image quality of the Dixon method with those of the chemical shift-selective (CHESS) technique in hands of normal subjects at non-enhanced three-dimensional (3D) T1-weighted MR imaging. Both hands of 14 healthy volunteers were imaged with 3D fast spoiled gradient echo (FSPGR) T1-weighted Dixon, 3D FSPGR T1-weighted CHESS and 3D T1-weighted fast spin echo (FSE) CHESS sequences in a 1.5T MR scanner. Three radiologists scored the effectiveness of fat suppression in bone marrow (EFS_BM) and soft tissues (EFS_ST) in 20 joints per subject. One radiologist measured the signal-to-noise ratio (SNR) in 10 bones per subject. Statistical analysis used two-way ANOVA with random effects (P<0.0083), paired t-test (P<0.05) and observed agreement to assess differences in effectiveness of fat suppression, differences in SNR and interobserver agreement. EFS_BM was statistically significantly higher for the 3D FSPGR T1-weighted Dixon than for the 3D FSPGR T1-weighted CHESS sequence and the 3D FSE T1-weighted CHESS sequence (P<0.0001). EFS_ST was statistically significantly higher for the 3D FSPGR T1-weighted Dixon than for the 3D FSPGR T1-weighted CHESS sequence (P<0.0011) and for the 3D FSE T1-weighted CHESS sequence in the axial plane (P=0.0028). Mean SNR was statistically significantly higher for the 3D FSPGR T1-weighted Dixon sequence than for the 3D FSPGR T1-weighted CHESS and 3D FSE T1-weighted CHESS sequences (P<0.0001). The Dixon method yields more effective fat suppression and higher SNR than the CHESS technique at 3D T1-weighted MR imaging of the hands. Copyright © 2017 Éditions françaises de radiologie. Published by Elsevier Masson SAS. All rights reserved.
Optimal trajectories for hypersonic launch vehicles
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas
1994-01-01
In this paper, we derive a near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. Because liquid hydrogen fueled hypersonic aircraft are volume sensitive, as well as weight sensitive, the cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize gross take-off weight for a given payload mass and volume in orbit.
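One way to write such a cost functional (the notation is assumed, not the paper's): fuel mass plus fuel volume, traded off by a weighting factor κ chosen so that minimizing J minimizes gross take-off weight for the given payload.

```latex
\[
J \;=\; m_{\mathrm{fuel}} \;+\; \kappa\, V_{\mathrm{fuel}},
\qquad \kappa \ \text{in units of mass per volume}.
\]
```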
Paparo, M.; Benko, J. M.; Hareter, M.; ...
2016-05-11
In this study, a sequence search method was developed to search for regular frequency spacing in δ Scuti stars through visual inspection and an algorithmic search. We searched for sequences of quasi-equally spaced frequencies, containing at least four members per sequence, in 90 δ Scuti stars observed by CoRoT. We found an unexpectedly large number of independent series of regular frequency spacing in 77 δ Scuti stars (from one to eight sequences) in the non-asymptotic regime. We introduce the sequence search method, presenting the sequences and echelle diagram of CoRoT 102675756 and the structure of the algorithmic search. Four sequences (echelle ridges) were found in the 5-21 d⁻¹ region, where the pairs of sequences are shifted (between 0.5 and 0.59 d⁻¹) by twice the value of the estimated rotational splitting frequency (0.269 d⁻¹). The general conclusions for the whole sample are also presented in this paper. The statistics of the spacings derived by the sequence search method and by FT (Fourier transform of the frequencies), and the statistics of the shifts, are also compared. In many stars more than one almost equally valid spacing appeared. The model frequencies of FG Vir and their rotationally split components were used to formulate the possible explanation that one spacing is the large separation while the other is the sum of the large separation and the rotational frequency. In CoRoT 102675756, the two spacings (2.249 and 1.977 d⁻¹) are in better agreement with the sum of a possible 1.710 d⁻¹ large separation and two or one times, respectively, the value of the rotational frequency.
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton-on-Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
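The syndrome test described above can be illustrated with a hedged sketch over GF(2): check symbols are generated from the user data, and on read-back all-zero syndromes mean "no error" while any non-zero syndrome flags one. The generator matrix and symbol sizes below are toy values, not the patented code.

```python
import numpy as np

def check_symbols(data, G):
    return (data @ G) % 2            # checks = data x generator over GF(2)

def compute_syndromes(data, checks, G):
    return checks ^ check_symbols(data, G)   # all-zero iff data is consistent

rng = np.random.default_rng(0)
G = rng.integers(0, 2, size=(8, 4))  # toy: 8 data bits -> 4 check bits
data = rng.integers(0, 2, size=8)
checks = check_symbols(data, G)

assert not compute_syndromes(data, checks, G).any()   # clean read
data[3] ^= 1                                          # inject a bit error
assert compute_syndromes(data, checks, G).any()       # non-zero syndrome
```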
Meta-analyses of workplace physical activity and dietary behaviour interventions on weight outcomes.
Verweij, L M; Coffeng, J; van Mechelen, W; Proper, K I
2011-06-01
This meta-analytic review critically examines the effectiveness of workplace interventions targeting physical activity, dietary behaviour or both on weight outcomes. Data could be extracted from 22 studies published between 1980 and November 2009 for meta-analyses. The GRADE approach was used to determine the level of evidence for each pooled outcome measure. Results show moderate quality of evidence that workplace physical activity and dietary behaviour interventions significantly reduce body weight (nine studies; mean difference [MD] -1.19 kg [95% CI -1.64 to -0.74]), body mass index (BMI) (11 studies; MD -0.34 kg m⁻² [95% CI -0.46 to -0.22]) and body fat percentage calculated from the sum of skin-folds (three studies; MD -1.12% [95% CI -1.86 to -0.38]). There is low quality of evidence that workplace physical activity interventions significantly reduce body weight and BMI. Effects on percentage body fat calculated from bioelectrical impedance or hydrostatic weighing, waist circumference, sum of skin-folds and waist-hip ratio could not be investigated properly because of a lack of studies. Subgroup analyses showed a greater reduction in body weight for physical activity and diet interventions containing an environmental component. As the clinical relevance of the pooled effects may be substantial on a population level, we recommend workplace physical activity and dietary behaviour interventions, including an environmental component, in order to prevent weight gain. © 2010 The Authors. Obesity Reviews © 2010 International Association for the Study of Obesity.
Persistence Probabilities of Two-Sided (Integrated) Sums of Correlated Stationary Gaussian Sequences
NASA Astrophysics Data System (ADS)
Aurzada, Frank; Buck, Micha
2018-02-01
We study the persistence probability for some two-sided, discrete-time Gaussian sequences that are discrete-time analogues of fractional Brownian motion and integrated fractional Brownian motion, respectively. Our results extend the corresponding ones in continuous time in Molchan (Commun Math Phys 205(1):97-111, 1999) and Molchan (J Stat Phys 167(6):1546-1554, 2017) to a wide class of discrete-time processes.
DrImpute: imputing dropout events in single cell RNA sequencing data.
Gong, Wuming; Kwak, Il-Youp; Pota, Pruthvi; Koyano-Nakagawa, Naoko; Garry, Daniel J
2018-06-08
The single-cell RNA sequencing (scRNA-seq) technique has begun a new era by allowing the observation of gene expression at the single-cell level. However, there is also a large amount of technical and biological noise. Because of the low number of RNA transcripts and the stochastic nature of the gene expression pattern, there is a high chance that nonzero entries are recorded as zero; these are called dropout events. We developed DrImpute to impute dropout events in scRNA-seq data. We show that DrImpute has significantly better performance in separating dropout zeros from true zeros than existing imputation algorithms. We also demonstrate that DrImpute can significantly improve the performance of existing tools for clustering, visualization and lineage reconstruction of nine published scRNA-seq datasets. DrImpute can serve as a very useful addition to the currently existing statistical tools for single-cell RNA-seq analysis. DrImpute is implemented in R and is available at https://github.com/gongx030/DrImpute .
Energy balance and the composition of weight loss during prolonged space flight
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1982-01-01
Topics discussed include the Skylab integrated metabolic balance analysis, computer simulation of fluid-electrolyte responses to zero-g, overall mission weight and tissue losses, energy balance, diet and exercise, continuous changes, electrolyte losses, caloric and exercise requirements, and body composition.
NASA Astrophysics Data System (ADS)
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moment matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands, and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
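A minimal Fenton-Wilkinson-style moment-matching sketch for a weighted sum of independent lognormals follows: match the sum's mean and variance with a single lognormal. This is the classical baseline, not the paper's modified-moments plus asymptotic-series method, and all parameter values are illustrative.

```python
import numpy as np

def fenton_wilkinson(mu, sigma, w):
    m = w * np.exp(mu + sigma ** 2 / 2)          # component means of w_i*X_i
    v = m ** 2 * (np.exp(sigma ** 2) - 1)        # component variances
    M, V = m.sum(), v.sum()                      # independent summands
    s2 = np.log(1 + V / M ** 2)                  # matched log-variance
    return np.log(M) - s2 / 2, np.sqrt(s2)       # (mu_S, sigma_S)

mu = np.array([0.1, 0.4, -0.2])
sigma = np.array([0.5, 0.3, 0.7])
w = np.array([0.2, 0.5, 0.3])
mu_S, sigma_S = fenton_wilkinson(mu, sigma, w)

# Monte Carlo sanity check of the approximated mean
rng = np.random.default_rng(1)
samples = (w * rng.lognormal(mu, sigma, size=(100_000, 3))).sum(axis=1)
print(np.exp(mu_S + sigma_S ** 2 / 2), samples.mean())   # should be close
```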
Use of a large time-compensated scintillation detector in neutron time-of-flight measurements
Goodman, Charles D.
1979-01-01
A scintillator for neutron time-of-flight measurements is positioned at a desired angle with respect to the neutron beam, as a function of the neutron energy, such that the sum of the transit times of the neutrons and photons in the scintillator is substantially independent of the points of scintillation within the scintillator. Extrapolated-zero timing is employed rather than the usual constant-fraction timing. As a result, a substantially larger scintillator can be employed, which substantially increases the data rate and shortens the experiment time.
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in unpolarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
Program summary:
Program title: QCDNUM, version 17.00
Catalogue identifier: AEHV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence
No. of lines in distributed program, including test data, etc.: 45 736
No. of bytes in distributed program, including test data, etc.: 911 569
Distribution format: tar.gz
Programming language: Fortran-77
Computer: All
Operating system: All
RAM: Typically 3 Mbytes
Classification: 11.5
Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline coefficients by solving (coupled) triangular matrix equations with a forward substitution algorithm. Fast computation of convolution integrals as weighted sums of spline coefficients, with weights derived from user-given convolution kernels.
Restrictions: Accuracy and speed are determined by the density of the evolution grid.
Running time: Less than 10 ms on a 2 GHz Intel Core 2 Duo processor to evolve the gluon density and 12 quark densities at next-to-next-to-leading order over a large kinematic range.
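The "convolution integrals as weighted sums of spline coefficients" idea can be illustrated with a toy sketch: once weights W[i, j] are tabulated from a user-given kernel, every convolution at a grid point is just a weighted sum over coefficients. The kernel, grid and crude quadrature below are stand-ins for QCDNUM's spline machinery (QCDNUM itself is Fortran-77).

```python
import numpy as np

grid = np.linspace(0.01, 1.0, 64)                 # x-grid
a = np.exp(-5 * grid)                             # "spline coefficients" (toy)

def tabulate_weights(kernel):
    W = np.zeros((grid.size, grid.size))
    for i, xi in enumerate(grid):
        mask = grid >= xi                         # z integration range [x, 1]
        z = grid[mask]
        if z.size > 1:
            W[i, mask] = kernel(xi / z) / z * np.gradient(z)  # crude weights
    return W                                      # computed once per kernel

W = tabulate_weights(lambda u: 4.0 / 3.0 * (1 + u ** 2))  # toy kernel
F = W @ a                                    # each F(x_i) = sum_j W_ij * a_j
```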
Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu
2015-09-21
Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in the dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its application to high-throughput time series data analysis, e.g., data from next-generation sequencing-based studies. By extending the theories for the tail probability of the range of sums of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps), in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach integrating the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package, which now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loopings. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to attain a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of the coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the multiple cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches to PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against the benchmark solution of the discrete-sectional method. The simulation results show that the comprehensive approach can attain a very favorable improvement in cost without sacrificing computational accuracy.
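A minimal sketch of the majorant-kernel acceptance-rejection idea: a cheap upper bound K_hat ≥ K(v_i, v_j) is found with a single pass, a candidate pair is drawn, and it coagulates with probability K/K_hat. The toy Brownian-like kernel and particle volumes below are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
v = rng.random(1000) + 0.1                     # simulation particle volumes

def K(vi, vj):                                 # toy Brownian-like kernel
    return (vi ** (1/3) + vj ** (1/3)) * (vi ** (-1/3) + vj ** (-1/3))

# majorant: this kernel is maximal at the most extreme size ratio, so the
# extremes give an upper bound without an O(N^2) scan over all pairs
K_hat = K(v.max(), v.min())

i, j = rng.integers(0, v.size, size=2)
while j == i:
    j = rng.integers(0, v.size)
if rng.random() < K(v[i], v[j]) / K_hat:       # acceptance-rejection step
    v[i] += v[j]                               # coagulate the accepted pair
    v = np.delete(v, j)
```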
Yang, Huayun; Zhou, Shanshan; Li, Weidong; Liu, Qi; Tu, Yunjie
2015-10-01
Sediment samples were analyzed to comprehensively characterize the concentrations, distribution, possible sources and potential biological risk of organochlorine pesticides in Qiandao Lake, China. Concentrations of ΣHCH and ΣDDT in sediments ranged from 0.03 to 5.75 ng/g dry weight and from not detected to 14.39 ng/g dry weight, respectively. The predominance of β-HCH and the α-HCH/γ-HCH ratios indicated that the residues of HCHs were derived not only from historical technical HCH use but also from additional usage of lindane. Ratios of o,p'-DDT/p,p'-DDT and DDD/DDE suggested that both dicofol-type DDT and technical DDT applications may be present in most study areas. Additionally, based on two sediment quality guidelines, γ-HCH, o,p'-DDT and p,p'-DDT could be the main organochlorine pesticide species of ecotoxicological concern in Qiandao Lake.
Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi
2017-10-05
We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized by fixing atoms in the reaction-center part before displacements of the reaction-center atoms. In MSM, the surrounding part is described as the weighted sum of multiple surrounding structures that are independently optimized. Then, geometric displacements of the reaction-center atoms are performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or enzyme. In all three cases, MSM gave lower reaction energy profiles than the QM/MM-ONIOM-microiteration method over the entire reaction paths, with comparable computational costs. © 2017 Wiley Periodicals, Inc.
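A hedged sketch of the mean-field idea in MSM: the displacement of the reaction-center atoms is driven by a weighted sum of the forces computed against each independently optimized surrounding structure. The toy harmonic force, weights and coordinates below are placeholders, not the QM/MM-ONIOM machinery.

```python
import numpy as np

def msm_force(x_center, surroundings, weights, force_fn):
    # force on the reaction center from each frozen surrounding structure
    forces = np.array([force_fn(x_center, s) for s in surroundings])
    return np.tensordot(weights, forces, axes=1)       # mean-field force

rng = np.random.default_rng(4)
x = rng.random(3)                                      # reaction-center coords
snapshots = [rng.random(3) for _ in range(3)]          # surrounding structures
w = np.array([0.5, 0.3, 0.2])                          # structure weights

f = msm_force(x, snapshots, w, lambda xc, s: -(xc - s))  # toy harmonic force
x = x + 0.1 * f                                          # one descent step
```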
NASA Astrophysics Data System (ADS)
Jiang, G.; Wong, C. Y.; Lin, S. C. F.; Rahman, M. A.; Ren, T. R.; Kwok, Ngaiming; Shi, Haiyan; Yu, Ying-Hao; Wu, Tonghai
2015-04-01
The enhancement of image contrast and the preservation of image brightness are two important but conflicting objectives in image restoration. Previous attempts based on linear histogram equalization achieved contrast enhancement, but exact preservation of brightness was not accomplished. A new perspective is taken here to provide balanced performance of contrast enhancement and brightness preservation simultaneously by casting the search for such a solution as an optimization problem. Specifically, the non-linear gamma correction method is adopted to enhance the contrast, while a weighted-sum approach is employed for brightness preservation. In addition, the efficient golden-section search algorithm is exploited to determine the optimal parameters required to produce the enhanced images. Experiments are conducted on natural colour images captured under various indoor, outdoor and illumination conditions. Results show that the proposed method outperforms currently available methods in contrast enhancement and brightness preservation.
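A minimal sketch of this approach: non-linear gamma correction for contrast, a weighted-sum objective trading contrast gain against brightness drift, and a golden-section search over gamma. The objective terms and the weight alpha are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def objective(img, gamma, alpha=0.5):
    out = img ** gamma                           # gamma-corrected image
    contrast = out.std()                         # contrast proxy
    drift = abs(out.mean() - img.mean())         # brightness-preservation term
    return -(alpha * contrast - (1 - alpha) * drift)   # minimize the negative

def golden_search(f, a, b, tol=1e-4):
    phi = (np.sqrt(5) - 1) / 2                   # golden ratio conjugate
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d                                # minimum lies in [a, d]
        else:
            a = c                                # minimum lies in [c, b]
    return (a + b) / 2

img = np.clip(np.random.default_rng(5).random((64, 64)), 1e-6, 1.0)
gamma_opt = golden_search(lambda g: objective(img, g), 0.3, 3.0)
enhanced = img ** gamma_opt
```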
Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai
2017-03-01
This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to do policy evaluation and policy improvement in the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. The gradient descent algorithm makes the update of critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.
Probabilistic model of bridge vehicle loads in port area based on in-situ load testing
NASA Astrophysics Data System (ADS)
Deng, Ming; Wang, Lei; Zhang, Jianren; Wang, Rei; Yan, Yanhong
2017-11-01
Vehicle load is an important factor affecting the safety and usability of bridges. A statistical analysis is carried out in this paper to investigate the vehicle load data of the Tianjin Haibin highway in Tianjin port, China, which were collected by a Weigh-in-Motion (WIM) system. Following this, the effect of the vehicle load on a test bridge is calculated and then compared with the calculation result according to HL-93 (AASHTO LRFD). Results show that the overall vehicle load follows a weighted sum (mixture) of four normal distributions. The maximum vehicle load during the design reference period follows a type I extreme value distribution. The vehicle load effect also follows a weighted sum of four normal distributions, and the standard value of the vehicle load is recommended to be 1.8 times the value calculated according to HL-93.
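A hedged sketch of fitting the reported "weighted sum of four normal distributions" (a 4-component Gaussian mixture) to WIM gross vehicle weights; the synthetic weights below are placeholders for the Tianjin Haibin highway data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# toy gross weights (tonnes): cars, vans, rigid trucks, articulated trucks
loads = np.concatenate([rng.normal(m, s, n) for m, s, n in
                        [(2, 0.5, 500), (7, 1.5, 300),
                         (18, 3.0, 150), (40, 5.0, 50)]])

gmm = GaussianMixture(n_components=4, random_state=0).fit(loads.reshape(-1, 1))
print(gmm.weights_, gmm.means_.ravel())    # mixture weights, component means
```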
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubrovsky, V. G.; Topovsky, A. V.
New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N, are constructed via the Zakharov and Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.
Steiner, S; Vogl, T J; Fischer, P; Steger, W; Neuhaus, P; Keck, H
1995-08-01
The aim of our study was to evaluate a T2-weighted turbo spin-echo (TSE) sequence in comparison with a T2-weighted spin-echo (SE) sequence for imaging focal liver lesions. In our study 35 patients with suspected focal liver lesions were examined. The standardised imaging protocol included a conventional T2-weighted SE sequence (TR/TE = 2000/90/45, acquisition time 10:20 min) as well as a T2-weighted TSE sequence (TR/TE = 4700/90, acquisition time 6:33 min). Calculation of the S/N and C/N ratios as a basis for quantitative evaluation was done using standard methods. A diagnostic score was implemented to enable qualitative assessment. In 7% of patients (n = 2) the TSE sequence enabled detection of additional liver lesions less than 1 cm in diameter. Comparing anatomical details, the TSE sequence was superior. The S/N and C/N ratios of anatomic and pathologic structures were higher for the TSE sequence than for the SE sequence. Our results indicate that the T2-weighted turbo spin-echo sequence is well suited to imaging focal liver lesions and leads to a reduction in imaging time.
Hardware Implementation of a Bilateral Subtraction Filter
NASA Technical Reports Server (NTRS)
Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven
2009-01-01
A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way, even on computers containing the fastest processors, are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): (a) an image pixel pipeline with a 9×9-pixel window generator; (b) an array of processing elements; (c) an adder tree; (d) a smoothing-and-delaying unit; and (e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients of the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value/weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of the products divided by the sum of the weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3×3 subwindow centered in the 9×9 window. After smoothing, the image is delayed by an additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3×3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
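A compact software sketch of the filter the FPGA module implements: a 9×9 bilateral weighted average per pixel (sum of products over sum of weights), a 3×3 smoothing, and the final subtraction. The Gaussian range sigma and edge padding are assumptions; the hardware also folds a per-position spatial term into each lookup table, which is omitted here.

```python
import numpy as np

def bilateral_subtraction(img, win=9, sigma_r=0.1):
    r = win // 2
    pad = np.pad(img, r, mode='edge')
    bilateral = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            w = pad[y:y + win, x:x + win]                  # 9x9 window
            lut = np.exp(-(w - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            bilateral[y, x] = (lut * w).sum() / lut.sum()  # products/weights
    sm = np.pad(img, 1, mode='edge')                       # 3x3 box smoothing
    box = sum(sm[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    return box - bilateral   # bilateral image subtracted from smoothed image

out = bilateral_subtraction(np.random.default_rng(5).random((32, 32)))
```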
van Leersum, M; Schweitzer, M E; Gannon, F; Finkel, G; Vinitski, S; Mitchell, D G
1996-11-01
To develop MR criteria for grades of chondromalacia patellae and to assess the accuracy of these grades. Fat-suppressed T2-weighted double-echo, fat-suppressed T2-weighted fast spin echo, fat-suppressed T1-weighted, and gradient echo sequences were performed at 1.5 T for the evaluation of chondromalacia. A total of 1000 MR, 200 histologic, and 200 surface locations were graded for chondromalacia and statistically compared. Compared with gross inspection as well as with histology, the most accurate sequences were fat-suppressed T2-weighted conventional spin echo and fat-suppressed T2-weighted fast spin echo, although the T1-weighted and proton density images also correlated well. The most accurate MR criteria applied to the severe grades of chondromalacia, with less accurate results for lesser grades. This study demonstrates that fat-suppressed routine T2-weighted and fast spin echo T2-weighted sequences seem to be more accurate than proton density, T1-weighted, and gradient echo sequences in grading chondromalacia. Good histologic and macroscopic correlation was seen in the more severe grades of chondromalacia, but problems remain for the early grades in all sequences studied.
ZERO WASTE BIODIESEL: USING GLYCERIN AND BIOMASS TO CREATE RENEWABLE ENERGY
The procedure for creating the pellets is fairly mundane but crucial for establishing a standard and repeatable process. The pellet biomass materials are mixed by weight ratio and blended to a consistent particulate size. The glycerin-to-biomass ratio by weight is ...
1993-02-01
the relative cost effectiveness of Ada and C++ [10]. (An overview of the Air Force report is given in Appendix D.) Surprisingly, the study determined ...support; 5 = excellent support), followed by a total score, a weighted sum of the rankings based on weights determined by an expert panel: Category...International Conference Location: Britannia International Hotel, London Sponsor: Ada Language UK, Ltd. POC: Helen Byard, Administrator, Ada UK, P.O. 322, York
Optimal trajectories for hypersonic launch vehicles
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas
1992-01-01
In this paper, we derive a near-optimal guidance law for the ascent trajectory from Earth surface to Earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. The performance objective is a weighted sum of fuel mass and volume, with the weighting factor selected to give minimum gross take-off weight for a specific payload mass and volume.
Diagnosis of skin cancer using image processing
NASA Astrophysics Data System (ADS)
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel
2014-10-01
In this paper a methodology for classifying skin cancer in images of dermatologic spots based on spectral analysis using the K-law Fourier non-linear technique is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains more information and therefore this channel is always chosen. This information is multiplied point by point by a binary mask, and to this result a Fourier transform written in nonlinear form is applied. If the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value falls within the spectral index range, three types of cancer can be detected. Values outside this range correspond to benign lesions.
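A hedged sketch of this spectral-index computation: the green channel is masked, a K-law style non-linearity is applied to the Fourier spectrum, and the index is the count of positive-real spectrum points over the mask area. The exponent k and the image data below are assumptions, not the paper's values.

```python
import numpy as np

def spectral_index(green, mask, k=0.3):
    roi = green * mask                                   # point-to-point mask
    F = np.fft.fft2(roi)
    Fk = np.abs(F) ** k * np.exp(1j * np.angle(F))       # K-law non-linearity
    density = (Fk.real > 0)                              # unit values, else 0
    return density.sum() / mask.sum()

rng = np.random.default_rng(7)
green = rng.random((128, 128))                           # toy green channel
mask = np.zeros((128, 128)); mask[32:96, 32:96] = 1.0    # toy lesion mask
print(spectral_index(green, mask))
```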
Finite-width Laplacian sum rules for 2++ tensor glueball in the instanton vacuum model
NASA Astrophysics Data System (ADS)
Chen, Junlong; Liu, Jueping
2017-01-01
The more carefully defined and more appropriate 2++ tensor glueball current is an SU_c(3) gauge-invariant, symmetric, traceless, and conserved Lorentz-irreducible tensor. After Lorentz decomposition, the invariant amplitude of the correlation function is abstracted and calculated based on the semiclassical expansion for quantum chromodynamics (QCD) in the instanton liquid background. In addition to taking the perturbative contribution into account, we calculate the contribution arising from the interaction (or the interference) between instantons and the quantum gluon fields, which is infrared free. Instead of the usual zero-width approximation for the resonances, the Breit-Wigner form with a correct threshold behavior for the spectral function of the three finite-width resonances is adopted. The properties of the 2++ tensor glueball are investigated via a family of QCD Laplacian sum rules for the invariant amplitude. The values of the mass, decay width, and coupling constants for the 2++ resonance in which the glueball fraction is dominant are obtained.
Briggs, Andrew H; Ades, A E; Price, Martin J
2003-01-01
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probabilistic probabilities over multiple branches that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
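A minimal sketch of this Dirichlet approach: each row of a Markov transition matrix is drawn from Dirichlet(counts + prior), so every draw sums to 1 across branches and zero-count transitions still receive non-zero probability. The counts and the uniform prior below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
counts = np.array([[90, 8, 0],          # observed transitions; note the zero
                   [5, 80, 15],
                   [0, 2, 98]])
prior = 1.0                              # uniform "add-one" Dirichlet prior

# one probabilistic-sensitivity-analysis draw of the full transition matrix
P = np.vstack([rng.dirichlet(row + prior) for row in counts])
assert np.allclose(P.sum(axis=1), 1.0)   # rows are proper probabilities
```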
A formulation and analysis of combat games
NASA Technical Reports Server (NTRS)
Heymann, M.; Ardema, M. D.; Rajan, N.
1984-01-01
Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. A target set is associated with each opponent in the event space in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously, or in neither, a joint capture or a draw, respectively, occurs. Resolution of the encounter is formulated as a combat game, that is, as a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game. Otherwise the optimal strategies are computed from a resulting nonzero-sum game. Since optimal combat strategies may frequently not exist, approximate or delta combat games are also formulated, leading to approximate or delta-optimal strategies. The turret game is used to illustrate combat games. This game is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.
Jensen, Morten B; Guldberg, Trine L; Harbøll, Anja; Lukacova, Slávka; Kallehauge, Jesper F
2017-11-01
The clinical target volume (CTV) in radiotherapy is routinely based on gadolinium contrast-enhanced T1-weighted (T1w + Gd) and T2-weighted fluid-attenuated inversion recovery (T2w FLAIR) magnetic resonance imaging (MRI) sequences, which have been shown to over- or underestimate the microscopic tumor cell spread. Gliomas favor spread along the white matter fiber tracts. Tumor growth models incorporating the MRI diffusion tensors (DTI) allow the glioma growth to be accounted for more consistently. The aim of the study was to investigate the potential of a DTI-driven growth model to improve target definition in glioblastoma (GBM). Eleven GBM patients were scanned using T1w, T2w FLAIR, T1w + Gd and DTI. The brain was segmented into white matter, gray matter and cerebrospinal fluid. The Fisher-Kolmogorov growth model was used, assuming uniform proliferation and a white-to-gray-matter diffusion ratio of 10. The tensor directionality was tested using an anisotropy weighting parameter set to zero (γ0) and twenty (γ20). The volumetric comparison was performed using the Hausdorff distance, the Dice similarity coefficient (DSC) and the surface area. The median volume of the standard CTV (CTVstandard) was 180 cm³. The median surface area of CTVstandard was 211 cm². The median surface areas of CTVγ0 and CTVγ20 significantly increased to 338 and 376 cm², respectively. The Hausdorff distance was greater than zero and significantly increased for both CTVγ0 and CTVγ20, with respective medians of 18.7 and 25.2 mm. The DSC for both CTVγ0 and CTVγ20 was significantly below one, with respective medians of 0.74 and 0.72, which means that 74% and 72% of CTVstandard were included in CTVγ0 and CTVγ20, respectively. DTI-driven growth models result in CTVs with a significantly increased surface area, a significantly increased Hausdorff distance and a decreased overlap between the standard and model-derived volume.
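A hedged 1D sketch of the Fisher-Kolmogorov model used here: logistic proliferation plus diffusion that is 10x faster in white matter than in gray matter. Grid, coefficients and time step are toy values; the study works in 3D with DTI-derived tensors and anisotropy weighting.

```python
import numpy as np

n, dx, dt, rho = 200, 0.1, 0.001, 1.0
D = np.where(np.arange(n) < n // 2, 1.0, 0.1)   # white vs gray matter
c = np.zeros(n)
c[n // 4] = 0.1                                  # seed tumor cell density

for _ in range(5000):
    flux = D[:-1] * np.diff(c) / dx              # D(x) * dc/dx at interfaces
    c[1:-1] += dt * (np.diff(flux) / dx          # div(D grad c)
                     + rho * c[1:-1] * (1 - c[1:-1]))   # logistic growth
```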
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
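A hedged toy version of the winning linking hypothesis described above: each task is read out as a learned weighted sum of distributed mean firing rates, fit here by ridge-regularized least squares. The "IT-like" responses and the task labels below are synthetic, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(9)
n_images, n_neurons = 500, 1000
R = rng.poisson(5.0, size=(n_images, n_neurons)).astype(float)   # mean rates
labels = (R[:, :50].mean(axis=1) > 5.0).astype(float)            # toy task

lam = 10.0                                     # ridge penalty (assumed)
w = np.linalg.solve(R.T @ R + lam * np.eye(n_neurons), R.T @ labels)
decisions = R @ w                              # weighted sum of firing rates
print(((decisions > 0.5) == labels).mean())    # training accuracy
```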
Orthogonal Polynomials on the Unit Circle with Fibonacci Verblunsky Coefficients, II. Applications
NASA Astrophysics Data System (ADS)
Damanik, David; Munger, Paul; Yessen, William N.
2013-10-01
We consider CMV matrices with Verblunsky coefficients determined in an appropriate way by the Fibonacci sequence and present two applications of the spectral theory of such matrices to problems in mathematical physics. In our first application we estimate the spreading rates of quantum walks on the line with time-independent coins following the Fibonacci sequence. The estimates we obtain are explicit in terms of the parameters of the system. In our second application, we establish a connection between the classical nearest neighbor Ising model on the one-dimensional lattice in the complex magnetic field regime, and CMV operators. In particular, given a sequence of nearest-neighbor interaction couplings, we construct a sequence of Verblunsky coefficients, such that the support of the Lee-Yang zeros of the partition function for the Ising model in the thermodynamic limit coincides with the essential spectrum of the CMV matrix with the constructed Verblunsky coefficients. Under certain technical conditions, we also show that the zeros distribution measure coincides with the density of states measure for the CMV matrix.
Aatsinki, Anna-Katariina; Uusitupa, Henna-Maria; Munukka, Eveliina; Pesonen, Henri; Rintala, Anniina; Pietilä, Sami; Lahti, Leo; Eerola, Erkki; Karlsson, Linnea; Karlsson, Hasse
2018-05-14
Pregnancy is a time of numerous hormonal, metabolic, and immunological changes for both the mother and the fetus. Furthermore, maternal gut microbiota composition (GMC) is altered during pregnancy. One major factor affecting GMC in pregnant and nonpregnant populations is obesity. The aim was to analyze associations between maternal overweight/obesity, as well as gestational weight gain (GWG), and GMC. Moreover, the modifying effect of depression and anxiety symptom scores on weight and GMC was investigated. The study included 46 women from the FinnBrain Birth Cohort study, of whom 36 were normal weight and 11 overweight or obese according to their prepregnancy body mass index (BMI). Stool samples were collected in gestational week 24, and the GMC was sequenced with the Illumina MiSeq approach. Hierarchical clustering was executed to reveal group formation according to the GMC. The population was divided according to Firmicutes and Bacteroidetes dominance. Symptoms of depression, general anxiety, and pregnancy-related anxiety were measured using standardized questionnaires. Excessive GWG was associated with a distinct GMC in mid-pregnancy as measured by hierarchical clustering and grouping according to Firmicutes or Bacteroidetes dominance, with Bacteroidetes being more prominent and Firmicutes less prominent in the GMC among those with increased GWG. Reduced alpha diversity was observed among the Bacteroidetes-dominated subjects. There were no zero-order effects between the abundances of bacterial genera or phyla, alpha or beta diversity, and prepregnancy BMI or GWG. A Bacteroidetes-dominated GMC in mid-pregnancy is associated with increased GWG and reduced alpha diversity.
Lou, Ping; Lee, Jin Yong
2009-04-14
For a simple modified Poisson-Boltzmann (SMPB) theory, taking into account the finite ionic size, we have derived the exact analytic expression for the contact values of the difference profile of the counterion and co-ion, as well as of the sum (density) and product profiles, near a charged planar electrode that is immersed in a binary symmetric electrolyte. In the zero ionic size or dilute limit, these contact values reduce to the contact values of the Poisson-Boltzmann (PB) theory. The analytic results of the SMPB theory, for the difference, sum, and product profiles were compared with the results of the Monte-Carlo (MC) simulations [ Bhuiyan, L. B.; Outhwaite, C. W.; Henderson, D. J. Electroanal. Chem. 2007, 607, 54 ; Bhuiyan, L. B.; Henderson, D. J. Chem. Phys. 2008, 128, 117101 ], as well as of the PB theory. In general, the analytic expression of the SMPB theory gives better agreement with the MC data than the PB theory does. For the difference profile, as the electrode charge increases, the result of the PB theory departs from the MC data, but the SMPB theory still reproduces the MC data quite well, which indicates the importance of including steric effects in modeling diffuse layer properties. As for the product profile, (i) it drops to zero as the electrode charge approaches infinity; (ii) the speed of the drop increases with the ionic size, and these behaviors are in contrast with the predictions of the PB theory, where the product is identically 1.
Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI
NASA Technical Reports Server (NTRS)
Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.
1985-01-01
The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
Parallel-Connected Photovoltaic Inverters: Zero Frequency Sequence Harmonic Analysis and Solution
NASA Astrophysics Data System (ADS)
Carmeli, Maria Stefania; Mauri, Marco; Frosio, Luisa; Bezzolato, Alberto; Marchegiani, Gabriele
2013-05-01
High-power photovoltaic (PV) plants usually consist of the connection of different PV subfields, each of them with its own interface transformer. Different solutions have been studied to improve the efficiency of the whole generation system. In particular, transformerless configurations are the most attractive from the efficiency and cost points of view. This paper focuses on transformerless PV configurations characterised by the parallel connection of interface inverters. The problem of zero-sequence current, due to both the parallel connection and the presence of undesirable parasitic earth capacitances, is considered, and a solution, which consists of the synchronisation of the pulse-width modulation triangular carriers, is proposed and theoretically analysed. The theoretical analysis has been validated through simulation and experimental results.
Analog Delta-Back-Propagation Neural-Network Circuitry
NASA Technical Reports Server (NTRS)
Eberhart, Silvio
1990-01-01
Changes in synapse weights due to circuit drifts suppressed. Proposed fully parallel analog version of electronic neural-network processor based on delta-back-propagation algorithm. Processor able to "learn" when provided with suitable combinations of inputs and enforced outputs. Includes programmable resistive memory elements (corresponding to synapses), conductances (synapse weights) adjusted during learning. Buffer amplifiers, summing circuits, and sample-and-hold circuits arranged in layers of electronic neurons in accordance with delta-back-propagation algorithm.
Catenacci, Victoria A.; Pan, Zhaoxing; Ostendorf, Danielle; Brannon, Sarah; Gozansky, Wendolyn S.; Mattson, Mark P.; Martin, Bronwen; MacLean, Paul S.; Melanson, Edward L.; Donahoo, William Troy
2016-01-01
Objective: To evaluate the safety and tolerability of alternate-day fasting (ADF) and to compare changes in weight, body composition, lipids, and insulin sensitivity index (Si) to those produced by a standard weight loss diet, moderate daily caloric restriction (CR). Methods: Adults with obesity (BMI ≥30 kg/m2, age 18-55) were randomized to either zero-calorie ADF (n=14) or CR (-400 kcal/day, n=12) for 8 weeks. Outcomes were measured at the end of the 8-week intervention and after 24 weeks of unsupervised follow-up. Results: No adverse effects were attributed to ADF, and 93% completed the 8-week ADF protocol. At 8 weeks, ADF achieved a 376 kcal/day greater energy deficit; however, there were no significant between-group differences in change in weight (mean±SE; ADF -8.2±0.9 kg, CR -7.1±1.0 kg), body composition, lipids, or Si. After 24 weeks of unsupervised follow-up, there were no significant differences in weight regain; however, changes from baseline in % fat mass and lean mass were more favorable in ADF. Conclusions: ADF is a safe and tolerable approach to weight loss. ADF produced similar changes in weight, body composition, lipids and Si at 8 weeks and did not appear to increase risk for weight regain 24 weeks after completing the intervention. PMID:27569118
21 CFR 556.720 - Tetracycline.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS... body weight per day. (b) Tolerances. Tolerances are established for the sum of tetracycline residues in... liver, and 12 ppm in fat and kidney. [63 FR 57246, Oct. 27, 1998] ...
Weight and cost forecasting for advanced manned space vehicles
NASA Technical Reports Server (NTRS)
Williams, Raymond
1989-01-01
A computerized mass- and cost-estimating methodology for predicting advanced manned space vehicle weights and costs was developed. The user-friendly methodology, designated MERCER (Mass Estimating Relationship/Cost Estimating Relationship), organizes the predictive process according to major vehicle subsystem levels. Design, development, test, evaluation, and flight hardware cost forecasting is treated by the study. This methodology consists of a complete set of mass estimating relationships (MERs), which serve as the control components for the model, and cost estimating relationships (CERs), which use MER output as input. To develop this model, numerous MER and CER studies were surveyed and modified where required. Additionally, relationships were regressed from raw data to accommodate the methodology. The models and formulations which estimated the cost of historical vehicles to within 20 percent of the actual cost were selected. The results of the research, along with components of the MERCER program, are reported. On the basis of the analysis, the following conclusions were established: (1) The cost of a spacecraft is best estimated by summing the cost of individual subsystems; (2) no one cost equation can be used for forecasting the cost of all spacecraft; (3) spacecraft cost is highly correlated with its mass; (4) no study surveyed contained sufficient formulations to autonomously forecast the cost and weight of the entire advanced manned vehicle spacecraft program; (5) no user-friendly program was found that linked MERs with CERs to produce spacecraft cost; and (6) the group accumulation weight estimation method (summing the estimated weights of the various subsystems) proved to be a useful method for finding the total weight and cost of a spacecraft.
NASA Astrophysics Data System (ADS)
Peng, Yahui; Jiang, Yulei; Antic, Tatjana; Giger, Maryellen L.; Eggener, Scott; Oto, Aytekin
2013-02-01
The purpose of this study was to investigate T2-weighted magnetic resonance (MR) image texture features and diffusion-weighted (DW) MR image features for distinguishing prostate cancer (PCa) from normal tissue. We collected two image datasets: 23 PCa patients (25 PCa and 23 normal tissue regions of interest [ROIs]) imaged with Philips MR scanners, and 30 PCa patients (41 PCa and 26 normal tissue ROIs) imaged with GE MR scanners. A radiologist drew ROIs manually via a consensus histology-MR correlation conference with a pathologist. A number of T2-weighted texture features and apparent diffusion coefficient (ADC) features were investigated, and linear discriminant analysis (LDA) was used to combine selected strong image features. The area under the receiver operating characteristic (ROC) curve (AUC) was used to characterize feature effectiveness in distinguishing PCa from normal tissue ROIs. Of the features studied, ADC 10th percentile, ADC average, and T2-weighted sum average yielded AUC values (±standard error) of 0.95±0.03, 0.94±0.03, and 0.85±0.05 on the Philips images, and 0.91±0.04, 0.89±0.04, and 0.70±0.06 on the GE images, respectively. The three-feature combination yielded AUC values of 0.94±0.03 and 0.89±0.04 on the Philips and GE images, respectively. ADC 10th percentile, ADC average, and T2-weighted sum average are effective in distinguishing PCa from normal tissue, and appear robust across images acquired from Philips and GE MR scanners.
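A hedged sketch of the feature-combination step just described: three features (ADC 10th percentile, ADC average, T2-weighted sum average) combined with linear discriminant analysis and scored by ROC AUC. The feature values below are synthetic stand-ins for the patient ROIs.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
X_pca = rng.normal([0.8, 1.0, 0.4], 0.2, size=(41, 3))   # PCa ROIs
X_nrm = rng.normal([1.4, 1.6, 0.6], 0.2, size=(26, 3))   # normal-tissue ROIs
X = np.vstack([X_pca, X_nrm])
y = np.r_[np.ones(41), np.zeros(26)]

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)       # the combined scalar feature
print(roc_auc_score(y, scores))         # resubstitution AUC (optimistic)
```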
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horton, Megan K., E-mail: megan.horton@mssm.edu; Blount, Benjamin C.; Valentin-Blasini, Liza
Background: Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid-disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have been shown individually to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies have examined the association between these individual exposures and thyroid function. Few studies have examined the effect of this chemical mixture on thyroid function during pregnancy. Objectives: We examined the cross-sectional association between urinary perchlorate, thiocyanate and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. Methods: We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples; perchlorate, thiocyanate, nitrate and iodide in urine samples collected from 284 pregnant women at 12 (±2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS, adjusting for gestational age, urinary iodide and creatinine. Results: Individual analyte concentrations in urine were significantly correlated (Spearman's r 0.4–0.5, p<0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. The WQS revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Conclusions: Co-exposure to perchlorate, nitrate and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. Highlights: • Perchlorate, nitrate, thiocyanate and iodide measured in maternal urine. • Thyroid function (TSH and Free T4) measured in maternal blood. • Weighted quantile sum (WQS) regression examined complex mixture effect. • WQS identified an inverse association between the exposure mixture and maternal TSH. • Perchlorate indicated as the ‘bad actor’ of the mixture.
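A compact sketch of weighted quantile sum (WQS) regression as used here: exposures are scored into quartiles, combined with non-negative weights that sum to 1, and the index is regressed on the outcome. A single joint optimization stands in for the usual bootstrap estimation, and all data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
X = rng.lognormal(size=(284, 3))         # perchlorate, nitrate, thiocyanate
q = np.column_stack([np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
                     for x in X.T])      # quartile scores 0..3
y = 0.4 * q[:, 0] + 0.1 * q[:, 1] + rng.normal(size=284)   # toy TSH

def sse(p):
    w = np.abs(p[:3]) / (np.abs(p[:3]).sum() + 1e-12)  # simplex weights
    resid = y - (p[3] + p[4] * (q @ w))                # beta0 + beta1 * WQS
    return (resid ** 2).sum()

fit = minimize(sse, x0=[1, 1, 1, 0, 1], method='Nelder-Mead')
w = np.abs(fit.x[:3]) / np.abs(fit.x[:3]).sum()
print(w, fit.x[4])                       # chemical weights and beta1
```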
Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis.
Nieves, Jeri W; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J Americo M; Sorenson, Eric J; D'Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi
2016-12-01
There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale-Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5-68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of "good" micronutrients and "good" food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. Those responsible for nutritional care of the patient with ALS should consider promoting fruit and vegetable intake since they are high in antioxidants and carotenes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paparó, M.; Benkő, J. M.; Hareter, M.
A sequence search method was developed to search for regular frequency spacings in δ Scuti stars through visual inspection and an algorithmic search. We searched for sequences of quasi-equally spaced frequencies, containing at least four members per sequence, in 90 δ Scuti stars observed by CoRoT. We found an unexpectedly large number of independent series of regular frequency spacings in 77 δ Scuti stars (from one to eight sequences) in the non-asymptotic regime. We introduce the sequence search method by presenting the sequences and echelle diagram of CoRoT 102675756 and the structure of the algorithmic search. Four sequences (echelle ridges) were found in the 5–21 d⁻¹ region, where the pairs of sequences are shifted (by between 0.5 and 0.59 d⁻¹) by twice the value of the estimated rotational splitting frequency (0.269 d⁻¹). General conclusions for the whole sample are also presented in this paper. The statistics of the spacings derived by the sequence search method and by FT (Fourier transform of the frequencies), and the statistics of the shifts, are compared. In many stars more than one almost equally valid spacing appeared. The model frequencies of FG Vir and their rotationally split components were used to formulate a possible explanation: one spacing is the large separation, while the other is the sum of the large separation and the rotational frequency. In CoRoT 102675756, the two spacings (2.249 and 1.977 d⁻¹) are in better agreement with the sum of a possible 1.710 d⁻¹ large separation and two or one times, respectively, the value of the rotational frequency.
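As a rough illustration of what searching for quasi-equally spaced sequences involves, the sketch below greedily grows chains f, f + Δ, f + 2Δ, … from an observed frequency list, keeping chains with at least four members. This is a deliberate simplification under assumed parameters (tolerance, greedy strategy), not the authors' algorithm.

```python
# Naive search for quasi-equally spaced frequency sequences (>= 4 members),
# in the spirit of the sequence search method described above. The tolerance
# and greedy strategy are our own simplifications.
import numpy as np

def find_sequences(freqs, spacing, tol=0.05, min_len=4):
    """Greedily grow chains f, f+spacing, f+2*spacing, ... from each start."""
    freqs = np.sort(np.asarray(freqs, dtype=float))
    sequences = []
    for f0 in freqs:
        chain = [float(f0)]
        while True:
            target = chain[-1] + spacing
            j = int(np.argmin(np.abs(freqs - target)))  # nearest observed frequency
            if abs(freqs[j] - target) <= tol and freqs[j] > chain[-1]:
                chain.append(float(freqs[j]))
            else:
                break
        if len(chain) >= min_len:
            sequences.append(chain)
    return sequences

# Hypothetical frequency list in d^-1; a real analysis would scan a grid of
# candidate spacings and inspect the resulting echelle diagrams.
freqs = [5.10, 6.20, 7.35, 9.60, 11.86, 12.90, 14.10]
print(find_sequences(freqs, spacing=2.25, tol=0.08))
```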
Parameter Set Cloning Based on Catchment Similarity for Large-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Liu, Z.; Kaheil, Y.; McCollum, J.
2016-12-01
Parameter calibration is a crucial step in ensuring the accuracy of hydrological models. However, streamflow gauges are not available everywhere for calibrating a large-scale hydrologic model globally, so assigning parameters appropriately in regions where calibration cannot be performed directly is a challenge for large-scale hydrologic modeling. Here we propose a method to estimate model parameters in ungauged regions from the values obtained through calibration in areas where gauge observations are available. This parameter set cloning is performed according to a catchment similarity index, a weighted sum index based on four catchment characteristic attributes: IPCC climate zone, soil texture, land cover, and topographic index. Catchments with calibrated parameter values are donors, while uncalibrated catchments are candidates. Catchment characteristic analyses are first conducted for both donors and candidates. For each attribute, we compute a characteristic distance between donors and candidates. Next, for each candidate, weights are assigned to the four attributes such that higher weights are given to properties more directly linked to the dominant hydrologic processes; this ensures that the parameter set cloning emphasizes the dominant hydrologic process in the region where the candidate is located. The catchment similarity index for each donor-candidate pair is then computed as the sum of the weighted distances over the four attributes. Finally, parameters are assigned to each candidate from the donor that is "most similar" (i.e., with the shortest weighted distance sum). For validation, we applied the proposed method to catchments where gauge observations are available and compared streamflows simulated using parameters cloned from other catchments against results obtained by calibrating the hydrologic model directly using gauge data. The comparison shows good agreement between the two approaches across different river basins. The method has been applied globally to the Hillslope River Routing (HRR) model using gauge observations obtained from the Global Runoff Data Center (GRDC). As a next step, more catchment properties can be taken into account to further improve the representation of catchment similarity.
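A minimal sketch of the matching step described above, assuming numerically encoded attributes and illustrative weights (the attribute encodings, distance functions, and weight values are assumptions for illustration, not those of the HRR application):

```python
# Donor-candidate matching: a catchment similarity index built as a weighted
# sum of per-attribute distances, with parameters cloned from the most
# similar donor. Attribute names, encodings, and weights are illustrative.
import numpy as np

ATTRS = ["climate_zone", "soil_texture", "land_cover", "topo_index"]

def similarity_distance(candidate, donor, weights):
    """Weighted sum of characteristic distances over the four attributes."""
    d = np.array([abs(candidate[a] - donor[a]) for a in ATTRS])
    return float(np.dot(weights, d))

def clone_parameters(candidate, donors, weights):
    """Assign the parameter set of the donor with the smallest distance."""
    best = min(donors, key=lambda d: similarity_distance(candidate, d, weights))
    return best["params"]

# Hypothetical encoded attributes; weights emphasize the attributes most
# tied to the locally dominant hydrologic process (here climate, topography).
donors = [
    {"climate_zone": 2, "soil_texture": 1, "land_cover": 3, "topo_index": 8.1,
     "params": {"k_routing": 0.3}},
    {"climate_zone": 1, "soil_texture": 2, "land_cover": 1, "topo_index": 6.4,
     "params": {"k_routing": 0.7}},
]
candidate = {"climate_zone": 1, "soil_texture": 1, "land_cover": 2, "topo_index": 6.0}
print(clone_parameters(candidate, donors, weights=np.array([0.4, 0.1, 0.1, 0.4])))
```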
Measuring efficiency of university-industry Ph.D. projects using best worst method.
Salimi, Negin; Rezaei, Jafar
A collaborative Ph.D. project, carried out by a doctoral candidate, is a form of collaboration between university and industry. Given the importance of such projects, researchers have considered different ways to evaluate their success, usually with a focus on the outputs of these projects. What has been neglected, however, is the other side of the coin: the inputs. The main aim of this study is to incorporate both the inputs and outputs of these projects into a more meaningful measure called efficiency. The ratio of the weighted sum of outputs over the weighted sum of inputs defines the efficiency of a Ph.D. project. The weights of the inputs and outputs can be identified using a multi-criteria decision-making (MCDM) method. Data on inputs and outputs are collected from 51 Ph.D. candidates who graduated from Eindhoven University of Technology. The weights are identified using a new MCDM method called the Best Worst Method (BWM). Because Ph.D. candidates and supervisors may weigh the inputs and outputs differently, data for BWM are collected from both groups. Interestingly, the efficiency levels computed from the two perspectives differ because of these weight differences. Moreover, a comparison between the efficiency scores of these projects and their success scores reveals differences that may have significant implications. A sensitivity analysis identifies the inputs and outputs that contribute most.
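The efficiency measure itself reduces to a weighted ratio once BWM has produced the weights. The sketch below shows only that ratio, with hypothetical inputs, outputs, and weights; deriving the weights via BWM is a separate optimization not shown here.

```python
# Efficiency as the ratio of the weighted sum of outputs to the weighted sum
# of inputs, with weights taken as given (in the paper they come from the
# Best Worst Method). Inputs, outputs, and weights are invented.
import numpy as np

def efficiency(inputs, outputs, w_in, w_out):
    """Weighted-output / weighted-input efficiency of one Ph.D. project."""
    return float(np.dot(w_out, outputs) / np.dot(w_in, inputs))

# e.g. inputs = (candidate effort, supervisor time), outputs = (papers, patents)
w_in, w_out = np.array([0.6, 0.4]), np.array([0.7, 0.3])
projects = [((1.0, 0.8), (4, 1)), ((1.2, 1.0), (3, 0))]
scores = [efficiency(i, o, w_in, w_out) for i, o in projects]
print(scores)  # different weight sets (candidates vs. supervisors) can give
               # different efficiency rankings, as the study reports
```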
Gao, Xiao; Wang, Quanchuan; Jackson, Todd; Zhao, Guang; Liang, Yi; Chen, Hong
2011-04-01
Despite evidence indicating that fatness and thinness information are processed differently by weight-preoccupied and eating-disordered individuals, the exact nature of these attentional biases is unclear. In this research, eye movement (EM) tracking assessed biases in specific component processes of visual attention (i.e., orientation, detection, maintenance, and disengagement of gaze) in relation to body-related stimuli among 20 weight-dissatisfied (WD) and 20 weight-satisfied young women. Eye movements were recorded while participants completed a dot-probe task featuring fatness-neutral and thinness-neutral word pairs. Compared to controls, WD women were more likely to direct their initial gaze toward fatness words, had a shorter mean latency of first fixation on both fatness and thinness words, had longer first fixations on fatness words but shorter first fixations on thinness words, and shorter total gaze duration on thinness words. Reaction time data showed a maintenance bias toward fatness words among the WD women. In sum, WD women showed initial orienting, speeded detection, and initial maintenance biases toward fat body words, in addition to a speeded detection-avoidance pattern of biases in relation to thin body words. The results highlight the utility of EM tracking as a means of identifying subtle attentional biases among weight-dissatisfied women drawn from a non-clinical setting, and the need to assess attentional biases as a dynamic process.
Myelin protein zero gene sequencing diagnoses Charcot-Marie-Tooth Type 1B disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Y.; Zhang, H.; Madrid, R.
1994-09-01
Charcot-Marie-Tooth disease (CMT), the most common genetic neuropathy, affects about 1 in 2600 people in Norway and is found worldwide. CMT Type 1 (CMT1) presents with slow nerve conduction and demyelinated Schwann cells. Autosomal dominant CMT Type 1B (CMT1B) results from mutations in the myelin protein zero (MPZ) gene, which directs the synthesis of more than half of all Schwann cell protein. This gene was mapped to the chromosome 1q22-1q23.1 boundary by fluorescence in situ hybridization. All 7 CMT1B mutations reported to date are unique; thus the most effective means to identify CMT1B mutations in at-risk family members and fetuses is to sequence the entire coding sequence in dominant or sporadic CMT patients without the CMT1A duplication. Of the 19 primers used in 16 pairs to uniquely amplify the entire MPZ coding sequence, 6 primer pairs were used to amplify and sequence the 6 exons. The DyeDeoxy Terminator cycle sequencing method, used with four different-color fluorescent labels, was superior to manual sequencing because it sequences more bases unambiguously from extracted genomic DNA samples within 24 hours. This protocol was used to test 28 CMT and Dejerine-Sottas patients without the CMT1A gene duplication. Sequencing MPZ gene-specific amplified fragments identified 9 polymorphic sites within the 6 exons that encode the 248-amino-acid MPZ protein. The major CMT1B mutations identified by single-strand sequencing are being verified by reverse-strand sequencing and, when possible, by restriction enzyme analysis. This protocol can be used to distinguish CMT1B patients from other CMT phenotypes and to determine the CMT1B status of relatives both presymptomatically and prenatally.
Super (a*, d*)-ℋ-antimagic total covering of second order of shackle graphs
NASA Astrophysics Data System (ADS)
Hesti Agustin, Ika; Dafik; Nisviasari, Rosanita; Prihandini, R. M.
2017-12-01
Let H be a simple and connected graph. A shackle of graph H, denoted by G = shack(H, v, n), is a graph G constructed from non-trivial graphs H_1, H_2, …, H_n such that, for every 1 ≤ s, t ≤ n with |s − t| ≥ 2, H_s and H_t share no common vertex, and for every 1 ≤ i ≤ n − 1, H_i and H_{i+1} share exactly one common vertex v, called a connecting vertex; these n − 1 connecting vertices are all distinct. The graph G is said to be an (a^*, d^*)-H-antimagic total graph of second order if there exists a bijection f : V(G) ∪ E(G) → {1, 2, …, |V(G)| + |E(G)|} such that, for all subgraphs isomorphic to H, the total H-weights W(H) = \sum_{v \in V(H)} f(v) + \sum_{e \in E(H)} f(e) form an arithmetic sequence of second order, \{a^*, a^* + d^*, a^* + 3d^*, a^* + 6d^*, \ldots, a^* + \frac{n^2 - n}{2} d^*\}, where a^* and d^* are positive integers and n is the number of all subgraphs isomorphic to H. An (a^*, d^*)-H-antimagic total labeling of second order f is called super if the smallest labels appear on the vertices. In this paper, we study a super (a^*, d^*)-H-antimagic total labeling of second order of G = shack(H, v, n) by using a partition technique of second order.
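As a quick check of the "second order" terminology (our derivation, for clarity, not part of the abstract): the k-th weight in the displayed sequence can be written in closed form, and its differences behave as follows.

```latex
% k-th total H-weight in the displayed sequence:
W_k = a^* + \binom{k}{2}\,d^* = a^* + \frac{k(k-1)}{2}\,d^*,
\qquad
W_{k+1} - W_k = k\,d^*,
\qquad
\bigl(W_{k+2} - W_{k+1}\bigr) - \bigl(W_{k+1} - W_k\bigr) = d^*.
```

The first differences grow linearly and the second differences are constant, which is precisely what makes \{W_k\} an arithmetic sequence of second order; at k = n the final term is a^* + \frac{n^2 - n}{2} d^*, matching the set above.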
Meta-analysis of 32 genome-wide linkage studies of schizophrenia
Ng, MYM; Levinson, DF; Faraone, SV; Suarez, BK; DeLisi, LE; Arinami, T; Riley, B; Paunio, T; Pulver, AE; Irmansyah; Holmans, PA; Escamilla, M; Wildenauer, DB; Williams, NM; Laurent, C; Mowry, BJ; Brzustowicz, LM; Maziade, M; Sklar, P; Garver, DL; Abecasis, GR; Lerer, B; Fallin, MD; Gurling, HMD; Gejman, PV; Lindholm, E; Moises, HW; Byerley, W; Wijsman, EM; Forabosco, P; Tsuang, MT; Hwu, H-G; Okazaki, Y; Kendler, KS; Wormley, B; Fanous, A; Walsh, D; O’Neill, FA; Peltonen, L; Nestadt, G; Lasseter, VK; Liang, KY; Papadimitriou, GM; Dikeos, DG; Schwab, SG; Owen, MJ; O’Donovan, MC; Norton, N; Hare, E; Raventos, H; Nicolini, H; Albus, M; Maier, W; Nimgaonkar, VL; Terenius, L; Mallet, J; Jay, M; Godard, S; Nertney, D; Alexander, M; Crowe, RR; Silverman, JM; Bassett, AS; Roy, M-A; Mérette, C; Pato, CN; Pato, MT; Roos, J Louw; Kohn, Y; Amann-Zalcenstein, D; Kalsi, G; McQuillin, A; Curtis, D; Brynjolfson, J; Sigmundsson, T; Petursson, H; Sanders, AR; Duan, J; Jazin, E; Myles-Worsley, M; Karayiorgou, M; Lewis, CM
2009-01-01
A genome scan meta-analysis (GSMA) was carried out on 32 independent genome-wide linkage scan analyses that included 3255 pedigrees with 7413 genotyped cases affected with schizophrenia (SCZ) or related disorders. The primary GSMA divided the autosomes into 120 bins, rank-ordered the bins within each study according to the most positive linkage result in each bin, summed these ranks (weighted for study size) for each bin across studies and determined the empirical probability of a given summed rank (PSR) by simulation. Suggestive evidence for linkage was observed in two single bins, on chromosomes 5q (142-168 Mb) and 2q (103-134 Mb). Genome-wide evidence for linkage was detected on chromosome 2q (119-152 Mb) when bin boundaries were shifted to the middle of the previous bins. The primary analysis met empirical criteria for ‘aggregate’ genome-wide significance, indicating that some or all of 10 bins are likely to contain loci linked to SCZ, including regions of chromosomes 1, 2q, 3q, 4q, 5q, 8p and 10q. In a secondary analysis of 22 studies of European-ancestry samples, suggestive evidence for linkage was observed on chromosome 8p (16-33 Mb). Although the newer genome-wide association methodology has greater power to detect weak associations to single common DNA sequence variants, linkage analysis can detect diverse genetic effects that segregate in families, including multiple rare variants within one locus or several weakly associated loci in the same region. Therefore, the regions supported by this meta-analysis deserve close attention in future studies. PMID:19349958
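To make the rank-based statistic concrete, here is a compact sketch of the GSMA computation: bins are ranked within each study by their best linkage score, ranks are weighted by study size and summed per bin, and significance comes from within-study permutations. This is a simplification with simulated placeholder data, not the published pipeline.

```python
# Sketch of the genome scan meta-analysis (GSMA) rank-sum statistic.
# Scores and study weights below are simulated placeholders.
import numpy as np

def gsma_summed_ranks(scores, study_weights):
    """scores: (n_studies, n_bins) best linkage score per bin per study."""
    # rank within each study: highest score -> highest rank (1..n_bins)
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    return (ranks * study_weights[:, None]).sum(axis=0)

def empirical_p(summed, scores, study_weights, n_perm=1000, seed=0):
    """P(summed rank >= observed) under within-study permutation of bins."""
    rng = np.random.default_rng(seed)
    n_studies, n_bins = scores.shape
    exceed = np.zeros(n_bins)
    for _ in range(n_perm):
        perm = np.stack([rng.permutation(scores[s]) for s in range(n_studies)])
        exceed += gsma_summed_ranks(perm, study_weights) >= summed
    return exceed / n_perm

scores = np.random.default_rng(1).random((32, 120))   # 32 studies, 120 bins
w = np.sqrt(np.full(32, 100.0))                       # e.g. sqrt(N) weights
observed = gsma_summed_ranks(scores, w)
pvals = empirical_p(observed, scores, w, n_perm=200)
```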
Wetting and Layering for Solid-on-Solid I: Identification of the Wetting Point and Critical Behavior
NASA Astrophysics Data System (ADS)
Lacoin, Hubert
2018-06-01
We provide a complete description of the low-temperature wetting transition for the two-dimensional solid-on-solid model. More precisely, we study the integer-valued field (\varphi(x))_{x \in \mathbb{Z}^2} associated with the energy functional V(\varphi) = \beta \sum_{x \sim y} |\varphi(x) - \varphi(y)| - \sum_x \bigl( h \mathbf{1}_{\{\varphi(x) = 0\}} - \infty \mathbf{1}_{\{\varphi(x) < 0\}} \bigr). Since the pioneering work of Chalker [15], it is known that for every \beta there exists h_w(\beta) > 0 delimiting a transition between a delocalized phase (h < h_w(\beta)), where the proportion of points at level zero vanishes, and a localized phase (h > h_w(\beta)), where this proportion is positive. We prove in the present paper that for \beta sufficiently large we have h_w(\beta) = \log\bigl( e^{4\beta} / (e^{4\beta} - 1) \bigr). Furthermore, we provide a sharp asymptotic for the free energy in the vicinity of the critical line: we show that close to h_w(\beta) the free energy is approximately piecewise affine, and that the points of discontinuity of the derivative of the affine approximation form a geometric sequence accumulating on the right of h_w(\beta). This asymptotic behavior provides strong evidence for the conjectured existence of countably many "layering transitions" in the vicinity of the wetting line, corresponding to jumps in the typical height of the field.
Prediction of Ras-effector interactions using position energy matrices.
Kiel, Christina; Serrano, Luis
2007-09-01
One of the more challenging problems in biology is to determine the cellular protein interaction network. Progress has been made in predicting protein-protein interactions from structural information, under the assumption that structurally similar proteins interact in similar ways. In a previous publication, we determined a genome-wide Ras-effector interaction network based on homology models, with high accuracy in discriminating binding from non-binding domains. However, for prediction on a genome-wide scale, homology modelling is a time-consuming process. We therefore developed a faster method using position energy matrices: starting from different Ras-effector X-ray template structures, all amino acids in the effector binding domain are sequentially mutated to all other amino acid residues and the effect on binding energy is calculated. These pre-calculated matrices can then be used to score any Ras or effector sequence for binding. Based on position energy matrices, the sequences of putative Ras-binding domains can be scanned quickly to calculate an energy sum value. By calibrating energy sum values against quantitative experimental binding data, thresholds can be defined, allowing non-binding domains to be excluded quickly. Sequences with energy sum values above this threshold are considered potential binding domains and can be analysed further using homology modelling. This prediction method could be applied to other protein families sharing conserved interaction types, in order to determine large-scale cellular protein interaction networks quickly. It could thus have an important impact on future in silico structural genomics approaches, in particular given increasing structural proteomics efforts aiming to determine all possible domain folds and interaction types. All matrices are deposited in the ADAN database (http://adan-embl.ibmc.umh.es/). Supplementary data are available at Bioinformatics online.
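The scoring step lends itself to a compact sketch: a pre-computed matrix gives a per-position, per-residue energy contribution, and a candidate sequence is scored by summing its entries and comparing against a calibrated threshold. The matrix values, threshold, and sequence below are invented for illustration; real matrices are derived from mutating X-ray template complexes.

```python
# Scoring a candidate effector sequence against a position energy matrix.
# Matrix, threshold, and sequence are placeholders, not ADAN data.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AA)}

def energy_sum(seq: str, matrix: np.ndarray) -> float:
    """matrix[p, r]: energy contribution of residue r at position p."""
    assert len(seq) == matrix.shape[0]
    return float(sum(matrix[p, AA_INDEX[r]] for p, r in enumerate(seq)))

def classify(seq: str, matrix: np.ndarray, threshold: float) -> str:
    """Below threshold: excluded quickly; above: candidate for modelling."""
    return "potential binder" if energy_sum(seq, matrix) > threshold else "non-binder"

rng = np.random.default_rng(2)
pem = rng.normal(0.0, 1.0, size=(10, 20))   # 10-residue interface, 20 AAs
print(classify("ACDEFGHIKL", pem, threshold=0.0))
```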
NASA Astrophysics Data System (ADS)
Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam
2018-04-01
Financial ratios and risk are important indicators for evaluating the financial performance or efficiency of companies, so both should be taken into consideration when evaluating company efficiency with a Data Envelopment Analysis (DEA) model. In a DEA model, the efficiency of a company is measured as the ratio of the weighted sum of outputs to the weighted sum of inputs. The objective of this paper is to propose a DEA model incorporating financial ratios and risk factors to evaluate and compare the efficiency of financial companies in Malaysia. In this study, the listed financial companies in Malaysia from 2004 to 2015 are investigated. The results show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies, implying that these companies have utilized their resources (inputs) optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies and to determine the optimal input and output weights that maximize the efficiency of financial companies in Malaysia.
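For concreteness, a standard way to compute such DEA efficiencies is the CCR multiplier form solved as a linear program: each company picks its most favorable non-negative weights, subject to no company's output-to-input ratio exceeding one. The sketch below, with invented data, illustrates this standard formulation; it does not reproduce the paper's exact model or inputs.

```python
# CCR multiplier form of DEA with scipy: for each unit, choose non-negative
# input/output weights maximizing its weighted-output to weighted-input
# ratio, subject to every unit's ratio being at most 1.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """X: (n, m) inputs, Y: (n, s) outputs; returns efficiency of unit o."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: v (input weights, m) then u (output weights, s)
    c = np.concatenate([np.zeros(m), -Y[o]])          # maximize u . y_o
    A_ub = np.hstack([-X, Y])                         # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[o], np.zeros(s)])[None]  # normalize v . x_o == 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s))
    return -res.fun                                   # efficiency in (0, 1]

X = np.array([[2.0, 1.0], [3.0, 2.0], [2.5, 1.5]])   # e.g. risk, expenses
Y = np.array([[1.0], [1.2], [1.4]])                  # e.g. return ratio
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])
```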