How Does Sequence Structure Affect the Judgment of Time? Exploring a Weighted Sum of Segments Model
ERIC Educational Resources Information Center
Matthews, William J.
2013-01-01
This paper examines the judgment of segmented temporal intervals, using short tone sequences as a convenient test case. In four experiments, we investigate how the relative lengths, arrangement, and pitches of the tones in a sequence affect judgments of sequence duration, and ask whether the data can be described by a simple weighted sum of…
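A weighted-sum-of-segments model of the kind named in the title can be sketched in a few lines. The segment durations and position weights below are invented for illustration; they are not values from the experiments:

```python
# Hypothetical sketch of a weighted-sum-of-segments duration model.
# Durations and weights are illustrative assumptions, not the paper's data.

def judged_duration(segment_durations, weights):
    """Judged total duration as a weighted sum of segment durations."""
    if len(segment_durations) != len(weights):
        raise ValueError("one weight per segment is required")
    return sum(w * d for w, d in zip(weights, segment_durations))

# Example: three tones; later segments weighted more heavily (assumed).
durations_ms = [200, 300, 500]
weights = [0.8, 1.0, 1.2]
print(judged_duration(durations_ms, weights))  # ≈ 1060
```

Different weighting schemes (e.g., by ordinal position or pitch) would simply change the `weights` vector.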
Majaj, Najib J.; Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
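The core claim, that a task is learned and executed as a weighted sum of firing rates, can be illustrated with a toy linear readout. The firing-rate vectors below are synthetic, and the prototype-difference weights are a simple stand-in for whatever learning rule the study actually used:

```python
# Toy illustration of a linear readout: a task "learned" as a weighted
# sum of mean firing rates. All numbers are synthetic; the study itself
# used recorded IT population responses and a learned decoder.

def learn_weights(rates_a, rates_b):
    """Prototype weights: difference of the two class-mean rate vectors."""
    n = len(rates_a[0])
    mean_a = [sum(r[i] for r in rates_a) / len(rates_a) for i in range(n)]
    mean_b = [sum(r[i] for r in rates_b) / len(rates_b) for i in range(n)]
    return [a - b for a, b in zip(mean_a, mean_b)]

def decide(weights, rates):
    """Classify by the sign of the weighted sum of firing rates."""
    s = sum(w * r for w, r in zip(weights, rates))
    return "A" if s > 0 else "B"

class_a = [[30.0, 5.0], [28.0, 7.0]]   # e.g. responses to one object
class_b = [[6.0, 25.0], [8.0, 27.0]]   # e.g. responses to another
w = learn_weights(class_a, class_b)
print(decide(w, [29.0, 6.0]))  # "A"
```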
Measures with locally finite support and spectrum.
Meyer, Yves F
2016-03-22
The goal of this paper is the construction of measures μ on R^n enjoying three conflicting but fortunately compatible properties: (i) μ is a sum of weighted Dirac masses on a locally finite set, (ii) the Fourier transform μ̂ of μ is also a sum of weighted Dirac masses on a locally finite set, and (iii) μ is not a generalized Dirac comb. We give surprisingly simple examples of such measures. These unexpected patterns strongly differ from quasicrystals, they provide us with unusual Poisson's formulas, and they might give us an unconventional insight into aperiodic order. PMID:26929358
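For contrast with property (iii): the classical Dirac comb is the standard example of a measure equal to its own Fourier transform, via the Poisson summation formula (a textbook fact, not a result of this paper):

```latex
\mu = \sum_{n\in\mathbb{Z}} \delta_{n}, \qquad
\widehat{\mu} = \sum_{k\in\mathbb{Z}} \delta_{k},
\qquad\text{equivalently}\qquad
\sum_{n\in\mathbb{Z}} f(n) = \sum_{k\in\mathbb{Z}} \widehat{f}(k)
```

for every Schwartz function $f$, with the normalization $\widehat{f}(\xi)=\int_{\mathbb{R}} f(x)\,e^{-2\pi i x\xi}\,dx$. The measures constructed in the paper satisfy (i) and (ii) while avoiding this periodic structure.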
Transition sum rules in the shell model
NASA Astrophysics Data System (ADS)
Lu, Yi; Johnson, Calvin W.
2018-03-01
Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid, or average energy, of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell-model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
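The centroid relation itself is elementary once a strength function is in hand. A minimal sketch, with made-up transition energies and strengths (the paper's point is computing the sum rules *without* the final states, which this sketch does not reproduce):

```python
# Centroid of a transition strength function as EWSR / NEWSR.
# Energies and strengths below are invented illustrative numbers.

def newsr(strengths):
    """Non-energy-weighted sum rule: total strength."""
    return sum(strengths)

def ewsr(energies, strengths, e_initial=0.0):
    """Energy-weighted sum rule relative to the initial-state energy."""
    return sum((e - e_initial) * b for e, b in zip(energies, strengths))

def centroid(energies, strengths, e_initial=0.0):
    """Average transition energy = EWSR / NEWSR."""
    return ewsr(energies, strengths, e_initial) / newsr(strengths)

e_final = [1.0, 2.0, 3.0]   # MeV, hypothetical final-state energies
b_trans = [1.0, 2.0, 1.0]   # hypothetical transition strengths
print(centroid(e_final, b_trans))  # 2.0
```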
NASA Astrophysics Data System (ADS)
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moment matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computationally efficient approximation for a finite sum of lognormals of at least ten summands, and it naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
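For context, the classical baseline for this problem is Fenton-Wilkinson moment matching: fit a single lognormal to the exact mean and variance of the weighted sum. This is *not* the paper's modified-moments method, just the standard reference point, sketched with illustrative parameters:

```python
import math

# Fenton-Wilkinson moment matching: approximate a weighted sum of
# independent lognormals S = sum w_i X_i by one lognormal with the same
# mean and variance. Classical baseline, not the paper's corrected method.

def fenton_wilkinson(weights, mus, sigmas):
    """Return (mu_S, sigma_S) of the matched lognormal."""
    mean = sum(w * math.exp(m + s * s / 2)
               for w, m, s in zip(weights, mus, sigmas))
    var = sum(w * w * (math.exp(s * s) - 1) * math.exp(2 * m + s * s)
              for w, m, s in zip(weights, mus, sigmas))
    sigma2 = math.log(1.0 + var / mean ** 2)
    mu = math.log(mean) - sigma2 / 2
    return mu, math.sqrt(sigma2)

mu_s, sig_s = fenton_wilkinson([0.5, 0.5], [0.0, 0.1], [0.25, 0.3])
# The matched lognormal reproduces the exact mean of the sum:
print(math.exp(mu_s + sig_s ** 2 / 2))
```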
Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.
Benaliouche, Houda; Touahria, Mohamed
2014-01-01
This research investigates the comparative performance of three different approaches to multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching-score and decision levels. The score-combination approach is applied after normalizing both scores with the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level performs best, followed by the weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results before and after fusion are presented, followed by a comparison with related work in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results. PMID:24605065
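The min-max normalization and the two sum-rule fusion schemes are simple enough to sketch directly; the raw scores, score ranges, and fusion weights below are illustrative, and the fuzzy-logic stage is not reproduced:

```python
# Min-max normalization followed by sum-rule and weighted-sum-rule fusion
# of iris and fingerprint matching scores. All numbers are illustrative.

def min_max(score, lo, hi):
    """Map a raw matching score into [0, 1]."""
    return (score - lo) / (hi - lo)

def sum_rule(s_iris, s_finger):
    return s_iris + s_finger

def weighted_sum_rule(s_iris, s_finger, w_iris=0.6, w_finger=0.4):
    # Weights would normally reflect each matcher's accuracy.
    return w_iris * s_iris + w_finger * s_finger

iris = min_max(72.0, 0.0, 100.0)     # 0.72
finger = min_max(35.0, 10.0, 60.0)   # 0.5
print(sum_rule(iris, finger))          # ≈ 1.22
print(weighted_sum_rule(iris, finger)) # ≈ 0.632
```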
Prediction equation for calculating fat mass in young Indian adults.
Sandhu, Jaspal Singh; Gupta, Giniya; Shenoy, Shweta
2010-06-01
Accurate measurement or prediction of fat mass is useful in physiology, nutrition, and clinical medicine. Most predictive equations currently used to assess percentage body fat or fat mass from simple anthropometric measurements were derived from people in western societies, and they may not be appropriate for individuals with other genotypic and phenotypic characteristics. We developed equations to predict fat mass from anthropometric measurements in young Indian adults. Fat mass was measured in 60 females and 58 males, aged 20 to 29 yr, using hydrostatic weighing with simultaneous measurement of residual lung volume. Anthropometric measures included weight (kg), height (m), and 4 skinfold thicknesses [STs (mm)]. Sex-specific linear regression models were developed with fat mass as the dependent variable and all anthropometric measures as independent variables. The prediction equation for fat mass (kg) was 8.46 + 0.32 (weight) - 15.16 (height) + 9.54 (log of sum of 4 STs) (R² = 0.53, SEE = 3.42 kg) for males and -20.22 + 0.33 (weight) + 3.44 (height) + 7.66 (log of sum of 4 STs) (R² = 0.72, SEE = 3.01 kg) for females. A new prediction equation for fat mass was derived and internally validated in young Indian adults using simple anthropometric measurements. PMID:22375197
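The two regression equations translate directly into code. Note one assumption: the abstract does not state the base of the logarithm, so base 10 is assumed here, as is conventional for skinfold-based equations:

```python
import math

# Fat-mass prediction equations from the abstract above.
# ASSUMPTION: log base 10 (not stated in the abstract).

def fat_mass_male(weight_kg, height_m, sum_4_skinfolds_mm):
    return (8.46 + 0.32 * weight_kg - 15.16 * height_m
            + 9.54 * math.log10(sum_4_skinfolds_mm))

def fat_mass_female(weight_kg, height_m, sum_4_skinfolds_mm):
    return (-20.22 + 0.33 * weight_kg + 3.44 * height_m
            + 7.66 * math.log10(sum_4_skinfolds_mm))

# Hypothetical subject: 70 kg, 1.75 m, skinfold sum 60 mm.
print(round(fat_mass_male(70, 1.75, 60), 1))  # ≈ 21.3 kg
```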
NASA Astrophysics Data System (ADS)
Crnomarkovic, Nenad; Belosevic, Srdjan; Tomanovic, Ivan; Milicevic, Aleksandar
2017-12-01
The effects of the number of significant figures (NSF) in the interpolation polynomial coefficients (IPCs) of the weighted-sum-of-gray-gases model (WSGM) on the results of numerical investigations and on WSGM optimization were investigated. The investigation used numerical simulations of the processes inside a pulverized-coal-fired furnace. The radiative properties of the gas phase were determined using the simple gray gas model (SG), a two-term WSGM (W2), and a three-term WSGM (W3). Ten sets of IPCs with the same NSF were formed for every weighting coefficient in both W2 and W3. The average and maximal relative differences of the flame temperatures, wall temperatures, and wall heat fluxes were determined. The investigation showed that the results were affected by the NSF unless it exceeded a certain value. Increasing the NSF did not necessarily lead to WSGM optimization; a suitable combination of NSF values (CNSF) was a necessary requirement for it.
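The operation under study, truncating a polynomial coefficient to a given number of significant figures, can be sketched generically. The coefficient value below is hypothetical, not one of the paper's IPCs:

```python
import math

# Rounding a coefficient to a given number of significant figures (NSF),
# illustrating the truncation studied for WSGM polynomial coefficients.
# The coefficient value is hypothetical.

def round_sig(x, n_sig):
    """Round x to n_sig significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n_sig - 1 - exponent)

coeff = 0.00123456789  # hypothetical interpolation polynomial coefficient
for nsf in (2, 4, 6):
    print(nsf, round_sig(coeff, nsf))
```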
System Finds Horizontal Location of Center of Gravity
NASA Technical Reports Server (NTRS)
Johnston, Albert S.; Howard, Richard T.; Brewster, Linda L.
2006-01-01
An instrumentation system rapidly and repeatedly determines the horizontal location of the center of gravity of a laboratory vehicle that slides horizontally on three air bearings (see Figure 1). Typically, knowledge of the horizontal center-of-mass location of such a vehicle is needed in order to balance the vehicle properly for an experiment and/or to assess the dynamic behavior of the vehicle. The system includes a load cell above each air bearing, electronic circuits that generate digital readings of the weight on each load cell, and a computer equipped with software that processes the readings. The total weight and, hence, the mass of the vehicle are computed from the sum of the load-cell weight readings. Then the horizontal position of the center of gravity is calculated straightforwardly as the weighted sum of the known position vectors of the air bearings, the contribution of each bearing being proportional to the weight on that bearing. In the initial application for which this system was devised, the center-of-mass calculation is particularly simple because the air bearings are located at corners of an equilateral triangle. However, the system is not restricted to this simple geometry. The system acquires and processes weight readings at a rate of 800 Hz for each load cell. The total weight and the horizontal location of the center of gravity are updated at a rate of 800/3 ≈ 267 Hz. In a typical application, a technician would use the center-of-mass output of this instrumentation system as a guide to the manual placement of small weights on the vehicle to shift the center of gravity to a desired horizontal position. Usually, the desired horizontal position is that of the geometric center. Alternatively, this instrumentation system could be used to provide position feedback for a control system that would cause weights to be shifted automatically (see Figure 2) in an effort to keep the center of gravity at the geometric center.
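The weighted-sum calculation described above is short enough to show in full. The equilateral-triangle coordinates and load values are illustrative:

```python
# Horizontal center of gravity from three load-cell readings, computed as
# the weighted sum of the known bearing positions. Coordinates and loads
# below are illustrative.

def center_of_gravity(positions, weights):
    """Weighted mean of bearing positions; weights are load-cell readings."""
    total = sum(weights)
    x = sum(w * p[0] for p, w in zip(positions, weights)) / total
    y = sum(w * p[1] for p, w in zip(positions, weights)) / total
    return x, y

# Air bearings at the corners of an equilateral triangle (side 1 m).
bearings = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8660254)]
loads_n = [100.0, 100.0, 100.0]  # equal loads -> CG at the centroid
print(center_of_gravity(bearings, loads_n))
```

With unequal loads, the result shifts toward the more heavily loaded bearing, which is exactly the feedback a technician would use when placing trim weights.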
Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi
2017-10-05
We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized by fixing atoms in the reaction-center part before displacements of the reaction-center atoms. In MSM, the surrounding part is described as the weighted sum of multiple surrounding structures that are independently optimized. Geometric displacements of the reaction-center atoms are then performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or in an enzyme. In all three cases, MSM gave lower reaction-energy profiles than the QM/MM-ONIOM microiteration method over the entire reaction paths, with comparable computational costs. © 2017 Wiley Periodicals, Inc.
On the time-weighted quadratic sum of linear discrete systems
NASA Technical Reports Server (NTRS)
Jury, E. I.; Gutman, S.
1975-01-01
A method is proposed for obtaining the time-weighted quadratic sum for linear discrete systems. The formula of the weighted quadratic sum is obtained from matrix z-transform formulation. In addition, it is shown that this quadratic sum can be derived in a recursive form for several useful weighted functions. The discussion presented parallels that of MacFarlane (1963) for weighted quadratic integral for linear continuous systems.
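The quantity in question can be illustrated by brute force for a scalar system. The paper derives closed-form and recursive expressions via the z-transform; the sketch below merely truncates the infinite sum and checks it against the scalar closed form:

```python
# Direct (truncated) evaluation of the time-weighted quadratic sum
# J = sum_{k>=0} k * x_k' Q x_k for x_{k+1} = A x_k, scalar case.
# The paper's contribution is a closed-form/recursive z-transform result;
# this is only a numerical illustration of the quantity itself.

def time_weighted_quadratic_sum(a, q, x0, n_terms=200):
    """Scalar case: J = sum k * q * x_k**2 with x_k = a**k * x0."""
    total = 0.0
    x = x0
    for k in range(n_terms):
        total += k * q * x * x
        x *= a
    return total

# Stable scalar system a = 0.5: the closed form gives
# J = q * x0**2 * a**2 / (1 - a**2)**2 = 4/9 for q = x0 = 1.
print(time_weighted_quadratic_sum(0.5, 1.0, 1.0))
```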
Optimization of Long Range Major Rehabilitation of Airfield Pavements.
1983-01-01
the network level, the mathematical representation of choosing those projects that maximize the sum of the user-value-weighted structural performance of ... quantitatively be compared. In addition, an estimate of an appropriate level of funding for the entire system can be made. The simple example shows a ... pavement engineers to only working in the present. The designing and comparing of pavement maintenance and rehabilitation alternatives remain directed
González-Benito, J; Castillo, E; Cruz-Caldito, J F
2015-07-28
Nanothermal expansion of poly(ethylene-co-vinyl acetate), EVA, and poly(methyl methacrylate), PMMA, films was measured to obtain linear coefficients of thermal expansion, CTEs. The simple deflection of a cantilever in an atomic force microscope, AFM, was used to monitor thermal expansion at the nanoscale. The influences of (a) the structure of EVA in terms of its composition (vinyl acetate content) and (b) the size of the PMMA chains in terms of molecular weight were studied. To this end, several polymer samples were used: EVA copolymers with different weight percentages of the vinyl acetate comonomer (12, 18, 25, and 40%) and PMMA polymers with different weight-average molecular weights (33.9, 64.8, 75.6, and 360.0 kg mol⁻¹). The dependence of the CTEs on the vinyl acetate weight fraction of EVA and on the molecular weight of PMMA was analyzed and finally explained using new, intuitive, and very simple models based on the rule of mixtures. For the EVA copolymers, a simple equation considering the weighted contributions of each comonomer was enough to estimate the final CTE above the glass transition temperature. When the molecular-weight dependence is considered, the free-volume concept was used as a novelty: the expansion of PMMA, at least at the nanoscale, was well and easily described by the sum of the weighted contributions of the occupied and free volumes.
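A rule-of-mixtures estimate of this kind is a one-line weighted sum. The component CTE values below are hypothetical placeholders, not the paper's measured data:

```python
# Rule-of-mixtures estimate of a copolymer CTE as the weighted sum of
# comonomer contributions, in the spirit of the model described above.
# Component CTE values are hypothetical.

def cte_rule_of_mixtures(w_va, cte_va, cte_ethylene):
    """CTE of EVA from the vinyl acetate weight fraction w_va."""
    return w_va * cte_va + (1.0 - w_va) * cte_ethylene

cte_va_part = 2.6e-4  # 1/K, hypothetical vinyl acetate contribution
cte_pe_part = 2.0e-4  # 1/K, hypothetical ethylene contribution
for w in (0.12, 0.18, 0.25, 0.40):  # VA contents studied in the paper
    print(w, cte_rule_of_mixtures(w, cte_va_part, cte_pe_part))
```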
On the number of infinite geodesics and ground states in disordered systems
NASA Astrophysics Data System (ADS)
Wehr, Jan
1997-04-01
We study first-passage percolation models and their higher dimensional analogs—models of surfaces with random weights. We prove that under very general conditions the number of lines or, in the second case, hypersurfaces which locally minimize the sum of the random weights is with probability one equal to 0 or with probability one equal to +∞. As corollaries we show that in any dimension d≥2 the number of ground states of an Ising ferromagnet with random coupling constants equals (with probability one) 2 or +∞. Proofs employ simple large-deviation estimates and ergodic arguments.
Energy-weighted sum rules connecting ΔZ = 2 nuclei within the SO(8) model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Štefánik, Dušan; Šimkovic, Fedor; Faessler, Amand
2013-12-30
Energy-weighted sum rules associated with ΔZ = 2 nuclei are obtained for the Fermi and the Gamow-Teller operators within the SO(8) model. It is found that the contribution of a single state of the intermediate nucleus dominates the sum rule. The results confirm findings obtained within the SO(5) model that the energy-weighted sum rules of ΔZ = 2 nuclei are governed by the residual interactions of the nuclear Hamiltonian. A short discussion concerning some aspects of energy-weighted sum rules in the case of realistic nuclei is included.
Graphical tensor product reduction scheme for the Lie algebras so(5) = sp(2), su(3), and g(2)
NASA Astrophysics Data System (ADS)
Vlasii, N. D.; von Rütte, F.; Wiese, U.-J.
2016-08-01
We develop in detail a graphical tensor product reduction scheme, first described by Antoine and Speiser, for the simple rank 2 Lie algebras so(5) = sp(2), su(3), and g(2). This leads to an efficient practical method to reduce tensor products of irreducible representations into sums of such representations. For this purpose, the 2-dimensional weight diagram of a given representation is placed in a "landscape" of irreducible representations. We provide both the landscapes and the weight diagrams for a large number of representations for the three simple rank 2 Lie algebras. We also apply the algebraic "girdle" method, which is much less efficient for calculations by hand for moderately large representations. Computer code for reducing tensor products, based on the graphical method, has been developed as well and is available from the authors upon request.
Sommer, Christine; Sletner, Line; Mørkrid, Kjersti; Jenum, Anne Karen; Birkeland, Kåre Inge
2015-04-03
Maternal glucose and lipid levels are associated with neonatal anthropometry of the offspring, also independently of maternal body mass index (BMI). Gestational weight gain, however, is often not accounted for. The objective was to explore whether the effects of maternal glucose and lipid levels on the offspring's birth weight and subcutaneous fat were independent of early pregnancy BMI and mid-gestational weight gain. In a population-based, multi-ethnic, prospective cohort of 699 women and their offspring, maternal anthropometrics were collected in gestational weeks 15 and 28. Maternal fasting plasma lipids and fasting and 2-hour glucose after a 75 g glucose load were collected in gestational week 28. Maternal risk factors were standardized using z-scores. Outcomes were neonatal birth weight and the sum of skinfolds in four different regions. Mean (standard deviation) birth weight was 3491 ± 498 g and mean sum of skinfolds was 18.2 ± 3.9 mm. Maternal fasting glucose and HDL-cholesterol were predictors of birth weight, and fasting and 2-hour glucose were predictors of neonatal sum of skinfolds, independently of weight gain as well as early pregnancy BMI, gestational week at inclusion, maternal age, parity, smoking status, ethnic origin, gestational age, and the offspring's sex. However, weight gain was the strongest independent predictor of both birth weight and neonatal sum of skinfolds: a 0.21 kg/week increase in weight gain gave a 110.7 (95% confidence interval 76.6-144.9) g heavier neonate and a 0.72 (0.38-1.06) mm larger sum of skinfolds. The effect size of the mother's early pregnancy BMI on birth weight was higher in non-Europeans than in Europeans. Maternal fasting glucose and HDL-cholesterol were predictors of the offspring's birth weight, and fasting and 2-hour glucose were predictors of neonatal sum of skinfolds, independently of weight gain.
Mid-gestational weight gain was a stronger predictor of both birth weight and neonatal sum of skinfolds than early pregnancy BMI, maternal glucose and lipid levels.
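The z-score standardization applied to the maternal risk factors is a standard transformation; a minimal sketch with made-up sample values:

```python
import math

# Standardizing a risk factor to z-scores: (x - mean) / sd.
# The sample values are hypothetical.

def z_scores(values):
    """Z-scores using the sample (n-1) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / sd for v in values]

fasting_glucose = [4.2, 4.8, 5.1, 4.5, 5.4]  # mmol/L, hypothetical
z = z_scores(fasting_glucose)
print([round(v, 2) for v in z])
```

Standardizing each predictor this way lets the regression coefficients be compared on a common per-standard-deviation scale, which is why the abstract can call weight gain the "strongest" predictor.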
Multiple-path model of spectral reflectance of a dyed fabric.
Rogers, Geoffrey; Dalloz, Nicolas; Fournel, Thierry; Hebert, Mathieu
2017-05-01
Experimental results are presented of the spectral reflectance of a dyed fabric as analyzed by a multiple-path model of reflection. The multiple-path model provides simple analytic expressions for reflection and transmission of turbid media by applying the Beer-Lambert law to each path through the medium and summing over all paths, each path weighted by its probability. The path-length probability is determined by a random-walk analysis. The experimental results presented here show excellent agreement with predictions made by the model.
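The model's core computation, Beer-Lambert attenuation per path, summed over paths weighted by probability, can be sketched as follows. The path-length distribution and absorption coefficient are illustrative stand-ins for the random-walk statistics used in the paper:

```python
import math

# Multiple-path reflection: apply the Beer-Lambert law to each path and
# sum, weighting each path by its probability. All numbers illustrative.

def reflectance(path_probs, path_lengths_mm, absorption_per_mm):
    """R = sum_p P(p) * exp(-alpha * l_p) over the path ensemble."""
    return sum(p * math.exp(-absorption_per_mm * l)
               for p, l in zip(path_probs, path_lengths_mm))

probs = [0.5, 0.3, 0.2]      # path probabilities (sum to 1)
lengths = [0.1, 0.3, 0.6]    # mm, hypothetical path lengths
print(reflectance(probs, lengths, 0.0))  # no absorption
print(reflectance(probs, lengths, 2.0))  # absorption lowers reflectance
```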
Simple and accurate sum rules for highly relativistic systems
NASA Astrophysics Data System (ADS)
Cohen, Scott M.
2005-03-01
In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
Sum rules for quasifree scattering of hadrons
NASA Astrophysics Data System (ADS)
Peterson, R. J.
2018-02-01
The areas dσ/dΩ of fitted quasifree scattering peaks from bound nucleons in continuum hadron-nucleus spectra measuring d²σ/dΩ dω are converted to sum rules akin to the Coulomb sums familiar from continuum electron scattering spectra from nuclear charge. Hadronic spectra with or without charge exchange of the beam are considered. These sums are compared to the simple expectations of a nonrelativistic Fermi gas, including a Pauli blocking factor. For scattering without charge exchange, the hadronic sums are below this expectation, as also observed with Coulomb sums. For charge-exchange spectra, the sums are near or above the simple expectation, with larger uncertainties. The strong role of hadron-nucleon in-medium total cross sections is noted from use of the Glauber model.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
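The Pareto-dominance test at the heart of this approach is easy to sketch. The data layout below (labeled input sets, each with one goodness-of-fit distance per calibration target, lower meaning a better fit) is an assumption for illustration:

```python
def pareto_frontier(input_sets):
    # input_sets: list of (label, fits) where fits is a tuple of
    # per-target goodness-of-fit distances (lower = better fit).
    # Returns labels of input sets not dominated by any other set.
    def dominates(a, b):
        # a dominates b if a is at least as good on every target
        # and strictly better on at least one.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    frontier = []
    for label, fits in input_sets:
        if not any(dominates(other, fits)
                   for _, other in input_sets if other != fits):
            frontier.append(label)
    return frontier
```

Note that no weights appear anywhere: an input set survives unless some other set fits every target at least as well, which is exactly the notion of optimality the abstract argues for.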
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach
Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin
2014-01-01
Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
Slanger, W D; Marchello, M J; Busboom, J R; Meyer, H H; Mitchell, L A; Hendrix, W F; Mills, R R; Warnock, W D
1994-06-01
Data from sixty finished, crossbred lambs were used to develop prediction equations for the total weight of retail-ready cuts (SUM). These cuts were the leg, sirloin, loin, rack, shoulder, neck, riblets, shank, and lean trim (85/15). Measurements were taken on live lambs and on both hot and cold carcasses. A four-terminal bioelectrical impedance analyzer (BIA) was used to measure resistance (Rs, ohms) and reactance (Xc, ohms). Distances between detector terminals (L, centimeters) were recorded. Carcass temperatures (T, degrees C) at the time of BIA readings were also recorded. The equation predicting SUM from cold carcass measurements (n = 53, R² = .97) was SUM = 0.093 + 0.621 × weight − 0.0219 × Rs + 0.0248 × Xc + 0.182 × L − 0.338 × T. Resistance accounted for variability in SUM over and above weight and L (P = .0016). The above equation was used to rank cold carcasses in descending order of predicted SUM. An analogous ranking was obtained from a prediction equation that used weight only (R² = .88). These rankings were divided into five categories: top 25%, middle 50%, bottom 25%, top 50%, and bottom 50%. Within-category differences in average fat cover, yield grade, and SUM as a percentage of cold carcass weight of carcasses not placed in the same category by both prediction equations were quantified with independent t-tests. These differences were statistically significant for all categories except the middle 50%. This shows that BIA located those lambs that could more efficiently contribute to SUM because a higher portion of their weight was lean.
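The cold-carcass regression reported in the abstract translates directly into a function. Units follow the abstract (ohms, centimeters, degrees C); the unit of carcass weight is not stated there, so the caller must supply it consistently with the original data:

```python
def predict_sum(weight, rs_ohm, xc_ohm, l_cm, t_c):
    # Cold-carcass prediction equation from the abstract (n = 53, R^2 = .97):
    # SUM = .093 + .621*weight - .0219*Rs + .0248*Xc + .182*L - .338*T
    return (0.093 + 0.621 * weight - 0.0219 * rs_ohm
            + 0.0248 * xc_ohm + 0.182 * l_cm - 0.338 * t_c)
```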
Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models
2002-03-01
such as the weighted sum method, the weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only weighted sum... different groups. They can be termed deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the... weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most
Super (a,d)-H-antimagic covering of möbius ladder graph
NASA Astrophysics Data System (ADS)
Indriyani, Novia; Sri Martini, Titin
2018-04-01
Let G = (V(G), E(G)) be a simple graph. An H-covering of G is a family of subgraphs H₁, H₂, …, Hⱼ such that every edge of G is contained in at least one Hᵢ for 1 ≤ i ≤ j. If every Hᵢ is isomorphic to H, then G admits an H-covering. Furthermore, G admits an (a, d)-H-antimagic covering if there is a bijective function ξ: V(G) ∪ E(G) → {1, 2, 3, …, |V(G)| + |E(G)|} such that the H′-weights, ω(H′) = Σ_{v∈V(H′)} ξ(v) + Σ_{e∈E(H′)} ξ(e), taken over all subgraphs H′ isomorphic to H, constitute an arithmetic progression {a, a + d, …, a + (t − 1)d}, where a and d are positive integers and t is the number of subgraphs of G isomorphic to H. If ξ(V(G)) = {1, 2, …, |V(G)|}, then ξ is called a super (a, d)-H-antimagic covering. This research provides a super (a, d)-H-antimagic covering with d ∈ {1, 3} of the Möbius ladder graph Mₙ for n > 5 and n odd.
A Groupwise Association Test for Rare Mutations Using a Weighted Sum Statistic
Madsen, Bo Eskerod; Browning, Sharon R.
2009-01-01
Resequencing is an emerging tool for identification of rare disease-associated mutations. Rare mutations are difficult to tag with SNP genotyping, as genotyping studies are designed to detect common variants. However, studies have shown that genetic heterogeneity is a probable scenario for common diseases, in which multiple rare mutations together explain a large proportion of the genetic basis for the disease. Thus, we propose a weighted-sum method to jointly analyse a group of mutations in order to test for groupwise association with disease status. For example, such a group of mutations may result from resequencing a gene. We compare the proposed weighted-sum method to alternative methods and show that it is powerful for identifying disease-associated genes, both on simulated and Encode data. Using the weighted-sum method, a resequencing study can identify a disease-associated gene with an overall population attributable risk (PAR) of 2%, even when each individual mutation has much lower PAR, using 1,000 to 7,000 affected and unaffected individuals, depending on the underlying genetic model. This study thus demonstrates that resequencing studies can identify important genetic associations, provided that specialised analysis methods, such as the weighted-sum method, are used. PMID:19214210
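The core of the weighted-sum method can be sketched as a per-individual genetic score in which rare mutations are up-weighted. This is a simplified sketch in the style of the abstract, not the authors' exact formulation; the weight formula and pseudo-count are assumptions, and in the full method the resulting scores are compared between affected and unaffected individuals with a rank-sum test:

```python
import math

def weighted_sum_scores(genotypes, is_case):
    # genotypes: per-individual lists of minor-allele counts (0, 1, or 2).
    # is_case: parallel list of booleans (True = affected).
    # Variants that are rare among unaffected individuals get small w_j,
    # hence large contributions 1/w_j to a carrier's score.
    n = len(genotypes)
    n_var = len(genotypes[0])
    controls = [g for g, c in zip(genotypes, is_case) if not c]
    weights = []
    for j in range(n_var):
        # allele frequency estimated in controls, with a pseudo-count
        q = (sum(g[j] for g in controls) + 1) / (2 * len(controls) + 2)
        weights.append(math.sqrt(n * q * (1 - q)))
    return [sum(g[j] / weights[j] for j in range(n_var)) for g in genotypes]
```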
Physiological mechanisms of sustained fumagillin-induced weight loss.
An, Jie; Wang, Liping; Patnode, Michael L; Ridaura, Vanessa K; Haldeman, Jonathan M; Stevens, Robert D; Ilkayeva, Olga; Bain, James R; Muehlbauer, Michael J; Glynn, Erin L; Thomas, Steven; Muoio, Deborah; Summers, Scott A; Vath, James E; Hughes, Thomas E; Gordon, Jeffrey I; Newgard, Christopher B
2018-03-08
Current obesity interventions suffer from lack of durable effects and undesirable complications. Fumagillin, an inhibitor of methionine aminopeptidase-2, causes weight loss by reducing food intake, but with effects on weight that are superior to pair-feeding. Here, we show that feeding of rats on a high-fat diet supplemented with fumagillin (HF/FG) suppresses the aggressive feeding observed in pair-fed controls (HF/PF) and alters expression of circadian genes relative to the HF/PF group. Multiple indices of reduced energy expenditure are observed in HF/FG but not HF/PF rats. HF/FG rats also exhibit changes in gut hormones linked to food intake, increased energy harvest by gut microbiota, and caloric spilling in the urine. Studies in gnotobiotic mice reveal that effects of fumagillin on energy expenditure but not feeding behavior may be mediated by the gut microbiota. In sum, fumagillin engages weight loss-inducing behavioral and physiologic circuits distinct from those activated by simple caloric restriction.
Physiological mechanisms of sustained fumagillin-induced weight loss
An, Jie; Patnode, Michael L.; Haldeman, Jonathan M.; Stevens, Robert D.; Ilkayeva, Olga; Bain, James R.; Muehlbauer, Michael J.; Glynn, Erin L.; Thomas, Steven; Muoio, Deborah; Summers, Scott A.; Vath, James E.; Hughes, Thomas E.; Gordon, Jeffrey I.; Newgard, Christopher B.
2018-01-01
Current obesity interventions suffer from lack of durable effects and undesirable complications. Fumagillin, an inhibitor of methionine aminopeptidase-2, causes weight loss by reducing food intake, but with effects on weight that are superior to pair-feeding. Here, we show that feeding of rats on a high-fat diet supplemented with fumagillin (HF/FG) suppresses the aggressive feeding observed in pair-fed controls (HF/PF) and alters expression of circadian genes relative to the HF/PF group. Multiple indices of reduced energy expenditure are observed in HF/FG but not HF/PF rats. HF/FG rats also exhibit changes in gut hormones linked to food intake, increased energy harvest by gut microbiota, and caloric spilling in the urine. Studies in gnotobiotic mice reveal that effects of fumagillin on energy expenditure but not feeding behavior may be mediated by the gut microbiota. In sum, fumagillin engages weight loss–inducing behavioral and physiologic circuits distinct from those activated by simple caloric restriction. PMID:29515039
A method of predicting the energy-absorption capability of composite subfloor beams
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1987-01-01
A simple method of predicting the energy-absorption capability of composite subfloor beam structures was developed. The method is based upon the weighted sum of the energy-absorption capabilities of the constituent elements of a subfloor beam. An empirical database of energy-absorption results from circular and square cross-section tube specimens was used in the prediction capability. The procedure is applicable to a wide range of subfloor beam structures. The procedure was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.
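The weighted sum itself is trivial to state; the interpretation of the weights below (e.g., the fraction of the beam cross section contributed by each element type) is an assumption for illustration, as the abstract does not specify how the weights are chosen:

```python
def beam_energy_absorption(elements):
    # Weighted sum over constituent elements: each entry pairs a weight
    # (assumed here to be, e.g., the element's share of the beam
    # cross section) with that element's measured energy-absorption
    # capability from the empirical tube-specimen database.
    return sum(w * e for w, e in elements)
```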
Harel, Daphna; Hudson, Marie; Iliescu, Alexandra; Baron, Murray; Steele, Russell
2016-08-01
To develop a weighted summary score for the Medsger Disease Severity Scale (DSS) and to compare its measurement properties with those of a summed DSS score and a physician's global assessment (PGA) of severity score in systemic sclerosis (SSc). Data from 875 patients with SSc enrolled in a multisite observational research cohort were extracted from a central database. Item response theory was used to estimate weights for the DSS weighted score. Intraclass correlation coefficients (ICC) and convergent, discriminative, and predictive validity of the 3 summary measures in relation to patient-reported outcomes (PRO) and mortality were compared. Mean PGA was 2.69 (SD 2.16, range 0-10), mean DSS summed score was 8.60 (SD 4.02, range 0-36), and mean DSS weighted score was 8.11 (SD 4.05, range 0-36). ICC were similar for all 3 measures [PGA 6.9%, 95% credible intervals (CrI) 2.1-16.2; DSS summed score 2.5%, 95% CrI 0.4-6.7; DSS weighted score 2.0%, 95% CrI 0.1-5.6]. Convergent and discriminative validity of the 3 measures for PRO were largely similar. In Cox proportional hazards models adjusting for age and sex, the 3 measures had similar predictive ability for mortality (adjusted R² 13.9% for PGA, 12.3% for DSS summed score, and 10.7% for DSS weighted score). The 3 summary scores appear valid and perform similarly. However, there were some concerns with the weights computed for individual DSS scales, with unexpectedly low weights attributed to lung, heart, and kidney, leading the PGA to be the preferred measure at this time. Further work refining the DSS could improve the measurement properties of the DSS summary scores.
ERIC Educational Resources Information Center
Soh, Kaycheng
2014-01-01
World university rankings (WUR) use the weight-and-sum approach to arrive at an overall measure which is then used to rank the participating universities of the world. Although the weight-and-sum procedure seems straightforward and accords with common sense, it has hidden methodological or statistical problems which render the meaning of the…
49 CFR 393.42 - Brakes required on all wheels.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subject to this part is not required to be equipped with brakes if the axle weight of the towed vehicle does not exceed 40 percent of the sum of the axle weights of the towing vehicle. (4) Any full trailer... of the towed vehicle does not exceed 40 percent of the sum of the axle weights of the towing vehicle...
Constructing general partial differential equations using polynomial and neural networks.
Zjavka, Ladislav; Pedrycz, Witold
2016-01-01
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
New QCD sum rules based on canonical commutation relations
NASA Astrophysics Data System (ADS)
Hayata, Tomoya
2012-04-01
A new derivation of QCD sum rules by canonical commutators is developed. It is a simple and straightforward generalization of the Thomas-Reiche-Kuhn sum rule on the basis of the Kugo-Ojima operator formalism of a non-abelian gauge theory and a suitable subtraction of UV divergences. By applying the method to the vector and axial vector currents in QCD, the exact Weinberg sum rules are examined. Vector current sum rules and new fractional power sum rules are also discussed.
Coherence analysis of a class of weighted networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; He, Jiaojiao; Zong, Yue; Ju, Tingting; Sun, Yu; Su, Weiyi
2018-04-01
This paper investigates consensus dynamics in a dynamical system with additive stochastic disturbances, characterized as network coherence by using the Laplacian spectrum. We introduce a class of weighted networks based on a complete graph and investigate the first- and second-order network coherence, quantified as the sum and the sum of squares of the reciprocals of all nonzero Laplacian eigenvalues. First, the recursive relationship between the eigenvalues of the Laplacian matrix at two successive generations is deduced. Then, we compute the sum and the sum of squares of the reciprocals of all nonzero Laplacian eigenvalues. The obtained results show that the scalings of first- and second-order coherence with network size obey four and five laws, respectively, along with the range of the weight factor. Finally, it indicates that the scalings of our studied networks are smaller than those of other studied networks when 1/√d
On the Total Edge Irregularity Strength of Generalized Butterfly Graph
NASA Astrophysics Data System (ADS)
Dwi Wahyuna, Hafidhyah; Indriati, Diari
2018-04-01
Let G(V, E) be a connected, simple, and undirected graph with vertex set V and edge set E. A total k-labeling is a map that carries the vertices and edges of a graph G into a set of positive integer labels {1, 2, …, k}. An edge irregular total k-labeling λ: V(G) ∪ E(G) → {1, 2, …, k} of a graph G is a total k-labeling such that the weights calculated for all edges are distinct. The weight of an edge uv in G, denoted by wt(uv), is defined as the sum of the label of u, the label of v, and the label of uv. The total edge irregularity strength of G, denoted by tes(G), is the minimum value of the largest label k over all such edge irregular total k-labelings. A generalized butterfly graph BFₙ, obtained by inserting the same number of vertices into every wing, has 2n + 1 vertices and 4n − 2 edges. In this paper, we investigate the total edge irregularity strength of the generalized butterfly graph BFₙ for n > 2. The result is tes(BFₙ) = ⌈4n/3⌉.
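The stated result is a closed form, so it can be checked numerically. Note that it coincides with the standard lower bound tes(G) ≥ ⌈(|E| + 2)/3⌉ when |E| = 4n − 2, which is why the formula is plausible:

```python
import math

def tes_generalized_butterfly(n):
    # Result stated in the paper for n > 2: tes(BF_n) = ceil(4n / 3).
    # This equals the general lower bound ceil((|E| + 2) / 3)
    # with |E| = 4n - 2 edges.
    if n <= 2:
        raise ValueError("result stated for n > 2")
    return math.ceil(4 * n / 3)
```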
Bayesian `hyper-parameters' approach to joint estimation: the Hubble constant from CMB measurements
NASA Astrophysics Data System (ADS)
Lahav, O.; Bridle, S. L.; Hobson, M. P.; Lasenby, A. N.; Sodré, L.
2000-07-01
Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalize this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint χ² function a set of `hyper-parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the hyper-parameters, is very simple: instead of minimizing Σ_j χ²_j (where χ²_j is the χ² per data set j) we propose to minimize Σ_j N_j ln(χ²_j) (where N_j is the number of data points per data set j). We illustrate the method by estimating the Hubble constant H0 from different sets of recent cosmic microwave background (CMB) experiments (including Saskatoon, Python V, MSAM1, TOCO and Boomerang). The approach can be generalized for combinations of cosmic probes, and for other priors on the hyper-parameters.
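The modified objective is a one-liner. The logarithm below is a reconstruction consistent with the stated uniform prior on the log of the hyper-parameters; readers should confirm the exact form against the published paper:

```python
import math

def hyperparameter_objective(chi2_per_set, n_points_per_set):
    # Joint objective over data sets j: sum_j N_j * ln(chi2_j),
    # replacing the usual sum_j chi2_j, so each data set's effective
    # weight is marginalized rather than fixed by hand.
    return sum(n * math.log(c)
               for c, n in zip(chi2_per_set, n_points_per_set))
```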
The Difference Calculus and The NEgative Binomial Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko o; Shenton, LR
2007-01-01
In a previous paper we state the dominant term in the third central moment of the maximum likelihood estimator k of the parameter k in the negative binomial probability function, where the probability generating function is (p + 1 − pt)^(−k). A partial sum of the series Σ 1/(k + x)³ is involved, where x is a negative binomial random variate. In expectation this sum can only be found numerically using the computer. Here we give a simple definite integral on (0,1) for the generalized case. This means that now we do have a valid expression for √β₁₁(k) and √β₁₁(p). In addition we use the finite difference operator Δ, and E = 1 + Δ, to set up formulas for low order moments. Other examples of the operators are quoted relating to the orthogonal set of polynomials associated with the negative binomial probability function used as a weight function.
Ferguson, John; Wheeler, William; Fu, YiPing; Prokunina-Olsson, Ludmila; Zhao, Hongyu; Sampson, Joshua
2013-01-01
With recent advances in sequencing, genotyping arrays, and imputation, GWAS now aim to identify associations with rare and uncommon genetic variants. Here, we describe and evaluate a class of statistics, generalized score statistics (GSS), that can test for an association between a group of genetic variants and a phenotype. GSS are a simple weighted sum of single-variant statistics and their cross-products. We show that the majority of statistics currently used to detect associations with rare variants are equivalent to choosing a specific set of weights within this framework. We then evaluate the power of various weighting schemes as a function of variant characteristics, such as MAF, the proportion associated with the phenotype, and the direction of effect. Ultimately, we find that two classical tests are robust and powerful, but details are provided as to when other GSS may perform favorably. The software package CRaVe is available at our website (http://dceg.cancer.gov/bb/tools/crave). PMID:23092956
NASA Astrophysics Data System (ADS)
Tohara, Takashi; Liang, Haichao; Tanaka, Hirofumi; Igarashi, Makoto; Samukawa, Seiji; Endo, Kazuhiko; Takahashi, Yasuo; Morie, Takashi
2016-03-01
A nanodisk array connected with a fin field-effect transistor is fabricated and analyzed for spiking neural network applications. This nanodevice performs weighted sums in the time domain using rising slopes of responses triggered by input spike pulses. The nanodisk arrays, which act as a resistance of several giga-ohms, are fabricated using a self-assembly bio-nano-template technique. Weighted sums are achieved with an energy dissipation on the order of 1 fJ, where the number of inputs can be more than one hundred. This amount of energy is several orders of magnitude lower than that of conventional digital processors.
Holder, J P; Benedetti, L R; Bradley, D K
2016-11-01
Single-hit pulse-height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel-plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard DC calibration methods follow the expected exponential distribution in the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
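A weighted sum of a few exponentials is, statistically, a mixture distribution, and a minimal sampler illustrates the idea. The mixture weights and mean pulse heights below are placeholders, not values from the instrument calibration:

```python
import random

def mixed_exponential_sample(weights, means, rng):
    # Draw one pulse height from a weighted sum (mixture) of
    # exponential distributions; weights must sum to 1.
    u = rng.random()
    acc = 0.0
    for w, m in zip(weights, means):
        acc += w
        if u <= acc:
            return rng.expovariate(1.0 / m)
    # guard against floating-point shortfall in the cumulative sum
    return rng.expovariate(1.0 / means[-1])
```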
Hardware Implementation of a Bilateral Subtraction Filter
NASA Technical Reports Server (NTRS)
Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven
2009-01-01
A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): (a) an image pixel pipeline with a 9×9-pixel window generator; (b) an array of processing elements; (c) an adder tree; (d) a smoothing-and-delaying unit; and (e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements.
Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients of the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value/weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of the products divided by the sum of the weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3×3 subwindow centered in the 9×9 window. After smoothing, the image is delayed by an additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3×3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
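A software reference model of this pipeline is straightforward to sketch. This version uses a range-Gaussian weight computed from the absolute difference to the central pixel (the per-position spatial component of the hardware's lookup tables is omitted), clamps at image borders, and treats `sigma_r` as an assumed parameter not given in the text:

```python
import math

def bilateral_subtraction(img, win=9, sub=3, sigma_r=25.0):
    # img: list of rows of grayscale values. For each pixel, compute a
    # range-weighted (bilateral) average over a win x win window and
    # subtract it from a plain sub x sub box average, as in the text.
    h, w = len(img), len(img[0])
    half, shalf = win // 2, sub // 2
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            num = den = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    p = img[clamp(y + dy, 0, h - 1)][clamp(x + dx, 0, w - 1)]
                    # Gaussian weight from |p - c|, as in the lookup tables
                    wgt = math.exp(-((p - c) ** 2) / (2 * sigma_r ** 2))
                    num += wgt * p
                    den += wgt
            bilateral = num / den  # sum of products / sum of weights
            box = 0.0
            for dy in range(-shalf, shalf + 1):
                for dx in range(-shalf, shalf + 1):
                    box += img[clamp(y + dy, 0, h - 1)][clamp(x + dx, 0, w - 1)]
            box /= sub * sub
            out[y][x] = box - bilateral
    return out
```

On a constant image both averages agree, so the output is identically zero; the filter responds only to local structure, which is the edge-preserving, low-frequency-suppressing behavior described above.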
Diagrams for the Free Energy and Density Weight Factors of the Ising Models.
1983-01-01
sum to zero. These diagrams are given for the cubic lattices. We employ a theorem that states that a certain sum of diagrams is zero in order to obtain the density-dependent weight factors.
Consultation sequencing of a hospital with multiple service points using genetic programming
NASA Astrophysics Data System (ADS)
Morikawa, Katsumi; Takahashi, Katsuhiko; Nagasawa, Keisuke
2018-07-01
A hospital with one consultation room operated by a physician and several examination rooms is investigated. Scheduled patients and walk-ins arrive at the hospital; each patient goes to the consultation room first, and some of them visit other service points before consulting the physician again. The objective function consists of the sum of three weighted average waiting times. The problem of sequencing patients for consultation is the focus. To alleviate the stress of waiting, the consultation sequence is displayed. A dispatching rule is used to decide the sequence, and the best rules are explored by genetic programming (GP). The simulation experiments indicate that the rules produced by GP can be reduced to simple permutations of queues, and that the best permutation depends on the weights used in the objective function. This implies that a balanced allocation of waiting times can be achieved by ordering the priority among the three queues.
Sum Rule for a Schiff-Like Dipole Moment
NASA Astrophysics Data System (ADS)
Raduta, A. A.; Budaca, R.
The energy-weighted sum rule for an electric dipole transition operator of a Schiff type differs from the Thomas-Reiche-Kuhn (TRK) sum rule by several corrective terms which depend on the number of system components, N. For illustration the formalism was applied to the case of Na clusters. One concludes that the random phase approximation (RPA) results for Na clusters obey the modified TRK sum rule.
A Solution to Weighted Sums of Squares as a Square
ERIC Educational Resources Information Center
Withers, Christopher S.; Nadarajah, Saralees
2012-01-01
For n = 1, 2, …, we give a solution (x₁, …, xₙ, N) to the Diophantine integer equation [image omitted]. Our solution has N of the form n!, in contrast to other solutions in the literature that are extensions of Euler's solution for N, a sum of squares. More generally, for given n and given integer weights m[subscript…
NASA Technical Reports Server (NTRS)
Hinson, E. W.
1981-01-01
The preliminary analysis and data analysis system development for the shuttle upper atmosphere mass spectrometer (SUMS) experiment are discussed. The SUMS experiment is designed to provide free stream atmospheric density, pressure, temperature, and mean molecular weight for the high altitude, high Mach number region.
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Motkova, A. V.
2018-01-01
A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
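The solution criterion from this abstract can be evaluated directly for a candidate partition. This sketch evaluates the objective only; it is not the authors' 2-approximation algorithm:

```python
def weighted_partition_cost(cluster1, cluster2, center1):
    # Criterion from the abstract: for each cluster, the sum of squared
    # distances of its points to the cluster center, weighted by the
    # cluster's cardinality. center1 is given as input; the second
    # center is the mean of the elements of cluster2.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    dim = len(center1)
    center2 = tuple(sum(p[i] for p in cluster2) / len(cluster2)
                    for i in range(dim))
    return (len(cluster1) * sum(sq_dist(p, center1) for p in cluster1)
            + len(cluster2) * sum(sq_dist(p, center2) for p in cluster2))
```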
NASA Astrophysics Data System (ADS)
Prihandini, Rafiantika M.; Agustin, I. H.; Dafik
2018-04-01
In this paper we consider simple, nontrivial graphs. A graph G is called an (a, d)-P_2 ▷ H-antimagic total graph if there exists a bijective function g: V(G) ∪ E(G) → {1, 2, …, |V(G)| + |E(G)|} such that, for every subgraph P_2 ▷ H of G isomorphic to H, the total P_2 ▷ H-weights W(P_2 ▷ H) = Σ_{v ∈ V(P_2 ▷ H)} g(v) + Σ_{e ∈ E(P_2 ▷ H)} g(e) form an arithmetic sequence {a, a + d, a + 2d, …, a + (n − 1)d}, where a and d are positive integers and n is the number of all subgraphs isomorphic to H. Our paper establishes the existence of a super (a, d)-P_2 ▷ H-antimagic total labeling for the comb product graph operation G = L ▷ H, where L is a (b, d*)-edge antimagic vertex labeling graph and H is a connected graph.
NASA Astrophysics Data System (ADS)
Sun, Jingliang; Liu, Chunsheng
2018-01-01
In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via an adaptive dynamic programming technique. In addition, a suitable non-quadratic functional is utilised to encode the control constraints into the differential game problem. A single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of the associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the weight estimation error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for both the interceptor and the target.
On the total irregularity strength of caterpillar with each internal vertex has degree three
NASA Astrophysics Data System (ADS)
Indriati, Diari; Rosyida, Isnaini; Widodo
2018-04-01
Let G be a simple, connected and undirected graph with vertex set V and edge set E. A total k-labeling f: V ∪ E → {1, 2, …, k} is called a totally irregular total k-labeling if any two distinct vertices have distinct weights and any two distinct edges have distinct weights. The weight of a vertex x is defined as wt(x) = f(x) + Σ_{xy ∈ E} f(xy), while the weight of an edge xy is wt(xy) = f(x) + f(xy) + f(y). The minimum k for which G has a totally irregular total k-labeling is called the total irregularity strength of G, denoted ts(G). This paper investigates totally irregular total k-labelings and determines the total irregularity strengths of caterpillar graphs in which each internal vertex between two stars has degree three. The results are ts(S_{n,3,n}) = ⌈2n/2⌉, ts(S_{n,3,3,n}) = ⌈(2n+1)/2⌉ and ts(S_{n,3,3,3,n}) = ⌈(2n+2)/2⌉ for n > 4.
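The defining property is straightforward to check for a concrete labeling; the following is a sketch of the definition itself (not the paper's construction), with the graph given as explicit vertex and edge lists:

```python
def is_totally_irregular(vertices, edges, f):
    """Check a total labeling f (a dict over vertices and edge tuples) for
    the totally-irregular property: all vertex weights wt(x) = f(x) + sum
    of incident edge labels are pairwise distinct, and likewise all edge
    weights wt(xy) = f(x) + f(xy) + f(y)."""
    def wt_vertex(x):
        return f[x] + sum(f[e] for e in edges if x in e)
    def wt_edge(e):
        x, y = e
        return f[x] + f[e] + f[y]
    vw = [wt_vertex(v) for v in vertices]
    ew = [wt_edge(e) for e in edges]
    return len(set(vw)) == len(vw) and len(set(ew)) == len(ew)
```

For the path on three vertices, the labeling with all vertex labels 1 and edge labels 1 and 2 is totally irregular with k = 2, matching the known value ts(P_3) = 2.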
Physical condition for elimination of ambiguity in conditionally convergent lattice sums
NASA Astrophysics Data System (ADS)
Young, K.
1987-02-01
The conditional convergence of the lattice sum defining the Madelung constant gives rise to an ambiguity in its value. It is shown that this ambiguity is related, through a simple and universal integral, to the average charge density on the crystal surface. The physically correct value is obtained by setting the charge density to zero. A simple and universally applicable formula for the Madelung constant is derived as a consequence. It consists of adding up dipole-dipole energies together with a nontrivial correction term.
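The ordering ambiguity of conditionally convergent sums is easy to demonstrate numerically with the alternating harmonic series (a generic illustration of the phenomenon, not the Madelung lattice sum itself):

```python
import math

# The alternating harmonic series 1 - 1/2 + 1/3 - ... converges
# conditionally to ln 2; rearranging the same terms (two positive, one
# negative) changes the limit to (3/2) ln 2 -- the same ordering
# sensitivity that afflicts conditionally convergent lattice sums.
def natural_order(n_terms):
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    # blocks of (two odd-denominator positives, one even-denominator negative)
    s, odd, even = 0.0, 1, 2
    for _ in range(n_blocks):
        s += 1.0 / odd; odd += 2
        s += 1.0 / odd; odd += 2
        s -= 1.0 / even; even += 2
    return s
```

For the Madelung constant the analogous choice is the order in which shells of ions are summed; the paper's prescription removes the ambiguity by fixing the surface charge density to zero.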
Zhao, Tanfeng; Zhang, Qingyou; Long, Hailin; Xu, Lu
2014-01-01
In order to explore atomic asymmetry and molecular chirality in 2D space, benzenoids composed of 3 to 11 hexagons in 2D space were enumerated in our laboratory. These benzenoids are regarded as planar connected polyhexes and have no internal holes; that is, their internal regions are filled with hexagons. The produced dataset was composed of 357,968 benzenoids, including more than 14 million atoms. Rather than simply labeling the huge number of atoms as being either symmetric or asymmetric, this investigation aims at exploring a quantitative graph theoretical descriptor of atomic asymmetry. Based on the particular characteristics in the 2D plane, we suggested the weighted atomic sum as the descriptor of atomic asymmetry. This descriptor is measured by circulating around the molecule going in opposite directions. The investigation demonstrates that the weighted atomic sums are superior to the previously reported quantitative descriptor, atomic sums. The investigation of quantitative descriptors also reveals that the most asymmetric atom is in a structure with a spiral ring with the convex shape going in clockwise direction and concave shape going in anticlockwise direction from the atom. Based on weighted atomic sums, a weighted F index is introduced to quantitatively represent molecular chirality in the plane, rather than merely regarding benzenoids as being either chiral or achiral. By validating with enumerated benzenoids, the results indicate that the weighted F indexes were in accordance with their chiral classification (achiral or chiral) over the whole benzenoids dataset. Furthermore, weighted F indexes were superior to previously available descriptors. Benzenoids possess a variety of shapes and can be extended to practically represent any shape in 2D space—our proposed descriptor has thus the potential to be a general method to represent 2D molecular chirality based on the difference between clockwise and anticlockwise sums around a molecule. PMID:25032832
Character expansion methods for matrix models of dually weighted graphs
NASA Astrophysics Data System (ADS)
Kazakov, Vladimir A.; Staudacher, Matthias; Wynter, Thomas
1996-04-01
We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating the equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problems of phase transitions from random to flat lattices.
The terminator "toy" chemistry test: A simple tool to assess errors in transport schemes
Lauritzen, P. H.; Conley, A. J.; Lamarque, J. -F.; ...
2015-05-04
This test extends the evaluation of transport schemes from prescribed advection of inert scalars to reactive species. The test consists of transporting two interacting chemical species in the Nair and Lauritzen 2-D idealized flow field. The sources and sinks for these two species are given by a simple, but non-linear, "toy" chemistry that represents combination (X + X → X_2) and dissociation (X_2 → X + X). This chemistry mimics photolysis-driven conditions near the solar terminator, where strong gradients in the spatial distribution of the species develop near its edge. Despite the large spatial variations in each species, the weighted sum X_T = X + 2X_2 should always be preserved at spatial scales at which molecular diffusion is excluded. The terminator test demonstrates how well the advection-transport scheme preserves linear correlations. Chemistry-transport (physics-dynamics) coupling can also be studied with this test. Examples of the consequences of this test are shown for illustration.
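The conservation of the weighted sum under the toy chemistry can be checked in a few lines. The sketch below assumes simple mass-action rates with illustrative rate constants k1 and k2 (the test specification does not fix them); the point is that X_T = X + 2X_2 is conserved exactly by the chemistry, even under crude time stepping:

```python
# Forward-Euler integration of the "toy" chemistry X + X -> X2 (rate
# k1*X^2) and X2 -> X + X (rate k2*X2).  Each dimerization consumes two X
# and produces one X2, so dX_T = -2r + 2r = 0 at every step.
def integrate(x, x2, k1=4.0, k2=1.0, dt=1e-3, steps=5000):
    for _ in range(steps):
        r = k1 * x * x - k2 * x2      # net dimerization rate
        x += dt * (-2.0 * r)          # two X consumed per X2 formed
        x2 += dt * r
    return x, x2
```

Any advection error that breaks the linear correlation between X and X_2 shows up directly as spatial noise in X_T, which is what the terminator test measures.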
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.
Sidey, Vasyl
2009-06-01
Systematic variations of the bond-valence sums calculated from the poorly determined bond-valence parameters [Sidey (2008), Acta Cryst. B64, 515-518] have been illustrated using a simple graphical scheme.
Slack, Robert J; Russell, Linda J; Barton, Nick P; Weston, Cathryn; Nalesso, Giovanna; Thompson, Sally-Anne; Allen, Morven; Chen, Yu Hua; Barnes, Ashley; Hodgson, Simon T; Hall, David A
2013-01-01
Chemokine receptor antagonists appear to access two distinct binding sites on different members of this receptor family. One class of CCR4 antagonists has been suggested to bind to a site accessible from the cytoplasm while a second class did not bind to this site. In this report, we demonstrate that antagonists representing a variety of structural classes bind to two distinct allosteric sites on CCR4. The effects of pairs of low-molecular weight and/or chemokine CCR4 antagonists were evaluated on CCL17- and CCL22-induced responses of human CCR4+ T cells. This provided an initial grouping of the antagonists into sets which appeared to bind to distinct binding sites. Binding studies were then performed with radioligands from each set to confirm these groupings. Some novel receptor theory was developed to allow the interpretation of the effects of the antagonist combinations. The theory indicates that, generally, the concentration-ratio of a pair of competing allosteric modulators is maximally the sum of their individual effects while that of two modulators acting at different sites is likely to be greater than their sum. The low-molecular weight antagonists could be grouped into two sets on the basis of the functional and binding experiments. The antagonistic chemokines formed a third set whose behaviour was consistent with that of simple competitive antagonists. These studies indicate that there are two allosteric regulatory sites on CCR4. PMID:25505571
A new adaptively central-upwind sixth-order WENO scheme
NASA Astrophysics Data System (ADS)
Huang, Cong; Chen, Li Li
2018-03-01
In this paper, we propose a new sixth-order WENO scheme for solving one-dimensional hyperbolic conservation laws. The new WENO reconstruction has three properties: (1) it is central in smooth regions for low dissipation, and upwind near discontinuities for numerical stability; (2) it is a convex combination of four linear reconstructions, in which one linear reconstruction is sixth order and the others are third order; (3) its linear weights can be any positive numbers, with the sole requirement that they sum to one. Furthermore, we propose a simple smoothness indicator for the sixth-order linear reconstruction; this smoothness indicator not only distinguishes smooth regions from discontinuities exactly, but also reduces the computational cost, making it more efficient than the classical one.
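Property (3), arbitrary positive linear weights normalized to sum to one and combined nonlinearly through smoothness indicators, can be sketched with a generic WENO-style weight construction (a standard textbook form, not this paper's exact scheme):

```python
def weno_weights(linear_weights, smoothness, eps=1e-6):
    """Nonlinear WENO weights from positive linear weights d_k (any
    positive numbers, normalized here so they sum to one) and smoothness
    indicators beta_k.  In smooth regions (equal beta_k) the nonlinear
    weights reduce to the linear ones; near a discontinuity a large
    beta_k drives that stencil's weight toward zero."""
    total_d = sum(linear_weights)
    d = [w / total_d for w in linear_weights]            # enforce sum-to-one
    alpha = [dk / (eps + bk) ** 2 for dk, bk in zip(d, smoothness)]
    s = sum(alpha)
    return [a / s for a in alpha]
```

The convex combination then blends the four linear reconstructions with these weights, remaining high order where the solution is smooth and essentially upwind where it is not.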
Japanese professional nurses spend unnecessarily long time doing nursing assistants' tasks.
Kudo, Yasushi; Yoshimura, Emiko; Shahzad, Machiko Taruzuka; Shibuya, Akitaka; Aizawa, Yoshiharu
2012-09-01
In environments in which professional nurses do simple tasks, e.g., laundry, cleaning, and waste disposal, they cannot concentrate on technical jobs that utilize their expertise to its fullest benefit. In Japan in particular, the nursing shortage is a serious problem. If professional nurses spend their time on any of these simple tasks, the tasks should preferentially be allocated to nursing assistants. Because there has been no descriptive study of the amount of time Japanese professional nurses spend doing such simple tasks during their working time, their actual conditions remain unclear. Professional nurses recorded their total working time and the time they spent doing such simple tasks during the week of the survey period. The time each respondent spent doing one or more simple tasks during that week was summed, as was his or her working time. Subsequently, the percentage of working time he or she spent doing any of those tasks was calculated. A total of 1,086 respondents in 19 hospitals that had 87 to 376 beds were analyzed (response rate: 53.3%). The average time (SD) that respondents spent doing those simple tasks and their total working time were 2.24 (3.35) hours and 37.48 (10.88) hours, respectively. The average percentage (SD) of working time spent on the simple tasks was 6.00% (8.39). Hospital administrators must decrease this percentage. Proper working environments in which professional nurses can concentrate more on their technical jobs must be created.
siMS Score: Simple Method for Quantifying Metabolic Syndrome.
Soldatovic, Ivan; Vukovic, Rade; Culafic, Djordje; Gajic, Milan; Dimitrijevic-Sreckovic, Vesna
2016-01-01
To evaluate the siMS score and siMS risk score, novel continuous metabolic syndrome scores, as methods for quantification of metabolic status and risk. The siMS score was calculated using the formula: siMS score = 2*Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130 − HDL/1.02 (for male subjects) or HDL/1.28 (for female subjects). The siMS risk score was calculated using the formula: siMS risk score = siMS score * age/45 (males) or age/50 (females) * family history of cardio/cerebrovascular events (event = 1.2, no event = 1). A sample of 528 obese and non-obese participants was used to validate the siMS score and siMS risk score. Scores calculated as the sum of z-scores (each component of metabolic syndrome regressed with age and gender) and the sum of scores derived from principal component analysis (PCA) were used for evaluation of the siMS score. Variants were made by replacing glucose with HOMA in the calculations. The Framingham score was used for evaluation of the siMS risk score. Correlation of the siMS score with the sum of z-scores and with the weighted sum of PCA factors was high (r = 0.866 and r = 0.822, respectively). Correlation between the siMS risk score and the log-transformed Framingham score was medium to high for age groups 18+, 30+ and 35+ (0.835, 0.707 and 0.667, respectively). The siMS score and siMS risk score showed high correlation with the more complex scores. Demonstrated accuracy together with superior simplicity and the ability to evaluate and follow up individual patients makes the siMS and siMS risk scores very convenient for use in clinical practice and research.
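The two formulas transcribe directly into code. The sketch below assumes the units of the original study (mmol/L for glucose and lipids, mmHg for systolic pressure, matching length units for waist and height); the function names are illustrative:

```python
def sims_score(waist, height, gly, tg, ta_systolic, hdl, male):
    """siMS score as given in the abstract; the HDL divisor is 1.02 for
    men and 1.28 for women."""
    hdl_div = 1.02 if male else 1.28
    return (2 * waist / height + gly / 5.6 + tg / 1.7
            + ta_systolic / 130 - hdl / hdl_div)

def sims_risk_score(score, age, male, family_history):
    """siMS risk score: the age divisor is 45 for men and 50 for women;
    the family-history factor is 1.2 with a prior cardio/cerebrovascular
    event and 1 otherwise."""
    age_div = 45 if male else 50
    return score * (age / age_div) * (1.2 if family_history else 1.0)
```

Each component is scaled by its metabolic syndrome cut-off, so a value near the diagnostic threshold contributes roughly one unit per component.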
Maximizing Gateway-Course Improvement by Making the Whole Greater than the Sum of the Parts
ERIC Educational Resources Information Center
Koch, Andrew K.; Prystowsky, Richard J.; Scinta, Tony
2017-01-01
Drawing on systems theory, this chapter uses two different institutional examples to demonstrate the benefits of combining gateway-course improvement initiatives with other student success efforts so that the combined approach makes the whole greater than the simple sum of the pieces.
Diagonalizing Tensor Covariants, Light-Cone Commutators, and Sum Rules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo, C. Y.
We derive fixed-mass sum rules for virtual Compton scattering in the forward direction. We use the methods of both Dicus, Jackiw, and Teplitz (for the absorptive parts) and Heimann, Hey, and Mandula (for the real parts). We find a set of tensor covariants such that the corresponding scalar amplitudes are proportional to simple t-channel parity-conserving helicity amplitudes. We give a relatively complete discussion of the convergence of the sum rules in a Regge model.
NASA Astrophysics Data System (ADS)
Hetényi, Balázs
2014-03-01
The Drude weight, the quantity which distinguishes metals from insulators, is proportional to the second derivative of the ground state energy with respect to a flux at zero flux. The same expression also appears in the definition of the Meissner weight, the quantity which indicates superconductivity, as well as in the definition of non-classical rotational inertia of bosonic superfluids. It is shown that the difference between these quantities depends on the interpretation of the average momentum term, which can be understood as the expectation value of the total momentum (Drude weight), the sum of the expectation values of single momenta (rotational inertia of a superfluid), or the sum over expectation values of momentum pairs (Meissner weight). This distinction appears naturally when the current from which the particular transport quantity is derived is cast in terms of shift operators.
Measures for assessing architectural speech security (privacy) of closed offices and meeting rooms.
Gover, Bradford N; Bradley, John S
2004-12-01
Objective measures were investigated as predictors of the speech security of closed offices and rooms. A new signal-to-noise type measure is shown to be a superior indicator for security than existing measures such as the Articulation Index, the Speech Intelligibility Index, the ratio of the loudness of speech to that of noise, and the A-weighted level difference of speech and noise. This new measure is a weighted sum of clipped one-third-octave-band signal-to-noise ratios; various weightings and clipping levels are explored. Listening tests had 19 subjects rate the audibility and intelligibility of 500 English sentences, filtered to simulate transmission through various wall constructions, and presented along with background noise. The results of the tests indicate that the new measure is highly correlated with sentence intelligibility scores and also with three security thresholds: the threshold of intelligibility (below which speech is unintelligible), the threshold of cadence (below which the cadence of speech is inaudible), and the threshold of audibility (below which speech is inaudible). The ratio of the loudness of speech to that of noise, and simple A-weighted level differences are both shown to be well correlated with these latter two thresholds (cadence and audibility), but not well correlated with intelligibility.
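The new measure is a weighted sum of clipped band signal-to-noise ratios; a minimal sketch of that construction follows. The clipping limits and weights here are placeholders, since the paper explores various weightings and clipping levels rather than fixing one:

```python
def speech_security_index(speech_levels, noise_levels, weights,
                          clip_floor=-32.0, clip_ceiling=0.0):
    """Weighted sum of clipped one-third-octave-band signal-to-noise
    ratios (all levels in dB).  Each band SNR is clipped to the range
    [clip_floor, clip_ceiling] before weighting, so one loud band cannot
    dominate the index."""
    total = 0.0
    for s, n, w in zip(speech_levels, noise_levels, weights):
        snr = s - n                                      # band SNR in dB
        snr = max(clip_floor, min(clip_ceiling, snr))    # clip the band SNR
        total += w * snr
    return total
```

Inputs would be one-third-octave-band levels of the transmitted speech and the background noise at the listening position.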
The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.
Qu, Shaojian; Ji, Ying
2016-01-01
In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model in which each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights on its objectives and wishes to minimize its maximum weighted-sum objective, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently in real-world applications.
The Approximation of Two-Mode Proximity Matrices by Sums of Order-Constrained Matrices.
ERIC Educational Resources Information Center
Hubert, Lawrence; Arabie, Phipps
1995-01-01
A least-squares strategy is proposed for representing a two-mode proximity matrix as an approximate sum of a small number of matrices that satisfy certain simple order constraints on their entries. The primary class of constraints considered defines Q-forms for particular conditions in a two-mode matrix. (SLD)
Direct Sum Decomposition of Groups
ERIC Educational Resources Information Center
Thaheem, A. B.
2005-01-01
Direct sum decomposition of Abelian groups appears in almost all textbooks on algebra for undergraduate students. This concept plays an important role in group theory. One simple example of this decomposition is obtained by using the kernel and range of a projection map on an Abelian group. The aim in this pedagogical note is to establish a direct…
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
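The arithmetic described in the regulation can be sketched as follows (assuming a chronological list of monthly CPI-U values and a hypothetical helper name; this is an illustration of the stated calculation, not the regulatory text):

```python
def annual_limitation(cpi, filing_index):
    """39 CFR 3010.21-style annual limitation: the simple average of the
    12 most recent monthly CPI values before the filing (Recent Average),
    the simple average of the 12 months before those (Base Average), and
    then Recent Average / Base Average - 1."""
    recent = cpi[filing_index - 12:filing_index]
    base = cpi[filing_index - 24:filing_index - 12]
    recent_avg = sum(recent) / 12
    base_avg = sum(base) / 12
    return recent_avg / base_avg - 1
```

With a flat index of 100 for the base year and 102 for the recent year, the limitation comes out to 2 percent, as expected.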
Auditory alert systems with enhanced detectability
NASA Technical Reports Server (NTRS)
Begault, Durand R. (Inventor)
2008-01-01
Methods and systems for distinguishing an auditory alert signal from a background of one or more non-alert signals. In a first embodiment, a prefix signal, associated with an existing alert signal, is provided that has a signal component in each of three or more selected frequency ranges, with each signal component at a level at least 3-10 dB above an estimated background (non-alert) level in that frequency range. The alert signal may be chirped within one or more frequency bands. In another embodiment, an alert signal moves, continuously or discontinuously, from one location to another over a short time interval, introducing a perceived spatial modulation or jitter. In another embodiment, a weighted sum of background signals adjacent to each ear is formed, and the weighted sum is delivered to each ear as a uniform background; a distinguishable alert signal is presented on top of this weighted-sum signal at one ear, or distinguishable first and second alert signals are presented at the two ears of a subject.
Multiple Interactive Pollutants in Water Quality Trading
NASA Astrophysics Data System (ADS)
Sarang, Amin; Lence, Barbara J.; Shamsai, Abolfazl
2008-10-01
Efficient environmental management calls for the consideration of multiple pollutants, for which two main types of transferable discharge permit (TDP) program have been described: separate permits that manage each pollutant individually in separate markets, with each permit based on the quantity of the pollutant or its environmental effects, and weighted-sum permits that aggregate several pollutants as a single commodity to be traded in a single market. In this paper, we perform a mathematical analysis of TDP programs for multiple pollutants that jointly affect the environment (i.e., interactive pollutants) and demonstrate the practicality of this approach for cost-efficient maintenance of river water quality. For interactive pollutants, the relative weighting factors are functions of the water quality impacts, marginal damage function, and marginal treatment costs at optimality. We derive the optimal set of weighting factors required by this approach for important scenarios for multiple interactive pollutants and propose using an analytical elasticity of substitution function to estimate damage functions for these scenarios. We evaluate the applicability of this approach using a hypothetical example that considers two interactive pollutants. We compare the weighted-sum permit approach for interactive pollutants with individual permit systems and TDP programs for multiple additive pollutants. We conclude by discussing practical considerations and implementation issues that result from the application of weighted-sum permit programs.
Minimizing the Sum of Completion Times with Resource Dependent Times
NASA Astrophysics Data System (ADS)
Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe
2008-10-01
We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria: the first is the sum of completion times and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, up to this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as laptop computers, phones and GPS devices in order to prolong their battery duration.
On the Hardness of Subset Sum Problem from Different Intervals
NASA Astrophysics Data System (ADS)
Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke
The subset sum problem, often called the knapsack problem, is known to be NP-hard, and there are several cryptosystems based on it. Assuming an oracle for the shortest vector problem of a lattice, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the "density" of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals, and analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when the data size of public keys is reduced.
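For reference, the problem itself admits a classic pseudo-polynomial dynamic program (a textbook baseline, unrelated to the lattice-based attacks analyzed in the paper):

```python
def subset_sum_exists(weights, target):
    """Pseudo-polynomial DP for subset sum: reachable[s] is True iff some
    subset of the (positive integer) weights sums exactly to s."""
    reachable = [False] * (target + 1)
    reachable[0] = True
    for w in weights:
        for s in range(target, w - 1, -1):  # downward: each item used once
            if reachable[s - w]:
                reachable[s] = True
    return reachable[target]
```

Its O(n * target) running time is what makes cryptographic instances, with weights of hundreds of bits, far out of reach for exhaustive methods and motivates density-based lattice attacks instead.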
Warburton, William K.; Momayezi, Michael
2006-06-20
A method and apparatus for processing step-like output signals (primary signals) generated by non-ideal, for example, nominally single-pole ("N-1P ") devices. An exemplary method includes creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point, summing the secondary signals reaching the signal summation point after propagating along the signal paths to provide a summed signal, performing a filtering or delaying operation in at least one of said signal paths so that the secondary signals reaching said summing point have a defined time correlation with respect to one another, applying a set of weighting coefficients to the secondary signals propagating along said signal paths, and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
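The core fitting step, choosing weights so that a weighted sum of basis photos matches a target, can be sketched as an ordinary least-squares problem. This is a simplification: the paper's optimization is guided by interactive sketching and selects only a small subset of photos, whereas plain least squares uses them all and allows negative weights:

```python
import numpy as np

def fit_light_weights(photos, target):
    """Least-squares weights w so that sum_i w_i * photo_i best
    approximates the target image.  Each photo is flattened to a column
    of the design matrix A, and w solves min ||A w - target||."""
    A = np.stack([p.ravel() for p in photos], axis=1)  # pixels x photos
    w, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return w
```

In practice a sparsity or non-negativity constraint would keep the relit result physically plausible; the least-squares core is the same.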
Grassi, Mario; Nucera, Andrea
2010-01-01
The objective of this study was twofold: 1) to confirm the hypothesized eight scales and two component summaries of the Short Form 36 Health Survey (SF-36) questionnaire, and 2) to evaluate the performance of two alternative measures relative to the original physical component summary (PCS) and mental component summary (MCS). We performed principal component analysis (PCA) based on 35 items, after optimal scaling via multiple correspondence analysis (MCA), and subsequently on eight scales, after standard summative scoring. Item-based summary measures were planned. Data from the European Community Respiratory Health Survey II follow-up of 8854 subjects from 25 centers were analyzed to cross-validate the original and the novel PCS and MCS. Overall, the scale- and item-based comparison indicated that the SF-36 scales and summaries meet the supposed dimensionality. However, the vitality, social functioning, and general health items did not fit the data optimally. The novel measures, derived a posteriori by unit rule from an oblique (correlated) MCA/PCA solution, are simple item sums or weighted scale sums where the weights are the raw scale ranges. These item-based scores yielded consistent scale-summary results for outlier profiles, with the expected known-group differences validity. We were able to confirm the hypothesized dimensionality of eight scales and two summaries of the SF-36. The alternative scoring reaches at least the same required standards as the original scoring. In addition, it can reduce the item-scale inconsistencies without loss of predictive validity.
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted-sum-based image fusion (IF) algorithm is proposed to express an HDR scene with a single high-quality image. This method mainly includes three parts. First, two image features, i.e., gradient and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process reduces the noise in the initial weight maps and preserves more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight-map refinement, which together provide accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
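The three-part pipeline in this abstract (feature-based initial weights, filter-based refinement, per-pixel weighted sum) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' code: it substitutes a plain box filter for the guided filter and assumes grayscale exposures normalized to [0, 1].

```python
import numpy as np

def box_smooth(w, r=2):
    # Simple box-filter smoothing: a crude stand-in for the guided filter
    # used in the paper to refine (denoise) the weight maps.
    k = 2 * r + 1
    pad = np.pad(w, r, mode="edge")
    out = np.zeros_like(w)
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def fuse_exposures(images, eps=1e-12):
    # images: list of grayscale exposures in [0, 1].
    weights = []
    for im in images:
        gy, gx = np.gradient(im)
        gradient = np.hypot(gx, gy)                      # detail measure
        exposedness = np.exp(-((im - 0.5) ** 2) / 0.08)  # favor mid-tones
        weights.append(box_smooth(gradient + exposedness))
    w = np.stack(weights)
    w /= w.sum(axis=0) + eps                 # normalize weights per pixel
    return (w * np.stack(images)).sum(axis=0)  # weighted sum of sources
```

Because the per-pixel weights are non-negative and sum to one, the fused image is a pointwise convex combination of the source exposures.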
Real time pipelined system for forming the sum of products in the processing of video data
NASA Technical Reports Server (NTRS)
Wilcox, Brian (Inventor)
1988-01-01
A 3-by-3 convolver utilizes 9 binary arithmetic units connected in cascade for multiplying 12-bit binary pixel values P_i, which are positive or two's complement binary numbers, by 5-bit magnitude (plus sign) weights W_i, which may be positive or negative. The weights, including the sign bits, are stored in registers. For a negative weight, the one's complement of the pixel value to be multiplied is formed at each unit by a bank of 17 exclusive-OR gates G_i under control of the sign of the corresponding weight W_i, and a correction is made by adding the sum of the absolute values of all the negative weights for each 3-by-3 kernel. Since this correction value remains constant as long as the weights are constant, it can be precomputed and stored in a register as a value to be added to the product PW of the first arithmetic unit.
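The correction trick the patent describes, multiplying the one's complement of the pixel for negative weights and adding a precomputed constant equal to the sum of the negative weights' magnitudes, follows from the two's complement identity ~p = -p - 1. A small sketch, with plain Python integers standing in for the 12-bit hardware datapath:

```python
def conv_kernel(pixels, weights):
    # Reference: direct sum of products over one 3-by-3 kernel.
    return sum(p * w for p, w in zip(pixels, weights))

def conv_kernel_ones_complement(pixels, weights):
    # Hardware-style evaluation: for a negative weight, multiply the
    # one's complement of the pixel (~p = -p - 1) by the weight
    # magnitude; since (~p)*|w| = -p*|w| - |w|, adding |w| per negative
    # weight recovers the true product. The per-kernel correction is
    # therefore the (precomputable) sum of |w| over negative weights.
    acc = 0
    correction = 0
    for p, w in zip(pixels, weights):
        if w < 0:
            acc += (~p) * (-w)
            correction += -w
        else:
            acc += p * w
    return acc + correction
```

The identity holds for any integer pixel value, positive or negative, which is why the hardware only needs XOR gates (for the complement) plus one constant add per kernel.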
The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions
Qu, Shaojian; Ji, Ying
2016-01-01
In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model in which each player has more than one competing objective. Our “worst-case weighted multi-objective game” model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call “robust-weighted Nash equilibrium”. We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and better suited to real-world applications. PMID:26820512
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hishida, T.; Ohbayashi, K.; Saitoh, T.
2013-01-28
Core-level electronic structure of La₁₋ₓSrₓMnO₃ has been studied by x-ray photoemission spectroscopy (XPS). We first report, by conventional XPS, the well-screened shoulder structure in the Mn 2p3/2 peak, which had so far been observed only by hard x-ray photoemission spectroscopy. Multiple-peak analysis revealed that the Mn⁴⁺ spectral weight was not proportional to the nominal hole concentration x, indicating that a simple Mn³⁺/Mn⁴⁺ intensity-ratio analysis may result in a wrong quantitative elemental analysis. The considerable weight of the shoulder at x = 0.0, and the fact that the shoulder weight even decreased slightly from x = 0.2 to 0.4, were not compatible with the idea that this weight simply represents metallic behavior. Further analysis found that the whole Mn 2p3/2 peak can be decomposed into four portions: the Mn⁴⁺, the (nominal) Mn³⁺, the shoulder, and another spectral weight located almost at the Mn³⁺ location. We concluded that this weight represents the well-screened final state at Mn⁴⁺ sites, whereas the shoulder is that of the Mn³⁺ states. We found that the sum of these two spectral weights has an empirical relationship to the conductivity evolution with x.
Scaling exponent and dispersity of polymers in solution by diffusion NMR.
Williamson, Nathan H; Röding, Magnus; Miklavcic, Stanley J; Nydén, Magnus
2017-05-01
Molecular mass distribution measurements by pulsed gradient spin echo nuclear magnetic resonance (PGSE NMR) spectroscopy currently require prior knowledge of scaling parameters to convert from polymer self-diffusion coefficient to molecular mass. Reversing the problem, we utilize the scaling relation as prior knowledge to uncover the scaling exponent from within the PGSE data. Thus, the scaling exponent (a measure of polymer conformation and solvent quality) and the dispersity (Mw/Mn) are obtainable from one simple PGSE experiment. The method utilizes constraints and parametric distribution models in a two-step fitting routine involving first the mass-weighted signal and second the number-weighted signal. The method is developed using lognormal and gamma distribution models and tested on experimental PGSE attenuation of the terminal methylene signal and on the sum of all methylene signals of polyethylene glycol in D2O. Scaling exponent and dispersity estimates agree with known values in the majority of instances, leading to the potential application of the method to polymers for which characterization is not possible with alternative techniques. Copyright © 2017 Elsevier Inc. All rights reserved.
The giant Gamow-Teller resonance states
NASA Astrophysics Data System (ADS)
Suzuki, Toshio
1982-04-01
The mean energy of the giant Gamow-Teller resonance state (GTS) is studied, defined in terms of the non-energy-weighted and the linearly energy-weighted sums of the strengths for Σᵢ₌₁ᴬ τᵢ⁻σᵢ⁻. Using Bohr and Mottelson's hamiltonian with the ξl·σ force, the difference between the mean energies of the GTS and the isobaric analog state (IAS) is expressed as E_GTS − E_IAS ≈ 2⟨π|Σᵢ₌₁ᴬ ξᵢlᵢ·σᵢ|π⟩/(3T₀) − 4(κτ − κστ)T₀. The observed energy systematics is well explained by κτ − κστ ≈ 4/A MeV. The relationship between the mean energies and the excitation energies of the collective states in the random phase approximation for charge-exchange excitations is discussed in a simple model. From the excitation-energy systematics of the GTS, the values of κστ and the Migdal parameter g' are estimated to be about κστ = (16-24)/A MeV and g' = 0.49-0.72, respectively.
Interpretation of body residues for natural resources damage assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubitz, J.A.; Markarian, R.K.; Lauren, D.J.
1995-12-31
A 28-day caged mussel study using Corbicula sp. was conducted on Sugarland Run and the Potomac River following a spill of No. 2 fuel oil. In addition, resident Corbicula sp. from the Potomac River were sampled at the beginning and end of the study. The summed body residues of 39 polycyclic aromatic hydrocarbons (PAHs) ranged from 0.56 to 41 mg/kg dry weight within the study area. The summed body residues of the 18 PAHs that are routinely measured in the National Oceanic and Atmospheric Administration Status and Trends Program (NST) ranged from 0.5 to 20 mg/kg dry weight for mussels in this study. These data were similar to summed PAH concentrations reported in the NST for mussels from a variety of US coastal waters, which ranged from 0.4 to 24.5 mg/kg dry weight. This paper will discuss interpretation of PAH residues in Corbicula sp. to determine the spatial extent of the area affected by the oil spill. The toxicological significance of the PAH residues in both resident and caged mussels will also be presented.
Cho, Jae Heon; Ha, Sung Ryong
2010-03-15
An influence coefficient algorithm and a genetic algorithm (GA) were introduced to develop an automatic calibration model for QUAL2K, the latest version of the QUAL2E river and stream water-quality model. The influence coefficient algorithm was used for parameter optimization in unsteady-state, open-channel flow. The GA, used to solve the optimization problem, is very simple and comprehensible yet applicable to complicated mathematical problems, for which it can find the global-optimum solution quickly and effectively. The previously established model QUAL2Kw was used for the automatic calibration of QUAL2K. The parameter-optimization method using the influence coefficient and genetic algorithm (POMIG) developed in this study and QUAL2Kw were each applied to the Gangneung Namdaecheon River, which has multiple reaches, and the results of the two models were compared. In the modeling, the river reach was divided into two parts based on considerations of the water-quality and hydraulic characteristics. The calibration results by POMIG showed good correspondence between the calculated and observed values for most of the water-quality variables. In the application of POMIG and QUAL2Kw, relatively large errors were generated between the observed and predicted values for dissolved oxygen (DO) and chlorophyll-a (Chl-a) in the lowest part of the river; therefore, two weighting factors (1 and 5) were applied for DO and Chl-a in the lower river. The sums of the errors for DO and Chl-a with a weighting factor of 5 were slightly lower than with a factor of 1. However, with a weighting factor of 5 the sums of errors for the other water-quality variables were slightly increased in comparison to the case with a factor of 1. Generally, the results of the POMIG were slightly better than those of the QUAL2Kw.
Adopting epidemic model to optimize medication and surgical intervention of excess weight
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2017-01-01
We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases that of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
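A minimal sketch of the kind of model this abstract describes: a three-compartment ODE system integrated by Euler's method, with a participation rate u moving people toward lower-weight compartments and an objective accumulating the weighted sum of the overweight and obese fractions plus intervention cost. All rates, weights, and costs below are invented for illustration; they are not the paper's calibrated values.

```python
def simulate(u, T=50.0, dt=0.1):
    # Hypothetical compartments: normal weight N, overweight S, obese O.
    # a, b are assumed weight-gain rates; the intervention moves people
    # down one compartment at participation rate u.
    N, S, O = 0.5, 0.3, 0.2
    a, b = 0.05, 0.04
    w_s, w_o, cost = 1.0, 2.0, 0.5   # objective weights and unit cost
    J = 0.0
    for _ in range(int(T / dt)):
        dN = -a * N + u * S
        dS = a * N - b * S - u * S + u * O
        dO = b * S - u * O
        N += dN * dt
        S += dS * dt
        O += dO * dt
        J += (w_s * S + w_o * O + cost * u) * dt  # weighted sum + cost
    return N, S, O, J
```

Scanning `simulate(u)` over a grid of u values is the crudest way to locate the optimal fixed participation rate; the paper instead derives a time-varying optimum u*(t).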
Fujita, Ichiro; Doi, Takahiro
2016-06-19
Binocular disparity is detected in the primary visual cortex by a process similar to calculation of local cross-correlation between left and right retinal images. As a consequence, correlation-based neural signals convey information about false disparities as well as the true disparity. The false responses in the initial disparity detectors are eliminated at later stages in order to encode only disparities of the features correctly matched between the two eyes. For a simple stimulus configuration, a feed-forward nonlinear process can transform the correlation signal into the match signal. For human observers, depth judgement is determined by a weighted sum of the correlation and match signals rather than depending solely on the latter. The relative weight changes with spatial and temporal parameters of the stimuli, allowing adaptive recruitment of the two computations under different visual circumstances. A full transformation from correlation-based to match-based representation occurs at the neuronal population level in cortical area V4 and manifests in single-neuron responses of inferior temporal and posterior parietal cortices. Neurons in area V5/MT represent disparity in a manner intermediate between the correlation and match signals. We propose that the correlation and match signals in these areas contribute to depth perception in a weighted, parallel manner. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269600
Knutsen, Helle K; Kvalem, Helen E; Thomsen, Cathrine; Frøshaug, May; Haugen, Margaretha; Becher, Georg; Alexander, Jan; Meltzer, Helle M
2008-02-01
This study investigates dietary exposure and serum levels of polybrominated diphenyl ethers (PBDEs) and hexabromocyclododecane (HBCD) in a group of Norwegians (n = 184) with a wide range of seafood consumption (4-455 g/day). Mean dietary exposure to Sum 5 PBDEs (1.5 ng/kg body weight/day) is among the highest reported. Since concentrations in foods were similar to those found elsewhere in Europe, this may be explained by high seafood consumption among Norwegians. Oily fish was the main dietary contributor both to Sum PBDEs and to the considerably lower HBCD intake (0.3 ng/kg body weight/day). Milk products appeared to contribute most to the BDE-209 intake (1.4 ng/kg body weight/day). BDE-209 and HBCD exposures are based on few food samples and need to be confirmed. Serum levels (mean Sum 7 PBDEs = 5.2 ng/g lipid) and congener patterns (BDE-47 > BDE-153 > BDE-99) were comparable with other European reports. Correlations between individual congeners were higher for the calculated dietary exposure than for serum levels. Further, significant but weak correlations were found between dietary exposure and serum levels for Sum PBDEs, BDE-47, and BDE-28 in males. This indicates that other sources in addition to diet need to be addressed.
Delivering both sum and difference beam distributions to a planar monopulse antenna array
Strassner, II, Bernd H.
2015-12-22
A planar monopulse radar apparatus includes a planar distribution matrix coupled to a planar antenna array having a linear configuration of antenna elements. The planar distribution matrix is responsive to first and second pluralities of weights applied thereto for providing both sum and difference beam distributions across the antenna array.
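The sum and difference distributions mentioned above are, in the simplest case, uniform and antisymmetric weight sets applied to the same linear array; the difference beam then has a null at boresight, which monopulse tracking exploits. A sketch assuming idealized isotropic elements at half-wavelength spacing (not the patent's distribution matrix):

```python
import numpy as np

def array_response(weights, theta, d=0.5):
    # Far-field response of a uniform linear array: weighted sum of
    # per-element phase terms; d is element spacing in wavelengths,
    # theta the angle off boresight in radians.
    n = np.arange(len(weights))
    return np.sum(weights * np.exp(2j * np.pi * d * n * np.sin(theta)))

n_el = 8
sum_w = np.ones(n_el)                                     # sum beam
diff_w = np.r_[np.ones(n_el // 2), -np.ones(n_el // 2)]   # difference beam
```

Near boresight, the ratio of the difference response to the sum response is approximately linear in angle, which is what makes monopulse angle estimation work.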
Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal
2009-01-01
The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
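The phase-two weighting underlying the IPW estimating equations is simple to state: each sampled contribution is divided by its known stratum sampling fraction, giving an unbiased (Horvitz-Thompson) estimate of the cohort total. A toy sketch with invented influence-function contributions; values constant within each stratum make the estimate exact:

```python
def ipw_total(samples_with_fractions):
    # Inverse-probability-weighted estimate of a cohort total: each
    # phase-two sampled contribution v is divided by its known stratum
    # sampling fraction f.
    return sum(v / f for v, f in samples_with_fractions)

# Stratified phase-two sample: cases kept with fraction 1.0,
# controls subsampled with fraction 0.25 (every 4th).
cases = [3.0] * 40       # toy contribution values
controls = [1.0] * 400
sampled = [(v, 1.0) for v in cases] + [(v, 0.25) for v in controls[::4]]
est = ipw_total(sampled)
```

The calibration adjustments discussed in the abstract replace these raw weights with ones tuned so that weighted sums of auxiliary variables reproduce known cohort totals, reducing the design-based variance component.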
A 2-categorical state sum model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baratin, Aristide, E-mail: abaratin@uwaterloo.ca; Freidel, Laurent, E-mail: lfreidel@perimeterinstitute.ca
It has long been argued that higher categories provide the proper algebraic structure underlying state sum invariants of 4-manifolds. This idea has been refined recently, by proposing to use 2-groups and their representations as specific examples of 2-categories. The challenge has been to make these proposals fully explicit. Here, we give a concrete realization of this program. Building upon our earlier work with Baez and Wise on the representation theory of 2-groups, we construct a four-dimensional state sum model based on a categorified version of the Euclidean group. We define and explicitly compute the simplex weights, which may be viewed as a categorified analogue of Racah-Wigner 6j-symbols. These weights solve a hexagon equation that encodes the formal invariance of the state sum under the Pachner moves of the triangulation. This result unravels the combinatorial formulation of the Feynman amplitudes of quantum field theory on flat spacetime proposed in A. Baratin and L. Freidel [Classical Quantum Gravity 24, 2027–2060 (2007)], which was shown to lead after gauge-fixing to Korepanov's invariant of 4-manifolds.
Neyman-Pearson biometric score fusion as an extension of the sum rule
NASA Astrophysics Data System (ADS)
Hube, Jens Peter
2007-04-01
We define the biometric performance invariance under strictly monotonic functions on match scores as normalization symmetry. We use this symmetry to clarify the essential difference between the standard score-level fusion approaches of sum rule and Neyman-Pearson. We then express Neyman-Pearson fusion assuming match scores defined using false acceptance rates on a logarithmic scale. We show that by stating Neyman-Pearson in this form, it reduces to sum rule fusion for ROC curves with logarithmic slope. We also introduce a one parameter model of biometric performance and use it to express Neyman-Pearson fusion as a weighted sum rule.
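A sketch of the normalization step this abstract builds on: mapping each matcher's raw score to −log10 of its empirical false acceptance rate, then fusing the normalized scores with a weighted sum rule. The clamping constant and the toy impostor distribution below are illustrative assumptions, not part of the paper:

```python
import math

def to_log_far(score, impostor_scores):
    # Map a raw match score to -log10 of its empirical false accept
    # rate, i.e., the fraction of impostor scores at or above it.
    n = len(impostor_scores)
    false_accepts = sum(1 for s in impostor_scores if s >= score)
    far = max(false_accepts, 1) / n   # clamp to avoid log(0)
    return -math.log10(far)

def fuse(normalized_scores, weights):
    # Weighted sum rule applied to the normalized (log-FAR) scores.
    return sum(w * s for w, s in zip(weights, normalized_scores))
```

On this scale, a fused score of 2.0 from a single matcher corresponds to an empirical FAR of 1%; the weights let stronger matchers dominate the sum.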
Complex-energy approach to sum rules within nuclear density functional theory
Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...
2015-04-27
The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.
Statistical mechanics of the international trade network.
Fronczak, Agata; Fronczak, Piotr
2012-05-01
Analyzing real data on international trade covering the time interval 1950-2000, we show that in each year over the analyzed period the network is a typical representative of the ensemble of maximally random weighted networks, whose directed connections (bilateral trade volumes) are only characterized by the product of the trading countries' GDPs. It means that time evolution of this network may be considered as a continuous sequence of equilibrium states, i.e., a quasistatic process. This, in turn, allows one to apply the linear response theory to make (and also verify) simple predictions about the network. In particular, we show that bilateral trade fulfills a fluctuation-response theorem, which states that the average relative change in imports (exports) between two countries is a sum of the relative changes in their GDPs. Yearly changes in trade volumes prove that the theorem is valid.
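The fluctuation-response statement above is easy to illustrate: if bilateral trade is proportional to the product of the two countries' GDPs, then the relative change in trade is, to first order, the sum of the relative GDP changes. A toy numeric check with invented GDP figures:

```python
def trade(gdp_i, gdp_j, c=1e-4):
    # Bilateral trade volume proportional to the product of the two GDPs,
    # as in the maximally random weighted network ensemble.
    return c * gdp_i * gdp_j

g1, g2 = 2000.0, 500.0
t0 = trade(g1, g2)
t1 = trade(g1 * 1.03, g2 * 1.02)   # GDPs grow by 3% and 2%
rel_change = t1 / t0 - 1           # exactly 1.03 * 1.02 - 1 = 0.0506
```

The exact relative change (5.06%) differs from the first-order sum (5.00%) only by the cross term 0.03 × 0.02, which is negligible for small yearly growth rates.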
Torque-Summing Brushless Motor
NASA Technical Reports Server (NTRS)
Vaidya, J. G.
1986-01-01
Torque channels function cooperatively but are electrically independent for reliability. Brushless, electronically commutated dc motor sums electromagnetic torques on four channels and applies them to a single shaft. Motor operates with any combination of channels and continues if one or more channels fail electrically. Motor employs a single stator and rotor and is mechanically simple; however, each channel is electrically isolated from the others so that failure of one does not adversely affect the others.
Can the oscillator strength of the quantum dot bandgap transition exceed unity?
NASA Astrophysics Data System (ADS)
Hens, Z.
2008-10-01
We discuss the apparent contradiction between the Thomas-Reiche-Kuhn sum rule for oscillator strengths and recent experimental data on the oscillator strength of the band gap transition of quantum dots. Starting from two simple single electron model systems, we show that the sum rule does not limit this oscillator strength to values below unity, or below the number of electrons in the highest occupied single electron state. The only upper limit the sum rule imposes on the oscillator strength of the quantum dot band gap transition is the total number of electrons in the quantum dot.
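The sum rule at issue can be stated compactly. A standard textbook form of the Thomas-Reiche-Kuhn rule (one-dimensional, single-particle notation), consistent with the abstract's point that only the total, not any individual oscillator strength, is constrained:

```latex
% Thomas-Reiche-Kuhn (f-sum) rule: oscillator strengths of all
% transitions out of the ground state |0> sum to the electron number N.
% No individual f_{0k} is bounded by 1; only the total is fixed.
f_{0k} = \frac{2 m \omega_{0k}}{\hbar}
         \left| \langle k | \hat{x} | 0 \rangle \right|^2 ,
\qquad
\sum_{k} f_{0k} = N
```

This is why a quantum dot band-gap transition can carry an oscillator strength exceeding unity: the rule redistributes, but does not cap, the strength of any single transition.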
Jacob, Mathews; Blu, Thierry; Vaillant, Cedric; Maddocks, John H; Unser, Michael
2006-01-01
We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner-products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well.
Stepanov, Irina; Villalta, Peter W.; Knezevich, Aleksandar; Jensen, Joni; Hatsukami, Dorothy; Hecht, Stephen S.
2009-01-01
Smokeless tobacco contains 28 known carcinogens and causes precancerous oral lesions and oral and pancreatic cancer. A recent study conducted by our research team identified 8 different polycyclic aromatic hydrocarbons (PAH) in U.S. moist snuff, encouraging further investigations of this group of toxicants and carcinogens in smokeless tobacco products. In this study, we developed a gas chromatography-mass spectrometry method that allows simultaneous analysis of 23 various PAH in smokeless tobacco after a simple two-step extraction and purification procedure. The method produced coefficients of variation under 10% for most PAH. The limits of quantitation for different PAH varied between 0.3 ng/g tobacco and 11 ng/g tobacco, starting with a 300-mg sample. The recovery of the stable isotope-labeled internal standards averaged 87%. The method was applied to analysis of 23 moist snuff samples that include various flavors of the most popular U.S. moist snuff brands, as well as 17 samples representing the currently marketed brands of spit-free tobacco pouches, a relatively new type of smokeless tobacco. The sum of all detected PAH in conventional moist snuff averaged 11.6 (± 3.7) µg/g dry weight, 20% of this amount being comprised by carcinogenic PAH. The levels of PAH in new spit-free tobacco products were much lower than those in moist snuff, the sum of all detected PAH averaging 1.3 (±0.28) µg/g dry weight. Our findings render PAH one of the most prevalent groups of carcinogens in smokeless tobacco, along with tobacco-specific nitrosamines. Urgent measures are required from the U.S. tobacco industry to modify manufacturing processes so that the levels of these toxicants and carcinogens in the U.S. moist snuff are greatly reduced. PMID:19860436
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubrovsky, V. G.; Topovsky, A. V.
New exact solutions of the Veselov-Novikov (VN) equation, nonstationary and stationary, in the form of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u⁽ⁿ⁾, n = 1, …, N, are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented, up to a constant, by sums of solutions u⁽ⁿ⁾ and calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schroedinger equation. It is remarkable that in the zero-energy limit the simple nonlinear superpositions convert to linear ones in the form of sums of special solutions u⁽ⁿ⁾. It is shown that the sums u = u^(k₁) + … + u^(k_m), 1 ≤ k₁ < k₂ < … < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons and also superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schroedinger equation and can serve as model potentials for electrons in planar structures of modern electronics.
Ceelen, Manon; van Weissenbruch, Mirjam M; Prein, Janneke; Smit, Judith J; Vermeiden, Jan P W; Spreeuwenberg, Marieke; van Leeuwen, Flora E; Delemarre-van de Waal, Henriette A
2009-11-01
Little is known about post-natal growth in IVF offspring and the effects of rates of early post-natal growth on blood pressure and body fat composition during childhood and adolescence. The follow-up study comprised 233 IVF children aged 8-18 years and 233 spontaneously conceived controls born to subfertile parents. Growth data from birth to 4 years of age, available for 392 children (n = 193 IVF, n = 199 control), were used to study early post-natal growth. Furthermore, early post-natal growth velocity (weight gain) was related to blood pressure and skinfold measurements at follow-up. We found significantly lower weight, height and BMI standard deviation scores (SDSs) at 3 months, and weight SDS at 6 months of age in IVF children compared with controls. Likewise, IVF children demonstrated a greater gain in weight SDS (P < 0.001), height SDS (P = 0.013) and BMI SDS (P = 0.029) during late infancy (3 months to 1 year) versus controls. Weight gain during early childhood (1-3 years) was related to blood pressure in IVF children (P = 0.014 systolic, 0.04 diastolic) but not in controls. Growth during late infancy was not related to skinfold thickness in IVF children, unlike controls (P = 0.002 peripheral sum, 0.003 total sum). Growth during early childhood was related to skinfold thickness in both IVF and controls (P = 0.005 and 0.01 peripheral sum and P = 0.003 and 0.005 total sum, respectively). Late infancy growth velocity of IVF children was significantly higher compared with controls. Nevertheless, early childhood growth instead of infancy growth seemed to predict cardiovascular risk factors in IVF children. Further research is needed to confirm these findings and to follow-up growth and development of IVF children into adulthood.
Fisk, A T; Stern, G A; Hobson, K A; Strachan, W J; Loewen, M D; Norstrom, R J
2001-01-01
Samples of Calanus hyperboreus, a herbivorous copepod, were collected (n = 20) between April and July 1998, and water samples (n = 6) were collected in May 1998, in the Northwater Polynya (NOW) to examine persistent organic pollutants (POPs) in a high Arctic marine zooplankton. Lipid content (dry weight) doubled, water content (r2 = 0.88) and delta15N (r2 = 0.54) significantly decreased, and delta13C significantly increased (r2 = 0.30) in the C. hyperboreus over the collection period, allowing an examination of the role of these variables in POP dynamics in this small pelagic zooplankton. The rank and concentrations of POP groups in C. hyperboreus over the entire sampling period were sum of PCB (30.1 +/- 4.03 ng/g, dry weight) > sum of HCH (11.8 +/- 3.23) > sum of DDT (4.74 +/- 0.74), sum of CHLOR (4.44 +/- 1.0) > sum of ClBz (2.42 +/- 0.18), although these rankings varied considerably over the summer. The alpha- and gamma-HCH and lower-chlorinated PCB congeners were the most common POPs in C. hyperboreus. The relationship between bioconcentration factor (BCF) and octanol-water partition coefficient (Kow) observed for the C. hyperboreus was linear and near 1:1 (slope = 0.72) for POPs with a log Kow between 3 and 6, but curvilinear when hydrophobic POPs (log Kow > 6) were included. Concentrations of sum of HCH, sum of CHLOR and sum of ClBz increased over the sampling period, but no change in sum of PCB or sum of DDT was observed. After removing the effects of time, the variables lipid content, water content, delta15N and delta13C did not describe POP concentrations in C. hyperboreus. These results suggest that hydrophobic POP (log Kow = 3.8-6.0) concentrations in zooplankton are likely to reflect water concentrations and that POPs do not biomagnify in C. hyperboreus or likely in other small, herbivorous zooplankton.
Cumulative sum control charts for assessing performance in arterial surgery.
Beiles, C Barry; Morton, Anthony P
2004-03-01
The Melbourne Vascular Surgical Association (Melbourne, Australia) undertakes surveillance of mortality following aortic aneurysm surgery, patency at discharge following infrainguinal bypass, and stroke and death following carotid endarterectomy. A quality improvement protocol employing the Deming cycle requires that the system for performing surgery first be analysed and optimized. Process and outcome data are then collected, and these data require careful analysis. There must be a mechanism by which the causes of unsatisfactory outcomes can be determined, and a good feedback mechanism must exist so that good performance is acknowledged and unsatisfactory performance corrected. A simple method for analysing these data that detects changes in average outcome rates is available using cumulative sum statistical control charts. Data have been analysed both retrospectively from 1999 to 2001, and prospectively during 2002, using cumulative sum control methods. A pathway to deal with control chart signals has been developed. The standard of arterial surgery in Victoria, Australia, is high. In one case a safe and satisfactory outcome was achieved by following the pathway developed by the audit committee. Cumulative sum control charts are a simple and effective tool for the identification of variations in performance standards in arterial surgery. The establishment of a pathway to manage problem performance is a vital part of audit activity.
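The cumulative sum scheme described above can be illustrated with a minimal sketch. This is a generic tabular CUSUM for a sequence of binary surgical outcomes, not the audit committee's exact chart; the target failure rate `p0` and decision threshold `h` are assumed values chosen for the example.

```python
def cusum(outcomes, p0, h):
    """Tabular CUSUM for a sequence of binary adverse outcomes.

    Each failure (1) adds 1 - p0 to the running sum, each success (0)
    subtracts p0; the sum is floored at zero.  A signal is raised when
    the running sum crosses the decision threshold h, indicating the
    failure rate may have drifted above the target p0.
    """
    s, signals = 0.0, []
    for i, y in enumerate(outcomes):
        s = max(0.0, s + (y - p0))
        if s >= h:
            signals.append(i)  # record where the chart signalled
            s = 0.0            # reset the chart after a signal
    return signals

# A run of failures against a 5% target rate triggers a signal.
print(cusum([0, 0, 1, 1, 1, 1, 0], p0=0.05, h=2.0))
```

In practice `p0` would be set from a risk-adjusted baseline rate and `h` chosen to balance false alarms against detection delay.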
Giant quadrupole and monopole resonances in ²⁸Si
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lui, Y.; Bronson, J.D.; Youngblood, D.H.
1985-05-01
Inelastic alpha scattering measurements have been performed for ²⁸Si at small angles, including zero degrees. A total of 66% of the E0 energy-weighted sum rule was identified (using a Satchler version 2 form factor), centered at E_x = 17.9 MeV with a width of 4.8 MeV, and 34% of the E2 energy-weighted sum rule was identified above E_x = 15.3 MeV, centered at 19.0 MeV with a width of 4.4 MeV. The dependence of the extracted E0 strength on form factor and optical potential was explored.
Tuan, Pham Viet; Koo, Insoo
2017-10-06
In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems in which a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize the weighted sum harvested energy (WSHE) for the SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR, in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that the SDR always has a rank-1 solution and is indeed tight. For imperfect CSI with bounded channel vector errors, the upper bound of the WSHE is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance compared to the other baseline schemes.
A new enhanced index tracking model in portfolio optimization with sum weighted approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng
2017-04-01
Index tracking is a portfolio management approach which aims to construct an optimal portfolio that achieves a return similar to the benchmark index return at minimum tracking error, without purchasing all the stocks that make up the index. Enhanced index tracking is an improved approach which aims to generate a higher portfolio return than the benchmark index return while also minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate a higher mean return than the benchmark index at minimum tracking error, and that the proposed model outperforms the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach which contributes a 67% improvement in portfolio mean return compared to the existing model.
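The comparison metrics named above, tracking error and information ratio, can be sketched from excess returns as follows. This is a generic illustration, not the paper's sum weighted model, and the return series are made up for the example.

```python
import math

def tracking_stats(portfolio, benchmark):
    """Mean excess return, tracking error (standard deviation of the
    excess returns), and information ratio for two return series."""
    excess = [p - b for p, b in zip(portfolio, benchmark)]
    mean = sum(excess) / len(excess)
    var = sum((e - mean) ** 2 for e in excess) / len(excess)
    te = math.sqrt(var)                       # tracking error
    ir = mean / te if te > 0 else float("inf")  # information ratio
    return mean, te, ir

# Hypothetical monthly returns for a tracking portfolio vs. its index.
mean, te, ir = tracking_stats([0.02, 0.01, 0.03], [0.01, 0.01, 0.02])
```

An enhanced index tracking model seeks a positive mean excess return (hence a high information ratio) while keeping the tracking error small.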
A Biologically-Inspired Neural Network Architecture for Image Processing
1990-12-01
was organized into twelve groups of 8-by-8 node arrays. Weights were constrained for each group of nodes, with each node "viewing" a 5-by-5 pixel... single window */ sum = 0; for(j = 0; j < 64; j++) { sum = sum + w[i][j]*rfarray[j]; } /* Finished calculating one block, one position (first layer
40 CFR 86.094-2 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... methane. Non-Methane Hydrocarbon Equivalent means the sum of the carbon mass emissions of non-oxygenated... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Loaded Vehicle Weight means the numerical average of vehicle curb weight and GVWR. Bi-directional control...
[Hydrostatic weighing, skinfold thickness, body mass index relationships in high school girls].
Tahara, Y; Yukawa, K; Tsunawake, N; Saeki, S; Nishiyama, K; Urata, H; Katsuno, K; Fukuyama, Y; Michimukou, R; Uekata, M
1995-12-01
A study was conducted to evaluate body composition by hydrostatic weighing, skinfold thickness, and body mass index (BMI) in 102 senior high school girls, aged 15 to 18, in Nagasaki City. Body density, measured by the underwater weighing method, was used to determine fat weight (Fat) and lean body mass (LBM, or fat-free weight: FFW) using the formulas of Brozek et al. The results were as follows: 1. Mean values of body density were 1.04428 in the first-grade girls, 1.04182 in the second grade, and 1.04185 in the third grade. 2. Mean values of percentage body fat (%Fat) were 23.5% in the first grade, 24.5% in the second and 24.5% in the third. 3. Percentage body fat (%Fat), lean body mass (LBM) and LBM/Height did not differ significantly with advancing grade from the first to the third. 4. The correlation coefficients between percent body fat and the sum of two, three and seven skinfold thicknesses were 0.78, 0.79, and 0.80 respectively, all statistically significant (p < 0.001). 5. The correlation coefficients between BMI and the sum of two, three and seven skinfold thicknesses were 0.74, 0.74, and 0.74 respectively, all statistically significant (p < 0.001). 6. Mean values of BMI, Rohrer index and waist-hip ratio (WHR) in all subjects (n = 102) were 20.3, 128.2 and 0.72 respectively.
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as a first attempt, the extended Kalman filter (EKF) provides sufficient solutions to issues arising from nonlinear and non-Gaussian estimation problems, but these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized; advanced nonlinear filtering methods now benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step from the individual component weight selections. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation.
By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have recently gained popularity for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decreasing the complexity of the system to one linear and one nonlinear state estimation problem. The marginalization theory was originally developed independently by Rao and Blackwell. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined. Next, any other information that depends on the supporting statistic is found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7].
In the following sections a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
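The weighted sum of Gaussian densities at the heart of the Gaussian sum filter can be sketched as follows. This shows only the mixture evaluation, not the adaptive weight update of Ref. [2]; the component weights, means, and standard deviations are assumed values for the example.

```python
import math

def gaussian(x, mu, sigma):
    """Univariate Gaussian density evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gaussian_sum_pdf(x, weights, mus, sigmas):
    """Evaluate a weighted sum of Gaussian densities at x.

    The weights are assumed non-negative and to sum to one, so the
    mixture is itself a valid probability density function."""
    return sum(w * gaussian(x, m, s) for w, m, s in zip(weights, mus, sigmas))

# A bimodal pdf approximated by two equally weighted components.
p = gaussian_sum_pdf(0.0, [0.5, 0.5], [-1.0, 1.0], [0.5, 0.5])
```

A filter built on this representation propagates each component through the dynamics and, in the adaptive variant, re-optimizes the weights so the mixture tracks the true forecast pdf.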
Quantum Hurwitz numbers and Macdonald polynomials
NASA Astrophysics Data System (ADS)
Harnad, J.
2016-11-01
Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.
Pipeline active filter utilizing a booth type multiplier
NASA Technical Reports Server (NTRS)
Nathan, Robert (Inventor)
1987-01-01
Multiplier units of the modified Booth decoder and carry-save adder/full adder combination are used to implement a pipeline active filter wherein pixel data is processed sequentially, and each pixel need only be accessed once and multiplied by a predetermined number of weights simultaneously, one multiplier unit for each weight. Each multiplier unit uses only one row of carry-save adders, and the results are shifted to less significant multiplier positions and one row of full adders to add the carry to the sum in order to provide the correct binary number for the product Wp. The full adder is also used to add this product Wp to the sum of products .SIGMA.Wp from preceding multiply units. If m.times.m multiplier units are pipelined, the system would be capable of processing a kernel array of m.times.m weighting factors.
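The accumulation the pipeline performs, each pixel multiplied by its weight and the product Wp added to the sum of products from preceding units, reduces to a running sum of products. A minimal software sketch of that data flow (the Booth decoding and carry-save adder hardware itself is not modeled):

```python
def pipeline_filter(pixels, weights):
    """Running sum-of-products: each pixel is accessed once, multiplied
    by its weight (one multiplier unit per weight), and the product is
    added to the accumulated sum from the preceding units."""
    total = 0
    for p, w in zip(pixels, weights):
        total += w * p  # product Wp added to the running sum
    return total

print(pipeline_filter([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

In the hardware version these multiply-accumulate steps overlap in time across the pipeline stages rather than executing sequentially.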
Simple model dielectric functions for insulators
NASA Astrophysics Data System (ADS)
Vos, Maarten; Grande, Pedro L.
2017-05-01
The Drude dielectric function is a simple way of describing the dielectric function of free-electron materials, which have a uniform electron density, in a classical way. The Mermin dielectric function also describes a free electron gas, but is based on quantum physics. More complex metals have varying electron densities and are often described by a sum of Drude dielectric functions, the weight of each function being taken proportional to the volume with the corresponding density. Here we describe a slight variation on the Drude dielectric function that describes insulators in a semi-classical way, and a form of the Levine-Louie dielectric function including a relaxation time that does the same within the framework of quantum physics. In the optical limit the semi-classical description of an insulator and the quantum physics description coincide, in the same way as the Drude and Mermin dielectric functions coincide in the optical limit for metals. There is a simple relation between the coefficients used in the classical and quantum approaches, a relation that ensures that the obtained dielectric function corresponds to the right static refractive index. For water we give a comparison of the model dielectric function at non-zero momentum with inelastic X-ray measurements, both at relatively small momenta and in the Compton limit. The Levine-Louie dielectric function including a relaxation time describes the spectra at small momentum quite well, but in the Compton limit there are significant deviations.
On Nash Equilibria in Stochastic Games
2003-10-01
Traditionally, automata theory and verification has considered zero-sum or strictly competitive versions of stochastic games. In these games there are two players...zero-sum discrete-time stochastic dynamic games. SIAM J. Control and Optimization, 19(5):617-634, 1981. 18. R.J. Lipton, E. Markakis, and A. Mehta...Playing large games using simple strategies. In EC 03: Electronic Commerce, pages 36-41. ACM Press, 2003. 19. A. Maitra and W. Sudderth. Finitely
Ultrasonic Imaging in Solids Using Wave Mode Beamforming.
di Scalea, Francesco Lanza; Sternini, Simone; Nguyen, Thompson Vu
2017-03-01
This paper discusses some improvements to ultrasonic synthetic imaging in solids with primary applications to nondestructive testing of materials and structures. Specifically, the study proposes new adaptive weights applied to the beamforming array that are based on the physics of the propagating waves, specifically the displacement structure of the propagating longitudinal (L) mode and shear (S) mode that are naturally coexisting in a solid. The wave mode structures can be combined with the wave geometrical spreading to better filter the array (in a matched filter approach) and improve its focusing ability compared to static array weights. This paper also proposes compounding, or summing, images obtained from the different wave modes to further improve the array gain without increasing its physical aperture. The wave mode compounding can be performed either incoherently or coherently, in analogy with compounding multiple frequencies or multiple excitations. Numerical simulations and experimental testing demonstrate the potential improvements obtainable by the wave structure adaptive weights compared to either static weights in conventional delay-and-sum focusing, or adaptive weights based on geometrical spreading alone in minimum-variance distortionless response focusing.
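Conventional delay-and-sum focusing with static weights, the baseline against which the adaptive wave-structure weights are compared, can be sketched as follows. The channel signals, sample delays, and apodization weights are toy values assumed for the example.

```python
def delay_and_sum(signals, delays, weights):
    """Conventional delay-and-sum beamforming: shift each channel by
    its focal delay (in samples), apply a static apodization weight,
    and sum across channels."""
    # Usable output length after applying the per-channel delays.
    n = min(len(s) - d for s, d in zip(signals, delays))
    return [sum(w * s[d + i] for s, d, w in zip(signals, delays, weights))
            for i in range(n)]

# Two channels whose pulses align after delaying the first by one sample,
# so the summed output concentrates the energy at a single sample.
out = delay_and_sum([[0, 1, 0, 0], [1, 0, 0, 0]],
                    delays=[1, 0], weights=[0.5, 0.5])
```

Adaptive schemes replace the static `weights` with data- or physics-dependent ones, which is where the mode-structure weighting of the paper comes in.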
A novel beamformer design method for medical ultrasound. Part I: Theory.
Ranganathan, Karthik; Walker, William F
2003-01-01
The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
Design of an automatic weight scale for an isolette
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Griffin, W.
1974-01-01
The design of an infant weight scale that fits into an isolette without disturbing its controlled atmosphere is reported. The scale platform uses strain gauges to electronically measure deflections of cantilever beams positioned at its four corners. The weight of the infant is proportional to the sum of the output voltages produced by the gauges on each beam of the scale.
Soh, Nerissa L; Touyz, Stephen; Dobbins, Timothy A; Clarke, Simon; Kohn, Michael R; Lee, Ee Lian; Leow, Vincent; Ung, Ken E K; Walter, Garry
2009-01-01
To investigate the relationship between skinfold thickness and body mass index (BMI) in North European Caucasian and East Asian young women with and without anorexia nervosa (AN) in two countries. Height, weight and skinfold thicknesses were assessed in 137 young women with and without AN, in Australia and Singapore. The relationship between BMI and the sum of triceps, biceps, subscapular and iliac crest skinfolds was analysed with clinical status, ethnicity, age and country of residence as covariates. For the same BMI, women with AN had significantly smaller sums of skinfolds than women without AN. East Asian women both with and without AN had significantly greater skinfold sums than their North European Caucasian counterparts after adjusting for BMI. Lower BMI goals may be appropriate when managing AN patients of East Asian ancestry and the weight for height diagnostic criterion should be reconsidered for this group.
Simple and Double Alfven Waves: Hamiltonian Aspects
NASA Astrophysics Data System (ADS)
Webb, G. M.; Zank, G. P.; Hu, Q.; le Roux, J. A.; Dasgupta, B.
2011-12-01
We discuss the nature of simple and double Alfvén waves. Simple waves depend on a single phase variable φ, but double waves depend on two independent phase variables φ₁ and φ₂. The phase variables depend on the space and time coordinates x and t. Simple and double Alfvén waves have the same integrals, namely, the entropy, density, magnetic pressure, and group velocity (the sum of the Alfvén and fluid velocities) are constant throughout the flow. We present examples of both simple and double Alfvén waves, and discuss Hamiltonian formulations of the waves.
Closed-form summations of Dowker's and related trigonometric sums
NASA Astrophysics Data System (ADS)
Cvijović, Djurdje; Srivastava, H. M.
2012-09-01
Through a unified and relatively simple approach which uses complex contour integrals, particularly convenient integration contours and the calculus of residues, closed-form summation formulas for 12 very general families of trigonometric sums are deduced. One of them is a family of cosecant sums which was first summed in closed form in a series of papers by Dowker (1987 Phys. Rev. D 36 3095-101; 1989 J. Math. Phys. 30 770-3; 1992 J. Phys. A: Math. Gen. 25 2641-8), whose method has inspired our work in this area. All of the formulas derived here involve the higher-order Bernoulli polynomials. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’.
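A classical member of the cosecant family gives a feel for such closed forms: the well-known identity Σ_{k=1}^{n-1} csc²(kπ/n) = (n² − 1)/3, which is easy to check numerically. This particular identity is offered as a standard example of a closed-form trigonometric sum, not necessarily one of the 12 families treated in the paper.

```python
import math

def cosecant_sum(n):
    """Numerically evaluate sum_{k=1}^{n-1} csc^2(k*pi/n)."""
    return sum(1.0 / math.sin(k * math.pi / n) ** 2 for k in range(1, n))

# The classical closed form for this sum is (n^2 - 1)/3.
for n in (2, 3, 5, 10):
    assert abs(cosecant_sum(n) - (n * n - 1) / 3) < 1e-9
```

The contour-integral method referred to above derives such closed forms systematically, rather than verifying them case by case.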
Chaos in learning a simple two-person game
Sato, Yuzuru; Akiyama, Eizo; Farmer, J. Doyne
2002-01-01
We investigate the problem of learning to play the game of rock–paper–scissors. Each player attempts to improve her/his average score by adjusting the frequency of the three possible responses, using reinforcement learning. For the zero sum game the learning process displays Hamiltonian chaos. Thus, the learning trajectory can be simple or complex, depending on initial conditions. We also investigate the non-zero sum case and show that it can give rise to chaotic transients. This is, to our knowledge, the first demonstration of Hamiltonian chaos in learning a basic two-person game, extending earlier findings of chaotic attractors in dissipative systems. As we argue here, chaos provides an important self-consistency condition for determining when players will learn to behave as though they were fully rational. That chaos can occur in learning a simple game indicates one should use caution in assuming real people will learn to play a game according to a Nash equilibrium strategy. PMID:11930020
NASA Technical Reports Server (NTRS)
Berk, A.; Temkin, A.
1985-01-01
A sum rule is derived for the auxiliary eigenvalues of an equation whose eigenspectrum pertains to projection operators which describe electron scattering from multielectron atoms and ions. The sum rule's right-hand side depends on an integral involving the target system eigenfunctions. The sum rule is checked for several approximations of the two-electron target. It is shown that target functions which have a unit eigenvalue in their auxiliary eigenspectrum do not give rise to well-defined projection operators except through a limiting process. For Hylleraas target approximations, the auxiliary equations are shown to contain an infinite spectrum. However, using a Rayleigh-Ritz variational principle, it is shown that a comparatively simple approximation can exhaust the sum rule to better than five significant figures. The auxiliary Hylleraas equation is greatly simplified by conversion to a square-root equation containing the same eigenfunction spectrum, from which the required eigenvalues are trivially recovered by squaring.
Monyeki, Kotsedi; Kemper, Han; Mogale, Alfred; Hay, Leon; Sekgala, Machoene; Mashiane, Tshephang; Monyeki, Suzan; Sebati, Betty
2017-08-29
The aim of this cross-sectional study was to investigate the association between birth weight, underweight, and blood pressure (BP) among Ellisras rural children aged between 5 and 15 years. Data were collected from 528 respondents who participated in the Ellisras Longitudinal Study (ELS) and had their birth weight recorded on their health clinic card. Standard procedure was used to measure the anthropometric measurements and BP. Linear regression was used to assess BP, underweight variables, and birth weight. Logistic regression was used to assess the association of hypertension risks, low birth weight, and underweight. The association between birth weight and BP was not statistically significant. There was a significant ( p < 0.05) association between mean BP and the sum of four skinfolds (β = 0.26, 95% CI 0.15-0.23) even after adjusting for age (β = 0.18, 95% CI 0.01-0.22). Hypertension was significantly associated with weight for age z-scores (OR = 5.13, 95% CI 1.89-13.92) even after adjusting for age and sex (OR = 5.26, 95% CI 1.93-14.34). BP was significantly associated with the sum of four skinfolds, but not birth weight. Hypertension was significantly associated with underweight. Longitudinal studies should confirm whether the changes in body weight we found can influence the risk of cardiovascular diseases.
Comparison of two weighted integration models for the cueing task: linear and likelihood
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2003-01-01
In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratios (SNR) increase. To test these models, 3 observers performed in a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNR's. Analysis of a limited capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
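The two decision rules compared above can be sketched for a two-location trial. The Gaussian response model with unit variance and signal mean d is an assumed form used here for illustration, not the authors' exact parameterization; the cue weights and responses are made-up values.

```python
import math

def linear_decision(responses, weights):
    """Linear model: a weighted linear combination of the noisy
    responses at the two locations."""
    return sum(w * r for w, r in zip(weights, responses))

def likelihood_decision(responses, weights, d):
    """Sum of weighted likelihoods: each location contributes its
    likelihood ratio for signal (mean d) vs. noise (mean 0), assuming
    unit-variance Gaussian responses, scaled by the cue weight."""
    return sum(w * math.exp(d * r - d * d / 2) for w, r in zip(weights, responses))

# With an 80% valid precue, the cued location gets the larger weight.
lin = linear_decision([1.2, -0.3], weights=[0.8, 0.2])
lik = likelihood_decision([1.2, -0.3], weights=[0.8, 0.2], d=1.0)
```

Because the likelihood ratio is exponential in the response, the two rules diverge as the signal-to-noise ratio grows, which is what lets the psychophysical data discriminate between them.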
Construction of an Exome-Wide Risk Score for Schizophrenia Based on a Weighted Burden Test.
Curtis, David
2018-01-01
Polygenic risk scores obtained as a weighted sum of associated variants can be used to explore association in additional data sets and to assign risk scores to individuals. The methods used to derive polygenic risk scores from common SNPs are not suitable for variants detected in whole exome sequencing studies. Rare variants, which may have major effects, are seen too infrequently to judge whether they are associated and may not be shared between training and test subjects. A method is proposed whereby variants are weighted according to their frequency, their annotations and the genes they affect. A weighted sum across all variants provides an individual risk score. Scores constructed in this way are used in a weighted burden test and are shown to be significantly different between schizophrenia cases and controls using a five-way cross-validation procedure. This approach represents a first attempt to summarise exome sequence variation into a summary risk score, which could be combined with risk scores from common variants and from environmental factors. It is hoped that the method could be developed further. © 2017 John Wiley & Sons Ltd/University College London.
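The individual score construction, a weighted sum over variants, can be sketched as follows. The genotype coding and the weights here are made-up illustrations, not the paper's actual frequency/annotation/gene weighting scheme.

```python
def burden_score(genotype, weights):
    """Individual risk score as a weighted sum over variants.

    genotype[v] counts the rare alleles (0, 1 or 2) the individual
    carries at variant v; weights[v] is an assumed per-variant weight
    reflecting frequency and annotation severity."""
    return sum(w * g for g, w in zip(genotype, weights))

# A carrier of one rare high-weight variant outscores a non-carrier.
case = burden_score([1, 0, 2], weights=[5.0, 1.0, 0.5])
ctrl = burden_score([0, 0, 1], weights=[5.0, 1.0, 0.5])
```

In the weighted burden test, scores of this form are then compared between cases and controls, here via cross-validation so that weights are never fit on the individuals being scored.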
Sandau, Courtney D; Ayotte, Pierre; Dewailly, Eric; Duffe, Jason; Norstrom, Ross J
2002-01-01
Concentrations of polychlorinated biphenyls (PCBs), hydroxylated metabolites of PCBs (HO-PCBs) and octachlorostyrene (4-HO-HpCS), and pentachlorophenol (PCP) were determined in umbilical cord plasma samples from three different regions of Québec. The regions studied included two coastal areas where exposure to PCBs is high because of marine-food-based diets--Nunavik (Inuit people) and the Lower North Shore of the Gulf of St. Lawrence (subsistence fishermen)--and a southern Québec urban center where PCB exposure is at background levels (Québec City). The main chlorinated phenolic compound in all regions was PCP. Concentrations of PCP were not significantly different among regions (geometric mean concentration 1,670 pg/g, range 628-7,680 pg/g wet weight in plasma). The ratio of PCP to polychlorinated biphenyl congener number 153 (CB153) concentration ranged from 0.72 to 42.3. Sum HO-PCB (sigma HO-PCBs) concentrations were different among regions, with geometric mean concentrations of 553 (range 238-1,750), 286 (103-788), and 234 (147-464) pg/g wet weight plasma for the Lower North Shore, Nunavik, and the southern Québec groups, respectively. Lower North Shore samples also had the highest geometric mean concentration of sum PCBs (sum of 49 congeners; sigma PCBs), 2,710 (525-7,720) pg/g wet weight plasma. sigma PCB concentrations for Nunavik samples and southern samples were 1,510 (309-6,230) and 843 (290-1,650) pg/g wet weight plasma. Concentrations (log transformed) of sigma HO-PCBs and sigma PCBs were significantly correlated (r = 0.62, p < 0.001), as were concentrations of all major individual HO-PCB congeners and individual PCB congeners. In Nunavik and Lower North Shore samples, free thyroxine (T4) concentrations (log transformed) were negatively correlated with the sum of quantitated chlorinated phenolic compounds (sum PCP and sigma HO-PCBs; r = -0.47, p = 0.01, n = 20) and were not correlated with any PCB congeners or sigma PCBs. 
This suggests that PCP and HO-PCBs are possibly altering thyroid hormone status in newborns, which could lead to neurodevelopmental effects in infants. Further studies are needed to examine the effects of chlorinated phenolic compounds on thyroid hormone status in newborns. PMID:11940460
Wiley, A S; Lubree, H G; Joshi, S M; Bhat, D S; Ramdas, L V; Rao, A S; Thuse, N V; Deshpande, V U; Yajnik, C S
2016-04-01
Indian newborns have been described as 'thin-fat' compared with European babies, but little is known about how this phenotype relates to the foetal growth factor IGF-I (insulin-like growth factor I) or its binding protein IGFBP-3. To assess cord IGF-I and IGFBP-3 concentrations in a sample of Indian newborns and evaluate their associations with neonatal adiposity and maternal factors. A prospective cohort study of 146 pregnant mothers with dietary, anthropometric and biochemical measurements at 28 and 34 weeks gestation. Neonatal weight, length, skin-folds, circumferences, and cord blood IGF-I and IGFBP-3 concentrations were measured at birth. Average cord IGF-I and IGFBP-3 concentrations were 46.6 (2.2) and 1269.4 (41) ng mL⁻¹, respectively. Girls had higher mean IGF-I than boys (51.4 ng mL⁻¹ vs. 42.9 ng mL⁻¹; P < 0.03), but IGFBP-3 did not differ. Cord IGF-I was positively correlated with all birth size measures except length, and most strongly with neonatal sum-of-skin-folds (r = 0.50, P < 0.001). IGFBP-3 was positively correlated with ponderal index, sum-of-skin-folds and placenta weight (r = 0.21, 0.19, 0.16, respectively; P < 0.05). Of maternal demographic and anthropometric characteristics, only parity was correlated with cord IGF-I (r = 0.27, P < 0.001). Among dietary behaviours, maternal daily milk intake at 34 weeks gestation predicted higher cord IGF-I compared to no milk intake (51.8 ng mL⁻¹ vs. 36.5 ng mL⁻¹, P < 0.01) after controlling for maternal characteristics, placental weight, and newborn gestational age, sex, weight and sum-of-skin-folds. Sum-of-skin-folds was positively associated with cord IGF-I in this multivariate model (57.3 ng mL⁻¹ vs. 35.1 ng mL⁻¹ for the highest and lowest sum-of-skin-fold quartiles, P < 0.001). IGFBP-3 did not show significant relationships with these covariates. In this Indian study, cord IGF-I concentration was associated with greater adiposity among newborns.
Maternal milk intake may play a role in this relationship. © 2015 World Obesity.
Simple data-smoothing and noise-suppression technique
NASA Technical Reports Server (NTRS)
Duty, R. L.
1970-01-01
Algorithm, based on the Borel method of summing divergent sequences, is used for smoothing noisy data where knowledge of frequency content is not required. Technique's effectiveness is demonstrated by a series of graphs.
Determination of total dissolved solids in water analysis
Howard, C.S.
1933-01-01
The figure for total dissolved solids, based on the weight of the residue on evaporation after heating for 1 hour at 180°C, is reasonably close to the sum of the determined constituents for most natural waters. Waters of the carbonate type that are high in magnesium may give residues that weigh less than the sum. Natural waters of the sulfate type usually give residues that are too high on account of incomplete drying.
Sykes-Muskett, Bianca J; Prestwich, Andrew; Lawton, Rebecca J; Armitage, Christopher J
2015-01-01
Financial incentives to improve health have received increasing attention but are subject to ethical concerns. Monetary Contingency Contracts (MCCs), which require individuals to deposit money that is refunded contingent on reaching a goal, are a potential alternative strategy. This review systematically evaluates the evidence for weight-loss-related MCCs. Randomised controlled trials testing the effect of weight-loss-related MCCs were identified in online databases. Random-effects meta-analyses were used to calculate overall effect sizes for weight loss and participant retention. The association between MCC characteristics and weight-loss/participant-retention effects was calculated using meta-regression. There was a significant small-to-medium effect of MCCs on weight loss during treatment when one outlier study was removed. Group refunds, deposits not paid as a lump sum, participants setting their own deposit size and additional behaviour change techniques were associated with greater weight loss during treatment. Post-treatment, there was no significant effect of MCCs on weight loss. There was a significant small-to-medium effect of MCCs on participant retention during treatment. Researcher-set deposits paid as one lump sum, refunds delivered on an all-or-nothing basis and refunds contingent on attendance at classes were associated with greater retention during treatment. Post-treatment, there was no significant effect of MCCs on participant retention. The results support the use of MCCs to promote weight loss and participant retention up to the point that the incentive is removed and identify the conditions under which MCCs work best.
40 CFR 60.562-1 - Standards: Process emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... methane and ethane) (TOC) by 98 weight percent, or to a concentration of 20 parts per million by volume (ppmv) on a dry basis, whichever is less stringent. The TOC is expressed as the sum of the actual... Polypropylene and Polyethylene Affected Facilities Procedure /a/ Applicable TOC weight percent range Control/no...
Algebraic grid adaptation method using non-uniform rational B-spline surface modeling
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, B. K.
1992-01-01
An algebraic adaptive grid system based on the equidistribution law and utilizing Non-Uniform Rational B-Spline (NURBS) surface modeling for redistribution is presented. A weight function, utilizing a properly weighted boolean sum of various flow field characteristics, is developed. Computational examples are presented to demonstrate the success of this technique.
A Decision Support System for Solving Multiple Criteria Optimization Problems
ERIC Educational Resources Information Center
Filatovas, Ernestas; Kurasova, Olga
2011-01-01
In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
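The WS scalarization such a DSS implements reduces a multi-criteria problem to a single objective; a minimal sketch (the criterion functions and weights are illustrative):

```python
def weighted_sum(criteria_values, weights):
    # Scalarize several criterion values into one objective
    return sum(w * f for w, f in zip(weights, criteria_values))

def best_by_weighted_sum(candidates, criteria, weights):
    # Pick the candidate minimizing the weighted sum of its criterion values
    return min(candidates,
               key=lambda x: weighted_sum([f(x) for f in criteria], weights))
```

Re-solving with different weight vectors yields different Pareto-optimal solutions, which is how interactive weight selection lets the decision maker explore the trade-offs (although WS cannot reach non-convex parts of the Pareto front).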
The Seven Deadly Sins of World University Ranking: A Summary from Several Papers
ERIC Educational Resources Information Center
Soh, Kaycheng
2017-01-01
World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…
Assessment of the magnetic field exposure due to the battery current of digital mobile phones.
Jokela, Kari; Puranen, Lauri; Sihvonen, Ari-Pekka
2004-01-01
Hand-held digital mobile phones generate pulsed magnetic fields associated with the battery current. The peak value and the waveform of the battery current were measured for seven different models of digital mobile phones, and the results were applied to compute approximate values of the magnetic flux density and induced currents in the phone user's head. A simple circular loop model was used for the magnetic field source, and a homogeneous sphere consisting of average brain-tissue-equivalent material simulated the head. The broadband magnetic flux density and the maximal induced current density were compared with the ICNIRP guidelines using two different approaches. In the first approach the relative exposure was determined separately at each frequency and the exposure ratios were summed to obtain the total exposure (multiple-frequency rule). In the second approach the waveform was weighted in the time domain with a simple low-pass RC filter and the peak value was divided by a peak limit, both derived from the guidelines (weighted peak approach). With the maximum transmitting power (2 W) the measured peak current varied from 1 to 2.7 A. The ICNIRP exposure ratio based on the current density varied from 0.04 to 0.14 for the weighted peak approach and from 0.08 to 0.27 for the multiple-frequency rule. The latter values are considerably greater than the corresponding exposure ratios of 0.005 (min) to 0.013 (max) obtained by applying the evaluation based on frequency components presented in the new IEEE standard. Hence, the exposure does not seem to exceed the guidelines. The computed peak magnetic flux density substantially exceeded the derived peak reference level of ICNIRP, but it should be noted that in a near-field exposure the external field strengths are not valid indicators of exposure. Currently, no biological data exist to give a reason for concern about the health effects of magnetic field pulses from mobile phones.
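The two evaluation rules can be sketched as follows; the filter corner frequency, limits and sample values below are placeholders, not the values used in the study.

```python
import math

def multiple_frequency_rule(component_amplitudes, limits):
    # Total exposure: sum over frequency components of (amplitude / limit);
    # compliant if the sum stays at or below 1
    return sum(a / l for a, l in zip(component_amplitudes, limits))

def weighted_peak_rule(samples, dt, corner_hz, peak_limit):
    # Weight the waveform with a first-order low-pass RC filter in the time
    # domain, then compare the filtered peak against the peak limit
    rc = 1.0 / (2.0 * math.pi * corner_hz)
    alpha = dt / (rc + dt)
    y, peak = 0.0, 0.0
    for x in samples:
        y += alpha * (x - y)      # discrete RC low-pass update
        peak = max(peak, abs(y))
    return peak / peak_limit
```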
Object attributes combine additively in visual search.
Pramod, R T; Arun, S P
2016-01-01
We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-03-16
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem. PMID:25844012
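The turn-based selection process, with both agents playing a greedy "largest item that still fits" heuristic, can be sketched as follows (a simplified reading of the rules in which an agent whose items no longer fit simply passes):

```python
def play_greedy(items_a, items_b, capacity):
    # Both agents repeatedly pick their largest remaining item that still fits
    pools = {"A": sorted(items_a, reverse=True), "B": sorted(items_b, reverse=True)}
    totals = {"A": 0, "B": 0}
    remaining, turn = capacity, "A"
    while True:
        other = "B" if turn == "A" else "A"
        pick = next((w for w in pools[turn] if w <= remaining), None)
        if pick is None:
            if all(w > remaining for w in pools[other]):
                break  # nothing left fits for either agent: game over
            turn = other
            continue
        pools[turn].remove(pick)
        totals[turn] += pick
        remaining -= pick
        turn = other
    return totals, remaining
```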
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum of IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
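A weighted sum of inverse Gaussian densities, parameterized here by the mean and the squared relative dispersion (a common parameterization; the paper's exact one is not reproduced here), can be sketched as:

```python
import math

def inverse_gaussian_pdf(t, mean, cv2):
    # IG density with mean `mean` and squared coefficient of variation `cv2`
    if t <= 0.0:
        return 0.0
    return math.sqrt(mean / (2.0 * math.pi * cv2 * t ** 3)) * \
        math.exp(-((t - mean) ** 2) / (2.0 * cv2 * mean * t))

def input_function(t, components):
    # Weighted sum of IG densities; components = [(weight, mean, cv2), ...]
    # with weights summing to 1 so the input integrates to the full dose fraction
    return sum(w * inverse_gaussian_pdf(t, m, c) for w, m, c in components)
```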
[Levels and distribution of short chain chlorinated paraffins in seafood from Dalian, China].
Yu, Jun-Chao; Wang, Thanh; Wang, Ya-Wei; Meng, Mei; Chen, Ru; Jiang, Gui-Bin
2014-05-01
Seafood samples were collected from Dalian, China to study the accumulation and distribution characteristics of short chain chlorinated paraffins (SCCPs) by GC/ECNI-LRMS. Sum of SCCPs (dry weight) was in the range of 77-8250 ng·g⁻¹, with the lowest value in Scapharca subcrenata and the highest concentration in Neptunea cumingi. The concentrations of sum of SCCPs (dry weight) in fish, shrimp/crab and shellfish were in the ranges of 100-3510, 394-5440, and 77-8250 ng·g⁻¹, respectively. Overall, the C10 and C11 homologues were the most predominant carbon groups of SCCPs in seafood from this area, and a relatively higher proportion of C12-13 was observed in seafood with higher concentrations of sum of SCCPs. With regard to chlorine content, Cl7, Cl8 and Cl6 were the major groups. Significant correlations were found among concentrations of different SCCP homologues (except Cl7 vs. Cl10), which indicated that they might share the same sources and/or have similar accumulation, migration and transformation processes.
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
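The two chart statistics are standard; minimal versions follow (the smoothing constant and allowance values are illustrative):

```python
def ewma_chart(times, lam=0.2):
    # Exponentially weighted moving average of operative times
    z = times[0]
    chart = []
    for t in times:
        z = lam * t + (1.0 - lam) * z
        chart.append(z)
    return chart

def cusum_chart(times, target, allowance=0.0):
    # One-sided upper CUSUM: accumulates excess operative time above
    # target + allowance, resetting at zero
    s, chart = 0.0, []
    for t in times:
        s = max(0.0, s + (t - target - allowance))
        chart.append(s)
    return chart
```

In practice the fixed target is replaced by a case-mix-adjusted expected time for each procedure, which is the adjustment the authors show improves the charts' sensitivity and accuracy.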
Narasimhalu, Kaavya; Lee, June; Auchus, Alexander P; Chen, Christopher P L H
2008-01-01
Previous work combining the Mini-Mental State Examination (MMSE) and Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) has been conducted in western populations. We ascertained, in an Asian population, (1) the best method of combining the tests, (2) the effects of educational level, and (3) the effect of different dementia etiologies. Data from 576 patients were analyzed (407 nondemented controls, 87 Alzheimer's disease and 82 vascular dementia patients). Sensitivity, specificity and AUC values were obtained using three methods, the 'And' rule, the 'Or' rule, and the 'weighted sum' method. The 'weighted sum' rule had statistically superior AUC and specificity results, while the 'Or' rule had the best sensitivity results. The IQCODE outperformed the MMSE in all analyses. Patients with no education benefited more from combined tests. There was no difference between Alzheimer's disease and vascular dementia populations in the predictive value of any of the combined methods. We recommend that the IQCODE be used to supplement the MMSE whenever available and that the 'weighted sum' method be used to combine the MMSE and the IQCODE, particularly in populations with low education. As the study population selected may not be representative of the general population, further studies are required before generalization to nonclinical samples. (c) 2007 S. Karger AG, Basel.
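The three combination rules compared can be sketched as follows; the weights and cut-off in the weighted-sum rule are placeholders that would in practice come from a fitted model such as logistic regression, not values from the study.

```python
def and_rule(mmse_positive, iqcode_positive):
    # Positive only if both screens are positive (favours specificity)
    return mmse_positive and iqcode_positive

def or_rule(mmse_positive, iqcode_positive):
    # Positive if either screen is positive (favours sensitivity)
    return mmse_positive or iqcode_positive

def weighted_sum_rule(mmse, iqcode, w_mmse=1.0, w_iqcode=10.0, cutoff=45.0):
    # Combine the raw scores: lower MMSE (0-30) and higher IQCODE (1-5)
    # both indicate impairment
    score = w_mmse * (30 - mmse) + w_iqcode * iqcode
    return score >= cutoff
```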
Woerd, Hendrik J van der; Wernand, Marcel R
2015-10-09
The colours from natural waters differ markedly over the globe, depending on the water composition and illumination conditions. The space-borne "ocean colour" instruments are operational instruments designed to retrieve important water-quality indicators, based on the measurement of water leaving radiance in a limited number (5 to 10) of narrow (≈10 nm) bands. Surprisingly, the analysis of the satellite data has not yet paid attention to colour as an integral optical property that can also be retrieved from multispectral satellite data. In this paper we re-introduce colour as a valuable parameter that can be expressed mainly by the hue angle (α). Based on a set of 500 synthetic spectra covering a broad range of natural waters a simple algorithm is developed to derive the hue angle from SeaWiFS, MODIS, MERIS and OLCI data. The algorithm consists of a weighted linear sum of the remote sensing reflectance in all visual bands plus a correction term for the specific band-setting of each instrument. The algorithm is validated by a set of 603 hyperspectral measurements from inland-, coastal- and near-ocean waters. We conclude that the hue angle is a simple objective parameter of natural waters that can be retrieved uniformly for all space-borne ocean colour instruments.
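The hue-angle retrieval reduces to weighted linear sums of band reflectances giving chromaticity coordinates, then an angle about the white point. A sketch follows; the band weights are hypothetical placeholders, since the real coefficients are instrument-specific (e.g. for SeaWiFS or MERIS), and the paper's per-instrument correction term is omitted.

```python
import math

def hue_angle_deg(reflectances, wx, wy, wz):
    # Tristimulus values as weighted linear sums over the visible bands
    X = sum(w * r for w, r in zip(wx, reflectances))
    Y = sum(w * r for w, r in zip(wy, reflectances))
    Z = sum(w * r for w, r in zip(wz, reflectances))
    total = X + Y + Z
    x, y = X / total, Y / total
    # Hue angle: direction of the chromaticity point seen from the white point
    return math.degrees(math.atan2(y - 1.0 / 3.0, x - 1.0 / 3.0)) % 360.0
```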
Effects of Preseason Training on the Sleep Characteristics of Professional Rugby League Players.
Thornton, Heidi R; Delaney, Jace A; Duthie, Grant M; Dascombe, Ben J
2018-02-01
To investigate the influence of daily and exponentially weighted moving training loads on subsequent nighttime sleep. Sleep of 14 professional rugby league athletes competing in the National Rugby League was recorded using wristwatch actigraphy. Physical demands were quantified using GPS technology, including total distance, high-speed distance, acceleration/deceleration load (SumAccDec; AU), and session rating of perceived exertion (AU). Linear mixed models determined effects of acute (daily) and subacute (3- and 7-d) exponentially weighted moving averages (EWMA) on sleep. Higher daily SumAccDec was associated with increased sleep efficiency (effect-size correlation; ES = 0.15; ±0.09) and sleep duration (ES = 0.12; ±0.09). Greater 3-d EWMA SumAccDec was associated with increased sleep efficiency (ES = 0.14; ±0.09) and an earlier bedtime (ES = 0.14; ±0.09). An increase in 7-d EWMA SumAccDec was associated with heightened sleep efficiency (ES = 0.15; ±0.09) and earlier bedtimes (ES = 0.15; ±0.09). The direction of the associations between training loads and sleep varied, but the strongest relationships showed that higher training loads increased various measures of sleep. Practitioners should be aware of the increased requirement for sleep during intensified training periods, using this information in the planning and implementation of training and individualized recovery modalities.
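The subacute loads use the EWMA formulation common in the training-load literature, with decay lambda = 2/(N+1) for an N-day window; a sketch with hypothetical daily values:

```python
def ewma_load(daily_loads, n_days):
    # Exponentially weighted moving average over an N-day decay window
    lam = 2.0 / (n_days + 1.0)
    z = daily_loads[0]
    series = [z]
    for x in daily_loads[1:]:
        z = lam * x + (1.0 - lam) * z
        series.append(z)
    return series
```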
Photonuclear sum rules and the tetrahedral configuration of He4
NASA Astrophysics Data System (ADS)
Gazit, Doron; Barnea, Nir; Bacca, Sonia; Leidemann, Winfried; Orlandini, Giuseppina
2006-12-01
Three well-known photonuclear sum rules (SR), i.e., the Thomas-Reiche-Kuhn, the bremsstrahlung and the polarizability SR, are calculated for He4 with the realistic nucleon-nucleon potential Argonne V18 and the three-nucleon force Urbana IX. The relation between these sum rules and the corresponding energy weighted integrals of the cross section is discussed. Two additional equivalences for the bremsstrahlung SR are given, which connect it to the proton-neutron and neutron-neutron distances. Using them, together with our result for the bremsstrahlung SR, we find a deviation from the tetrahedral symmetry of the spatial configuration of He4. The possibility to access this deviation experimentally is discussed.
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
Optical implementation of inner product neural associative memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1995-01-01
An optical implementation of an inner-product neural associative memory is realized with a first spatial light modulator for entering an initial two-dimensional N-tuple vector and for entering a thresholded output vector image after each iteration until convergence is reached, and a second spatial light modulator for entering M weighted vectors of inner-product scalars multiplied with each of the M stored vectors, where the inner-product scalars are produced by multiplication of the initial input vector in the first iterative cycle (and thresholded vectors in subsequent iterative cycles) with each of the M stored vectors, and the weighted vectors are produced by multiplication of the scalars with corresponding ones of the stored vectors. A Hughes liquid crystal light valve is used for the dual function of summing the weighted vectors and thresholding the sum vector. The thresholded vector is then entered through the first spatial light modulator for reiteration of the process cycle until convergence is reached.
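The iterative recall loop in plain code form, as a software analogue of the optical system (bipolar vectors and a hard-limit threshold are assumed):

```python
def hard_limit(v):
    # Thresholding performed optically by the liquid crystal light valve
    return [1 if x > 0 else -1 for x in v]

def recall(stored, probe, max_iters=20):
    # Inner-product associative recall: weight each stored vector by its
    # inner product with the current estimate, sum the weighted vectors,
    # threshold, and repeat until the estimate is stable
    v = list(probe)
    for _ in range(max_iters):
        scalars = [sum(a * b for a, b in zip(m, v)) for m in stored]
        summed = [sum(s * m[i] for s, m in zip(scalars, stored))
                  for i in range(len(v))]
        nxt = hard_limit(summed)
        if nxt == v:          # converged to a stable recollection
            return nxt
        v = nxt
    return v
```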
Evolving cell models for systems and synthetic biology.
Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio
2010-03-01
This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
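The four fitness aggregations can be sketched as follows, applied to per-objective error values (the function names and the normalization scales are illustrative):

```python
import random

def equally_weighted_sum(errors):
    # (1) every objective gets the same weight
    return sum(errors) / len(errors)

def normalized_sum(errors, scales):
    # (2) each objective divided by its own scale before summing
    return sum(e / s for e, s in zip(errors, scales))

def randomly_weighted_sum(errors, rng=random):
    # (3) fresh random weights at every evaluation, normalized to sum to 1
    raw = [rng.random() for _ in errors]
    total = sum(raw)
    return sum((w / total) * e for w, e in zip(raw, errors))

def equally_weighted_product(errors):
    # (4) geometric-mean-style product of the objectives
    prod = 1.0
    for e in errors:
        prod *= e
    return prod ** (1.0 / len(errors))
```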
Demarini, S; Donnelly, M M
1994-01-01
Body fat (BF) is rarely determined routinely in infants due to the lack of a simple measuring device. A portable near-infrared (NIR) instrument, successfully applied in adults, takes 5 seconds for a measurement and involves no skin manipulation. We designed this study (1) to compare BF estimates by NIR to skinfold thickness (ST) and (2) to assess the relationship of NIR and ST values with standard measures reflecting BF, such as weight/length ratio, body mass index and ponderal index. We studied BF in 40 healthy term infants within 12 hours of birth by NIR and ST at 3 standard sites: triceps (TRI), subscapular (SUB) and abdominal (ABD). Results: significant correlations were found between NIR and ST (R = 0.70, 0.58 and 0.64 for SUB, TRI and ABD, respectively), between the sums of the 3 measurements (R = 0.69), between birthweight and ST (R = 0.57) or NIR (R = 0.51), and between weight/length ratio and ST (R = 0.55) or NIR (R = 0.51). We conclude that NIR measurements correlate well with skinfold measurements and that NIR can be measured faster than skinfolds (5 vs. 60 seconds). We speculate that NIR could be cost-effective as a routine clinical measure of body fat and growth in infants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, Alyssa J. R.; Ghale, Kushal; Rieg, Carolin
2017-01-26
In recent years, the popularity of density functional theory with periodic boundary conditions (DFT) has surged for the design and optimization of functional materials. However, no single DFT exchange–correlation functional currently available gives accurate adsorption energies on transition metals both when bonding to the surface is dominated by strong covalent or ionic bonding and when it has strong contributions from van der Waals interactions (i.e., dispersion forces). Here we present a new, simple method for accurately predicting adsorption energies on transition-metal surfaces based on DFT calculations, using an adaptively weighted sum of energies from the RPBE and optB86b-vdW (or optB88-vdW) density functionals. This method has been benchmarked against a set of 39 reliable experimental energies for adsorption reactions. Our results show that this method has a mean absolute error and root mean squared error relative to experiments of 13.4 and 19.3 kJ/mol, respectively, compared to 20.4 and 26.4 kJ/mol for the BEEF-vdW functional. For systems with large van der Waals contributions, this method decreases these errors to 11.6 and 17.5 kJ/mol. Furthermore, this method provides predictions of adsorption energies both for processes dominated by strong covalent or ionic bonding and for those dominated by dispersion forces that are more accurate than those of any current standard DFT functional alone.
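The adaptive weighting amounts to interpolating between the two functionals' energies according to how dispersion-dominated the system is; a minimal sketch (the weighting variable and how it would be estimated are illustrative, not the authors' exact recipe):

```python
def blended_adsorption_energy(e_rpbe, e_optb86b_vdw, vdw_fraction):
    # Adaptively weighted sum of the two functionals' adsorption energies;
    # vdw_fraction in [0, 1] estimates how dispersion-dominated the bonding is
    if not 0.0 <= vdw_fraction <= 1.0:
        raise ValueError("vdw_fraction must lie in [0, 1]")
    return (1.0 - vdw_fraction) * e_rpbe + vdw_fraction * e_optb86b_vdw
```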
Bjelica, Dusko; Idrizovic, Kemal; Popovic, Stevo; Sisic, Nedim; Sekulic, Damir; Ostojic, Ljerka; Spasic, Miodrag; Zenic, Natasa
2016-01-01
Substance use and misuse (SUM) in adolescence is a significant public health problem, and the extent to which adolescents exhibit SUM behaviors differs across ethnicities. This study aimed to explore the ethnicity-specific and gender-specific associations among sports factors, familial factors, and personal satisfaction with physical appearance (i.e., covariates) and SUM in a sample of adolescents from the Federation of Bosnia and Herzegovina. In this cross-sectional study the participants were 1742 adolescents (17–18 years of age) from Bosnia and Herzegovina who were in their last year of high school education (high school seniors). The sample comprised 772 Croatian (558 females) and 970 Bosniak (485 females) adolescents. Variables were collected using a previously developed and validated questionnaire that included questions on SUM (alcohol drinking, cigarette smoking, and consumption of other drugs), sports factors, parental education, socioeconomic status, and satisfaction with physical appearance and body weight. The consumption of cigarettes remains high (37% of adolescents smoke cigarettes), with a higher prevalence among Croatians. Harmful drinking is also alarming (evidenced in 28.4% of adolescents). The consumption of illicit drugs remains low (5.7% of adolescents consume drugs), with a higher prevalence among Bosniaks. A higher likelihood of engaging in SUM is found among children who quit sports (for smoking and drinking), boys who perceive themselves to be good looking (for smoking), and girls who are not satisfied with their body weight (for smoking). Higher maternal education is systematically found to be associated with greater SUM in Bosniak girls. Information on the associations presented herein could be discreetly disseminated as a part of regular school administrative functions. The results warrant future prospective studies that more precisely identify the causality among certain variables. PMID:27690078
The vibration discomfort of standing people: evaluation of multi-axis vibration.
Thuong, Olivier; Griffin, Michael J
2015-01-01
Few studies have investigated discomfort caused by multi-axis vibration and none has explored methods of predicting the discomfort of standing people from simultaneous fore-and-aft, lateral and vertical vibration of a floor. Using the method of magnitude estimation, 16 subjects estimated their discomfort caused by dual-axis and tri-axial motions (octave-bands centred on either 1 or 4 Hz with various magnitudes in the fore-and-aft, lateral and vertical directions) and the discomfort caused by single-axis motions. The method of predicting discomfort assumed in current standards (square-root of the sums of squares of the three components weighted according to their individual contributions to discomfort) provided reasonable predictions of the discomfort caused by multi-axis vibration. Improved predictions can be obtained for specific stimuli, but no single simple method will provide accurate predictions for all stimuli because the rate of growth of discomfort with increasing magnitude of vibration depends on the frequency and direction of vibration.
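The standards-based prediction described above (the square root of the sum of squares of the frequency-weighted directional components) is straightforward to state in code; this is a minimal sketch with illustrative numbers, not the study's stimuli:

```python
import math

def total_vibration_value(awx, awy, awz):
    """Combine frequency-weighted r.m.s. accelerations in the fore-and-aft,
    lateral and vertical directions by the square-root-of-sum-of-squares
    rule assumed in current whole-body vibration standards."""
    return math.sqrt(awx ** 2 + awy ** 2 + awz ** 2)

# Illustrative component magnitudes (m/s^2):
overall = total_vibration_value(0.3, 0.4, 1.2)  # ~1.3 m/s^2
```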
Object attributes combine additively in visual search
Pramod, R. T.; Arun, S. P.
2016-01-01
We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes. PMID:26967014
NASA Astrophysics Data System (ADS)
Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia
2018-04-01
In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
Bruno, Oscar P.; Turc, Catalin; Venakides, Stephanos
2016-01-01
This work, part I in a two-part series, presents: (i) a simple and highly efficient algorithm for evaluation of quasi-periodic Green functions, as well as (ii) an associated boundary-integral equation method for the numerical solution of problems of scattering of waves by doubly periodic arrays of scatterers in three-dimensional space. Except for certain ‘Wood frequencies’ at which the quasi-periodic Green function ceases to exist, the proposed approach, which is based on smooth windowing functions, gives rise to tapered lattice sums which converge superalgebraically fast to the Green function—that is, faster than any power of the number of terms used. This is in sharp contrast to the extremely slow convergence exhibited by the lattice sums in the absence of smooth windowing. (The Wood-frequency problem is treated in part II.) This paper establishes rigorously the superalgebraic convergence of the windowed lattice sums. A variety of numerical results demonstrate the practical efficiency of the proposed approach. PMID:27493573
NASA Astrophysics Data System (ADS)
Zhao, Yumin
1997-07-01
By the techniques of the Wick theorem for coupled clusters, non-energy-weighted electromagnetic sum-rule calculations are presented in the sdg neutron-proton interacting boson model, the nuclear pair shell model, and the fermion-dynamical symmetry model. This project was supported by the Development Project Foundation of China, the National Natural Science Foundation of China, the Doctoral Education Fund of the National Education Committee, and the Fundamental Research Fund of Southeast University.
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroids and radii of each metastasis were determined from the treatment planning system. In-house software used these data, along with weighted and unweighted versions of the k-means clustering algorithm, to group the targets to be treated with a single isocenter and to position each isocenter. The algorithm results were evaluated using the within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (−0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric.
While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
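A minimal, unweighted version of the grouping step can be sketched as follows: plain k-means on target centroids, reporting the within-cluster sum of squares used above as an evaluation metric. This is an illustrative sketch, not the authors' in-house software:

```python
import numpy as np

def kmeans_isocenters(points, k, iters=50, seed=0):
    """Group target centroids into k clusters (one isocenter per cluster)
    with plain, unweighted k-means; returns the isocenter positions
    (cluster means), the cluster labels, and the within-cluster sum of
    squares."""
    points = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each target to its nearest isocenter.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Move each isocenter to the mean of its assigned targets
        # (keep the old center if a cluster happens to be empty).
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    wcss = float(((points - centers[labels]) ** 2).sum())
    return centers, labels, wcss
```

The weighted variant would scale each target's contribution (e.g., by volume) in both the assignment and the centroid update.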
Manipulations of Cartesian Graphs: A First Introduction to Analysis.
ERIC Educational Resources Information Center
Lowenthal, Francis; Vandeputte, Christiane
1989-01-01
Presents an introductory module for analysis. Describes a stock of basic functions and their graphs as part one and three methods as part two: transformations of simple graphs, the sum of stock functions, and upper and lower bounds. (YP)
Complete convergence of randomly weighted END sequences and its application.
Li, Penghua; Li, Xiaoqin; Wu, Kehan
2017-01-01
We investigate the complete convergence of partial sums of randomly weighted extended negatively dependent (END) random variables. Some results of complete moment convergence, complete convergence and the strong law of large numbers for this dependent structure are obtained. As an application, we study the convergence of the state observers of linear-time-invariant systems. Our results extend the corresponding earlier ones.
Analog hardware for delta-backpropagation neural networks
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P. (Inventor)
1992-01-01
This is a fully parallel analog backpropagation learning processor which comprises a plurality of programmable resistive memory elements serving as synapse connections whose values can be weighted during learning with buffer amplifiers, summing circuits, and sample-and-hold circuits arranged in a plurality of neuron layers in accordance with delta-backpropagation algorithms modified so as to control weight changes due to circuit drift.
Least-Squares Analysis of Data with Uncertainty in "y" and "x": Algorithms in Excel and KaleidaGraph
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2018-01-01
For the least-squares analysis of data having multiple uncertain variables, the generally accepted best solution comes from minimizing the sum of weighted squared residuals over all uncertain variables, with, for example, weights in x_i taken as inversely proportional to the variance δ_xi². A complication…
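One standard way to carry out such a fit is the "effective variance" iteration, which folds the x uncertainty into the weights through the slope. This is a hedged sketch of that general idea (the abstract is truncated, so this is not necessarily the article's exact algorithm), for a straight-line model y = a + b*x:

```python
import numpy as np

def effective_variance_fit(x, y, sx, sy, iters=10):
    """Fit y = a + b*x when both coordinates carry uncertainty, by
    iterating weighted least squares with 'effective' weights
    w_i = 1 / (sy_i**2 + b**2 * sx_i**2), so the x uncertainty is
    propagated into the residual variance via the slope."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sx, sy = np.asarray(sx, float), np.asarray(sy, float)
    b = 0.0
    for _ in range(iters):
        w = 1.0 / (sy ** 2 + (b * sx) ** 2)
        xbar = (w * x).sum() / w.sum()
        ybar = (w * y).sum() / w.sum()
        b = (w * (x - xbar) * (y - ybar)).sum() / (w * (x - xbar) ** 2).sum()
    a = ybar - b * xbar
    return a, b
```

Because the weights depend on the slope, a few fixed-point iterations usually suffice for convergence.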
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Karniel, Amir; Avraham, Guy; Peles, Bat-Chen; Levy-Tzedek, Shelly; Nisky, Ilana
2010-01-01
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test, for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG. (i) By calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) By comparing two weighted sums of human and model handshakes we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) By comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake. PMID:21206462
Fusion of classifiers for REIS-based detection of suspicious breast lesions
NASA Astrophysics Data System (ADS)
Lederman, Dror; Wang, Xingwei; Zheng, Bin; Sumkin, Jules H.; Tublin, Mitchell; Gur, David
2011-03-01
After developing a multi-probe resonance-frequency electrical impedance spectroscopy (REIS) system aimed at detecting women with breast abnormalities that may indicate a developing breast cancer, we have been conducting a prospective clinical study to explore the feasibility of applying this REIS system to classify younger women (< 50 years old) into two groups of "higher-than-average risk" and "average risk" of having or developing breast cancer. The system comprises one central probe placed in contact with the nipple, and six additional probes uniformly distributed along an outside circle to be placed in contact with six points on the outer breast skin surface. In this preliminary study, we selected an initial set of 174 examinations of participants who had completed REIS examinations and had clinical status verification. Among these, 66 examinations were recommended for biopsy due to findings of a highly suspicious breast lesion ("positives"), and 108 were determined to be negative during imaging-based procedures ("negatives"). A set of REIS-based features, extracted using a mirror-matched approach, was computed and fed into five machine learning classifiers. A genetic algorithm was used to select an optimal subset of features for each of the five classifiers. Three fusion rules, namely the sum rule, the weighted sum rule, and the weighted median rule, were used to combine the results of the classifiers. Performance evaluation was performed using a leave-one-case-out cross-validation method. The results indicated that REIS may provide a new technology to identify younger women with higher-than-average risk of having or developing breast cancer. Furthermore, it was shown that fusion rules, such as the weighted median and weighted sum rules, may improve performance as compared with the highest-performing single classifier.
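The weighted-sum and weighted-median fusion rules named above can be sketched generically; the scores and weights here are placeholders, not the study's classifier outputs:

```python
import numpy as np

def weighted_sum_fusion(scores, weights):
    """Weighted-sum rule: normalized weighted average of the classifiers'
    suspicion scores."""
    s, w = np.asarray(scores, float), np.asarray(weights, float)
    return float((w * s).sum() / w.sum())

def weighted_median_fusion(scores, weights):
    """Weighted-median rule: smallest score at which the cumulative
    weight reaches half of the total weight."""
    order = np.argsort(scores)
    s = np.asarray(scores, float)[order]
    w = np.asarray(weights, float)[order]
    return float(s[np.searchsorted(np.cumsum(w), 0.5 * w.sum())])
```

With equal weights, the weighted sum reduces to the plain sum rule (up to normalization) and the weighted median to the ordinary median.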
NASA Astrophysics Data System (ADS)
Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.
2016-05-01
Three-loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis, also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
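Of the special sums mentioned, the simplest (nested harmonic sums at positive indices) can be evaluated exactly with rational arithmetic; a small illustrative sketch:

```python
from fractions import Fraction

def harmonic_sum(k, N):
    """Single harmonic sum S_k(N) = sum_{i=1}^{N} 1/i**k, exact."""
    return sum(Fraction(1, i ** k) for i in range(1, N + 1))

def nested_harmonic_sum(a, b, N):
    """Depth-2 nested harmonic sum S_{a,b}(N) = sum_{i=1}^{N} S_b(i) / i**a."""
    return sum(harmonic_sum(b, i) / i ** a for i in range(1, N + 1))
```

The recursion in N visible here (each value extends the previous one by a single term) is what underlies the analytic continuation to complex N mentioned in the abstract.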
Partitioning the Metabolic Cost of Human Running: A Task-by-Task Approach
Arellano, Christopher J.; Kram, Rodger
2014-01-01
Compared with other species, humans can be very tractable and thus an ideal “model system” for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has been historically modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing, and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the “cost of generating force” hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be “individually” partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. 
In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running. PMID:24838747
Partitioning the metabolic cost of human running: a task-by-task approach.
Arellano, Christopher J; Kram, Rodger
2014-12-01
Compared with other species, humans can be very tractable and thus an ideal "model system" for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has been historically modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing, and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the "cost of generating force" hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be "individually" partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. 
In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running. © The Author 2014. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Nodal weighting factor method for ex-core fast neutron fluence evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, R. T.
The nodal weighting factor method is developed for evaluating ex-core fast neutron flux in a nuclear reactor by utilizing adjoint neutron flux, a fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV, the unit fission source, and relative assembly nodal powers. The method determines each nodal weighting factor for ex-core fast neutron flux evaluation by solving the steady-state adjoint neutron transport equation with a fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV as the adjoint source, by integrating the unit fission source with a typical fission spectrum to the solved adjoint flux over all energies, all angles, and the given nodal volume, and by dividing it by the sum of all nodal weighting factors, which is a normalization factor. Then, the fast neutron flux can be obtained by summing the various relative nodal powers times the corresponding nodal weighting factors of the adjacent significantly contributing peripheral assembly nodes, times a proper fast neutron attenuation coefficient, over an operating period. A generic set of nodal weighting factors can be used to evaluate neutron fluence at the same location for similar core designs and fuel cycles, but the set of nodal weighting factors needs to be re-calibrated for a transition fuel cycle. This newly developed nodal weighting factor method should be a useful and simplified tool for evaluating fast neutron fluence at selected locations of interest in ex-core components of contemporary nuclear power reactors. (authors)
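Once the weighting factors are computed, the evaluation step reduces to a weighted sum; a minimal sketch with hypothetical nodal values (the function name and inputs are ours, for illustration):

```python
def ex_core_fast_flux(nodal_powers, weighting_factors, attenuation=1.0):
    """Ex-core fast-flux estimate: sum of relative nodal powers times the
    corresponding pre-computed, normalized nodal weighting factors, times
    a fast-neutron attenuation coefficient."""
    if len(nodal_powers) != len(weighting_factors):
        raise ValueError("need one weighting factor per contributing node")
    return attenuation * sum(p * w for p, w in zip(nodal_powers, weighting_factors))
```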
Complex networks in the Euclidean space of communicability distances
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiency of spatial uses. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
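Since the communicability matrix is the exponential of the adjacency matrix (a factorial-weighted sum over walks), the communicability distance between two nodes follows directly from three of its entries; a short sketch for undirected graphs:

```python
import numpy as np

def communicability_distance(A, p, q):
    """Communicability distance xi_pq for an undirected network: with
    G = exp(A), the matrix exponential of the adjacency matrix,
    xi_pq**2 = G_pp + G_qq - 2*G_pq, i.e. the weighted sum of walks
    returning to each node minus the walks between the two nodes."""
    A = np.asarray(A, float)
    vals, vecs = np.linalg.eigh(A)        # A is symmetric for undirected graphs
    G = (vecs * np.exp(vals)) @ vecs.T    # G = V exp(Lambda) V^T
    return float(np.sqrt(G[p, p] + G[q, q] - 2.0 * G[p, q]))
```

For the two-node graph with a single edge, G has cosh(1) on the diagonal and sinh(1) off it, so xi² = 2/e.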
An assessment of some non-gray global radiation models in enclosures
NASA Astrophysics Data System (ADS)
Meulemans, J.
2016-01-01
The accuracy of several non-gray global gas/soot radiation models, namely the Wide-Band Correlated-K (WBCK) model, the Spectral Line Weighted-sum-of-gray-gases model with one optimized gray gas (SLW-1), the (non-gray) Weighted-Sum-of-Gray-Gases (WSGG) model with different sets of coefficients (Smith et al., Soufiani and Djavdan, Taylor and Foster) was assessed on several test cases from the literature. Non-isothermal (or isothermal) participating media containing non-homogeneous (or homogeneous) mixtures of water vapor, carbon dioxide and soot in one-dimensional planar enclosures and multi-dimensional rectangular enclosures were investigated. For all the considered test cases, a benchmark solution (LBL or SNB) was used in order to compute the relative error of each model on the predicted radiative source term and the wall net radiative heat flux.
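For reference, the non-gray WSGG model expresses total emissivity as a weighted sum over a few gray gases plus a transparent one. The coefficients in this sketch are illustrative placeholders, not one of the published sets (Smith et al., Soufiani and Djavdan, Taylor and Foster):

```python
import math

def wsgg_total_emissivity(weights, kappas, pL):
    """Weighted-Sum-of-Gray-Gases total emissivity:
    eps = sum_i a_i * (1 - exp(-k_i * pL)) over the gray gases, where pL
    is the pressure path length.  The leftover weight 1 - sum(a_i)
    belongs to the transparent 'clear' gas, which emits nothing."""
    if sum(weights) > 1.0 + 1e-12:
        raise ValueError("gray-gas weights must sum to at most 1")
    return sum(a * (1.0 - math.exp(-k * pL)) for a, k in zip(weights, kappas))
```

In practice the weights a_i are themselves polynomial functions of temperature, which is where the published coefficient sets differ.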
An evaluation of ozone exposure metrics for a seasonally drought-stressed ponderosa pine ecosystem.
Panek, Jeanne A; Kurpius, Meredith R; Goldstein, Allen H
2002-01-01
Ozone stress has become an increasingly significant factor in cases of forest decline reported throughout the world. Current metrics to estimate ozone exposure for forest trees are derived from atmospheric concentrations and assume that the forest is physiologically active at all times of the growing season. This may be inaccurate in regions with a Mediterranean climate, such as California and the Pacific Northwest, where peak physiological activity occurs early in the season to take advantage of high soil moisture and does not correspond to peak ozone concentrations. It may also misrepresent ecosystems experiencing non-average climate conditions, such as drought years. We compared direct measurements of ozone flux into a ponderosa pine canopy with a suite of the most common ozone exposure metrics to determine which best correlated with actual ozone uptake by the forest. Of the metrics we assessed, SUM0 (the sum of all daytime ozone concentrations > 0) best corresponded to ozone uptake by ponderosa pine; however, the correlation was strong only at times when the stomata were unconstrained by site moisture conditions. In the early growing season (May and June), SUM0 was an adequate metric for forest ozone exposure. Later in the season, when stomatal conductance was limited by drought, SUM0 overestimated ozone uptake. A better metric for seasonally drought-stressed forests would be one that incorporates forest physiological activity, either through mechanistic modeling, by weighting ozone concentrations by stomatal conductance, or by weighting concentrations by site moisture conditions.
Combining geodiversity with climate and topography to account for threatened species richness.
Tukiainen, Helena; Bailey, Joseph J; Field, Richard; Kangas, Katja; Hjort, Jan
2017-04-01
Understanding threatened species diversity is important for long-term conservation planning. Geodiversity (the diversity of Earth surface materials, forms, and processes) may be a useful biodiversity surrogate for conservation and may have conservation value itself. Geodiversity and species richness relationships have been demonstrated; establishing whether geodiversity relates to threatened species' diversity and distribution patterns is a logical next step for conservation. We used 4 geodiversity variables (rock-type and soil-type richness, geomorphological diversity, and hydrological feature diversity) and 4 climatic and topographic variables to model threatened species diversity across 31 of Finland's national parks. We also analyzed rarity-weighted richness (a measure of site complementarity) of threatened vascular plants, fungi, bryophytes, and all species combined. Our 1-km² resolution data set included 271 threatened species from 16 major taxa. We modeled threatened species richness (raw and rarity weighted) with boosted regression trees. Climatic variables, especially the annual temperature sum above 5 °C, dominated our models, which is consistent with the critical role of temperature in this boreal environment. Geodiversity added significant explanatory power. High geodiversity values were consistently associated with high threatened species richness across taxa. The combined effect of geodiversity variables was even more pronounced in the rarity-weighted richness analyses (except for fungi) than in those for species richness. Geodiversity measures correlated most strongly with species richness (raw and rarity weighted) of threatened vascular plants and bryophytes and were weakest for molluscs, lichens, and mammals. Although simple measures of topography improve biodiversity modeling, our results suggest that geodiversity data relating to geology, landforms, and hydrology are also worth including.
This reinforces recent arguments that conserving nature's stage is an important principle in conservation. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let S_n[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. We obtain sufficient conditions for the boundedness of S_n[f] and discuss the speed of convergence of S_n[f] in weighted L^p space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial L_n[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-x²/2) is the Hermite weight function, then we obtain sufficient conditions for the corresponding weighted inequalities to hold for k = 0, 1, 2, ..., r.
Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI
NASA Technical Reports Server (NTRS)
Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.
1985-01-01
The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
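The ring arithmetic described above can be illustrated with a toy transform. The sketch below is a naive number-theoretic transform modulo the Fermat prime F4 = 2¹⁶ + 1 = 65537 in Python, not the paper's VLSI design; that 3 is a primitive root mod 65537 is a standard fact, and the transform length 8 is an arbitrary example.

```python
# Naive number-theoretic transform (NTT) over the integers modulo the
# Fermat prime F4 = 2**16 + 1 = 65537 -- a DFT-like transform in which
# all arithmetic is exact modular integer arithmetic.

P = 2**16 + 1  # Fermat prime F4; 3 is a primitive root mod P

def ntt(x):
    """Length-n transform X[k] = sum_j x[j] * w^(j*k) mod P."""
    n = len(x)
    assert (P - 1) % n == 0
    w = pow(3, (P - 1) // n, P)  # primitive n-th root of unity mod P
    return [sum(x[j] * pow(w, j * k, P) for j in range(n)) % P
            for k in range(n)]

def intt(X):
    """Inverse transform: use w^-1 and divide by n (mod P)."""
    n = len(X)
    w_inv = pow(pow(3, (P - 1) // n, P), P - 2, P)
    n_inv = pow(n, P - 2, P)
    return [n_inv * sum(X[k] * pow(w_inv, j * k, P) for k in range(n)) % P
            for j in range(n)]
```

A round trip `intt(ntt(x))` recovers the input exactly, with no floating-point error; this exactness is what makes residue/Fermat-number arithmetic attractive for DFT hardware.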
NASA Astrophysics Data System (ADS)
Saito, Norihito; Akagawa, Kazuyuki; Ito, Mayumi; Takazawa, Akira; Hayano, Yutaka; Saito, Yoshihiko; Ito, Meguru; Takami, Hideki; Iye, Masanori; Wada, Satoshi
2007-07-01
We report on a sodium D2 resonance coherent light source achieved in single-pass sum-frequency generation in periodically poled MgO-doped stoichiometric lithium tantalate with actively mode-locked Nd:YAG lasers. Mode-locked pulses at 1064 and 1319 nm are synchronized with a time resolution of 37 ps with the phase adjustment of the radio frequencies fed to acousto-optic mode lockers. An output power of 4.6 W at 589.1586 nm is obtained, and beam quality near the diffraction limit is also achieved in a simple design.
Schrempft, Stephanie; van Jaarsveld, Cornelia H M; Fisher, Abigail; Wardle, Jane
2015-01-01
The home environment is thought to play a key role in early weight trajectories, although direct evidence is limited. There is general agreement that multiple factors exert small individual effects on weight-related outcomes, so use of composite measures could demonstrate stronger effects. This study therefore examined whether composite measures reflecting the 'obesogenic' home environment are associated with diet, physical activity, TV viewing, and BMI in preschool children. Families from the Gemini cohort (n = 1096) completed a telephone interview (Home Environment Interview; HEI) when their children were 4 years old. Diet, physical activity, and TV viewing were reported at interview. Child height and weight measurements were taken by the parents (using standard scales and height charts) and reported at interview. Responses to the HEI were standardized and summed to create four composite scores representing the food (sum of 21 variables), activity (sum of 6 variables), media (sum of 5 variables), and overall (food composite/21 + activity composite/6 + media composite/5) home environments. These were categorized into 'obesogenic risk' tertiles. Children in 'higher-risk' food environments consumed less fruit (OR; 95% CI = 0.39; 0.27-0.57) and vegetables (0.47; 0.34-0.64), and more energy-dense snacks (3.48; 2.16-5.62) and sweetened drinks (3.49; 2.10-5.81) than children in 'lower-risk' food environments. Children in 'higher-risk' activity environments were less physically active (0.43; 0.32-0.59) than children in 'lower-risk' activity environments. Children in 'higher-risk' media environments watched more TV (3.51; 2.48-4.96) than children in 'lower-risk' media environments. Neither the individual nor the overall composite measures were associated with BMI. Composite measures of the obesogenic home environment were associated as expected with diet, physical activity, and TV viewing. Associations with BMI were not apparent at this age.
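The composite construction (standardize each interview item, sum within a domain, then scale the domain sums into the overall score) can be sketched as follows; the toy data are illustrative, not the Gemini data, and only the 21/6/5 item counts come from the abstract.

```python
import statistics

def zscores(values):
    """Standardize one interview item across families."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def composite(items):
    """Domain composite: sum of standardized items, per family.
    `items` is a list of per-family value lists (one list per item)."""
    z = [zscores(item) for item in items]
    n_families = len(items[0])
    return [sum(zi[f] for zi in z) for f in range(n_families)]

def overall(food, activity, media):
    """Overall score = food/21 + activity/6 + media/5, as in the abstract."""
    return [fo / 21 + a / 6 + m / 5 for fo, a, m in zip(food, activity, media)]
```

Scaling each domain sum by its item count before adding keeps the three domains on comparable footing in the overall score.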
Optical Oversampled Analog-to-Digital Conversion
1992-06-29
hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact...optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions
Optimal trajectories for hypersonic launch vehicles
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas
1994-01-01
In this paper, we derive a near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. Because liquid hydrogen fueled hypersonic aircraft are volume sensitive, as well as weight sensitive, the cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize gross take-off weight for a given payload mass and volume in orbit.
Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection
NASA Astrophysics Data System (ADS)
Harwati
2017-06-01
Supplier selection is a decision with many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Models with so many criteria are often difficult to apply in practice. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytic Hierarchy Process (AHP) is used to weight the criteria. The analysis yields four criteria that are easy and simple to use for supplier selection: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2), and service (weight 0.1). A real-case simulation shows that the simple model yields the same decision as a more complex model.
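The resulting four-criterion model reduces to a plain weighted sum. A minimal sketch (supplier names and criterion scores below are hypothetical; only the weights come from the abstract):

```python
# Weighted-sum supplier scoring with the four AHP-derived weights
# reported in the study. Criterion scores are assumed to be
# pre-normalized to [0, 1]; higher is better.

WEIGHTS = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

def score(supplier):
    """Overall score = sum over criteria of weight * criterion score."""
    return sum(WEIGHTS[c] * supplier[c] for c in WEIGHTS)

def best(suppliers):
    """Pick the supplier with the highest weighted-sum score."""
    return max(suppliers, key=lambda name: score(suppliers[name]))
```

With only four criteria the whole model fits in a spreadsheet row, which is exactly the simplicity the paper argues for.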
Meta-analyses of workplace physical activity and dietary behaviour interventions on weight outcomes.
Verweij, L M; Coffeng, J; van Mechelen, W; Proper, K I
2011-06-01
This meta-analytic review critically examines the effectiveness of workplace interventions targeting physical activity, dietary behaviour or both on weight outcomes. Data could be extracted from 22 studies published between 1980 and November 2009 for meta-analyses. The GRADE approach was used to determine the level of evidence for each pooled outcome measure. Results show moderate quality of evidence that workplace physical activity and dietary behaviour interventions significantly reduce body weight (nine studies; mean difference [MD]-1.19 kg [95% CI -1.64 to -0.74]), body mass index (BMI) (11 studies; MD -0.34 kg m⁻² [95% CI -0.46 to -0.22]) and body fat percentage calculated from sum of skin-folds (three studies; MD -1.12% [95% CI -1.86 to -0.38]). There is low quality of evidence that workplace physical activity interventions significantly reduce body weight and BMI. Effects on percentage body fat calculated from bioelectrical impedance or hydrostatic weighing, waist circumference, sum of skin-folds and waist-hip ratio could not be investigated properly because of a lack of studies. Subgroup analyses showed a greater reduction in body weight of physical activity and diet interventions containing an environmental component. As the clinical relevance of the pooled effects may be substantial on a population level, we recommend workplace physical activity and dietary behaviour interventions, including an environment component, in order to prevent weight gain. © 2010 The Authors. obesity reviews © 2010 International Association for the Study of Obesity.
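Pooled mean differences of this kind are typically computed by inverse-variance weighting; the abstract does not state the exact pooling model, so the fixed-effect version below is an assumption, and the study values in the test are made up.

```python
def pooled_md(mds, ses):
    """Fixed-effect inverse-variance pooling of mean differences.
    mds: per-study mean differences; ses: their standard errors."""
    w = [1.0 / se ** 2 for se in ses]            # inverse-variance weights
    md = sum(wi * m for wi, m in zip(w, mds)) / sum(w)
    se = (1.0 / sum(w)) ** 0.5                   # SE of the pooled estimate
    return md, (md - 1.96 * se, md + 1.96 * se)  # estimate and 95% CI
```

Each study's weight is the reciprocal of its variance, so precise studies dominate the pooled estimate.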
Postoperative air leak grading is useful to predict prolonged air leak after pulmonary lobectomy.
Oh, Sang Gi; Jung, Yochun; Jheon, Sanghoon; Choi, Yunhee; Yun, Ju Sik; Na, Kook Joo; Ahn, Byoung Hee
2017-01-23
Results of studies to predict prolonged air leak (PAL; air leak longer than 5 days) after pulmonary lobectomy have been inconsistent and are of limited use. We developed a new scale representing the amount of early postoperative air leak and determined its correlation with air leak duration and its potential as a predictor of PAL. We grade postoperative air leak using a 5-grade scale. All 779 lobectomies from January 2005 to December 2009 with available medical records were reviewed retrospectively. We devised six 'SUM' variables using air leak grades in the initial 72 h postoperatively. Excluding unrecorded cases and postoperative broncho-pleural fistulas, there were 720 lobectomies. PAL occurred in 135 cases (18.8%). Correlation analyses showed each SUM variable highly correlated with air leak duration, and SUM4to9, the sum of the six consecutive air leak grades recorded every 8 h on postoperative days 2 and 3, proved to be the most powerful predictor of PAL; PAL could be predicted with 75.7% positive and 77.7% negative predictive value when SUM4to9 ≥ 16. When 4 predictors derived from multivariable logistic regression of perioperative variables were combined with SUM4to9, there was no significant increase in predictability compared with SUM4to9 alone. This simple new method to predict PAL using SUM4to9 showed that the amount of early postoperative air leak is the most powerful predictor of PAL; therefore, grading air leak after pulmonary lobectomy is a useful method to predict PAL.
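The SUM4to9 predictor is just a windowed sum with a cut-off. A sketch (the exact numeric range of the 5-grade scale is an assumption; the six-record window on postoperative days 2-3 and the ≥ 16 threshold come from the abstract):

```python
def sum_4to9(grades):
    """grades: air-leak grades recorded every 8 h from surgery, in order.
    Records 4..9 (1-indexed) fall on postoperative days 2 and 3."""
    return sum(grades[3:9])

def predict_pal(grades, threshold=16):
    """Flag prolonged air leak when SUM4to9 >= threshold (study cut-off)."""
    return sum_4to9(grades) >= threshold
```

The appeal of the scheme is that it needs only routine bedside grading, no extra measurements.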
van der Woerd, Hendrik J.; Wernand, Marcel R.
2015-01-01
The colours from natural waters differ markedly over the globe, depending on the water composition and illumination conditions. The space-borne “ocean colour” instruments are operational instruments designed to retrieve important water-quality indicators, based on the measurement of water leaving radiance in a limited number (5 to 10) of narrow (≈10 nm) bands. Surprisingly, the analysis of the satellite data has not yet paid attention to colour as an integral optical property that can also be retrieved from multispectral satellite data. In this paper we re-introduce colour as a valuable parameter that can be expressed mainly by the hue angle (α). Based on a set of 500 synthetic spectra covering a broad range of natural waters a simple algorithm is developed to derive the hue angle from SeaWiFS, MODIS, MERIS and OLCI data. The algorithm consists of a weighted linear sum of the remote sensing reflectance in all visual bands plus a correction term for the specific band-setting of each instrument. The algorithm is validated by a set of 603 hyperspectral measurements from inland-, coastal- and near-ocean waters. We conclude that the hue angle is a simple objective parameter of natural waters that can be retrieved uniformly for all space-borne ocean colour instruments. PMID:26473859
Yang, Huayun; Zhou, Shanshan; Li, Weidong; Liu, Qi; Tu, Yunjie
2015-10-01
Sediment samples were analyzed to comprehensively characterize the concentrations, distribution, possible sources and potential biological risk of organochlorine pesticides in Qiandao Lake, China. Concentrations of ΣHCH and ΣDDT in sediments ranged from 0.03 to 5.75 ng/g dry weight and from not detected to 14.39 ng/g dry weight, respectively. The predominance of β-HCH and the α-HCH/γ-HCH ratios indicated that the residues of HCHs were derived not only from historical technical HCH use but also from additional usage of lindane. Ratios of o,p'-DDT/p,p'-DDT and DDD/DDE suggested that both dicofol-type DDT and technical DDT applications may be present in most study areas. Additionally, based on two sediment quality guidelines, γ-HCH, o,p'-DDT and p,p'-DDT could be the main organochlorine pesticide species of ecotoxicological concern in Qiandao Lake.
NASA Astrophysics Data System (ADS)
Jiang, G.; Wong, C. Y.; Lin, S. C. F.; Rahman, M. A.; Ren, T. R.; Kwok, Ngaiming; Shi, Haiyan; Yu, Ying-Hao; Wu, Tonghai
2015-04-01
The enhancement of image contrast and the preservation of image brightness are two important but conflicting objectives in image restoration. Previous attempts based on linear histogram equalization achieved contrast enhancement, but exact preservation of brightness was not accomplished. A new perspective is taken here to provide balanced performance of contrast enhancement and brightness preservation simultaneously by casting the search for such a solution as an optimization problem. Specifically, the non-linear gamma correction method is adopted to enhance the contrast, while a weighted sum approach is employed for brightness preservation. In addition, the efficient golden-section search algorithm is exploited to determine the optimal parameters required to produce the enhanced images. Experiments are conducted on natural colour images captured under various indoor, outdoor and illumination conditions. Results show that the proposed method outperforms currently available methods in contrast enhancement and brightness preservation.
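The golden-section step of the method can be sketched in isolation. Below, the objective is a stand-in quadratic, not the paper's contrast/brightness cost; only the search routine itself is the point.

```python
import math

INVPHI = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618, the golden ratio's inverse

def golden_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search.
    Each iteration shrinks the bracket by the factor 1/phi."""
    c = b - INVPHI * (b - a)
    d = a + INVPHI * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                  # minimum lies in [a, d]
            c = b - INVPHI * (b - a)
        else:
            a, c = c, d                  # minimum lies in [c, b]
            d = a + INVPHI * (b - a)
    return (a + b) / 2
```

In the paper's setting, `f` would be a weighted sum penalizing brightness drift and rewarding contrast, evaluated at a candidate gamma.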
Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai
2017-03-01
This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to do policy evaluation and policy improvement in the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. The gradient descent algorithm makes the update of critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.
Probabilistic model of bridge vehicle loads in port area based on in-situ load testing
NASA Astrophysics Data System (ADS)
Deng, Ming; Wang, Lei; Zhang, Jianren; Wang, Rei; Yan, Yanhong
2017-11-01
Vehicle load is an important factor affecting the safety and usability of bridges. A statistical analysis is carried out in this paper to investigate the vehicle load data of the Tianjin Haibin highway in the Tianjin port of China, collected by a Weigh-in-Motion (WIM) system. Following this, the effect of the vehicle load on the test bridge is calculated and then compared with the result calculated according to HL-93 (AASHTO LRFD). Results show that the overall vehicle load follows a weighted sum (mixture) of four normal distributions. The maximum vehicle load during the design reference period follows a type I extreme value distribution. The vehicle load effect also follows a weighted sum of four normal distributions, and the standard value of the vehicle load is recommended as 1.8 times the value calculated according to HL-93.
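A weighted sum of four normal distributions is a finite Gaussian mixture; its density is evaluated as below. The parameters in the test are illustrative, not the fitted Tianjin values.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, mus, sigmas):
    """Weighted sum of normal component densities; weights should sum to 1."""
    return sum(w * normal_pdf(x, m, s)
               for w, m, s in zip(weights, mus, sigmas))
```

Fitting such a mixture to WIM records (e.g. by EM) gives the per-component weights and parameters; evaluating the mixture is then just this weighted sum.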
Quantum robots and environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-08-01
Quantum robots and their interactions with environments of quantum systems are described, and their study justified. A quantum robot is a mobile quantum system that includes an on-board quantum computer and needed ancillary systems. Quantum robots carry out tasks whose goals include specified changes in the state of the environment, or carrying out measurements on the environment. Each task is a sequence of alternating computation and action phases. Computation phase activities include determination of the action to be carried out in the next phase, and recording of information on neighborhood environmental system states. Action phase activities include motion of the quantum robot and changes in the neighborhood environment system states. Models of quantum robots and their interactions with environments are described using discrete space and time. A unitary step operator T that gives the single time step dynamics is associated with each task. T = T_a + T_c is a sum of action phase and computation phase step operators. Conditions that T_a and T_c should satisfy are given along with a description of the evolution as a sum over paths of completed phase input and output states. A simple example of a task (carrying out a measurement on a very simple environment) is analyzed in detail. A decision tree for the task is presented and discussed in terms of the sums over phase paths. It is seen that no definite times or durations are associated with the phase steps in the tree, and that the tree describes the successive phase steps in each path in the sum over phase paths. © 1998 The American Physical Society.
Zhao, Longshan; Wu, Faqi
2015-01-01
In this study, a simple travel time-based runoff model was proposed to simulate a runoff hydrograph on soil surfaces with different microtopographies. Three main parameters, i.e., rainfall intensity (I), mean flow velocity (vm) and ponding time of depression (tp), were inputted into this model. The soil surface was divided into numerous grid cells, and the flow length of each grid cell (li) was then calculated from a digital elevation model (DEM). The flow velocity in each grid cell (vi) was derived from the upstream flow accumulation area using vm. The total flow travel time through each grid cell to the surface outlet was the sum of the flow travel times along the flow path (i.e., the sum of li/vi) and tp. The runoff rate at the slope outlet for each respective travel time was estimated by finding the sum of the rain rate from all contributing cells for all time intervals. The results show positive agreement between the measured and predicted runoff hydrographs. PMID:26103635
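The travel-time bookkeeping in the model reduces to two sums; a minimal sketch (uniform, steady rain and unit cell areas are simplifying assumptions made here, not stated in the paper):

```python
def travel_time(path_lengths, path_velocities, t_p):
    """Total travel time = sum(l_i / v_i) along the flow path + ponding time."""
    return sum(l / v for l, v in zip(path_lengths, path_velocities)) + t_p

def hydrograph(cell_times, rain_rate, t_max, dt=1.0):
    """Runoff rate at the outlet: at each time step, sum the rain rate over
    all cells whose travel time has already elapsed (steady rain assumed)."""
    times, rates = [], []
    t = 0.0
    while t <= t_max:
        contributing = sum(1 for ct in cell_times if ct <= t)
        times.append(t)
        rates.append(contributing * rain_rate)
        t += dt
    return times, rates
```

As more distant cells "arrive" at the outlet, the simulated runoff rate rises toward steady state, reproducing the rising limb of the hydrograph.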
On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels
NASA Astrophysics Data System (ADS)
Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit
2012-12-01
This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA) for power allocation. Finally, we derive a necessary and sufficient condition for which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound of ergodic sum capacity is tight and the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
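The water-filling component can be sketched for a single user over parallel sub-channels; this is the textbook single-constraint version, not the per-BS constrained IWFA of the article.

```python
def water_filling(gains, total_power):
    """Classic water-filling: p_i = max(0, mu - 1/g_i), sum p_i = total_power.
    `gains` are effective channel gains of parallel sub-channels."""
    inv = sorted(1.0 / g for g in gains)        # inverse gains, ascending
    for k in range(len(inv), 0, -1):
        mu = (total_power + sum(inv[:k])) / k   # water level with k channels
        if mu > inv[k - 1]:                     # all k channels stay active
            break
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Strong channels sit below the water level and receive power; channels whose inverse gain exceeds the level are switched off, which is why the search starts from all channels active and drops the weakest.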
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1980-01-01
Simple procedures are presented for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is provided for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are provided for determining the two phases of life. The procedure involves two steps, each similar to the conventional application of the commonly used linear damage rule. When the sum of cycle ratios based on phase 1 lives reaches unity, phase 1 is presumed complete, and further loadings are summed as cycle ratios on phase 2 lives. When the phase 2 sum reaches unity, failure is presumed to occur. No other physical properties or material constants than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons of both methods are discussed.
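The two-phase bookkeeping described above can be sketched as follows; event names and lives are hypothetical, and carrying the excess cycles of the event that completes phase 1 over into phase 2 is one reasonable reading of the procedure.

```python
def double_linear_damage(loadings, phase1_lives, phase2_lives):
    """Double linear damage rule bookkeeping.

    loadings: sequence of (event, cycles) pairs applied in order.
    phase1_lives / phase2_lives: phase lives N1, N2 per event type.
    Returns (phase-1 ratio sum capped at 1.0, phase-2 ratio sum);
    failure is presumed when the phase-2 sum reaches unity."""
    phase1 = phase2 = 0.0
    for event, n in loadings:
        if phase1 < 1.0:
            phase1 += n / phase1_lives[event]
            if phase1 > 1.0:
                # convert the excess ratio back to cycles, charge to phase 2
                excess = (phase1 - 1.0) * phase1_lives[event]
                phase2 += excess / phase2_lives[event]
                phase1 = 1.0
        else:
            phase2 += n / phase2_lives[event]
    return phase1, phase2
```

Each phase is just a conventional linear (Miner) summation, which is why no extra material constants are needed.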
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1981-01-01
Simple procedures are given for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is given for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are given for determining the two phases of life. The procedure comprises two steps, each similar to the conventional application of the commonly used linear damage rule. Once the sum of cycle ratios based on Phase I lives reaches unity, Phase I is presumed complete, and further loadings are summed as cycle ratios based on Phase II lives. When the Phase II sum attains unity, failure is presumed to occur. It is noted that no physical properties or material constants other than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons are discussed for both methods.
1993-02-01
the relative cost effectiveness of Ada and C++ [10]. (An overview of the Air Force report is given in Appendix D.) Surprisingly, the study determined ...support; 5 = excellent support), followed by a total score, a weighted sum of the rankings based on weights determined by an expert panel: Category...International Conference Location: Britannia International Hotel, London Sponsor: Ada Language UK, Ltd. POC: Helen Byard, Administrator, Ada UK, P.O. 322, York
Super (a*, d*)-ℋ-antimagic total covering of second order of shackle graphs
NASA Astrophysics Data System (ADS)
Hesti Agustin, Ika; Dafik; Nisviasari, Rosanita; Prihandini, R. M.
2017-12-01
Let H be a simple and connected graph. A shackle of graph H, denoted by G = shack(H, v, n), is a graph G constructed from non-trivial graphs H_1, H_2, …, H_n such that, for every 1 ≤ s, t ≤ n, H_s and H_t have no common vertex when |s - t| ≥ 2 and, for every 1 ≤ i ≤ n - 1, H_i and H_{i+1} share exactly one common vertex v, called a connecting vertex; those n - 1 connecting vertices are all distinct. The graph G is said to be an (a*, d*)-H-antimagic total graph of second order if there exists a bijective function f : V(G) ∪ E(G) → {1, 2, …, |V(G)| + |E(G)|} such that, for all subgraphs isomorphic to H, the total H-weights W(H) = ∑_{v∈V(H)} f(v) + ∑_{e∈E(H)} f(e) form an arithmetic sequence of second order {a*, a* + d*, a* + 3d*, a* + 6d*, …, a* + ((n² - n)/2)d*}, where a* and d* are positive integers and n is the number of subgraphs isomorphic to H. An (a*, d*)-H-antimagic total labeling of second order f is called super if the smallest labels appear on the vertices. In this paper, we study a super (a*, d*)-H-antimagic total labeling of second order of G = shack(H, v, n) by using a partition technique of second order.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, termed EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is evaluated experimentally, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared with DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
Hill, Katalin; Pénzes, Csanád Botond; Schnöller, Donát; Horváti, Kata; Bosze, Szilvia; Hudecz, Ferenc; Keszthelyi, Tamás; Kiss, Eva
2010-10-07
Tensiometry, sum-frequency vibrational spectroscopy, and atomic force microscopy were employed to assess the cell penetration ability of a peptide conjugate of the antituberculotic agent isoniazide. Isoniazide was conjugated to peptide (91)SEFAYGSFVRTVSLPV(106), a functional T-cell epitope of the immunodominant 16 kDa protein of Mycobacterium tuberculosis. As a simple but versatile model of the cell membrane a phospholipid Langmuir monolayer at the liquid/air interface was used. Changes induced in the structure of the phospholipid monolayer by injection of the peptide conjugate into the subphase were followed by tensiometry and sum-frequency vibrational spectroscopy. The drug penetrated lipid films were transferred to a solid support by the Langmuir-Blodgett technique, and their structures were characterized by atomic force microscopy. Peptide conjugation was found to strongly enhance the cell penetration ability of isoniazide.
Optimal trajectories for hypersonic launch vehicles
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas
1992-01-01
In this paper, we derive a near-optimal guidance law for the ascent trajectory from Earth surface to Earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. The performance objective is a weighted sum of fuel mass and volume, with the weighting factor selected to give minimum gross take-off weight for a specific payload mass and volume.
A New Family of Solvable Pearson-Dirichlet Random Walks
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2011-07-01
An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma-distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths equal a fixed value. Simple closed-form expressions were obtained, in particular, for the distribution of the endpoint of such constrained walks for any d ≥ d_0 and any n ≥ 2 when q is either q = d/2 - 1 (d_0 = 3) or q = d - 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was recently shown to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, both with the same n and the same q = d, to obtain a closed-form expression for the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors that depend on both d and n, and of Bessel numbers independent of d.
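A constrained walk of this kind is easy to sample: draw n gamma(q) variates, normalize them to total length 1 (which yields the Dirichlet step lengths), and attach uniform random directions. The sketch below is a sampler only, not the paper's closed-form density.

```python
import math
import random

def pearson_dirichlet_endpoint(n, d, q, rng=random):
    """Endpoint of an n-step Pearson-Dirichlet walk in R^d:
    step lengths ~ Dirichlet(q, ..., q) (total length 1),
    step directions uniform on the unit sphere."""
    g = [rng.gammavariate(q, 1.0) for _ in range(n)]
    total = sum(g)
    lengths = [x / total for x in g]      # Dirichlet via normalized gammas
    pos = [0.0] * d
    for step in lengths:
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]   # isotropic direction
        norm = math.sqrt(sum(c * c for c in v))
        pos = [p + step * c / norm for p, c in zip(pos, v)]
    return pos
```

Because the step lengths sum to 1, every sampled endpoint lies in the closed unit ball, matching the support of the endpoint density.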
Total edge irregularity strength of (n,t)-kite graph
NASA Astrophysics Data System (ADS)
Winarsih, Tri; Indriati, Diari
2018-04-01
Let G(V, E) be a simple, connected, and undirected graph with vertex set V and edge set E. A total k-labeling is a map that carries the vertices and edges of a graph G into a set of positive integer labels {1, 2, …, k}. An edge irregular total k-labeling λ : V(G) ∪ E(G) → {1, 2, …, k} of a graph G is a labeling of the vertices and edges of G in such a way that for any two different edges e and f, the weights wt(e) and wt(f) are distinct. The weight wt(e) of an edge e = xy is the sum of the labels of vertices x and y and the label of the edge e. The total edge irregularity strength of G, tes(G), is defined as the minimum k for which a graph G has an edge irregular total k-labeling. An (n, t)-kite graph consists of a cycle of length n with a t-edge path (the tail) attached to one vertex of the cycle. In this paper, we investigate the total edge irregularity strength of the (n, t)-kite graph, with n > 3 and t > 1. We obtain tes((n, t)-kite) = ⌈(n + t + 2)/3⌉.
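Both the closed-form value and the defining property are easy to check on small cases. A sketch (the tiny path labeling in the test is made up for illustration; only the formula comes from the abstract):

```python
import math

def tes_kite(n, t):
    """Total edge irregularity strength of the (n, t)-kite graph,
    per the paper: ceil((n + t + 2) / 3)."""
    return math.ceil((n + t + 2) / 3)

def is_edge_irregular_total(vertex_labels, edge_labels, k):
    """Check the defining property: every label lies in 1..k and all edge
    weights wt(xy) = f(x) + f(y) + f(xy) are pairwise distinct."""
    labels = list(vertex_labels.values()) + list(edge_labels.values())
    if any(not 1 <= lab <= k for lab in labels):
        return False
    weights = [vertex_labels[x] + vertex_labels[y] + lab
               for (x, y), lab in edge_labels.items()]
    return len(weights) == len(set(weights))
```

A brute-force search with this checker over all labelings confirms tes values on small graphs, which is a handy sanity check for formulas of this kind.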
Robust Frequency Invariant Beamforming with Low Sidelobe for Speech Enhancement
NASA Astrophysics Data System (ADS)
Zhu, Yiting; Pan, Xiang
2018-01-01
Frequency invariant beamformers (FIBs) are widely used in speech enhancement and source localization. There are two traditional optimization methods for FIB design. The first is convex optimization, which is simple, but the frequency-invariant characteristic of the resulting beam pattern is poor over a frequency band of five octaves. The least squares (LS) approach using a spatial response variation (SRV) constraint is another optimization method. Although it provides a good frequency-invariant property, it is usually unsuitable for speech enhancement because it lacks a weight-norm constraint, which is related to the robustness of a beamformer. In this paper, a robust wideband beamforming method with a constant beamwidth is proposed. The frequency-invariant beam pattern is achieved by solving an optimization problem with the SRV constraint over the speech frequency band. By controlling the sidelobe level, the FIB can prevent distortion from interference arriving from undesired directions. The approach is implemented in the time domain by placing tapped delay lines (TDLs) with finite impulse response (FIR) filters at the output of each sensor, which is more convenient than the Frost processor. By invoking the weight-norm constraint, the robustness of the beamformer is further improved against random errors. Experimental results show that the proposed method has a constant beamwidth and almost the same white noise gain as the traditional delay-and-sum (DAS) beamformer.
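For reference, the delay-and-sum baseline that the proposed beamformer is compared against can be sketched with integer-sample delays (fractional delays and the TDL/FIR filtering are omitted for brevity):

```python
def delay_and_sum(signals, delays):
    """Delay-and-sum: shift each sensor channel by its (integer) sample
    delay so the look-direction wavefront aligns, then average.
    signals: list of equal-length channels; delays: samples per channel."""
    n = len(signals)
    out_len = len(signals[0])
    out = [0.0] * out_len
    for sig, d in zip(signals, delays):
        for i in range(out_len):
            j = i - d                      # sample that aligns at output i
            if 0 <= j < len(sig):
                out[i] += sig[j]
    return [v / n for v in out]
```

Signals arriving from the steered direction add coherently while others partially cancel, which is the behavior the FIB design must match in white noise gain.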
Simple estimate of critical volume
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1980-01-01
Method for estimating critical molar volume of materials is faster and simpler than previous procedures. Formula sums no more than 18 different contributions from components of chemical structure of material, and is as accurate (within 3 percent) as older more complicated models. Method should expedite many thermodynamic design calculations.
ERIC Educational Resources Information Center
Brilleslyper, Michael A.; Wolverton, Robert H.
2008-01-01
In this article we consider an example suitable for investigation in many mid and upper level undergraduate mathematics courses. Fourier series provide an excellent example of the differences between uniform and non-uniform convergence. We use Dirichlet's test to investigate the convergence of the Fourier series for a simple periodic saw tooth…
The costly benefits of opposing agricultural biotechnology.
Apel, Andrew
2010-11-30
Rigorous application of a simple definition of what constitutes opposition to agricultural biotechnology readily encompasses a wide array of key players in national and international systems of food production, distribution and governance. Even though the sum of political and financial benefits of opposing agricultural biotechnology appears vastly to outweigh the benefits which accrue to providers of agricultural biotechnology, technology providers actually benefit from this opposition. If these barriers to biotechnology were removed, subsistence farmers still would not represent a lucrative market for improved seed. The sum of all interests involved ensures that subsistence farmers are systematically denied access to agricultural biotechnology. Copyright © 2010 Elsevier B.V. All rights reserved.
Asymptotically Exact Heuristics for Prime Divisors of the Sequence {a^k+b^k}_{k=1}^infty
NASA Astrophysics Data System (ADS)
Moree, Pieter
2006-07-01
Let N_{a,b}(x) count the number of primes p ≤ x with p dividing a^k + b^k for some k ≥ 1. It is known that N_{a,b}(x) ~ c(a,b) x/log x for some rational number c(a,b) that depends in a rather intricate way on a and b. A simple heuristic formula for N_{a,b}(x) is proposed, and it is proved to be asymptotically exact, i.e., to have the same asymptotic behavior as N_{a,b}(x). Connections with Ramanujan sums and character sums are discussed.
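The counting function N_{a,b}(x) is easy to evaluate by brute force for small x, which is useful when checking a heuristic against exact counts. A sketch (for p coprime to ab the residues a^k + b^k mod p repeat with period dividing p - 1, so a finite scan suffices):

```python
def primes_upto(x):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def divides_some_term(p, a, b):
    """Does the prime p divide a^k + b^k for some k >= 1?"""
    if a % p == 0 and b % p == 0:
        return True           # p divides every term
    if a % p == 0 or b % p == 0:
        return False          # one summand vanishes mod p, the other never does
    ak = bk = 1
    for _ in range(p - 1):    # period of a^k + b^k mod p divides p - 1
        ak = ak * a % p
        bk = bk * b % p
        if (ak + bk) % p == 0:
            return True
    return False

def count_prime_divisors(x, a, b):
    """Brute-force N_{a,b}(x): primes p <= x dividing some a^k + b^k."""
    return sum(divides_some_term(p, a, b) for p in primes_upto(x))
```

For instance, with a = 2, b = 1 only p = 3 and p = 5 among the primes up to 10 divide some 2^k + 1.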
NASA Astrophysics Data System (ADS)
Djaman, Koffi; Irmak, Suat; Sall, Mamadou; Sow, Abdoulaye; Kabenge, Isa
2017-10-01
The objective of this study was to quantify differences between 24-h time step reference evapotranspiration (ETo) and the sum of hourly ETo computations with the standardized ASCE Penman-Monteith (ASCE-PM) model for semiarid dry conditions at Fanaye and Ndiaye (Senegal) and semiarid humid conditions at Sapu (The Gambia) and Kankan (Guinea). The results showed good agreement between the sum of hourly ETo and the daily time step ETo at all four locations. The daily time step overestimated ETo relative to the sum of hourly ETo by 1.3 to 8% over the whole study periods. However, the magnitude of the ETo values and the ratio between the two methods depend on location and month. The sum of hourly ETo tended to be higher during winter at Fanaye and Sapu, while the daily ETo was higher from March to November at the same weather stations. At Ndiaye and Kankan, the daily time step estimates of ETo were higher throughout the year. The simple linear regression slopes between the sum of 24-h ETo and the daily time step ETo at all weather stations varied from 1.02 to 1.08 with high coefficients of determination (R² ≥ 0.87). Application of the hourly ETo estimation method may help achieve accurate ETo estimates to meet irrigation requirements under precision agriculture.
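The reported slopes come from a simple linear regression between the two ETo estimates. A sketch of a no-intercept fit on hypothetical paired values (the numbers below are illustrative, not the study's data):

```python
def slope_through_origin(x, y):
    """Least-squares slope of y = m * x (regression forced through the origin)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Hypothetical paired daily estimates (mm/day), for illustration only:
hourly_sum = [4.8, 5.7, 3.9]   # sum of 24 hourly ASCE-PM values
daily = [5.0, 6.0, 4.0]        # single 24-h time step value

m = slope_through_origin(hourly_sum, daily)  # slope slightly above 1
```

A slope above 1, as in the study's 1.02 to 1.08 range, indicates the daily time step overestimates relative to the sum of hourly values.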
Analog Delta-Back-Propagation Neural-Network Circuitry
NASA Technical Reports Server (NTRS)
Eberhart, Silvio
1990-01-01
Changes in synapse weights due to circuit drifts suppressed. Proposed fully parallel analog version of electronic neural-network processor based on delta-back-propagation algorithm. Processor able to "learn" when provided with suitable combinations of inputs and enforced outputs. Includes programmable resistive memory elements (corresponding to synapses), conductances (synapse weights) adjusted during learning. Buffer amplifiers, summing circuits, and sample-and-hold circuits arranged in layers of electronic neurons in accordance with delta-back-propagation algorithm.
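The summing circuits and adjustable conductances realize, in analog form, a weighted-sum neuron trained by a delta-type rule. A discrete sketch of that idea (a hypothetical digital analogue, not the circuit itself):

```python
def neuron(weights, inputs):
    """Weighted-sum (linear) neuron, as realized by the summing circuits."""
    return sum(w * x for w, x in zip(weights, inputs))

def delta_update(weights, inputs, target, lr=0.1):
    """One delta-rule step: nudge each synapse weight in proportion to the
    output error and its input, analogous to adjusting the programmable
    conductances during learning."""
    err = target - neuron(weights, inputs)
    return [w + lr * err * x for w, x in zip(weights, inputs)]
```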
21 CFR 556.720 - Tetracycline.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS... body weight per day. (b) Tolerances. Tolerances are established for the sum of tetracycline residues in... liver, and 12 ppm in fat and kidney. [63 FR 57246, Oct. 27, 1998] ...
Weight and cost forecasting for advanced manned space vehicles
NASA Technical Reports Server (NTRS)
Williams, Raymond
1989-01-01
A computerized mass and cost estimating methodology for predicting advanced manned space vehicle weights and costs was developed. The user-friendly methodology, designated MERCER (Mass Estimating Relationship/Cost Estimating Relationship), organizes the predictive process according to major vehicle subsystem levels. Design, development, test, evaluation, and flight hardware cost forecasting are treated by the study. This methodology consists of a complete set of mass estimating relationships (MERs), which serve as the control components for the model, and cost estimating relationships (CERs), which use MER output as input. To develop this model, numerous MER and CER studies were surveyed and modified where required. Additionally, relationships were regressed from raw data to accommodate the methodology. Models and formulations that estimated the cost of historical vehicles to within 20 percent of the actual cost were selected. The results of the research, along with components of the MERCER program, are reported. On the basis of the analysis, the following conclusions were established: (1) the cost of a spacecraft is best estimated by summing the cost of individual subsystems; (2) no one cost equation can be used for forecasting the cost of all spacecraft; (3) spacecraft cost is highly correlated with its mass; (4) no study surveyed contained sufficient formulations to autonomously forecast the cost and weight of an entire advanced manned vehicle spacecraft program; (5) no user-friendly program was found that linked MERs with CERs to produce spacecraft cost; and (6) the group accumulation weight estimation method (summing the estimated weights of the various subsystems) proved to be a useful method for finding the total weight and cost of a spacecraft.
NASA Astrophysics Data System (ADS)
Peng, Yahui; Jiang, Yulei; Antic, Tatjana; Giger, Maryellen L.; Eggener, Scott; Oto, Aytekin
2013-02-01
The purpose of this study was to investigate T2-weighted magnetic resonance (MR) image texture features and diffusion-weighted (DW) MR image features for distinguishing prostate cancer (PCa) from normal tissue. We collected two image datasets: 23 PCa patients (25 PCa and 23 normal tissue regions of interest [ROIs]) imaged with Philips MR scanners, and 30 PCa patients (41 PCa and 26 normal tissue ROIs) imaged with GE MR scanners. A radiologist drew ROIs manually via a consensus histology-MR correlation conference with a pathologist. A number of T2-weighted texture features and apparent diffusion coefficient (ADC) features were investigated, and linear discriminant analysis (LDA) was used to combine selected strong image features. Area under the receiver operating characteristic (ROC) curve (AUC) was used to characterize feature effectiveness in distinguishing PCa from normal tissue ROIs. Of the features studied, ADC 10th percentile, ADC average, and T2-weighted sum average yielded AUC values (±standard error) of 0.95±0.03, 0.94±0.03, and 0.85±0.05 on the Philips images, and 0.91±0.04, 0.89±0.04, and 0.70±0.06 on the GE images, respectively. The three-feature combination yielded AUC values of 0.94±0.03 and 0.89±0.04 on the Philips and GE images, respectively. ADC 10th percentile, ADC average, and T2-weighted sum average are effective in distinguishing PCa from normal tissue, and appear robust across images acquired from Philips and GE MR scanners.
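AUC as used here can be computed without constructing the ROC curve, via the Mann-Whitney statistic. A minimal sketch (a generic estimator, not the study's exact software):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve as the Mann-Whitney statistic: the
    probability that a randomly chosen positive case scores above a
    randomly chosen negative case (ties count 1/2)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Perfect separation gives 1.0, chance-level scores give 0.5.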
Explaining Common Variance Shared by Early Numeracy and Literacy
ERIC Educational Resources Information Center
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
Inspiring Examples in Rearrangements of Infinite Series
ERIC Educational Resources Information Center
Ramasinghe, W.
2002-01-01
Simple examples are really encouraging in the understanding of rearrangements of infinite series, since many texts and teachers provide only existence theorems. In the absence of examples, an existence theorem is just a statement and lends little confidence to understanding. Iterated sums of double series seem to have a similar spirit of…
Factor Analysis for Clustered Observations.
ERIC Educational Resources Information Center
Longford, N. T.; Muthen, B. O.
1992-01-01
A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horton, Megan K., E-mail: megan.horton@mssm.edu; Blount, Benjamin C.; Valentin-Blasini, Liza
Background: Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have been shown individually to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies have examined the association between these individual exposures and thyroid function; few have examined the effect of this chemical mixture on thyroid function during pregnancy. Objectives: We examined the cross-sectional association between urinary perchlorate, thiocyanate and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. Methods: We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples, and perchlorate, thiocyanate, nitrate and iodide in urine samples collected from 284 pregnant women at 12 (±2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS, adjusting for gestational age, urinary iodide and creatinine. Results: Individual analyte concentrations in urine were significantly correlated (Spearman's r 0.4–0.5, p<0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. The WQS revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Conclusions: Co-exposure to perchlorate, nitrate and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. - Highlights: • Perchlorate, nitrate, thiocyanate and iodide measured in maternal urine. • Thyroid function (TSH and Free T4) measured in maternal blood.
• Weighted quantile sum (WQS) regression examined complex mixture effect. • WQS identified an inverse association between the exposure mixture and maternal TSH. • Perchlorate indicated as the ‘bad actor’ of the mixture.
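The core of a WQS index is quantile-scoring each exposure and combining the scores with nonnegative weights that sum to one. A simplified sketch (illustrative; in actual WQS regression the weights are estimated against the outcome, e.g., by bootstrap optimization):

```python
def quantile_scores(values, q=4):
    """Rank-based quantile scoring: map each value to an integer 0..q-1."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    scores = [0] * len(values)
    for rank, i in enumerate(order):
        scores[i] = min(q - 1, rank * q // len(values))
    return scores

def wqs_index(analyte_columns, weights):
    """Weighted quantile sum index: one weighted score per subject.
    Weights must be nonnegative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    scored = [quantile_scores(col) for col in analyte_columns]
    n = len(analyte_columns[0])
    return [sum(w * s[i] for w, s in zip(weights, scored)) for i in range(n)]
```

The index is then entered as a single regressor, and the estimated weights indicate which mixture component (here, perchlorate) contributes most.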
Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis.
Nieves, Jeri W; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J Americo M; Sorenson, Eric J; D'Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi
2016-12-01
There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale-Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5-68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of "good" micronutrients and "good" food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. 
Those responsible for nutritional care of the patient with ALS should consider promoting fruit and vegetable intake since they are high in antioxidants and carotenes.
Parameter Set Cloning Based on Catchment Similarity for Large-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Liu, Z.; Kaheil, Y.; McCollum, J.
2016-12-01
Parameter calibration is a crucial step to ensure the accuracy of hydrological models. However, streamflow gauges are not available everywhere for calibrating a large-scale hydrologic model globally. Thus, assigning parameters appropriately in regions where calibration cannot be performed directly has been a challenge for large-scale hydrologic modeling. Here we propose a method to estimate the model parameters in ungauged regions based on the values obtained through calibration in areas where gauge observations are available. This parameter set cloning is performed according to a catchment similarity index, a weighted sum index based on four catchment characteristic attributes: IPCC Climate Zone, Soil Texture, Land Cover, and Topographic Index. Catchments with calibrated parameter values are donors, while uncalibrated catchments are candidates. Catchment characteristic analyses are first conducted for both donors and candidates. For each attribute, we compute a characteristic distance between donors and candidates. Next, for each candidate, weights are assigned to the four attributes such that higher weights are given to attributes more directly linked to the dominant hydrologic processes. This ensures that the parameter set cloning emphasizes the dominant hydrologic process in the region where the candidate is located. The catchment similarity index for each donor-candidate pair is then computed as the sum of the weighted distances over the four attributes. Finally, parameters are assigned to each candidate from the donor that is "most similar" (i.e., with the shortest weighted distance sum). For validation, we applied the proposed method to catchments where gauge observations are available, and compared simulated streamflows using the parameters cloned from other catchments to the results obtained by calibrating the hydrologic model directly with gauge data.
The comparison shows good agreement between the two models for different river basins. This method has been applied globally to the Hillslope River Routing (HRR) model using gauge observations obtained from the Global Runoff Data Center (GRDC). As a next step, more catchment properties can be taken into account to further improve the representation of catchment similarity.
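The donor selection step can be sketched as a weighted sum of per-attribute distances followed by an argmin over donors. In this sketch the attribute values are assumed to be already encoded as numbers (the real attributes mix categorical and continuous data, so the distance definitions would differ per attribute):

```python
def similarity_distance(candidate_attrs, donor_attrs, weights):
    """Weighted sum of per-attribute distances (smaller = more similar)."""
    return sum(weights[k] * abs(candidate_attrs[k] - donor_attrs[k])
               for k in weights)

def clone_parameters(candidate_attrs, donors, weights):
    """Assign the candidate the parameter set of its most similar donor."""
    best = min(donors, key=lambda d: similarity_distance(candidate_attrs,
                                                         d["attrs"], weights))
    return best["params"]
```

The per-candidate weights let the dominant hydrologic process in each region drive which donor is judged "most similar".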
Imparting Desired Attributes by Optimization in Structural Design
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2003-01-01
Commonly available optimization methods typically produce a single optimal design as a constrained minimum of a particular objective function. However, in engineering design practice it is quite often important to explore as much of the design space as possible with respect to many attributes, to find out what behaviors are possible and not possible within the initially adopted design concept. The paper shows that the very simple method of the sum of objectives is useful for such exploration. By geometrical argument it is demonstrated that if every weighting coefficient is allowed to change its magnitude and its sign, then the method returns a set of designs that are all feasible, diverse in their attributes, and include the Pareto and non-Pareto solutions, at least for convex cases. Numerical examples in the paper include a case of an aircraft wing structural box with thousands of degrees of freedom and constraints, and over 100 design variables, whose attributes are structural mass, volume, displacement, and frequency. The method is inherently suitable for parallel, coarse-grained implementation, which enables exploration of the design space in the elapsed time of a single structural optimization.
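The exploration idea (rerun a weighted-sum minimization while letting each weight change magnitude and sign) can be sketched on a toy discrete design space:

```python
def explore_weighted_sums(objectives, weight_vectors, candidates):
    """For each weight vector, take the candidate minimizing the weighted
    sum of objectives; letting the weights change magnitude *and sign*
    sweeps out a diverse set of designs (Pareto and non-Pareto), as the
    paper argues geometrically for convex cases."""
    found = set()
    for w in weight_vectors:
        best = min(candidates,
                   key=lambda x: sum(wi * f(x) for wi, f in zip(w, objectives)))
        found.add(best)
    return found
```

Each weight vector's minimization is independent, which is what makes the coarse-grained parallel implementation natural.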
On the Asymmetric Zero-Range in the Rarefaction Fan
NASA Astrophysics Data System (ADS)
Gonçalves, Patrícia
2014-02-01
We consider one-dimensional asymmetric zero-range processes starting from a step decreasing profile leading, in the hydrodynamic limit, to the rarefaction fan of the associated hydrodynamic equation. Under that initial condition, and for totally asymmetric jumps, we show that the weighted sum of joint probabilities for second class particles sharing the same site is convergent and we compute its limit. For partially asymmetric jumps, we derive the Law of Large Numbers for a second class particle, under the initial configuration in which all positive sites are empty, all negative sites are occupied with infinitely many first class particles and there is a single second class particle at the origin. Moreover, we prove that among the infinite characteristics emanating from the position of the second class particle it picks randomly one of them. The randomness is given in terms of the weak solution of the hydrodynamic equation, through some sort of renormalization function. By coupling the constant-rate totally asymmetric zero-range with the totally asymmetric simple exclusion, we derive limiting laws for more general initial conditions.
A simplified model of precipitation enhancement over a heterogeneous surface
NASA Astrophysics Data System (ADS)
Cioni, Guido; Hohenegger, Cathy
2018-06-01
Soil moisture heterogeneities influence the onset of convection and the subsequent evolution of precipitating systems through the triggering of mesoscale circulations. However, local evaporation also plays a role in determining precipitation amounts. Here we aim at disentangling the effects of advection and evaporation on precipitation over the course of a diurnal cycle by formulating a simple conceptual model. The derivation of the model is inspired by the results of simulations performed with a high-resolution (250 m) large eddy simulation model over a surface with varying degrees of heterogeneity. A key element of the conceptual model is the representation of precipitation as a weighted sum of advection and evaporation, each weighted by its own efficiency. The model is then used to isolate the main parameters that control precipitation variations over a spatially drier patch. It is found that these changes surprisingly do not depend on soil moisture itself but purely on parameters that describe the atmospheric initial state. The likelihood of enhanced precipitation over drier soils is discussed based on these parameters. Additional experiments are used to test the validity of the model.
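The weighted-sum representation can be written down directly; the parameter names below are hypothetical, and the paper's exact formulation may differ:

```python
def precipitation(advection, evaporation, eff_adv, eff_evap):
    """Conceptual model: precipitation as a weighted sum of moisture
    advection and local evaporation, each scaled by its own efficiency
    (hypothetical names for the two efficiency parameters)."""
    return eff_adv * advection + eff_evap * evaporation
```

Whether a drier patch gains or loses precipitation then depends on how the two efficiencies trade off, not on soil moisture itself.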
Learning to represent spatial transformations with factored higher-order Boltzmann machines.
Memisevic, Roland; Hinton, Geoffrey E
2010-06-01
To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
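The low-rank approximation replaces the full cubic interaction tensor with a sum over factors of rank-1 (three-way outer product) terms. A minimal numeric sketch of the resulting interaction term (plain Python, illustrative of the factorization only):

```python
def factored_interaction(x, y, h, wx, wy, wh):
    """Three-way interaction as a sum of factors: factor f contributes
    (wx[f]·x) * (wy[f]·y) * (wh[f]·h), a rank-1 outer-product slice of
    the full cubic tensor. Each wx[f]/wy[f] acts as an image filter."""
    dot = lambda w, v: sum(wi * vi for wi, vi in zip(w, v))
    return sum(dot(fx, x) * dot(fy, y) * dot(fh, h)
               for fx, fy, fh in zip(wx, wy, wh))
```

With F factors the parameter count grows linearly in F rather than cubically in the layer sizes, which is what makes learning on larger image patches tractable.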
On the joint bimodality of temperature and moisture near stratocumulus cloud tops
NASA Technical Reports Server (NTRS)
Randall, D. A.
1983-01-01
The observed distributions of the thermodynamic variables near stratocumulus top are highly bimodal. Two simple models of sub-grid fractional cloudiness motivated by this observed bimodality are examined. In both models, certain low-order moments of two independent, moist-conservative thermodynamic variables are assumed to be known. The first model is based on the assumption of two discrete populations of parcels: a warm-dry population and a cool-moist population. If only the first and second moments are assumed to be known, the number of unknowns exceeds the number of independent equations. If the third moments are assumed to be known as well, the number of independent equations exceeds the number of unknowns. The second model is based on the assumption of a continuous joint bimodal distribution of parcels, obtained as the weighted sum of two binormal distributions. For this model, the third moments are used to obtain 9 independent nonlinear algebraic equations in 11 unknowns. Two additional equations are needed to determine the covariances within the two subpopulations. In case these two internal covariances vanish, the system of equations can be solved analytically.
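For the second model, the moments of a weighted sum (mixture) of two component distributions follow directly from the component moments. A univariate sketch (the paper's model is bivariate, so this is illustrative only):

```python
def mixture_moments(p, m1, v1, m2, v2):
    """Mean and variance of a two-component mixture with weight p on
    component 1: E[X] = p*m1 + (1-p)*m2, and Var[X] = E[X^2] - E[X]^2
    with E[X^2] built from each component's second moment v + m^2."""
    mean = p * m1 + (1 - p) * m2
    var = p * (v1 + m1 ** 2) + (1 - p) * (v2 + m2 ** 2) - mean ** 2
    return mean, var
```

Matching such expressions against observed moments is what yields the paper's system of nonlinear algebraic equations.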
NASA Astrophysics Data System (ADS)
Tanaka, J.; Kanungo, R.; Alcorta, M.; Aoi, N.; Bidaman, H.; Burbadge, C.; Christian, G.; Cruz, S.; Davids, B.; Diaz Varela, A.; Even, J.; Hackman, G.; Harakeh, M. N.; Henderson, J.; Ishimoto, S.; Kaur, S.; Keefe, M.; Krücken, R.; Leach, K. G.; Lighthall, J.; Padilla Rodal, E.; Randhawa, J. S.; Ruotsalainen, P.; Sanetullaev, A.; Smith, J. K.; Workman, O.; Tanihata, I.
2017-11-01
Proton inelastic scattering off a neutron halo nucleus, 11Li, has been studied in inverse kinematics at the IRIS facility at TRIUMF. The aim was to establish a soft dipole resonance and to obtain its dipole strength. Using a high quality 66 MeV 11Li beam, a strongly populated excited state in 11Li was observed at Ex = 0.80 ± 0.02 MeV with a width of Γ = 1.15 ± 0.06 MeV. A DWBA (distorted-wave Born approximation) analysis of the measured differential cross section with isoscalar macroscopic form factors leads us to conclude that this observed state is excited in an electric dipole (E1) transition. Under the assumption of isoscalar E1 transitions, the strength is evaluated to be extremely large amounting to 30 ∼ 296 Weisskopf units, exhausting 2.2% ∼ 21% of the isoscalar E1 energy-weighted sum rule (EWSR) value. The large observed strength originates from the halo and is consistent with the simple di-neutron model of 11Li halo.
Non-gray gas radiation effect on mixed convection in lid driven square cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherifi, Mohammed, E-mail: production1998@yahoo.fr; Benbrik, Abderrahmane, E-mail: abenbrik@umbb.dz; Laouar-Meftah, Siham, E-mail: laouarmeftah@gmail.com
A numerical study is performed to investigate the effect of non-gray radiation on mixed convection in a vertical two-sided lid driven square cavity filled with an air-H{sub 2}O-CO{sub 2} gas mixture. The vertical moving walls of the enclosure are maintained at two different but uniform temperatures. The horizontal walls are thermally insulated and considered adiabatic. The governing differential equations are solved by a finite-volume method, and the SIMPLE algorithm was adopted to solve the pressure–velocity coupling. The radiative transfer equation (RTE) is solved by the discrete ordinates method (DOM). The spectral line weighted sum of gray gases model (SLW) is used to account for non-gray radiation properties. Simulations are performed in configurations where thermal and shear forces induce cooperating buoyancy forces. Streamlines, isotherms, and Nusselt number are analyzed for three different values of Richardson's number (from 0.1 to 10) and for three different medium assumptions (transparent medium, gray medium using the Planck mean absorption coefficient, and non-gray medium).
Measuring efficiency of university-industry Ph.D. projects using best worst method.
Salimi, Negin; Rezaei, Jafar
A collaborative Ph.D. project, carried out by a doctoral candidate, is a type of collaboration between university and industry. Due to the importance of such projects, researchers have considered different ways to evaluate their success, with a focus on the outputs of these projects. However, what has been neglected is the other side of the coin: the inputs. The main aim of this study is to incorporate both the inputs and outputs of these projects into a more meaningful measure called efficiency. A ratio of the weighted sum of outputs over the weighted sum of inputs identifies the efficiency of a Ph.D. project. The weights of the inputs and outputs can be identified using a multi-criteria decision-making (MCDM) method. Data on inputs and outputs are collected from 51 Ph.D. candidates who graduated from Eindhoven University of Technology. The weights are identified using a new MCDM method called the Best Worst Method (BWM). Because there may be differences in the opinions of Ph.D. candidates and supervisors on weighing the inputs and outputs, data for BWM are collected from both groups. It is interesting to see that there are differences in the level of efficiency from the two perspectives, because of the weight differences. Moreover, a comparison between the efficiency scores of these projects and their success scores reveals differences that may have significant implications. A sensitivity analysis reveals the inputs and outputs that contribute most to efficiency.
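Once the BWM weights are fixed, the efficiency measure itself is a single ratio. A minimal sketch (hypothetical numbers; the study's inputs and outputs are multi-dimensional project data):

```python
def efficiency(outputs, inputs, w_out, w_in):
    """Efficiency as the weighted sum of outputs over the weighted sum
    of inputs, with weights supplied externally (e.g., from BWM)."""
    num = sum(w * o for w, o in zip(w_out, outputs))
    den = sum(w * i for w, i in zip(w_in, inputs))
    return num / den
```

Because candidates and supervisors produce different BWM weights, the same project can receive different efficiency scores from the two perspectives.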
Gao, Xiao; Wang, Quanchuan; Jackson, Todd; Zhao, Guang; Liang, Yi; Chen, Hong
2011-04-01
Despite evidence indicating fatness and thinness information are processed differently among weight-preoccupied and eating-disordered individuals, the exact nature of these attentional biases is not clear. In this research, eye movement (EM) tracking assessed biases in specific component processes of visual attention (i.e., orientation, detection, maintenance and disengagement of gaze) in relation to body-related stimuli among 20 weight dissatisfied (WD) and 20 weight satisfied young women. Eye movements were recorded while participants completed a dot-probe task that featured fatness-neutral and thinness-neutral word pairs. Compared to controls, WD women were more likely to direct their initial gaze toward fatness words, had a shorter mean latency of first fixation on both fatness and thinness words, had longer first fixations on fatness words but shorter first fixations on thinness words, and shorter total gaze duration on thinness words. Reaction time data showed a maintenance bias toward fatness words among the WD women. In sum, results indicated WD women show initial orienting, speeded detection and initial maintenance biases toward fat body words, in addition to a speeded detection-avoidance pattern of biases in relation to thin body words. These results highlight the utility of EM tracking as a means of identifying subtle attentional biases among weight dissatisfied women drawn from a non-clinical setting, and the need to assess attentional biases as a dynamic process. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Brunelle, Eugene J.
1994-01-01
The first few viewgraphs describe the general solution properties of linear elasticity theory which are given by the following two statements: (1) for stress B.C. on S(sub sigma) and zero displacement B.C. on S(sub u) the altered displacements u(sub i)(*) and the actual stresses tau(sub ij) are elastically dependent on Poisson's ratio nu alone: thus the actual displacements are given by u(sub i) = mu(exp -1)u(sub i)(*); and (2) for zero stress B.C. on S(sub sigma) and displacement B.C. on S(sub u) the actual displacements u(sub i) and the altered stresses tau(sub ij)(*) are elastically dependent on Poisson's ratio nu alone: thus the actual stresses are given by tau(sub ij) = E tau(sub ij)(*). The remaining viewgraphs describe the minimum parameter formulation of the general classical laminate theory plate problem as follows: The general CLT plate problem is expressed as a 3 x 3 system of differential equations in the displacements u, v, and w. The eighteen (six each) A(sub ij), B(sub ij), and D(sub ij) system coefficients are ply-weighted sums of the transformed reduced stiffnesses (bar-Q(sub ij))(sub k); the (bar-Q(sub ij))(sub k) in turn depend on six reduced stiffnesses (Q(sub ij))(sub k) and the material and geometry properties of the k(sup th) layer. This paper develops a method for redefining the system coefficients, the displacement components (u,v,w), and the position components (x,y) such that a minimum parameter formulation is possible. 
The pivotal steps in this method are (1) the reduction of (bar-Q(sub ij))(sub k) dependencies to just two constants Q(*) = (Q(12) + 2Q(66))/(Q(11)Q(22))(exp 1/2) and F(*) = (Q(22)/Q(11))(exp 1/2) in terms of ply-independent reference values Q(sub ij); (2) the reduction of the remaining portions of the A, B, and D coefficients to nondimensional ply-weighted sums (with 0 to 1 ranges) that are independent of Q(*) and F(*); and (3) the introduction of simple coordinate stretchings for u, v, w and x, y such that the process is neatly completed.
Hayıroğlu, Mert İlker; Keskin, Muhammed; Uzun, Ahmet Okan; Türkkan, Ceyhan; Tekkeşin, Ahmet İlker; Kozan, Ömer
Electrical phenomena and remote myocardial ischemia are the main factors underlying ST segment depression in inferior leads in acute anterior myocardial infarction (AAMI). We investigated the prognostic value of the sum of ST segment depression amplitudes in inferior leads in patients with first AAMI treated with primary percutaneous coronary intervention (PPCI). In this prospective analysis, we evaluated the in-hospital prognostic impact of the sum of ST segment depression in inferior leads on 206 patients with first AAMI. Patients were stratified by tertiles of the sum of admission ST segment depression in inferior leads. Clinical outcomes were compared between those tertiles. Univariate analysis revealed a higher rate of in-hospital death for patients with ST segment depression in inferior leads in tertile 3, as compared to patients in tertile 1 (OR 9.8, 95% CI 1.5-78.2, p<0.001). After adjustment for baseline variables, ST segment depression in inferior leads in tertile 3 was associated with a 5.7-fold odds of in-hospital death (OR 5.7, 95% CI 1.2-35.1, p<0.001). The Spearman rank correlation test revealed a correlation between the sum of ST segment depression amplitude in inferior leads and the sum of ST segment elevation amplitude in V1-6, L1 and aVL. Multivessel disease and additional RCA stenosis were also detected more often in tertile 3. The sum of ST segment depression amplitude in inferior leads of the admission ECG in patients with first AAMI treated with PPCI provides an independent prognostic marker of in-hospital outcomes. Our data suggest the sum of ST segment depression amplitude to be a simple, feasible and clinically applicable tool for rapid risk stratification in patients with first AAMI. Copyright © 2017 Elsevier Inc. All rights reserved.
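The risk-stratification computation described above is simple enough to sketch. This is a minimal illustration with hypothetical amplitudes and tertile cutoffs, not values from the study:

```python
# Hedged sketch: sum the ST-segment depression amplitudes (mm) in the
# inferior leads (II, III, aVF) and assign a tertile for risk stratification.
# The cutoffs below are illustrative, not the cohort's actual boundaries.
def st_depression_sum(leads):
    """leads: dict of ST depression amplitudes in mm, e.g. {'II': 1.0, ...}"""
    return sum(leads.get(k, 0.0) for k in ('II', 'III', 'aVF'))

def tertile(value, cutoffs):
    """cutoffs: (t1_upper, t2_upper) boundaries estimated from the cohort."""
    if value <= cutoffs[0]:
        return 1
    return 2 if value <= cutoffs[1] else 3

patient = {'II': 1.5, 'III': 2.0, 'aVF': 1.0}  # hypothetical admission ECG
total = st_depression_sum(patient)
group = tertile(total, cutoffs=(1.0, 3.0))
```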
NASA Astrophysics Data System (ADS)
Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam
2018-04-01
Financial ratios and risk are important indicators for evaluating the financial performance or efficiency of companies. Therefore, financial ratios and risk factors need to be taken into consideration when evaluating the efficiency of companies with the Data Envelopment Analysis (DEA) model. In the DEA model, the efficiency of a company is measured as the ratio of sum-weighted outputs to sum-weighted inputs. The objective of this paper is to propose a DEA model that incorporates financial ratios and a risk factor in evaluating and comparing the efficiency of the financial companies in Malaysia. In this study, the listed financial companies in Malaysia from 2004 to 2015 are investigated. The results of this study show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies. This implies that these efficient companies have utilized their resources or inputs optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies as well as determine the optimal input and output weights that maximize the efficiency of financial companies in Malaysia.
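The DEA efficiency ratio can be sketched as follows. Note that a full DEA model chooses the weights for each company by linear programming; here the weights, inputs, and outputs are fixed, illustrative values:

```python
# Minimal sketch of the DEA efficiency ratio: sum-weighted outputs over
# sum-weighted inputs. In actual DEA the weights v and u are optimized per
# company by an LP; the numbers below are purely hypothetical.
def efficiency(inputs, outputs, v, u):
    """v: input weights, u: output weights (same-length sequences)."""
    weighted_in = sum(vi * xi for vi, xi in zip(v, inputs))
    weighted_out = sum(ui * yi for ui, yi in zip(u, outputs))
    return weighted_out / weighted_in

# Hypothetical company: inputs = (risk, expenses), outputs = (ROA, ROE)
e = efficiency(inputs=(0.2, 5.0), outputs=(1.2, 2.4), v=(1.0, 0.4), u=(0.5, 0.25))
```

A company is deemed efficient when no admissible choice of weights can push this ratio above that of its peers.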
Li, Gai-Ling; Chen, Hui-Jian; Zhang, Wan-Xia; Tong, Qiang; Yan, You-E
2017-08-10
The effect of maternal omega-3 fatty acids intake on the body composition of the offspring is unclear. The aim of this study was to conduct a systematic review and meta-analysis to confirm the effects of omega-3 fatty acids supplementation during pregnancy and/or lactation on body weight, body length, body mass index (BMI), waist circumference, fat mass and sum of skinfold thicknesses of offspring. Human intervention studies were selected by a systematic search of PubMed, Web of Science, the Cochrane Library and references of related reviews and studies. Randomized controlled trials of maternal omega-3 fatty acids intake during pregnancy or lactation for offspring's growth were included. The data were analyzed with RevMan 5.3 and Stata 12.0. Effect sizes were presented as weighted mean differences (WMD) or standardized mean difference (SMD) with 95% confidence intervals (95% CI). Twenty-six studies comprising 10,970 participants were included. Significant increases were found in birth weight (WMD = 42.55 g, 95% CI: 21.25, 63.85) and waist circumference (WMD = 0.35 cm, 95% CI: 0.04, 0.67) in the omega-3 fatty acids group. There were no effects on birth length (WMD = 0.09 cm, 95% CI: -0.03, 0.21), postnatal length (WMD = 0.13 cm, 95% CI: -0.11, 0.36), postnatal weight (WMD = 0.04 kg, 95% CI: -0.07, 0.14), BMI (WMD = 0.09, 95% CI: -0.05, 0.23), the sum of skinfold thicknesses (WMD = 0.45 mm, 95% CI: -0.30, 1.20), fat mass (WMD = 0.05 kg, 95% CI: -0.01, 0.11) and the percentage of body fat (WMD = 0.04%, 95% CI: -0.38, 0.46). This meta-analysis showed that maternal omega-3 fatty acids supplementation can increase offspring's birth weight and postnatal waist circumference. However, it did not appear to influence children's birth length, postnatal weight/length, BMI, sum of skinfold thicknesses, fat mass and the percentage of body fat during postnatal period. Larger, well-designed studies are recommended to confirm this conclusion. 
Copyright © 2017 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
ISOFIT - A PROGRAM FOR FITTING SORPTION ISOTHERMS TO EXPERIMENTAL DATA
Isotherm expressions are important for describing the partitioning of contaminants in environmental systems. ISOFIT (ISOtherm FItting Tool) is a software program that fits isotherm parameters to experimental data via the minimization of a weighted sum of squared error (WSSE) obje...
Necessary and sufficient conditions for R₀ to be a sum of contributions of fertility loops.
Rueffler, Claus; Metz, Johan A J
2013-03-01
Recently, de-Camino-Beck and Lewis (Bull Math Biol 69:1341-1354, 2007) have presented a method that under certain restricted conditions allows computing the basic reproduction ratio R₀ in a simple manner from life cycle graphs, without, however, giving an explicit indication of these conditions. In this paper, we give various sets of sufficient and generically necessary conditions. To this end, we develop a fully algebraic counterpart of their graph-reduction method which we actually found more useful in concrete applications. Both methods, if they work, give a simple algebraic formula that can be interpreted as the sum of contributions of all fertility loops. This formula can be used in e.g. pest control and conservation biology, where it can complement sensitivity and elasticity analyses. The simplest of the necessary and sufficient conditions is that, for irreducible projection matrices, all paths from birth to reproduction have to pass through a common state. This state may be visible in the state representation for the chosen sampling time, but the passing may also occur in between sampling times, like a seed stage in the case of sampling just before flowering. Note that there may be more than one birth state, like when plants in their first year can already have different sizes at the sampling time. Also the common state may occur only later in life. However, in all cases R₀ allows a simple interpretation as the expected number of new individuals that in the next generation enter the common state deriving from a single individual in this state. We end with pointing to some alternative algebraically simple quantities with properties similar to those of R₀ that may sometimes be used to good effect in cases where no simple formula for R₀ exists.
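The algebraic idea can be sketched with the standard next-generation decomposition. This is a minimal illustration, not the authors' graph-reduction method itself: the projection matrix A = T + F is split into transitions T and fertilities F, and R0 is the spectral radius of F(I - T)^(-1); for a hypothetical two-stage life cycle with a single fertility loop, R0 reduces to the loop product, matching the sum-of-fertility-loops formula:

```python
# Hedged sketch: R0 as the spectral radius of F (I - T)^{-1} for a toy
# two-stage (juvenile/adult) life cycle. All vital rates are illustrative.
import numpy as np

T = np.array([[0.0, 0.0],   # transition part of A = T + F:
              [0.5, 0.0]])  # juveniles mature with probability 0.5
F = np.array([[0.0, 3.0],   # fertility part: adults produce 3 offspring
              [0.0, 0.0]])

next_gen = F @ np.linalg.inv(np.eye(2) - T)
r0 = max(abs(np.linalg.eigvals(next_gen)))  # spectral radius

# The single fertility loop (mature, then reproduce) gives the same number:
loop_formula = 0.5 * 3.0
```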
Schrempft, Stephanie; van Jaarsveld, Cornelia H. M.; Fisher, Abigail; Wardle, Jane
2015-01-01
Objectives The home environment is thought to play a key role in early weight trajectories, although direct evidence is limited. There is general agreement that multiple factors exert small individual effects on weight-related outcomes, so use of composite measures could demonstrate stronger effects. This study therefore examined whether composite measures reflecting the ‘obesogenic’ home environment are associated with diet, physical activity, TV viewing, and BMI in preschool children. Methods Families from the Gemini cohort (n = 1096) completed a telephone interview (Home Environment Interview; HEI) when their children were 4 years old. Diet, physical activity, and TV viewing were reported at interview. Child height and weight measurements were taken by the parents (using standard scales and height charts) and reported at interview. Responses to the HEI were standardized and summed to create four composite scores representing the food (sum of 21 variables), activity (sum of 6 variables), media (sum of 5 variables), and overall (food composite/21 + activity composite/6 + media composite/5) home environments. These were categorized into ‘obesogenic risk’ tertiles. Results Children in ‘higher-risk’ food environments consumed less fruit (OR; 95% CI = 0.39; 0.27–0.57) and vegetables (0.47; 0.34–0.64), and more energy-dense snacks (3.48; 2.16–5.62) and sweetened drinks (3.49; 2.10–5.81) than children in ‘lower-risk’ food environments. Children in ‘higher-risk’ activity environments were less physically active (0.43; 0.32–0.59) than children in ‘lower-risk’ activity environments. Children in ‘higher-risk’ media environments watched more TV (3.51; 2.48–4.96) than children in ‘lower-risk’ media environments. Neither the individual nor the overall composite measures were associated with BMI. Conclusions Composite measures of the obesogenic home environment were associated as expected with diet, physical activity, and TV viewing. 
Associations with BMI were not apparent at this age. PMID:26248313
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. As the iterations proceed, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
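For the max-sum semiring, one iteration of the abstract algorithm can be sketched as follows. This is a toy instance with two randomly generated log-domain factors (shapes and values are illustrative): the factors are shifted so their sum is unchanged while their max-marginals over the shared variable coincide, and the upper bound cannot increase:

```python
# Hedged sketch of one max-sum diffusion step: equalize the max-marginals
# of two log-domain factors f(x, y) and g(y, z) over the shared variable y,
# keeping f + g (their "product" in logs) unchanged for every assignment.
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=(3, 4))  # factor over (x, y)
g = rng.normal(size=(4, 5))  # factor over (y, z)

true_max = max(f[x, y] + g[y, z]
               for x in range(3) for y in range(4) for z in range(5))
bound_before = f.max() + g.max()   # upper bound on true_max

mf = f.max(axis=0)            # max-marginal of f onto y
mg = g.max(axis=1)            # max-marginal of g onto y
shift = (mf - mg) / 2.0
f = f - shift[None, :]        # subtract the shift along y ...
g = g + shift[:, None]        # ... and add it back, so f + g is unchanged

bound_after = f.max() + g.max()
# Now the overlapping max-marginals coincide and the bound has not grown.
```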
Polder, A; Odland, J O; Tkachev, A; Føreid, S; Savinova, T N; Skaare, J U
2003-05-01
The concentrations of HCB, alpha-, beta- and gamma-HCH, 3 chlordanes (CHLs), p,p'-DDE, p,p'-DDD, p,p'-DDT, and 30 PCBs (polychlorinated biphenyls) were determined in 140 human milk samples from Kargopol (n=19), Severodvinsk (n=50), Arkhangelsk (n=51) and Naryan-Mar (n=20). Pooled samples were used for determination of three toxaphenes (chlorobornanes, CHBs). The concentrations of HCB, beta-HCH and p,p'-DDE in Russian human milk were 2, 10 and 3 times higher than corresponding levels in Norway, respectively, while concentrations of sum-PCBs and sum-TEQs (toxic equivalent quantities) of the mono-ortho substituted PCBs were in the same range as corresponding levels in Norway. PCB-156 contributed most to the sum-TEQs. The highest mean concentrations of HCB (129 microg/kg milk fat) and sum-PCBs (458 microg/kg milk fat) were detected in Naryan-Mar, while the highest mean concentrations of sum-HCHs (408 microg/kg milk fat), sum-CHLs (48 microg/kg milk fat), sum-DDTs (1392 microg/kg milk fat) and sum-toxaphenes (13 microg/kg milk fat) were detected in Arkhangelsk. An eastward geographic trend of increasing ratios of alpha/beta-HCH, gamma/beta-HCH, p,p'-DDT/p,p'-DDE and PCB-180/28 was observed. In all areas the levels of sum-HCHs decreased with parity (number of children born). Considerable variation in levels of the analysed organochlorines (OCs) was found in all the studied areas. Breast milk from mothers nursing their second or third child (multiparas) in Naryan-Mar showed a significantly different PCB profile compared to that of mothers giving birth to their first child (primiparas) from the same area and to that of primi- and multiparas in the other areas. Both p,p'-DDE and p,p'-DDT showed a significant, but weak, negative correlation with the infants' birth weight.
Weaving and neural complexity in symmetric quantum states
NASA Astrophysics Data System (ADS)
Susa, Cristian E.; Girolami, Davide
2018-04-01
We study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.
2014-12-01
Primary Military Occupational Specialty; PRO, Proficiency; Q-Q, Quantile-Quantile; RSS, Residual Sum of Squares; SI, Shop Information; T&R, Training and... construct multivariate linear regression models to estimate Marines' Computed Tier Score and time to achieve E-4 based on their individual personal... ASVAB General Science (GS) score, ASVAB Mathematics Knowledge (MK) score, ASVAB Paragraph Comprehension (PC) score, weight, and whether a Marine receives a weight...
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming Gibbs phenomenon, i.e., to obtain exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials C(sub k)(sup mu)(x) with the weight function (1 - x(exp 2))(exp mu - 1/2) for any constant mu is greater than or equal to 0, of an L(sub 1) function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
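The Gibbs phenomenon being overcome can be illustrated numerically: the Fourier partial sums of a unit square wave overshoot the jump by roughly 9% no matter how many terms are kept, so pointwise accuracy near the discontinuity is only O(1), not exponential. This is a standard illustration, not the paper's Gegenbauer construction:

```python
# Hedged illustration of the Gibbs overshoot: partial sums of the Fourier
# series of a square wave (value 1 on (0, pi)) peak near ~1.179 beside the
# jump at x = 0, regardless of the number of retained harmonics.
import numpy as np

x = np.linspace(1e-3, 0.2, 5000)           # fine grid just right of the jump
partial = np.zeros_like(x)
for k in range(1, 200, 2):                 # odd harmonics of the square wave
    partial += (4.0 / np.pi) * np.sin(k * x) / k

peak = partial.max()                       # the Gibbs overshoot (~9% above 1)
```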
Systematics of strength function sum rules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Calvin W.
2015-08-28
Sum rules provide useful insights into transition strength functions and are often expressed as expectation values of an operator. In this letter I demonstrate that non-energy-weighted transition sum rules have strong secular dependences on the energy of the initial state. Such non-trivial systematics have consequences: the simplification suggested by the generalized Brink–Axel hypothesis, for example, does not hold for most cases, though it weakly holds in at least some cases for electric dipole transitions. Furthermore, I show the systematics can be understood through spectral distribution theory, calculated via traces of operators and of products of operators. Seen through this lens, violation of the generalized Brink–Axel hypothesis is unsurprising: one expects sum rules to evolve with excitation energy. Moreover, to lowest order the slope of the secular evolution can be traced to a component of the Hamiltonian being positive (repulsive) or negative (attractive).
NASA Astrophysics Data System (ADS)
Di Francesco, P.; Zinn-Justin, P.
2005-12-01
We prove higher rank analogues of the Razumov-Stroganov sum rule for the ground state of the O(1) loop model on a semi-infinite cylinder: we show that a weighted sum of components of the ground state of the A(sub k-1) IRF model yields integers that generalize the numbers of alternating sign matrices. This is done by constructing minimal polynomial solutions of the level 1 U(sub q)(sl-hat(k)) quantum Knizhnik-Zamolodchikov equations, which may also be interpreted as quantum incompressible q-deformations of quantum Hall effect wavefunctions at filling fraction ν = k. In addition to the generalized Razumov-Stroganov point q = -e(exp iπ/(k+1)), another combinatorially interesting point is reached in the rational limit q → -1, where we identify the solution with extended Joseph polynomials associated with the geometry of upper triangular matrices with vanishing kth power.
A Progression of Static Equilibrium Laboratory Exercises
NASA Astrophysics Data System (ADS)
Kutzner, Mickey; Kutzner, Andrew
2013-10-01
Although simple architectural structures like bridges, catwalks, cantilevers, and Stonehenge have been integral in human societies for millennia, as have levers and other simple tools, modern students of introductory physics continue to grapple with Newton's conditions for static equilibrium. As formulated in typical introductory physics textbooks, these two conditions appear as ΣF = 0 (1) and Στ = 0, (2) where each torque τ is defined as the cross product between the lever arm vector r and the corresponding applied force F, τ = r × F, (3) having magnitude τ = Fr sin θ. (4) The angle θ here is the angle between the two vectors F and r. In Eq. (1), upward (downward) forces are considered positive (negative). In Eq. (2), counterclockwise (clockwise) torques are considered positive (negative). Equation (1) holds that the vector sum of the external forces acting on an object must be zero to prevent linear accelerations; Eq. (2) states that the vector sum of torques due to external forces about any axis must be zero to prevent angular accelerations. In our view these conditions can be problematic for students because a) the equations contain the unfamiliar summation notation Σ, b) students are uncertain of the role of torques in causing rotations, and c) it is not clear why the sum of torques is zero regardless of the choice of axis. Gianino [5] describes an experiment using MBL and a force sensor to convey the meaning of torque as applied to a rigid-body lever system without exploring quantitative aspects of the conditions for static equilibrium.
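The two conditions can be checked numerically for a simple see-saw (illustrative numbers; sign conventions follow the text, with upward forces and counterclockwise torques positive). Because the torque sum about a shifted axis equals the torque sum about the pivot minus the shift times the force sum, point c) follows directly: once both sums vanish, the torque sum vanishes about any axis.

```python
# A 60 N child 2 m left of the pivot balances a 40 N child 3 m right of it;
# the pivot supplies the 100 N upward normal force. Each entry is
# (lever arm x in m, vertical force F in N); weights point down (negative).
forces = [(-2.0, -60.0), (3.0, -40.0), (0.0, 100.0)]

net_force = sum(F for _, F in forces)              # condition (1): ΣF = 0
net_torque = sum(x * F for x, F in forces)         # condition (2): Στ = 0
# Στ about an axis 1 m right of the pivot: still zero, since ΣF = 0.
net_torque_about_1 = sum((x - 1.0) * F for x, F in forces)
```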
Mobile Visual Search Based on Histogram Matching and Zone Weight Learning
NASA Astrophysics Data System (ADS)
Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong
2018-01-01
In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging the tf-idf weighted histogram matching and the weighting strategy in compact descriptors for visual search (CDVS). Finally, the global descriptor matching score and the local descriptor similarity score are summed to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
Davis, M E; Rutledge, J J; Cundiff, L V; Hauser, E R
1983-10-01
Several measures of life cycle cow efficiency were calculated using weights and individual feed consumptions recorded on 160 dams of beef, dairy and beef X dairy breeding and their progeny. Ratios of output to input were used to estimate efficiency, where outputs included weaning weights of progeny plus salvage value of the dam and inputs included creep feed consumed by progeny plus feed consumed by the dam over her entire lifetime. In one approach to estimating efficiency, inputs and outputs were weighted by probabilities that were a function of the cow herd age distribution and percentage calf crop in a theoretical herd. The second approach to estimating cow efficiency involved dividing the sum of the weights by the sum of the feed consumption values, with all pieces of information being given equal weighting. Relationships among efficiency estimates and various traits of dams and progeny were examined. Weights, heights, and weight:height ratios of dams at 240 d of age were not correlated significantly with subsequent efficiency of calf production, indicating that indirect selection for lifetime cow efficiency at an early age based on these traits would be ineffective. However, females exhibiting more efficient weight gains from 240 d to first calving tended to become more efficient dams. Correlations of efficiency with weight of dam at calving and at weaning were negative and generally highly significant. Height at withers was negatively related to efficiency. Ratio of weight to height indicated that fatter dams generally were less efficient. The effect of milk production on efficiency depended upon the breed combinations involved. Dams calving for the first time at an early age and continuing to calve at short intervals were superior in efficiency. Weaning rate was closely related to life cycle efficiency. 
Large negative correlations between efficiency and feed consumption of dams were observed, while correlations of efficiency with progeny weights and feed consumptions in individual parities tended to be positive though nonsignificant. However, correlations of efficiency with accumulative progeny weights and feed consumptions generally were significant.
Gubler, Philipp; Hattori, Koichi; Lee, Su Houng; ...
2016-03-15
In this paper, we investigate the mass spectra of open heavy flavor mesons in an external constant magnetic field within QCD sum rules. Spectral Ansätze on the phenomenological side are proposed in order to properly take into account mixing effects between the pseudoscalar and vector channels, and the Landau levels of charged mesons. The operator product expansion is implemented up to dimension-5 operators. As a result, we find for neutral D mesons a significant positive mass shift that goes beyond simple mixing effects. In contrast, charged D mesons are further subject to Landau level effects, which together with the mixing effects almost completely saturate the mass shifts obtained in our sum rule analysis.
A longitudinal study of low back pain and daily vibration exposure in professional drivers.
Bovenzi, Massimo
2010-01-01
The aim of this study was to investigate the relation between low back pain (LBP) outcomes and measures of daily exposure to whole-body vibration (WBV) in professional drivers. In a study population of 202 male drivers, who were not affected with LBP at the initial survey, LBP in terms of duration, intensity, and disability was investigated over a two-year follow-up period. Vibration measurements were made on representative samples of machines and vehicles. The following measures of daily WBV exposure were obtained: (i) 8-h energy-equivalent frequency-weighted acceleration (highest axis), A(8)(max) in ms(-2) r.m.s.; (ii) A(8)(sum) (root-sum-of-squares) in ms(-2) r.m.s.; (iii) Vibration Dose Value (highest axis), VDV(max) in ms(-1.75); (iv) VDV(sum) (root-sum-of-quads) in ms(-1.75). The cumulative incidence of LBP over the follow-up period was 38.6%. The incidence of high pain intensity and severe disability was 16.8 and 14.4%, respectively. After adjustment for several confounders, VDV(max) or VDV(sum) gave better predictions of LBP outcomes over time than A(8)(max) or A(8)(sum), respectively. Poor predictions were obtained with A(8)(max), which is the currently preferred measure of daily WBV exposure in European countries. In multivariate data analysis, physical work load was a significant predictor of LBP outcomes over the follow-up period. Perceived psychosocial work environment was not associated with LBP.
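The four daily-exposure measures can be sketched from per-axis values as follows (illustrative numbers; the ISO 2631-1 axis multiplying factors are omitted for brevity):

```python
# Hedged sketch of the four daily whole-body-vibration exposure measures.
# a8: per-axis frequency-weighted r.m.s. accelerations (m/s^2), already
# normalized to 8 h; vdv: per-axis vibration dose values (m/s^1.75).
a8 = {'x': 0.3, 'y': 0.4, 'z': 0.5}
vdv = {'x': 6.0, 'y': 7.0, 'z': 9.0}

a8_max = max(a8.values())                           # A(8)(max): highest axis
a8_sum = sum(v**2 for v in a8.values()) ** 0.5      # A(8)(sum): root-sum-of-squares
vdv_max = max(vdv.values())                         # VDV(max): highest axis
vdv_sum = sum(v**4 for v in vdv.values()) ** 0.25   # VDV(sum): root-sum-of-quads
```

The fourth-power VDV measures emphasize shocks and peaks, which is consistent with the study's finding that they predict LBP outcomes better than the r.m.s. measures.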
See the Light! A Nice Application of Calculus to Chemistry
ERIC Educational Resources Information Center
Boersma, Stuart; McGowan, Garrett
2007-01-01
Some simple modeling with Riemann sums can be used to develop Beer's Law, which describes the relationship between the absorbance of light and the concentration of the solution which the light is penetrating. A further application of the usefulness of Beer's Law in creating calibration curves is also presented. (Contains 3 figures.)
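A minimal numerical version of the Riemann-sum argument, with illustrative values of absorptivity, concentration, and path length: each thin layer removes a fraction of the light proportional to its thickness, and the layered sum converges to the exponential attenuation behind Beer's Law.

```python
# Hedged sketch: slice the solution into thin layers and apply
# dI = -k c I dx layer by layer (k is a natural-log absorptivity here,
# for simplicity). The Riemann-sum result approaches I0 * exp(-k c l).
import math

k, c, length = 2.0, 0.1, 1.0   # absorptivity, concentration, path length
n = 10000                      # number of layers in the Riemann sum
dx = length / n

intensity = 1.0                # incident intensity I0 = 1
for _ in range(n):
    intensity -= k * c * intensity * dx   # each layer absorbs a small fraction

exact = math.exp(-k * c * length)         # the continuum limit
```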
Model-Based Reinforcement Learning under Concurrent Schedules of Reinforcement in Rodents
ERIC Educational Resources Information Center
Huh, Namjung; Jo, Suhyun; Kim, Hoseok; Sul, Jung Hoon; Jung, Min Whan
2009-01-01
Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's…
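The trial-and-error value update mentioned above is, in its simplest form, a delta rule: the value estimate moves toward each observed reward by a learning-rate step. A minimal sketch with illustrative numbers:

```python
# Hedged sketch of a simple (model-free) value-function update: after each
# trial, nudge the subjective value toward the received reward by a
# learning-rate fraction of the prediction error.
alpha = 0.1                      # learning rate (illustrative)
value = 0.0                      # initial estimate of future reward
rewards = [1.0, 1.0, 0.0, 1.0]   # outcomes of successive trials

for r in rewards:
    value += alpha * (r - value)   # prediction-error (delta-rule) update
```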
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... measured continuously from the raw exhaust of an engine, its flow-weighted mean concentration is the sum of...
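The flow-weighted mean concentration named in the regulation can be sketched as follows (illustrative numbers, with discrete samples standing in for the continuous raw-exhaust measurement):

```python
# Hedged sketch: each instantaneous concentration is weighted by its share
# of the total exhaust flow, so the flow-weighted mean is the sum of the
# flow-weighted samples divided by the total flow.
conc = [50.0, 80.0, 60.0]    # ppm, sampled continuously from raw exhaust
flow = [1.0, 2.0, 1.0]       # exhaust flow at each sample (e.g. mol/s)

flow_weighted_mean = sum(c * q for c, q in zip(conc, flow)) / sum(flow)
```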
Rates of profit as correlated sums of random variables
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2013-10-01
Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.
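One simple mechanism by which dependence among summands produces heavy tails can be sketched as follows. This is a toy scale-mixture analogue, not the model fitted in the paper: summands sharing a common random scale yield a leptokurtic sum even though each summand is conditionally normal, whereas sums of independent normals stay normal.

```python
# Hedged toy comparison: sums of independent normals vs. sums whose terms
# share a common random scale. The coupled sums show clear excess kurtosis
# ("heavy tails"); all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, k = 200000, 10

# Independent case: sums of k i.i.d. normals remain normal (kurtosis ~ 0).
indep = rng.normal(size=(n, k)).sum(axis=1)

# Dependent case: a shared lognormal scale couples the k summands.
scale = np.exp(rng.normal(scale=0.5, size=(n, 1)))
coupled = (scale * rng.normal(size=(n, k))).sum(axis=1)

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

k_indep = excess_kurtosis(indep)      # near zero
k_coupled = excess_kurtosis(coupled)  # markedly positive: heavy tails
```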
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
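The limitation can be seen in a toy bi-objective minimization problem with three hypothetical points: the middle point is Pareto efficient but unsupported (it lies above the line joining its neighbours), so no convex weighting of the two objectives ever selects it.

```python
# Hedged toy demonstration: point B is nondominated, yet for every weight w
# the weighted sum w*f1 + (1-w)*f2 is minimized by A or C, never by B.
# All coordinates are illustrative.
points = {'A': (0.0, 4.0), 'B': (2.1, 2.1), 'C': (4.0, 0.0)}

def weighted_sum_winner(w):
    return min(points, key=lambda p: w * points[p][0] + (1 - w) * points[p][1])

winners = {weighted_sum_winner(w / 100) for w in range(101)}
# B is never the winner, despite being Pareto efficient (unsupported point).
```

Heuristics that search the efficient set directly, as the paper proposes, can recover such unsupported solutions that every weighted-sum scalarization misses.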
Xu, Gongxian; Liu, Ying; Gao, Qunwang
2016-02-10
This paper deals with multi-objective optimization of the continuous bio-dissimilation process of glycerol to 1,3-propanediol. In order to maximize the production rate of 1,3-propanediol, maximize the conversion rate of glycerol to 1,3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of the by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. Then these multi-objective optimization problems are solved by using the weighted-sum and normal-boundary intersection methods, respectively. Both the Pareto filter algorithm and removal criteria are used to remove those non-Pareto optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.
Exploring local regularities for 3D object recognition
NASA Astrophysics Data System (ADS)
Tian, Huaiwen; Qin, Shengfeng
2016-11-01
In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined together, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
Aksoy, S; Erdil, I; Hocaoglu, E; Inci, E; Adas, G T; Kemik, O; Turkay, R
2018-02-01
The present study indicates that simple and hydatid cysts in the liver are a common health problem in Turkey. The aim of the study is to differentiate different types of hydatid cysts from simple cysts by using diffusion-weighted images. In total, 37 hydatid cysts and 36 simple cysts in the liver were diagnosed. We retrospectively reviewed the medical records of the patients who had undergone both ultrasonography and magnetic resonance imaging. We measured apparent diffusion coefficient (ADC) values of all the cysts and then compared the findings. There was no statistically significant difference between the ADC values of simple cysts and type I hydatid cysts. However, for the other types of hydatid cysts, it is possible to differentiate hydatid cysts from simple cysts using the ADC values. Although in our study we could not differentiate between type I hydatid cysts and simple cysts in the liver, diffusion-weighted images are very useful for differentiating the other types of hydatid cysts from simple cysts using the ADC values.
Lower Cardiac Vagal Tone in Non-Obese Healthy Men with Unfavorable Anthropometric Characteristics
Ramos, Plínio S.; Araújo, Claudio Gil S.
2010-01-01
OBJECTIVES: To determine if there are differences in cardiac vagal tone values in non-obese healthy, adult men with and without unfavorable anthropometric characteristics. INTRODUCTION: It is well established that obesity reduces cardiac vagal tone. However, it remains unknown if decreases in cardiac vagal tone can be observed early in non-obese healthy, adult men presenting unfavorable anthropometric characteristics. METHODS: Among 1688 individuals assessed between 2004 and 2008, we selected 118 non-obese (BMI <30 kg/m2), healthy men (no known disease conditions or regular use of relevant medications), aged between 20 and 77 years (42 ± 12 years). Their evaluation included clinical examination, anthropometric assessment (body height and weight, sum of six skinfolds, waist circumference and somatotype), a 4-second exercise test to estimate cardiac vagal tone and a maximal cardiopulmonary exercise test to exclude individuals with myocardial ischemia. The same physician performed all procedures. RESULTS: A lower cardiac vagal tone was found for the individuals in the higher quintiles (unfavorable anthropometric characteristics) of BMI (p=0.005), sum of six skinfolds (p=0.037) and waist circumference (p<0.001). In addition, the more endomorphic individuals also presented a lower cardiac vagal tone (p=0.023), while an ectomorphic build was related to higher cardiac vagal tone values as estimated by the 4-second exercise test (r=0.23; p=0.017). CONCLUSIONS: Non-obese and healthy adult men with unfavorable anthropometric characteristics tend to present lower cardiac vagal tone levels. Early identification of this trend by simple protocols that are non-invasive and risk-free, using select anthropometric characteristics, may be clinically useful in a global strategy to prevent cardiovascular disease. PMID:20126345
Simulations of lattice animals and trees
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Nadler, Walter; Grassberger, Peter
2005-01-01
The scaling behaviour of randomly branched polymers in a good solvent is studied in two to nine dimensions, using as microscopic models lattice animals and lattice trees on simple hypercubic lattices. As a stochastic sampling method we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. Essentially we start simulating percolation clusters (either site or bond), re-weigh them according to the animal (tree) ensemble, and prune or branch the further growth according to a heuristic fitness function. In contrast to previous applications of PERM, this fitness function is not the weight with which the actual configuration would contribute to the partition sum, but is closely related to it. We obtain high statistics of animals with up to several thousand sites in all dimensions 2 <= d <= 9. In addition to the partition sum (number of different animals) we estimate gyration radii and numbers of perimeter sites. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4 and >= 8. In addition, we present the hitherto most precise estimates for growth constants in d >= 3. For clusters with one site attached to an attractive surface, we verify for d >= 3 the superuniversality of the cross-over exponent φ at the adsorption transition predicted by Janssen and Lyssy, but not for d = 2. There, we find φ = 0.480(4) instead of the conjectured φ = 1/2. Finally, we discuss the collapse of animals and trees, arguing that our present version of the algorithm is also efficient for some of the models studied in this context, but showing that it is not very efficient for the 'classical' model for collapsing animals.
Nie, Xiaoling; Wang, Yan; Li, Yaxin; Sun, Lei; Li, Tao; Yang, Minmin; Yang, Xueqiao; Wang, Wenxing
2017-10-01
To investigate the regional background trace element (TE) level in atmospheric deposition (dry and wet), TEs (Fe, Al, V, Cr, Mn, Ni, Cu, Zn, As, Se, Mo, Cd, Ba, and Pb) in 52 rainwater samples and 73 total suspended particles (TSP) samples collected in Mt. Lushan, Southern China, were analyzed using inductively coupled plasma-mass spectrometry (ICP-MS). The results showed that TEs in wet and dry deposition of the target area were significantly elevated compared with values reported within and outside China, and the volume-weighted mean pH of rainwater was 4.43. The relative contributions of wet and dry depositions of TEs vary significantly among elements. The wet deposition fluxes of V, As, Cr, Se, Zn, and Cd considerably exceeded their dry deposition fluxes, while dry deposition dominated the removal of pollution elements such as Mo, Cu, Ni, Mn, and Al. The summed dry deposition flux was four times higher than the summed wet deposition flux. Prediction results based on a simple accumulation model found that the content of seven toxic elements (Cr, Ni, Cu, Zn, As, Cd, and Pb) in soils could increase rapidly due to the impact of annual atmospheric deposition, and the increases reached 0.063, 0.012, 0.026, 0.459, 0.076, 0.004, and 0.145 mg kg-1, respectively. In addition, the annual increasing rates ranged from 0.05% (Cr and Ni) to 2.08% (Cd). It was also predicted that atmospheric deposition induced the accumulation of Cr and Cd in surface soils. Cd was the critical element with the greatest potential ecological risk among all the elements in atmospheric deposition.
NASA Astrophysics Data System (ADS)
Fayache, M. S.; Sharma, S. Shelley; Zamick, L.
1996-10-01
Shell model calculations are performed for magnetic dipole excitations in 8Be and 10Be, first with a quadrupole-quadrupole interaction (Q·Q) and then with a realistic interaction. The calculations are performed both in a 0p space and in a large space which includes all 2ℏω excitations. In the 0p space with Q·Q we have an analytic expression for the energies of all states. In this limit we find that in 10Be the L=1, S=0 scissors mode with isospin T=1 is degenerate with that of T=2. By projection from an intrinsic state we can obtain simple expressions for B(M1) to the scissors modes in 8Be and 10Be. We plot cumulative sums for energy-weighted isovector orbital transitions from J=0+ ground states to the 1+ excited states. These have the structure of a low-energy plateau and a steep rise to a high-energy plateau. The relative magnitudes of these plateaux are discussed. By comparing 8Be and 10Be we find that, contrary to the behaviour in heavy deformed nuclei, B(M1)orbital is not proportional to B(E2). On the other hand, a sum rule which relates B(M1) to the difference (B(E2)isoscalar - B(E2)isovector) succeeds in describing the difference in behaviours in the two nuclei. The results for Q·Q and the realistic interactions are compared, as are the results in the 0p space and the large (0p + 2ℏω) space. The Wigner supermultiplet scheme is a very useful guide in analyzing the shell model results.
Cosmology and the neutrino mass ordering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hannestad, Steen; Schwetz, Thomas, E-mail: sth@phys.au.dk, E-mail: schwetz@kit.edu
We propose a simple method to quantify a possible exclusion of the inverted neutrino mass ordering from cosmological bounds on the sum of the neutrino masses. The method is based on Bayesian inference and allows for a calculation of the posterior odds of normal versus inverted ordering. We apply the method for a specific set of current data from Planck CMB data and large-scale structure surveys, providing an upper bound on the sum of neutrino masses of 0.14 eV at 95% CL. With this analysis we obtain posterior odds for normal versus inverted ordering of about 2:1. If cosmological data is combined with data from oscillation experiments the odds reduce to about 3:2. For an exclusion of the inverted ordering from cosmology at more than 95% CL, an accuracy of better than 0.02 eV is needed for the sum. We demonstrate that such a value could be reached with planned observations of large scale structure by analysing artificial mock data for a EUCLID-like survey.
Design and test of a four channel motor for electromechanical flight control actuation
NASA Technical Reports Server (NTRS)
1984-01-01
To provide a suitable electromagnetic torque summing approach to flight control system redundancy, a four channel motor capable of sustaining full performance after any two credible failures was designed, fabricated, and tested. The design consists of a single samarium cobalt permanent magnet rotor with four separate three phase windings arrayed in individual stator quadrants around the periphery. Trade studies established the sensitivities of weight and performance to such parameters as design speed, winding pattern, number of poles, magnet configuration, and strength. The motor electromagnetically sums the torque of the individual channels on a single rotor and eliminates complex mechanical gearing arrangements.
Sum and mean. Standard programs for activation analysis.
Lindstrom, R M
1994-01-01
Two computer programs in use for over a decade in the Nuclear Methods Group at NIST illustrate the utility of standard software: programs widely available and widely used, in which (ideally) well-tested public algorithms produce results that are well understood, and thereby capable of comparison, within the community of users. Sum interactively computes the position, net area, and uncertainty of the area of spectral peaks, and can give better results than automatic peak search programs when peaks are very small, very large, or unusually shaped. Mean combines unequal measurements of a single quantity, tests for consistency, and obtains the weighted mean and six measures of its uncertainty.
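The combination step that a program like Mean performs can be sketched as follows, assuming standard inverse-variance weighting and a chi-square consistency statistic; the NIST program computes six uncertainty measures and its details differ.

```python
import math

# Sketch of combining unequal measurements of one quantity, in the spirit
# of the Mean program described above: inverse-variance weighted mean, its
# standard uncertainty, and a chi-square consistency statistic.

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean, its standard uncertainty,
    and the chi-square statistic testing consistency of the inputs."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    sigma_mean = math.sqrt(1.0 / wsum)
    chi2 = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    return mean, sigma_mean, chi2

# Example: three measurements, the third twice as uncertain.
m, sm, chi2 = weighted_mean([10.1, 9.9, 10.0], [0.1, 0.1, 0.2])
```

A chi-square value far above the number of degrees of freedom (here, two) would flag the measurements as mutually inconsistent.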
Losing weight is about balancing calories in (food and drink) with calories out (exercise). Sounds simple, right? But if it were that simple, you and the millions of other women struggling with their weight probably would have figured it out.
Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk
NASA Astrophysics Data System (ADS)
Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi
2016-09-01
Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r (0 < r ≤ 1). We study the average receiving time (ART) of a weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of the edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) for all nodes absorbed at a hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on network order. Thus, the weighted polygon Koch networks are more efficient than expanded Koch networks in receiving information. Finally, compared with previous results (i.e., for Koch networks and weighted Koch networks), our models are more general.
Weighted Scaling in Non-growth Random Networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li
2012-09-01
We propose a weighted model to explain the self-organizing formation of scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weights of all single-edges within it and the strength of a vertex as the sum of weights for those multiple-edges attached to it. The network evolves according to a vertex strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constantly. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into scale-free state. It is interesting that the network also obtains the character of an exponential edge weight distribution. Namely, coexistence of scale-free distribution and exponential distribution emerges.
Highly conductive composites for fuel cell flow field plates and bipolar plates
Jang, Bor Z; Zhamu, Aruna; Song, Lulu
2014-10-21
This invention provides a fuel cell flow field plate or bipolar plate having flow channels on faces of the plate, comprising an electrically conductive polymer composite. The composite is composed of (A) at least 50% by weight of a conductive filler, comprising at least 5% by weight reinforcement fibers, expanded graphite platelets, graphitic nano-fibers, and/or carbon nano-tubes; (B) polymer matrix material at 1 to 49.9% by weight; and (C) a polymer binder at 0.1 to 10% by weight; wherein the sum of the conductive filler weight %, polymer matrix weight % and polymer binder weight % equals 100% and the bulk electrical conductivity of the flow field or bipolar plate is at least 100 S/cm. The invention also provides a continuous process for cost-effective mass production of the conductive composite-based flow field or bipolar plate.
Weaving and neural complexity in symmetric quantum states
Susa, Cristian E.; Girolami, Davide
2017-12-27
Here, we study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.
14 CFR Appendix E to Part 420 - Tables for Explosive Site Plan
Code of Federal Regulations, 2010 CFR
2010-01-01
.... Table E-2—Liquid Propellant Explosive Equivalents. Propellant combinations and their explosive equivalents: LO2/LH2: the larger of 8W^(2/3), where W is the weight of LO2/LH2, or 14% of W. LO2/LH2 + LO2/RP-1: the sum of (20% for LO2/RP-1) plus the larger of 8W^(2/3), where W is the weight of LO2/LH2, or 14% of W. LO2/RP-1: 20% of W up to...
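The LO2/LH2 rule quoted from Table E-2 can be sketched directly; the weight unit (pounds) is assumed from the CFR context.

```python
# Sketch of the LO2/LH2 explosive-equivalent rule quoted above:
# the larger of 8*W^(2/3) or 14% of W, with W the propellant weight
# (pounds assumed). For small W the 8*W^(2/3) term dominates; for very
# large W the 14% term takes over.

def lo2_lh2_equivalent(w):
    """Explosive equivalent weight for an LO2/LH2 combination of weight w."""
    return max(8.0 * w ** (2.0 / 3.0), 0.14 * w)
```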
2013-04-01
from the University of Rochester. Marchetti has worked in digital image processing at Eastman Kodak and in digital control systems at Contraves USA...which was based on a weighted sum of the gain for self and the perceived gain of other stakeholder programs. o A more recent perception of gains weighs...handled with a weighted formula. To the extent that understanding is incomplete (i.e., knowledge of other’s gain is less than 1), a stakeholder program
Exact Maximum-Entropy Estimation with Feynman Diagrams
NASA Astrophysics Data System (ADS)
Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.
2018-02-01
A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.
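For context, the classical form of the maximum-entropy solution under moment constraints E[f_j] = c_j is an exponential family (a standard result, not the paper's diagrammatic expansion):

```latex
p^{*}(x) \;=\; \frac{1}{Z(\lambda)}\,
\exp\!\Bigl(\sum_{j} \lambda_{j} f_{j}(x)\Bigr),
\qquad
Z(\lambda) \;=\; \int \exp\!\Bigl(\sum_{j} \lambda_{j} f_{j}(x)\Bigr)\,dx ,
```

where the multipliers λ_j are fixed implicitly by the constraints; the paper's contribution is an explicit expression for them as a sum over weighted trees.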
Association between anthropometric indices and cardiometabolic risk factors in pre-school children.
Aristizabal, Juan C; Barona, Jacqueline; Hoyos, Marcela; Ruiz, Marcela; Marín, Catalina
2015-11-06
The world health organization (WHO) and the Identification and prevention of dietary- and lifestyle-induced health effects in children and infants- study (IDEFICS), released anthropometric reference values obtained from normal body weight children. This study examined the relationship between WHO [body mass index (BMI) and triceps- and subscapular-skinfolds], and IDEFICS (waist circumference, waist to height ratio and fat mass index) anthropometric indices with cardiometabolic risk factors in pre-school children ranging from normal body weight to obesity. A cross-sectional study with 232 children (aged 4.1 ± 0.05 years) was performed. Anthropometric measurements were collected and BMI, waist circumference, waist to height ratio, triceps- and subscapular-skinfolds sum and fat mass index were calculated. Fasting glucose, fasting insulin, homeostasis model analysis insulin resistance (HOMA-IR), blood lipids and apolipoprotein (Apo) B-100 (Apo B) and Apo A-I were determined. Pearson's correlation coefficient, multiple regression analysis and the receiver-operating characteristic (ROC) curve analysis were run. 51% (n = 73) of the boys and 52% (n = 47) of the girls were of normal body weight, 49% (n = 69) of the boys and 48% (n = 43) of the girls were overweight or obese. Anthropometric indices correlated (p < 0.001) with insulin: [BMI (r = 0.514), waist circumference (r = 0.524), waist to height ratio (r = 0.304), triceps- and subscapular-skinfolds sum (r = 0.514) and fat mass index (r = 0.500)], and HOMA-IR: [BMI (r = 0.509), waist circumference (r = 0.521), waist to height ratio (r = 0.296), triceps- and subscapular-skinfolds sum (r = 0.483) and fat mass index (r = 0.492)]. Similar results were obtained after adjusting by age and sex. The areas under the curve (AUC) to identify children with insulin resistance were significant (p < 0.001) and similar among anthropometric indices (AUC > 0.68 to AUC < 0.76). 
WHO and IDEFICS anthropometric indices correlated similarly with fasting insulin and HOMA-IR. The diagnostic accuracy of the anthropometric indices as a proxy to identify children with insulin resistance was similar. These data do not support the use of waist circumference, waist to height ratio, triceps- and subscapular-skinfolds sum or fat mass index, instead of the BMI, as a proxy to identify pre-school children with insulin resistance, the most frequent alteration found in children ranging from normal body weight to obesity.
A clinimetric approach to assessing quality of life in epilepsy.
Cramer, J A
1993-01-01
Clinimetrics is a concept involving the use of rating scales for clinical phenomena ranging from physical examinations to functional performance. Clinimetric or rating scales can be used for defining patient status and changes that occur during long-term observation. The scores derived from such scales can be used as guidelines for intervention, treatment, or prediction of outcome. In epilepsy, clinimetric scales have been developed for assessing seizure frequency, seizure severity, adverse effects related to antiepileptic drugs (AEDs), and quality of life after surgery for epilepsy. The VA Epilepsy Cooperative Study seizure rating scale combines frequency and severity in a weighted scoring system for simple and complex partial and generalized tonic-clonic seizures, summing all items in a total seizure score. Similarly, the rating scales for systemic toxicity and neurotoxicity use scores weighted for severity for assessing specific adverse effects typically related to AEDs. A composite score, obtained by adding the scores for seizures, systemic toxicity, and neurotoxicity, represents the overall status of the patient at a given time. The Chalfont Seizure Severity Scale also applies scores relative to the impact of a given item on the patient, without factoring in seizure frequency. The Liverpool Seizure Severity Scale is a patient questionnaire covering perceived seizure severity and the impact of ictal and postictal events. The UCLA Epilepsy Surgery Inventory (ESI-55) assesses quality of life for patients who have undergone surgery for epilepsy using generic health status instruments with additional epilepsy-specific items.(ABSTRACT TRUNCATED AT 250 WORDS)
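A weighted composite score of the kind described for the VA scale can be sketched as follows; the item names and weights here are hypothetical placeholders, not the published scale's values.

```python
# Illustrative sketch of a clinimetric composite score: item counts are
# weighted by severity, summed into a seizure score, and combined with
# toxicity ratings. The weights below are invented for illustration.

SEIZURE_WEIGHTS = {"simple_partial": 1, "complex_partial": 2, "tonic_clonic": 4}

def seizure_score(counts):
    """Weighted sum of seizure counts by seizure type."""
    return sum(SEIZURE_WEIGHTS[kind] * n for kind, n in counts.items())

def composite_score(counts, systemic_toxicity, neurotoxicity):
    """Seizure score plus toxicity ratings: overall patient status."""
    return seizure_score(counts) + systemic_toxicity + neurotoxicity

# Example: 3 simple partial seizures, 1 tonic-clonic, mild toxicity ratings.
total = composite_score({"simple_partial": 3, "tonic_clonic": 1},
                        systemic_toxicity=2, neurotoxicity=1)
```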
On the physical significance of the Effective Independence method for sensor placement
NASA Astrophysics Data System (ADS)
Jiang, Yaoguang; Li, Dongsheng; Song, Gangbing
2017-05-01
Optimally deploying sparse sensors for better damage identification and structural health monitoring is a challenging task. The Effective Independence (EI) method, one of the most influential sensor placement methods, is discussed in this paper. Specifically, we address the effect of different weighting coefficients on the maximization of the Fisher information matrix (FIM) and the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the EI method. By analyzing the widely used EI method, we found that the absolute identification space put forward along with the EI method is preferable for ensuring the maximization of the FIM, rather than the original EI coefficient post-multiplied by a weighting matrix. That is, deleting the row with the minimum EI coefficient cannot achieve the objective of maximizing the trace of the FIM as initially conceived. Furthermore, we observed that in the computation of the EI method, the sum of each retained row in the absolute identification space is a constant in each iteration. This property is revealed distinctly by the product of the target mode and its transpose, whose form is similar to an alternative formula of the EI method through orthogonal-triangular (QR) decomposition previously proposed by the authors. With it, the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the EI method can be manifested from a new perspective. Finally, two simple examples demonstrate the above two observations.
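The EI deletion loop discussed above can be sketched for two target modes; the mode-shape values below are toy numbers, not from the paper. Each row's EI coefficient is the corresponding diagonal entry of the projection matrix A(AᵀA)⁻¹Aᵀ, and the coefficients always sum to the number of modes, the constant-sum property the abstract points out.

```python
# Minimal sketch of the Effective Independence (EI) iteration for a
# 2-column mode-shape matrix: compute each candidate row's EI coefficient
# (diagonal of the projection A (A^T A)^{-1} A^T) and delete the row with
# the smallest coefficient until the sensor budget is met.

def ei_coefficients(rows):
    """EI coefficient of each row of a 2-column mode-shape matrix."""
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    det = s11 * s22 - s12 * s12          # determinant of the FIM A^T A
    inv = ((s22 / det, -s12 / det), (-s12 / det, s11 / det))
    return [
        r[0] * (inv[0][0] * r[0] + inv[0][1] * r[1])
        + r[1] * (inv[1][0] * r[0] + inv[1][1] * r[1])
        for r in rows
    ]

def effective_independence(phi, n_sensors):
    """Iteratively delete the least informative candidate DOF."""
    rows = list(phi)
    while len(rows) > n_sensors:
        ed = ei_coefficients(rows)
        rows.pop(ed.index(min(ed)))
    return rows

# Demo: pick 3 sensor locations from 5 candidates (toy mode shapes).
candidates = [(1.0, 0.2), (0.3, 1.0), (0.5, 0.5), (0.9, 0.1), (0.2, 0.8)]
kept = effective_independence(candidates, 3)
```

The sum of the EI coefficients equals the trace of the projection matrix, i.e. the number of modes, at every iteration; this is the constant discussed in the abstract.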
Bucknell On-Line Circulation System; A Library Staff View.
ERIC Educational Resources Information Center
Rivoire, Helena
The Bucknell On-Line Circulation System (BLOCS) was designed to meet the requirements of a circulation system of the Ellen Clarke Bertrand Library of Bucknell University. The requirements for an automated system were, in sum: (1) a system whose operations were not only reliable but simple enough for student assistants (many of whom work only 10…
A Simplified Technique for Scoring DSM-IV Personality Disorders with the Five-Factor Model
ERIC Educational Resources Information Center
Miller, Joshua D.; Bagby, R. Michael; Pilkonis, Paul A.; Reynolds, Sarah K.; Lynam, Donald R.
2005-01-01
The current study compares the use of two alternative methodologies for using the Five-Factor Model (FFM) to assess personality disorders (PDs). Across two clinical samples, a technique using the simple sum of selected FFM facets is compared with a previously used prototype matching technique. The results demonstrate that the more easily…
Code of Federal Regulations, 2012 CFR
2012-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2013 CFR
2013-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2011 CFR
2011-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2010 CFR
2010-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2014 CFR
2014-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Commission has entered into an effective agreement under subsection 274b. of the Act. Non-agreement State... access control measures that are not related to the safe use of, or security of, radiological materials... equivalent means the sum of the products of the dose equivalent to the organ or tissue and the weighting...
ERIC Educational Resources Information Center
Lohnas, Lynn J.; Kahana, Michael J.
2014-01-01
According to the retrieved context theory of episodic memory, the cue for recall of an item is a weighted sum of recently activated cognitive states, including previously recalled and studied items as well as their associations. We show that this theory predicts there should be compound cuing in free recall. Specifically, the temporal contiguity…
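A toy sketch of a weighted-sum retrieval cue in the spirit of retrieved context theory follows; the geometric decay weighting and the context vectors are illustrative assumptions, not the model's fitted parameters.

```python
# Toy weighted-sum retrieval cue: blend the context vectors of recently
# activated items, weighting the most recent most heavily. Compound cuing
# arises because the cue carries traces of several prior recalls at once.

def compound_cue(contexts, decay=0.5):
    """Weighted sum of context vectors; most recent gets the largest weight."""
    dim = len(contexts[0])
    cue = [0.0] * dim
    weight = 1.0
    for ctx in reversed(contexts):       # iterate most recent first
        cue = [c + weight * x for c, x in zip(cue, ctx)]
        weight *= decay                  # geometric decay into the past
    return cue
```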
76 FR 44815 - Chlorantraniliprole; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-27
... effects resulting from short-term dosing were observed. Therefore, the aggregate risk is the sum of the... increased liver weight (males only). Incidental oral, short/intermediate-term (1 to 30 days): N/A. There was no hazard identified via the oral route over the short- and intermediate-term and therefore, no...
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the... boat. (b) The boat must be loaded with a quantity of weight that, when submerged, is equal to the sum...
Goshvarpour, Ateke; Goshvarpour, Atefeh
2018-04-30
Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of the data can achieve a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature-level fusion approach was proposed. First, using the theory of information, two similarity indicators of the signal were extracted: correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbor (kNN), the performance of each index in classifying the HRV signals of meditators and non-meditators was appraised. Then, three fusion rules, including division, product, and weighted sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm to define the weights of each feature based on statistical p-values. The performance of HRV classification using combined features was compared with that using the non-combined features. Overall, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability and proficiency of the division and weighted sum rules in improving the classifier accuracies.
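The three fusion rules named above can be sketched on two scalar features; the inverse-p-value weighting is an assumption about how statistical p-values might set the weights, not the paper's exact formula.

```python
# Sketch of feature-level fusion rules for two similarity features:
# division, product, and weighted sum. The weighted-sum weights here are
# assumed to be inverse p-values (smaller p-value -> larger weight),
# normalized to sum to one; the paper's precise scheme may differ.

def fuse(f1, f2, rule, p1=0.01, p2=0.04):
    """Combine two feature values under the named fusion rule."""
    if rule == "product":
        return f1 * f2
    if rule == "division":
        return f1 / f2
    if rule == "weighted_sum":
        w1, w2 = 1.0 / p1, 1.0 / p2      # assumed p-value-based weights
        return (w1 * f1 + w2 * f2) / (w1 + w2)
    raise ValueError("unknown rule: " + rule)
```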
Enhanced linear-array photoacoustic beamforming using modified coherence factor.
Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador
2018-02-01
Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, using a DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. Coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS algebra and propose the use of a delay-multiply-and-sum beamformer instead of the available DAS in the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF yields about 45% and 40% improvements over CF in terms of signal-to-noise ratio and full-width-half-maximum, respectively. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
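The CF weighting and the DMAS-in-the-numerator modification described above can be sketched on already-delayed per-channel samples; normalization conventions vary across the beamforming literature, so treat this as an illustrative sketch rather than the paper's exact formulation.

```python
import math

# Sketch of coherence-factor weighting on delayed per-channel samples:
# CF = |DAS|^2 / (N * sum of squares). The modified CF (MCF) described in
# the abstract swaps the DAS term in the numerator for a
# delay-multiply-and-sum (DMAS) term. Sample values are toy numbers.

def das(samples):
    """Delay-and-sum: plain sum of the delayed channel samples."""
    return sum(samples)

def dmas(samples):
    """Pairwise delay-multiply-and-sum with sign-preserving square root."""
    total = 0.0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            prod = samples[i] * samples[j]
            total += math.copysign(math.sqrt(abs(prod)), prod)
    return total

def coherence_factor(samples, numerator=das):
    """CF with a pluggable numerator beamformer (das -> CF, dmas -> MCF)."""
    n = len(samples)
    denom = n * sum(s * s for s in samples)
    return numerator(samples) ** 2 / denom if denom else 0.0
```

Fully coherent channels give CF = 1, while alternating-sign (incoherent) channels give CF = 0, which is why CF suppresses off-axis contributions.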
Roles of antinucleon degrees of freedom in the relativistic random phase approximation
NASA Astrophysics Data System (ADS)
Kurasawa, Haruki; Suzuki, Toshio
2015-11-01
The roles of antinucleon degrees of freedom in the relativistic random phase approximation (RPA) are investigated. The energy-weighted sum of the RPA transition strengths is expressed in terms of the double commutator between the excitation operator and the Hamiltonian, as in nonrelativistic models. The commutator, however, should not be calculated in the usual way in the local field theory, because, otherwise, the sum vanishes. The sum value obtained correctly from the commutator is infinite, owing to the Dirac sea. Most of the previous calculations take into account only some of the nucleon-antinucleon states, in order to avoid divergence problems. As a result, RPA states with negative excitation energy appear, which make the sum value vanish. Moreover, disregarding the divergence changes the sign of nuclear interactions in the RPA equation that describes the coupling of the nucleon particle-hole states with the nucleon-antinucleon states. Indeed, the excitation energies of the spurious state and giant monopole states in the no-sea approximation are dominated by these unphysical changes. The baryon current conservation can be described without touching the divergence problems. A schematic model with separable interactions is presented, which makes the structure of the relativistic RPA transparent.
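The energy-weighted sum referred to above has, in the nonrelativistic setting, the standard double-commutator form (a textbook identity, quoted here for context):

```latex
S_{1} \;\equiv\; \sum_{n} \left(E_{n}-E_{0}\right)
\bigl|\langle n|F|0\rangle\bigr|^{2}
\;=\; \frac{1}{2}\,\langle 0|\,[F,[H,F]]\,|0\rangle .
```

The abstract's point is that evaluating this commutator naively in the local relativistic theory makes the sum vanish, while the value obtained correctly, including the Dirac sea, is infinite.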
On the Latent Variable Interpretation in Sum-Product Networks.
Peharz, Robert; Gens, Robert; Pernkopf, Franz; Domingos, Pedro
2017-10-01
One of the central themes in Sum-Product Networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation adds semantic structure, enables the EM algorithm, and allows MPE inference to be performed efficiently. In the literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach is in conflict with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights, and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in the literature was never proven to be correct. We show that it is indeed correct when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets.
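As a toy illustration of the sum-node/latent-variable reading (our own minimal example, not the paper's model): a product node multiplies its children, a sum node takes a weighted sum with weights summing to one, and setting both indicators of a variable to 1 marginalizes that variable out. The sum node can then be read as a hidden variable Z with P(Z = component 1) = 0.7 selecting a mixture component:

```python
# Minimal SPN over two binary variables A, B (an illustrative sketch).
# Each variable is passed as a pair of indicator values
# (indicator_for_0, indicator_for_1); (1, 1) marginalises the variable out.
def spn(a, b):
    p1 = a[1] * b[1]            # product node: A=1 and B=1 component
    p2 = a[0] * b[0]            # product node: A=0 and B=0 component
    return 0.7 * p1 + 0.3 * p2  # sum node with normalised weights

print(spn((0, 1), (0, 1)))  # P(A=1, B=1) = 0.7
print(spn((1, 1), (0, 1)))  # P(B=1) = 0.7 (A marginalised out)
print(spn((1, 1), (1, 1)))  # total probability mass = 1.0
```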
Verge and Foliot Clock Escapement: A Simple Dynamical System
ERIC Educational Resources Information Center
Denny, Mark
2010-01-01
The earliest mechanical clocks appeared in Europe in the 13th century. From about 1250 CE to 1670 CE, these simple clocks consisted of a weight suspended from a rope or chain that was wrapped around a horizontal axle. To tell time, the weight must fall with a slow uniform speed, but, under the action of gravity alone, such a suspended weight would…
Tahiraj, Enver; Cubela, Mladen; Ostojic, Ljerka; Rodek, Jelena; Zenic, Natasa; Sekulic, Damir; Lesnik, Blaz
2016-05-16
Adolescence is considered to be the most important period for the prevention of substance use and misuse (SUM). The aim of this study was to investigate the problem of SUM and to establish potentially important factors associated with SUM in Kosovar adolescents. Multi-stage simple random sampling was used to select participants. At the end of their high school education, 980 adolescents (623 females) ages 17 to 19 years old were enrolled in the study. The prevalence of smoking, alcohol consumption (measured by the Alcohol Use Disorders Identification Test, AUDIT), and illegal drug use (dependent variables), as well as socio-demographic, scholastic, familial, and sports-related factors (independent variables), were assessed. Boys smoked cigarettes more often than girls, with a daily-smoking prevalence of 16% among boys and 9% among girls (OR = 1.85, 95% CI = 1.25-2.75). The prevalence of harmful drinking (i.e., AUDIT scores of >10) was alarmingly high (41% and 37% for boys and girls, respectively; OR = 1.13, 95% CI = 0.87-1.48), while 17% of boys and 9% of girls used illegal drugs (OR = 2.01, 95% CI = 1.35-2.95). The behavioral grade (observed as excellent-average-poor) was the factor most significantly correlated with SUM in both boys and girls, with lower behavioral grades among those adolescents who consume substances. In girls, lower maternal education levels were associated with a decreased likelihood of SUM, whereas sports achievement was negatively associated with risky drinking. In boys, sports achievement decreased the likelihood of daily smoking. Information on the factors associated with SUM should be disseminated among sports and school authorities.
Properties of Augmented Kohn-Sham Potential for Energy as Simple Sum of Orbital Energies.
Zahariev, Federico; Levy, Mel
2017-01-12
A recent modification to the traditional Kohn-Sham method (Levy, M.; Zahariev, F. Phys. Rev. Lett. 2014, 113, 113002; Levy, M.; Zahariev, F. Mol. Phys. 2016, 114, 1162-1164), which gives the ground-state energy as a direct sum of the occupied orbital energies, is discussed and its properties are numerically illustrated on representative atoms and ions. It is observed that current approximate density functionals tend to give surprisingly small errors for the highest occupied orbital energies that are obtained with the augmented potential. The appropriately shifted Kohn-Sham potential is the basic object within this direct-energy Kohn-Sham method and needs to be approximated. To facilitate approximations, several constraints on the augmented Kohn-Sham potential are presented.
Futschik, K; Ammann, M; Bachmayer, S; Kenndler, E
1993-08-06
The ionic species that are formed during the microbial growth of Escherichia coli were determined by capillary isotachophoresis as a function of the time of cultivation. This formation was indicated by the change in a sum parameter, the impedance of the nutrient broth, measured by a special electrode system. Based on the determination of the individual ions formed under the given conditions (identified as acetate, lactate, alpha-ketoglutarate, fumarate, ammonium and probably a simple amine), the change in conductivity was calculated and compared with that obtained by the impedance measurement of the bulk medium. From the results it can be concluded that the change in the sum parameter as a function of time originates from the ions determined.
Saris, W H; Astrup, A; Prentice, A M; Zunft, H J; Formiguera, X; Verboeket-van de Venne, W P; Raben, A; Poppitt, S D; Seppelt, B; Johnston, S; Vasilaras, T H; Keogh, G F
2000-10-01
To investigate the long-term effects of changes in dietary carbohydrate/fat ratio and simple vs complex carbohydrates. Randomized controlled multicentre trial (CARMEN), in which subjects were allocated for 6 months either to a seasonal control group (no intervention) or to one of three experimental groups: a control diet group (dietary intervention typical of the average national intake); a low-fat high simple carbohydrate group; or a low-fat high complex carbohydrate group. Three hundred and ninety-eight moderately obese adults. The change in body weight was the primary outcome; changes in body composition and blood lipids were secondary outcomes. Body weight loss in the low-fat high simple carbohydrate and low-fat high complex carbohydrate groups was 0.9 kg (P < 0.05) and 1.8 kg (P < 0.001), while the control diet and seasonal control groups gained weight (0.8 and 0.1 kg, NS). Fat mass changed by -1.3 kg (P < 0.01), -1.8 kg (P < 0.001) and +0.6 kg (NS) in the low-fat high simple carbohydrate, low-fat high complex carbohydrate and control diet groups, respectively. Changes in blood lipids did not differ significantly between the dietary treatment groups. Our findings suggest that reduction of fat intake results in a modest but significant reduction in body weight and body fatness. The concomitant increase in either simple or complex carbohydrates did not lead to significant differences in weight change. No adverse effects on blood lipids were observed. These findings underline the importance of this dietary change and its potential public health impact on obesity.
Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis
Nieves, Jeri W.; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J. Americo M.; Sorenson, Eric J.; D’Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi
2017-01-01
IMPORTANCE There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). OBJECTIVE To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. DESIGN, SETTING, AND PARTICIPANTS A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. EXPOSURES Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). MAIN OUTCOMES AND MEASURES Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale–Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). RESULTS Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5–68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices of "good" micronutrients and "good" food groups, constructed using the weighted quantile sum regression method, were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. CONCLUSIONS AND RELEVANCE Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. 
We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. Those responsible for nutritional care of the patient with ALS should consider promoting fruit and vegetable intake since they are high in antioxidants and carotenes. PMID:27775751
Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights
NASA Astrophysics Data System (ADS)
Mohammed, Husam Jasim; Kasim, Maznah Mat; Shaharanee, Izwan Nizal Mohd
2017-11-01
This paper compares the performances of four rank-based weighting assessment techniques, Rank Sum (RS), Rank Reciprocal (RR), Rank Exponent (RE), and Rank Order Centroid (ROC), on five identified e-learning criteria to select the best weighting method. A total of 35 experts in a public university in Malaysia were asked to rank the criteria and to evaluate five e-learning approaches: blended learning, flipped classroom, ICT-supported face-to-face learning, synchronous learning, and asynchronous learning. The best ranked criteria weights, defined as the weights that have the least total absolute difference from the geometric mean of all weights, were then used to select the most suitable e-learning approach by using the TOPSIS method. The results show that the RR weights are the best, while the flipped classroom is the most suitable approach to implement. This paper develops a decision framework to aid decision makers (DMs) in choosing the most suitable weighting method for solving MCDM problems.
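Three of the four rank-based schemes have standard closed forms (RS, RR, ROC; RE additionally needs an exponent parameter). A minimal sketch, assuming the usual textbook definitions for n ranked criteria:

```python
def rank_sum(n):
    # Rank Sum: weight proportional to (n - r + 1) for rank r = 1..n.
    raw = [n - r + 1 for r in range(1, n + 1)]
    s = sum(raw)
    return [x / s for x in raw]

def rank_reciprocal(n):
    # Rank Reciprocal: weight proportional to 1/r.
    raw = [1 / r for r in range(1, n + 1)]
    s = sum(raw)
    return [x / s for x in raw]

def rank_order_centroid(n):
    # ROC: w_r = (1/n) * sum_{k=r}^{n} 1/k.
    return [sum(1 / k for k in range(r, n + 1)) / n for r in range(1, n + 1)]

for name, w in [("RS", rank_sum(5)), ("RR", rank_reciprocal(5)),
                ("ROC", rank_order_centroid(5))]:
    print(name, [round(x, 3) for x in w])  # each list sums to 1
```

All three produce decreasing, normalized weights; they differ in how steeply weight falls off with rank, which is exactly what the paper's comparison exercises.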
Analysis of Environmental Chemical Mixtures and Non-Hodgkin Lymphoma Risk in the NCI-SEER NHL Study.
Czarnota, Jenna; Gennings, Chris; Colt, Joanne S; De Roos, Anneclaire J; Cerhan, James R; Severson, Richard K; Hartge, Patricia; Ward, Mary H; Wheeler, David C
2015-10-01
There are several suspected environmental risk factors for non-Hodgkin lymphoma (NHL). The associations between NHL and environmental chemical exposures have typically been evaluated for individual chemicals (i.e., one-by-one). We determined the association between a mixture of 27 correlated chemicals measured in house dust and NHL risk. We conducted a population-based case-control study of NHL in four National Cancer Institute-Surveillance, Epidemiology, and End Results centers--Detroit, Michigan; Iowa; Los Angeles County, California; and Seattle, Washington--from 1998 to 2000. We used weighted quantile sum (WQS) regression to model the association of a mixture of chemicals and risk of NHL. The WQS index was a sum of weighted quartiles for 5 polychlorinated biphenyls (PCBs), 7 polycyclic aromatic hydrocarbons (PAHs), and 15 pesticides. We estimated chemical mixture weights and effects for study sites combined and for each site individually, and also for histologic subtypes of NHL. The WQS index was statistically significantly associated with NHL overall [odds ratio (OR) = 1.30; 95% CI: 1.08, 1.56; p = 0.006; for one quartile increase] and in the study sites of Detroit (OR = 1.71; 95% CI: 1.02, 2.92; p = 0.045), Los Angeles (OR = 1.44; 95% CI: 1.00, 2.08; p = 0.049), and Iowa (OR = 1.76; 95% CI: 1.23, 2.53; p = 0.002). The index was marginally statistically significant in Seattle (OR = 1.39; 95% CI: 0.97, 1.99; p = 0.071). The most highly weighted chemicals for predicting risk overall were PCB congener 180 and propoxur. Highly weighted chemicals varied by study site; PCBs were more highly weighted in Detroit, and pesticides were more highly weighted in Iowa. An index of chemical mixtures was significantly associated with NHL. Our results show the importance of evaluating chemical mixtures when studying cancer risk.
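The core of a WQS index is simple to sketch: score each exposure into quartiles, then take a weighted sum with nonnegative weights that sum to one. In the actual method the weights are estimated (typically by bootstrap) and the index enters a regression; here the data and weights are made-up numbers purely to illustrate the index construction:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.lognormal(size=(100, 3))      # three hypothetical exposures

# Quartile-score each chemical: 0 (lowest quarter) .. 3 (highest quarter).
cuts = np.quantile(X, [0.25, 0.5, 0.75], axis=0)
Q = np.array([np.searchsorted(cuts[:, j], X[:, j]) for j in range(3)]).T

w = np.array([0.6, 0.3, 0.1])         # illustrative weights, sum to one
wqs_index = Q @ w                     # one index value per subject
print(wqs_index[:5])
```

A one-unit increase in `wqs_index` corresponds to a one-quartile increase in the weighted mixture, which is the exposure contrast behind the reported odds ratios.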
A universal reduced glass transition temperature for liquids
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1979-01-01
Data on the dependence of the glass transition temperature on the molecular structure for low-molecular-weight liquids are analyzed in order to determine whether Boyer's reduced glass transition temperature (1952) is a universal constant as proposed. It is shown that the Boyer ratio varies widely depending on the chemical nature of the molecule. It is pointed out that a characteristic temperature ratio, defined by the ratio of the sum of the melting temperature and the boiling temperature to the sum of the glass transition temperature and the boiling temperature, is a universal constant independent of the molecular structure of the liquid. The average value of the ratio obtained from data for 65 liquids is 1.15.
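The proposed characteristic ratio is simple arithmetic; a quick sketch with hypothetical temperatures (in kelvin, chosen only to illustrate a value near the reported average of 1.15):

```python
def fedors_ratio(Tm, Tb, Tg):
    # Characteristic ratio (Tm + Tb) / (Tg + Tb):
    # melting, boiling, and glass transition temperatures, all in kelvin.
    return (Tm + Tb) / (Tg + Tb)

# Illustrative (made-up) values for a low-molecular-weight liquid:
print(round(fedors_ratio(Tm=300.0, Tb=450.0, Tg=200.0), 3))  # 1.154
```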
Diet models with linear goal programming: impact of achievement functions.
Gerdessen, J C; de Vries, J H M
2015-11-01
Diet models based on goal programming (GP) are valuable tools in designing diets that comply with nutritional, palatability and cost constraints. Results derived from GP models are usually very sensitive to the type of achievement function that is chosen. This paper aims to provide a methodological insight into several achievement functions. It describes the extended GP (EGP) achievement function, which enables the decision maker to use either a MinSum achievement function (which minimizes the sum of the unwanted deviations) or a MinMax achievement function (which minimizes the largest unwanted deviation), or a compromise between both. An additional advantage of EGP models is that multiple solutions can be obtained from one set of data and weights. We use small numerical examples to illustrate the 'mechanics' of achievement functions. Then, the EGP achievement function is demonstrated on a diet problem with 144 foods, 19 nutrients and several types of palatability constraints, in which the nutritional constraints are modeled with fuzzy sets. The choice of achievement function affects the results of diet models. MinSum achievement functions can give rise to solutions that are sensitive to weight changes and that pile all unwanted deviations on a limited number of nutritional constraints. MinMax achievement functions spread the unwanted deviations as evenly as possible, but may create many (small) deviations. EGP comprises both types of achievement functions, as well as compromises between them. It can thus, from one data set, find a range of solutions with various properties.
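The MinSum/MinMax contrast can be shown on a toy diet problem. This sketch uses made-up numbers (two foods, three conflicting nutrient goals) and a brute-force grid in place of an LP solver, purely to make the two achievement functions' behavior visible:

```python
import itertools

# Toy diet problem (hypothetical data): two foods, three nutrient goals that
# cannot all be met exactly, so the achievement function matters.
nutrient = [[2.0, 1.0],   # nutrient 0 per unit of food 0, food 1
            [1.0, 3.0],
            [1.0, 1.0]]
goal = [10.0, 6.0, 3.0]

def deviations(x):
    # Unwanted deviations: |achieved - goal| for each nutrient.
    return [abs(sum(nutrient[j][i] * x[i] for i in range(2)) - goal[j])
            for j in range(3)]

# Brute-force grid of food amounts, in place of an LP solver.
grid = [i * 0.1 for i in range(101)]
candidates = list(itertools.product(grid, grid))

min_sum = min(candidates, key=lambda x: sum(deviations(x)))   # MinSum
min_max = min(candidates, key=lambda x: max(deviations(x)))   # MinMax
print("MinSum:", min_sum, [round(d, 2) for d in deviations(min_sum)])
print("MinMax:", min_max, [round(d, 2) for d in deviations(min_max)])
```

As the paper describes, the MinSum solution here piles the entire deviation on one goal, while the MinMax solution spreads smaller deviations across all three.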
"Compacted" procedures for adults' simple addition: A review and critique of the evidence.
Chen, Yalin; Campbell, Jamie I D
2018-04-01
We review recent empirical findings and arguments proffered as evidence that educated adults solve elementary addition problems (3 + 2, 4 + 1) using so-called compacted procedures (e.g., unconscious, automatic counting), a conclusion that could have significant pedagogical implications. We begin with the large-sample experiment reported by Uittenhove, Thevenot and Barrouillet (2016, Cognition, 146, 289-303), which tested 90 adults on the 81 single-digit addition problems from 1 + 1 to 9 + 9. They identified the 12 very-small addition problems with different operands both ≤ 4 (e.g., 4 + 3) as a distinct subgroup of problems solved by unconscious, automatic counting: these items yielded a near-perfectly linear increase in answer response time (RT) yoked to the sum of the operands. Using the data reported in the article, however, we show that there are clear violations of the sum-counting model's predictions among the very-small addition problems, and that there is no real RT boundary associated with addends ≤ 4. Furthermore, we show that a well-known associative retrieval model of addition facts, the network interference theory (Campbell, 1995), predicts the results observed for these problems with high precision. We also review the other types of evidence adduced for the compacted-procedure theory of simple addition and conclude that these findings are unconvincing in their own right and only distantly consistent with automatic counting. We conclude that the cumulative evidence for fast compacted procedures for adults' simple addition does not justify revision of the long-standing assumption that direct memory retrieval is ultimately the most efficient process of simple addition for nonzero problems, let alone sufficient to recommend significant changes to basic addition pedagogy.
Internalizing Trajectories in Young Boys and Girls: The Whole Is Not a Simple Sum of Its Parts
ERIC Educational Resources Information Center
Carter, Alice S.; Godoy, Leandra; Wagmiller, Robert L.; Veliz, Philip; Marakovitz, Susan; Briggs-Gowan, Margaret J.
2010-01-01
There is support for a differentiated model of early internalizing emotions and behaviors, yet researchers have not examined the course of multiple components of an internalizing domain across early childhood. In this paper we present growth models for the Internalizing domain of the Infant-Toddler Social and Emotional Assessment and its component…
College Savings Plans: Second Generation Progress and Problems.
ERIC Educational Resources Information Center
Olivas, Michael A.
College savings plans, which operate in 20 states, work on a simple premise: parents or grandparents place a lump sum in a contract or make monthly payments, and the state guarantees that the money will be sufficient for the equivalent of tuition and fees in a set period of time in the future. The state can guarantee the return by virtue of pooled assets. States…
The Research Laboratory of Electronics Progress Report Number 132: January 1-December 31, 1989
1990-01-01
between Binaural Hearing and Brainstem Auditory Evoked Potentials in Humans... femtosecond excitation pulses. This gives rise to the characteristic "beating" pattern, which contains sum and difference frequencies... vibrational modes whose simultaneous oscillations yield the "beating"... through a simple optical network consisting of only two lenses, two gratings
The Combinatorial Trace Method in Action
ERIC Educational Resources Information Center
Krebs, Mike; Martinez, Natalie C.
2013-01-01
On any finite graph, the number of closed walks of length k is equal to the sum of the kth powers of the eigenvalues of any adjacency matrix. This simple observation is the basis for the combinatorial trace method, wherein we attempt to count (or bound) the number of closed walks of a given length so as to obtain information about the graph's…
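The identity underlying the trace method is easy to verify numerically: the number of closed walks of length k equals tr(Aᵏ), which in turn equals the sum of the k-th powers of the eigenvalues of A. A small sketch on the 4-cycle (our choice of example graph):

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

k = 4
walks = np.trace(np.linalg.matrix_power(A, k))  # closed k-walks, combinatorially
eigs = np.linalg.eigvalsh(A)                    # A is symmetric
spectral = np.sum(eigs ** k)                    # same count, spectrally
print(walks, round(spectral, 6))                # the two agree
```

Bounding either side of this equality is what lets the method extract graph information from spectra, and vice versa.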
ERIC Educational Resources Information Center
Cai, Li
2013-01-01
Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
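The unidimensional recursion itself is short: after each item, the summed-score distribution is updated by splitting each score's probability across a correct and an incorrect response. A sketch for dichotomous items at a fixed quadrature point (the item probabilities are made up for illustration):

```python
def summed_score_dist(p):
    # Lord-Wingersky recursion: probability of each summed score 0..n
    # for n independent dichotomous items with correct-probabilities p[j].
    dist = [1.0]  # score distribution after 0 items: score 0 with prob 1
    for pj in p:
        new = [0.0] * (len(dist) + 1)
        for score, prob in enumerate(dist):
            new[score] += prob * (1 - pj)   # item answered incorrectly
            new[score + 1] += prob * pj     # item answered correctly
        dist = new
    return dist

dist = summed_score_dist([0.8, 0.6, 0.3])
print([round(x, 4) for x in dist])  # probabilities of scores 0, 1, 2, 3
```

Repeating this at each quadrature point and weighting by the prior yields the summed-score likelihoods and posteriors the recursion is known for; the multidimensional extension changes the quadrature, not this inner loop.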
Modern Geometric Algebra: A (Very Incomplete!) Survey
ERIC Educational Resources Information Center
Suzuki, Jeff
2009-01-01
Geometric algebra is based on two simple ideas. First, the area of a rectangle is equal to the product of the lengths of its sides. Second, if a figure is broken apart into several pieces, the sum of the areas of the pieces equals the area of the original figure. Remarkably, these two ideas provide an elegant way to introduce, connect, and…
Viviani Polytopes and Fermat Points
ERIC Educational Resources Information Center
Zhou, Li
2012-01-01
Given a set of oriented hyperplanes P = {p_1, . . . , p_k} in R^n, define v : R^n → R by v(X) = the sum of the signed distances from X to p_1, . . . , p_k, for any point X ∈ R^n. We give a simple geometric characterization of P for which v is constant, leading to…
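The flavor of the result can be checked numerically: when the unit normals of the hyperplanes sum to zero, v is constant (Viviani's classical theorem for the equilateral triangle is the n = 2, k = 3 case). A small sketch with lines of our own choosing:

```python
import numpy as np

def v(X, normals, points):
    # Signed distance from X to a hyperplane with unit normal n through
    # point a is n . (X - a); v sums these over all hyperplanes.
    return sum(n @ (X - a) for n, a in zip(normals, points))

# Three lines whose inward unit normals (120 degrees apart) sum to zero,
# each at distance 1 from the origin, i.e. an equilateral-triangle setup.
angles = np.deg2rad([90, 210, 330])
normals = [np.array([np.cos(t), np.sin(t)]) for t in angles]
points = [-n for n in normals]

rng = np.random.default_rng(0)
vals = [v(rng.normal(size=2), normals, points) for _ in range(5)]
print([round(x, 10) for x in vals])  # identical values: v is constant
```

When the normals do not sum to zero, v picks up a linear term in X and the constancy fails, which is exactly the characterization the paper develops.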
Kasim-Karakas, Sidika E; Almario, Rogelio U; Cunningham, Wendy
2009-07-01
To compare the effects of protein vs. simple sugars on weight loss, body composition, and metabolic and endocrine parameters in polycystic ovary syndrome (PCOS). A 2-month, free-living, randomized, single-blinded study. University PCOS clinic. Thirty-three patients with PCOS. To achieve a final energy reduction of 450 kcal/day, first the daily energy intake was reduced by 700 kcal; then a 240-kcal supplement containing either whey protein or simple sugars was added. Changes in weight, fat mass, fasting glucose and insulin, plasma lipoproteins, and sex steroids. Twenty-four subjects (13 in the simple sugars group and 11 in the protein group) completed the study. The protein group lost more weight (-3.3 +/- 0.8 kg vs. -1.1 +/- 0.6 kg) and more fat mass (-3.1 +/- 0.9 kg vs. -0.5 +/- 0.6 kg) and had larger decreases in serum cholesterol (-33.0 +/- 8.4 mg/dL vs. -2.3 +/- 6.8 mg/dL), high-density lipoprotein cholesterol (-4.5 +/- 1.3 mg/dL vs. -0.4 +/- 1.3 mg/dL), and apoprotein B (-20 +/- 5 mg/dL vs. 3 +/- 5 mg/dL). In patients with PCOS, a hypocaloric diet supplemented with protein reduced body weight, fat mass, serum cholesterol, and apoprotein B more than the diet supplemented with simple sugars.
Connectotyping: Model Based Fingerprinting of the Functional Connectome
Miranda-Dominguez, Oscar; Mills, Brian D.; Carpenter, Samuel D.; Grant, Kathleen A.; Kroenke, Christopher D.; Nigg, Joel T.; Fair, Damien A.
2014-01-01
A better characterization of how an individual’s brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called “connectotype”, or functional fingerprint in individual participants. The approach rests on a simple linear model that proposes the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model’s ability to discriminate an individual is driven by unique connections in higher order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach. PMID:25386919
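The model's core step, regressing each region's timeseries on all other regions, can be sketched in a few lines. This is a generic ordinary-least-squares stand-in for the paper's approach, run on synthetic data with one planted connection:

```python
import numpy as np

def connectotype(ts):
    # ts: (time, regions). For each region i, fit its timeseries as a
    # weighted sum of all OTHER regions via least squares.
    # Returns a (regions, regions) coefficient matrix with zero diagonal.
    T, R = ts.shape
    B = np.zeros((R, R))
    for i in range(R):
        others = np.delete(np.arange(R), i)
        coef, *_ = np.linalg.lstsq(ts[:, others], ts[:, i], rcond=None)
        B[i, others] = coef
    return B

rng = np.random.default_rng(1)
ts = rng.normal(size=(200, 6))
# Plant a known dependence: region 0 = 0.7*region1 - 0.3*region2 + noise.
ts[:, 0] = 0.7 * ts[:, 1] - 0.3 * ts[:, 2] + 0.05 * rng.normal(size=200)
B = connectotype(ts)
print(np.round(B[0], 2))  # row 0 recovers ~[0, 0.7, -0.3, 0, 0, 0]
```

The coefficient matrix B is the "connectotype": a per-subject model that can be applied to held-out frames to predict each region's signal from the rest.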
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
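Minimising the windowed mean-square error of the combined prediction under a sum-to-one constraint has a standard Lagrange-multiplier closed form, w ∝ C⁻¹1, with C the Gram matrix of sub-model errors over the window. This is a generic sketch of that one step, not the paper's full algorithm; the small ridge term is our addition for numerical safety:

```python
import numpy as np

def combine_weights(errors):
    # errors: (window, M) prediction errors of M sub-models over the window.
    # Minimise w' C w subject to sum(w) = 1, where C = errors' errors.
    # Lagrange multipliers give the closed form w = C^-1 1 / (1' C^-1 1).
    C = errors.T @ errors
    C += 1e-8 * np.eye(C.shape[0])      # tiny ridge for numerical stability
    ones = np.ones(C.shape[0])
    x = np.linalg.solve(C, ones)
    return x / (ones @ x)

rng = np.random.default_rng(2)
err = np.column_stack([0.1 * rng.normal(size=50),   # accurate sub-model
                       1.0 * rng.normal(size=50)])  # noisy sub-model
w = combine_weights(err)
print(np.round(w, 3))  # weights sum to 1, most weight on the accurate model
```

Because the solution is closed-form, the combination step costs one small linear solve per time step, which is the computational-efficiency point the abstract makes.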
Hebbian based learning with winner-take-all for spiking neural networks
NASA Astrophysics Data System (ADS)
Gupta, Ankur; Long, Lyle
2009-03-01
Learning methods for spiking neural networks are not as well developed as the back-propagation training widely used for traditional neural networks. We propose and implement a Hebbian-based learning method with winner-take-all competition for spiking neural networks. This approach is spike-time dependent, which makes it naturally well suited for a network of spiking neurons. Homeostasis with Hebbian learning is implemented, which ensures stability and quicker learning. Homeostasis implies that the net sum of incoming weights associated with a neuron remains the same. Winner-take-all is also implemented for competitive learning between output neurons. We implemented this learning rule on a biologically based vision processing system that we are developing, using layers of leaky integrate-and-fire neurons. When presented with four bars (or Gabor filters) of different orientations, the network learns to recognize the bar orientations (or Gabor filters). After training, each output neuron learns to recognize a bar at a specific orientation and responds by firing more vigorously to that bar and less vigorously to others. These neurons are found to have bell-shaped tuning curves and are similar to the simple cells experimentally observed by Hubel and Wiesel in the striate cortex of the cat and monkey.
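The interaction of the three ingredients (Hebbian potentiation, winner-take-all, and homeostatic normalization of each neuron's incoming weight sum) can be shown in a rate-based caricature; this is not the spiking implementation, just a sketch of the weight dynamics under those rules:

```python
import numpy as np

def hebbian_wta_step(W, x, eta=0.1):
    # W: (outputs, inputs) weights; x: input activity pattern.
    # Winner-take-all: only the most strongly driven output neuron learns.
    winner = np.argmax(W @ x)
    W[winner] += eta * x                  # Hebbian potentiation
    # Homeostasis: keep the net sum of incoming weights constant by
    # rescaling the winner's row back to a sum of one.
    W[winner] *= 1.0 / W[winner].sum()
    return winner

rng = np.random.default_rng(3)
W = rng.uniform(0.1, 0.2, size=(2, 4))
W /= W.sum(axis=1, keepdims=True)         # start with unit row sums
pattern = np.array([1.0, 1.0, 0.0, 0.0])  # a "bar" over inputs 0 and 1
for _ in range(20):
    hebbian_wta_step(W, pattern)
print(np.round(W, 2))  # one row concentrates its weight on inputs 0 and 1
```

Potentiation alone would grow weights without bound; the normalization step caps each neuron's total drive, so selectivity emerges as a redistribution of a fixed weight budget rather than runaway growth.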
NASA Astrophysics Data System (ADS)
Hu, Qinglei
2010-02-01
A semi-globally input-to-state stable (ISS) control law is derived for flexible spacecraft attitude maneuvers in the presence of parameter uncertainties and external disturbances. The modified Rodrigues parameters (MRPs) are used as the kinematic variables since they are nonsingular for all possible rotations. This novel, simple control law is a proportional-plus-derivative (PD) type controller plus a sign function, obtained through a special Lyapunov function construction involving the sum of quadratic terms in the angular velocities, kinematic parameters, modal variables and the cross-state weighting. A sufficient condition is provided under which this nonlinear PD-type control law renders the system semi-globally input-to-state stable, such that the closed-loop system is robust with respect to any disturbance within a quantifiable restriction on the amplitude, as well as the set of initial conditions, if the control gains are designed appropriately. In addition to detailed derivations of the new controller design and a rigorous sketch of all the associated stability and attitude convergence proofs, extensive simulation studies have been conducted to validate the design, and the results are presented to highlight the ensuing closed-loop performance benefits compared with conventional control schemes.
NASA Astrophysics Data System (ADS)
Gardi, Einan
2014-04-01
We compute a class of diagrams contributing to the multi-leg soft anomalous dimension through three loops, by renormalizing a product of semi-infinite non-lightlike Wilson lines in dimensional regularization. Using non-Abelian exponentiation we directly compute contributions to the exponent in terms of webs. We develop a general strategy to compute webs with multiple gluon exchanges between Wilson lines in configuration space, and explore their analytic structure in terms of α_ij, the exponential of the Minkowski cusp angle formed between the lines i and j. We show that beyond the obvious inversion symmetry α_ij → 1/α_ij, at the level of the symbol the result also admits a crossing symmetry α_ij → -α_ij, relating spacelike and timelike kinematics, and hence argue that in this class of webs the symbol alphabet is restricted to α_ij and . We carry out the calculation up to three gluons connecting four Wilson lines, finding that the contributions to the soft anomalous dimension are remarkably simple: they involve pure functions of uniform weight, which are written as a sum of products of polylogarithms, each depending on a single cusp angle. We conjecture that this type of factorization extends to all multiple-gluon-exchange contributions to the anomalous dimension.
Spectroscopic Evidence for Nonuniform Starspot Properties on II Pegasi
NASA Technical Reports Server (NTRS)
ONeal, Douglas; Saar, Steven H.; Neff, James E.
1998-01-01
We present spectroscopic evidence for multiple spot temperatures on the RS CVn star II Pegasi (HD 224085). We model the strengths of the 7055 and 8860 A TiO absorption bands in the spectrum of II Peg using weighted sums of inactive comparison spectra: a K star to represent the nonspotted photosphere and an M star to represent the spots. The best fit yields independent measurements of the starspot filling factor (f_s) and mean spot temperature (T_s) averaged over the visible hemisphere of the star. During three-fourths of a rotation of II Peg in late 1996, we measure a constant f_s approximately equal to 55% +/- 5%. However, T_s varies from 3350 +/- 60 to 3550 +/- 70 K. We compute T_s for two simple models: (1) a star with two distinct spot temperatures, and (2) a star with different umbral/penumbral area ratios. The changing T_s correlates with emission strengths of H(alpha) and the Ca II infrared triplet in the sense that cooler T_s accompanies weaker emission. We explore possible implications of these results for the physical properties of the spots on II Peg and for stellar surface structure in general.
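The two-template weighted-sum idea reduces, for a single band strength, to solving one linear mixing equation for the filling factor. A minimal sketch with made-up band depths (the real analysis fits full spectra, not a single scalar):

```python
# Model an observed TiO band strength as a weighted sum of an unspotted
# (K-star) template and a spot (M-star) template; solve for the filling
# factor f. Band depths below are hypothetical illustration values.
k_band = 0.05   # band depth in the K-star comparison spectrum
m_band = 0.60   # band depth in the M-star comparison spectrum
observed = 0.35

# observed = f * m_band + (1 - f) * k_band  =>  solve for f
f = (observed - k_band) / (m_band - k_band)
print(round(f, 3))  # inferred spot filling factor
```

With two bands of different temperature sensitivity, the same weighted-sum model yields both f_s and T_s, which is why the paper can track the two quantities independently across the rotation.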
NASA Technical Reports Server (NTRS)
Thomas, Jr., Jess Brooks (Inventor)
1999-01-01
The front end in GPS receivers has the functions of amplifying, down-converting, filtering and sampling the received signals. In the preferred embodiment, only two operations, A/D conversion and a sum, bring the signal from RF to filtered quadrature baseband samples. After amplification and filtering at RF, the L1 and L2 signals are each sampled at RF at a high selected subharmonic rate. The subharmonic sample rates are approximately 900 MHz for L1 and 982 MHz for L2. With the selected subharmonic sampling, the A/D conversion effectively down-converts the signal from RF to quadrature components at baseband. The resulting sample streams for L1 and L2 are each reduced to a lower rate with a digital filter, which becomes a straight sum in the simplest embodiment. The frequency subsystem can be very simple, only requiring the generation of a single reference frequency (e.g. 20.46 MHz minus a small offset) and the simple multiplication of this reference up to the subharmonic sample rates for L1 and L2. The small offset in the reference frequency serves the dual purpose of providing an advantageous offset in the down-converted carrier frequency and in the final baseband sample rate.
Stratum Weight Determination Using Shortest Path Algorithm
Susan L. King
2005-01-01
Forest Inventory and Analysis uses poststratification to calculate resource estimates. Each county has a different stratification, and the stratification may differ depending on the number of panels of data available. A "5 by 5 sum" filter was passed over the reclassified forest/nonforest Multi-Resolution Landscape Characterization image used in Phase 1, generating an...
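The "5 by 5 sum" filter can be sketched as follows. This is an illustrative reimplementation, not FIA's production code: each output pixel counts the forest pixels in the 5x5 window centered on it.

```python
# Minimal sketch of a "5 by 5 sum" filter over a binary forest/nonforest
# image: each output value is the number of forest pixels (0-25) in the
# 5x5 neighborhood, with zero padding at the image border.
import numpy as np

def sum_filter_5x5(img):
    padded = np.pad(img, 2, mode="constant")            # zero-pad the border
    out = np.zeros_like(img, dtype=int)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + 5, j:j + 5].sum()  # 5x5 window sum
    return out

forest = np.ones((5, 5), dtype=int)   # toy all-forest image
out = sum_filter_5x5(forest)
print(out[2, 2])                      # center window sees all 25 forest pixels
```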
9 CFR 381.409 - Nutrition label content.
Code of Federal Regulations, 2010 CFR
2010-01-01
... general factors of 4, 4, and 9 calories per gram for protein, total carbohydrate, and total fat... calories per gram for protein, total carbohydrate less the amount of insoluble dietary fiber, and total fat... subtraction of the sum of the crude protein, total fat, moisture, and ash from the total weight of the product...
9 CFR 317.309 - Nutrition label content.
Code of Federal Regulations, 2010 CFR
2010-01-01
... general factors of 4, 4, and 9 calories per gram for protein, total carbohydrate, and total fat... calories per gram for protein, total carbohydrate less the amount of insoluble dietary fiber, and total fat... subtraction of the sum of the crude protein, total fat, moisture, and ash from the total weight of the product...
7 CFR 760.5 - Fair market value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... market value of the affected farmer's normal marketings, which, for the purposes of this subpart, shall be the sum of the net proceeds such farmer would have received for his normal marketings in each of... affected farmer's normal marketings for each such pay period by the average net price per hundred-weight of...
46 CFR 163.002-5 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... load means the sum of the weights of— (1) The rigid ladder or lift platform, the suspension cables (if... persons capacity of the hoist; (c) Lift height means the distance from the lowest step of the pilot ladder... (2) If the hoist does not have suspension cables, the ladder or lift platform is in its lowest...
46 CFR 163.002-5 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... load means the sum of the weights of— (1) The rigid ladder or lift platform, the suspension cables (if... persons capacity of the hoist; (c) Lift height means the distance from the lowest step of the pilot ladder... (2) If the hoist does not have suspension cables, the ladder or lift platform is in its lowest...
Do Employers Forgive Applicants' Bad Spelling in Résumés?
ERIC Educational Resources Information Center
Martin-Lacroux, Christelle; Lacroux, Alain
2017-01-01
Spelling deficiencies are becoming a growing concern among employers, but few studies have quantified this phenomenon and its impact on recruiters' choice. This article aims to highlight the relative weight of the form (the spelling skills) in application forms, compared with the content (the level of work experience), in recruiters' judgment…
The permeability coefficients of mixed matrix membranes of polydimethylsiloxane (PDMS) and silicalite crystal are taken as the sum of the permeability coefficients of membrane components each weighted by their associated mass fraction. The permeability coefficient of a membrane c...
Code of Federal Regulations, 2014 CFR
2014-01-01
... Definitions. As used in this part: Additional tier 1 capital is defined in § 3.20(c). Advanced approaches... described in § 3.100(b)(1). Advanced approaches total risk-weighted assets means: (1) The sum of: (i) Credit... respect to a company, means any company that controls, is controlled by, or is under common control with...
On the null distribution of Bayes factors in linear regression
USDA-ARS?s Scientific Manuscript database
We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...
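A minimal Monte Carlo sketch of such a null distribution follows; the weights and shift below are arbitrary placeholders, not values derived in the manuscript.

```python
# Simulate draws of shift + sum_k w_k * chi-squared(1), the asymptotic
# form claimed for 2*log(Bayes factor) under the null. Z^2 for a
# standard normal Z is a chi-squared(1) variate.
import random

def sample_null(weights, shift, n_draws, seed=0):
    """Draw n_draws realizations of shift + sum_k w_k * Z_k^2."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        value = shift + sum(w * rng.gauss(0, 1) ** 2 for w in weights)
        draws.append(value)
    return draws

draws = sample_null(weights=[0.8, 0.5], shift=-1.0, n_draws=50_000)
mean = sum(draws) / len(draws)
# The mean should approach shift + sum(w_k) = -1.0 + 1.3 = 0.3.
print(round(mean, 2))
```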
2015-12-24
minimizing a weighted sum of the time and control effort needed to collect sensor data. This problem formulation is a modified traveling salesman problem.
Gharehkhani, Samira; Nouri-Borujerdi, Ali; Kazi, Salim Newaz; Yarmand, Hooman
2014-01-01
In this study an expression for the soot absorption coefficient is introduced to extend the weighted-sum-of-gray-gases data to a furnace medium containing a gas-soot mixture in a 150 MWe utility boiler. Heat transfer and the temperature distribution of the walls and the furnace interior are predicted by the zone method. Analyses were performed for both the presence and absence of soot particles at 100% load. To validate the proposed soot absorption coefficient, the expression is coupled with Taylor and Foster's data as well as Truelove's data for the CO2-H2O mixture; the total emissivities are calculated and compared with Truelove's parameters for 3-term and 4-term gray gases plus two soot absorption coefficients. In addition, experiments were conducted at 100% and 75% loads to measure the furnace exit gas temperature as well as the rate of steam production. The predicted results show good agreement with the data measured at the power plant site. PMID:25143981
NASA Astrophysics Data System (ADS)
Musdalifah, N.; Handajani, S. S.; Zukhronah, E.
2017-06-01
Competition between homogeneous companies forces each company to maintain production quality. To address this problem, a company controls production with statistical quality control using control charts. The Shewhart control chart is used for normally distributed data, but production data are often non-normally distributed and exhibit small process shifts. The grand median control chart is a control chart for non-normally distributed data, while the cumulative sum (cusum) control chart is sensitive to small process shifts. The purpose of this research is to compare grand median and cusum control charts on the shuttlecock weight variable at CV Marjoko Kompas dan Domas by generating data that follow the actual distribution. The generated data are used to simulate the standard deviation multiplier for the grand median and cusum control charts. The simulation is run to obtain an average run length (ARL) of 370. The grand median control chart detects ten out-of-control points, while the cusum control chart detects one. It can be concluded that the grand median control chart performs better than the cusum control chart.
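A hedged sketch of a one-sided tabular cusum of the kind compared here follows; the target weight, slack value k, and decision limit h are illustrative choices, not the study's fitted parameters.

```python
# One-sided (upper) tabular cusum for detecting a small upward shift
# in a weight variable. C+_i = max(0, C+_{i-1} + (x_i - target - k));
# an out-of-control signal is raised when C+ exceeds h.
def cusum_upper(xs, target, k, h):
    """Return the indices where the upper cusum exceeds the limit h."""
    c_plus, signals = 0.0, []
    for i, x in enumerate(xs):
        c_plus = max(0.0, c_plus + (x - target - k))
        if c_plus > h:
            signals.append(i)
    return signals

weights = [5.0, 5.1, 4.9, 5.0, 5.3, 5.4, 5.3, 5.4]  # grams; drifts upward
signals = cusum_upper(weights, target=5.0, k=0.1, h=0.45)
print(signals)   # the cusum accumulates the small shift and then signals
```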
Some properties of the two-body effective interaction in the /sup 208/Pb region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groleau, R.
The (3He,d) and (4He,t) single proton transfer reactions on 208Pb and 209Bi were studied using 30 and 40 MeV He beams from the Princeton Cyclotron Laboratory. The outgoing d and t were detected by a position-sensitive proportional counter in the focal plane of a Q-3D spectrometer. The resolution varied between 10 and 14 keV (FWHM). Using the ratio of the cross sections for the (3He,d) and (4He,t) reactions to determine the magnitude of the angular momentum transfers, the spectroscopic factors for the reactions on 209Bi have been measured relative to the transitions to the single-particle states in these reactions on 208Pb. Sum rules as developed by Bansal and French are used to study the configurations |h_9/2 x h_9/2>, |h_9/2 x f_7/2>, |h_9/2 x i_13/2>, |h_9/2 x f_5/2>, and part of |h_9/2 x p_3/2> and |h_9/2 x p_1/2>. Using the linear energy-weighted sum rule, the diagonal matrix elements of the effective interaction between valence protons around the 208Pb core are deduced. The matrix elements obtained from a simple empirical interaction V_I^(T=1) of a pure Wigner type are compared to the extracted matrix elements. The interaction is characterized by an attractive short-range (0.82 fm) and a repulsive long-range (8.2 fm) potential: V_I^(T=1) (MeV) = -96 e^(-(r/0.82)^2) + 0.51 e^(-(r/8.2)^2). The core polarization is studied using the experimental static electric quadrupole and magnetic dipole moments of the nuclei in the 208Pb region. In general, the magnetic moments of multiple-valence-nucleon nuclei are well predicted by simple rules of Racah algebra. The three and four valence proton spectra (211At and 212Rn) calculated with the experimental two-particle matrix elements agree well with the experimental spectra.
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
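An illustrative sketch of the original Oja rule next to an L1-style variant follows. The exact form of the modification in this letter is not reproduced here; the sign-based decay term below is an assumption chosen to illustrate how an L1-type constraint, local to each weight, can arise.

```python
# Original Oja rule vs. a hypothetical L1-style variant. Both updates
# are local: each weight needs only its own value, the input, and the
# scalar output y, with no instant normalization over all weights.
import numpy as np

def oja_step(w, x, lr):
    """Original Oja rule: dw = lr * y * (x - y * w); the y^2 * w decay
    term asymptotically constrains the L2-norm of w."""
    y = float(np.dot(w, x))
    return w + lr * (y * x - y * y * w)

def oja_l1_step(w, x, lr):
    """Assumed variant with sign(w) in the decay term: every nonzero
    weight is penalized equally, pushing small weights toward zero."""
    y = float(np.dot(w, x))
    return w + lr * (y * x - y * y * np.sign(w))

rng = np.random.default_rng(0)
scales = np.array([2.0, 1.0, 0.2, 0.2])   # one dominant input direction
w_l2 = np.full(4, 0.5)
w_l1 = np.full(4, 0.5)
for _ in range(2000):
    x = rng.normal(size=4) * scales
    w_l2 = oja_step(w_l2, x, lr=0.005)
    w_l1 = oja_l1_step(w_l1, x, lr=0.005)
print(np.linalg.norm(w_l2))   # stays near 1 under the original rule
print(np.sum(np.abs(w_l1)))   # L1-norm remains bounded under the variant
```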
NASA Astrophysics Data System (ADS)
Jhan, Sin-Mu; Jin, Bih-Yaw
2017-11-01
A simple molecular orbital treatment of local current distributions inside single molecular junctions is developed in this paper. Using the first-order perturbation theory and nonequilibrium Green's function techniques in the framework of Hückel theory, we show that the leading contributions to local current distributions are directly proportional to the off-diagonal elements of transition density matrices. Under the orbital approximation, the major contributions to local currents come from a few dominant molecular orbital pairs which are mixed by the interactions between the molecule and electrodes. A few simple molecular junctions consisting of single- and multi-ring conjugated systems are used to demonstrate that local current distributions inside molecular junctions can be decomposed by partial sums of a few leading contributing transition density matrices.
Analysis of the influencing factors of global energy interconnection development
NASA Astrophysics Data System (ADS)
Zhang, Yi; He, Yongxiu; Ge, Sifan; Liu, Lin
2018-04-01
Against the background of building global energy interconnection and achieving green, low-carbon development, this paper considers the new round of energy restructuring and trends in energy technology. Based on the present state of global energy interconnection development worldwide and in China, an index system of the factors influencing that development is established. Subjective and objective weights of the influencing factors are derived separately by analytic network process and by the entropy method, and the two sets of weights are combined by additive integration, yielding comprehensive weights for the influencing factors and a ranking of their influence.
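The additive-integration step can be sketched directly. The balance parameter alpha = 0.5 and the weight vectors below are illustrative assumptions, not the paper's values.

```python
# Additive integration of subjective and objective criterion weights:
# comprehensive weight = alpha * subjective + (1 - alpha) * objective,
# renormalized so the comprehensive weights sum to 1.
def combine_weights(subjective, objective, alpha=0.5):
    combined = [alpha * s + (1 - alpha) * o
                for s, o in zip(subjective, objective)]
    total = sum(combined)
    return [c / total for c in combined]

w_subj = [0.5, 0.3, 0.2]   # e.g. from analytic network process
w_obj = [0.2, 0.4, 0.4]    # e.g. from the entropy method
w = combine_weights(w_subj, w_obj)
print(w)   # comprehensive weights, summing to 1
```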
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidbrink, W. W.; Persico, E. A. D.; Austin, M. E.
2016-02-09
Here, neutral-beam ions that are deflected onto loss orbits by Alfvén eigenmodes (AE) on their first bounce orbit and are detected by a fast-ion loss detector (FILD) satisfy the "local resonance" condition. This theory qualitatively explains FILD observations for a wide variety of AE-particle interactions. When coherent losses are measured for multiple AE, oscillations at the sum and difference frequencies of the independent modes are often observed. The amplitudes of the sum and difference peaks correlate with the fundamental loss-signal amplitudes but do not correlate with the measured mode amplitudes. In contrast to a simple uniform-plasma theory of the interaction, the loss-signal amplitude at the sum frequency is often larger than the loss-signal amplitude at the difference frequency, indicating a more detailed computation of the orbital trajectories through the mode eigenfunctions is needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidbrink, W. W.; Persico, E. A. D.; Austin, M. E.
2016-02-15
Neutral-beam ions that are deflected onto loss orbits by Alfvén eigenmodes (AE) on their first bounce orbit and are detected by a fast-ion loss detector (FILD) satisfy the "local resonance" condition proposed by Zhang et al. [Nucl. Fusion 55, 22002 (2015)]. This theory qualitatively explains FILD observations for a wide variety of AE-particle interactions. When coherent losses are measured for multiple AE, oscillations at the sum and difference frequencies of the independent modes are often observed in the loss signal. The amplitudes of the sum and difference peaks correlate weakly with the fundamental loss-signal amplitudes but do not correlate with the measured mode amplitudes. In contrast to a simple uniform-plasma theory of the interaction [Chen et al., Nucl. Fusion 54, 083005 (2014)], the loss-signal amplitude at the sum frequency is often larger than the loss-signal amplitude at the difference frequency, indicating a more detailed computation of the orbital trajectories through the mode eigenfunctions is needed.
A new approach for computing a flood vulnerability index using cluster analysis
NASA Astrophysics Data System (ADS)
Fernandez, Paulo; Mourato, Sandra; Moreira, Madalena; Pereira, Luísa
2016-08-01
A Flood Vulnerability Index (FloodVI) was developed using Principal Component Analysis (PCA) and a new aggregation method based on Cluster Analysis (CA). PCA simplifies a large number of variables into a few uncorrelated factors representing the social, economic, physical and environmental dimensions of vulnerability. CA groups areas that have the same characteristics in terms of vulnerability into vulnerability classes. The grouping of the areas determines their classification, contrary to other aggregation methods in which the areas' classification determines their grouping. While other aggregation methods distribute the areas into classes in an artificial manner, by imposing a certain probability for an area to belong to a certain class (as determined by the assumption that the aggregation measure used is normally distributed), CA does not constrain the distribution of the areas by the classes. FloodVI was designed at the neighbourhood level and was applied to the Portuguese municipality of Vila Nova de Gaia, where several flood events have taken place in the recent past. The FloodVI sensitivity was assessed using three different aggregation methods: the sum of component scores, the first component score, and the weighted sum of component scores. The results highlight the sensitivity of the FloodVI to different aggregation methods. The sum of component scores and the weighted sum of component scores show similar results. The first component score aggregation method classifies almost all areas as having medium vulnerability, whereas the results obtained using CA show a distinct differentiation of vulnerability in which hot spots can be clearly identified. The information provided by records of previous flood events corroborates the results obtained with CA, because the inundated areas with greater damage are those identified as high and very high vulnerability areas by CA. This supports the fact that CA provides a reliable FloodVI.
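The three non-CA aggregation baselines can be sketched as follows; the component scores and explained-variance weights below are invented for illustration and are not the study's data.

```python
# Three aggregation methods compared with CA in the abstract:
# (1) sum of component scores, (2) first component score only,
# (3) weighted sum of component scores (weights = explained variance).
def aggregate(scores, weights=None, method="sum"):
    """Aggregate the PCA component scores of one area into one index."""
    if method == "first":
        return scores[0]
    if method == "weighted":
        return sum(w * s for w, s in zip(weights, scores))
    return sum(scores)

scores = [1.2, -0.4, 0.3]          # illustrative component scores
var_explained = [0.6, 0.25, 0.15]  # illustrative explained-variance weights
a_sum = aggregate(scores)
a_first = aggregate(scores, method="first")
a_weighted = aggregate(scores, var_explained, method="weighted")
print(a_sum, a_first, a_weighted)
```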
Ho, Lisa M; Nelson, Rendon C; Delong, David M
2007-05-01
To prospectively evaluate the use of lean body weight (LBW) as the main determinant of the volume and rate of contrast material administration during multi-detector row computed tomography of the liver. This HIPAA-compliant study had institutional review board approval. All patients gave written informed consent. Four protocols were compared. Standard protocol involved 125 mL of iopamidol injected at 4 mL/sec. Total body weight (TBW) protocol involved 0.7 g iodine per kilogram of TBW. Calculated LBW and measured LBW protocols involved 0.86 g of iodine per kilogram and 0.92 g of iodine per kilogram calculated or measured LBW for men and women, respectively. Injection rate used for the three experimental protocols was determined proportionally on the basis of the calculated volume of contrast material. Postcontrast attenuation measurements during portal venous phase were obtained in liver, portal vein, and aorta for each group and were summed for each patient. Patient-to-patient enhancement variability in same group was measured with Levene test. Two-tailed t test was used to compare the three experimental protocols with the standard protocol. Data analysis was performed in 101 patients (25 or 26 patients per group), including 56 men and 45 women (mean age, 53 years). Average summed attenuation values for standard, TBW, calculated LBW, and measured LBW protocols were 419 HU +/- 50 (standard deviation), 443 HU +/- 51, 433 HU +/- 50, and 426 HU +/- 33, respectively (P = not significant for all). Levene test results for summed attenuation data for standard, TBW, calculated LBW, and measured LBW protocols were 40 +/- 29, 38 +/- 33 (P = .83), 35 +/- 35 (P = .56), and 26 +/- 19 (P = .05), respectively. By excluding highly variable but poorly perfused adipose tissue from calculation of contrast medium dose, the measured LBW protocol may lessen patient-to-patient enhancement variability while maintaining satisfactory hepatic and vascular enhancement.
Healthy habits: efficacy of simple advice on weight control based on a habit-formation model.
Lally, P; Chipperfield, A; Wardle, J
2008-04-01
To evaluate the efficacy of a simple weight loss intervention, based on principles of habit formation. An exploratory trial in which overweight and obese adults were randomized either to a habit-based intervention condition (with two subgroups given weekly vs monthly weighing; n=33, n=36) or to a waiting-list control condition (n=35) over 8 weeks. Intervention participants were followed up for 8 months. A total of 104 adults (35 men, 69 women) with an average BMI of 30.9 kg m(-2). Intervention participants were given a leaflet containing advice on habit formation and simple recommendations for eating and activity behaviours promoting negative energy balance, together with a self-monitoring checklist. Weight change over 8 weeks in the intervention condition compared with the control condition and weight loss maintenance over 32 weeks in the intervention condition. At 8 weeks, people in the intervention condition had lost significantly more weight (mean=2.0 kg) than those in the control condition (0.4 kg), with no difference between weekly and monthly weighing subgroups. At 32 weeks, those who remained in the study had lost an average of 3.8 kg, with 54% losing 5% or more of their body weight. An intention-to-treat analysis (based on last-observation-carried-forward) reduced this to 2.6 kg, with 26% achieving a 5% weight loss. This easily disseminable, low-cost, simple intervention produced clinically significant weight loss. In limited resource settings it has potential as a tool for obesity management.
Brookes, V J; Hernández-Jover, M; Neslo, R; Cowled, B; Holyoake, P; Ward, M P
2014-01-01
We describe stakeholder preference modelling using a combination of new and recently developed techniques to elicit criterion weights to incorporate into a multi-criteria decision analysis framework to prioritise exotic diseases for the pig industry in Australia. Australian pig producers were requested to rank disease scenarios comprising nine criteria in an online questionnaire. Parallel coordinate plots were used to visualise stakeholder preferences, which aided identification of two diverse groups of stakeholders - one group prioritised diseases with impacts on livestock, and the other group placed more importance on diseases with zoonotic impacts. Probabilistic inversion was used to derive weights for the criteria to reflect the values of each of these groups, modelling their choice using a weighted sum value function. Validation of weights against stakeholders' rankings for scenarios based on real diseases showed that the elicited criterion weights for the group who prioritised diseases with livestock impacts were a good reflection of their values, indicating that the producers were able to consistently infer impacts from the disease information in the scenarios presented to them. The highest weighted criteria for this group were attack rate and length of clinical disease in pigs, and market loss to the pig industry. The values of the stakeholders who prioritised zoonotic diseases were less well reflected by validation, indicating either that the criteria were inadequate to consistently describe zoonotic impacts, the weighted sum model did not describe stakeholder choice, or that preference modelling for zoonotic diseases should be undertaken separately from livestock diseases. Limitations of this study included sampling bias, as the group participating were not necessarily representative of all pig producers in Australia, and response bias within this group. 
The method used to elicit criterion weights in this study ensured value trade-offs between a range of potential impacts, and that the weights were implicitly related to the scale of measurement of disease criteria. Validation of the results of the criterion weights against real diseases - a step rarely used in MCDA - added scientific rigour to the process. The study demonstrated that these are useful techniques for elicitation of criterion weights for disease prioritisation by stakeholders who are not disease experts. Preference modelling for zoonotic diseases needs further characterisation in this context. Copyright © 2013 Elsevier B.V. All rights reserved.
Patel, Lara A; Kindt, James T
2017-03-14
We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.
ANALYTICAL SOLUTIONS OF SINGULAR ISOTHERMAL QUADRUPOLE LENS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu Zhe; Lin, W. P.; Yang Xiaofeng, E-mail: chuzhe@shao.ac.cn, E-mail: linwp@shao.ac.cn
Using an analytical method, we study the singular isothermal quadrupole (SIQ) lens system, which is the simplest lens model that can produce four images. In this case, the radial mass distribution is in accord with the profile of the singular isothermal sphere lens, and the tangential distribution is given by adding a quadrupole on the monopole component. The basic properties of the SIQ lens have been studied in this Letter, including the deflection potential, deflection angle, magnification, critical curve, caustic, pseudo-caustic, and transition locus. Analytical solutions of the image positions and magnifications for the source on axes are derived. We find that naked cusps will appear when the relative intensity k of quadrupole to monopole is larger than 0.6. According to the magnification invariant theory of the SIQ lens, the sum of the signed magnifications of the four images should be equal to unity, as found by Dalal. However, if a source lies in the naked cusp, the summed magnification of the remaining three images is smaller than the invariant 1. With this simple lens system, we study the situations where a point source infinitely approaches a cusp or a fold. The sum of the magnifications of the cusp image triplet is usually not equal to 0, and it is usually positive for major cusps while negative for minor cusps. Similarly, the sum of magnifications of the fold image pair is usually not equal to 0 either. Nevertheless, the cusp and fold relations are still equal to 0 in that the sum values are divided by infinite absolute magnifications by definition.
Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory
NASA Astrophysics Data System (ADS)
Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.
2017-03-01
We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground-state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way we minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: we obtained a simple analytical functional form for the one-electron renormalisation factors and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.
1986-01-01
The complex integer multiplier and adder over the direct sum of two copies of finite field developed by Cozzens and Finkelstein (1985) is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplication over the rings of integers modulo Fermat numbers can be performed by means of two integer multiplications, whereas the complex integer multiplication requires three integer multiplications. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed to compute a systolic array of the DFT can be reduced substantially. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
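The counting argument in this abstract can be made concrete. The sketch below is an illustrative reconstruction, not the NTRS implementation: over Z mod a Fermat number F = 2^(2^n) + 1, the element s = 2^(2^(n-1)) satisfies s^2 = -1, so the "complex" ring splits into a direct sum of two copies of Z_F and a complex product costs two modular multiplications instead of three.

```python
# Complex multiplication modulo the Fermat number F_3 = 257 using only
# two integer multiplications, via the direct-sum isomorphism
# a + b*i  <->  (a + S*b, a - S*b) mod F, where S^2 = -1 (mod F).
F = 257   # Fermat number F_3 = 2^8 + 1
S = 16    # 16^2 = 256 = -1 (mod 257), so S plays the role of i

def to_direct_sum(a, b):
    """Map a + b*i to its image (a + S*b, a - S*b) in Z_F (+) Z_F."""
    return ((a + S * b) % F, (a - S * b) % F)

def from_direct_sum(u, v):
    """Invert the map: a = (u + v)/2, b = (u - v)/(2*S), all mod F."""
    inv2 = pow(2, -1, F)
    inv2S = pow(2 * S, -1, F)
    return ((u + v) * inv2 % F, (u - v) * inv2S % F)

def complex_mul_mod(a, b, c, d):
    """(a + b*i) * (c + d*i) mod F using two integer multiplications."""
    u1, v1 = to_direct_sum(a, b)
    u2, v2 = to_direct_sum(c, d)
    return from_direct_sum(u1 * u2 % F, v1 * v2 % F)  # the two products

# Check against the schoolbook formula (ac - bd, ad + bc) mod F.
res = complex_mul_mod(3, 7, 5, 11)
print(res)   # equals ((3*5 - 7*11) % 257, (3*11 + 7*5) % 257)
```

By contrast, the usual Karatsuba-style trick for ordinary complex integers needs three multiplications: t1 = a*c, t2 = b*d, t3 = (a+b)*(c+d), with real part t1 - t2 and imaginary part t3 - t1 - t2.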
Silva, T M; de Medeiros, A N; Oliveira, R L; Gonzaga Neto, S; Queiroga, R de C R do E; Ribeiro, R D X; Leão, A G; Bezerra, L R
2016-07-01
This study aimed to determine the impact of replacing soybean meal with peanut cake in the diets of crossbred Boer goats as determined by carcass characteristics and quality and by the fatty acid profile of meat. Forty vaccinated and dewormed crossbred Boer goats were used. Goats had an average age of 5 mo and an average BW of 15.6 ± 2.7 kg. Goats were fed Tifton-85 hay and a concentrate consisting of corn bran, soybean meal, and mineral premix. Peanut cake was substituted for soybean meal at levels of 0.0, 33.33, 66.67, and 100%. Biometric and carcass morphometric measurements of crossbred Boer goats were not affected by replacing soybean meal with peanut cake in the diet. There was no influence of the replacement of soybean meal with peanut cake on weight at slaughter (P = 0.28), HCW (P = 0.26), cold carcass weight (P = 0.23), noncarcass components of weight (P = 0.71), or muscularity index values (P = 0.11). However, regression equations indicated that there would be a reduction of 18 and 11% for loin eye area and muscle:bone ratio, respectively, between the treatment without peanut cake and the treatment with total soybean meal replacement. The weights and yields of the commercial cuts were not affected (P > 0.05) by replacing soybean meal with peanut cake in the diet. Replacing soybean meal with peanut cake did not affect the pH (P = 0.79), color index (P > 0.05), or chemical composition (P > 0.05) of the meat. However, a quadratic trend for the ash content was observed with peanut cake inclusion in the diet (P = 0.09). Peanut cake inclusion in the diet did not affect the concentrations of the sum of SFA (P = 0.29), the sum of unsaturated fatty acids (UFA; P = 0.29), or the sum of PUFA (P = 0.97) or the SFA:UFA ratio (P = 0.23) in goat meat. However, there was a linear decrease (P = 0.01) in the sum of odd-chain fatty acids in the meat with increasing peanut cake in the diet.
Soybean meal replacement with peanut cake did not affect the n-6:n-3 ratio (P = 0.13) or the medium-chain fatty acid (P = 0.76), long-chain fatty acid (P = 0.74), or atherogenicity index values (P = 0.60) in the meat. The sensory attributes of the longissimus lumborum did not differ with the inclusion of peanut cake in the diet as a replacement for soybean meal. These results suggest that, based on carcass and meat characteristics, peanut cake can completely substitute for soybean meal in the diet of crossbred Boer goats.
Blocks in the asymmetric simple exclusion process
NASA Astrophysics Data System (ADS)
Tracy, Craig A.; Widom, Harold
2017-12-01
In earlier work, the authors obtained formulas for the probability in the asymmetric simple exclusion process that the mth particle from the left is at site x at time t. They were expressed in general as sums of multiple integrals and, for the case of step initial condition, as an integral involving a Fredholm determinant. In the present work, these results are generalized to the case where the mth particle is the left-most one in a contiguous block of L particles. The earlier work depended in a crucial way on two combinatorial identities, and the present work begins with a generalization of these identities to general L.
Precipitation Efficiency in the Tropical Deep Convective Regime
NASA Technical Reports Server (NTRS)
Li, Xiaofan; Sui, C.-H.; Lau, K.-M.; Lau, William K. M. (Technical Monitor)
2001-01-01
Precipitation efficiency in the tropical deep convective regime is analyzed based on a 2-D cloud-resolving simulation. The cloud-resolving model is forced by the large-scale vertical velocity, zonal wind, and large-scale horizontal advections derived from TOGA COARE for a 20-day period. Precipitation efficiency may be defined as the ratio of surface rain rate to the sum of surface evaporation and moisture convergence (LSPE) or as the ratio of surface rain rate to the sum of the condensation and deposition rates of supersaturated vapor (CMPE). The moisture budget shows that the atmosphere is moistened (dried) when the LSPE is less (more) than 100%. The LSPE can exceed 100% for strong convection. This indicates that drying processes should be included in cumulus parameterization to avoid moisture bias. Statistical analysis shows that the sum of the condensation and deposition rates is about 80% of the sum of the surface evaporation rate and moisture convergence, which leads to a proportional relation between the two efficiencies when both are less than 100%. The CMPE increases with increasing mass-weighted mean temperature and increasing surface rain rate. This suggests that precipitation is more efficient in a warm environment and for strong convection. An approximate balance among the condensation, deposition, rain, and raindrop evaporation rates is used to derive an analytical solution for the CMPE.
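The two efficiency definitions above are simple flux ratios and can be sketched directly. A minimal illustration with invented flux values in arbitrary units; the function and variable names are mine, not the paper's:

```python
def lspe(surface_rain, surface_evaporation, moisture_convergence):
    """Large-scale precipitation efficiency: rain rate over its moisture supply."""
    return surface_rain / (surface_evaporation + moisture_convergence)

def cmpe(surface_rain, condensation, deposition):
    """Cloud-microphysics precipitation efficiency: rain rate over condensate production."""
    return surface_rain / (condensation + deposition)

# An LSPE above 1 (100%) means rain exceeds the moisture supply, so the column dries.
print(lspe(12.0, 4.0, 6.0))   # 1.2 -> drying
print(cmpe(12.0, 10.0, 5.0))  # 0.8
```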
Thompson, Amanda L; Adair, Linda S; Bentley, Margaret E
2013-03-01
The prevalence of overweight among infants and toddlers has increased dramatically in the past three decades, highlighting the importance of identifying factors contributing to early excess weight gain, particularly in high-risk groups. Parental feeding styles and the attitudes and behaviors that characterize parental approaches to maintaining or modifying children's eating behavior are an important behavioral component shaping early obesity risk. Using longitudinal data from the Infant Care and Risk of Obesity Study, a cohort study of 217 African-American mother-infant pairs with feeding styles, dietary recalls, and anthropometry collected from 3 to 18 months of infant age, we examined the relationship between feeding styles, infant diet, and weight-for-age and sum of skinfolds. Longitudinal mixed models indicated that higher pressuring and indulgent feeding style scores were positively associated with greater infant energy intake, reduced odds of breastfeeding, and higher levels of age-inappropriate feeding of liquids and solids, whereas restrictive feeding styles were associated with lower energy intake, higher odds of breastfeeding, and reduced odds of inappropriate feeding. Pressuring and restriction were also oppositely related to infant size with pressuring associated with lower infant weight-for-age and restriction with higher weight-for-age and sum of skinfolds. Infant size also predicted maternal feeding styles in subsequent visits indicating that the relationship between size and feeding styles is likely bidirectional. Our results suggest that the degree to which parents are pressuring or restrictive during feeding shapes the early feeding environment and, consequently, may be an important environmental factor in the development of obesity. Copyright © 2012 The Obesity Society.
Crawford, E D; Batuello, J T; Snow, P; Gamito, E J; McLeod, D G; Partin, A W; Stone, N; Montie, J; Stock, R; Lynch, J; Brandt, J
2000-05-01
The current study assesses artificial intelligence methods to identify prostate carcinoma patients at low risk for lymph node spread. If patients can be assigned accurately to a low risk group, unnecessary lymph node dissections can be avoided, thereby reducing morbidity and costs. A rule-derivation technology for simple decision-tree analysis was trained and validated using patient data from a large database (4,133 patients) to derive low risk cutoff values for Gleason sum and prostate specific antigen (PSA) level. An empiric analysis was used to derive a low risk cutoff value for clinical TNM stage. These cutoff values then were applied to 2 additional, smaller databases (227 and 330 patients, respectively) from separate institutions. The decision-tree protocol derived cutoff values of ≤6 for Gleason sum and ≤10.6 ng/mL for PSA. The empiric analysis yielded a clinical TNM stage low risk cutoff value of ≤T2a. When these cutoff values were applied to the larger database, 44% of patients were classified as being at low risk for lymph node metastases (0.8% false-negative rate). When the same cutoff values were applied to the smaller databases, between 11 and 43% of patients were classified as low risk with a false-negative rate of between 0.0 and 0.7%. The results of the current study indicate that a population of prostate carcinoma patients at low risk for lymph node metastases can be identified accurately using a simple decision algorithm that considers preoperative PSA, Gleason sum, and clinical TNM stage. The risk of lymph node metastases in these patients is ≤1%; therefore, pelvic lymph node dissection may be avoided safely. The implications of these findings in surgical and nonsurgical treatment are significant.
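The derived rule is a three-way conjunction of cutoffs and fits in a few lines. A sketch using the cutoffs reported in the abstract; the clinical T-stage ordering below is an illustrative assumption, not taken from the study:

```python
# Illustrative ordering of clinical T stages (an assumption for this sketch).
T_STAGES = ["T1a", "T1b", "T1c", "T2a", "T2b", "T2c", "T3a", "T3b"]

def low_risk(gleason_sum, psa_ng_ml, clinical_stage):
    """Low risk for lymph node metastases per the abstract's cutoffs:
    Gleason sum <= 6, PSA <= 10.6 ng/mL, and clinical stage <= T2a."""
    return (gleason_sum <= 6
            and psa_ng_ml <= 10.6
            and T_STAGES.index(clinical_stage) <= T_STAGES.index("T2a"))

print(low_risk(6, 8.2, "T1c"))   # True
print(low_risk(7, 8.2, "T1c"))   # False: Gleason sum too high
```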
Tetrahedra and Their Nets: Mathematical and Pedagogical Implications
ERIC Educational Resources Information Center
Mussa, Derege Haileselassie
2013-01-01
If one has three sticks (lengths), when can you make a triangle with the sticks? As long as any two of the lengths sum to a value strictly larger than the third length one can make a triangle. Perhaps surprisingly, if one is given 6 sticks (lengths) there is no simple way of telling if one can build a tetrahedron with the sticks. In fact, even…
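The triangle condition quoted above is the classic triangle inequality, and its simplicity (in contrast to the tetrahedron case) is easy to show in code. A minimal sketch:

```python
def can_form_triangle(a, b, c):
    """Three lengths form a (non-degenerate) triangle exactly when every
    pair of lengths sums to strictly more than the remaining length."""
    return a + b > c and a + c > b and b + c > a

print(can_form_triangle(3, 4, 5))  # True
print(can_form_triangle(1, 2, 3))  # False: 1 + 2 is not strictly greater than 3
```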
Additivity in tree biomass components of Pyrenean oak (Quercus pyrenaica Willd.)
Joao P. Carvalho; Bernard R. Parresol
2003-01-01
In tree biomass estimations, it is important to consider the property of additivity, i.e., the total tree biomass should equal the sum of the components. This work presents functions that allow estimation of the stem and crown dry weight components of Pyrenean oak (Quercus pyrenaica Willd.) trees. A procedure that considers additivity of tree biomass...
NASA Astrophysics Data System (ADS)
Douthett, Elwood (Jack) Moser, Jr.
1999-10-01
Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of a white set, minimum (maximum) weight of a black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next the goodness of continued fractions as applied to musical intervals (frequency ratios and their base 2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine which equal-tempered systems have intervals that most closely approximate the musical fifth (p_n/q_n ≈ log2(3/2)). The goodness of exponentiated convergents (2^(p_n/q_n) ≈ 3/2) is also investigated. 
It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation. A desirability function is constructed that simultaneously measures how well multiple intervals fit in a given equal-tempered system. These measurements are made for octave (base 2) and tritave (base 3) systems. Combinatorial properties important to music modulation are considered. These considerations lead to the construction of maximally even scales as partitions of an equal-tempered system.
Design of pilot studies to inform the construction of composite outcome measures.
Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing
2017-06-01
Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores using the weighted sum of the component measures that maximize signal-to-noise of the resulting composite score have been proposed. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to calculate reliably optimal weights. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate the question of how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects age 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to that of a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. 
Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
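The weighting idea described above can be sketched with the standard result that, for a weighted-sum composite of change scores, the mean-to-standard-deviation ratio is maximized by weights proportional to the inverse covariance matrix times the mean-change vector. This is a generic sketch of that result, not the authors' exact estimator, and the data below are invented:

```python
import numpy as np

def optimal_composite_weights(mean_change, cov_change):
    """Weights proportional to inv(cov) @ mean maximize the signal-to-noise
    ratio of the weighted-sum composite; normalized here to sum to 1."""
    w = np.linalg.solve(np.asarray(cov_change, float),
                        np.asarray(mean_change, float))
    return w / w.sum()

# Two components with equal variance but unequal mean decline:
# the weight goes preferentially to the more responsive component.
w = optimal_composite_weights([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]])
print(w)  # approximately [0.333, 0.667]
```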
NASA Astrophysics Data System (ADS)
Kanada-En'yo, Yoshiko
2016-02-01
Isovector and isoscalar dipole excitations in 9Be and 10Be are investigated in the framework of antisymmetrized molecular dynamics, in which angular-momentum and parity projections are performed. In the present method, 1p-1h excitation modes built on the ground state and a large-amplitude α-cluster mode are taken into account. The isovector giant dipole resonance (GDR) at E > 20 MeV shows a two-peak structure, which is understood from the dipole excitation in the 2α core part with the prolate deformation. Because of valence-neutron modes against the 2α core, low-energy E1 resonances appear at E < 20 MeV, exhausting about 20% of the Thomas-Reiche-Kuhn sum rule and 10% of the calculated energy-weighted sum. The dipole resonance at E ≈ 15 MeV in 10Be can be interpreted as the parity partner of the ground state having a 6He+α structure and has remarkable E1 strength because of the coherent contribution of two valence neutrons. The isoscalar dipole strength for some low-energy resonances is significantly enhanced by the coupling with the α-cluster mode. For the E1 strength of 9Be, the calculation overestimates the energy-weighted sum (EWS) in the low-energy (E < 20 MeV) and GDR (20
Zenic, Natasa; Ostojic, Ljerka; Sisic, Nedim; Pojskic, Haris; Peric, Mia; Uljevic, Ognjen; Sekulic, Damir
2015-01-01
Objective: The community of residence (ie, urban vs rural) is one of the known factors of influence on substance use and misuse (SUM). The aim of this study was to explore the community-specific prevalence of SUM and the associations that exist between scholastic, familial, sports and sociodemographic factors with SUM in adolescents from Bosnia and Herzegovina. Methods: In this cross-sectional study, which was completed between November and December 2014, the participants were 957 adolescents (aged 17 to 18 years) from Bosnia and Herzegovina (485; 50.6% females). The independent variables were sociodemographic, academic, sport and familial factors. The dependent variables consisted of questions on cigarette smoking and alcohol consumption. We calculated differences between groups of participants (gender, community), while logistic regressions were applied to define associations between the independent and dependent variables. Results: In the urban community, cigarette smoking is more prevalent in girls (OR=2.05; 95% CI 1.27 to 3.35), while harmful drinking is more prevalent in boys (OR=2.07; 95% CI 1.59 to 2.73). When data are weighted by gender and community, harmful drinking is more prevalent in urban boys (OR=1.97; 95% CI 1.31 to 2.95), cigarette smoking is more frequent in rural boys (OR=1.61; 95% CI 1.04 to 2.39), and urban girls misuse substances to a greater extent than rural girls (OR=1.70; 95% CI 1.16 to 2.51; OR=2.85; 95% CI 1.88 to 4.31; OR=2.78; 95% CI 1.67 to 4.61 for cigarette smoking, harmful drinking and simultaneous smoking-drinking, respectively). Academic failure is strongly associated with a higher likelihood of SUM. The associations between parental factors and SUM are more evident in urban youth. Sports factors are specifically correlated with SUM for urban girls. Conclusions: Living in an urban environment should be considered a higher risk factor for SUM in girls. 
Parental variables are more strongly associated with SUM among urban youth, most probably because of the higher parental involvement in children's personal lives in urban communities (e.g., college plans). Specific indicators should be monitored in the prevention of SUM. PMID:26546145
Greitemeyer, Tobias; Nierula, Carina
2016-01-01
Previous research has shown that acute alcohol consumption is associated with negative responses toward outgroup members such as sexual minorities. However, simple alcohol cue exposure without actually consuming alcohol also influences social behavior. Hence, it was reasoned that priming participants with words related to alcohol (relative to neutral words) would promote prejudiced attitudes toward sexual minorities. In fact, an experiment showed that alcohol cue exposure causally led to more negative implicit attitudes toward lesbians and gay men. In contrast, participants' explicit attitudes were relatively unaffected by the priming manipulation. Moreover, participants' typical alcohol use was not related to their attitudes toward lesbians and gay men. In sum, it appears that not only acute alcohol consumption but also the simple exposure of alcohol cues may promote negative views toward lesbians and gay men.
Prediction of beta-turns in proteins using the first-order Markov models.
Lin, Thy-Hou; Wang, Ging-Ming; Wang, Yen-Tseng
2002-01-01
We present a method based on the first-order Markov models for predicting simple beta-turns and loops containing multiple turns in proteins. Sequences of 338 proteins in a database are divided using the published turn criteria into the following three regions, namely, the turn, the boundary, and the nonturn ones. A transition probability matrix is constructed for either the turn or the nonturn region using the weighted transition probabilities computed for dipeptides identified from each region. There are two such matrices constructed for the boundary region since the transition probabilities for dipeptides immediately preceding or following a turn are different. The window used for scanning a protein sequence from amino (N-) to carboxyl (C-) terminal is a hexapeptide since the transition probability computed for a turn tetrapeptide is capped at both the N- and C- termini with a boundary transition probability indexed respectively from the two boundary transition matrices. A sum of the averaged product of the transition probabilities of all the hexapeptides involving each residue is computed. This is then weighted with a probability computed from assuming that all the hexapeptides are from the nonturn region to give the final prediction quantity. Both simple beta-turns and loops containing multiple turns in a protein are then identified by the rising of the prediction quantity computed. The performance of the prediction scheme or the percentage (%) of correct prediction is evaluated through computation of Matthews correlation coefficients for each protein predicted. It is found that the prediction method is capable of giving prediction results with better correlation between the percent of correct prediction and the Matthews correlation coefficients for a group of test proteins as compared with those predicted using some secondary structural prediction methods. 
The prediction accuracy for about 40% of proteins in the database, or 50% of proteins in the test set, is better than 70%. This percentage for the test set drops to 30% if the structures of all the proteins in the set are treated as unknown.
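The hexapeptide scoring described above can be caricatured as a product of dipeptide transition probabilities: one boundary transition into the turn, three turn transitions across the central tetrapeptide, and one boundary transition out. A toy sketch with invented probability tables, not the published matrices:

```python
def hexapeptide_score(hexa, pre_boundary, turn, post_boundary, floor=1e-6):
    """Score a 6-residue window as pre * turn^3 * post over its 5 dipeptides.
    The tables map dipeptide strings to transition probabilities; unseen
    dipeptides fall back to a small floor value."""
    dipeptides = [hexa[i:i + 2] for i in range(5)]
    score = pre_boundary.get(dipeptides[0], floor)
    for d in dipeptides[1:4]:
        score *= turn.get(d, floor)
    score *= post_boundary.get(dipeptides[4], floor)
    return score

# Toy tables; Pro-Gly and Gly-Asn are classic turn-favoring dipeptides.
pre = {"AP": 0.5}
turn = {"PG": 0.4, "GN": 0.4, "NG": 0.3}
post = {"GA": 0.5}
print(hexapeptide_score("APGNGA", pre, turn, post))  # 0.5*0.4*0.4*0.3*0.5 = 0.012
```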
Parallel interference cancellation for CDMA applications
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Simon, Marvin K. (Inventor); Raphaeli, Dan (Inventor)
1997-01-01
The present invention provides a method of decoding a spread spectrum composite signal, the composite signal comprising plural user signals that have been spread with plural respective codes, wherein each coded signal is despread, averaged to produce a signal value, analyzed to produce a tentative decision, respread, summed with other respread signals to produce combined interference signals, the method comprising scaling the combined interference signals with a weighting factor to produce a scaled combined interference signal, scaling the composite signal with the weighting factor to produce a scaled composite signal, scaling the signal value by the complement of the weighting factor to produce a leakage signal, and combining the scaled composite signal, the scaled combined interference signal, and the leakage signal to produce an estimate of a respective user signal.
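The per-user combining step in the claim can be sketched as simple scalar arithmetic. Note one assumption: the claim only says the three signals are "combined," so subtracting the interference term (the usual convention in interference cancellation) is my reading, and all names are illustrative:

```python
def pic_estimate(composite, combined_interference, prior_signal_value, weight):
    """Partial parallel interference cancellation for one user: scale the
    composite and the estimated multiuser interference by the weighting
    factor, and leak back the complement of the weight times the previous
    stage's signal value. Subtracting the interference is assumed."""
    return (weight * composite
            - weight * combined_interference
            + (1.0 - weight) * prior_signal_value)

# With weight = 1 this is full cancellation; smaller weights cancel partially.
print(pic_estimate(10.0, 4.0, 2.0, 1.0))  # 6.0
print(pic_estimate(10.0, 4.0, 2.0, 0.5))  # 4.0
```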
Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory
Liu, C.; Liu, J.; Yao, Y. X.; ...
2017-01-16
Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way to minimise the error originated from the approximations in the theory. This conference proceeding reports our recent progress on this key issue, namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
Renormalization group analysis of the Reynolds stress transport equation
NASA Technical Reports Server (NTRS)
Rubinstein, R.; Barton, J. M.
1992-01-01
The pressure velocity correlation and return to isotropy term in the Reynolds stress transport equation are analyzed using the Yakhot-Orszag renormalization group. The perturbation series for the relevant correlations, evaluated to lowest order in the epsilon-expansion of the Yakhot-Orszag theory, are infinite series in tensor product powers of the mean velocity gradient and its transpose. Formal lowest order Pade approximations to the sums of these series produce a fast pressure strain model of the form proposed by Launder, Reece, and Rodi, and a return to isotropy model of the form proposed by Rotta. In both cases, the model constants are computed theoretically. The predicted Reynolds stress ratios in simple shear flows are evaluated and compared with experimental data. The possibility of deriving higher order nonlinear models by approximating the sums more accurately is discussed.
Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei
2016-02-01
This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. According to the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, for facilitating the implementation of the ADP method. Convergence properties of the ADP method are developed to show that the system state will converge to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks are also convergent to finite neighborhoods of their optimal ones. Finally, the simulation results will show the effectiveness of the developed data-driven ADP methods.
The Weighted-Average Lagged Ensemble.
DelSole, T; Trenary, L; Tippett, M K
2017-11-01
A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
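The sum-to-one optimal weights have a closed form: minimizing the weighted mean square error w'Cw subject to the weights summing to one, where C is the error covariance across lead times, gives weights proportional to C applied inversely to the all-ones vector. A small sketch with an invented covariance that also reproduces the negative-weight behavior discussed above for rapidly growing, highly correlated errors:

```python
import numpy as np

def lagged_ensemble_weights(error_cov):
    """Optimal weighted-average lagged ensemble weights:
    w = C^{-1} 1 / (1' C^{-1} 1), minimizing mean square error
    subject to the weights summing to one."""
    C = np.asarray(error_cov, float)
    w = np.linalg.solve(C, np.ones(C.shape[0]))
    return w / w.sum()

# Errors that grow quickly and are highly correlated across lead time:
# the longer lead receives a negative weight.
w = lagged_ensemble_weights([[1.0, 1.4], [1.4, 2.0]])
print(w)  # approximately [3.0, -2.0]
```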
Wittenberg, Philipp; Gan, Fah Fatt; Knoth, Sven
2018-04-17
The variable life-adjusted display (VLAD) is the first risk-adjusted graphical procedure proposed in the literature for monitoring the performance of a surgeon. It displays the cumulative sum of expected minus observed deaths. It has since become highly popular because the statistic plotted is easy to understand. But it is also easy to misinterpret a surgeon's performance by utilizing the VLAD, potentially leading to grave consequences. The problem of misinterpretation is essentially caused by the variance of the VLAD's statistic that increases with sample size. In order for the VLAD to be truly useful, a simple signaling rule is desperately needed. Various forms of signaling rules have been developed, but they are usually quite complicated. Without signaling rules, making inferences using the VLAD alone is difficult if not misleading. In this paper, we establish an equivalence between a VLAD with V-mask and a risk-adjusted cumulative sum (RA-CUSUM) chart based on the difference between the estimated probability of death and surgical outcome. Average run length analysis based on simulation shows that this particular RA-CUSUM chart has similar performance as compared to the established RA-CUSUM chart based on the log-likelihood ratio statistic obtained by testing the odds ratio of death. We provide a simple design procedure for determining the V-mask parameters based on a resampling approach. Resampling from a real data set ensures that these parameters can be estimated appropriately. Finally, we illustrate the monitoring of a real surgeon's performance using VLAD with V-mask. Copyright © 2018 John Wiley & Sons, Ltd.
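The VLAD statistic itself is a one-line cumulative sum, which is part of why the display is so popular. A minimal sketch with invented risks and outcomes (the signaling rule and V-mask discussed above are not reproduced here):

```python
def vlad_curve(expected_risks, outcomes):
    """Variable life-adjusted display: cumulative sum of expected minus
    observed deaths. expected_risks are per-case predicted probabilities
    of death; outcomes are 1 (death) or 0 (survival). A rising curve means
    fewer deaths than the risk-adjusted expectation."""
    total, curve = 0.0, []
    for p, y in zip(expected_risks, outcomes):
        total += p - y
        curve.append(total)
    return curve

print(vlad_curve([0.2, 0.5, 0.1], [0, 1, 0]))  # about [0.2, -0.3, -0.2]
```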
Predicting forest dieback in Maine, USA: a simple model based on soil frost and drought
Allan N.D. Auclair; Warren E. Heilman; Blondel Brinkman
2010-01-01
Tree roots of northern hardwoods are shallow rooted, winter active, and minimally frost hardened; dieback is a winter freezing injury to roots incited by frost penetration in the absence of adequate snow cover and exacerbated by drought in summer. High soil water content greatly increases conductivity of frost. We develop a model based on the sum of z-scores of soil...
Configurational coupled cluster approach with applications to magnetic model systems
NASA Astrophysics Data System (ADS)
Wu, Siyuan; Nooijen, Marcel
2018-05-01
A general exponential, coupled cluster like, approach is discussed to extract an effective Hamiltonian in configurational space, as a sum of 1-body, 2-body up to n-body operators. The simplest two-body approach is illustrated by calculations on simple magnetic model systems. A key feature of the approach is that equations up to a certain rank do not depend on higher body cluster operators.
ERIC Educational Resources Information Center
Cepriá, Gemma; Salvatella, Luis
2014-01-01
All pH calculations for simple acid-base systems used in introductory courses on general or analytical chemistry can be carried out by using a general procedure requiring the use of predominance diagrams. In particular, the pH is calculated as the sum of an independent term equaling the average pK[subscript a] values of the acids involved in the…
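One familiar instance of this kind of pKa-averaging rule (my example, since the abstract is truncated) is the amphiprotic species, whose pH is approximately the mean of the two adjacent pKa values:

```python
def ph_amphiprotic(pka1, pka2):
    """Textbook approximation for an amphiprotic species (e.g., HCO3-):
    pH is the average of the two adjacent pKa values. Concentration-dependent
    correction terms are omitted in this sketch."""
    return (pka1 + pka2) / 2.0

# Bicarbonate in water, using pKa1 = 6.35 and pKa2 = 10.33 for carbonic acid.
print(ph_amphiprotic(6.35, 10.33))  # about 8.34
```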
2017-09-08
Expedition 52, as simple as 1-2-3: the first International Space Station science officer who was also the second woman to serve as a station crewmember has now finished her third long-duration mission…and that’s just for starters. Last week’s crew return to Earth puts Expedition 52 in the record books; we peeked inside and pulled out some of the best numbers to sum up the flight.
ERIC Educational Resources Information Center
Kahana, Michael J.; Sederberg, Per B.; Howard, Marc W.
2008-01-01
The temporal context model posits that search through episodic memory is driven by associations between the multiattribute representations of items and context. Context, in turn, is a recency weighted sum of previous experiences or memories. Because recently processed items are most similar to the current representation of context, M. Usher, E. J.…
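The "recency weighted sum" idea can be sketched as a decaying context update in the spirit of the temporal context model: each new item blends into a slowly drifting context vector, so recent items dominate. Parameter names and values below are illustrative, not from the paper:

```python
def update_context(context, item_features, rho=0.7, beta=0.3):
    """TCM-style recency-weighted update: the new context is a decaying
    mix of the old context (weight rho) and the current item's
    representation (weight beta)."""
    return [rho * c + beta * f for c, f in zip(context, item_features)]

ctx = [1.0, 0.0]
ctx = update_context(ctx, [0.0, 1.0])
print(ctx)  # [0.7, 0.3]: the just-seen item already carries substantial weight
```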
The Testing of Airplane Fabrics
NASA Technical Reports Server (NTRS)
Schraivogel, Karl
1932-01-01
This report considers the determining factors in the choice of airplane fabrics, describes the customary methods of testing and reports some of the experimental results. To sum up briefly the results obtained with the different fabrics, it may be said that increasing the strength of covering fabrics by using coarser yarns ordinarily offers no difficulty, because the weight increment from doping is relatively smaller.
2015-12-01
issues. A weighted mean can be used in place of the grand mean, and the STATA software automatically handles the assignment of the sums of squares. Thus...between groups (i.e., sphericity) using the multivariate test of means provided in STATA 12.1. This test checks whether or not population variances and
Influence of absorption by environmental water vapor on radiation transfer in wildland fires
D. Frankman; B. W. Webb; B. W. Butler
2008-01-01
The attenuation of radiation transfer from wildland flames to fuel by environmental water vapor is investigated. Emission is tracked from points on an idealized flame to locations along the fuel bed while accounting for absorption by environmental water vapor in the intervening medium. The Spectral Line Weighted-sum-of-gray-gases approach was employed for treating the...
ERIC Educational Resources Information Center
Soh, Kaycheng
2015-01-01
In the various world university ranking schemes, the "Overall" is a sum of the weighted indicator scores. As the indicators are of a different nature from each other, "Overall" conceals important differences. Factor analysis of the data from three prominent ranking schemes reveals that there are two factors in each of the…
Looking for Patterns in OfStEd Judgements about Primary Pupil Achievement in Design and Technology
ERIC Educational Resources Information Center
Cross, Alan
2006-01-01
Respective English governments have placed considerable faith, political weight and not inconsiderable sums of money in a system of school inspection organised and led by the Office for Standards in Education (OfStEd). This article considers a so-called foundation subject, design and technology, and the extent to which we might meaningfully use…
NASA Astrophysics Data System (ADS)
Chu, Huaqiang; Liu, Fengshan; Consalvi, Jean-Louis
2014-08-01
The relationship between the spectral line based weighted-sum-of-gray-gases (SLW) model and the full-spectrum k-distribution (FSK) model in isothermal and homogeneous media is investigated in this paper. The SLW transfer equation can be derived from the FSK transfer equation expressed in the k-distribution function without approximation. It confirms that the SLW model is equivalent to the FSK model in the k-distribution function form. The numerical implementation of the SLW relies on a somewhat arbitrary discretization of the absorption cross section whereas the FSK model finds the spectrally integrated intensity by integration over the smoothly varying cumulative-k distribution function using a Gaussian quadrature scheme. The latter is therefore in general more efficient as a fewer number of gray gases is required to achieve a prescribed accuracy. Sample numerical calculations were conducted to demonstrate the different efficiency of these two methods. The FSK model is found more accurate than the SLW model in radiation transfer in H2O; however, the SLW model is more accurate in media containing CO2 as the only radiating gas due to its explicit treatment of ‘clear gas.’
Multi Criteria Evaluation Module for RiskChanges Spatial Decision Support System
NASA Astrophysics Data System (ADS)
Olyazadeh, Roya; Jaboyedoff, Michel; van Westen, Cees; Bakker, Wim
2015-04-01
The Multi-Criteria Evaluation (MCE) module is one of the five modules of the RiskChanges spatial decision support system. The RiskChanges web-based platform aims to analyze changes in hydro-meteorological risk and provides tools for selecting the best risk-reduction alternative. It is developed under the CHANGES framework (changes-itn.eu) and the INCREO project (increo-fp7.eu). The MCE tool helps decision makers and spatial planners evaluate, sort, and rank the decision alternatives. Users can choose among indicators defined within the system from risk and cost-benefit analysis results, and can also add their own indicators. The system then standardizes and prioritizes them. Finally, the best decision alternative is selected using the weighted sum model (WSM). This work facilitates the use of MCE for analyzing risk that changes over time, under different scenarios and future years, by bringing group decision making into practice and comparing the results numerically and graphically within the system. We believe this study helps decision makers reach the best solution by expressing their preferences for strategies under future scenarios. Keywords: Multi-Criteria Evaluation, Spatial Decision Support System, Weighted Sum Model, Natural Hazard Risk Management
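The weighted sum model described in this abstract can be sketched in a few lines. The indicator names, scores, and weights below are invented for illustration, and min-max standardization is one common choice such a system might use, not necessarily RiskChanges' own scheme:

```python
import numpy as np

def wsm_rank(scores, weights, benefit):
    """Rank alternatives by a weighted sum of min-max standardized indicators.

    scores: (n_alternatives, n_indicators); benefit[j] is True when a higher
    value of indicator j is better (cost indicators are inverted).
    """
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(axis=0), s.max(axis=0)
    norm = (s - lo) / np.where(hi > lo, hi - lo, 1.0)   # standardize columns
    norm = np.where(benefit, norm, 1.0 - norm)          # invert cost indicators
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the priorities
    overall = norm @ w                    # weighted sum per alternative
    return overall, np.argsort(-overall)  # scores and best-first order

# Invented example: risk reduction (benefit), cost (cost), feasibility (benefit)
scores = [[0.3, 120, 0.8],   # alternative A
          [0.5, 200, 0.6],   # alternative B
          [0.4, 150, 0.9]]   # alternative C
overall, order = wsm_rank(scores, weights=[0.5, 0.3, 0.2],
                          benefit=[True, False, True])
```

Here alternative C wins: it is never the best on any single indicator but scores well across all of them, which is exactly the trade-off behaviour a WSM is meant to capture.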
Low-energy isovector and isoscalar dipole response in neutron-rich nuclei
NASA Astrophysics Data System (ADS)
Vretenar, D.; Niu, Y. F.; Paar, N.; Meng, J.
2012-04-01
The self-consistent random-phase approximation, based on the framework of relativistic energy density functionals, is employed in the study of isovector and isoscalar dipole response in 68Ni,132Sn, and 208Pb. The evolution of pygmy dipole states (PDSs) in the region of low excitation energies is analyzed as a function of the density dependence of the symmetry energy for a set of relativistic effective interactions. The occurrence of PDSs is predicted in the response to both the isovector and the isoscalar dipole operators, and its strength is enhanced with the increase in the symmetry energy at saturation and the slope of the symmetry energy. In both channels, the PDS exhausts a relatively small fraction of the energy-weighted sum rule but a much larger percentage of the inverse energy-weighted sum rule. For the isovector dipole operator, the reduced transition probability B(E1) of the PDSs is generally small because of pronounced cancellation of neutron and proton partial contributions. The isoscalar-reduced transition amplitude is predominantly determined by neutron particle-hole configurations, most of which add coherently, and this results in a collective response of the PDSs to the isoscalar dipole operator.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions, and linear constraints is introduced. The model is first transformed into a deterministic multiple-objective nonlinear programming model by taking the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formulation with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is then transformed into a linear (parametric) complementarity problem, the basic formulation of the proposed approach. Sufficient and necessary conditions for (properly, weakly) efficient solutions and some structural characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on the reference direction and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized, besides expectation and risk. The interactive approach is illustrated with a practical example.
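The weighted-sum combination of a quadratic and a linear objective can be illustrated with a small scalarization sketch. The candidate portfolios, covariance matrix Q, and return vector c below are invented, and a brute-force search over a discrete candidate set stands in for the paper's parametric complementarity machinery:

```python
import numpy as np

def scalarized(x, w, Q, c):
    # Weighted sum of a quadratic objective (x'Qx) and a linear one (c'x).
    return w * (x @ Q @ x) + (1 - w) * (c @ x)

Q = np.array([[0.04, 0.01], [0.01, 0.09]])   # invented covariance (risk)
c = np.array([-0.08, -0.12])                  # invented negative returns
candidates = [np.array(v, dtype=float) / sum(v)
              for v in [(1, 0), (0, 1), (1, 1), (3, 1), (1, 3)]]
# Sweep the weight to trace a discrete stand-in for the efficient frontier.
frontier = [min(candidates, key=lambda x: scalarized(x, w, Q, c))
            for w in np.linspace(0.1, 0.9, 5)]
```

With a small risk weight the high-return (but risky) portfolio wins; as the weight shifts toward the quadratic term, lower-variance mixtures take over, which is the frontier-tracing behaviour the interactive algorithm exploits.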
Dynamic Precursors of Flares in Active Region NOAA 10486
NASA Astrophysics Data System (ADS)
Korsós, M. B.; Gyenge, N.; Baranyi, T.; Ludmány, A.
2015-03-01
Four different methods are applied here to study the precursors of flare activity in the Active Region NOAA 10486. Two approaches track the temporal behaviour of suitably chosen features (one, the weighted horizontal gradient WGM, is the generalized form of the horizontal gradient of the magnetic field, GM; the other is the sum of the horizontal gradients of the magnetic field, GS, for all sunspot pairs). WGM is a photospheric indicator, that is, a proxy measure of the magnetic non-potentiality of a specific area of the active region; i.e., it captures the temporal variation of the weighted horizontal gradient of magnetic flux summed over the region where opposite magnetic polarities are highly mixed. The third method, referred to as the separateness parameter, Sl-f, considers the overall morphology. Further, GS and Sl-f are photospheric, newly defined quick-look indicators of the polarity mix of the entire active region. The fourth method tracks the temporal variation of small X-ray flares, their times of succession, and their energies, observed by the Reuven Ramaty High Energy Solar Spectroscopic Imager instrument. All approaches yield specific precursory signatures of the imminence of flares.
NASA Astrophysics Data System (ADS)
Webb, G. M.; Zank, G. P.; Burrows, R. H.; Ratkiewicz, R. E.
2011-02-01
Multi-dimensional Alfvén simple waves in magnetohydrodynamics (MHD) are investigated using Boillat's formalism. For simple wave solutions, all physical variables (the gas density, pressure, fluid velocity, entropy, and magnetic field induction in the MHD case) depend on a single phase function ϕ, which is a function of the space and time variables. The simple wave ansatz requires that the wave normal and the normal speed of the wave front depend only on the phase function ϕ. This leads to an implicit equation for the phase function and a generalization of the concept of a plane wave. We obtain examples of Alfvén simple waves, based on the right eigenvector solutions for the Alfvén mode. The Alfvén mode solutions have six integrals, namely that the entropy, density, magnetic pressure, and the group velocity (the sum of the Alfvén and fluid velocity) are constant throughout the wave. The eigenequations require that the rate of change of the magnetic induction B with ϕ throughout the wave is perpendicular to both the wave normal n and B. Methods to construct simple wave solutions based on specifying either a solution ansatz for n(ϕ) or B(ϕ) are developed.
NASA Astrophysics Data System (ADS)
Webb, G. M.; Zank, G. P.; Burrows, R.
2009-12-01
Multi-dimensional Alfvén simple waves in magnetohydrodynamics (MHD) are investigated using Boillat's formalism. For simple wave solutions, all physical variables (the gas density, pressure, fluid velocity, entropy, and magnetic field induction in the MHD case) depend on a single phase function φ, which is a function of the space and time variables. The simple wave ansatz requires that the wave normal and the normal speed of the wave front depend only on the phase function φ. This leads to an implicit equation for the phase function, and a generalisation of the concept of a plane wave. We obtain examples of Alfvén simple waves, based on the right eigenvector solutions for the Alfvén mode. The Alfvén mode solutions have six integrals, namely that the entropy, density, magnetic pressure and the group velocity (the sum of the Alfvén and fluid velocity) are constant throughout the wave. The eigen-equations require that the rate of change of the magnetic induction B with φ throughout the wave is perpendicular to both the wave normal n and B. Methods to construct simple wave solutions based on specifying either a solution ansatz for n(φ) or B(φ) are developed.
NASA Astrophysics Data System (ADS)
Alekseev, Oleg; Mineev-Weinstein, Mark
2016-12-01
A point source on a plane constantly emits particles which rapidly diffuse and then stick to a growing cluster. The growth probability of a cluster is presented as a sum over all possible scenarios leading to the same final shape. The classical point of the action, defined as minus the logarithm of the growth probability, describes the most probable scenario and reproduces the Laplacian growth equation, which embraces numerous fundamental free-boundary dynamics in nonequilibrium physics. For nonclassical scenarios we introduce virtual point sources, in whose presence the action becomes the Kullback-Leibler entropy. Strikingly, this entropy is shown to be the sum of the electrostatic energies of the layers grown per elementary time unit. Hence the growth probability of the presented nonequilibrium process obeys Gibbs-Boltzmann statistics, which, as a rule, do not apply out of equilibrium. Each layer's probability is expressed as a product of simple factors in an auxiliary complex plane after a properly chosen conformal map. The action in this plane is a sum of Robin functions, which solve the Liouville equation. Finally, we establish connections of our theory with the τ function of the integrable Toda hierarchy and with the Liouville theory for noncritical quantum strings.
NASA Astrophysics Data System (ADS)
Richings, Gareth W.; Habershon, Scott
2018-04-01
We present significant algorithmic improvements to a recently proposed direct quantum dynamics method, based upon combining well established grid-based quantum dynamics approaches and expansions of the potential energy operator in terms of a weighted sum of Gaussian functions. Specifically, using a sum of low-dimensional Gaussian functions to represent the potential energy surface (PES), combined with a secondary fitting of the PES using singular value decomposition, we show how standard grid-based quantum dynamics methods can be dramatically accelerated without loss of accuracy. This is demonstrated by on-the-fly simulations (using both standard grid-based methods and multi-configuration time-dependent Hartree) of both proton transfer on the electronic ground state of salicylaldimine and the non-adiabatic dynamics of pyrazine.
Examination of the first excited state of 4He as a potential breathing mode
NASA Astrophysics Data System (ADS)
Bacca, Sonia; Barnea, Nir; Leidemann, Winfried; Orlandini, Giuseppina
2015-02-01
The isoscalar monopole excitation of 4He is studied within a few-body ab initio approach. We consider the transition density to the low-lying and narrow 0+ resonance, as well as various sum rules and the strength energy distribution itself at different momentum transfers q . Realistic nuclear forces of chiral and phenomenological nature are employed. Various indications for a collective breathing mode are found: (i) the specific shape of the transition density, (ii) the high degree of exhaustion of the non-energy-weighted sum rule at low q , and (iii) the complete dominance of the resonance peak in the excitation spectrum. For the incompressibility K of the α particle, two different definitions give two rather small values (22 and 36 MeV).
Urbanek, Margrit; Hayes, M Geoffrey; Armstrong, Loren L; Morrison, Jean; Lowe, Lynn P; Badon, Sylvia E; Scheftner, Doug; Pluzhnikov, Anna; Levine, David; Laurie, Cathy C; McHugh, Caitlin; Ackerman, Christine M; Mirel, Daniel B; Doheny, Kimberly F; Guo, Cong; Scholtens, Denise M; Dyer, Alan R; Metzger, Boyd E; Reddy, Timothy E; Cox, Nancy J; Lowe, William L
2013-09-01
Newborns characterized as large or small for gestational age are at risk for increased mortality and morbidity during the first year of life, as well as for obesity and dysglycemia as children and adults. The intrauterine environment and fetal genes contribute to fetal size at birth. To define the genetic architecture underlying newborn size, we performed a genome-wide association study (GWAS) in 4281 newborns in four ethnic groups from the Hyperglycemia and Adverse Pregnancy Outcome Study. We tested for association with newborn anthropometric traits (birth length, head circumference, birth weight, percent fat mass, and sum of skinfolds) and newborn metabolic traits (cord glucose and C-peptide) under three models. Model 1 adjusted for field center, ancestry, neonatal gender, gestational age at delivery, parity, and maternal age at the oral glucose tolerance test (OGTT); Model 2 adjusted for the Model 1 covariates plus maternal body mass index (BMI) at OGTT, maternal height at OGTT, maternal mean arterial pressure at OGTT, maternal smoking, and drinking; Model 3 adjusted for the Model 2 covariates plus maternal glucose and C-peptide at OGTT. Strong evidence for association was observed between measures of newborn adiposity (sum of skinfolds Model 3 Z-score 7.356, P = 1.90×10⁻¹³, and to a lesser degree fat mass and birth weight) and a region on Chr3q25.31 mapping between CCNL and LEKR1. These findings were replicated in an independent cohort of 2296 newborns. This region has previously been shown to be associated with birth weight in Europeans. The current study suggests that the association of this locus with birth weight is secondary to an effect on fat, as opposed to lean, body mass.
Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms
NASA Astrophysics Data System (ADS)
Ablinger, Jakob; Blümlein, Johannes; Raab, Clemens; Schneider, Carsten; Wißbrock, Fabian
2014-08-01
We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In the case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse-binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of ∼30 square-root-valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N ∈ C. Integrals with a power-like divergence in N-space, ∝ a^N with a ∈ R, a > 1, for large values of N emerge. They still possess a representation in x-space, which in the present case is given in terms of root-valued iterated integrals. The method of hyperlogarithms is also used to calculate higher moments for crossed-box graphs with different operator insertions.
SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, B; Liu, S; Zhang, T
2016-06-15
Purpose: Apertures or collimators are used to laterally shape proton beams in double-scattering (DS) delivery and to sharpen the penumbra in pencil-beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of treatment planning systems (TPS). The purpose of this study is to provide a method to correct for the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion chamber scanning in water and with a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. The aperture-scattered dose was modeled as a 1D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all edges of the aperture, a distance-weighted sum was used in the model, based on the distance from the calculation point to the aperture edge. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1D gamma index (2%/2 mm) between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2D gamma index (2%/2 mm) at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose is significantly reduced.
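A minimal sketch of the described correction, assuming a 1D Gaussian scatter term whose amplitude falls linearly to zero at a maximum depth, as the abstract reports. The parameters A0, sigma, and d_max are illustrative placeholders, not the study's fitted values:

```python
import numpy as np

def scatter_dose(dist_to_edge, depth, A0=0.08, sigma=0.4, d_max=15.0):
    """Aperture-scatter term: a 1D Gaussian of lateral distance to the
    aperture edge, with amplitude decreasing linearly with depth
    (reaching zero at d_max). All parameters are illustrative."""
    amp = A0 * max(0.0, 1.0 - depth / d_max)
    d = np.asarray(dist_to_edge, dtype=float)
    return amp * np.exp(-0.5 * (d / sigma) ** 2)

# Corrected dose = TPS (non-contaminated) dose + modeled scatter horn.
x = np.linspace(-10, 10, 201)                  # lateral position (cm)
tps_dose = np.where(np.abs(x) < 10, 1.0, 0.0)  # idealized flat TPS profile
corrected = tps_dose + scatter_dose(np.abs(x) - 10.0, depth=4.0)
```

The corrected profile shows the ∼8%-scale "horns" near the field edge at shallow depth, and the scatter term vanishes at the maximum depth, mirroring the measured behaviour.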
NASA Astrophysics Data System (ADS)
Sofiev, Mikhail; Ritenberga, Olga; Albertini, Roberto; Arteta, Joaquim; Belmonte, Jordina; Geller Bernstein, Carmi; Bonini, Maira; Celenk, Sevcan; Damialis, Athanasios; Douros, John; Elbern, Hendrik; Friese, Elmar; Galan, Carmen; Oliver, Gilles; Hrga, Ivana; Kouznetsov, Rostislav; Krajsek, Kai; Magyar, Donat; Parmentier, Jonathan; Plu, Matthieu; Prank, Marje; Robertson, Lennart; Steensen, Birthe Marie; Thibaudon, Michel; Segers, Arjo; Stepanovich, Barbara; Valdebenito, Alvaro M.; Vira, Julius; Vokou, Despoina
2017-10-01
The paper presents the first modelling experiment of European-scale olive pollen dispersion, analyses the quality of the predictions, and outlines the research needs. A six-model ensemble of the Copernicus Atmosphere Monitoring Service (CAMS) was run throughout the olive season of 2014, computing the olive pollen distribution. The simulations were compared with observations in eight countries that are members of the European Aeroallergen Network (EAN). The analysis was performed for individual models, for the ensemble mean and median, and for a dynamically optimised combination of the ensemble members obtained via fusion of the model predictions with observations. The models, while generally reproducing the olive season of 2014, showed noticeable deviations from both the observations and each other. In particular, the start of the season was predicted about 8 days too early, and for some models the error amounted to almost 2 weeks. For the end of the season, the disagreement between the models and the observations varied from a nearly perfect match up to 2 weeks too late. A series of sensitivity studies carried out to understand the origin of the disagreements revealed the crucial role of ambient temperature and of the consistency of its representation between the meteorological models and the heat-sum-based phenological model. In particular, a simple correction to the heat-sum threshold eliminated the shift of the start of the season, but its validity in other years remains to be checked. The short-term features of the concentration time series were reproduced better, suggesting that precipitation events and cold/warm spells, as well as large-scale transport, were represented rather well. Ensemble averaging led to more robust results. The best skill scores were obtained with data fusion, which used the previous days' observations to identify optimal weighting coefficients for the individual model forecasts.
Such combinations were tested for forecasting periods of up to 4 days and shown to remain nearly optimal throughout the whole period.
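The heat-sum (degree-day) criterion mentioned above can be sketched as follows. The base temperature and threshold are illustrative values, not those of the phenological model used in the study:

```python
def season_start(daily_temp, t_base=5.0, threshold=200.0):
    """Return the index of the first day on which the accumulated
    temperature excess over t_base reaches the heat-sum threshold,
    or None if it is never reached. Parameters are illustrative."""
    total = 0.0
    for day, t in enumerate(daily_temp):
        total += max(0.0, t - t_base)    # degree-days above the base
        if total >= threshold:
            return day
    return None

# A cold spell contributes nothing; warm days accumulate 10 deg-days each.
temps = [4.0] * 10 + [15.0] * 60
start = season_start(temps)
```

Lowering the threshold shifts the predicted season start earlier, which is exactly the kind of threshold correction the paper applied to remove the 8-day bias.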
A novel three-stage distance-based consensus ranking method
NASA Astrophysics Data System (ADS)
Aghayi, Nazila; Tavana, Madjid
2018-05-01
In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
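A simplified sketch of the first and third stages (the secondary-goal stage that constrains the weight vectors is omitted): individual rank positions are computed for each weight vector, and a consensus ranking is then taken that minimizes the squared distance to them, which reduces to ranking by mean rank position. All data are invented:

```python
import numpy as np

def individual_ranks(scores, weight_sets):
    """Stage 1: rank alternatives (1 = best) under each weight vector."""
    ranks = []
    for w in weight_sets:
        vals = np.asarray(scores) @ np.asarray(w, dtype=float)
        order = np.argsort(-vals)               # best first
        r = np.empty(len(vals), dtype=int)
        r[order] = np.arange(1, len(vals) + 1)
        ranks.append(r)
    return np.array(ranks)                      # (n_weight_sets, n_alternatives)

def consensus_rank(rank_matrix):
    """Stage 3 (simplified): rank by mean rank position, which minimizes the
    sum of squared distances to the individual rank positions."""
    mean_r = rank_matrix.mean(axis=0)
    order = np.argsort(mean_r)
    final = np.empty(len(mean_r), dtype=int)
    final[order] = np.arange(1, len(mean_r) + 1)
    return final

scores = np.array([[0.9, 0.6],   # alternative A
                   [0.5, 0.5],   # alternative B
                   [0.2, 0.9]])  # alternative C
rm = individual_ranks(scores, [[0.8, 0.2], [0.2, 0.8]])
final = consensus_rank(rm)
```

In this toy case the two weight vectors disagree sharply (A first vs. C first), and the consensus stage settles on A, C, B by minimizing the distance to both individual rankings.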
Photo-Spectrometer Realized In A Standard Cmos Ic Process
Simpson, Michael L.; Ericson, M. Nance; Dress, William B.; Jellison, Gerald E.; Sitter, Jr., David N.; Wintenberg, Alan L.
1999-10-12
A spectrometer, comprises: a semiconductor having a silicon substrate, the substrate having integrally formed thereon a plurality of layers forming photo diodes, each of the photo diodes having an independent spectral response to an input spectra within a spectral range of the semiconductor and each of the photo diodes formed only from at least one of the plurality of layers of the semiconductor above the substrate; and, a signal processing circuit for modifying signals from the photo diodes with respective weights, the weighted signals being representative of a specific spectral response. The photo diodes have different junction depths and different polycrystalline silicon and oxide coverings. The signal processing circuit applies the respective weights and sums the weighted signals. In a corresponding method, a spectrometer is manufactured by manipulating only the standard masks, materials and fabrication steps of standard semiconductor processing, and integrating the spectrometer with a signal processing circuit.
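The weight-and-sum idea in this patent can be sketched numerically: given the spectral responses of several on-chip photodiodes, least squares chooses weights so that their weighted sum approximates a target spectral response. The response curves below are synthetic Gaussians, not measured CMOS diode data:

```python
import numpy as np

wavelengths = np.linspace(400, 700, 61)   # visible band, nm

def gauss(mu, s):
    return np.exp(-0.5 * ((wavelengths - mu) / s) ** 2)

# Synthetic responses for three diodes with different junction depths
# and coverings, hence different (broad) spectral peaks.
R = np.stack([gauss(450, 60), gauss(550, 70), gauss(640, 50)], axis=1)
target = gauss(520, 30)                   # desired narrow-band response
w, *_ = np.linalg.lstsq(R, target, rcond=None)   # fit the diode weights
approx = R @ w                            # realized weighted-sum response
```

Only a handful of broad, overlapping responses are available, so the fit is approximate, but the weighted sum is markedly closer to the target than any single diode; the signal-processing circuit in the patent applies exactly this weight-and-sum step in hardware.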
An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value
NASA Astrophysics Data System (ADS)
Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu
2018-03-01
Blind image restoration algorithms usually produce ringing that is most visible at edges. The ringing is mainly affected by noise, the type of restoration algorithm, and errors in the blur-kernel estimate made during restoration. Based on the physical mechanism of ringing, a method for evaluating ringing in blindly restored images is proposed. The method extracts the overshoot and ripple regions of the image and computes weighted statistics of the gradient values in those regions. With weights set through multiple experiments, edge information is used to characterize edge detail, determine the weights, and quantify the severity of the ringing, yielding an evaluation method for the ringing caused by blind restoration. Experimental results show that the method effectively evaluates ringing in images restored with different restoration algorithms and different restoration parameters, and the evaluation results are consistent with visual assessment.
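A hedged sketch of such a metric, assuming a simple recipe: find strong edges, then accumulate a weighted sum of gradient extremes (overshoot) and gradient means (ripple) in a window around each edge. The threshold, window size, and weights are illustrative choices, not the paper's fitted values:

```python
import numpy as np

def ringing_score(img, edge_thresh=0.5, w_overshoot=0.7, w_ripple=0.3):
    """Weighted sum of gradient extreme values near strong vertical edges.
    Higher scores indicate more ringing. All parameters are illustrative."""
    g = np.abs(np.diff(np.asarray(img, dtype=float), axis=1))  # |horiz. gradient|
    edge_cols = np.where(g.max(axis=0) > edge_thresh)[0]       # strong edges
    score = 0.0
    for c in edge_cols:
        lo, hi = max(0, c - 3), min(g.shape[1], c + 4)         # window at edge
        region = g[:, lo:hi]
        score += w_overshoot * region.max() + w_ripple * region.mean()
    return score

# A clean step edge vs. the same edge with overshoot/undershoot ripple.
clean = np.tile(np.concatenate([np.zeros(8), np.ones(8)]), (4, 1))
ringy = clean.copy()
ringy[:, 9] += 0.2
ringy[:, 10] -= 0.2
```

The ringy image scores higher than the clean one because the ripple raises the gradient statistics inside the edge window, which is the behaviour a ringing metric must exhibit.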
Design of Probabilistic Random Forests with Applications to Anticancer Drug Sensitivity Prediction
Rahman, Raziur; Haider, Saad; Ghosh, Souparno; Pal, Ranadip
2015-01-01
Random forests consisting of an ensemble of regression trees with equal weights are frequently used for the design of predictive models. In this article, we consider an extension of the methodology by representing the regression trees in the form of probabilistic trees and analyzing the nature of heteroscedasticity. The probabilistic tree representation allows for analytical computation of confidence intervals (CIs), and the tree-weight optimization is expected to provide stricter CIs with comparable performance in mean error. We approached the ensemble of probabilistic trees' prediction from the perspectives of a mixture distribution and of a weighted sum of correlated random variables. We applied our methodology to the drug sensitivity prediction problem on synthetic data and the Cancer Cell Line Encyclopedia dataset, and illustrated that tree weights can be selected to reduce the average length of the CI without an increase in mean error. PMID:27081304
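The mixture-distribution view of the weighted ensemble can be sketched directly: with per-tree predictive means and variances, the ensemble mean is the weighted sum and the ensemble variance follows from the law of total variance. The numbers are invented:

```python
import numpy as np

def ensemble_predict(means, variances, weights):
    """Weighted mixture of per-tree predictive distributions.

    means[t], variances[t]: predictive mean and variance of tree t;
    weights[t]: (unnormalized) tree weight. Returns (mean, variance)."""
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize tree weights
    mean = np.sum(w * m)                         # weighted-sum prediction
    var = np.sum(w * (v + m ** 2)) - mean ** 2   # law of total variance
    return mean, var

mean, var = ensemble_predict([0.9, 1.1, 1.0], [0.04, 0.05, 0.03],
                             [1.0, 1.0, 2.0])
```

The mixture variance includes the between-tree spread of the means on top of the within-tree variances, so choosing the weights changes the CI length, which is the quantity the paper's weight optimization shrinks.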
The proper weighting function for retrieving temperatures from satellite measured radiances
NASA Technical Reports Server (NTRS)
Arking, A.
1976-01-01
One class of methods for converting satellite-measured radiances into atmospheric temperature profiles involves a linearization of the radiative transfer equation, ΔR = Σ Wᵢ ΔTᵢ (i = 1, …, s), where ΔTᵢ is the deviation of the temperature in layer i from that of a reference atmosphere, ΔR is the difference of the radiance at satellite altitude from the corresponding radiance for the reference atmosphere, and Wᵢ is the discrete (or vector) form of the T-weighting (i.e., temperature-weighting) function W(P), where P is pressure. The top layer of the atmosphere corresponds to i = 1, the bottom layer to i = s − 1, and i = s refers to the surface. Linearization in temperature (or some function of temperature) is at the heart of all linear or matrix methods. The weighting function that should be used is developed.
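The linearized relation can be exercised numerically: with a matrix W of discrete weighting functions (one row per channel), the layer temperature deviations are recovered from the radiance deviations by least squares. The weighting functions here are synthetic random rows, not physical W(P) profiles, and the noise-free case is shown for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_layers = 8, 5
W = rng.random((n_channels, n_layers))           # rows: discrete weighting functions
dT_true = np.array([1.0, 0.5, -0.3, 0.2, 0.0])   # layer temperature deviations (K)
dR = W @ dT_true                                 # forward model: dR = W dT
dT_est, *_ = np.linalg.lstsq(W, dR, rcond=None)  # least-squares retrieval
```

With more channels than layers and no noise the inversion is exact; in practice the conditioning of W, which is precisely what the choice of weighting function controls, determines how noise in dR propagates into the retrieved profile.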
NASA Astrophysics Data System (ADS)
Karrasch, C.; Hauschild, J.; Langer, S.; Heidrich-Meisner, F.
2013-06-01
We revisit the problem of the spin Drude weight D of the integrable spin-1/2 XXZ chain using two complementary approaches, exact diagonalization (ED) and the time-dependent density-matrix renormalization group (tDMRG). We pursue two main goals. First, we present extensive results for the temperature dependence of D. By exploiting time translation invariance within tDMRG, one can extract D for significantly lower temperatures than in previous tDMRG studies. Second, we discuss the numerical quality of the tDMRG data and elaborate on details of the finite-size scaling of the ED results, comparing calculations carried out in the canonical and grand-canonical ensembles. Furthermore, we analyze the behavior of the Drude weight as the point with SU(2)-symmetric exchange is approached and discuss the relative contribution of the Drude weight to the sum rule as a function of temperature.
Near-Optimal Operation of Dual-Fuel Launch Vehicles
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Chou, H. C.; Bowles, J. V.
1996-01-01
A near-optimal guidance law for the ascent trajectory from the earth's surface to earth orbit of a fully reusable single-stage-to-orbit pure-rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, sensitivity to perturbations in volume needs to be taken into consideration as well as weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.
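The cost functional can be sketched as J = m_fuel + k·V_fuel, a weighted sum of fuel mass and fuel volume. The propellant densities and the weighting factor k below are illustrative numbers, not the paper's values:

```python
# Illustrative sketch of a mass-plus-volume cost functional for a
# dual-fuel vehicle. Densities are approximate; k is an invented weight.
RHO_LH2 = 71.0     # liquid hydrogen density, kg/m^3 (approximate)
RHO_RP1 = 820.0    # kerosene (RP-1) density, kg/m^3 (approximate)

def fuel_cost(m_h2, m_rp1, k=50.0):
    """J = total fuel mass (kg) + k * total fuel volume (m^3)."""
    mass = m_h2 + m_rp1
    volume = m_h2 / RHO_LH2 + m_rp1 / RHO_RP1
    return mass + k * volume

# A kilogram of hydrogen "costs" more than a kilogram of kerosene once
# tank volume is weighted in, reflecting hydrogen's low density.
```

Sweeping k trades tank volume against propellant mass, which is how the weighting factor can be tuned to minimize empty weight for a given payload.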
Relative trace-element concern indexes for eastern Kentucky coals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, S.L.
Coal trace elements that could affect environmental quality were studied in 372 samples (collected and analyzed by the Kentucky Geological Survey and the United States Geological Survey) from 36 coal beds in eastern Kentucky. Relative trace-element concern indexes are defined as weighted sums of standardized (subtract mean; divide by standard deviation) concentrations. Index R is calculated from uranium and thorium, index 1 from elements of minor concern (antimony, barium, bromine, chloride, cobalt, lithium, manganese, sodium, and strontium), index 2 from elements of moderate concern (chromium, copper, fluorine, nickel, vanadium, and zinc), and index 4 from elements of greatest concern (arsenic, boron, cadmium, lead, mercury, molybdenum, and selenium). The numerals indicate the weights, except that index R is weighted by 1, and index 124 is the unweighted sum of indexes 1, 2, and 4. Contour mapping of the indexes is valid because all indexes have non-nugget-effect variograms. Index 124 is low west of Lee and Bell counties, and in Pike County. Index 124 is high in the area bounded by Boyd, Menifee, Knott, and Martin counties and in Owsley, Clay, and Leslie counties. Coal from some areas of eastern Kentucky is thus less likely to cause environmental problems than that from other areas. Positive correlations of all indexes with the centered log ratios of ash, and negative correlations with the centered log ratios of carbon, hydrogen, nitrogen, oxygen, and sulfur, indicate that the trace elements of concern are predominantly associated with ash. Beneficiation probably would reduce the indexes significantly.
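The index construction reduces to standardizing each element's concentrations and taking a weighted sum. The concentration data below are random placeholders, with group sizes matching the element lists in the abstract:

```python
import numpy as np

def concern_index(conc, weight):
    """Weighted sum of standardized concentrations, one index per sample.
    conc: (n_samples, n_elements) array of concentrations."""
    z = (conc - conc.mean(axis=0)) / conc.std(axis=0)  # subtract mean, / std
    return weight * z.sum(axis=1)

rng = np.random.default_rng(1)                 # placeholder concentrations
minor = rng.lognormal(size=(10, 9))            # 9 elements of minor concern
moderate = rng.lognormal(size=(10, 6))         # 6 elements of moderate concern
greatest = rng.lognormal(size=(10, 7))         # 7 elements of greatest concern
index1 = concern_index(minor, 1)               # numeral = weight
index2 = concern_index(moderate, 2)
index4 = concern_index(greatest, 4)
index124 = index1 + index2 + index4            # unweighted sum of indexes 1, 2, 4
```

Because each column is centered, every index averages to zero over the sample set; a sample's index is therefore a relative measure of concern against the regional mean, which is what makes contour mapping of the indexes meaningful.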
12 CFR 324.152 - Simple risk weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... (that is, between zero and -1), then E equals the absolute value of RVC. If RVC is negative and less... the lowest applicable risk weight in this section. (1) Zero percent risk weight equity exposures. An....131(d)(2) is assigned a zero percent risk weight. (2) 20 percent risk weight equity exposures. An...
12 CFR 3.52 - Simple risk-weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... this paragraph (b). (1) Zero percent risk weight equity exposures. An equity exposure to a sovereign... International Monetary Fund, an MDB, and any other entity whose credit exposures receive a zero percent risk weight under § 3.32 may be assigned a zero percent risk weight. (2) 20 percent risk weight equity...
589 nm sum-frequency generation laser for the LGS/AO of Subaru Telescope
NASA Astrophysics Data System (ADS)
Saito, Yoshihiko; Hayano, Yutaka; Saito, Norihito; Akagawa, Kazuyuki; Takazawa, Akira; Kato, Mayumi; Ito, Meguru; Colley, Stephen; Dinkins, Matthew; Eldred, Michael; Golota, Taras; Guyon, Olivier; Hattori, Masayuki; Oya, Shin; Watanabe, Makoto; Takami, Hideki; Iye, Masanori; Wada, Satoshi
2006-06-01
We developed a high-power, high-beam-quality 589 nm coherent light source by sum-frequency generation, in order to use it as a laser guide star at the Subaru telescope. Sum-frequency generation is a nonlinear frequency conversion in which two mode-locked Nd:YAG lasers oscillating at 1064 and 1319 nm mix in a nonlinear crystal to generate a wave at the sum frequency. We achieved the qualities required for the laser guide star. The laser power reaches 4.5 W by mixing 15.65 W at 1064 nm and 4.99 W at 1319 nm when the wavelength is tuned to 589.159 nm. The wavelength is controllable to an accuracy of 0.1 pm between 589.060 and 589.170 nm. The power stability holds within 1.3% during seven hours of operation. The transverse mode of the beam is TEM00 and the M2 of the beam is smaller than 1.2. We achieved these qualities through the following technical elements: (1) a simple oscillator construction for high beam quality; (2) synchronization of the mode-locked pulses at 1064 and 1319 nm by controlling the phase difference between the two radio frequencies fed to the acousto-optic mode lockers; (3) precise tunability of the wavelength and spectral bandwidth; and (4) proper selection of the nonlinear optical crystal. We report in this paper how we developed each of these technical elements and how we combined them.
Radiative corrections to the solar lepton mixing sum rule
NASA Astrophysics Data System (ADS)
Zhang, Jue; Zhou, Shun
2016-08-01
The simple correlation among the three lepton flavor mixing angles (θ12, θ13, θ23) and the leptonic Dirac CP-violating phase δ is conventionally called a sum rule of lepton flavor mixing, which may be derived from a class of neutrino mass models with flavor symmetries. In this paper, we consider the solar lepton mixing sum rule θ12 ≈ θ12^ν + θ13 cos δ, where θ12^ν stems from a constant mixing pattern in the neutrino sector and takes the value θ12^ν = 45° for bi-maximal mixing (BM), θ12^ν = tan⁻¹(1/√2) ≈ 35.3° for tri-bimaximal mixing (TBM), or θ12^ν = tan⁻¹(2/(√5+1)) ≈ 31.7° for golden-ratio mixing (GR), and investigate the renormalization-group (RG) running effects on the lepton flavor mixing parameters when this sum rule is assumed at a superhigh-energy scale. For illustration, we work within the framework of the minimal supersymmetric standard model (MSSM) and implement the Bayesian approach to explore the posterior distribution of δ at the low-energy scale, which becomes quite broad when the RG running effects are significant. Moreover, we also discuss the compatibility of the above three mixing scenarios with current neutrino oscillation data, and observe that radiative corrections can increase this compatibility for the BM scenario, resulting in a weaker preference for the TBM and GR ones.
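The sum rule can be checked numerically: given measured values of θ12 and θ13 and a pattern value θ12^ν, it pins down cos δ. The angle values below are illustrative, chosen close to global-fit numbers rather than taken from this paper:

```python
import math

def cos_delta(theta12_deg, theta12_nu_deg, theta13_deg):
    # theta12 ≈ theta12_nu + theta13 * cos(delta)  =>  solve for cos(delta).
    # The ratio of two small angles is the same in degrees or radians.
    return (theta12_deg - theta12_nu_deg) / theta13_deg

tbm = math.degrees(math.atan(1 / math.sqrt(2)))   # ≈ 35.3°, TBM pattern value
c_tbm = cos_delta(33.5, tbm, 8.5)                 # modest negative cos(delta)
c_bm = cos_delta(33.5, 45.0, 8.5)                 # magnitude exceeds 1
```

For the BM value (45°) the required |cos δ| exceeds 1 with these inputs, i.e. the tree-level sum rule cannot be satisfied; this is the tension that the paper's radiative corrections can relax.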
Neuromotor development in relation to birth weight in rabbits.
Harel, S; Shapira, Y; Hartzler, J; Teng, E L; Quiligan, E; Van Der Meulen, J P
1978-01-01
The development of neuromotor patterns in relation to birth weight was studied in the rabbit, a perinatal brain developer. In order to induce intrauterine growth retardation and to increase the number of low-birth-weight rabbits, experimental ischemia to half the fetuses in each doe was achieved by total ligation of approximately 30% of the spiral vessels to the placenta during the last trimester of gestation. Following natural delivery, the rabbit pups were periodically observed for the appearance of eye-opening and the righting reflex, and for the cessation of falling, circling, and dragging of the hind limbs. An index of neuromotor development was assigned to each rabbit by summing the ages (in days) of appearance of each of the neuromotor milestones. An association was found between low birth weight and delayed neuromotor development at 2 weeks of age. The most significant correlation was found between low birth weight and delayed disappearance of falling. The latter may represent incoordination as an expression of cerebellar dysfunction.
Muralidharan, Vignesh; Balasubramani, Pragathi P; Chakravarthy, V Srinivasa; Gilat, Moran; Lewis, Simon J G; Moustafa, Ahmed A
2016-01-01
Experimental data show that perceptual cues can either exacerbate or ameliorate freezing of gait (FOG) in Parkinson's Disease (PD). For example, simple visual stimuli like stripes on the floor can alleviate freezing whereas complex stimuli like narrow doorways can trigger it. We present a computational model of the cognitive and motor cortico-basal ganglia loops that explains the effects of sensory and cognitive processes on FOG. The model simulates strong causative factors of FOG including decision conflict (a disagreement of various sensory stimuli in their association with a response) and cognitive load (complexity of coupling a stimulus with downstream mechanisms that control gait execution). Specifically, the model simulates gait of PD patients (freezers and non-freezers) as they navigate a series of doorways while simultaneously responding to several Stroop word cues in a virtual reality setup. The model is based on an actor-critic architecture of Reinforcement Learning involving Utility-based decision making, where Utility is a weighted sum of Value and Risk functions. The model accounts for the following experimental data: (a) the increased foot-step latency seen in relation to high conflict cues, (b) the high number of motor arrests seen in PD freezers when faced with a complex cue compared to the simple cue, and (c) the effect of dopamine medication on these motor arrests. The freezing behavior arises as a result of addition of task parameters (doorways and cues) and not due to inherent differences in the subject group. The model predicts a differential role of risk sensitivity in PD freezers and non-freezers in the cognitive and motor loops. Additionally this first-of-its-kind model provides a plausible framework for understanding the influence of cognition on automatic motor actions in controls and Parkinson's Disease.
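The Utility construct in the model is a weighted sum of a Value term and a Risk term. A minimal sketch of such a combination (the function name and the single risk-sensitivity coefficient are illustrative assumptions, not the paper's exact equations):

```python
def utility(value: float, risk: float, alpha: float) -> float:
    """Utility as a weighted sum of value and risk;
    alpha < 0 models risk-aversion, alpha > 0 risk-seeking."""
    return value + alpha * risk

# A risk-averse agent (alpha = -0.5) discounts a high-variance option:
safe = utility(value=1.0, risk=0.1, alpha=-0.5)
risky = utility(value=1.1, risk=0.8, alpha=-0.5)
print(safe > risky)  # True: the risk penalty outweighs the value edge
```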
Convex lattice polygons of fixed area with perimeter-dependent weights.
Rajesh, R; Dhar, Deepak
2005-01-01
We study fully convex polygons with a given area and variable perimeter on square and hexagonal lattices. We attach a weight t^m to a convex polygon of perimeter m and show that the sum of weights of all polygons with a fixed area s varies as s^(-θ_conv) e^(K(t)√s) for large s and t less than a critical threshold t_c, where K(t) is a t-dependent constant and θ_conv is a critical exponent which does not change with t. Using heuristic arguments, we find that θ_conv is 1/4 for the square lattice but -1/4 for the hexagonal lattice. The reason for this unexpected nonuniversality of θ_conv is traced to the existence of sharp corners in the asymptotic shape of these polygons.
Charlier, Ruben; Caspers, Maarten; Knaeps, Sara; Mertens, Evelien; Lambrechts, Diether; Lefevre, Johan; Thomis, Martine
2017-03-01
Since both muscle mass and strength performance are polygenic in nature, the current study compared four genetic predisposition scores (GPS) in their ability to predict these phenotypes. Data were gathered within the framework of the first-generation Flemish Policy Research Centre "Sport, Physical Activity and Health" (2002-2004). Results are based on muscle characteristics data of 565 Flemish Caucasians (19-73 yr, 365 men). Skeletal muscle mass (SMM) was determined from bioelectrical impedance. The Biodex dynamometer was used to measure isometric (PTstatic,120°) and isokinetic strength (PTdynamic,60° and PTdynamic,240°), ballistic movement speed (S20%), and muscular endurance (Work) of the knee extensors. Genotyping was done for 153 gene variants, selected on the basis of a literature search and the expression quantitative trait loci of selected genes. Four GPS were designed: a total GPS (based on the sum of all 153 variants, each favorable allele = score 1), a data-driven and a weighted GPS (respectively, the sum of favorable alleles of those variants with significant b-coefficients in stepwise regression (GPSdd), and the sum of these variants weighted with their respective partial r² (GPSw)), and an elastic net GPS (based on the variants selected by an elastic net regularization; GPSen). It was found that the four different GPS models were able to significantly predict up to ~7% of the variance in strength performance. GPSen made the best prediction of SMM and Work. However, this was not the case for the remaining strength performance parameters, where the best predictions were made by GPSdd and GPSw. Copyright © 2017 the American Physiological Society.
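The four scores share one structure: an (optionally weighted) sum over favorable-allele counts. A minimal sketch of the total and weighted variants (the variant names and partial-r² weights are invented for illustration):

```python
def total_gps(allele_counts):
    """Total GPS: unweighted sum of favorable-allele counts (0, 1, or 2)."""
    return sum(allele_counts.values())

def weighted_gps(allele_counts, partial_r2):
    """Weighted GPS: each selected variant's count weighted by its partial r^2."""
    return sum(partial_r2[v] * allele_counts[v] for v in partial_r2)

counts = {"rs1": 2, "rs2": 0, "rs3": 1}   # hypothetical variants
weights = {"rs1": 0.02, "rs3": 0.01}      # hypothetical partial r^2 values
print(total_gps(counts), weighted_gps(counts, weights))
```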
Eriksson, Ulrika; Haglund, Peter; Kärrman, Anna
2017-11-01
Per- and polyfluoroalkyl substances (PFASs) are ubiquitous in sludge and water from waste water treatment plants as a result of their incorporation in everyday products and industrial processes. In this study, we measured several classes of persistent PFASs, precursors, transformation intermediates, and newly identified PFASs in influent and effluent sewage water and sludge from three municipal waste water treatment plants in Sweden, sampled in 2015. For sludge, samples from 2012 and 2014 were analyzed as well. Levels of precursors in sludge exceeded those of perfluoroalkyl carboxylic and sulfonic acids (PFCAs and PFSAs): in 2015 the sum of polyfluoroalkyl phosphoric acid esters (PAPs) was 15-20 ng/g dry weight, the sum of fluorotelomer sulfonic acids (FTSAs) was 0.8-1.3 ng/g, and the sum of perfluorooctane sulfonamides and ethanols ranged from non-detected to 3.2 ng/g. Persistent PFSAs and PFCAs were detected at 1.9-3.9 ng/g and 2.4-7.3 ng/g dry weight, respectively. The influence of precursor compounds was further demonstrated by a substantial observed increase in a majority of the persistent PFCAs and PFSAs in water after waste water treatment. Perfluorohexanoic acid (PFHxA), perfluorooctanoic acid (PFOA), perfluorohexane sulfonic acid (PFHxS), and perfluorooctane sulfonic acid (PFOS) had a net mass increase in all WWTPs, with mean values of 83%, 28%, 37%, and 58%, respectively. The load of precursors and intermediates in influent water and sludge, combined with the net mass increase, supports the hypothesis that degradation of precursor compounds is a significant contributor to PFAS contamination in the environment. Copyright © 2017. Published by Elsevier B.V.
Vogel, Heike; Wolf, Stefanie; Rabasa, Cristina; Rodriguez-Pacheco, Francisca; Babaei, Carina S; Stöber, Franziska; Goldschmidt, Jürgen; DiMarchi, Richard D; Finan, Brian; Tschöp, Matthias H; Dickson, Suzanne L; Schürmann, Annette; Skibicka, Karolina P
2016-11-01
The obesity epidemic continues unabated and currently available pharmacological treatments are not sufficiently effective. Combining gut/brain peptide, GLP-1, with estrogen into a conjugate may represent a novel, safe and potent, strategy to treat diabesity. Here we demonstrate that the central administration of GLP-1-estrogen conjugate reduced food reward, food intake, and body weight in rats. In order to determine the brain location of the interaction of GLP-1 with estrogen, we avail of single-photon emission computed tomography imaging of regional cerebral blood flow and pinpoint a brain site unexplored for its role in feeding and reward, the supramammillary nucleus (SUM) as a potential target of the conjugated GLP-1-estrogen. We confirm that conjugated GLP-1 and estrogen directly target the SUM with site-specific microinjections. Additional microinjections of GLP-1-estrogen into classic energy balance controlling nuclei, the lateral hypothalamus (LH) and the nucleus of the solitary tract (NTS) revealed that the metabolic benefits resulting from GLP-1-estrogen injections are mediated through the LH and to some extent by the NTS. In contrast, no additional benefit of the conjugate was noted on food reward when the compound was microinjected into the LH or the NTS, identifying the SUM as the only neural substrate identified here to underlie the reward reducing benefits of GLP-1 and estrogen conjugate. Collectively we discover a surprising neural substrate underlying food intake and reward effects of GLP-1 and estrogen and uncover a new brain area capable of regulating energy balance and reward. Copyright © 2016 The Author(s). Published by Elsevier Ltd.. All rights reserved.
Pattern Adaptation and Normalization Reweighting.
Westrick, Zachary M; Heeger, David J; Landy, Michael S
2016-09-21
Adaptation to an oriented stimulus changes both the gain and preferred orientation of neural responses in V1. Neurons tuned near the adapted orientation are suppressed, and their preferred orientations shift away from the adapter. We propose a model in which weights of divisive normalization are dynamically adjusted to homeostatically maintain response products between pairs of neurons. We demonstrate that this adjustment can be performed by a very simple learning rule. Simulations of this model closely match existing data from visual adaptation experiments. We consider several alternative models, including variants based on homeostatic maintenance of response correlations or covariance, as well as feedforward gain-control models with multiple layers, and we demonstrate that homeostatic maintenance of response products provides the best account of the physiological data. Adaptation is a phenomenon throughout the nervous system in which neural tuning properties change in response to changes in environmental statistics. We developed a model of adaptation that combines normalization (in which a neuron's gain is reduced by the summed responses of its neighbors) and Hebbian learning (in which synaptic strength, in this case divisive normalization, is increased by correlated firing). The model is shown to account for several properties of adaptation in primary visual cortex in response to changes in the statistics of contour orientation. Copyright © 2016 the authors 0270-6474/16/369805-12$15.00/0.
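The proposed learning rule adjusts divisive-normalization weights so that pairwise response products stay near a homeostatic target. A minimal sketch of that idea (the update rule, constants, and toy drives below are assumptions for illustration, not the paper's fitted model):

```python
def responses(drives, W, sigma=0.1):
    """Divisive normalization: each drive divided by its weighted pool."""
    return [d / (sigma + sum(w * e for w, e in zip(Wrow, drives)))
            for d, Wrow in zip(drives, W)]

def adapt(drives, W, target=0.01, eta=0.02, steps=500):
    """Homeostatic rule: raise a normalization weight when the response
    product exceeds the target, lower it when the product falls below."""
    for _ in range(steps):
        R = responses(drives, W)
        for i in range(len(W)):
            for j in range(len(W)):
                W[i][j] = max(W[i][j] + eta * (R[i] * R[j] - target), 0.0)
    return W

n = 4
drives = [2.0 if i == 0 else 0.5 for i in range(n)]  # unit 0 prefers the adapter
W = [[1.0] * n for _ in range(n)]
before = responses(drives, W)[0]
after = responses(drives, adapt(drives, W))[0]
print(before > after)  # the adapter-tuned response is suppressed
```

The key qualitative behavior matches the abstract: sustained high response products drive normalization weights up, suppressing neurons tuned near the adapter.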
Yoo, Jeong-Ki; Kim, Jong-Hwan
2012-02-01
When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with the modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles, are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for criteria, fuzzy integral is applied to each candidate gaze direction for global evaluation. For the effective dynamic obstacle avoidance, partial evaluation functions about self-localization error and surrounding obstacles are also used for generating virtual dynamic obstacle for the modified-univector field method which generates the path and velocity of robot toward the next waypoint. The proposed architecture is verified through the comparison with the conventional weighted sum-based approach with the simulations using a developed simulator for HanSaRam-IX (HSR-IX).
Belury, Martha A; Cole, Rachel M; Bailey, Brittney E; Ke, Jia-Yu; Andridge, Rebecca R; Kiecolt-Glaser, Janice K
2016-05-01
Supplementation with linoleic acid (LA; 18:2Ω6)-rich oils increases lean mass and decreases trunk adipose mass in people. Erythrocyte fatty acids reflect the dietary pattern of fatty acid intake and the endogenous metabolism of fatty acids. The aim of this study is to determine the relationship of erythrocyte LA with aspects of body composition, insulin resistance, and inflammation. Additionally, we tested for relationships of oleic acid (OA) and the sum of long-chain omega-three fatty acids (LC-Ω3-SUM) with the same outcomes. Men and women (N = 139) were evaluated for body composition, insulin resistance, serum inflammatory markers (IL-6 and c-reactive protein, CRP), and erythrocyte fatty acid composition after an overnight fast. LA was positively related to appendicular lean mass/body mass index and inversely related to trunk adipose mass. Additionally, LA was inversely related to insulin resistance and IL-6. While there was an inverse relationship between OA or LC-Ω3-SUM and markers of inflammation, there were no relationships between OA or LC-Ω3-SUM and body composition or HOMA-IR. Higher erythrocyte LA was associated with improved body composition, insulin resistance, and inflammation; erythrocyte OA and LC-Ω3-SUM were unrelated to body composition and insulin resistance. There is much controversy about whether all unsaturated fats have the same benefits for metabolic syndrome and weight gain. We sought to test the strength of the relationships between three unsaturated fatty acids in erythrocytes and measurements of body composition, metabolism, and inflammation in healthy adults. Linoleic acid, but not oleic acid or the sum of long-chain omega-3 fatty acids, was associated with increased appendicular lean mass and decreased trunk adipose mass and insulin resistance. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
High performance pipelined multiplier with fast carry-save adder
NASA Technical Reports Server (NTRS)
Wu, Angus
1990-01-01
A high-performance pipelined multiplier is described. Its high performance results from the fast carry-save adder basic cell, which has a simple structure and is suitable for the Gate Forest semi-custom environment. The carry-save adder computes the sum and carry within two gate delays. Results show that the proposed adder can operate at 200 MHz in a 2-micron CMOS process; better performance is expected in a Gate Forest realization.
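A carry-save adder cell takes three input bits and produces a sum bit and a saved carry bit without propagating the carry, which is what lets each stage settle in two gate delays. A bit-level sketch of the cell's logic:

```python
def carry_save_cell(a, b, c):
    """Full-adder cell in carry-save form: sum = a XOR b XOR c,
    carry = majority(a, b, c); the carry is saved, not propagated."""
    s = a ^ b ^ c
    carry = (a & b) | (b & c) | (a & c)
    return s, carry

# Invariant: s + 2*carry equals the arithmetic sum of the three input bits.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, carry = carry_save_cell(a, b, c)
            assert s + 2 * carry == a + b + c
print("all 8 input combinations check out")
```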
Rayleigh-Bloch waves trapped by a periodic perturbation: exact solutions
NASA Astrophysics Data System (ADS)
Merzon, A.; Zhevandrov, P.; Romero Rodríguez, M. I.; De la Paz Méndez, J. E.
2018-06-01
Exact solutions describing the Rayleigh-Bloch waves for the two-dimensional Helmholtz equation are constructed in the case when the refractive index is a sum of a constant and a small amplitude function which is periodic in one direction and of finite support in the other. These solutions are quasiperiodic along the structure and exponentially decay in the orthogonal direction. A simple formula for the dispersion relation of these waves is obtained.
Josephson A/D Converter Development.
1981-10-01
... by Zappe and Landman [20]. They conclude that the simple model of the Josephson effect is applicable up to frequencies at least as high as 300 GHz. B. Time-Domain Experiments. The early high-frequency experiments with Josephson devices suggested their use as very fast logic switches ... exactly as for the phenomenological model. Capacitive current paths dominate the circuit at high frequencies; the tunneling current is the sum of two ...
ECON-KG: A Code for Computation of Electrical Conductivity Using Density Functional Theory
2017-10-01
... is presented. Details of the implementation and instructions for execution are presented, and an example calculation of the frequency-dependent ... shown to depend on carbon content, and electrical conductivity models have become a requirement for input into continuum-level simulations being ... The frequency-dependent electrical conductivity is computed as a weighted sum over k-points: σ(ω) = Σ_k W(k) σ_k(ω), (2) where W(k) is ...
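The k-point average in Eq. (2) is a plain weighted sum over the Brillouin-zone mesh. A minimal numerical sketch (the per-k conductivities and weights below are invented numbers, not DFT output):

```python
def conductivity(weights, sigma_k):
    """sigma(omega) = sum_k W(k) * sigma_k(omega), at one frequency omega."""
    assert abs(sum(weights) - 1.0) < 1e-12, "k-point weights must sum to 1"
    return sum(w * s for w, s in zip(weights, sigma_k))

# Hypothetical 3-k-point mesh at a single frequency:
W = [0.5, 0.25, 0.25]
sk = [2.0, 4.0, 6.0]
print(conductivity(W, sk))  # 0.5*2 + 0.25*4 + 0.25*6 = 3.5
```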
Electrostatic interaction energy and factor 1.23
NASA Astrophysics Data System (ADS)
Rubčić, A.; Arp, H.; Rubčić, J.
The factor F ≈ 1.23 was originally found in the redshifts of quasars. Recently, it has been found in very different physical phenomena: the lifetime of muonium, the masses of elementary particles (leptons, quarks, ...), the correlation of atomic weight (A) with atomic number (Z), and the correlation of the sum of the masses of all orbiting bodies with the mass of the central body in gravitational systems.
Interface Evaluation for Open System Architectures
2014-03-01
... maker (SDM) is responsible for balancing all of the influences of the IPT when making decisions. Coalescing the IPT perspectives for a single IIM ... factors are considered in IIM decisions and that decisions are consistent with the preferences of the SDM, ultimately leading to a balance of schedule ... board to perform ranking and weighting determinations. Rank sum, rank exponent, rank reciprocal, and ROC leverage a subjective assessment of the ...
David Frankman; Brent W. Webb; Bret W. Butler
2007-01-01
Thermal radiation emission from a simulated black flame surface to a fuel bed is analyzed by a ray-tracing technique, tracking emission from points along the flame to locations along the fuel bed while accounting for absorption by environmental water vapor in the intervening medium. The Spectral Line Weighted-sum-of-gray-gases approach was adopted for treating the...
ERIC Educational Resources Information Center
Kaycheng, Soh
2015-01-01
World university ranking systems use the weight-and-sum approach to combine indicator scores into overall scores on which the universities are then ranked. This approach assumes that the indicators all independently contribute to the overall score in the specified proportions. In reality, this assumption is doubtful as the indicators tend to…
Prioritizing the Components of Vulnerability: A Genetic Algorithm Minimization of Flood Risk
NASA Astrophysics Data System (ADS)
Bongolan, Vena Pearl; Ballesteros, Florencio; Baritua, Karessa Alexandra; Junne Santos, Marie
2013-04-01
We define a flood-resistant city as an optimal arrangement of communities according to their traits, with the goal of minimizing flooding vulnerability via a genetic algorithm. We prioritize the different components of flooding vulnerability, giving each component a weight and thus expressing vulnerability as a weighted sum. This serves as the fitness function for the genetic algorithm. We also allowed non-linear interactions among related but independent components, viz., poverty and mortality rate, and literacy and radio/TV penetration. The designs produced reflect the relative importance of the components, and we observed a synchronicity between the interacting components, giving us a more consistent design.
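The fitness function described is a weighted sum over vulnerability components plus terms for the interacting pairs. A minimal sketch (the component names, weights, and multiplicative interaction form are illustrative assumptions, not the paper's calibrated values):

```python
def vulnerability(components, weights, interactions=()):
    """Weighted-sum fitness with optional pairwise interaction terms."""
    score = sum(weights[k] * components[k] for k in weights)
    for k1, k2, w in interactions:  # non-linear couplings between pairs
        score += w * components[k1] * components[k2]
    return score

c = {"poverty": 0.6, "mortality": 0.4, "literacy": 0.7, "radio_tv": 0.5}
w = {"poverty": 0.4, "mortality": 0.3, "literacy": 0.2, "radio_tv": 0.1}
pairs = [("poverty", "mortality", 0.1), ("literacy", "radio_tv", -0.1)]
print(vulnerability(c, w, pairs))  # the quantity the GA minimizes
```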
Optimization of Compton-suppression and summing schemes for the TIGRESS HPGe detector array
NASA Astrophysics Data System (ADS)
Schumaker, M. A.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.
2007-04-01
Methods of optimizing the performance of an array of Compton-suppressed, segmented HPGe clover detectors have been developed which rely on the physical position sensitivity of both the HPGe crystals and the Compton-suppression shields. These relatively simple analysis procedures promise to improve the precision of experiments with the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS). Suppression schemes will improve the efficiency and peak-to-total ratio of TIGRESS for high γ-ray multiplicity events by taking advantage of the 20-fold segmentation of the Compton-suppression shields, while the use of different summing schemes will improve results for a wide range of experimental conditions. The benefits of these methods are compared for many γ-ray energies and multiplicities using a GEANT4 simulation, and the optimal physical configuration of the TIGRESS array under each set of conditions is determined.
Method of detecting system function by measuring frequency response
NASA Technical Reports Server (NTRS)
Morrison, John L. (Inventor); Morrison, William H. (Inventor); Christophersen, Jon P. (Inventor)
2012-01-01
Real-time battery impedance spectrum is acquired using a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum using a one-time record that enables battery diagnostics. An excitation current to a battery is a sum of equal-amplitude sine waves at frequencies that are octave harmonics spread over a range of interest. A sample frequency is also octave and harmonically related to all frequencies in the sum. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with the average deleted, is the impedance of the battery in the time domain. Since the excitation frequencies are known and octave and harmonically related, a simple algorithm, FST, processes the time record by rectifying relative to the sine and cosine of each frequency. Another algorithm yields real and imaginary components for each frequency.
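The processing step amounts to synchronous detection: correlate the time record against sin and cos at each known excitation frequency and average. A minimal sketch of recovering one component from a sum of harmonically related sines (a toy signal, not the patented FST pipeline itself):

```python
import math

def detect(samples, dt, freq):
    """Correlate a time record against sin/cos at a known frequency;
    over an integer number of periods this isolates that component."""
    n = len(samples)
    re = 2.0 / n * sum(x * math.cos(2 * math.pi * freq * i * dt)
                       for i, x in enumerate(samples))
    im = 2.0 / n * sum(x * math.sin(2 * math.pi * freq * i * dt)
                       for i, x in enumerate(samples))
    return re, im

# Sum of octave-harmonic sines, sampled over one full period of the lowest:
fs = 1024.0
t = [i / fs for i in range(int(fs))]
sig = [math.sin(2 * math.pi * 1.0 * ti) + 0.5 * math.sin(2 * math.pi * 2.0 * ti)
       for ti in t]
re, im = detect(sig, 1 / fs, 2.0)
print(round(im, 3))  # amplitude of the 2 Hz component, 0.5
```

Because the frequencies are octave harmonics and the record spans whole periods, the components are orthogonal and each correlation cleanly picks out one amplitude.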
NASA Astrophysics Data System (ADS)
Hu, Jie; Luo, Meng; Jiang, Feng; Xu, Rui-Xue; Yan, YiJing
2011-06-01
Padé spectrum decomposition is an optimal sum-over-poles expansion scheme for the Fermi and Bose functions [J. Hu, R. X. Xu, and Y. J. Yan, J. Chem. Phys. 133, 101106 (2010); 10.1063/1.3484491]. In this work, we report two additional members of this family, from which the best among all sum-over-poles methods can be chosen for different cases of application. Methods are developed for determining these three Padé spectrum decomposition expansions at machine precision via simple algorithms. We exemplify the applications of the present development with the optimal construction of hierarchical equations-of-motion formulations for nonperturbative quantum dissipation and quantum transport dynamics. Numerical demonstrations are given for two systems: one is the transient transport current through an interacting quantum-dot system, together with the involved high-order co-tunneling dynamics; the other is the non-Markovian dynamics of a spin-boson system.
Fractional compartmental models and multi-term Mittag-Leffler response functions.
Verotta, Davide
2010-04-01
Systems of fractional differential equations (SFDE) have been increasingly used to represent physical and control systems, and have recently been proposed for use in pharmacokinetics (PK) (J Pharmacokinet Pharmacodyn 36:165-178, 2009; J Pharmacokinet Pharmacodyn, 2010). We contribute to the development of a theory for the use of SFDE in PK by, first, further clarifying the nature of systems of FDE, and in particular pointing out the distinction and properties of commensurate versus non-commensurate ones. The second purpose is to show that for both types of systems, relatively simple response functions can be derived which satisfy the requirements to represent single-input/single-output PK experiments. The response functions are composed of sums of single-parameter (for commensurate) or two-parameter (for non-commensurate) Mittag-Leffler functions, and establish a direct correspondence with the familiar sums of exponentials used in PK.
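The two-parameter Mittag-Leffler function E_{α,β}(z) = Σ_{k≥0} z^k / Γ(αk + β) generalizes the exponential (E_{1,1}(z) = e^z), which is why sums of these functions play the role that sums of exponentials play in classical PK. A minimal series-evaluation sketch, adequate only for small |z|:

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=60):
    """Truncated power series for E_{alpha,beta}(z); fine for small |z|,
    not a production evaluator for large arguments."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity check: E_{1,1} reduces to the ordinary exponential.
print(abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12)  # True
```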
12 CFR 3.152 - Simple risk weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... applicable risk weight in this section. (1) Zero percent risk weight equity exposures. An equity exposure to... assigned a zero percent risk weight. (2) 20 percent risk weight equity exposures. An equity exposure to a... equals zero. If RVC is negative and greater than or equal to -1 (that is, between zero and -1), then E...
Coutand, Catherine; Chevolot, Malia; Lacointe, André; Rowe, Nick; Scotti, Ivan
2010-02-01
In rain forests, sapling survival is highly dependent on the regulation of trunk slenderness (height/diameter ratio): shade-intolerant species have to grow in height as fast as possible to reach the canopy but also have to withstand mechanical loadings (wind and their own weight) to avoid buckling. Recent studies suggest that mechanosensing is essential to control tree dimensions and stability-related morphogenesis. Differences in species slenderness have been observed among rainforest trees; the present study thus investigates whether species with different slenderness and growth habits exhibit differences in mechanosensitivity. Recent studies have led to a model of mechanosensing (sum-of-strains model) that predicts a quantitative relationship between the applied sum of longitudinal strains and the plant's responses in the case of a single bending. Saplings of five different neotropical species (Eperua falcata, E. grandiflora, Tachigali melinonii, Symphonia globulifera and Bauhinia guianensis) were subjected to a regimen of controlled mechanical loading phases (bending) alternating with still phases over a period of 2 months. Mechanical loading was controlled in terms of strains and the five species were subjected to the same range of sums of strains. The application of the sum-of-strains model led to a dose-response curve for each species. Dose-response curves were then compared between the tested species. The sum-of-strains model applied in the case of multiple bending as long as the bending frequency was low. A comparison of dose-response curves for each species demonstrated differences in the stimulus threshold, suggesting two groups of responses among the species. Interestingly, the liana species B. guianensis exhibited a higher threshold than the other Leguminosae species tested. This study provides a conceptual framework to study variability in plant mechanosensing and demonstrated interspecific variability in mechanosensing.
NASA Astrophysics Data System (ADS)
Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua
2018-01-01
Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation data using the dynamic Bayesian model averaging (BMA) algorithm. The blended experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibrated sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. Thus, the merging data were produced by weighted sums of the individuals over the plateau. The dynamic BMA approach showed better performance with a smaller root-mean-square error (RMSE) of 6.77 mm/day, higher correlation coefficient of 0.592, and closer Euclid value of 0.833, compared to the individuals at 15 validated sites. Moreover, BMA has proven to be more robust in terms of seasonality, topography, and other parameters than traditional ensemble methods including simple model averaging (SMA) and one-outlier removed (OOR). Error analysis between BMA and the state-of-the-art IMERG in the summer of 2014 further proved that the performance of BMA was superior with respect to multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data in regions with limited gauges.
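After the EM step, the blended estimate at a grid cell is simply the weight-averaged member value. A minimal sketch of that final combination (the member estimates and weights below are invented, not results from the study):

```python
def bma_blend(members, weights):
    """BMA point forecast: weighted sum of member estimates,
    with the weights normalized to sum to 1."""
    total = sum(weights)
    return sum(w / total * x for w, x in zip(weights, members))

# Hypothetical daily rain estimates (mm) from the four satellite products
# (3B42RT, 3B42V7, CMORPH, PERSIANN-CDR) and their BMA weights:
rain = [5.2, 4.1, 6.3, 3.8]
w = [0.4, 0.3, 0.2, 0.1]
print(bma_blend(rain, w))
```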
Jenkinson, C; Mant, J; Carter, J; Wade, D; Winner, S
2000-03-01
To assess the validity of the London handicap scale (LHS) using a simple unweighted scoring system compared with traditional weighted scoring, 323 patients admitted to hospital with acute stroke were followed up by interview 6 months after their stroke as part of a trial looking at the impact of a family support organiser. Outcome measures included the six-item LHS, the Dartmouth COOP charts, the Frenchay activities index, the Barthel index, and the hospital anxiety and depression scale. Patients' handicap score was calculated both using the standard procedure (with weighting) for the LHS, and using a simple summation procedure without weighting (U-LHS). Construct validity of both LHS and U-LHS was assessed by testing their correlations with the other outcome measures. Cronbach's alpha for the LHS was 0.83. The U-LHS was highly correlated with the LHS (r=0.98). Correlation of U-LHS with the other outcome measures gave very similar results to correlation of LHS with these measures. Simple summation scoring of the LHS does not lead to any change in the measurement properties of the instrument compared with standard weighted scoring. Unweighted scores are easier to calculate and interpret, so it is recommended that these are used.
Multisensor Arrays for Greater Reliability and Accuracy
NASA Technical Reports Server (NTRS)
Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff
2004-01-01
Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. 
The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that approximate each other sufficiently closely to constitute a majority for the purpose of quantifying reliability. This criterion is, simply, that if there do not exist at least three sensors having weights greater than a prescribed minimum acceptable value, then the array as a whole is deemed to have failed.
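The abstract does not spell out the exact formula that turns the pairwise ratios into reliability weights, so the sketch below is one plausible reading, not the patented algorithm: a sensor whose ratios to all other sensors stay near 1 earns a large weight, and an optional accumulator argument implements the running-sum "learning" variant. All numbers are illustrative:

```python
import numpy as np

def msa_reading(readings, prior_weights=None, eps=1e-9):
    """One fusion step of a multisensor-array (MSA) style algorithm.

    Reliability here is a guessed scheme: for each sensor, sum the
    deviations of its ratios to every other sensor from 1; small total
    deviation -> high weight. prior_weights, if given, accumulates
    reliability over time (the 'learning' variant)."""
    r = np.asarray(readings, dtype=float)
    dev = np.abs(r[:, None] / r[None, :] - 1.0).sum(axis=1)
    w = 1.0 / (dev + eps)
    if prior_weights is not None:
        w = w + np.asarray(prior_weights, dtype=float)
    w = w / w.sum()
    return float(w @ r), w

# Four nominally identical sensors; the last one is failing badly
value, weights = msa_reading([10.0, 10.1, 9.9, 50.0])
```

The outlier receives the smallest weight, so the fused reading stays near the majority value; with the accumulator, a sensor that repeatedly disagrees sees its influence shrink over successive calls, matching the "learning" behavior described above.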
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data and linear spot segments are identified from the threshold pixel data selected. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of first and last pixels of a linear segment present in the captured image with respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear data segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
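As an illustration of the per-segment statistics mentioned above, the helper below is hypothetical (its name and arguments are not from the patent): it computes the pixel sum and the x-weighted pixel sum for one linear segment, and their ratio recovers the segment's intensity-weighted x-centroid:

```python
def segment_stats(x_start, values):
    """Per-segment data: plain pixel sum and x-weighted pixel sum.

    x_start: x-location of the segment's first pixel
    values:  threshold pixel values along the segment
    """
    total = sum(values)
    weighted = sum((x_start + i) * v for i, v in enumerate(values))
    return total, weighted

# A 3-pixel segment starting at x = 4 with values 10, 20, 10
total, weighted = segment_stats(4, [10, 20, 10])
centroid_x = weighted / total   # intensity-weighted x-centroid
```

Storing only these sums (rather than every pixel) keeps the per-segment memory footprint constant, which is the point of the patent's compact representation.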
A simple smoothness indicator for the WENO scheme with adaptive order
NASA Astrophysics Data System (ADS)
Huang, Cong; Chen, Li Li
2018-01-01
The fifth order WENO scheme with adaptive order is competent for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable to the sum of the costs for the three third order linear reconstructions, and is thus too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. In order to overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.
Modeling the radiation pattern of LEDs.
Moreno, Ivan; Sun, Ching-Cherng
2008-02-04
Light-emitting diodes (LEDs) come in many varieties and with a wide range of radiation patterns. We propose a general, simple but accurate analytic representation for the radiation pattern of the light emitted from an LED. To accurately render both the angular intensity distribution and the irradiance spatial pattern, a simple phenomenological model takes into account the emitting surfaces (chip, chip array, or phosphor surface), and the light redirected by both the reflecting cup and the encapsulating lens. Mathematically, the pattern is described as the sum of a maximum of two or three Gaussian or cosine-power functions. The resulting equation is widely applicable for any kind of LED of practical interest. We accurately model a wide variety of radiation patterns from several world-class manufacturers.
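A toy version of such a representation, summing Gaussian and cosine-power lobes; the coefficients below are invented (the paper fits the amplitudes, centers, and exponents to each LED):

```python
import math

def led_intensity(theta_deg, terms):
    """Angular intensity as a sum of Gaussian and cosine-power lobes.

    terms: list of ('gauss', amplitude, center_deg, width_deg)
           or   ('cospow', amplitude, exponent) tuples.
    """
    total = 0.0
    for t in terms:
        if t[0] == 'gauss':
            _, a, c, w = t
            total += a * math.exp(-((theta_deg - c) / w) ** 2)
        else:  # cosine-power lobe (Lambertian when exponent == 1)
            _, a, m = t
            total += a * math.cos(math.radians(theta_deg)) ** m
    return total

# A Lambertian-like main lobe plus a side lobe from the reflecting cup
pattern = [('cospow', 1.0, 1), ('gauss', 0.3, 40.0, 15.0)]
on_axis = led_intensity(0.0, pattern)   # close to 1.0: main lobe dominates
```

With two or three such terms, both narrow-beam and batwing-type patterns can be represented, which is why the form generalizes across LED package styles.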
Slow Relaxation in Anderson Critical Systems
NASA Astrophysics Data System (ADS)
Choi, Soonwon; Yao, Norman; Choi, Joonhee; Kucsko, Georg; Lukin, Mikhail
2016-05-01
We study the single particle dynamics in disordered systems with long range hopping, focusing on the critical cases, i.e., the hopping amplitude decays as 1/r^d in d dimensions. We show that with strong on-site potential disorder, the return probability of the particle decays as a power-law in time. As the on-site potential disorder decreases, the temporal profile smoothly changes from a simple power-law to a sum of multiple power-laws with exponents ranging from 0 to νmax. We analytically compute the decay exponents using a simple resonance counting argument, which quantitatively agrees with exact numerical results. Our result implies that the dynamics in Anderson critical systems are dominated by resonances. Harvard-MIT CUA, Kwanjeong Educational Fellowship, AFOSR MURI, Samsung Scholarship.
Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations
Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán
2016-01-01
Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for the combination of data with different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over the simple averaging of echoes when state-of-the-art group-level random-effect analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition methods at 3T. Both resting-state and task-based data were subjected to a standard image pre-processing pipeline. Subsequently the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD sensitivity weighting, (3) temporal-signal-to-noise ratio weighting and (4) temporal BOLD sensitivity weighting. Our results clearly show that the simple averaging of data with the different echoes is sufficient. Advanced echo combination methods may provide advantages on a single-subject level but when considering random-effects group level statistics they provide no benefit regarding sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of clear advantages may be that apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or whether subject-level or group-level analyses are planned. PMID:28018165
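The four strategies differ only in how the voxelwise weights are chosen; the combination itself is always a weighted average of the two echoes. A sketch with invented signal values, showing the tSNR-weighting case (weight = temporal mean / temporal standard deviation); the other strategies would simply substitute different weight formulas:

```python
import numpy as np

def combine_echoes(echo1, echo2, w1, w2):
    """Weighted average of two echo time series.
    The choice of w1, w2 (equal, BOLD sensitivity, tSNR, ...) is the
    variable the study compares; the combination rule is the same."""
    w1 = np.asarray(w1, dtype=float)
    w2 = np.asarray(w2, dtype=float)
    return (w1 * np.asarray(echo1) + w2 * np.asarray(echo2)) / (w1 + w2)

# Invented 3-timepoint series for one voxel at two echo times
e1 = np.array([100.0, 102.0, 98.0])
e2 = np.array([50.0, 51.0, 49.0])

# tSNR weighting: weight = mean over time / std over time
w1, w2 = e1.mean() / e1.std(), e2.mean() / e2.std()
combined = combine_echoes(e1, e2, w1, w2)
```

In this contrived example the two echoes happen to have equal tSNR, so the result coincides with the simple average; the study's finding is that at the group level this coincidence barely matters anyway.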
Multi-source apportionment of polycyclic aromatic hydrocarbons using simultaneous linear equations
NASA Astrophysics Data System (ADS)
Marinaite, Irina; Semenov, Mikhail
2014-05-01
A new approach to identify multiple sources of polycyclic aromatic hydrocarbons (PAHs) and to evaluate the source contributions to atmospheric deposition of particulate PAHs is proposed. The approach is based on differences in concentrations of sums of PAHs with the same molecular weight among the sources. The data on PAH accumulation in snow as well as the source profiles were used for calculations. Contributions of an aluminum production plant, oil-fired central heating boilers, and residential wood and coal combustion were calculated using linear mixing models. The concentrations of PAH pairs such as benzo[b]fluoranthene + benzo[k]fluoranthene and benzo[g,h,i]perylene + indeno[1,2,3-c,d]pyrene, normalized to benz[a]anthracene + chrysene, were used as tracers in the mixing equations. The results obtained using ratios of sums of PAHs were compared with those obtained using molecular diagnostic ratios such as benz[a]anthracene/chrysene and benzo[g,h,i]perylene/indeno[1,2,3-c,d]pyrene. It was shown that the results obtained using diagnostic ratios as tracers are less reliable than the results obtained using ratios of sums of PAHs. Funding was provided by Siberian Branch of Russian Academy of Sciences grant No. 8 (2012-2014).
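The mixing model is a small system of simultaneous linear equations: each observed tracer ratio equals the source-profile ratios times the unknown source contributions, plus a mass-balance row forcing the contributions to sum to 1. A sketch with invented profile and mixture numbers (the study's actual profiles are not reproduced here):

```python
import numpy as np

# Rows: two tracer ratios plus the mass-balance constraint.
# Columns: aluminum plant, oil-fired boilers, wood/coal combustion.
# All values are invented for illustration.
profiles = np.array([
    [1.0, 0.4, 0.2],   # (BbF+BkF)/(BaA+Chr) in each source
    [0.8, 0.1, 0.5],   # (BghiP+IcdP)/(BaA+Chr) in each source
    [1.0, 1.0, 1.0],   # contributions must sum to 1
])
mixture = np.array([0.50, 0.47, 1.0])   # ratios observed in snow (invented)

# Solve profiles @ contributions = mixture in the least-squares sense
contributions, *_ = np.linalg.lstsq(profiles, mixture, rcond=None)
```

With more tracers than sources the same call gives the least-squares apportionment; non-negativity constraints, if needed, require a constrained solver instead.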
Newton’s Rotating Water Bucket: A Simple Model
2013-01-01
surface of the water must be an equipotential relative to the sum of the gravitational and centrifugal potential energies [9].) The value of z0 can be...twisted and the bucket is then released, it begins to spin and the surface of the water acquires a paraboloidal shape. In this paper, the parabolic...adopts a curved surface . Ernst Mach, for example, postulated that the parabolic shape must be due to the existence of matter in the universe
NASA Astrophysics Data System (ADS)
2010-02-01
With their passion for analysing the world by breaking it down into ever-smaller pieces, most physicists are "reductionists" at heart. Whether through tradition or instinct, our natural inclination is to reduce matter first to molecules and then to atoms and on to nuclei, nucleons, quarks and beyond. But this approach, while astonishingly successful in terms of understanding the fundamental particles and forces of nature, does not always work. In other words, the whole can often be more than the sum of the parts.
Orbits on a Concave Frictionless Surface
2007-01-01
resistance. Because mechanical energy is conserved (for the system of ball and earth), the sum of the kinetic (K) and gravitational potential (U) energies...effects occur when a ball rolls without slipping on the surface of a rotating flat plate,7 on the inner surface of a vertical cylinder such as a golf...The simple example of a ball in vertical freefall illustrates why this is necessary and how to perform the conversion. The method is then applied to
The Complexity of Quantitative Concurrent Parity Games
2004-11-01
for each player. In this paper we study only zero-sum games [20, 11], where the objectives of the two players are strictly competitive. In other words...Aided Verification, volume 1102 of LNCS, pages 75–86. Springer, 1996. [14] R.J. Lipton, E. Markakis, and A. Mehta. Playing large games using simple...strategies. In EC 03: Electronic Commerce, pages 36–41. ACM Press, 2003. [15] D.A. Martin. The determinacy of Blackwell games. The Journal of Symbolic
2016-05-26
York, NY: Ballantine Books, 1991), 27-58. Examples include Sun Tzu, Genghis Khan, Napoleon, the post-WWI Soviet military, and the WWII-era German...it to a position of advantage.26 In simple terms, maneuver warfare is summed up by the idea of a force dichotomy, explained by Sun Tzu as separate...“ordinary” and “extraordinary” forces within the army. Sun Tzu said that in battle, the ordinary force should seek to pin down the enemy’s front line
Castagna, Antonella; Csepregi, Kristóf; Neugart, Susanne; Zipoli, Gaetano; Večeřová, Kristýna; Jakab, Gábor; Jug, Tjaša; Llorens, Laura; Martínez-Abaigar, Javier; Martínez-Lüscher, Johann; Núñez-Olivera, Encarnación; Ranieri, Annamaria; Schoedl-Hummel, Katharina; Schreiner, Monika; Teszlák, Péter; Tittmann, Susanne; Urban, Otmar; Verdaguer, Dolors; Jansen, Marcel A K; Hideg, Éva
2017-11-01
A 2-year study explored metabolic and phenotypic plasticity of sun-acclimated Vitis vinifera cv. Pinot noir leaves collected from 12 locations across a 36.69-49.98°N latitudinal gradient. Leaf morphological and biochemical parameters were analysed in the context of meteorological parameters and the latitudinal gradient. We found that leaf fresh weight and area were negatively correlated with both global and ultraviolet (UV) radiation, cumulated global radiation being a stronger correlator. Cumulative UV radiation (sumUVR) was the strongest correlator with most leaf metabolites and pigments. Leaf UV-absorbing pigments, total antioxidant capacities, and phenolic compounds increased with increasing sumUVR, whereas total carotenoids and xanthophylls decreased. Despite this reallocation of metabolic resources from carotenoids to phenolics, an increase in xanthophyll-cycle pigments (the sum of the amounts of three xanthophylls: violaxanthin, antheraxanthin, and zeaxanthin) with increasing sumUVR indicates active, dynamic protection for the photosynthetic apparatus. In addition, increased amounts of flavonoids (quercetin glycosides) and constitutive β-carotene and α-tocopherol pools provide antioxidant protection against reactive oxygen species. However, rather than a continuum of plant acclimation responses, principal component analysis indicates clusters of metabolic states across the explored 1,500-km-long latitudinal gradient. This study emphasizes the physiological component of plant responses to latitudinal gradients and reveals the physiological plasticity that may act to complement genetic adaptations. © 2017 John Wiley & Sons Ltd.
Verge and Foliot Clock Escapement: A Simple Dynamical System
NASA Astrophysics Data System (ADS)
Denny, Mark
2010-09-01
The earliest mechanical clocks appeared in Europe in the 13th century. From about 1250 CE to 1670 CE, these simple clocks consisted of a weight suspended from a rope or chain that was wrapped around a horizontal axle. To tell time, the weight must fall with a slow uniform speed, but, under the action of gravity alone, such a suspended weight would accelerate. To prevent this acceleration, an escapement mechanism was required. The best such escapement mechanism was called the verge and foliot escapement, and it was so successful that it lasted until about 1800 CE. These simple weight-driven clocks with verge and foliot escapements were accurate enough to mark the hours but not minutes or seconds. From 1670, significant improvements were made (principally by introducing pendulums and the newly invented anchor escapement) that justified the introduction of hands to mark minutes, and then seconds. By the end of the era of mechanical clocks, in the first half of the 20th century, these much-studied and much-refined machines were accurate to a millisecond a day.
Determinations of Vus using inclusive hadronic τ decay data
NASA Astrophysics Data System (ADS)
Maltman, Kim; Hudspith, Renwick James; Lewis, Randy; Izubuchi, Taku; Ohki, Hiroshi; Zanotti, James M.
2016-08-01
Two methods for determining |Vus| employing inclusive hadronic τ decay data are discussed. The first is the conventional flavor-breaking sum rule determination whose usual implementation produces results ˜ 3σ low compared to three-family unitary expectations. The second is a novel approach combining experimental strange hadronic τ distributions with lattice light-strange current-current two-point function data. Preliminary explorations of the latter show the method promises |Vus| determinations competitive with those from Kℓ3 and Γ[Kμ2]/Γ[πμ2]. For the former, systematic issues in the conventional implementation are investigated. Unphysical dependences of |Vus| on the choice of sum rule weight, w, and upper limit, s0, of the weighted experimental spectral integrals are observed, the source of these problems identified and a new implementation which overcomes these problems developed. Lattice results are shown to provide a tool for quantitatively assessing truncation uncertainties for the slowly converging D = 2 OPE series. The results for |Vus| from this new implementation are shown to be free of unphysical w- and s0-dependences, and ˜ 0.0020 higher than those produced by the conventional implementation. With preliminary new Kπ branching fraction results as input, we find |Vus| in excellent agreement with that obtained from Kℓ3, and compatible within errors with expectations from three-family unitarity.
Linear error analysis of slope-area discharge determinations
Kirby, W.H.
1987-01-01
The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. ?? 1987.
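In matrix form, the first-order (Taylor-series) result is Var(Q) ≈ gᵀΣg, where g holds the partial derivatives of the discharge formula with respect to each observed quantity and Σ is the covariance matrix of the observational errors. A sketch with invented sensitivities and covariances, not the paper's actual numbers:

```python
import numpy as np

def discharge_error_variance(sensitivities, covariance):
    """First-order error variance of a computed quantity:
    Var(Q) ~ g^T . Sigma . g, i.e. a weighted sum of error covariances
    with weights g_i = dQ/dx_i (the Taylor-series linearization)."""
    g = np.asarray(sensitivities, dtype=float)
    S = np.asarray(covariance, dtype=float)
    return float(g @ S @ g)

# Two correlated inputs, e.g. water-surface fall and Manning's n
# (illustrative sensitivities and covariances only)
g = [2.0, -1.0]
cov = [[0.04, 0.01],
       [0.01, 0.09]]
var_q = discharge_error_variance(g, cov)
# 4*0.04 + 2*(2)(-1)*0.01 + 1*0.09 = 0.16 - 0.04 + 0.09 = 0.21
```

The off-diagonal terms are what make correlated errors matter: here the positive covariance between the two inputs partially cancels because their sensitivities have opposite signs.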
NASA Astrophysics Data System (ADS)
S. Al-Kaltakchi, Musab T.; Woo, Wai L.; Dlay, Satnam; Chambers, Jonathon A.
2017-12-01
In this study, a speaker identification system is considered consisting of a feature extraction stage which utilizes both power normalized cepstral coefficients (PNCCs) and Mel frequency cepstral coefficients (MFCC). Normalization is applied by employing cepstral mean and variance normalization (CMVN) and feature warping (FW), together with acoustic modeling using a Gaussian mixture model-universal background model (GMM-UBM). The main contributions are comprehensive evaluations of the effect of both additive white Gaussian noise (AWGN) and non-stationary noise (NSN) (with and without a G.712 type handset) upon identification performance. In particular, three NSN types with varying signal to noise ratios (SNRs) were tested corresponding to street traffic, a bus interior, and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely, mean, maximum, and linear weighted sum fusion. The databases employed were TIMIT, SITW, and NIST 2008; and 120 speakers were selected from each database to yield 3600 speech utterances. As recommendations from the study, mean fusion is found to yield overall best performance in terms of speaker identification accuracy (SIA) with noisy speech, whereas linear weighted sum fusion is overall best for original database recordings.
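The three late-fusion rules compared are simple to state: per-speaker mean, maximum, or linear weighted sum of the subsystem scores. A sketch with invented scores and weights (the study's actual scores come from GMM-UBM log-likelihoods):

```python
import numpy as np

def fuse_scores(score_lists, method="mean", weights=None):
    """Late fusion of per-speaker scores from several subsystems
    (e.g. PNCC-based and MFCC-based front ends)."""
    s = np.asarray(score_lists, dtype=float)   # shape: (systems, speakers)
    if method == "mean":
        return s.mean(axis=0)
    if method == "max":
        return s.max(axis=0)
    if method == "weighted":                   # linear weighted sum fusion
        return np.asarray(weights, dtype=float) @ s
    raise ValueError(f"unknown fusion method: {method}")

# Invented per-speaker scores from two subsystems over three speakers
pncc_scores = [0.9, 0.2, 0.4]
mfcc_scores = [0.7, 0.6, 0.1]
fused = fuse_scores([pncc_scores, mfcc_scores], "weighted", weights=[0.6, 0.4])
best_speaker = int(np.argmax(fused))   # identification decision
```

Mean fusion is the weighted case with equal weights, which is why the two methods can trade places depending on how well-calibrated the subsystem scores are.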
Bock, Otmar; Bury, Nils
2018-03-01
Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed mid-between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62, and correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
Generalized minimum dominating set and application in automatic text summarization
NASA Astrophysics Data System (ADS)
Xu, Yi-Zhi; Zhou, Hai-Jun
2016-03-01
For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than a certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all the edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by a replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarizes a given input text document. We carry out a preliminary test of this statistical physics-inspired method on the automatic text summarization problem.
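The feasibility condition defining a generalized dominating set can be checked directly; finding the minimum such set is the hard part, which the paper attacks with belief propagation. A small sketch on an invented weighted graph:

```python
def is_generalized_dominating(graph, dom_set, threshold):
    """Check the generalized-MDS condition: every vertex outside dom_set
    must have summed edge weight into dom_set of at least threshold.

    graph: dict vertex -> {neighbor: edge weight} (undirected, symmetric)
    """
    for v, nbrs in graph.items():
        if v in dom_set:
            continue
        covered = sum(w for u, w in nbrs.items() if u in dom_set)
        if covered < threshold:
            return False
    return True

# Triangle with unequal edge weights (invented)
g = {0: {1: 1.1, 2: 1.2},
     1: {0: 1.1, 2: 0.5},
     2: {0: 1.2, 1: 0.5}}
ok = is_generalized_dominating(g, {0}, 1.0)       # vertices 1, 2 both covered
not_ok = is_generalized_dominating(g, {1}, 1.0)   # vertex 2 gets only 0.5
```

With all edge weights equal to the threshold, the check collapses to "every outside vertex has a neighbor in the set", i.e. the conventional MDS condition, as the abstract notes.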
Smisc - A collection of miscellaneous functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landon Sego, PNNL
2015-08-31
A collection of functions for statistical computing and data manipulation. These include routines for rapidly aggregating heterogeneous matrices, manipulating file names, loading R objects, sourcing multiple R files, formatting datetimes, multi-core parallel computing, stream editing, specialized plotting, etc. Smisc-package A collection of miscellaneous functions allMissing Identifies missing rows or columns in a data frame or matrix as.numericSilent Silent wrapper for coercing a vector to numeric comboList Produces all possible combinations of a set of linear model predictors cumMax Computes the maximum of the vector up to the current index cumsumNA Computes the cumulative sum of a vector without propagating NAs d2binom Probability functions for the sum of two independent binomials dataIn A flexible way to import data into R. dbb The Beta-Binomial Distribution df2list Row-wise conversion of a data frame to a list dfplapply Parallelized single row processing of a data frame dframeEquiv Examines the equivalence of two dataframes or matrices dkbinom Probability functions for the sum of k independent binomials factor2character Converts all factor variables in a dataframe to character variables findDepMat Identify linearly dependent rows or columns in a matrix formatDT Converts date or datetime strings into alternate formats getExtension Filename manipulations: remove the extension or path, extract the extension or path getPath Filename manipulations: remove the extension or path, extract the extension or path grabLast Filename manipulations: remove the extension or path, extract the extension or path ifelse1 Non-vectorized version of ifelse integ Simple numerical integration routine interactionPlot Two-way Interaction Plot with Error Bar linearMap Linear mapping of a numerical vector or scalar list2df Convert a list to a data frame loadObject Loads and returns the object(s) in an ".Rdata" file more Display the contents of a file to the R terminal movAvg2 Calculate the moving average using a 2-sided window openDevice Opens a graphics device based on the filename extension p2binom Probability functions for the sum of two independent binomials padZero Pad a vector of numbers with zeros parseJob Parses a collection of elements into (almost) equal sized groups pbb The Beta-Binomial Distribution pcbinom A continuous version of the binomial cdf pkbinom Probability functions for the sum of k independent binomials plapply Simple parallelization of lapply plotFun Plot one or more functions on a single plot PowerData An example of power data pvar Prints the name and value of one or more objects qbb The Beta-Binomial Distribution rbb The Beta-Binomial Distribution, and numerous others (space limits reporting).
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
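For intuition, DAS simply sums the time-aligned channel samples, while DMAS sums signed square roots of all pairwise channel products, which suppresses incoherent (uncorrelated) contributions. This is a standard DMAS formulation, not code from the paper:

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: plain sum of already time-aligned channel samples."""
    return float(np.sum(delayed))

def dmas(delayed):
    """Delay-multiply-and-sum: signed square root of every distinct
    pairwise product, summed (a common DMAS form in the literature)."""
    s = np.asarray(delayed, dtype=float)
    out = 0.0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            p = s[i] * s[j]
            out += np.sign(p) * np.sqrt(abs(p))
    return float(out)

# A perfectly coherent signal across 4 channels
aligned = [1.0, 1.0, 1.0, 1.0]
das_val, dmas_val = das(aligned), dmas(aligned)   # 4 terms vs 6 pair terms
```

The MVB-DMAS idea in the abstract replaces the DAS-like terms that appear when the DMAS product is expanded with minimum-variance (adaptively weighted) sums; that adaptive weighting step is omitted here.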
Yang, Xiaoli; Xu, Junhai; Cao, Linjing; Li, Xianglin; Wang, Peiyuan; Wang, Bin; Liu, Baolin
2018-01-01
Our human brain can rapidly and effortlessly perceive a person’s emotional state by integrating the isolated emotional faces and bodies into a whole. Behavioral studies have suggested that the human brain encodes whole persons in a holistic rather than part-based manner. Neuroimaging studies have also shown that body-selective areas prefer whole persons to the sum of their parts. The body-selective areas played a crucial role in representing the relationships between emotions expressed by different parts. However, it remains unclear in which regions the perception of whole persons is represented by a combination of faces and bodies, and to what extent the combination can be influenced by the whole person’s emotions. In the present study, functional magnetic resonance imaging data were collected when participants performed an emotion distinction task. Multi-voxel pattern analysis was conducted to examine how the whole person-evoked responses were associated with the face- and body-evoked responses in several specific brain areas. We found that in the extrastriate body area (EBA), the whole person patterns were most closely correlated with weighted sums of face and body patterns, using different weights for happy expressions but equal weights for angry and fearful ones. These results were unique for the EBA. Our findings tentatively support the idea that the whole person patterns are represented in a part-based manner in the EBA, and modulated by emotions. These data will further our understanding of the neural mechanism underlying perceiving emotional persons. PMID:29375348
Thompson, Amanda L; Adair, Linda S; Bentley, Margaret E
2012-01-01
The prevalence of overweight among infants and toddlers has increased dramatically in the past three decades, highlighting the importance of identifying factors contributing to early excess weight gain, particularly in high-risk groups. Parental feeding styles, the attitudes and behaviors that characterize parental approaches to maintaining or modifying children’s eating behavior, are an important behavioral component shaping early obesity risk. Using longitudinal data from the Infant Care and Risk of Obesity Study, a cohort study of 217 African-American mother-infant pairs with feeding styles, dietary recalls and anthropometry collected from 3-18 months of infant age, we examined the relationship between feeding styles, infant diet and weight-for-age and sum of skinfolds. Longitudinal mixed models indicated that higher pressuring and indulgent feeding style scores were positively associated with greater infant energy intake, reduced odds of breastfeeding and higher levels of age-inappropriate feeding of liquids and solids while restrictive feeding styles were associated with lower energy intake, higher odds of breastfeeding and reduced odds of inappropriate feeding. Pressuring and restriction were also oppositely related to infant size, with pressuring associated with lower infant weight-for-age and restriction with higher weight-for-age and sum of skinfolds. Infant size also predicted maternal feeding styles in subsequent visits, indicating that the relationship between size and feeding styles is likely bidirectional. Our results suggest that the degree to which parents are pressuring or restrictive during feeding shapes the early feeding environment and, consequently, may be an important environmental factor in the development of obesity. PMID:23592664
Horton, Megan K; Blount, Benjamin C; Valentin-Blasini, Liza; Wapner, Ronald; Whyatt, Robin; Gennings, Chris; Factor-Litvak, Pam
2015-11-01
Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have been shown individually to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies examined the association between these individual exposures and thyroid function. Few studies have examined the effect of this chemical mixture on thyroid function during pregnancy. We examined the cross-sectional association between urinary perchlorate, thiocyanate and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples; perchlorate, thiocyanate, nitrate and iodide in urine samples collected from 284 pregnant women at 12 (±2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS adjusting for gestational age, urinary iodide and creatinine. Individual analyte concentrations in urine were significantly correlated (Spearman's r 0.4-0.5, p<0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. The WQS revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Co-exposure to perchlorate, nitrate and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
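The WQS index itself is simple once the weights are known: each analyte concentration is scored into quantiles, and the index is the weighted sum of those quantile scores per subject. The sketch below uses fixed, invented weights, whereas the actual method estimates them jointly with the regression:

```python
import numpy as np

def wqs_index(concentrations, weights, n_quantiles=4):
    """Weighted quantile sum index.

    concentrations: (subjects, analytes) array of exposure levels
    weights:        (analytes,) non-negative weights (renormalized to 1)
    Each analyte is scored into quantile bins 0..n_quantiles-1, then the
    index is the per-subject weighted sum of those scores.
    """
    X = np.asarray(concentrations, dtype=float)
    q = np.zeros_like(X)
    probs = np.linspace(0, 1, n_quantiles + 1)[1:-1]   # interior cut points
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], probs)
        q[:, j] = np.searchsorted(edges, X[:, j], side="right")
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return q @ w

# 4 subjects x 3 analytes (perchlorate, nitrate, thiocyanate), invented data
X = [[1, 10, 5],
     [2, 20, 6],
     [3, 30, 7],
     [4, 40, 8]]
index = wqs_index(X, [0.5, 0.25, 0.25])
```

Quantile scoring is what makes the index robust to each analyte's scale and skew, and the estimated weights (perchlorate largest, in this study) show which mixture component drives the association with TSH.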
Horton, Megan K.; Blount, Benjamin C.; Valentin-Blasini, Liza; Wapner, Ronald; Whyatt, Robin; Gennings, Chris; Factor-Litvak, Pam
2015-01-01
Background Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have been shown individually to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies examined the association between these individual exposures and thyroid function. Few studies have examined the effect of this chemical mixture on thyroid function during pregnancy. Objectives We examined the cross sectional association between urinary perchlorate, thiocyanate and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. Methods We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples; perchlorate, thiocyanate, nitrate and iodide in urine samples collected from 284 pregnant women at 12 (± 2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS adjusting for gestational age, urinary iodide and creatinine. Results Individual analyte concentrations in urine were significantly correlated (Spearman’s r 0.4–0.5, p < 0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. The WQS revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Conclusions Co-exposure to perchlorate, nitrate and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. PMID:26408806
Analysis of Environmental Chemical Mixtures and Non-Hodgkin Lymphoma Risk in the NCI-SEER NHL Study
Czarnota, Jenna; Gennings, Chris; Colt, Joanne S.; De Roos, Anneclaire J.; Cerhan, James R.; Severson, Richard K.; Hartge, Patricia; Ward, Mary H.
2015-01-01
Background There are several suspected environmental risk factors for non-Hodgkin lymphoma (NHL). The associations between NHL and environmental chemical exposures have typically been evaluated for individual chemicals (i.e., one-by-one). Objectives We determined the association between a mixture of 27 correlated chemicals measured in house dust and NHL risk. Methods We conducted a population-based case–control study of NHL in four National Cancer Institute–Surveillance, Epidemiology, and End Results centers—Detroit, Michigan; Iowa; Los Angeles County, California; and Seattle, Washington—from 1998 to 2000. We used weighted quantile sum (WQS) regression to model the association of a mixture of chemicals and risk of NHL. The WQS index was a sum of weighted quartiles for 5 polychlorinated biphenyls (PCBs), 7 polycyclic aromatic hydrocarbons (PAHs), and 15 pesticides. We estimated chemical mixture weights and effects for study sites combined and for each site individually, and also for histologic subtypes of NHL. Results The WQS index was statistically significantly associated with NHL overall [odds ratio (OR) = 1.30; 95% CI: 1.08, 1.56; p = 0.006; for one quartile increase] and in the study sites of Detroit (OR = 1.71; 95% CI: 1.02, 2.92; p = 0.045), Los Angeles (OR = 1.44; 95% CI: 1.00, 2.08; p = 0.049), and Iowa (OR = 1.76; 95% CI: 1.23, 2.53; p = 0.002). The index was marginally statistically significant in Seattle (OR = 1.39; 95% CI: 0.97, 1.99; p = 0.071). The most highly weighted chemicals for predicting risk overall were PCB congener 180 and propoxur. Highly weighted chemicals varied by study site; PCBs were more highly weighted in Detroit, and pesticides were more highly weighted in Iowa. Conclusions An index of chemical mixtures was significantly associated with NHL. Our results show the importance of evaluating chemical mixtures when studying cancer risk. 
Citation Czarnota J, Gennings C, Colt JS, De Roos AJ, Cerhan JR, Severson RK, Hartge P, Ward MH, Wheeler DC. 2015. Analysis of environmental chemical mixtures and non-Hodgkin lymphoma risk in the NCI-SEER NHL Study. Environ Health Perspect 123:965–970; http://dx.doi.org/10.1289/ehp.1408630 PMID:25748701
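The two WQS abstracts above share the same index construction: each exposure is scored into quartiles (0-3) and the scores are combined in a weighted sum whose nonnegative weights sum to one. A minimal Python sketch of the index itself (the weight estimation, which WQS regression performs by constrained optimization over bootstrap samples, is omitted; function names are illustrative):

```python
import numpy as np

def quartile_scores(x):
    """Score each value into quartiles 0..3 using the 25/50/75th percentiles."""
    cuts = np.percentile(x, [25, 50, 75])
    return np.searchsorted(cuts, x, side="left")

def wqs_index(X, weights):
    """Weighted sum of quartile-scored exposure columns.

    X: (n_subjects, n_chemicals) exposure matrix.
    weights: nonnegative weights summing to one (as estimated by WQS).
    """
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
    # score each chemical (column) into quartiles, then take the weighted sum
    scored = np.apply_along_axis(quartile_scores, 0, np.asarray(X, dtype=float))
    return scored @ w
```

With the estimated weights in hand, a one-unit (one-quartile) increase in this index is the exposure contrast behind the reported odds ratios.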
Weight distribution in the current annual twigs of barclay willow.
John F. Thilenius
1988-01-01
The current annual twigs of unbrowsed Barclay willow (Salix barclayi Anderss.) grow as gently tapering cylinders. Consequently, the distal half of the twig has only 33 to 41 percent of the total weight. Longer twigs have proportionally less weight in the distal end. The total weight of an unbrowsed twig can be estimated by simple regression of...
[Effect of Acupuncture Therapy on Body Compositions in Patients with Obesity].
Zhang, Hui-Min; Wu, Xue-Liang; Jiang, Chao; Shi, Rong-Xing
2017-04-25
To observe the clinical effectiveness of acupuncture intervention in weight reduction by modulating body compositions in obesity patients. A total of 71 obesity patients during a weight-loss procedure were allocated to an acupuncture+nutrition-consultation group (n = 40) and a simple nutrition-consultation group (n = 31). The patients of the acupuncture+nutrition-consultation group were treated by acupuncture stimulation of Zhongwan (CV 12), Xiawan (CV 10), Tianshu (ST 25), Wailing (ST 26), Qihai (CV 6), Guanyuan (CV 4), etc. for 30 min, once every other day, 3 times per week, 12 times altogether, and were also given weekly nutrition consultation (including subjective query, objective measurement, analysis, and a program for nutrition support) at the same time. The patients of the simple nutrition-consultation group were treated with weekly nutrition consultation alone for 4 weeks. Before and after the treatment, the patients' body weight, body mass index (BMI), fat mass, percentage of body fat, muscle mass, protein quality, water quality and bone mass were measured using a composition analyzer. After 4 weeks' treatment, the body mass, BMI, fat mass and fat percentage in both the acupuncture+nutrition-consultation and simple nutrition-consultation groups were significantly decreased (P < 0.01), while the levels of muscle, protein, bone and water content had no apparent changes (P > 0.05). The therapeutic effect of the acupuncture+nutrition-consultation group was markedly superior to that of the simple nutrition-consultation group in improving body weight, BMI, fat mass and fat percentage (P < 0.01). Acupuncture plus nutrition consultation is effective in reducing body mass, fat mass and percentage of body fat in obesity patients.
Recommendations on Future Science and Engineering Studies for Ocean Color
NASA Technical Reports Server (NTRS)
Mannino, Antonio
2015-01-01
The Ocean Health Index measured Ecological Integrity as the relative condition of assessed species in a given location. This was calculated as the weighted sum of the International Union for Conservation of Nature's (IUCN) assessments of species. Weights used were based on the level of extinction risk following Butchart et al. 2007: EX (extinct) 0.0, CR (critically endangered) 0.2, EN (endangered) 0.5, VU (vulnerable) 0.7, NT (not threatened) 0.9, and LC (least concern) 0.99. For primarily coastal goals, the spatial average of these per pixel scores was based on a 3 nmi buffer; for goals derived from all ocean waters, the spatial average was computed for the entire EEZ.
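The category weights above map directly into code. A minimal sketch (interpreting the per-location "relative condition" as the mean weight over assessed species is our assumption; names are illustrative):

```python
# Extinction-risk weights per Butchart et al. 2007, as quoted in the abstract.
IUCN_WEIGHTS = {
    "EX": 0.0,   # extinct
    "CR": 0.2,   # critically endangered
    "EN": 0.5,   # endangered
    "VU": 0.7,   # vulnerable
    "NT": 0.9,   # "not threatened" per the abstract (IUCN: near threatened)
    "LC": 0.99,  # least concern
}

def species_condition(categories):
    """Mean weighted condition of the assessed species at one location."""
    if not categories:
        raise ValueError("no species assessed")
    return sum(IUCN_WEIGHTS[c] for c in categories) / len(categories)
```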
Organohalogen contaminants and trace metals in North-East Atlantic porbeagle shark (Lamna nasus).
Bendall, Victoria A; Barber, Jonathan L; Papachlimitzou, Alexandra; Bolam, Thi; Warford, Lee; Hetherington, Stuart J; Silva, Joana F; McCully, Sophy R; Losada, Sara; Maes, Thomas; Ellis, Jim R; Law, Robin J
2014-08-15
The North-East Atlantic porbeagle (Lamna nasus) population has declined dramatically over the last few decades and is currently classified as 'Critically Endangered'. As long-lived, apex predators, they may be vulnerable to bioaccumulation of contaminants. In this study organohalogen compounds and trace elements were analysed in 12 specimens caught as incidental bycatch in commercial gillnet fisheries in the Celtic Sea in 2011. Levels of organohalogen contaminants were low or undetectable (summed CB and BDE concentrations 0.04-0.85 mg kg(-1) wet weight). A notably high Cd concentration (7.2 mg kg(-1) wet weight) was observed in one mature male, whereas the range observed in the other samples was much lower (0.04-0.26 mg kg(-1) wet weight). Hg and Pb concentrations were detected only in single animals, at 0.34 and 0.08 mg kg(-1) wet weight, respectively. These contaminant levels were low in comparison to other published studies for shark species. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Mehrotra, Sanjay; Kim, Kibaek
2011-12-01
We consider the problem of outcomes based budget allocations to chronic disease prevention programs across the United States (US) to achieve greater geographical healthcare equity. We use the Diabetes Prevention and Control Programs (DPCP) of the Centers for Disease Control and Prevention (CDC) as an example. We present a multi-criteria robust weighted sum model for such multi-criteria decision making in a group decision setting. Principal component analysis and inverse linear programming techniques are presented and used to study the actual 2009 budget allocation by CDC. Our results show that the CDC budget allocation process for the DPCPs is likely not model based. In our empirical study, the relative weights for different prevalence and comorbidity factors and the corresponding budgets obtained under different weight regions are discussed. Parametric analysis suggests that money should be allocated to states to promote diabetes education and to increase patient-healthcare provider interactions to reduce disparity across the US.
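As a hedged illustration of the weighted-sum idea in the abstract above, the sketch below scores each state as a weighted sum of (already normalized) prevalence/comorbidity criteria and allocates a budget proportionally to the scores; the paper's robust group-decision model and its actual criteria are not reproduced here, and all names and numbers are invented:

```python
def weighted_sum_scores(criteria, weights):
    """criteria: {state: [c1, c2, ...]} of normalized criterion values.
    Returns {state: weighted-sum score}."""
    return {s: sum(w * c for w, c in zip(weights, cs))
            for s, cs in criteria.items()}

def proportional_allocation(total_budget, scores):
    """Allocate the budget proportionally to each state's score."""
    total = sum(scores.values())
    return {s: total_budget * v / total for s, v in scores.items()}
```

Robust variants then ask how the allocation changes as the weight vector ranges over a whole region rather than a single point.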
Application of a modified selection index for honey bees (Hymenoptera: Apidae).
van Engelsdorp, D; Otis, G W
2000-12-01
Nine different genetic families of honey bees (Apis mellifera L.) were compared using summed z-scores (phenotypic values) and a modified selection index (Imod). Imod values incorporated both the phenotypic scores of the different traits and the economic weightings of these traits, as determined by a survey of commercial Ontario beekeepers. Largely because of the high weight all beekeepers place on honey production, a distinct difference between line rankings based on phenotypic scores and Imod scores was apparent, thereby emphasizing the need to properly weight the traits being evaluated to select bee stocks most valuable for beekeepers. Furthermore, when beekeepers who made >10% of their income from queen and nucleus colony sales assigned relative values to the traits used in the Imod calculations, the results differed from those based on weightings assigned by honey producers. Our results underscore the difficulties the North American beekeeping industry must overcome to devise effective methods of evaluating colonies for breeding purposes.
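The contrast between ranking by summed z-scores and by the economically weighted index I_mod can be sketched as follows (trait names, z-scores, and weights are invented for illustration):

```python
def modified_selection_index(z_scores, econ_weights):
    """I_mod: weighted sum of per-trait phenotypic z-scores.

    Both arguments are dicts keyed by trait; the economic weights
    (e.g. beekeepers' relative valuations) need not sum to one.
    """
    return sum(econ_weights[t] * z for t, z in z_scores.items())

# Two hypothetical bee lines: line_a excels at honey production,
# line_b is better overall by plain summed z-scores.
line_a = {"honey": 1.5, "gentleness": -1.0}
line_b = {"honey": 0.5, "gentleness": 1.5}
weights = {"honey": 0.8, "gentleness": 0.2}  # heavy weight on honey
```

With honey production weighted heavily, line_a overtakes line_b even though its plain z-score sum is lower, the kind of rank reversal the abstract describes.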
Modeling the solute transport by particle-tracing method with variable weights
NASA Astrophysics Data System (ADS)
Jiang, J.
2016-12-01
The particle-tracing method is usually used to simulate solute transport in fractured media. In this method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is inefficient at points with small concentration: few particles visit them, which leads to violent oscillation or gives a zero value for the concentration. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies and each copy is simulated independently with weight W/Int(W/C). If the weight W is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weight scheme, we can eliminate the violent oscillation and improve the accuracy by orders of magnitude.
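The splitting/termination rule described above can be sketched as a single weight-adjustment step (a minimal sketch; the surrounding random-walk machinery is omitted and names are illustrative). Both branches preserve the expected total weight: splitting returns Int(W/C) copies summing to W, and the survival branch returns weight C with probability W/C:

```python
import random

def adjust_weight(particle_weight, local_concentration, rng=random.random):
    """Splitting / Russian-roulette step from the abstract.

    Returns a list of particle weights to continue tracking
    (possibly empty if the particle is terminated).
    """
    W, C = particle_weight, local_concentration
    if W > C:
        # split into Int(W/C) copies, each carrying W / Int(W/C)
        n = int(W / C)
        return [W / n] * n
    if W < C:
        # survive with probability W/C; survivors are re-weighted to C
        return [C] if rng() < W / C else []
    return [W]
```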
Effect of starch structure on glucose and insulin responses in adults.
Behall, K M; Scholfield, D J; Canary, J
1988-03-01
Twelve women and 13 men were given meals containing cornstarch with 70% of the starch in the form of amylopectin or amylose to determine if differences in glycemic response result from different chemical structure. Blood was drawn before and 30, 60, 120, and 180 min after each meal. The meals consisted of starch crackers fed at the rate of 1 g carbohydrate from starch per kilogram body weight. The amylose meal resulted in a significantly lower glucose peak at 30 min than did the amylopectin meal. Plasma insulin response was significantly lower 30 and 60 min after amylose than after the amylopectin meal. Summed insulin above fasting was significantly lower after amylose while summed glucose was not significantly different between the two meals. The sustained plasma glucose levels after the amylose meal with reduced insulin requirement suggest amylose starch may be of potential benefit to carbohydrate-sensitive or diabetic individuals.
Nonlinear system theory: another look at dependence.
Wu, Wei Biao
2005-10-04
Based on nonlinear system theory, we introduce previously undescribed dependence measures for stationary causal processes. Our physical and predictive dependence measures quantify the degree of dependence of outputs on inputs in physical systems. The proposed dependence measures provide a natural framework for a limit theory for stationary processes. In particular, under conditions with quite simple forms, we present limit theorems for partial sums, empirical processes, and kernel density estimates. The conditions are mild and easily verifiable because they are directly related to the data-generating mechanisms.
Free-free opacity in dense plasmas with an average atom model
Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick; ...
2017-02-28
A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model, together with the Kubo-Greenwood model for optical conductivity. This, in turn, is used to calculate the opacity with the Kramers-Kronig dispersion relations. Furthermore, comparison with other methods for dense deuterium shows excellent agreement with DFT-MD simulations, and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.
Selected bibliography on the modeling and control of plant processes
NASA Technical Reports Server (NTRS)
Viswanathan, M. M.; Julich, P. M.
1972-01-01
A bibliography of information pertinent to the problem of simulating plants is presented. Detailed simulations of constituent pieces are necessary to justify simple models which may be used for analysis. Thus, this area of study is necessary to support the Earth Resources Program. The report sums up the present state of the problem of simulating vegetation. This area holds the hope of major benefits to mankind through understanding the ecology of a region and in improving agricultural yield.
Summing up the noise in gene networks
NASA Astrophysics Data System (ADS)
Paulsson, Johan
2004-01-01
Random fluctuations in genetic networks are inevitable as chemical reactions are probabilistic and many genes, RNAs and proteins are present in low numbers per cell. Such `noise' affects all life processes and has recently been measured using green fluorescent protein (GFP). Two studies show that negative feedback suppresses noise, and three others identify the sources of noise in gene expression. Here I critically analyse these studies and present a simple equation that unifies and extends both the mathematical and biological perspectives.
Topological vertex formalism with O5-plane
NASA Astrophysics Data System (ADS)
Kim, Sung-Soo; Yagi, Futoshi
2018-01-01
We propose a new topological vertex formalism for a type IIB (p, q) 5-brane web with an O5-plane. We apply our proposal to five-dimensional N = 1 Sp(1) gauge theory with Nf = 0, 1, 8 flavors to compute the topological string partition functions and check the agreement with the known results. Especially for the Nf = 8 case, which corresponds to E-string theory on a circle, we obtain a new yet simple expression for the partition function as a sum over two Young diagrams.
NASA Technical Reports Server (NTRS)
Teich, M. C.
1980-01-01
The history of heterodyne detection is reviewed from the radiowave to the optical regions of the electromagnetic spectrum, with emphasis on the submillimeter/far infrared. The transition from electric field to photon absorption detection in a simple system is investigated. The response of an isolated two level detector to a coherent source of incident radiation is calculated for both heterodyne and video detection. When the processes of photon absorption and photon emission cannot be distinguished, the relative detected power at double- and sum-frequencies is found to be multiplied by a coefficient, which is less than or equal to unity, and which depends on the incident photon energy and on the effective temperature of the system.
A Simple Framework for Evaluating Authorial Contributions for Scientific Publications.
Warrender, Jeffrey M
2016-10-01
A simple tool is provided to assist researchers in assessing contributions to a scientific publication, for ease in evaluating which contributors qualify for authorship, and in what order the authors should be listed. The tool identifies four phases of activity leading to a publication: Conception and Design, Data Acquisition, Analysis and Interpretation, and Manuscript Preparation. By comparing a project participant's contribution in a given phase to several specified thresholds, a score of up to five points can be assigned; the contributor's scores in all four phases are summed to yield a total "contribution score", which is compared to a threshold to determine which contributors merit authorship. This tool may be useful in a variety of contexts in which a systematic approach to authorial credit is desired.
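The four-phase scoring scheme can be sketched directly; the per-phase thresholds that map a contribution to a 0-5 score, and the authorship cutoff used below, are illustrative assumptions rather than values from the paper:

```python
PHASES = ("conception_design", "data_acquisition",
          "analysis_interpretation", "manuscript_preparation")

def contribution_score(phase_scores):
    """Sum the 0-5 per-phase scores into a total contribution score."""
    for phase in PHASES:
        s = phase_scores.get(phase, 0)
        if not 0 <= s <= 5:
            raise ValueError(f"{phase}: score must be between 0 and 5")
    return sum(phase_scores.get(p, 0) for p in PHASES)

def merits_authorship(phase_scores, threshold=6):
    """Compare the total to an authorship cutoff (the default is illustrative)."""
    return contribution_score(phase_scores) >= threshold
```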
Optimal generalized multistep integration formulae for real-time digital simulation
NASA Technical Reports Server (NTRS)
Moerder, D. D.; Halyo, N.
1985-01-01
The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
2014-04-01
For assessing comfort reaction, the overall vibration total value (oVTV) was calculated as the vector sum of the weighted triaxial seat pan and...the health symptoms require investigation in order to develop or improve effective exposure criteria, ergonomic design requirements, and mitigation...effects, seat design , and validation testing. However, appropriate science- and technology-based guidelines on exposure, seat design , and validation
12 CFR 217.52 - Simple risk-weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... greater than or equal to −1 (that is, between zero and −1), then E equals the absolute value of RVC. If... this section) by the lowest applicable risk weight in this paragraph (b). (1) Zero percent risk weight... credit exposures receive a zero percent risk weight under § 217.32 may be assigned a zero percent risk...
Hietala, P; Wolfová, M; Wolf, J; Kantanen, J; Juga, J
2014-02-01
Improving the feed efficiency of dairy cattle has a substantial effect on the economic efficiency and on the reduction of harmful environmental effects of dairy production through lower feeding costs and emissions from dairy farming. To assess the economic importance of feed efficiency in the breeding goal for dairy cattle, the economic values for the current breeding goal traits and the additional feed efficiency traits for Finnish Ayrshire cattle under production circumstances in 2011 were determined. The derivation of economic values was based on a bioeconomic model in which the profit of the production system was calculated, using the generated steady state herd structure. Considering beef production from dairy farms, 2 marketing strategies for surplus calves were investigated: (A) surplus calves were sold at a young age and (B) surplus calves were fattened on dairy farms. Both marketing strategies were unprofitable when subsidies were not included in the revenues. When subsidies were taken into account, a positive profitability was observed in both marketing strategies. The marginal economic values for residual feed intake (RFI) of breeding heifers and cows were -25.5 and -55.8 €/kg of dry matter per day per cow and year, respectively. The marginal economic value for RFI of animals in fattening was -29.5 €/kg of dry matter per day per cow and year. To compare the economic importance among traits, the standardized economic weight of each trait was calculated as the product of the marginal economic value and the genetic standard deviation; the standardized economic weight expressed as a percentage of the sum of all standardized economic weights was called relative economic weight. When not accounting for subsidies, the highest relative economic weight was found for 305-d milk yield (34% in strategy A and 29% in strategy B), which was followed by protein percentage (13% in strategy A and 11% in strategy B). 
The third most important traits were calving interval (9%) and mature weight of cows (11%) in strategy A and B, respectively. The sums of the relative economic weights over categories for RFI were 6 and 7% in strategy A and B, respectively. Under production conditions in 2011, the relative economic weights for the studied feed efficiency traits were low. However, it is possible that the relative importance of feed efficiency traits in the breeding goal will increase in the future due to increasing requirements to mitigate the environmental impact of milk production. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
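The standardization step described above (marginal economic value times genetic standard deviation, expressed as a share of the sum) can be sketched as follows; taking absolute values of the marginal values is our assumption, made so that shares are positive percentages even for traits such as RFI whose marginal value is negative:

```python
def relative_economic_weights(marginal_values, genetic_sds):
    """Relative economic weight per trait, in percent.

    Standardized weight = |marginal economic value| * genetic SD;
    relative weight = each trait's share of the summed standardized weights.
    """
    std = {t: abs(marginal_values[t]) * genetic_sds[t] for t in marginal_values}
    total = sum(std.values())
    return {t: 100.0 * v / total for t, v in std.items()}
```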
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. Final-enhanced images generally had the best diagnostic quality and gave more detail about the visibility of vessels and structures in capsule endoscopy images.
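A minimal per-channel sketch of the half-unit weighting idea, under simplifying assumptions: the four-pixel groups are taken as non-overlapping 2x2 blocks (the paper additionally overlaps the groups in the horizontal and vertical directions), and the final image is the half-sum of the original and the block-averaged image:

```python
import numpy as np

def enhance_channel(ch):
    """Simplified half-unit weighted enhancement for one image channel.

    Each non-overlapping 2x2 group is replaced by its average value
    (identical for every pixel in the group), then the final image is
    the half-sum of the original and the averaged image.
    """
    ch = np.asarray(ch, dtype=float)
    h, w = ch.shape
    prelim = ch.copy()
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            prelim[i:i + 2, j:j + 2] = ch[i:i + 2, j:j + 2].mean()
    return (ch + prelim) / 2.0
```

Applied per R, G, and B channel, this pulls every pixel halfway toward its local group average, a mild smoothing-style enhancement.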
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivasseau, Vincent (E-mail: vincent.rivasseau@th.u-psud.fr); Tanasa, Adrian (E-mail: adrian.tanasa@ens-lyon.org)
The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.
Zheng, Yelong; Lu, Hongyu; Yin, Wei; Tao, Dashuai; Shi, Lichun; Tian, Yu
2016-10-07
Forces acting on the legs of water-walking arthropods, whose weights are measured in dynes, are of great interest to entomologists, physicists, and engineers. While their floating mechanism has been recognized, the in vivo forces on the legs of a stationary animal had not previously been measured simultaneously. In this study, their elegant bright-edged leg shadows are used to make the tiny forces visible and measurable based on the updated Archimedes' principle. The force was approximately proportional to the shadow area, with a resolution from nanonewtons to piconewtons per pixel. The sum of the leg forces agreed well with the body weight measured with an accurate electronic balance, which verified the updated Archimedes' principle at the arthropod level. Slight changes in the vertical position of the body's center of gravity and in the body pitch angle have also been revealed for the first time. Visualizing tiny forces by shadow is cost-effective and very sensitive and could be used in many other applications.
Gain weighted eigenspace assignment
NASA Technical Reports Server (NTRS)
Davidson, John B.; Andrisani, Dominick, II
1994-01-01
This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.
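The cost function described, an eigenvector-placement error plus a weighted sum of squared feedback-gain elements, can be sketched as follows (a single uniform gain weight gamma is assumed for brevity; the method weights each gain element individually, and all names here are illustrative):

```python
import numpy as np

def gain_weighted_cost(v_des, v_ach, K, gamma=1.0):
    """Scalar trade-off cost for gain weighted eigenspace assignment.

    v_des, v_ach: desired and achievable eigenvector entries (stacked).
    K: feedback gain matrix; gamma trades gain magnitude against placement.
    """
    placement_error = np.sum((np.asarray(v_des) - np.asarray(v_ach)) ** 2)
    gain_penalty = gamma * np.sum(np.asarray(K) ** 2)
    return placement_error + gain_penalty
```

Increasing gamma pushes the optimizer toward smaller gains at the expense of eigenvector placement, the trade-off the report formalizes.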
[General growth patterns and simple mathematic models of height and weight of Chinese children].
Zong, Xin-nan; Li, Hui
2009-05-01
To explore the growth patterns and simple mathematic models of height and weight of Chinese children. The original data had been obtained from two national representative cross-sectional surveys which were the 2005 National Survey of Physical Development of Children (under 7 years of age) and the 2005 Chinese National Survey on Students Constitution and Health (6 - 18 years). Reference curves of height and weight of children under 7 years of age were constructed by the LMS method, and data of children from 6 to 18 years of age were smoothed by cubic spline function and transformed by a modified LMS procedure. Growth velocity was calculated from smoothed values of height and weight. A simple linear model was fitted for children 1 to 10 years of age, for which smoothed height and weight values were used. (1) Birth length of Chinese children was about 50 cm, with average length 61 cm, 67 cm, 76 cm and 88 cm at the 3rd, 6th, 12th and 24th month. Height gain was stable from 2 to 10 years of age, averaging 6 - 7 cm each year. Birth length doubles by 3.5 years and triples by 12 years. The formula estimating average height of normal children aged 2 - 10 years was: height (cm) = age (yr) x 6.5 + 76 (cm). (2) Birth weight was about 3.3 kg. Growth velocity was at a peak of about 1.0 - 1.1 kg/mon in the first 3 months, decreased by half to about 0.5 - 0.6 kg/mon in the second 3 months, and fell to about a quarter of the peak, 0.25 - 0.30 kg/mon, in the last 6 months of the first year. Body mass doubles, triples and quadruples birth weight at about the 3rd, 12th and 24th month, respectively. Average annual gain was about 2 kg and 3 kg from 1 - 6 years and 7 - 10 years, respectively. The estimated formula for children 1 to 6 years of age was weight (kg) = age (yr) x 2 + 8 (kg), but for those 7 - 10 years old, weight (kg) = age (yr) x 3 + 2 (kg).
Growth patterns of height and weight at the different age stages were summarized for Chinese children, and simple reference data on height and weight velocity from 0 to 18 years and approximate estimation formulas for ages 1 - 10 years were presented for clinical practice.
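The estimation formulas quoted in the abstract translate directly into code; they give rough population averages for clinical approximation, not individual norms (the range checks are ours, matching the stated validity ranges):

```python
def estimated_height_cm(age_years):
    """Average height for ages 2-10: height (cm) = age (yr) x 6.5 + 76."""
    if not 2 <= age_years <= 10:
        raise ValueError("height formula covers ages 2-10")
    return age_years * 6.5 + 76

def estimated_weight_kg(age_years):
    """Average weight: age x 2 + 8 (ages 1-6) or age x 3 + 2 (ages 7-10)."""
    if 1 <= age_years <= 6:
        return age_years * 2 + 8
    if 7 <= age_years <= 10:
        return age_years * 3 + 2
    raise ValueError("weight formulas cover ages 1-10")
```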
Almagro, Lorena; García-Pérez, Pascual; Belchí-Navarro, Sarai; Sánchez-Pujante, Pedro Joaquín; Pedreño, M A
2016-02-01
In this work, suspension-cultured cells of Linum usitatissimum L. were used to evaluate the effect of two types of cyclodextrins, β-glucan and (Z)-3-hexenol, separately or in combination, on phytosterol and tocopherol production. Suspension-cultured cells of L. usitatissimum were able to produce high levels of phytosterols in the presence of 50 mM methylated-β-cyclodextrins (1325.96 ± 107.06 μg g dry weight(-1)), alone or in combination with β-glucan (1278.57 ± 190.10 μg g dry weight(-1)) or (Z)-3-hexenol (1507.88 ± 173.02 μg g dry weight(-1)); cyclodextrins were able to increase both the secretion and accumulation of phytosterols in the spent medium, whereas β-glucan and (Z)-3-hexenol by themselves only increased intracellular accumulation. Moreover, the phytosterol values found in the presence of hydroxypropylated-β-cyclodextrins were lower than those found in the presence of methylated-β-cyclodextrins in all cases studied. However, the results showed that the presence of methylated-β-cyclodextrins did not increase tocopherol production, and an increase in tocopherol levels was observed only when cells were elicited with 50 mM hydroxypropylated-β-cyclodextrins in combination with β-glucan (174 μg g dry weight(-1)) or (Z)-3-hexenol (257 μg g dry weight(-1)). Since the tocopherol levels produced in the combined treatment were higher than the sum of the individual treatments, a synergistic effect between the two elicitors was assumed. To sum up, flax cell cultures elicited with cyclodextrins alone or in combination with β-glucan or (Z)-3-hexenol were able to produce phytosterols and tocopherols, and these elicited suspension-cultured cells of L. usitatissimum can therefore provide a more sustainable, economical and ecological alternative system for their production. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
NASA Technical Reports Server (NTRS)
Lawton, Teri B.
1989-01-01
A cortical neural network that computes the visibility of shifts in the direction of movement is proposed. The network computes: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales, analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction a test pattern moved relative to a textured background, using global processes that pool the output from paired even- and odd-symmetric simple and complex cells across the spatial extent of the background frame of reference. Evidence that magnocellular pathways are used to discriminate the direction of movement is presented. Because this discrimination relies on magnocellular pathways, the task is not affected by small pattern changes such as jitter, short presentations, blurring, and the different background contrasts that result when the veiling illumination in a scene changes.
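The paired even- and odd-symmetric filtering described above can be illustrated with a minimal sketch. This is an illustration of the quadrature-pair idea only, not the paper's full network; the Gabor parameters, the test grating, and the opponent combination of temporal gradients are all assumptions:

```python
import numpy as np

# Hypothetical 1D illustration (not the paper's model): a quadrature pair of
# even- (cosine-Gabor) and odd-symmetric (sine-Gabor) bandpass filters whose
# paired outputs across two frames carry direction-of-motion information.

def gabor_pair(size=65, sigma=8.0, freq=0.1):
    """Return even- and odd-symmetric Gabor filters at one spatial scale."""
    x = np.arange(size) - size // 2
    envelope = np.exp(-x**2 / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * x), envelope * np.sin(2 * np.pi * freq * x)

def filter_responses(pattern, even, odd):
    """Convolve a 1D luminance pattern with the quadrature pair."""
    return (np.convolve(pattern, even, mode="same"),
            np.convolve(pattern, odd, mode="same"))

# Test pattern in two frames: a grating that shifts rightward between frames.
x = np.arange(256)
frame1 = np.sin(2 * np.pi * 0.1 * x)
frame2 = np.sin(2 * np.pi * 0.1 * (x - 2))   # shifted right by 2 samples

even, odd = gabor_pair()
e1, o1 = filter_responses(frame1, even, odd)
e2, o2 = filter_responses(frame2, even, odd)

# Temporal gradients of the paired outputs, combined in opponent fashion and
# pooled over space; the sign of the pooled signal indicates shift direction.
direction_signal = np.sum((e2 - e1) * o1 - (o2 - o1) * e1)
print("rightward" if direction_signal > 0 else "leftward")
```

Because the even and odd filters respond in quadrature, this opponent combination reduces to the phase shift between frames, so its sign flips with the direction of the displacement.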
Stickiness in Hamiltonian systems: From sharply divided to hierarchical phase space
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; Motter, Adilson E.; Kantz, Holger
2006-02-01
We investigate the dynamics of chaotic trajectories in simple yet physically important Hamiltonian systems with nonhierarchical borders between regular and chaotic regions with positive measures. We show that the stickiness to the border of the regular regions in systems with such a sharply divided phase space occurs through one-parameter families of marginally unstable periodic orbits and is characterized by an exponent γ=2 for the asymptotic power-law decay of the distribution of recurrence times. Generic perturbations lead to systems with hierarchical phase space, where the stickiness is apparently enhanced due to the presence of infinitely many regular islands and cantori. In this case, we show that the distribution of recurrence times can be composed of a sum of exponentials or a sum of power laws, depending on the relative contribution of the primary and secondary structures of the hierarchy. Numerical verification of our main results is provided for area-preserving maps, mushroom billiards, and the newly defined magnetic mushroom billiards.
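The recurrence-time statistics discussed above can be probed numerically with a generic sketch. This uses the Chirikov standard map rather than the paper's billiard systems, and the parameter K, the reference region, and the iteration count are all assumptions made for illustration:

```python
import numpy as np

# Minimal sketch (not from the paper): recurrence-time statistics for the
# Chirikov standard map, an area-preserving map with a mixed phase space.
# Each return of a chaotic orbit to a reference region yields one recurrence
# time; stickiness to regular structures shows up as a slowly decaying
# (power-law-like) tail in the distribution of these times.

def standard_map(x, p, K):
    """One step of the Chirikov standard map on the torus [0, 1)^2."""
    p = (p + K / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0
    x = (x + p) % 1.0
    return x, p

def recurrence_times(K=2.5, n_steps=500_000, seed=0):
    """Collect times between successive visits to the region p < 0.1."""
    rng = np.random.default_rng(seed)
    x, p = rng.random(), rng.random()
    last_visit, times = None, []
    for t in range(n_steps):
        x, p = standard_map(x, p, K)
        if p < 0.1:                       # reference region
            if last_visit is not None and t - last_visit > 1:
                times.append(t - last_visit)
            last_visit = t
    return np.array(times)

times = recurrence_times()
# Survival function P(T >= t); a power-law tail would appear as a straight
# line on log-log axes, in contrast to exponential decay.
for t in (10, 100, 1000):
    print(t, np.mean(times >= t))
```

Plotted on log-log axes, this survival function is the standard diagnostic separating exponential decay (fully chaotic transport) from the power-law tails that signal stickiness.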
New Angles on Standard Force Fields: Toward a General Approach for Treating Atomic-Level Anisotropy
Van Vleet, Mary J.; Misquitta, Alston J.; Schmidt, J. R.
2017-12-21
Nearly all standard force fields employ the “sum-of-spheres” approximation, which models intermolecular interactions purely in terms of interatomic distances. Nonetheless, atoms in molecules can have significantly nonspherical shapes, leading to interatomic interaction energies with strong orientation dependencies. Neglecting this “atomic-level anisotropy” can lead to significant errors in predicting interaction energies. Herein, we propose a simple, transferable, and computationally efficient model (MASTIFF) whereby atomic-level orientation dependence can be incorporated into ab initio intermolecular force fields. MASTIFF includes anisotropic exchange-repulsion, charge penetration, and dispersion effects, in conjunction with a standard treatment of anisotropic long-range (multipolar) electrostatics. To validate our approach, we benchmark MASTIFF against various sum-of-spheres models over a large library of intermolecular interactions between small organic molecules. MASTIFF achieves quantitative accuracy, with respect to both high-level electronic structure theory and experiment, thus showing promise as a basis for “next-generation” force field development.
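The contrast between a distance-only "sum-of-spheres" term and an orientation-dependent one can be sketched with a toy pair potential. This is an illustrative form only, not MASTIFF's published model; the Born-Mayer parameters A and b and the anisotropy coefficient a2 are assumptions:

```python
import numpy as np

# Toy comparison (hypothetical parameters, not MASTIFF's functional form):
# a "sum-of-spheres" Born-Mayer repulsion depends only on the interatomic
# distance r, whereas an anisotropic variant modulates the prefactor with
# the relative orientation, here via a second-order Legendre polynomial.

def repulsion_isotropic(r, A=1000.0, b=3.5):
    """Sum-of-spheres exchange-repulsion: distance-only."""
    return A * np.exp(-b * r)

def repulsion_anisotropic(r, cos_theta, A=1000.0, b=3.5, a2=0.3):
    """Same radial form, with an orientation-dependent prefactor."""
    p2 = 0.5 * (3 * cos_theta**2 - 1)     # Legendre polynomial P2
    return A * (1 + a2 * p2) * np.exp(-b * r)

r = 3.0
for theta_deg in (0, 45, 90):
    c = np.cos(np.radians(theta_deg))
    print(theta_deg, repulsion_isotropic(r), repulsion_anisotropic(r, c))
```

The isotropic term gives one value for all three orientations, while the anisotropic term is strengthened along the aligned direction and weakened perpendicular to it, which is the qualitative effect a spherical-harmonic prefactor expansion captures.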
Method, system and computer-readable media for measuring impedance of an energy storage device
Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.
2016-01-26
A real-time battery impedance spectrum is acquired from a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum from a one-time record, enabling battery diagnostics. The excitation current applied to a battery is a sum of equal-amplitude sine waves whose frequencies are octave harmonics spread over a range of interest. The sample frequency is also octave- and harmonically related to all frequencies in the sum. The time profile of this sampled signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, represents the impedance of the battery in the time domain. Since the excitation frequencies are known and octave- and harmonically related, a simple algorithm, FST, processes the time profile by rectifying it relative to the sine and cosine of each frequency. Another algorithm then yields the real and imaginary components for each frequency.
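The octave-harmonic excitation and per-frequency recovery described above can be sketched in simulation. This is a simplified illustration of the synchronous-detection idea only, not the patented FST algorithm; the frequencies, sample rate, and the assumed "true" impedance values are all invented for the simulation:

```python
import numpy as np

# Sketch of the idea behind FST (simplified; the patented rectification
# details are not reproduced here). Excite with equal-amplitude sine waves
# at octave-harmonic frequencies, then recover real and imaginary impedance
# components by correlating the response with sin/cos at each frequency.

f0 = 0.1                                  # lowest frequency, Hz
freqs = f0 * 2.0 ** np.arange(6)          # octave harmonics: 0.1 ... 3.2 Hz
fs = 1024 * f0                            # sample rate, octave-related to all
n_samples = 3 * 1024                      # three periods of the lowest frequency
t = np.arange(n_samples) / fs

# Assumed "true" impedance at each frequency (invented for this simulation).
Z_true = 0.05 + 0.02j / (1 + 1j * freqs / 1.0)

current = sum(np.sin(2 * np.pi * f * t) for f in freqs)
# In a real measurement, `voltage` would be the battery's response to
# `current`; here it is synthesized from the assumed impedances.
voltage = sum((Z_true[k] * np.exp(2j * np.pi * freqs[k] * t)).imag
              for k in range(len(freqs)))
voltage -= voltage.mean()                 # "average deleted"

# Synchronous detection: project onto sin/cos at each excitation frequency.
# Octave-harmonic frequencies are mutually orthogonal over the full record.
for k, f in enumerate(freqs):
    re = 2 * np.mean(voltage * np.sin(2 * np.pi * f * t))
    im = 2 * np.mean(voltage * np.cos(2 * np.pi * f * t))
    print(f, re, im, Z_true[k])
```

Because every excitation frequency occupies an exact integer number of periods in the record, the cross-terms between frequencies average to zero and each projection isolates one impedance component.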