Energy-weighted sum rules connecting ΔZ = 2 nuclei within the SO(8) model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Štefánik, Dušan; Šimkovic, Fedor; Faessler, Amand
2013-12-30
Energy-weighted sum rules associated with ΔZ = 2 nuclei are obtained for the Fermi and the Gamow-Teller operators within the SO(8) model. It is found that a single state of the intermediate nucleus dominates the contribution to the sum rule. The results confirm findings obtained within the SO(5) model that the energy-weighted sum rules of ΔZ = 2 nuclei are governed by the residual interactions of the nuclear Hamiltonian. A short discussion of some aspects of energy-weighted sum rules in the case of realistic nuclei is included.
Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models
2002-03-01
such as the weighted sum method, the weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only weighted sum...different groups. They can be termed as deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the...weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most
A Groupwise Association Test for Rare Mutations Using a Weighted Sum Statistic
Madsen, Bo Eskerod; Browning, Sharon R.
2009-01-01
Resequencing is an emerging tool for identification of rare disease-associated mutations. Rare mutations are difficult to tag with SNP genotyping, as genotyping studies are designed to detect common variants. However, studies have shown that genetic heterogeneity is a probable scenario for common diseases, in which multiple rare mutations together explain a large proportion of the genetic basis for the disease. Thus, we propose a weighted-sum method to jointly analyse a group of mutations in order to test for groupwise association with disease status. For example, such a group of mutations may result from resequencing a gene. We compare the proposed weighted-sum method to alternative methods and show that it is powerful for identifying disease-associated genes, on both simulated and ENCODE data. Using the weighted-sum method, a resequencing study can identify a disease-associated gene with an overall population attributable risk (PAR) of 2%, even when each individual mutation has much lower PAR, using 1,000 to 7,000 affected and unaffected individuals, depending on the underlying genetic model. This study thus demonstrates that resequencing studies can identify important genetic associations, provided that specialised analysis methods, such as the weighted-sum method, are used. PMID:19214210
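The core of the method above is a per-variant weight that up-weights rare mutations, a per-individual weighted-sum score, and a permutation rank-sum test. A minimal Python sketch of that idea follows; the smoothing constants, toy data, and function name are assumptions based on our reading of the abstract, not the published code.

```python
import numpy as np

def weighted_sum_test(genotypes, affected, n_perm=5000, seed=0):
    """Weighted-sum rank test for groupwise rare-variant association.
    genotypes: (n_individuals, n_variants) minor-allele counts (0/1/2);
    affected: boolean case indicator. Rank ties are ignored for brevity."""
    rng = np.random.default_rng(seed)
    n = len(affected)
    n_unaff = (~affected).sum()
    # variant frequency estimated from unaffected individuals, smoothed;
    # rare variants get a small w_j and hence a large weight 1/w_j
    q = (genotypes[~affected].sum(axis=0) + 1) / (2 * n_unaff + 2)
    w = np.sqrt(n * q * (1 - q))
    scores = genotypes @ (1.0 / w)            # per-individual genetic score
    ranks = scores.argsort().argsort() + 1    # ranks 1..n
    x_obs = ranks[affected].sum()             # rank sum of the cases
    k = affected.sum()
    x_perm = np.array([rng.permutation(ranks)[:k].sum() for _ in range(n_perm)])
    return (x_perm >= x_obs).mean()           # one-sided permutation p-value

rng = np.random.default_rng(1)
G = rng.binomial(2, 0.01, size=(2000, 30))    # toy rare-variant genotypes
aff = np.arange(2000) < 1000                  # first half labeled as cases
print(weighted_sum_test(G, aff))
```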
How Does Sequence Structure Affect the Judgment of Time? Exploring a Weighted Sum of Segments Model
ERIC Educational Resources Information Center
Matthews, William J.
2013-01-01
This paper examines the judgment of segmented temporal intervals, using short tone sequences as a convenient test case. In four experiments, we investigate how the relative lengths, arrangement, and pitches of the tones in a sequence affect judgments of sequence duration, and ask whether the data can be described by a simple weighted sum of…
Diagrams for the Free Energy and Density Weight Factors of the Ising Models.
1983-01-01
Diagrams for the free energy and density weight factors of the Ising models are given for the cubic lattices. We employ a theorem that states that a certain sum of diagrams is zero in order to obtain the density-dependent weight factors. [Citation fragment: R. A. Farrell, T. Morita, and P. H. E. Meijer, "Cluster Expansion..."; "Erratum: New Generating Functions and Results for the..."]
A new enhanced index tracking model in portfolio optimization with sum weighted approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng
2017-04-01
Index tracking is a portfolio management strategy which aims to construct an optimal portfolio that achieves a return similar to the benchmark index return at minimum tracking error without purchasing all the stocks that make up the index. Enhanced index tracking is an improved portfolio management strategy which aims to generate higher portfolio return than the benchmark index return besides minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate higher mean return than the benchmark index at minimum tracking error. Besides that, the proposed model is able to outperform the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach which contributes a 67% improvement in portfolio mean return as compared to the existing model.
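The abstract does not spell out the optimization, so the following Python sketch is only a generic enhanced-index-tracking formulation: long-only weights summing to one that trade tracking error against mean excess return. The trade-off parameter `lam` and the toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def enhanced_index_tracking(R, rb, lam=0.5):
    """Long-only weights trading tracking error against excess return.
    R: (T, n) stock returns; rb: (T,) benchmark index returns."""
    T, n = R.shape
    def objective(w):
        excess = R @ w - rb
        # root-mean-square tracking error minus lam * mean excess return
        return np.sqrt((excess ** 2).mean()) - lam * excess.mean()
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(objective, np.full(n, 1.0 / n), method='SLSQP',
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

rng = np.random.default_rng(0)
rb = rng.normal(0.0005, 0.01, 250)                      # toy benchmark
R = rb[:, None] + rng.normal(0.0002, 0.02, (250, 8))    # 8 toy stocks
w = enhanced_index_tracking(R, rb)
print(w.round(3), float((R @ w - rb).mean()))
```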
Transition sum rules in the shell model
NASA Astrophysics Data System (ADS)
Lu, Yi; Johnson, Calvin W.
2018-03-01
Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
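As a small illustration of the centroid the abstract defines (the ratio of the energy-weighted to the non-energy-weighted sum), here is a Python sketch computing S0, S1, and their ratio from a toy strength distribution. The paper's actual contribution is evaluating these as expectation values without computing daughter states; the sketch only shows the definitions.

```python
import numpy as np

def sum_rules(E_i, E_f, B):
    """NEWSR (S0), EWSR (S1), and their ratio, the strength centroid,
    for transitions from a state at E_i to final states at E_f with
    strengths B."""
    S0 = B.sum()
    S1 = ((E_f - E_i) * B).sum()
    return S0, S1, S1 / S0

# toy strength distribution; the ratio S1/S0 is the centroid energy
E_f = np.array([2.1, 4.5, 7.0])
B = np.array([0.6, 0.3, 0.1])
print(sum_rules(0.0, E_f, B))
```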
The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.
Qu, Shaojian; Ji, Ying
2016-01-01
In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model where each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that the robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). For an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently for real-world applications.
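For a polytopal weight set, the inner maximization of the weighted sum is attained at a vertex, which makes the robust objective cheap to evaluate. A minimal sketch with toy numbers (this is only the evaluation step, not the paper's MPEC formulation):

```python
import numpy as np

def worst_case_weighted_value(objectives, weight_vertices):
    """Worst-case weighted sum of one player's objective values.
    Because the weighted sum is linear in the weights, the inner
    maximization over a polytopal weight set is attained at a vertex."""
    f = np.asarray(objectives, dtype=float)
    return max(float(np.asarray(w) @ f) for w in weight_vertices)

# two objectives; weight set {(w, 1 - w) : 0.3 <= w <= 0.7}
print(worst_case_weighted_value([4.0, 1.0], [(0.3, 0.7), (0.7, 0.3)]))
```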
Comparison of two weighted integration models for the cueing task: linear and likelihood
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2003-01-01
In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained by a limited-capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited-capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, 3 observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. Analysis of a limited-capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
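A toy Python sketch of the two decision rules being compared, a weighted linear sum versus a sum of weighted likelihoods, on valid-cue trials. The Gaussian response model, cue weight, and SNR values are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def decision_variables(snr, w=0.8, n=20000):
    """Decision variables on valid-cue signal trials: the signal sits at
    the cued location, which carries weight w (the cue validity)."""
    x_c = rng.normal(snr, 1.0, n)                    # cued location (signal)
    x_u = rng.normal(0.0, 1.0, n)                    # uncued location (noise)
    lr = lambda x: np.exp(snr * x - 0.5 * snr ** 2)  # likelihood ratio
    linear = w * x_c + (1 - w) * x_u                 # weighted linear sum
    likelihood = w * lr(x_c) + (1 - w) * lr(x_u)     # weighted likelihoods
    return linear, likelihood

for snr in (0.5, 2.0, 4.0):        # the two rules diverge as SNR grows
    lin, lik = decision_variables(snr)
    print(snr, round(lin.mean(), 2), round(float(np.log(lik).mean()), 2))
```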
Harel, Daphna; Hudson, Marie; Iliescu, Alexandra; Baron, Murray; Steele, Russell
2016-08-01
To develop a weighted summary score for the Medsger Disease Severity Scale (DSS) and to compare its measurement properties with those of a summed DSS score and a physician's global assessment (PGA) of severity score in systemic sclerosis (SSc). Data from 875 patients with SSc enrolled in a multisite observational research cohort were extracted from a central database. Item response theory was used to estimate weights for the DSS weighted score. Intraclass correlation coefficients (ICC) and convergent, discriminative, and predictive validity of the 3 summary measures in relation to patient-reported outcomes (PRO) and mortality were compared. Mean PGA was 2.69 (SD 2.16, range 0-10), mean DSS summed score was 8.60 (SD 4.02, range 0-36), and mean DSS weighted score was 8.11 (SD 4.05, range 0-36). ICC were similar for all 3 measures [PGA 6.9%, 95% credible intervals (CrI) 2.1-16.2; DSS summed score 2.5%, 95% CrI 0.4-6.7; DSS weighted score 2.0%, 95% CrI 0.1-5.6]. Convergent and discriminative validity of the 3 measures for PRO were largely similar. In Cox proportional hazards models adjusting for age and sex, the 3 measures had similar predictive ability for mortality (adjusted R² 13.9% for PGA, 12.3% for DSS summed score, and 10.7% for DSS weighted score). The 3 summary scores appear valid and perform similarly. However, there were some concerns with the weights computed for individual DSS scales, with unexpectedly low weights attributed to lung, heart, and kidney, leading the PGA to be the preferred measure at this time. Further work refining the DSS could improve the measurement properties of the DSS summary scores.
On the time-weighted quadratic sum of linear discrete systems
NASA Technical Reports Server (NTRS)
Jury, E. I.; Gutman, S.
1975-01-01
A method is proposed for obtaining the time-weighted quadratic sum for linear discrete systems. The formula for the weighted quadratic sum is obtained from a matrix z-transform formulation. In addition, it is shown that this quadratic sum can be derived in a recursive form for several useful weighting functions. The discussion presented parallels that of MacFarlane (1963) for the weighted quadratic integral for linear continuous systems.
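One way to realize such a recursive form in code: for x_{k+1} = A x_k with stable A, the plain quadratic sum kernel S0 solves a discrete Lyapunov equation, and the time-weighted kernel follows from a second Lyapunov solve with S0 - Q as the forcing term. A Python sketch of that derivation (ours, not the paper's z-transform route):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def time_weighted_quadratic_sum(A, Q, x0):
    """J1 = sum_k k * x_k' Q x_k for x_{k+1} = A x_k with stable A.
    S0 = sum_k (A')^k Q A^k solves A'S0A - S0 + Q = 0, and the
    time-weighted kernel then satisfies S1 = A'S1A + (S0 - Q)."""
    S0 = solve_discrete_lyapunov(A.T, Q)
    S1 = solve_discrete_lyapunov(A.T, S0 - Q)
    return x0 @ S1 @ x0

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
Q = np.eye(2)
x0 = np.array([1.0, -1.0])
print(time_weighted_quadratic_sum(A, Q, x0))
```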
Adopting epidemic model to optimize medication and surgical intervention of excess weight
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2017-01-01
We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
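A sketch of the idea in Python with hypothetical three-compartment dynamics; the abstract does not give the actual model equations, so every rate constant, the weights, and the transfer structure below are invented for illustration. Integrate the ODEs for a given participation rate u, evaluate the weighted-sum-plus-cost objective, and grid-search for u*.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, u, beta=0.4, g=0.1, r=0.3):
    """Hypothetical 3-group weight dynamics: normal (N), overweight (S),
    obese (O); u is the intervention participation rate."""
    N, S, O = y
    dN = -beta * N * (S + O) + r * u * S
    dS = beta * N * (S + O) - g * S - r * u * S + r * u * O
    dO = g * S - r * u * O
    return [dN, dS, dO]

def cost(u, w=(1.0, 2.0), c=0.5, T=20.0):
    sol = solve_ivp(model, (0.0, T), [0.6, 0.3, 0.1], args=(u,),
                    dense_output=True)
    t = np.linspace(0.0, T, 200)
    N, S, O = sol.sol(t)
    # weighted sum of the excess-weight groups plus intervention cost
    return np.trapz(w[0] * S + w[1] * O + c * u * (S + O), t)

us = np.linspace(0.0, 1.0, 21)
print(us[np.argmin([cost(u) for u in us])])   # grid search for u*
```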
Evolving cell models for systems and synthetic biology.
Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio
2010-03-01
This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
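The four fitness aggregations named above can be sketched as follows. The per-call normalization and the Dirichlet draw for random weights are placeholder choices, since the paper's exact normalization and sampling schemes are not given in the abstract.

```python
import numpy as np

def aggregate(objectives, method, rng=None):
    """Combine per-target fitness values f_i >= 0 into one score."""
    f = np.asarray(objectives, dtype=float)
    if method == "equal_sum":          # (1) equally weighted sum
        return f.mean()
    if method == "normalized_sum":     # (2) rescale objectives first
        return (f / (f.max() + 1e-12)).mean()
    if method == "random_sum":         # (3) random weights per call
        w = (rng or np.random.default_rng()).dirichlet(np.ones(f.size))
        return float(w @ f)
    if method == "equal_product":      # (4) equally weighted product
        return float(np.prod(f) ** (1.0 / f.size))
    raise ValueError(method)

f = [0.9, 0.2, 0.6]   # e.g., fits to three time-series targets
print([aggregate(f, m) for m in
       ("equal_sum", "normalized_sum", "random_sum", "equal_product")])
```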
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
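A Python sketch of the weighted-sum-of-IGs input function, using the standard inverse Gaussian parameterization by mean and shape; the component values below are illustrative, not fitted to any data set.

```python
import numpy as np

def ig_pdf(t, mu, lam):
    """Inverse Gaussian density with mean mu and shape lam (t > 0)."""
    t = np.asarray(t, dtype=float)
    return (np.sqrt(lam / (2.0 * np.pi * t ** 3))
            * np.exp(-lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t)))

def input_function(t, weights, mus, lams):
    """Weighted sum of IGs as an empirical absorption input function."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # weights normalized to sum to 1
    return sum(wi * ig_pdf(t, m, l) for wi, m, l in zip(w, mus, lams))

t = np.linspace(0.01, 48.0, 2000)
f = input_function(t, [0.7, 0.3], mus=[1.5, 6.0], lams=[3.0, 8.0])
print(np.trapz(f, t))   # integrates to ~1, as an input fraction should
```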
NASA Astrophysics Data System (ADS)
Zhao, Yumin
1997-07-01
By the techniques of the Wick theorem for coupled clusters, non-energy-weighted electromagnetic sum-rule calculations are presented in the sdg neutron-proton interacting boson model, the nuclear pair shell model, and the fermion dynamical symmetry model. The project was supported by the Development Project Foundation of China, the National Natural Science Foundation of China, the Doctoral Education Fund of the National Education Committee, and the Fundamental Research Fund of Southeast University.
A 2-categorical state sum model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baratin, Aristide, E-mail: abaratin@uwaterloo.ca; Freidel, Laurent, E-mail: lfreidel@perimeterinstitute.ca
It has long been argued that higher categories provide the proper algebraic structure underlying state sum invariants of 4-manifolds. This idea has been refined recently, by proposing to use 2-groups and their representations as specific examples of 2-categories. The challenge has been to make these proposals fully explicit. Here, we give a concrete realization of this program. Building upon our earlier work with Baez and Wise on the representation theory of 2-groups, we construct a four-dimensional state sum model based on a categorified version of the Euclidean group. We define and explicitly compute the simplex weights, which may be viewed as a categorified analogue of Racah-Wigner 6j-symbols. These weights solve a hexagon equation that encodes the formal invariance of the state sum under the Pachner moves of the triangulation. This result unravels the combinatorial formulation of the Feynman amplitudes of quantum field theory on flat spacetime proposed in A. Baratin and L. Freidel [Classical Quantum Gravity 24, 2027–2060 (2007)], which was shown to lead after gauge-fixing to Korepanov's invariant of 4-manifolds.
An assessment of some non-gray global radiation models in enclosures
NASA Astrophysics Data System (ADS)
Meulemans, J.
2016-01-01
The accuracy of several non-gray global gas/soot radiation models, namely the Wide-Band Correlated-K (WBCK) model, the Spectral Line Weighted-sum-of-gray-gases model with one optimized gray gas (SLW-1), the (non-gray) Weighted-Sum-of-Gray-Gases (WSGG) model with different sets of coefficients (Smith et al., Soufiani and Djavdan, Taylor and Foster) was assessed on several test cases from the literature. Non-isothermal (or isothermal) participating media containing non-homogeneous (or homogeneous) mixtures of water vapor, carbon dioxide and soot in one-dimensional planar enclosures and multi-dimensional rectangular enclosures were investigated. For all the considered test cases, a benchmark solution (LBL or SNB) was used in order to compute the relative error of each model on the predicted radiative source term and the wall net radiative heat flux.
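For reference, the basic WSGG construction these models share: total emissivity is a weighted sum of a few gray gases with temperature-dependent weights. A minimal sketch follows; the coefficients are placeholders, not any of the published sets named above, which must be substituted for real work.

```python
import numpy as np

# Placeholder 3-gray-gas coefficients; published sets (Smith et al.,
# Soufiani and Djavdan, Taylor and Foster) are tabulated per mixture
# and partial-pressure ratio.
k = np.array([0.4, 7.0, 130.0])     # absorption coefficients, 1/(atm*m)
b = np.array([[0.45, 2.0e-4],       # a_i(T) = b_i0 + b_i1 * T
              [0.25, -5.0e-5],
              [0.10, -2.0e-5]])

def wsgg_emissivity(T, pL):
    """Total emissivity eps = sum_i a_i(T) * (1 - exp(-k_i * pL)).
    The 'clear gas' carries the remaining weight 1 - sum_i a_i(T)."""
    a = b[:, 0] + b[:, 1] * T       # temperature-dependent weights
    return float(np.sum(a * (1.0 - np.exp(-k * pL))))

print(wsgg_emissivity(T=1200.0, pL=0.3))   # T in K, pL in atm*m
```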
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Karniel, Amir; Avraham, Guy; Peles, Bat-Chen; Levy-Tzedek, Shelly; Nisky, Ilana
2010-01-01
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test, for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG. (i) By calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) By comparing two weighted sums of human and model handshakes we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) By comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake. PMID:21206462
NASA Astrophysics Data System (ADS)
Crnomarkovic, Nenad; Belosevic, Srdjan; Tomanovic, Ivan; Milicevic, Aleksandar
2017-12-01
The effects of the number of significant figures (NSF) in the interpolation polynomial coefficients (IPCs) of the weighted sum of gray gases model (WSGM) on the results of numerical investigations and on WSGM optimization were investigated. The investigation was conducted using numerical simulations of the processes inside a pulverized coal-fired furnace. The radiative properties of the gas phase were determined using the simple gray gas model (SG), the two-term WSGM (W2), and the three-term WSGM (W3). Ten sets of IPCs with the same NSF were formed for every weighting coefficient in both W2 and W3. The average and maximal relative difference values of the flame temperatures, wall temperatures, and wall heat fluxes were determined. The investigation showed that the results of numerical investigations were affected by the NSF unless it exceeded a certain value. An increase in the NSF did not necessarily lead to WSGM optimization. The combination of the NSF (CNSF) was a necessary requirement for WSGM optimization.
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the extended Kalman filter (EKF) provides sufficient solutions to handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods, and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches, and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step from the selection of the individual component weights. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they will theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decreasing the complexity of the system to one linear and another nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6], it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows for smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined.
Next, any other information that depends on the supporting statistic is found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7]. In the following sections a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
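A toy version of the propagation-stage weight update described above: approximate a target pdf by a weighted sum of fixed Gaussians, choosing nonnegative weights by a discretized integral-squared-difference fit. The grid, components, and target below are illustrative assumptions, not the cited references' algorithm.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

def gaussian_sum_weights(target_pdf, means, sigmas, grid):
    """Nonnegative component weights w_i so that sum_i w_i N(x; m_i, s_i)
    matches target_pdf on a grid: a discretized integral-squared-
    difference fit, followed by renormalization to a valid pdf."""
    Phi = np.stack([norm.pdf(grid, m, s)
                    for m, s in zip(means, sigmas)], axis=1)
    w, _ = nnls(Phi, target_pdf(grid))
    return w / w.sum()

grid = np.linspace(-6.0, 6.0, 401)
target = lambda x: 0.3 * norm.pdf(x, -1.5, 0.7) + 0.7 * norm.pdf(x, 2.0, 1.2)
print(gaussian_sum_weights(target, means=[-2.0, 0.0, 2.0],
                           sigmas=[1.0, 1.0, 1.0], grid=grid))
```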
Neyman-Pearson biometric score fusion as an extension of the sum rule
NASA Astrophysics Data System (ADS)
Hube, Jens Peter
2007-04-01
We define the biometric performance invariance under strictly monotonic functions on match scores as normalization symmetry. We use this symmetry to clarify the essential difference between the standard score-level fusion approaches of sum rule and Neyman-Pearson. We then express Neyman-Pearson fusion assuming match scores defined using false acceptance rates on a logarithmic scale. We show that by stating Neyman-Pearson in this form, it reduces to sum rule fusion for ROC curves with logarithmic slope. We also introduce a one parameter model of biometric performance and use it to express Neyman-Pearson fusion as a weighted sum rule.
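A minimal sketch of fusion on the log-FAR scale described above: map each matcher's score to -log10 of its empirical false acceptance rate, then take a weighted sum. The empirical-FAR estimator, the clipping constant, and the toy impostor distributions are assumptions.

```python
import numpy as np

def score_to_log_far(score, impostor_scores):
    """Map a matcher's score to -log10 of its empirical false
    acceptance rate, estimated from that matcher's impostor scores."""
    imp = np.sort(np.asarray(impostor_scores))
    far = 1.0 - np.searchsorted(imp, score, side='right') / imp.size
    return -np.log10(np.clip(far, 1e-6, 1.0))   # clip keeps FAR=0 finite

def fuse(scores, impostor_sets, weights=None):
    """Sum-rule fusion on the log-FAR scale."""
    s = np.array([score_to_log_far(si, imp)
                  for si, imp in zip(scores, impostor_sets)])
    w = np.ones(s.size) if weights is None else np.asarray(weights)
    return float(w @ s)

rng = np.random.default_rng(0)
impostors = [rng.normal(0.0, 1.0, 10000), rng.normal(0.2, 1.5, 10000)]
print(fuse([3.1, 2.4], impostors))
```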
Sommer, Christine; Sletner, Line; Mørkrid, Kjersti; Jenum, Anne Karen; Birkeland, Kåre Inge
2015-04-03
Maternal glucose and lipid levels are associated with neonatal anthropometry of the offspring, also independently of maternal body mass index (BMI). Gestational weight gain, however, is often not accounted for. The objective was to explore whether the effects of maternal glucose and lipid levels on the offspring's birth weight and subcutaneous fat were independent of early pregnancy BMI and mid-gestational weight gain. In a population-based, multi-ethnic, prospective cohort of 699 women and their offspring, maternal anthropometrics were collected in gestational weeks 15 and 28. Maternal fasting plasma lipids, and fasting and 2-hour glucose after a 75 g glucose load, were collected in gestational week 28. Maternal risk factors were standardized using z-scores. Outcomes were neonatal birth weight and the sum of skinfolds in four different regions. Mean (standard deviation) birth weight was 3491 ± 498 g and mean sum of skinfolds was 18.2 ± 3.9 mm. Maternal fasting glucose and HDL-cholesterol were predictors of birth weight, and fasting and 2-hour glucose were predictors of neonatal sum of skinfolds, independently of weight gain as well as early pregnancy BMI, gestational week at inclusion, maternal age, parity, smoking status, ethnic origin, gestational age and offspring's sex. However, weight gain was the strongest independent predictor of both birth weight and neonatal sum of skinfolds, with a 0.21 kg/week increase in weight gain giving a 110.7 (95% confidence interval 76.6-144.9) g heavier neonate, and a 0.72 (0.38-1.06) mm larger sum of skinfolds. The effect size of the mother's early pregnancy BMI on birth weight was higher in non-Europeans than in Europeans. Maternal fasting glucose and HDL-cholesterol were predictors of the offspring's birth weight, and fasting and 2-hour glucose were predictors of neonatal sum of skinfolds, independently of weight gain. Mid-gestational weight gain was a stronger predictor of both birth weight and neonatal sum of skinfolds than early pregnancy BMI, maternal glucose and lipid levels.
Parameter Set Cloning Based on Catchment Similarity for Large-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Liu, Z.; Kaheil, Y.; McCollum, J.
2016-12-01
Parameter calibration is a crucial step to ensure the accuracy of hydrological models. However, streamflow gauges are not available everywhere for calibrating a large-scale hydrologic model globally. Thus, assigning parameters appropriately for regions where calibration cannot be performed directly has been a challenge for large-scale hydrologic modeling. Here we propose a method to estimate the model parameters in ungauged regions based on the values obtained through calibration in areas where gauge observations are available. This parameter set cloning is performed according to a catchment similarity index, a weighted sum index based on four catchment characteristic attributes: IPCC Climate Zone, Soil Texture, Land Cover, and Topographic Index. The catchments with calibrated parameter values are donors, while the uncalibrated catchments are candidates. Catchment characteristic analyses are first conducted for both donors and candidates. For each attribute, we compute a characteristic distance between donors and candidates. Next, for each candidate, weights are assigned to the four attributes such that higher weights are given to properties that are more directly linked to the dominant hydrologic processes. This ensures that the parameter set cloning emphasizes the dominant hydrologic process in the region where the candidate is located. The catchment similarity index for each donor-candidate couple is then created as the sum of the weighted distances of the four properties. Finally, parameters are assigned to each candidate from the donor that is "most similar" (i.e., with the shortest weighted distance sum). For validation, we applied the proposed method to catchments where gauge observations are available and compared simulated streamflows using the parameters cloned from other catchments to the results obtained by calibrating the hydrologic model directly using gauge data. The comparison shows good agreement between the two models for different river basins. This method has been applied globally to the Hillslope River Routing (HRR) model using gauge observations obtained from the Global Runoff Data Center (GRDC). As a next step, more catchment properties can be taken into account to further improve the representation of catchment similarity.
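A compact sketch of the donor selection step, assuming the four attributes have already been encoded as numeric vectors so that a weighted absolute-difference distance makes sense (categorical attributes such as climate zone would need a different encoding in practice; all names and numbers below are illustrative):

```python
import numpy as np

def clone_parameters(cand_attrs, donor_attrs, donor_params, weights):
    """Give each candidate catchment the parameters of its most similar
    donor; similarity = weighted sum of per-attribute distances."""
    C = np.asarray(cand_attrs)[:, None, :]    # (n_cand, 1, n_attr)
    D = np.asarray(donor_attrs)[None, :, :]   # (1, n_donor, n_attr)
    dist = (np.abs(C - D) * np.asarray(weights)).sum(axis=2)
    nearest = dist.argmin(axis=1)             # shortest weighted distance
    return [donor_params[j] for j in nearest]

donors = [[1.0, 0.3, 0.5, 7.2], [2.0, 0.6, 0.1, 9.8]]   # encoded attributes
cands = [[1.2, 0.4, 0.4, 7.5]]
params = [{"k": 0.12}, {"k": 0.31}]                     # calibrated sets
print(clone_parameters(cands, donors, params, weights=[0.4, 0.3, 0.2, 0.1]))
```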
Algebraic grid adaptation method using non-uniform rational B-spline surface modeling
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, B. K.
1992-01-01
An algebraic adaptive grid system based on the equidistribution law, using a Non-Uniform Rational B-Spline (NURBS) surface for redistribution, is presented. A weight function utilizing a properly weighted Boolean sum of various flow field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.
NASA Astrophysics Data System (ADS)
Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam
2018-04-01
Financial ratios and risk are important indicators for evaluating the financial performance or efficiency of companies. Therefore, financial ratios and a risk factor need to be taken into consideration when evaluating the efficiency of companies with a Data Envelopment Analysis (DEA) model. In a DEA model, the efficiency of a company is measured as the ratio of its sum-weighted outputs to its sum-weighted inputs. The objective of this paper is to propose a DEA model incorporating financial ratios and a risk factor for evaluating and comparing the efficiency of the financial companies in Malaysia. In this study, the listed financial companies in Malaysia from 2004 until 2015 are investigated. The results of this study show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies. This implies that these efficient companies have utilized their resources or inputs optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies as well as determine the optimal input and output weights that maximize the efficiency of financial companies in Malaysia.
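The ratio of sum-weighted outputs to sum-weighted inputs is commonly computed from the CCR multiplier form, linearized by fixing each unit's weighted input to 1. A Python sketch with toy data (the paper's actual inputs and outputs are financial ratios and risk; the data below are made up):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """CCR multiplier-form efficiency of unit o. X: (n, m) inputs,
    Y: (n, s) outputs. Maximize u'y_o subject to v'x_o = 1 and
    u'y_j - v'x_j <= 0 for all units j, with u, v >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = -np.concatenate([Y[o], np.zeros(m)])      # variables z = [u, v]
    A_ub = np.hstack([Y, -X])                     # u'y_j - v'x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun                               # efficiency in (0, 1]

X = np.array([[2.0, 1.0], [3.0, 2.0], [4.0, 1.5]])   # toy inputs
Y = np.array([[1.0], [2.0], [1.8]])                  # toy outputs
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])
```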
Slanger, W D; Marchello, M J; Busboom, J R; Meyer, H H; Mitchell, L A; Hendrix, W F; Mills, R R; Warnock, W D
1994-06-01
Data from sixty finished, crossbred lambs were used to develop prediction equations for total weight of retail-ready cuts (SUM). These cuts were the leg, sirloin, loin, rack, shoulder, neck, riblets, shank, and lean trim (85/15). Measurements were taken on live lambs and on both hot and cold carcasses. A four-terminal bioelectrical impedance analyzer (BIA) was used to measure resistance (Rs, ohms) and reactance (Xc, ohms). Distances between detector terminals (L, centimeters) were recorded. Carcass temperatures (T, degrees C) at the time of BIA readings were also recorded. The equation predicting SUM from cold carcass measurements (n = 53, R² = .97) was SUM = .093 + .621 × weight − .0219 × Rs + .0248 × Xc + .182 × L − .338 × T. Resistance accounted for variability in SUM over and above weight and L (P = .0016). The above equation was used to rank cold carcasses in descending order of predicted SUM. An analogous ranking was obtained from a prediction equation that used weight only (R² = .88). These rankings were divided into five categories: top 25%, middle 50%, bottom 25%, top 50%, and bottom 50%. Within-category differences in average fat cover, yield grade, and SUM as a percentage of cold carcass weight of carcasses not placed in the same category by both prediction equations were quantified with independent t-tests. These differences were statistically significant for all categories except the middle 50%. This shows that BIA located those lambs that could more efficiently contribute to SUM because a higher portion of their weight was lean.
NASA Astrophysics Data System (ADS)
Chu, Huaqiang; Liu, Fengshan; Consalvi, Jean-Louis
2014-08-01
The relationship between the spectral line based weighted-sum-of-gray-gases (SLW) model and the full-spectrum k-distribution (FSK) model in isothermal and homogeneous media is investigated in this paper. The SLW transfer equation can be derived from the FSK transfer equation expressed in the k-distribution function without approximation. It confirms that the SLW model is equivalent to the FSK model in the k-distribution function form. The numerical implementation of the SLW relies on a somewhat arbitrary discretization of the absorption cross section whereas the FSK model finds the spectrally integrated intensity by integration over the smoothly varying cumulative-k distribution function using a Gaussian quadrature scheme. The latter is therefore in general more efficient as a fewer number of gray gases is required to achieve a prescribed accuracy. Sample numerical calculations were conducted to demonstrate the different efficiency of these two methods. The FSK model is found more accurate than the SLW model in radiation transfer in H2O; however, the SLW model is more accurate in media containing CO2 as the only radiating gas due to its explicit treatment of ‘clear gas.’
Diet models with linear goal programming: impact of achievement functions.
Gerdessen, J C; de Vries, J H M
2015-11-01
Diet models based on goal programming (GP) are valuable tools in designing diets that comply with nutritional, palatability and cost constraints. Results derived from GP models are usually very sensitive to the type of achievement function that is chosen. This paper aims to provide a methodological insight into several achievement functions. It describes the extended GP (EGP) achievement function, which enables the decision maker to use either a MinSum achievement function (which minimizes the sum of the unwanted deviations) or a MinMax achievement function (which minimizes the largest unwanted deviation), or a compromise between both. An additional advantage of EGP models is that from one set of data and weights multiple solutions can be obtained. We use small numerical examples to illustrate the 'mechanics' of achievement functions. Then, the EGP achievement function is demonstrated on a diet problem with 144 foods, 19 nutrients and several types of palatability constraints, in which the nutritional constraints are modeled with fuzzy sets. Choice of achievement function affects the results of diet models. MinSum achievement functions can give rise to solutions that are sensitive to weight changes, and that pile all unwanted deviations on a limited number of nutritional constraints. MinMax achievement functions spread the unwanted deviations as evenly as possible, but may create many (small) deviations. EGP comprises both types of achievement functions, as well as compromises between them. It can thus, from one data set, find a range of solutions with various properties.
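A small linear-programming sketch of the EGP achievement function on a toy diet problem: lam = 0 recovers MinSum, lam = 1 recovers MinMax, and intermediate lam gives the compromise. The nutrient matrix, goals, and bounds are made up; the real model has 144 foods, 19 nutrients, and fuzzy nutritional constraints.

```python
import numpy as np
from scipy.optimize import linprog

def egp_diet(N, goals, lam, x_max=2.0):
    """Extended goal programming on a toy diet: minimize
    lam * D + (1 - lam) * mean(d), where d_i is the shortfall of
    nutrient goal i and D >= d_i for all i. lam = 0 gives MinSum,
    lam = 1 gives MinMax."""
    m, n = N.shape
    # variables: food amounts x (n), shortfalls d (m), max shortfall D (1)
    c = np.concatenate([np.zeros(n), np.full(m, (1.0 - lam) / m), [lam]])
    A1 = np.hstack([-N, -np.eye(m), np.zeros((m, 1))])   # Nx + d >= goals
    A2 = np.hstack([np.zeros((m, n)), np.eye(m), -np.ones((m, 1))])  # d <= D
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([-goals, np.zeros(m)]),
                  bounds=[(0, x_max)] * n + [(0, None)] * (m + 1))
    return res.x[:n], res.x[n:n + m]

N = np.array([[2.0, 0.5, 1.0],     # made-up nutrient contents per food
              [0.2, 3.0, 0.8]])
x, d = egp_diet(N, goals=np.array([10.0, 8.0]), lam=0.5)
print(x.round(2), d.round(2))      # foods chosen and goal shortfalls
```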
Weighted Scaling in Non-growth Random Networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li
2012-09-01
We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it, and the strength of a vertex as the sum of weights of the multiple-edges attached to it. The network evolves according to a vertex-strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into a scale-free state. It is interesting that the network also acquires an exponential edge weight distribution; namely, coexistence of a scale-free distribution and an exponential distribution emerges.
Weight and cost forecasting for advanced manned space vehicles
NASA Technical Reports Server (NTRS)
Williams, Raymond
1989-01-01
A computerized mass and cost estimating methodology for predicting advanced manned space vehicle weights and costs was developed. The user-friendly methodology, designated MERCER (Mass Estimating Relationship/Cost Estimating Relationship), organizes the predictive process according to major vehicle subsystem levels. Design, development, test, evaluation, and flight hardware cost forecasting is treated by the study. This methodology consists of a complete set of mass estimating relationships (MERs) which serve as the control components for the model and cost estimating relationships (CERs) which use MER output as input. To develop this model, numerous MER and CER studies were surveyed and modified where required. Additionally, relationships were regressed from raw data to accommodate the methodology. The models and formulations which estimated the cost of historical vehicles to within 20 percent of the actual cost were selected. The results of the research, along with components of the MERCER program, are reported. On the basis of the analysis, the following conclusions were established: (1) The cost of a spacecraft is best estimated by summing the cost of individual subsystems; (2) No one cost equation can be used for forecasting the cost of all spacecraft; (3) Spacecraft cost is highly correlated with its mass; (4) No study surveyed contained sufficient formulations to autonomously forecast the cost and weight of the entire advanced manned vehicle spacecraft program; (5) No user-friendly program was found that linked MERs with CERs to produce spacecraft cost; and (6) The group accumulation weight estimation method (summing the estimated weights of the various subsystems) proved to be a useful method for finding the total weight and cost of a spacecraft.
ERIC Educational Resources Information Center
Soh, Kaycheng
2014-01-01
World university rankings (WUR) use the weight-and-sum approach to arrive at an overall measure which is then used to rank the participating universities of the world. Although the weight-and-sum procedure seems straightforward and accords with common sense, it has hidden methodological or statistical problems which render the meaning of the…
49 CFR 393.42 - Brakes required on all wheels.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subject to this part is not required to be equipped with brakes if the axle weight of the towed vehicle does not exceed 40 percent of the sum of the axle weights of the towing vehicle. (4) Any full trailer... of the towed vehicle does not exceed 40 percent of the sum of the axle weights of the towing vehicle...
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints is introduced. The model is transformed into a deterministic multiple-objective nonlinear programming model by means of the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
Coherence analysis of a class of weighted networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; He, Jiaojiao; Zong, Yue; Ju, Tingting; Sun, Yu; Su, Weiyi
2018-04-01
This paper investigates consensus dynamics in a dynamical system with additive stochastic disturbances, characterized as network coherence by using the Laplacian spectrum. We introduce a class of weighted networks based on a complete graph and investigate the first- and second-order network coherence, quantified as the sum and the square sum of the reciprocals of all nonzero Laplacian eigenvalues. First, the recursive relationship of the eigenvalues of the Laplacian matrix at two successive generations is deduced. Then, we compute the sum and the square sum of the reciprocals of all nonzero Laplacian eigenvalues. The obtained results show that the scalings of the first- and second-order coherence with network size obey four and five laws, respectively, depending on the range of the weight factor. Finally, it indicates that the scalings of our studied networks are smaller than those of other studied networks when 1/√d
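A numerical companion to these definitions: build the weighted Laplacian, drop the zero eigenvalue, and form the sum and square sum of reciprocals. The 1/(2n) normalization is one common convention for network coherence, and the example graph is a toy stand-in for the paper's recursive family.

```python
import numpy as np

def network_coherence(W):
    """First- and second-order coherence from the nonzero eigenvalues of
    the weighted Laplacian (1/(2n) normalization is one convention)."""
    L = np.diag(W.sum(axis=1)) - W
    eig = np.linalg.eigvalsh(L)
    nz = eig[eig > 1e-10]                  # drop the zero mode
    n = W.shape[0]
    return (1.0 / nz).sum() / (2 * n), (1.0 / nz ** 2).sum() / (2 * n)

# complete graph on 4 vertices, one edge carrying weight factor r = 2
W = np.ones((4, 4)) - np.eye(4)
W[0, 1] = W[1, 0] = 2.0
print(network_coherence(W))
```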
NASA Astrophysics Data System (ADS)
Di Francesco, P.; Zinn-Justin, P.
2005-12-01
We prove higher rank analogues of the Razumov-Stroganov sum rule for the ground state of the O(1) loop model on a semi-infinite cylinder: we show that a weighted sum of components of the ground state of the A_{k-1} IRF model yields integers that generalize the numbers of alternating sign matrices. This is done by constructing minimal polynomial solutions of the level-1 U_q(\widehat{\frak{sl}(k)}) quantum Knizhnik-Zamolodchikov equations, which may also be interpreted as quantum incompressible q-deformations of quantum Hall effect wavefunctions at filling fraction ν = 1/k. In addition to the generalized Razumov-Stroganov point q = -e^{iπ/(k+1)}, another combinatorially interesting point is reached in the rational limit q → -1, where we identify the solution with extended Joseph polynomials associated with the geometry of upper triangular matrices with vanishing kth power.
Urbanek, Margrit; Hayes, M Geoffrey; Armstrong, Loren L; Morrison, Jean; Lowe, Lynn P; Badon, Sylvia E; Scheftner, Doug; Pluzhnikov, Anna; Levine, David; Laurie, Cathy C; McHugh, Caitlin; Ackerman, Christine M; Mirel, Daniel B; Doheny, Kimberly F; Guo, Cong; Scholtens, Denise M; Dyer, Alan R; Metzger, Boyd E; Reddy, Timothy E; Cox, Nancy J; Lowe, William L
2013-09-01
Newborns characterized as large and small for gestational age are at risk for increased mortality and morbidity during the first year of life as well as for obesity and dysglycemia as children and adults. The intrauterine environment and fetal genes contribute to fetal size at birth. To define the genetic architecture underlying newborn size, we performed a genome-wide association study (GWAS) in 4281 newborns in four ethnic groups from the Hyperglycemia and Adverse Pregnancy Outcome Study. We tested for association with newborn anthropometric traits (birth length, head circumference, birth weight, percent fat mass and sum of skinfolds) and newborn metabolic traits (cord glucose and C-peptide) under three models. Model 1 adjusted for field center, ancestry, neonatal gender, gestational age at delivery, parity, and maternal age at oral glucose tolerance test (OGTT); Model 2 adjusted for Model 1 covariates, maternal body mass index (BMI) at OGTT, maternal height at OGTT, maternal mean arterial pressure at OGTT, and maternal smoking and drinking; Model 3 adjusted for Model 2 covariates, and maternal glucose and C-peptide at OGTT. Strong evidence was observed for association of measures of newborn adiposity (sum of skinfolds, Model 3 Z-score 7.356, P = 1.90×10⁻¹³, and to a lesser degree fat mass and birth weight) with a region on Chr3q25.31 mapping between CCNL1 and LEKR1. These findings were replicated in an independent cohort of 2296 newborns. This region has previously been shown to be associated with birth weight in Europeans. The current study suggests that the association of this locus with birth weight is secondary to an effect on fat as opposed to lean body mass.
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
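The winning linking hypothesis here, a learned weighted sum of mean firing rates, can be sketched as a regularized linear readout. The Poisson "responses" and the embedded label signal below are synthetic stand-ins for IT data, and ridge regression is one simple way to learn the weights; the study's exact decoder may differ.

```python
import numpy as np

def fit_weighted_sum_decoder(rates, labels, ridge=1.0):
    """Learn a weighted sum of mean firing rates that predicts a binary
    object label (ridge-regularized least squares with a bias term)."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    y = 2.0 * labels - 1.0
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
rates = rng.poisson(5.0, size=(200, 50)).astype(float)   # trials x neurons
labels = (rates[:, :5].mean(axis=1) > 5.0).astype(float) # toy label signal
w = fit_weighted_sum_decoder(rates, labels)
pred = np.hstack([rates, np.ones((200, 1))]) @ w > 0.0
print((pred == labels.astype(bool)).mean())              # training accuracy
```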
NASA Astrophysics Data System (ADS)
Tohara, Takashi; Liang, Haichao; Tanaka, Hirofumi; Igarashi, Makoto; Samukawa, Seiji; Endo, Kazuhiko; Takahashi, Yasuo; Morie, Takashi
2016-03-01
A nanodisk array connected with a fin field-effect transistor is fabricated and analyzed for spiking neural network applications. This nanodevice performs weighted sums in the time domain using rising slopes of responses triggered by input spike pulses. The nanodisk arrays, which act as a resistance of several giga-ohms, are fabricated using a self-assembly bio-nano-template technique. Weighted sums are achieved with an energy dissipation on the order of 1 fJ, where the number of inputs can be more than one hundred. This amount of energy is several orders of magnitude lower than that of conventional digital processors.
Complex networks in the Euclidean space of communicability distances
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiency of spatial uses. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
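A direct implementation of this distance using the communicability matrix G = exp(A), in which walks of length k are weighted by 1/k!: the squared distance between nodes p and q is G_pp + G_qq - 2G_pq. A short Python sketch on a toy graph:

```python
import numpy as np
from scipy.linalg import expm

def communicability_distance(A):
    """Pairwise communicability distances: with G = exp(A),
    xi_pq^2 = G_pp + G_qq - 2 G_pq (self-returning walks minus walks
    going between p and q)."""
    G = expm(np.asarray(A, dtype=float))
    g = np.diag(G)
    return np.sqrt(np.maximum(g[:, None] + g[None, :] - 2.0 * G, 0.0))

A = np.array([[0, 1, 0, 1],   # adjacency matrix of a 4-cycle
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(communicability_distance(A).round(3))
```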
Brookes, V J; Hernández-Jover, M; Neslo, R; Cowled, B; Holyoake, P; Ward, M P
2014-01-01
We describe stakeholder preference modelling using a combination of new and recently developed techniques to elicit criterion weights to incorporate into a multi-criteria decision analysis framework to prioritise exotic diseases for the pig industry in Australia. Australian pig producers were requested to rank disease scenarios comprising nine criteria in an online questionnaire. Parallel coordinate plots were used to visualise stakeholder preferences, which aided identification of two diverse groups of stakeholders - one group prioritised diseases with impacts on livestock, and the other group placed more importance on diseases with zoonotic impacts. Probabilistic inversion was used to derive weights for the criteria to reflect the values of each of these groups, modelling their choice using a weighted sum value function. Validation of weights against stakeholders' rankings for scenarios based on real diseases showed that the elicited criterion weights for the group who prioritised diseases with livestock impacts were a good reflection of their values, indicating that the producers were able to consistently infer impacts from the disease information in the scenarios presented to them. The highest weighted criteria for this group were attack rate and length of clinical disease in pigs, and market loss to the pig industry. The values of the stakeholders who prioritised zoonotic diseases were less well reflected by validation, indicating either that the criteria were inadequate to consistently describe zoonotic impacts, that the weighted sum model did not describe stakeholder choice, or that preference modelling for zoonotic diseases should be undertaken separately from livestock diseases. Limitations of this study included sampling bias, as the group participating was not necessarily representative of all pig producers in Australia, and response bias within this group. The method used to elicit criterion weights in this study ensured value trade-offs between a range of potential impacts, and the weights were implicitly related to the scale of measurement of disease criteria. Validation of the results of the criterion weights against real diseases - a step rarely used in MCDA - added scientific rigour to the process. The study demonstrated that these are useful techniques for elicitation of criterion weights for disease prioritisation by stakeholders who are not disease experts. Preference modelling for zoonotic diseases needs further characterisation in this context.
2014-12-01
[Thesis excerpt with acronym-list residue: Primary Military Occupational Specialty; PRO, Proficiency; Q-Q, Quantile-Quantile; RSS, Residual Sum of Squares; SI, Shop Information; T&R, Training and...] ...construct multivariate linear regression models to estimate Marines' Computed Tier Score and time to achieve E-4 based on their individual personal... Science (GS) score, ASVAB Mathematics Knowledge (MK) score, ASVAB Paragraph Comprehension (PC) score, weight, and whether a Marine receives a weight...
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
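A minimal sketch of the frontier identification step, assuming per-target errors are stored with lower values meaning better fits; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def pareto_frontier(errors):
    """Return indices of non-dominated input sets.

    errors: (n_sets, n_targets) array of calibration-target errors
    (lower is better). A set is on the Pareto frontier if no other set
    is <= on every target and strictly < on at least one.
    """
    keep = []
    for i in range(errors.shape[0]):
        dominated = np.any(
            np.all(errors <= errors[i], axis=1)
            & np.any(errors < errors[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

errors = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_frontier(errors))  # [0, 1, 2]; the last set is dominated
```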
Effects of Preseason Training on the Sleep Characteristics of Professional Rugby League Players.
Thornton, Heidi R; Delaney, Jace A; Duthie, Grant M; Dascombe, Ben J
2018-02-01
To investigate the influence of daily and exponentially weighted moving training loads on subsequent nighttime sleep. Sleep of 14 professional rugby league athletes competing in the National Rugby League was recorded using wristwatch actigraphy. Physical demands were quantified using GPS technology, including total distance, high-speed distance, acceleration/deceleration load (SumAccDec; AU), and session rating of perceived exertion (AU). Linear mixed models determined effects of acute (daily) and subacute (3- and 7-d) exponentially weighted moving averages (EWMA) on sleep. Higher daily SumAccDec was associated with increased sleep efficiency (effect-size correlation; ES = 0.15; ±0.09) and sleep duration (ES = 0.12; ±0.09). Greater 3-d EWMA SumAccDec was associated with increased sleep efficiency (ES = 0.14; ±0.09) and an earlier bedtime (ES = 0.14; ±0.09). An increase in 7-d EWMA SumAccDec was associated with heightened sleep efficiency (ES = 0.15; ±0.09) and earlier bedtimes (ES = 0.15; ±0.09). The direction of the associations between training loads and sleep varied, but the strongest relationships showed that higher training loads increased various measures of sleep. Practitioners should be aware of the increased requirement for sleep during intensified training periods, using this information in the planning and implementation of training and individualized recovery modalities.
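The acute and subacute EWMA loads referred to above can be reproduced under the common sports-science convention lambda = 2/(N+1), seeding with the first day's value; this convention is an assumption, and the load values below are hypothetical.

```python
def ewma(loads, n_days):
    """Exponentially weighted moving average of a daily load series.

    Uses lambda = 2 / (n_days + 1), a common choice for acute (3-d) and
    subacute (7-d) training-load smoothing; the first day seeds the average.
    """
    lam = 2.0 / (n_days + 1)
    out = [loads[0]]
    for x in loads[1:]:
        out.append(lam * x + (1.0 - lam) * out[-1])
    return out

daily_sum_acc_dec = [310, 295, 402, 150, 0, 380, 415]  # hypothetical AU
print(ewma(daily_sum_acc_dec, 3))
print(ewma(daily_sum_acc_dec, 7))
```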
On the number of infinite geodesics and ground states in disordered systems
NASA Astrophysics Data System (ADS)
Wehr, Jan
1997-04-01
We study first-passage percolation models and their higher dimensional analogs—models of surfaces with random weights. We prove that under very general conditions the number of lines or, in the second case, hypersurfaces which locally minimize the sum of the random weights is with probability one equal to 0 or with probability one equal to +∞. As corollaries we show that in any dimension d≥2 the number of ground states of an Ising ferromagnet with random coupling constants equals (with probability one) 2 or +∞. Proofs employ simple large-deviation estimates and ergodic arguments.
Roles of antinucleon degrees of freedom in the relativistic random phase approximation
NASA Astrophysics Data System (ADS)
Kurasawa, Haruki; Suzuki, Toshio
2015-11-01
The roles of antinucleon degrees of freedom in the relativistic random phase approximation (RPA) are investigated. The energy-weighted sum of the RPA transition strengths is expressed in terms of the double commutator between the excitation operator and the Hamiltonian, as in nonrelativistic models. The commutator, however, should not be calculated in the usual way in the local field theory, because, otherwise, the sum vanishes. The sum value obtained correctly from the commutator is infinite, owing to the Dirac sea. Most of the previous calculations take into account only some of the nucleon-antinucleon states, in order to avoid divergence problems. As a result, RPA states with negative excitation energy appear, which make the sum value vanish. Moreover, disregarding the divergence changes the sign of nuclear interactions in the RPA equation that describes the coupling of the nucleon particle-hole states with the nucleon-antinucleon states. Indeed, the excitation energies of the spurious state and giant monopole states in the no-sea approximation are dominated by these unphysical changes. The baryon current conservation can be described without touching the divergence problems. A schematic model with separable interactions is presented, which makes the structure of the relativistic RPA transparent.
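For reference, the nonrelativistic energy-weighted sum alluded to above takes the schematic double-commutator form, for a Hermitian excitation operator F:

```latex
% Energy-weighted sum rule via the double commutator (schematic):
S_1 \equiv \sum_n (E_n - E_0)\,\bigl|\langle n | F | 0 \rangle\bigr|^2
    = \tfrac{1}{2}\,\langle 0 |\,[\,F, [\,H, F\,]\,]\,| 0 \rangle
```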
Probabilistic model of bridge vehicle loads in port area based on in-situ load testing
NASA Astrophysics Data System (ADS)
Deng, Ming; Wang, Lei; Zhang, Jianren; Wang, Rei; Yan, Yanhong
2017-11-01
Vehicle load is an important factor affecting the safety and usability of bridges. A statistical analysis is carried out in this paper to investigate the vehicle load data of the Tianjin Haibin highway in Tianjin port, China, collected by a Weigh-in-Motion (WIM) system. Following this, the effect of the vehicle load on a test bridge is calculated and then compared with the result calculated according to HL-93 (AASHTO LRFD). Results show that the overall vehicle load follows a weighted sum (mixture) of four normal distributions. The maximum vehicle load during the design reference period follows a type I extreme-value distribution. The vehicle load effect also follows a weighted sum of four normal distributions, and the standard value of the vehicle load is recommended as 1.8 times the value calculated according to HL-93.
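A hedged sketch of the distribution-fitting step: a four-component Gaussian mixture fit with scikit-learn on synthetic weigh-in-motion data. The class means and counts below are invented for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical WIM gross-vehicle-weight sample (kN): four traffic classes
loads = np.concatenate([
    rng.normal(30, 5, 4000),    # cars / light vans
    rng.normal(120, 15, 2500),  # two-axle trucks
    rng.normal(250, 25, 2000),  # container trucks
    rng.normal(420, 30, 1500),  # heavy port vehicles
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=4, random_state=0).fit(loads)
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m:6.1f} kN  sd={v**0.5:5.1f} kN")
```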
Sum Rule for a Schiff-Like Dipole Moment
NASA Astrophysics Data System (ADS)
Raduta, A. A.; Budaca, R.
The energy-weighted sum rule for an electric dipole transition operator of a Schiff type differs from the Thomas-Reiche-Kuhn (TRK) sum rule by several corrective terms which depend on the number of system components, N. For illustration the formalism was applied to the case of Na clusters. One concludes that the random phase approximation (RPA) results for Na clusters obey the modified TRK sum rule.
A Solution to Weighted Sums of Squares as a Square
ERIC Educational Resources Information Center
Withers, Christopher S.; Nadarajah, Saralees
2012-01-01
For n = 1, 2, ... , we give a solution (x[subscript 1], ... , x[subscript n], N) to the Diophantine integer equation [image omitted]. Our solution has N of the form n!, in contrast to other solutions in the literature that are extensions of Euler's solution for N, a sum of squares. More generally, for given n and given integer weights m[subscript…
Coutand, Catherine; Chevolot, Malia; Lacointe, André; Rowe, Nick; Scotti, Ivan
2010-02-01
In rain forests, sapling survival is highly dependent on the regulation of trunk slenderness (height/diameter ratio): shade-intolerant species have to grow in height as fast as possible to reach the canopy but also have to withstand mechanical loadings (wind and their own weight) to avoid buckling. Recent studies suggest that mechanosensing is essential to control tree dimensions and stability-related morphogenesis. Differences in species slenderness have been observed among rainforest trees; the present study thus investigates whether species with different slenderness and growth habits exhibit differences in mechanosensitivity. Recent studies have led to a model of mechanosensing (sum-of-strains model) that predicts a quantitative relationship between the applied sum of longitudinal strains and the plant's responses in the case of a single bending. Saplings of five different neotropical species (Eperua falcata, E. grandiflora, Tachigali melinonii, Symphonia globulifera and Bauhinia guianensis) were subjected to a regimen of controlled mechanical loading phases (bending) alternating with still phases over a period of 2 months. Mechanical loading was controlled in terms of strains and the five species were subjected to the same range of sum of strains. The application of the sum-of-strain model led to a dose-response curve for each species. Dose-response curves were then compared between tested species. The model of mechanosensing (sum-of-strain model) applied in the case of multiple bending as long as the bending frequency was low. A comparison of dose-response curves for each species demonstrated differences in the stimulus threshold, suggesting two groups of responses among the species. Interestingly, the liana species B. guianensis exhibited a higher threshold than other Leguminosae species tested. This study provides a conceptual framework to study variability in plant mechanosensing and demonstrated interspecific variability in mechanosensing.
NASA Astrophysics Data System (ADS)
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands, and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
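The paper's modified moment matching is not reproduced here; as a baseline, the classic Fenton-Wilkinson approximation matches the first two moments of the weighted sum (summands assumed independent) to a single lognormal. Names and portfolio values below are illustrative.

```python
import numpy as np

def fenton_wilkinson(mu, sigma, w):
    """Approximate S = sum_i w_i * exp(N(mu_i, sigma_i^2)) by one lognormal.

    Classic Fenton-Wilkinson: match E[S] and E[S^2] of the weighted sum of
    independent lognormals to the moments of a single lognormal.
    """
    mu, sigma, w = map(np.asarray, (mu, sigma, w))
    m1 = np.sum(w * np.exp(mu + sigma**2 / 2))                       # E[S]
    var = np.sum(w**2 * np.exp(2*mu + sigma**2) * (np.exp(sigma**2) - 1))
    m2 = var + m1**2                                                 # E[S^2]
    sigma_s2 = np.log(m2 / m1**2)
    mu_s = np.log(m1) - sigma_s2 / 2
    return mu_s, np.sqrt(sigma_s2)

# Hypothetical equally weighted 10-asset portfolio
mu_s, sig_s = fenton_wilkinson(np.zeros(10), np.full(10, 0.4), np.full(10, 0.1))
print(mu_s, sig_s)
```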
NASA Technical Reports Server (NTRS)
Hinson, E. W.
1981-01-01
The preliminary analysis and data analysis system development for the shuttle upper atmosphere mass spectrometer (SUMS) experiment are discussed. The SUMS experiment is designed to provide free stream atmospheric density, pressure, temperature, and mean molecular weight for the high altitude, high Mach number region.
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Motkova, A. V.
2018-01-01
A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
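A toy stand-in for the winning linking hypothesis: a linear readout whose decision value is a learned weighted sum of (simulated) firing rates. The data, class structure, and dimensions are invented for illustration, not the study's recordings.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical stand-in for mean firing rates: trials x neurons
rates = rng.poisson(5.0, size=(600, 200)).astype(float)
labels = rng.integers(0, 2, size=600)
rates[labels == 1, :20] += 1.5   # a class-selective subpopulation

X_tr, X_te, y_tr, y_te = train_test_split(rates, labels, random_state=0)
clf = RidgeClassifier().fit(X_tr, y_tr)   # decision = weighted sum of rates
print("held-out accuracy:", clf.score(X_te, y_te))
```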
Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower
NASA Astrophysics Data System (ADS)
Fujii, Kenzo; Yamamoto, Toru
In atmospheric distillation processes, stabilization is required in order to optimize the crude-oil composition that corresponds to product market conditions. However, the process control systems sometimes fall into unstable states when unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena, which remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using a GMDH (Group Method of Data Handling) network, one type of network model. With the GMDH network, the model structure can be determined systematically. However, the least squares method has commonly been utilized in determining the weight coefficients (model parameters), and estimation accuracy is not always satisfactory, because the sum of squared errors between the measured values and estimates is evaluated. Therefore, instead of evaluating the sum of squared errors, the sum of absolute values of errors is introduced, and the Levenberg-Marquardt method is employed to determine the model parameters. The effectiveness of the proposed method is evaluated by foaming prediction in the crude-oil switching operation in the atmospheric distillation process.
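SciPy's Levenberg-Marquardt mode does not accept robust losses, so the sketch below substitutes a soft-L1 loss with the default trust-region solver to approximate a sum-of-absolute-errors fit; treat it as an analogy to the paper's procedure, not a reproduction. Data and model are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 80)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)
y[::15] += 8.0  # outliers mimicking unusual process disturbances

def residuals(p):
    # Residuals of a simple linear soft-sensor stand-in
    return p[0] * x + p[1] - y

fit_l2 = least_squares(residuals, x0=[1.0, 0.0])                   # sum of squares
fit_l1 = least_squares(residuals, x0=[1.0, 0.0], loss="soft_l1")   # ~ sum of |errors|
print("L2 fit:", fit_l2.x, " robust fit:", fit_l1.x)
```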
Multiple-path model of spectral reflectance of a dyed fabric.
Rogers, Geoffrey; Dalloz, Nicolas; Fournel, Thierry; Hebert, Mathieu
2017-05-01
Experimental results are presented of the spectral reflectance of a dyed fabric as analyzed by a multiple-path model of reflection. The multiple-path model provides simple analytic expressions for reflection and transmission of turbid media by applying the Beer-Lambert law to each path through the medium and summing over all paths, each path weighted by its probability. The path-length probability is determined by a random-walk analysis. The experimental results presented here show excellent agreement with predictions made by the model.
Wiley, A S; Lubree, H G; Joshi, S M; Bhat, D S; Ramdas, L V; Rao, A S; Thuse, N V; Deshpande, V U; Yajnik, C S
2016-04-01
Indian newborns have been described as 'thin-fat' compared with European babies, but little is known about how this phenotype relates to the foetal growth factor IGF-I (insulin-like growth factor I) or its binding protein IGFBP-3. To assess cord IGF-I and IGFBP-3 concentrations in a sample of Indian newborns and evaluate their associations with neonatal adiposity and maternal factors. A prospective cohort study of 146 pregnant mothers with dietary, anthropometric and biochemical measurements at 28 and 34 weeks gestation. Neonatal weight, length, skin-folds, circumferences, and cord blood IGF-I and IGFBP-3 concentrations were measured at birth. Average cord IGF-I and IGFBP-3 concentrations were 46.6 (2.2) and 1269.4 (41) ng mL(-1) , respectively. Girls had higher mean IGF-I than boys (51.4 ng mL(-1) vs. 42.9 ng mL(-1) ; P < 0.03), but IGFBP-3 did not differ. Cord IGF-I was positively correlated with all birth size measures except length, and most strongly with neonatal sum-of-skin-folds (r = 0.50, P < 0.001). IGFBP-3 was positively correlated with ponderal index, sum-of-skin-folds and placenta weight (r = 0.21, 0.19, 0.16, respectively; P < 0.05). Of maternal demographic and anthropometric characteristics, only parity was correlated with cord IGF-I (r = 0.27, P < 0.001). Among dietary behaviours, maternal daily milk intake at 34 weeks gestation predicted higher cord IGF-I compared to no-milk intake (51.8 ng mL(-1) vs. 36.5 ng mL(-1) , P < 0.01) after controlling for maternal characteristics, placental weight, and newborn gestational age, sex, weight and sum-of-skin-folds. Sum-of-skin-folds were positively associated with cord IGF-I in this multivariate model (57.3 ng mL(-1) vs. 35.1 ng mL(-1) for highest and lowest sum-of skin-fold quartile, P < 0.001). IGFBP-3 did not show significant relationships with these covariates. In this Indian study, cord IGF-I concentration was associated with greater adiposity among newborns. Maternal milk intake may play a role in this relationship. © 2015 World Obesity.
An evaluation of ozone exposure metrics for a seasonally drought-stressed ponderosa pine ecosystem.
Panek, Jeanne A; Kurpius, Meredith R; Goldstein, Allen H
2002-01-01
Ozone stress has become an increasingly significant factor in cases of forest decline reported throughout the world. Current metrics to estimate ozone exposure for forest trees are derived from atmospheric concentrations and assume that the forest is physiologically active at all times of the growing season. This may be inaccurate in regions with a Mediterranean climate, such as California and the Pacific Northwest, where peak physiological activity occurs early in the season to take advantage of high soil moisture and does not correspond to peak ozone concentrations. It may also misrepresent ecosystems experiencing non-average climate conditions, such as drought years. We compared direct measurements of ozone flux into a ponderosa pine canopy with a suite of the most common ozone exposure metrics to determine which best correlated with actual ozone uptake by the forest. Of the metrics we assessed, SUM0 (the sum of all daytime ozone concentrations > 0) best corresponded to ozone uptake by ponderosa pine; however, the correlation was only strong at times when the stomata were unconstrained by site moisture conditions. In the early growing season (May and June), SUM0 was an adequate metric for forest ozone exposure. Later in the season, when stomatal conductance was limited by drought, SUM0 overestimated ozone uptake. A better metric for seasonally drought-stressed forests would be one that incorporates forest physiological activity, either through mechanistic modeling, by weighting ozone concentrations by stomatal conductance, or by weighting concentrations by site moisture conditions.
Zhao, Tanfeng; Zhang, Qingyou; Long, Hailin; Xu, Lu
2014-01-01
In order to explore atomic asymmetry and molecular chirality in 2D space, benzenoids composed of 3 to 11 hexagons in 2D space were enumerated in our laboratory. These benzenoids are regarded as planar connected polyhexes and have no internal holes; that is, their internal regions are filled with hexagons. The produced dataset was composed of 357,968 benzenoids, including more than 14 million atoms. Rather than simply labeling the huge number of atoms as being either symmetric or asymmetric, this investigation aims at exploring a quantitative graph theoretical descriptor of atomic asymmetry. Based on the particular characteristics in the 2D plane, we suggested the weighted atomic sum as the descriptor of atomic asymmetry. This descriptor is measured by circulating around the molecule going in opposite directions. The investigation demonstrates that the weighted atomic sums are superior to the previously reported quantitative descriptor, atomic sums. The investigation of quantitative descriptors also reveals that the most asymmetric atom is in a structure with a spiral ring with the convex shape going in clockwise direction and concave shape going in anticlockwise direction from the atom. Based on weighted atomic sums, a weighted F index is introduced to quantitatively represent molecular chirality in the plane, rather than merely regarding benzenoids as being either chiral or achiral. By validating with enumerated benzenoids, the results indicate that the weighted F indexes were in accordance with their chiral classification (achiral or chiral) over the whole benzenoids dataset. Furthermore, weighted F indexes were superior to previously available descriptors. Benzenoids possess a variety of shapes and can be extended to practically represent any shape in 2D space—our proposed descriptor has thus the potential to be a general method to represent 2D molecular chirality based on the difference between clockwise and anticlockwise sums around a molecule. PMID:25032832
NASA Astrophysics Data System (ADS)
S. Al-Kaltakchi, Musab T.; Woo, Wai L.; Dlay, Satnam; Chambers, Jonathon A.
2017-12-01
In this study, a speaker identification system is considered, consisting of a feature extraction stage which utilizes both power normalized cepstral coefficients (PNCCs) and Mel frequency cepstral coefficients (MFCCs). Normalization is applied by employing cepstral mean and variance normalization (CMVN) and feature warping (FW), together with acoustic modeling using a Gaussian mixture model-universal background model (GMM-UBM). The main contributions are comprehensive evaluations of the effect of both additive white Gaussian noise (AWGN) and non-stationary noise (NSN) (with and without a G.712 type handset) upon identification performance. In particular, three NSN types with varying signal-to-noise ratios (SNRs) were tested, corresponding to street traffic, a bus interior, and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely mean, maximum, and linear weighted sum fusion. The databases employed were TIMIT, SITW, and NIST 2008; 120 speakers were selected from each database to yield 3600 speech utterances. As recommendations from the study, mean fusion is found to yield the overall best performance in terms of speaker identification accuracy (SIA) with noisy speech, whereas linear weighted sum fusion is overall best for original database recordings.
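A minimal sketch of the three late-fusion rules on per-speaker scores; the score values and the weight are hypothetical.

```python
import numpy as np

def fuse_scores(pncc_scores, mfcc_scores, method="mean", w=0.6):
    """Late fusion of per-speaker scores from two front ends.

    method: 'mean', 'max', or 'weighted' (weight w on the first system).
    Returns the index of the identified speaker.
    """
    s1, s2 = np.asarray(pncc_scores), np.asarray(mfcc_scores)
    if method == "mean":
        fused = (s1 + s2) / 2.0
    elif method == "max":
        fused = np.maximum(s1, s2)
    else:  # linear weighted sum
        fused = w * s1 + (1.0 - w) * s2
    return int(np.argmax(fused))

pncc = [-12.1, -9.8, -11.0]   # hypothetical scores for 3 enrolled speakers
mfcc = [-11.5, -10.2, -9.9]
print(fuse_scores(pncc, mfcc, "mean"), fuse_scores(pncc, mfcc, "weighted"))
```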
Xu, Gongxian; Liu, Ying; Gao, Qunwang
2016-02-10
This paper deals with the multi-objective optimization of the continuous bio-dissimilation process of glycerol to 1,3-propanediol. In order to maximize the production rate of 1,3-propanediol, maximize the conversion rate of glycerol to 1,3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of the by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. These multi-objective optimization problems are then solved using the weighted-sum and normal-boundary intersection methods, respectively. Both the Pareto filter algorithm and removal criteria are used to remove the non-Pareto-optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto-optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto-optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.
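To make the weighted-sum scalarization concrete, the toy sweep below traces a two-objective trade-off by minimizing w·(−f1) + (1−w)·f2 over a grid of weights. The objective functions and the operating variable are invented stand-ins, and the closing comment notes the known limitation consistent with the authors' finding.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective: maximize productivity f1, minimize by-product f2,
# over a scalar operating variable u in [0, 1] (stand-in for dilution rate).
f1 = lambda u: -(u - 0.2) ** 2 + 1.0   # production rate (to maximize)
f2 = lambda u: u ** 2                  # by-product level (to minimize)

front = []
for w in np.linspace(0.0, 1.0, 11):
    # Weighted-sum scalarization: minimize w*(-f1) + (1-w)*f2
    res = minimize(lambda u: w * (-f1(u[0])) + (1 - w) * f2(u[0]),
                   x0=[0.5], bounds=[(0.0, 1.0)])
    u = res.x[0]
    front.append((round(f1(u), 3), round(f2(u), 3)))
print(front)  # only convex parts of the Pareto front are reachable this way
```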
ERIC Educational Resources Information Center
Kahana, Michael J.; Sederberg, Per B.; Howard, Marc W.
2008-01-01
The temporal context model posits that search through episodic memory is driven by associations between the multiattribute representations of items and context. Context, in turn, is a recency weighted sum of previous experiences or memories. Because recently processed items are most similar to the current representation of context, M. Usher, E. J.…
Multi Criteria Evaluation Module for RiskChanges Spatial Decision Support System
NASA Astrophysics Data System (ADS)
Olyazadeh, Roya; Jaboyedoff, Michel; van Westen, Cees; Bakker, Wim
2015-04-01
The Multi-Criteria Evaluation (MCE) module is one of the five modules of the RiskChanges spatial decision support system. The RiskChanges web-based platform aims to analyze changes in hydro-meteorological risk and provides tools for selecting the best risk reduction alternative. It was developed under the CHANGES framework (changes-itn.eu) and the INCREO project (increo-fp7.eu). The MCE tool helps decision makers and spatial planners to evaluate, sort, and rank decision alternatives. Users can choose among different indicators that are defined within the system using risk and cost-benefit analysis results, and can also add their own indicators. The system then standardizes and prioritizes them. Finally, the best decision alternative is selected using the weighted sum model (WSM). The purpose of this work is to facilitate the use of MCE for analyzing changing risk over time under different scenarios and future years, by putting group decision making into practice and comparing the results in numeric and graphical views within the system. We believe that this study helps decision makers to achieve the best solution by expressing their preferences for strategies under future scenarios. Keywords: Multi-Criteria Evaluation, Spatial Decision Support System, Weighted Sum Model, Natural Hazard Risk Management
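A minimal weighted sum model (WSM) ranking sketch, assuming min-max standardization of benefit and cost indicators; the indicator values and weights are invented.

```python
import numpy as np

def wsm_rank(scores, weights, benefit):
    """Weighted sum model over alternatives x criteria.

    scores:  (n_alternatives, n_criteria) raw indicator values
    weights: criterion weights (assumed to sum to 1)
    benefit: True where larger is better; cost criteria are inverted.
    """
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(axis=0), s.max(axis=0)
    norm = (s - lo) / np.where(hi > lo, hi - lo, 1.0)   # standardize to [0, 1]
    cost = ~np.asarray(benefit)
    norm[:, cost] = 1.0 - norm[:, cost]                 # invert cost criteria
    totals = norm @ np.asarray(weights)
    return np.argsort(-totals), totals

# Three risk-reduction alternatives; criteria: risk reduction (benefit), cost (cost)
order, totals = wsm_rank([[0.8, 120], [0.5, 40], [0.9, 300]],
                         weights=[0.6, 0.4], benefit=[True, False])
print(order, totals)
```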
Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk
NASA Astrophysics Data System (ADS)
Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi
2016-09-01
Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r (0 < r ≤ 1). We study the average receiving time (ART) of a weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of the edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) for all nodes absorbed at a hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on network order. Thus, the weighted polygon Koch networks are more efficient than extended Koch networks in receiving information. Finally, compared with previous results (i.e., for Koch networks and weighted Koch networks), we find that our models are more general.
Quantitative Structure Retention Relationships of Polychlorinated Dibenzodioxins and Dibenzofurans
1991-08-01
...be a projection onto the X-Y plane. The algorithm for this calculation can be found in Stouch and Jurs (22), but was further refined by Rohrbaugh and... through-space distances. [Descriptor-table residue: WPSA2, weighted positive charged surface area; MOMH2, second major moment of inertia with hydrogens attached; CSTR3, sum...] ...of the models. The robust regression analysis method calculates a regression model using a least median squares algorithm, which is not as susceptible...
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon, and nutrients in crop, grassland, forest, and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional "trial and error" approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residuals (the differences between measured and modelled outputs) by up to 67%. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis provides important insights into the model structure, offering guidance for model improvement.
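PEST's objective is a weighted sum of squared residuals; a one-function sketch follows, with invented observation values.

```python
import numpy as np

def weighted_ssr(observed, simulated, weights):
    """PEST-style objective: Phi = sum_i (w_i * (obs_i - sim_i))^2."""
    r = (np.asarray(observed) - np.asarray(simulated)) * np.asarray(weights)
    return float(np.sum(r ** 2))

obs = [1.2, 0.8, 2.5]   # hypothetical daily N2O fluxes
sim = [1.0, 1.1, 2.0]
w = [1.0, 1.0, 0.5]     # down-weight a less certain observation
print(weighted_ssr(obs, sim, w))
```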
NASA Astrophysics Data System (ADS)
Hetényi, Balázs
2014-03-01
The Drude weight, the quantity which distinguishes metals from insulators, is proportional to the second derivative of the ground state energy with respect to a flux at zero flux. The same expression also appears in the definition of the Meissner weight, the quantity which indicates superconductivity, as well as in the definition of non-classical rotational inertia of bosonic superfluids. It is shown that the difference between these quantities depends on the interpretation of the average momentum term, which can be understood as the expectation value of the total momentum (Drude weight), the sum of the expectation values of single momenta (rotational inertia of a superfluid), or the sum over expectation values of momentum pairs (Meissner weight). This distinction appears naturally when the current from which the particular transport quantity is derived is cast in terms of shift operators.
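Schematically, the flux-derivative expression common to the three quantities reads as follows (constants and volume factors suppressed; see the paper for the precise prefactors):

```latex
% Kohn-style flux derivative of the ground-state energy E_0(\Phi)
% of a system threaded by flux \Phi (schematic, prefactors omitted):
D \;\propto\; \left.\frac{\partial^{2} E_{0}(\Phi)}{\partial \Phi^{2}}\right|_{\Phi=0}
```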
Prakash, J; Srinivasan, K
2009-07-01
In this paper, the authors represent the nonlinear system as a family of local linear state-space models; local PID controllers are designed on the basis of the linear models, and the weighted sum of the outputs from the local PID controllers (a nonlinear PID controller) is used to control the nonlinear process. Further, a nonlinear model predictive controller using the family of local linear state-space models (F-NMPC) is developed. The effectiveness of the proposed control schemes is demonstrated on a CSTR process, which exhibits dynamic nonlinearity.
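A hedged sketch of the weighted-sum-of-local-PID idea; the gains, weights, and validity-weighting scheme are illustrative, not the authors' tuning.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0
        self.prev_e = 0.0

    def step(self, e):
        # Standard positional PID on the tracking error e
        self.i += e * self.dt
        d = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d

def nonlinear_pid(controllers, weights, error):
    """Weighted sum of local PID outputs; weights represent the (normalized)
    validity of each local linear model at the current operating point."""
    total = sum(weights)
    return sum(w / total * c.step(error) for c, w in zip(controllers, weights))

# Two local controllers tuned for low/high operating regimes (hypothetical gains)
ctrls = [PID(2.0, 0.5, 0.1, dt=0.1), PID(0.8, 0.2, 0.05, dt=0.1)]
print(nonlinear_pid(ctrls, weights=[0.7, 0.3], error=1.5))
```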
ECON-KG: A Code for Computation of Electrical Conductivity Using Density Functional Theory
2017-10-01
...is presented. Details of the implementation and instructions for execution are presented, and an example calculation of the frequency-dependent... shown to depend on carbon content, and electrical conductivity models have become a requirement for input into continuum-level simulations being... The frequency-dependent electrical conductivity is computed as a weighted sum over k-points: σ(ω) = Σ_k W(k) σ(k, ω), (2) where W(k) is the k-point weight.
Auditory alert systems with enhanced detectability
NASA Technical Reports Server (NTRS)
Begault, Durand R. (Inventor)
2008-01-01
Methods and systems for distinguishing an auditory alert signal from a background of one or more non-alert signals. In a first embodiment, a prefix signal, associated with an existing alert signal, is provided that has a signal component in each of three or more selected frequency ranges, with each component at a level at least 3-10 dB above the estimated background (non-alert) level in that frequency range. The alert signal may be chirped within one or more frequency bands. In another embodiment, an alert signal moves, continuously or discontinuously, from one location to another over a short time interval, introducing a perceived spatial modulation or jitter. In another embodiment, a weighted sum of background signals adjacent to each ear is formed, and the weighted sum is delivered to each ear as a uniform background; a distinguishable alert signal is presented on top of this weighted-sum signal at one ear, or distinguishable first and second alert signals are presented at the two ears of a subject.
Multiple Interactive Pollutants in Water Quality Trading
NASA Astrophysics Data System (ADS)
Sarang, Amin; Lence, Barbara J.; Shamsai, Abolfazl
2008-10-01
Efficient environmental management calls for the consideration of multiple pollutants, for which two main types of transferable discharge permit (TDP) program have been described: separate permits that manage each pollutant individually in separate markets, with each permit based on the quantity of the pollutant or its environmental effects, and weighted-sum permits that aggregate several pollutants as a single commodity to be traded in a single market. In this paper, we perform a mathematical analysis of TDP programs for multiple pollutants that jointly affect the environment (i.e., interactive pollutants) and demonstrate the practicality of this approach for cost-efficient maintenance of river water quality. For interactive pollutants, the relative weighting factors are functions of the water quality impacts, marginal damage function, and marginal treatment costs at optimality. We derive the optimal set of weighting factors required by this approach for important scenarios for multiple interactive pollutants and propose using an analytical elasticity of substitution function to estimate damage functions for these scenarios. We evaluate the applicability of this approach using a hypothetical example that considers two interactive pollutants. We compare the weighted-sum permit approach for interactive pollutants with individual permit systems and TDP programs for multiple additive pollutants. We conclude by discussing practical considerations and implementation issues that result from the application of weighted-sum permit programs.
Minimizing the Sum of Completion Times with Resource Dependent Times
NASA Astrophysics Data System (ADS)
Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe
2008-10-01
We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria: the first is the sum of completion times, and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, up to this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as mobile computers, phones, and GPS devices, in order to prolong battery duration.
On the Hardness of Subset Sum Problem from Different Intervals
NASA Astrophysics Data System (ADS)
Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke
The subset sum problem, often called the knapsack problem, is known to be NP-hard, and there are several cryptosystems based on it. Assuming an oracle for the shortest vector problem of a lattice, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the "density" of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals, and analyse the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when the data size of public keys is reduced.
Warburton, William K.; Momayezi, Michael
2006-06-20
A method and apparatus for processing step-like output signals (primary signals) generated by non-ideal, for example, nominally single-pole ("N-1P") devices. An exemplary method includes creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point, summing the secondary signals reaching the signal summation point after propagating along the signal paths to provide a summed signal, performing a filtering or delaying operation in at least one of said signal paths so that the secondary signals reaching said summing point have a defined time correlation with respect to one another, applying a set of weighting coefficients to the secondary signals propagating along said signal paths, and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.
Mehrotra, Sanjay; Kim, Kibaek
2011-12-01
We consider the problem of outcome-based budget allocations to chronic disease prevention programs across the United States (US) to achieve greater geographical healthcare equity. We use the Diabetes Prevention and Control Programs (DPCP) of the Centers for Disease Control and Prevention (CDC) as an example. We present a multi-criteria robust weighted sum model for such multi-criteria decision making in a group decision setting. Principal component analysis and inverse linear programming techniques are presented and used to study the actual 2009 budget allocation by the CDC. Our results show that the CDC budget allocation process for the DPCPs is not likely model based. In our empirical study, the relative weights for different prevalence and comorbidity factors and the corresponding budgets obtained under different weight regions are discussed. Parametric analysis suggests that money should be allocated to states to promote diabetes education and to increase patient-healthcare provider interactions in order to reduce disparity across the US.
On the Latent Variable Interpretation in Sum-Product Networks.
Peharz, Robert; Gens, Robert; Pernkopf, Franz; Domingos, Pedro
2017-10-01
One of the central themes in sum-product networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation yields an increased syntactic and semantic structure, and allows the application of the EM algorithm and efficient MPE inference. In the literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach is in conflict with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights, and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in the literature was never proven to be correct. We show that it is indeed a correct algorithm when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets.
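A small numeric illustration of the sum-node-as-latent-variable reading: with normalized weights, a sum node is a mixture over a latent variable Z, and the EM E-step needs the posterior P(Z=c | x). The values below are invented.

```python
import numpy as np

def sum_node(child_values, weights):
    """Value of an SPN sum node: a weighted sum of child distributions.

    With normalized weights, the node is a mixture whose mixing variable
    is the marginalized latent Z: P(x) = sum_c P(Z=c) * P_c(x).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, child_values))

def posterior_z(child_values, weights):
    """P(Z=c | x): the responsibility the EM E-step assigns to each child."""
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    joint = w * np.asarray(child_values)
    return joint / joint.sum()

children = [0.30, 0.05]   # child densities evaluated at some input x
print(sum_node(children, [0.4, 0.6]), posterior_z(children, [0.4, 0.6]))
```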
Optimal weighted averaging of event related activity from acquisitions with artifacts.
Vollero, Luca; Petrichella, Sara; Innello, Giulio
2016-08-01
In several biomedical applications that require signal processing of biological data, the starting procedure for noise reduction is the ensemble averaging of multiple repeated acquisitions (trials). This method is based on the assumption that each trial is composed of two additive components: (i) a time-locked activity related to some sensory or stimulation phenomenon (ERA, Event Related Activity in the following) and (ii) a sum of several other non-time-locked background activities. The averaging aims at estimating the ERA under a very low signal-to-noise and interference ratio (SNIR). Although averaging is a well-established tool, its performance can be improved in the presence of high-power disturbances (artifacts) by a trial classification and removal stage. In this paper we propose, model, and evaluate a new approach that avoids trial removal, managing trials classified as artifact-free and artifact-prone with two different weights. Based on the model, a weight tuning is possible, and through modeling and simulations we show that, when optimally configured, the proposed solution outperforms classical approaches.
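A minimal sketch of the idea as we read it: keep artifact-prone trials but down-weight them in the ensemble average. The specific weights here are arbitrary, not the paper's optimal tuning.

```python
import numpy as np

def weighted_average(trials, artifact_mask, w_clean=1.0, w_artifact=0.2):
    """Weighted ensemble average of trials (n_trials x n_samples).

    Artifact-prone trials are kept but down-weighted instead of removed;
    w_artifact = 0 recovers classical artifact rejection.
    """
    trials = np.asarray(trials, dtype=float)
    w = np.where(np.asarray(artifact_mask), w_artifact, w_clean)
    return (w[:, None] * trials).sum(axis=0) / w.sum()

rng = np.random.default_rng(3)
era = np.sin(np.linspace(0, np.pi, 100))            # hypothetical time-locked ERA
trials = era + rng.normal(0, 1.0, (40, 100))
trials[:5] += rng.normal(0, 10.0, (5, 100))         # 5 artifact-prone trials
est = weighted_average(trials, artifact_mask=[True] * 5 + [False] * 35)
print(np.corrcoef(est, era)[0, 1])
```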
Prediction equation for calculating fat mass in young Indian adults.
Sandhu, Jaspal Singh; Gupta, Giniya; Shenoy, Shweta
2010-06-01
Accurate measurement or prediction of fat mass is useful in physiology, nutrition, and clinical medicine. Most predictive equations currently used to assess percentage of body fat or fat mass from simple anthropometric measurements were derived from people in Western societies and may not be appropriate for individuals with other genotypic and phenotypic characteristics. We developed equations to predict fat mass from anthropometric measurements in young Indian adults. Fat mass was measured in 60 females and 58 males, aged 20 to 29 yr, using hydrostatic weighing with simultaneous measurement of residual lung volume. Anthropometric measures included weight (kg), height (m), and 4 skinfold thicknesses [STs (mm)]. Sex-specific linear regression models were developed with fat mass as the dependent variable and all anthropometric measures as independent variables. The prediction equation obtained for fat mass (kg) was 8.46 + 0.32 (weight) - 15.16 (height) + 9.54 (log of sum of 4 STs) (R2 = 0.53, SEE = 3.42 kg) for males and -20.22 + 0.33 (weight) + 3.44 (height) + 7.66 (log of sum of 4 STs) (R2 = 0.72, SEE = 3.01 kg) for females. A new prediction equation for the measurement of fat mass was derived and internally validated in young Indian adults using simple anthropometric measurements.
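The published equations can be applied directly; note that the abstract does not state the logarithm base, so base 10 is an assumption here.

```python
import math

def fat_mass_kg(weight_kg, height_m, sum4_skinfolds_mm, sex):
    """Fat-mass prediction equations reported in the abstract above.

    'log of sum of 4 STs' is taken as log10; the abstract does not state
    the base, so treat that as an assumption.
    """
    log_st = math.log10(sum4_skinfolds_mm)
    if sex == "male":
        return 8.46 + 0.32 * weight_kg - 15.16 * height_m + 9.54 * log_st
    return -20.22 + 0.33 * weight_kg + 3.44 * height_m + 7.66 * log_st

print(fat_mass_kg(70.0, 1.75, 60.0, "male"))
print(fat_mass_kg(55.0, 1.60, 70.0, "female"))
```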
Cho, Jae Heon; Ha, Sung Ryong
2010-03-15
An influence coefficient algorithm and a genetic algorithm (GA) were introduced to develop an automatic calibration model for QUAL2K, the latest version of the QUAL2E river and stream water-quality model. The influence coefficient algorithm was used for the parameter optimization in unsteady state, open channel flow. The GA, used in solving the optimization problem, is very simple and comprehensible yet still applicable to any complicated mathematical problem, where it can find the global-optimum solution quickly and effectively. The previously established model QUAL2Kw was used for the automatic calibration of the QUAL2K. The parameter-optimization method using the influence coefficient and genetic algorithm (POMIG) developed in this study and QUAL2Kw were each applied to the Gangneung Namdaecheon River, which has multiple reaches, and the results of the two models were compared. In the modeling, the river reach was divided into two parts based on considerations of the water quality and hydraulic characteristics. The calibration results by POMIG showed a good correspondence between the calculated and observed values for most of water-quality variables. In the application of POMIG and QUAL2Kw, relatively large errors were generated between the observed and predicted values in the case of the dissolved oxygen (DO) and chlorophyll-a (Chl-a) in the lowest part of the river; therefore, two weighting factors (1 and 5) were applied for DO and Chl-a in the lower river. The sums of the errors for DO and Chl-a with a weighting factor of 5 were slightly lower compared with the application of a factor of 1. However, with a weighting factor of 5 the sums of errors for other water-quality variables were slightly increased in comparison to the case with a factor of 1. Generally, the results of the POMIG were slightly better than those of the QUAL2Kw.
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range camera. A weighted-sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a single high-quality image. The method has three main parts. First, two image features, gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is used as the guidance image; this reduces noise in the initial weight maps and preserves texture consistent with the original images. Finally, the fused image is constructed as a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the guided filter-based weight-map refinement, which together provide accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
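A dependency-free sketch of weighted-sum fusion in the spirit of the method, with a crude box filter standing in for the guided-filter refinement (a real implementation would use a guided filter); the images are random stand-ins.

```python
import numpy as np

def box_smooth(w, r=5):
    """Crude box-filter stand-in for the guided-filter refinement step."""
    k = 2 * r + 1
    pad = np.pad(w, r, mode="edge")
    out = np.zeros_like(w)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + w.shape[0], dx:dx + w.shape[1]]
    return out / (k * k)

def fuse(images, sigma=0.2):
    """Weighted-sum fusion of grayscale exposures scaled to [0, 1]."""
    imgs = [np.asarray(i, dtype=float) for i in images]
    weights = []
    for im in imgs:
        gy, gx = np.gradient(im)
        grad = np.abs(gx) + np.abs(gy)                        # detail measure
        well = np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2))  # well-exposedness
        weights.append(box_smooth(grad * well + 1e-12))
    wsum = np.sum(weights, axis=0)
    return sum(w / wsum * im for w, im in zip(weights, imgs))

rng = np.random.default_rng(4)
under, over = rng.uniform(0, 0.4, (64, 64)), rng.uniform(0.6, 1.0, (64, 64))
print(fuse([under, over]).shape)
```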
Real time pipelined system for forming the sum of products in the processing of video data
NASA Technical Reports Server (NTRS)
Wilcox, Brian (Inventor)
1988-01-01
A 3-by-3 convolver utilizes 9 binary arithmetic units connected in cascade for multiplying 12-bit binary pixel values P_i, which are positive or two's-complement binary numbers, by 5-bit magnitude (plus sign) weights W_i, which may be positive or negative. The weights are stored in registers, including the sign bits. For a negative weight, the one's complement of the pixel value to be multiplied is formed at each unit by a bank of 17 exclusive-or gates G_i under control of the sign of the corresponding weight W_i, and a correction is made by adding the sum of the absolute values of all the negative weights for each 3-by-3 kernel. Since this correction value remains constant as long as the weights are constant, it can be precomputed and stored in a register as a value to be added to the product PW of the first arithmetic unit.
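The negative-weight trick can be checked in a few lines: since the one's complement satisfies ~P = -P - 1, multiplying complements by |W| and adding the per-kernel correction (the sum of |W_i| over negative weights) restores the exact signed result.

```python
def convolve_kernel(pixels, weights):
    """Signed 3x3 weighted sum using the patent's negative-weight trick.

    For W_i < 0 the hardware forms the one's complement ~P_i = -P_i - 1
    and multiplies it by |W_i|; adding the precomputed correction
    C = sum(|W_i| for W_i < 0) once per kernel restores the exact result:
    sum over negatives of |W|*(-P - 1) + C = sum over negatives of W*P.
    """
    correction = sum(-w for w in weights if w < 0)
    acc = correction
    for p, w in zip(pixels, weights):
        if w < 0:
            acc += (~p) * (-w)   # one's-complement path for negative weights
        else:
            acc += p * w
    return acc

pixels = [10, 250, 3, 7, 99, 0, 42, 17, 5]
weights = [1, -2, 3, -4, 5, -6, 7, -8, 9]
assert convolve_kernel(pixels, weights) == sum(p * w for p, w in zip(pixels, weights))
print(convolve_kernel(pixels, weights))
```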
[Theoretical model study about the application risk of high risk medical equipment].
Shang, Changhao; Yang, Fenghui
2014-11-01
This study establishes a theoretical model for monitoring the application risk of high-risk medical equipment at the site of use. The site is regarded as a system containing several sub-systems, each consisting of several risk-estimating indicators. After each indicator is quantized, the quantized values are multiplied by the corresponding weights and the products are accumulated, giving the risk-estimating value of each sub-system. Following the same calculation, the risk-estimating values of the sub-systems are multiplied by their corresponding weights and the products are accumulated. The cumulative sum is the status indicator of the high-risk medical equipment at the site, which reflects the equipment's application risk there. The model can dynamically and specifically monitor the application risk of high-risk medical equipment at the site of use.
Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting
Deng, Ming-jun; Qu, Shi-ru
2015-01-01
Traffic flow is widely recognized as an important parameter for road traffic state forecasting. Fuzzy state transform and Kalman filter (KF) methods have been applied in this field separately. Studies show that the former performs well in forecasting the trend of traffic state variation but always involves numerical errors, while the latter is good at numerical forecasting but is deficient in expressing time hysteresis. This paper proposes an approach combining the fuzzy state transform and KF forecasting models. Considering the advantages of the two models, a weighted combination model is proposed, in which the combination weight is optimized dynamically by minimizing the sum of squared forecasting errors. Real detection data are used to test the efficiency. Results indicate that the method performs well in short-term traffic forecasting. PMID:26779258
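For two forecasts, the weight minimizing the sum of squared combination errors has a closed form; a sketch with invented data follows (the paper's actual optimization may differ).

```python
import numpy as np

def optimal_weight(y, f_fuzzy, f_kf):
    """Least-squares combination weight for two forecasts.

    Minimizes sum (y - (w*f_fuzzy + (1-w)*f_kf))^2; the closed form follows
    from regressing (y - f_kf) on (f_fuzzy - f_kf).
    """
    d = np.asarray(f_fuzzy) - np.asarray(f_kf)
    return float(np.dot(d, np.asarray(y) - np.asarray(f_kf)) / np.dot(d, d))

y      = np.array([100, 120, 118, 130, 125])   # observed flow (veh/5 min)
f_fuzz = np.array([ 98, 125, 120, 126, 128])   # hypothetical fuzzy forecast
f_kf   = np.array([105, 115, 112, 133, 121])   # hypothetical KF forecast
w = optimal_weight(y, f_fuzz, f_kf)
print(w, w * f_fuzz + (1 - w) * f_kf)
```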
NASA Astrophysics Data System (ADS)
Douthett, Elwood (Jack) Moser, Jr.
1999-10-01
Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of a white set, minimum (maximum) weight of a black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next, the goodness of continued fractions as applied to musical intervals (frequency ratios and their base-2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine which equal-tempered systems have intervals that most closely approximate the musical fifth (p_n/q_n ≈ log2(3/2)). The goodness of exponentiated convergents (2^(p_n/q_n) ≈ 3/2) is also investigated. It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the logarithmic case, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation. A desirability function is constructed that simultaneously measures how well multiple intervals fit in a given equal-tempered system. These measurements are made for octave (base-2) and tritave (base-3) systems. Combinatorial properties important to music modulation are considered; these considerations lead to the construction of maximally even scales as partitions of an equal-tempered system.
Interpretation of body residues for natural resources damage assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubitz, J.A.; Markarian, R.K.; Lauren, D.J.
1995-12-31
A 28-day caged mussel study using Corbicula sp. was conducted on Sugarland Run and the Potomac River following a spill of No. 2 fuel oil. In addition, resident Corbicula sp. from the Potomac River were sampled at the beginning and end of the study. The summed body residues of 39 polycyclic aromatic hydrocarbons (PAHs) ranged from 0.56 to 41 mg/kg dry weight within the study area. The summed body residues of the 18 PAHs that are routinely measured in the National Oceanic and Atmospheric Administration Status and Trends Program (NST) ranged from 0.5 to 20 mg/kg dry weight for mussels in this study. These data were similar to summed PAH concentrations reported in the NST for mussels from a variety of US coastal waters, which ranged from 0.4 to 24.5 mg/kg dry weight. This paper will discuss interpretation of PAH residues in Corbicula sp. to determine the spatial extent of the area affected by the oil spill. The toxicological significance of the PAH residues in both resident and caged mussels will also be presented.
Knutsen, Helle K; Kvalem, Helen E; Thomsen, Cathrine; Frøshaug, May; Haugen, Margaretha; Becher, Georg; Alexander, Jan; Meltzer, Helle M
2008-02-01
This study investigates dietary exposure and serum levels of polybrominated diphenyl ethers (PBDEs) and hexabromocyclododecane (HBCD) in a group of Norwegians (n = 184) with a wide range of seafood consumption (4-455 g/day). Mean dietary exposure to Sum 5 PBDEs (1.5 ng/kg body weight/day) is among the highest reported. Since concentrations in foods were similar to those found elsewhere in Europe, this may be explained by high seafood consumption among Norwegians. Oily fish was the main dietary contributor both to Sum PBDEs and to the considerably lower HBCD intake (0.3 ng/kg body weight/day). Milk products appeared to contribute most to the BDE-209 intake (1.4 ng/kg body weight/day). BDE-209 and HBCD exposures are based on few food samples and need to be confirmed. Serum levels (mean Sum 7 PBDEs = 5.2 ng/g lipid) and congener patterns (BDE-47 > BDE-153 > BDE-99) were comparable with other European reports. Correlations between individual congeners were higher for the calculated dietary exposure than for serum levels. Further, significant but weak correlations were found between dietary exposure and serum levels for Sum PBDEs, BDE-47, and BDE-28 in males. This indicates that other sources in addition to diet need to be addressed.
Delivering both sum and difference beam distributions to a planar monopulse antenna array
Strassner, II, Bernd H.
2015-12-22
A planar monopulse radar apparatus includes a planar distribution matrix coupled to a planar antenna array having a linear configuration of antenna elements. The planar distribution matrix is responsive to first and second pluralities of weights applied thereto for providing both sum and difference beam distributions across the antenna array.
Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.
Benaliouche, Houda; Touahria, Mohamed
2014-01-01
This research investigates the comparative performance of three different approaches to multimodal recognition combining iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching-score and decision levels. The score combination approach is applied after normalizing both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining the matching scores at the decision level performs best, followed by the classical weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results before and after fusion are presented, followed by a comparison with related work in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
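A schematic of the two classical fusion rules under min-max normalization (the fuzzy decision stage is omitted); all scores and the modality weight below are illustrative:

    import numpy as np

    def min_max(scores):
        """Min-max normalization of matching scores to [0, 1]."""
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    iris_scores = min_max([0.62, 0.15, 0.88, 0.40])
    finger_scores = min_max([0.55, 0.20, 0.95, 0.35])

    sum_rule = iris_scores + finger_scores          # classical sum rule
    w = 0.6                                         # assumed modality weight
    weighted_sum = w * iris_scores + (1 - w) * finger_scores
    accept = weighted_sum > 0.5                     # toy decision threshold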
Ising Critical Behavior of Inhomogeneous Curie-Weiss Models and Annealed Random Graphs
NASA Astrophysics Data System (ADS)
Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa
2016-11-01
We study the critical behavior for inhomogeneous versions of the Curie-Weiss model, where the coupling constant J_ij(β) for the edge ij on the complete graph is given by J_ij(β) = β w_i w_j / (Σ_{k∈[N]} w_k). We call the product form of these couplings the rank-1 inhomogeneous Curie-Weiss model. This model also arises [with inverse temperature β replaced by sinh(β)] from the annealed Ising model on the generalized random graph. We assume that the vertex weights (w_i)_{i∈[N]} are regular, in the sense that their empirical distribution converges and the second moment converges as well. We identify the critical temperatures and exponents for these models, as well as a non-classical limit theorem for the total spin at the critical point. These depend sensitively on the number of finite moments of the weight distribution. When the fourth moment of the weight distribution converges, then the critical behavior is the same as on the (homogeneous) Curie-Weiss model, so that the inhomogeneity is weak. When the fourth moment of the weights converges to infinity, and the weights satisfy an asymptotic power law with exponent τ ∈ (3,5), then the critical exponents depend sensitively on τ. In addition, at criticality, the total spin S_N satisfies that S_N / N^{(τ-2)/(τ-1)} converges in law to some limiting random variable whose distribution we explicitly characterize.
Complex-energy approach to sum rules within nuclear density functional theory
Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...
2015-04-27
The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.
Measures with locally finite support and spectrum
Meyer, Yves F.
2016-01-01
The goal of this paper is the construction of measures μ on Rn enjoying three conflicting but fortunately compatible properties: (i) μ is a sum of weighted Dirac masses on a locally finite set, (ii) the Fourier transform μ^ of μ is also a sum of weighted Dirac masses on a locally finite set, and (iii) μ is not a generalized Dirac comb. We give surprisingly simple examples of such measures. These unexpected patterns strongly differ from quasicrystals, they provide us with unusual Poisson's formulas, and they might give us an unconventional insight into aperiodic order. PMID:26929358
Design of Probabilistic Random Forests with Applications to Anticancer Drug Sensitivity Prediction
Rahman, Raziur; Haider, Saad; Ghosh, Souparno; Pal, Ranadip
2015-01-01
Random forests consisting of an ensemble of regression trees with equal weights are frequently used for design of predictive models. In this article, we consider an extension of the methodology by representing the regression trees in the form of probabilistic trees and analyzing the nature of heteroscedasticity. The probabilistic tree representation allows for analytical computation of confidence intervals (CIs), and the tree weight optimization is expected to provide stricter CIs with comparable performance in mean error. We approached the prediction of the ensemble of probabilistic trees from the perspectives of a mixture distribution and of a weighted sum of correlated random variables. We applied our methodology to the drug sensitivity prediction problem on synthetic data and the Cancer Cell Line Encyclopedia dataset and illustrated that tree weights can be selected to reduce the average length of the CI without increase in mean error. PMID:27081304
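A hedged sketch of the weighted-sum-of-correlated-variables view (not the paper's implementation): given per-tree predictions, an assumed between-tree covariance, and tree weights, the ensemble variance w'Σw yields a normal-approximation CI whose length the weights can be tuned to shrink.

    import numpy as np

    def ensemble_mean_and_ci(tree_means, tree_cov, weights, z=1.96):
        """Mean and normal-approximation CI of a weighted sum of
        correlated tree predictions: Var = w' Sigma w."""
        w = np.asarray(weights, dtype=float)
        mu = float(w @ tree_means)
        half = z * np.sqrt(float(w @ tree_cov @ w))
        return mu, (mu - half, mu + half)

    means = np.array([0.42, 0.50, 0.47])        # per-tree predictions
    cov = np.array([[0.010, 0.004, 0.003],      # assumed tree covariance
                    [0.004, 0.012, 0.005],
                    [0.003, 0.005, 0.009]])
    mu, ci = ensemble_mean_and_ci(means, cov, [1 / 3, 1 / 3, 1 / 3])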
Near-Optimal Operation of Dual-Fuel Launch Vehicles
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Chou, H. C.; Bowles, J. V.
1996-01-01
A near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, the sensitivity to perturbations in volume needs to be taken into consideration along with the weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.
Precipitation Efficiency in the Tropical Deep Convective Regime
NASA Technical Reports Server (NTRS)
Li, Xiaofan; Sui, C.-H.; Lau, K.-M.; Lau, William K. M. (Technical Monitor)
2001-01-01
Precipitation efficiency in the tropical deep convective regime is analyzed based on a 2-D cloud-resolving simulation. The cloud-resolving model is forced by the large-scale vertical velocity, zonal wind, and large-scale horizontal advections derived from TOGA COARE for a 20-day period. Precipitation efficiency may be defined as the ratio of surface rain rate to the sum of surface evaporation and moisture convergence (LSPE), or as the ratio of surface rain rate to the sum of condensation and deposition rates of supersaturated vapor (CMPE). The moisture budget shows that the atmosphere is moistened (dried) when the LSPE is less (more) than 100%. The LSPE can be larger than 100% for strong convection. This indicates that drying processes should be included in cumulus parameterization to avoid moisture bias. Statistical analysis shows that the sum of the condensation and deposition rates is about 80% of the sum of the surface evaporation rate and moisture convergence, which leads to a proportional relation between the two efficiencies when both are less than 100%. The CMPE increases with increasing mass-weighted mean temperature and increasing surface rain rate. This suggests that precipitation is more efficient in a warm environment and for strong convection. An approximate balance among the condensation, deposition, rain, and raindrop evaporation rates is used to derive an analytical solution for the CMPE.
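Both efficiency definitions reduce to simple ratios; a toy computation with illustrative flux values (all in mm/day):

    def lspe(rain, evaporation, moisture_convergence):
        """Large-scale precipitation efficiency; can exceed 100%."""
        return 100.0 * rain / (evaporation + moisture_convergence)

    def cmpe(rain, condensation, deposition):
        """Cloud-microphysics precipitation efficiency."""
        return 100.0 * rain / (condensation + deposition)

    print(lspe(rain=12.0, evaporation=4.0, moisture_convergence=6.0))  # 120%: drying
    print(cmpe(rain=12.0, condensation=13.0, deposition=2.0))          # 80%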
NASA Astrophysics Data System (ADS)
Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju
2014-04-01
Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
Extension of the momentum transfer model to time-dependent pipe turbulence.
Calzetta, Esteban
2012-02-01
We analyze a possible extension of Gioia and Chakraborty's momentum transfer model of friction in steady turbulent pipe flows [Phys. Rev. Lett. 96, 044502 (2006)] to the case of time- and/or space-dependent turbulent flows. The end result is an expression for the stress at the wall as the sum of a steady and a dynamic component. The steady part is obtained by using the instantaneous velocity in the expression for the stress at the wall of a stationary flow. The unsteady part is a weighted average over the history of the flow acceleration, with a weighting function similar to that proposed by Vardy and Brown [J. Sound Vibr. 259, 1011 (2003); J. Sound Vibr. 270, 233 (2004)], but naturally including the effect of spatial derivatives of the mean flow, as in the Brunone model [Brunone et al., J. Water Res. Plan. Manage. 126, 236 (2000)].
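A rough discretization of the proposed decomposition, with placeholder steady-friction and memory-kernel models (the paper's actual kernel, which also involves spatial derivatives of the mean flow, is not reproduced here):

    import numpy as np

    def wall_stress(u, dt, tau_steady, kernel):
        """Wall stress as the steady law evaluated at the instantaneous
        velocity plus a weighted average over the acceleration history:
        tau(t) = tau_s(u(t)) + sum_s W(t - s) (du/ds) dt  (discretized)."""
        dudt = np.gradient(u, dt)               # flow acceleration history
        tau = np.empty(len(u))
        for i in range(len(u)):
            lags = np.arange(i + 1)[::-1] * dt  # elapsed time t - s
            tau[i] = tau_steady(u[i]) + np.sum(kernel(lags) * dudt[:i + 1]) * dt
        return tau

    tau_s = lambda u: 0.02 * u * abs(u)         # placeholder steady friction law
    W = lambda lag: np.exp(-lag / 0.5)          # placeholder decaying memory kernel
    t = np.linspace(0.0, 5.0, 501)
    u = 1.0 + 0.3 * np.sin(2 * np.pi * t)       # pulsating mean flow
    stress = wall_stress(u, t[1] - t[0], tau_s, W)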
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
Prediction Equation for Calculating Fat Mass in Young Indian Adults
Sandhu, Jaspal Singh; Gupta, Giniya; Shenoy, Shweta
2010-01-01
Purpose: Accurate measurement or prediction of fat mass is useful in physiology, nutrition and clinical medicine. Most predictive equations currently used to assess percentage of body fat or fat mass from simple anthropometric measurements were derived from people in western societies, and they may not be appropriate for individuals with other genotypic and phenotypic characteristics. We developed equations to predict fat mass from anthropometric measurements in young Indian adults. Methods: Fat mass was measured in 60 females and 58 males, aged 20 to 29 yr, by hydrostatic weighing with simultaneous measurement of residual lung volume. Anthropometric measures included weight (kg), height (m) and 4 skinfold thicknesses [STs (mm)]. A sex-specific linear regression model was developed with fat mass as the dependent variable and all anthropometric measures as independent variables. Results: The prediction equation obtained for fat mass (kg) was 8.46 + 0.32 (weight) − 15.16 (height) + 9.54 (log of sum of 4 STs) for males (R² = 0.53, SEE = 3.42 kg) and −20.22 + 0.33 (weight) + 3.44 (height) + 7.66 (log of sum of 4 STs) for females (R² = 0.72, SEE = 3.01 kg). Conclusion: A new prediction equation for the measurement of fat mass was derived and internally validated in young Indian adults using simple anthropometric measurements. PMID:22375197
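The published equations can be applied directly; a small sketch (the base of the logarithm is not stated in the abstract and is assumed to be 10 here):

    from math import log10

    def fat_mass_kg(weight_kg, height_m, sum_4_skinfolds_mm, sex):
        """Fat mass from the sex-specific regression equations above."""
        log_sf = log10(sum_4_skinfolds_mm)     # assumed base-10 logarithm
        if sex == "male":
            return 8.46 + 0.32 * weight_kg - 15.16 * height_m + 9.54 * log_sf
        return -20.22 + 0.33 * weight_kg + 3.44 * height_m + 7.66 * log_sf

    print(fat_mass_kg(70.0, 1.75, 60.0, "male"))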
Ceelen, Manon; van Weissenbruch, Mirjam M; Prein, Janneke; Smit, Judith J; Vermeiden, Jan P W; Spreeuwenberg, Marieke; van Leeuwen, Flora E; Delemarre-van de Waal, Henriette A
2009-11-01
Little is known about post-natal growth in IVF offspring and the effects of rates of early post-natal growth on blood pressure and body fat composition during childhood and adolescence. The follow-up study comprised 233 IVF children aged 8-18 years and 233 spontaneously conceived controls born to subfertile parents. Growth data from birth to 4 years of age, available for 392 children (n = 193 IVF, n = 199 control), were used to study early post-natal growth. Furthermore, early post-natal growth velocity (weight gain) was related to blood pressure and skinfold measurements at follow-up. We found significantly lower weight, height and BMI standard deviation scores (SDSs) at 3 months, and weight SDS at 6 months of age in IVF children compared with controls. Likewise, IVF children demonstrated a greater gain in weight SDS (P < 0.001), height SDS (P = 0.013) and BMI SDS (P = 0.029) during late infancy (3 months to 1 year) versus controls. Weight gain during early childhood (1-3 years) was related to blood pressure in IVF children (P = 0.014 systolic, 0.04 diastolic) but not in controls. Growth during late infancy was not related to skinfold thickness in IVF children, unlike controls (P = 0.002 peripheral sum, 0.003 total sum). Growth during early childhood was related to skinfold thickness in both IVF and controls (P = 0.005 and 0.01 peripheral sum and P = 0.003 and 0.005 total sum, respectively). Late infancy growth velocity of IVF children was significantly higher compared with controls. Nevertheless, early childhood growth instead of infancy growth seemed to predict cardiovascular risk factors in IVF children. Further research is needed to confirm these findings and to follow-up growth and development of IVF children into adulthood.
Fisk, A T; Stern, G A; Hobson, K A; Strachan, W J; Loewen, M D; Norstrom, R J
2001-01-01
Samples of Calanus hyperboreus, a herbivorous copepod, were collected (n = 20) between April and July 1998, and water samples (n = 6) were collected in May 1998, in the Northwater Polynya (NOW) to examine persistent organic pollutants (POPs) in a high Arctic marine zooplankton. Lipid content (dry weight) doubled, water content (r2 = 0.88) and delta15N (r2 = 0.54) significantly decreased, and delta13C significantly increased (r2 = 0.30) in the C. hyperboreus over the collection period, allowing an examination of the role of these variables in POP dynamics in this small pelagic zooplankton. The rank and concentrations of POP groups in C. hyperboreus over the entire sampling were sum of PCB (30.1 +/- 4.03 ng/g, dry weight) > sum of HCH (11.8 +/- 3.23) > sum of DDT (4.74 +/- 0.74), sum of CHLOR (4.44 +/- 1.0) > sum of CIBz (2.42 +/- 0.18), although these rankings varied considerably over the summer. The alpha- and gamma-HCH and lower chlorinated PCB congeners were the most common POPs in C. hyperboreus. The relationship between bioconcentration factor (BCF) and octanol-water partition coefficient (Kow) observed for the C. hyperboreus was linear and near 1:1 (slope = 0.72) for POPs with a log Kow between 3 and 6 but curvilinear when hydrophobic POPs (log Kow > 6) were included. Concentrations of sum of HCH, sum of CHLOR and sum of CIBz increased over the sampling period, but no change in sum of PCB or sum of DDT was observed. After removing the effects of time, the variables lipid content, water content, delta15N and delta13C did not describe POP concentrations in C. hyperboreus. These results suggest that hydrophobic POP (log Kow = 3.8-6.0) concentrations in zooplankton are likely to reflect water concentrations and that POPs do not biomagnify in C. hyperboreus or likely in other small, herbivorous zooplankton.
Aad, G.; Abbott, B.; Abdallah, J.; ...
2016-03-02
In this study, the momentum-weighted sum of the charges of tracks associated to a jet is sensitive to the charge of the initiating quark or gluon. This paper presents a measurement of the distribution of momentum-weighted sums, called jet charge, in dijet events using 20.3 fb^-1 of data recorded with the ATLAS detector at √s = 8 TeV in pp collisions at the LHC. The jet charge distribution is unfolded to remove distortions from detector effects and the resulting particle-level distribution is compared with several models. The pT dependence of the jet charge distribution average and standard deviation are compared to predictions obtained with several leading-order and next-to-leading-order parton distribution functions. The data are also compared to different Monte Carlo simulations of QCD dijet production using various settings of the free parameters within these models. The chosen value of the strong coupling constant used to calculate gluon radiation is found to have a significant impact on the predicted jet charge. There is evidence for a pT dependence of the jet charge distribution for a given jet flavor. In agreement with perturbative QCD predictions, the data show that the average jet charge of quark-initiated jets decreases in magnitude as the energy of the jet increases.
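A sketch of a common form of the jet-charge observable (the exact weighting scheme and the value of the exponent κ used in the paper may differ):

    import numpy as np

    def jet_charge(charges, track_pt, jet_pt, kappa=0.5):
        """Momentum-weighted sum of track charges:
        Q = sum_i q_i * (pT_i)^kappa / (pT_jet)^kappa."""
        q = np.asarray(charges, dtype=float)
        pt = np.asarray(track_pt, dtype=float)
        return float(np.sum(q * pt**kappa) / jet_pt**kappa)

    # toy jet with three charged tracks
    print(jet_charge(charges=[+1, -1, +1], track_pt=[30.0, 20.0, 10.0], jet_pt=60.0))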
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
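For concreteness, the quantity under critique is easy to compute, which is part of why it is so often misused as a predictor-importance measure. A minimal sketch of Akaike weights:

    import numpy as np

    def aic_weights(aic_values):
        """Akaike weights: w_i = exp(-Delta_i / 2) / sum_k exp(-Delta_k / 2)."""
        delta = np.asarray(aic_values, dtype=float)
        delta = delta - delta.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # summing these weights over the models that contain a given predictor
    # yields the "relative importance" score the paper argues is really a
    # statement about models, not about the predictor itself
    w = aic_weights([100.0, 101.2, 103.5, 107.9])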
Analysis of Environmental Chemical Mixtures and Non-Hodgkin Lymphoma Risk in the NCI-SEER NHL Study.
Czarnota, Jenna; Gennings, Chris; Colt, Joanne S; De Roos, Anneclaire J; Cerhan, James R; Severson, Richard K; Hartge, Patricia; Ward, Mary H; Wheeler, David C
2015-10-01
There are several suspected environmental risk factors for non-Hodgkin lymphoma (NHL). The associations between NHL and environmental chemical exposures have typically been evaluated for individual chemicals (i.e., one-by-one). We determined the association between a mixture of 27 correlated chemicals measured in house dust and NHL risk. We conducted a population-based case-control study of NHL in four National Cancer Institute-Surveillance, Epidemiology, and End Results centers--Detroit, Michigan; Iowa; Los Angeles County, California; and Seattle, Washington--from 1998 to 2000. We used weighted quantile sum (WQS) regression to model the association of a mixture of chemicals and risk of NHL. The WQS index was a sum of weighted quartiles for 5 polychlorinated biphenyls (PCBs), 7 polycyclic aromatic hydrocarbons (PAHs), and 15 pesticides. We estimated chemical mixture weights and effects for study sites combined and for each site individually, and also for histologic subtypes of NHL. The WQS index was statistically significantly associated with NHL overall [odds ratio (OR) = 1.30; 95% CI: 1.08, 1.56; p = 0.006; for one quartile increase] and in the study sites of Detroit (OR = 1.71; 95% CI: 1.02, 2.92; p = 0.045), Los Angeles (OR = 1.44; 95% CI: 1.00, 2.08; p = 0.049), and Iowa (OR = 1.76; 95% CI: 1.23, 2.53; p = 0.002). The index was marginally statistically significant in Seattle (OR = 1.39; 95% CI: 0.97, 1.99; p = 0.071). The most highly weighted chemicals for predicting risk overall were PCB congener 180 and propoxur. Highly weighted chemicals varied by study site; PCBs were more highly weighted in Detroit, and pesticides were more highly weighted in Iowa. An index of chemical mixtures was significantly associated with NHL. Our results show the importance of evaluating chemical mixtures when studying cancer risk.
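A simplified sketch of the WQS index construction (quartile scoring plus the sum-to-one weight constraint); in the method itself the weights are estimated, typically by bootstrap, so that the index best predicts case status in a logistic model. All data below are synthetic:

    import numpy as np

    def wqs_index(concentrations, weights):
        """Weighted quantile sum: score each chemical into quartiles (0-3),
        then form the weighted sum with weights constrained to sum to 1."""
        X = np.asarray(concentrations, dtype=float)   # samples x chemicals
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                               # enforce the constraint
        q = np.zeros_like(X)
        for j in range(X.shape[1]):
            cuts = np.percentile(X[:, j], [25, 50, 75])
            q[:, j] = np.searchsorted(cuts, X[:, j])  # quartile scores 0..3
        return q @ w

    rng = np.random.default_rng(0)
    X = rng.lognormal(size=(100, 27))                 # 27 house-dust chemicals
    index = wqs_index(X, np.ones(27))                 # equal starting weights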
Giant quadrupole and monopole resonances in ²⁸Si
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lui, Y.; Bronson, J.D.; Youngblood, D.H.
1985-05-01
Inelastic alpha scattering measurements have been performed for ²⁸Si at small angles including zero degrees. A total of 66% of the E0 energy-weighted sum rule was identified (using a Satchler version 2 form factor) centered at E_x = 17.9 MeV with a width of 4.8 MeV, and 34% of the E2 energy-weighted sum rule was identified above E_x = 15.3 MeV, centered at 19.0 MeV with a width of 4.4 MeV. The dependence of the extracted E0 strength on form factor and optical potential was explored.
Tuan, Pham Viet; Koo, Insoo
2017-10-06
In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems where a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize weighted sum harvested energy for SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For the perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that SDR always has a rank-1 solution, and is indeed tight. For the imperfect CSI with bounded channel vector errors, the upper bound of weighted sum harvested energy (WSHE) is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance as compared to the other baseline schemes.
Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal
2009-01-01
The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
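A minimal sketch of the basic IPW (Horvitz-Thompson) step that the adjustments build on: estimating a cohort total of influence-function contributions from the phase-two sample. The calibration and weight-estimation refinements are not shown, and all data are synthetic:

    import numpy as np

    def ipw_total(values, sampled, sampling_fraction):
        """Horvitz-Thompson estimate of a cohort total from phase-two data."""
        v = np.asarray(values, dtype=float)[sampled]
        return float(np.sum(v / sampling_fraction))

    rng = np.random.default_rng(1)
    cohort_if = rng.normal(size=1000)     # influence-function contributions
    sampled = rng.random(1000) < 0.1      # simple 10% phase-two sample
    total_hat = ipw_total(cohort_if, sampled, 0.1)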
Association between anthropometric indices and cardiometabolic risk factors in pre-school children.
Aristizabal, Juan C; Barona, Jacqueline; Hoyos, Marcela; Ruiz, Marcela; Marín, Catalina
2015-11-06
The World Health Organization (WHO) and the Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants study (IDEFICS) released anthropometric reference values obtained from children of normal body weight. This study examined the relationship of WHO [body mass index (BMI) and triceps and subscapular skinfolds] and IDEFICS (waist circumference, waist-to-height ratio and fat mass index) anthropometric indices with cardiometabolic risk factors in pre-school children ranging from normal body weight to obesity. A cross-sectional study with 232 children (aged 4.1 ± 0.05 years) was performed. Anthropometric measurements were collected and BMI, waist circumference, waist-to-height ratio, triceps and subscapular skinfolds sum, and fat mass index were calculated. Fasting glucose, fasting insulin, homeostasis model analysis insulin resistance (HOMA-IR), blood lipids, apolipoprotein (Apo) B-100 (Apo B) and Apo A-I were determined. Pearson's correlation coefficient, multiple regression analysis and receiver-operating characteristic (ROC) curve analysis were run. 51% (n = 73) of the boys and 52% (n = 47) of the girls were of normal body weight; 49% (n = 69) of the boys and 48% (n = 43) of the girls were overweight or obese. Anthropometric indices correlated (p < 0.001) with insulin [BMI (r = 0.514), waist circumference (r = 0.524), waist-to-height ratio (r = 0.304), triceps and subscapular skinfolds sum (r = 0.514) and fat mass index (r = 0.500)] and HOMA-IR [BMI (r = 0.509), waist circumference (r = 0.521), waist-to-height ratio (r = 0.296), triceps and subscapular skinfolds sum (r = 0.483) and fat mass index (r = 0.492)]. Similar results were obtained after adjusting by age and sex. The areas under the curve (AUC) to identify children with insulin resistance were significant (p < 0.001) and similar among anthropometric indices (AUC > 0.68 to AUC < 0.76). WHO and IDEFICS anthropometric indices correlated similarly with fasting insulin and HOMA-IR. The diagnostic accuracy of the anthropometric indices as a proxy to identify children with insulin resistance was similar. These data do not support the use of waist circumference, waist-to-height ratio, triceps and subscapular skinfolds sum or fat mass index, instead of the BMI, as a proxy to identify pre-school children with insulin resistance, the most frequent alteration found in children ranging from normal body weight to obesity.
Thompson, Amanda L; Adair, Linda S; Bentley, Margaret E
2013-03-01
The prevalence of overweight among infants and toddlers has increased dramatically in the past three decades, highlighting the importance of identifying factors contributing to early excess weight gain, particularly in high-risk groups. Parental feeding styles and the attitudes and behaviors that characterize parental approaches to maintaining or modifying children's eating behavior are an important behavioral component shaping early obesity risk. Using longitudinal data from the Infant Care and Risk of Obesity Study, a cohort study of 217 African-American mother-infant pairs with feeding styles, dietary recalls, and anthropometry collected from 3 to 18 months of infant age, we examined the relationship between feeding styles, infant diet, and weight-for-age and sum of skinfolds. Longitudinal mixed models indicated that higher pressuring and indulgent feeding style scores were positively associated with greater infant energy intake, reduced odds of breastfeeding, and higher levels of age-inappropriate feeding of liquids and solids, whereas restrictive feeding styles were associated with lower energy intake, higher odds of breastfeeding, and reduced odds of inappropriate feeding. Pressuring and restriction were also oppositely related to infant size with pressuring associated with lower infant weight-for-age and restriction with higher weight-for-age and sum of skinfolds. Infant size also predicted maternal feeding styles in subsequent visits indicating that the relationship between size and feeding styles is likely bidirectional. Our results suggest that the degree to which parents are pressuring or restrictive during feeding shapes the early feeding environment and, consequently, may be an important environmental factor in the development of obesity. Copyright © 2012 The Obesity Society.
The Weighted-Average Lagged Ensemble.
DelSole, T; Trenary, L; Tippett, M K
2017-11-01
A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
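A compact sketch of the underlying optimization: minimizing the mean-square error of the weighted average subject to the weights summing to one gives w = C⁻¹1 / (1ᵀC⁻¹1) for error covariance C, and nothing in that formula forces the weights to be positive or monotone in lead time. The covariance below is illustrative:

    import numpy as np

    def optimal_lagged_weights(error_cov):
        """Weights minimizing w' C w subject to sum(w) = 1."""
        C = np.asarray(error_cov, dtype=float)
        x = np.linalg.solve(C, np.ones(C.shape[0]))
        return x / x.sum()

    # errors growing with lead time and highly correlated across leads
    C = np.array([[1.0, 1.1, 1.2],
                  [1.1, 1.5, 1.6],
                  [1.2, 1.6, 2.2]])
    print(optimal_lagged_weights(C))   # contains negative entries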
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σ(d'_i)², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Star-triangle and star-star relations in statistical mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baxter, R.J.
1997-01-20
The homogeneous three-layer Zamolodchikov model is equivalent to a four-state model on the checkerboard lattice which closely resembles the four-state critical Potts model, but with some of its Boltzmann weights negated. Here the author shows that it satisfies a star-to-reverse-star (or simply star-star) relation, even though no star-triangle relation is known for this model. For any nearest-neighbor checkerboard model, it is shown that this star-star relation is sufficient to ensure that the decimated model (where half the spins have been summed over) satisfies a twisted Yang-Baxter relation. This ensures that the transfer matrices of the original model commute in pairs, which is an adequate condition for solvability.
A Biologically-Inspired Neural Network Architecture for Image Processing
1990-12-01
was organized into twelve groups of 8-by-8 node arrays. Weights were constrained for each group of nodes, with each node "viewing" a 5-by-5 pixel...

    /* single window */
    sum = 0;
    for (j = 0; j < 64; j++) {
        sum = sum + nbor[i][j] * rfarray[j];  /* weighted sum over one window */
    }
    /* finished calculating one block, one position (first layer) */
40 CFR 86.094-2 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... methane. Non-Methane Hydrocarbon Equivalent means the sum of the carbon mass emissions of non-oxygenated... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Loaded Vehicle Weight means the numerical average of vehicle curb weight and GVWR. Bi-directional control...
40 CFR 86.094-2 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... methane. Non-Methane Hydrocarbon Equivalent means the sum of the carbon mass emissions of non-oxygenated... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Loaded Vehicle Weight means the numerical average of vehicle curb weight and GVWR. Bi-directional control...
40 CFR 86.094-2 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... methane. Non-Methane Hydrocarbon Equivalent means the sum of the carbon mass emissions of non-oxygenated... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Loaded Vehicle Weight means the numerical average of vehicle curb weight and GVWR. Bi-directional control...
40 CFR 86.094-2 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... methane. Non-Methane Hydrocarbon Equivalent means the sum of the carbon mass emissions of non-oxygenated... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Loaded Vehicle Weight means the numerical average of vehicle curb weight and GVWR. Bi-directional control...
40 CFR 86.094-2 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... methane. Non-Methane Hydrocarbon Equivalent means the sum of the carbon mass emissions of non-oxygenated... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Loaded Vehicle Weight means the numerical average of vehicle curb weight and GVWR. Bi-directional control...
[Hydrostatic weighing, skinfold thickness, body mass index relationships in high school girls].
Tahara, Y; Yukawa, K; Tsunawake, N; Saeki, S; Nishiyama, K; Urata, H; Katsuno, K; Fukuyama, Y; Michimukou, R; Uekata, M
1995-12-01
A study was conducted to evaluate body composition by hydrostatic weighing, skinfold thickness, and body mass index (BMI) in 102 senior high school girls, aged 15 to 18, in Nagasaki City. Body density, measured by the underwater weighing method, was used to determine the fat weight (Fat) and lean body mass (LBM, or fat-free weight, FFW) using the formulas of Brozek et al. The results were as follows. 1. Mean values of body density were 1.04428 in the first-grade girls, 1.04182 in the second grade, and 1.04185 in the third grade. 2. Mean values of percentage body fat (%Fat) were 23.5% in the first grade, 24.5% in the second and 24.5% in the third. 3. Percentage body fat (%Fat), lean body mass (LBM) and LBM/Height did not differ significantly with advancing grade from the first to the third. 4. The correlation coefficients between percent body fat and the sum of two skinfold thicknesses, the sum of three skinfold thicknesses and the sum of seven skinfold thicknesses were 0.78, 0.79, and 0.80, respectively, all statistically significant (p < 0.001). 5. The correlation coefficients between BMI and the sum of two skinfold thicknesses, the sum of three skinfold thicknesses and the sum of seven skinfold thicknesses were 0.74, 0.74, and 0.74, respectively, all statistically significant (p < 0.001). 6. Mean values of BMI, Rohrer index and waist-hip ratio (WHR) in all subjects (n = 102) were 20.3, 128.2 and 0.72, respectively.
Quantum Hurwitz numbers and Macdonald polynomials
NASA Astrophysics Data System (ADS)
Harnad, J.
2016-11-01
Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.
Pipeline active filter utilizing a booth type multiplier
NASA Technical Reports Server (NTRS)
Nathan, Robert (Inventor)
1987-01-01
Multiplier units of the modified Booth decoder and carry-save adder/full adder combination are used to implement a pipeline active filter wherein pixel data is processed sequentially, and each pixel need only be accessed once and multiplied by a predetermined number of weights simultaneously, one multiplier unit for each weight. Each multiplier unit uses only one row of carry-save adders, and the results are shifted to less significant multiplier positions and one row of full adders to add the carry to the sum in order to provide the correct binary number for the product Wp. The full adder is also used to add this product Wp to the sum of products ΣWp from preceding multiply units. If m × m multiplier units are pipelined, the system would be capable of processing a kernel array of m × m weighting factors.
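A behavioral software analogy (not the patented hardware) of the pipeline: each incoming pixel is multiplied by all weights at once, and each unit adds its product to the partial sum handed on by the preceding unit, so one completed sum of products emerges per cycle. After the pipeline fills, the outputs match an ordinary FIR convolution of the pixel stream:

    def pipeline_filter(pixels, weights):
        """Transposed-form multiply-accumulate pipeline."""
        m = len(weights)
        partial = [0.0] * m                  # one accumulator per multiplier unit
        out = []
        for p in pixels:                     # each pixel accessed exactly once
            products = [w * p for w in weights]
            # each unit adds its product W*p to the sum arriving from its
            # neighbor, then passes the result one stage down the pipeline
            partial = [products[0]] + [partial[k - 1] + products[k]
                                       for k in range(1, m)]
            out.append(partial[-1])
        return out

    print(pipeline_filter([1.0, 2.0, 3.0, 4.0], [0.25, 0.5, 0.25]))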
Ultrasonic Imaging in Solids Using Wave Mode Beamforming.
di Scalea, Francesco Lanza; Sternini, Simone; Nguyen, Thompson Vu
2017-03-01
This paper discusses some improvements to ultrasonic synthetic imaging in solids with primary applications to nondestructive testing of materials and structures. Specifically, the study proposes new adaptive weights applied to the beamforming array that are based on the physics of the propagating waves, specifically the displacement structure of the propagating longitudinal (L) mode and shear (S) mode that naturally coexist in a solid. The wave mode structures can be combined with the wave geometrical spreading to better filter the array (in a matched filter approach) and improve its focusing ability compared to static array weights. This paper also proposes compounding, or summing, images obtained from the different wave modes to further improve the array gain without increasing its physical aperture. The wave mode compounding can be performed either incoherently or coherently, in analogy with compounding multiple frequencies or multiple excitations. Numerical simulations and experimental testing demonstrate the potential improvements obtainable by the wave structure adaptive weights compared to either static weights in conventional delay-and-sum focusing, or adaptive weights based on geometrical spreading alone in minimum-variance distortionless response focusing.
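A sketch of the compounding step alone, with synthetic complex images standing in for the per-mode (L and S) reconstructions:

    import numpy as np

    def compound(images, coherent=True):
        """Combine per-mode beamformed images. Coherent: sum complex images,
        then take the magnitude; incoherent: sum the magnitudes (more robust
        to phase errors between modes)."""
        stack = np.stack(images)
        return np.abs(stack.sum(axis=0)) if coherent else np.abs(stack).sum(axis=0)

    rng = np.random.default_rng(2)
    img_L = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    img_S = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    img = compound([img_L, img_S], coherent=False)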
A novel beamformer design method for medical ultrasound. Part I: Theory.
Ranganathan, Karthik; Walker, William F
2003-01-01
The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
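A toy continuous-wave illustration of the idea, assuming the point spread function is linear in the aperture weights (psf = A w) so that the sum squared error is minimized by ordinary least squares; the steering matrix and goal beam are illustrative:

    import numpy as np

    def msse_weights(A, psf_goal):
        """Aperture weights minimizing ||A @ w - psf_goal||^2."""
        w, *_ = np.linalg.lstsq(A, psf_goal, rcond=None)
        return w

    # 8-element array at half-wavelength pitch, far-field pattern vs angle
    theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
    n = np.arange(8)
    A = np.exp(1j * np.pi * np.outer(np.sin(theta), n))  # steering matrix
    goal = (np.abs(theta) < 0.05).astype(float)          # narrow target beam
    w = msse_weights(A, goal)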
Design of an automatic weight scale for an isolette
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Griffin, W.
1974-01-01
The design of an infant weight scale is reported that fits into an isolette without disturbing its controlled atmosphere. The scale platform uses strain gages to measure electronically deflections of cantilever beams positioned at its four corners. The weight of the infant is proportional to the sum of the output voltages produced by the gauges on each beam of the scale.
Soh, Nerissa L; Touyz, Stephen; Dobbins, Timothy A; Clarke, Simon; Kohn, Michael R; Lee, Ee Lian; Leow, Vincent; Ung, Ken E K; Walter, Garry
2009-01-01
To investigate the relationship between skinfold thickness and body mass index (BMI) in North European Caucasian and East Asian young women with and without anorexia nervosa (AN) in two countries. Height, weight and skinfold thicknesses were assessed in 137 young women with and without AN, in Australia and Singapore. The relationship between BMI and the sum of triceps, biceps, subscapular and iliac crest skinfolds was analysed with clinical status, ethnicity, age and country of residence as covariates. For the same BMI, women with AN had significantly smaller sums of skinfolds than women without AN. East Asian women both with and without AN had significantly greater skinfold sums than their North European Caucasian counterparts after adjusting for BMI. Lower BMI goals may be appropriate when managing AN patients of East Asian ancestry and the weight for height diagnostic criterion should be reconsidered for this group.
Practical optimization of Steiner trees via the cavity method
NASA Astrophysics Data System (ADS)
Braunstein, Alfredo; Muntoni, Anna
2016-07-01
The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself limit substantially the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound, that affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations to be able to cope with optimization of real-world instances. First, we develop an alternative ‘flat’ model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration allows Max-Sum to be transformed into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance on the challenge of the proposed approach was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.
Bayesian stock assessment of Pacific herring in Prince William Sound, Alaska.
Muradian, Melissa L; Branch, Trevor A; Moffitt, Steven D; Hulson, Peter-John F
2017-01-01
The Pacific herring (Clupea pallasii) population in Prince William Sound, Alaska crashed in 1993 and has yet to recover, affecting food web dynamics in the Sound and impacting Alaskan communities. To help researchers design and implement the most effective monitoring, management, and recovery programs, a Bayesian assessment of Prince William Sound herring was developed by reformulating the current model used by the Alaska Department of Fish and Game. The Bayesian model estimated pre-fishery spawning biomass of herring age-3 and older in 2013 to be a median of 19,410 mt (95% credibility interval 12,150-31,740 mt), with a 54% probability that biomass in 2013 was below the management limit used to regulate fisheries in Prince William Sound. The main advantages of the Bayesian model are that it can more objectively weight different datasets and provide estimates of uncertainty for model parameters and outputs, unlike the weighted sum-of-squares used in the original model. In addition, the revised model could be used to manage herring stocks with a decision rule that considers both stock status and the uncertainty in stock status.
Bayesian stock assessment of Pacific herring in Prince William Sound, Alaska
Moffitt, Steven D.; Hulson, Peter-John F.
2017-01-01
The Pacific herring (Clupea pallasii) population in Prince William Sound, Alaska crashed in 1993 and has yet to recover, affecting food web dynamics in the Sound and impacting Alaskan communities. To help researchers design and implement the most effective monitoring, management, and recovery programs, a Bayesian assessment of Prince William Sound herring was developed by reformulating the current model used by the Alaska Department of Fish and Game. The Bayesian model estimated pre-fishery spawning biomass of herring age-3 and older in 2013 to be a median of 19,410 mt (95% credibility interval 12,150–31,740 mt), with a 54% probability that biomass in 2013 was below the management limit used to regulate fisheries in Prince William Sound. The main advantages of the Bayesian model are that it can more objectively weight different datasets and provide estimates of uncertainty for model parameters and outputs, unlike the weighted sum-of-squares used in the original model. In addition, the revised model could be used to manage herring stocks with a decision rule that considers both stock status and the uncertainty in stock status. PMID:28222151
Monyeki, Kotsedi; Kemper, Han; Mogale, Alfred; Hay, Leon; Sekgala, Machoene; Mashiane, Tshephang; Monyeki, Suzan; Sebati, Betty
2017-08-29
The aim of this cross-sectional study was to investigate the association between birth weight, underweight, and blood pressure (BP) among Ellisras rural children aged between 5 and 15 years. Data were collected from 528 respondents who participated in the Ellisras Longitudinal Study (ELS) and had their birth weight recorded on their health clinic card. Standard procedures were used to take the anthropometric and BP measurements. Linear regression was used to assess the association between BP, underweight variables, and birth weight. Logistic regression was used to assess the association of hypertension risks, low birth weight, and underweight. The association between birth weight and BP was not statistically significant. There was a significant (p < 0.05) association between mean BP and the sum of four skinfolds (β = 0.26, 95% CI 0.15-0.23) even after adjusting for age (β = 0.18, 95% CI 0.01-0.22). Hypertension was significantly associated with weight for age z-scores (OR = 5.13, 95% CI 1.89-13.92) even after adjusting for age and sex (OR = 5.26, 95% CI 1.93-14.34). BP was significantly associated with the sum of four skinfolds, but not birth weight. Hypertension was significantly associated with underweight. Longitudinal studies should confirm whether the changes in body weight we found can influence the risk of cardiovascular diseases.
NASA Astrophysics Data System (ADS)
Tape, Carl; Liu, Qinya; Tromp, Jeroen
2007-03-01
We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an 'adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A 'target' phase-speed model is used to generate the 'data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
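A schematic numpy sketch of the kernel bookkeeping described above (array shapes and names are illustrative, not the authors' code): event kernels are summed into the misfit kernel, which is projected onto basis functions to give the gradient used in the descent step.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, nx, ny, n_basis = 5, 64, 64, 100

# One finite-frequency sensitivity ("event") kernel per earthquake,
# standing in for the interaction of regular and adjoint wavefields.
event_kernels = rng.standard_normal((n_events, nx, ny))

# The misfit kernel is simply the sum of the event kernels.
misfit_kernel = event_kernels.sum(axis=0)

# Project onto basis functions (orthonormal in the paper; merely
# normalized random fields here) to obtain the gradient of the misfit.
basis = rng.standard_normal((n_basis, nx, ny))
basis /= np.linalg.norm(basis.reshape(n_basis, -1), axis=1)[:, None, None]
gradient = np.tensordot(basis, misfit_kernel, axes=([1, 2], [1, 2]))

# One steepest-descent update of the model coefficients; the paper uses
# non-linear conjugate gradients, which adds a direction-memory term.
model = np.zeros(n_basis)
model -= 0.1 * gradient
```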
Scenario planning for water resource management in semi arid zone
NASA Astrophysics Data System (ADS)
Gupta, Rajiv; Kumar, Gaurav
2018-06-01
Scenario planning for water resource management in a semi-arid zone is performed using a systems input-output approach from time-domain analysis. This approach derives the future weights of the hydrological system's input variables from their precedent weights. The input variables considered here are precipitation, evaporation, population and crop irrigation. Ingles & De Souza's method and the Thornthwaite model are used to estimate runoff and evaporation, respectively. The difference between precipitation inflow and the sum of runoff and evaporation is taken as an approximation of groundwater recharge. Population and crop irrigation determine the total water demand. The extent to which groundwater recharge compensates the total water demand is analyzed, and further compensation is evaluated by proposing efficient methods of water conservation. The best water conservation measure to adopt is suggested on the basis of a cost-benefit analysis. A case study of nine villages in the Chirawa region of district Jhunjhunu, Rajasthan (India) validates the model.
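The water-balance bookkeeping described above reduces to simple arithmetic; a toy sketch with invented volumes (not data from the case study):

```python
def groundwater_recharge(precipitation, runoff, evaporation):
    """Recharge approximated as precipitation minus (runoff + evaporation)."""
    return precipitation - (runoff + evaporation)

def demand_compensation(recharge, population_demand, irrigation_demand):
    """Fraction of total water demand met by groundwater recharge."""
    return recharge / (population_demand + irrigation_demand)

# Illustrative volumes in million cubic metres.
r = groundwater_recharge(precipitation=120.0, runoff=35.0, evaporation=60.0)
print(demand_compensation(r, population_demand=18.0, irrigation_demand=22.0))
```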
Optimal Control Surface Layout for an Aeroservoelastic Wingbox
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2017-01-01
This paper demonstrates a technique for locating the optimal control surface layout of an aeroservoelastic Common Research Model wingbox, in the context of maneuver load alleviation and active flutter suppression. The combinatorial actuator layout design is solved using ideas borrowed from topology optimization, where the effectiveness of a given control surface is tied to a layout design variable, which varies from zero (the actuator is removed) to one (the actuator is retained). These layout design variables are optimized concurrently with a large number of structural wingbox sizing variables and control surface actuation variables, in order to minimize the sum of structural weight and actuator weight. Results are presented that demonstrate interdependencies between structural sizing patterns and optimal control surface layouts, for both static and dynamic aeroelastic physics.
Multi-source apportionment of polycyclic aromatic hydrocarbons using simultaneous linear equations
NASA Astrophysics Data System (ADS)
Marinaite, Irina; Semenov, Mikhail
2014-05-01
A new approach to identify multiple sources of polycyclic aromatic hydrocarbons (PAHs) and to evaluate the source contributions to atmospheric deposition of particulate PAHs is proposed. The approach is based on differences in concentrations of sums of PAHs with the same molecular weight among the sources. The data on PAH accumulation in snow as well as the source profiles were used for calculations. Contributions of an aluminum production plant, oil-fired central heating boilers, and residential wood and coal combustion were calculated using linear mixing models. The concentrations of PAH pairs such as Benzo[b]fluoranthene + Benzo[k]fluoranthene and Benzo[g,h,i]perylene + Indeno[1,2,3-c,d]pyrene normalized to Benzo[a]anthracene + Chrysene were used as tracers in the mixing equations. The results obtained using ratios of sums of PAHs were compared with those obtained using molecular diagnostic ratios such as Benzo[a]anthracene/Chrysene and Benzo[g,h,i]perylene/Indeno[1,2,3-c,d]pyrene. It was shown that the results obtained using diagnostic ratios as tracers are less reliable than those obtained using ratios of sums of PAHs. Funding was provided by Siberian Branch of Russian Academy of Sciences grant No. 8 (2012-2014).
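A sketch of the kind of linear mixing system this implies: observed tracer ratios expressed as a source-profile matrix times unknown contributions, closed with a mass-balance row and solved by least squares. All numbers are invented placeholders, not the measured profiles.

```python
import numpy as np

# Rows 1-2: tracer ratios, e.g. (BbF+BkF)/(BaA+Chr) and (BghiP+IcdP)/(BaA+Chr),
# for each source; row 3 forces the contributions to sum to one.
# Columns: aluminum plant, oil-fired boilers, residential wood/coal.
A = np.array([
    [1.8, 0.9, 0.4],
    [1.1, 0.5, 0.2],
    [1.0, 1.0, 1.0],
])
b = np.array([1.25, 0.74, 1.0])  # ratios observed in a snow sample, plus unity

contributions, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(["aluminum", "boilers", "wood_coal"], contributions.round(3))))
# -> roughly {'aluminum': 0.5, 'boilers': 0.3, 'wood_coal': 0.2}
```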
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After calculations, groups of identical pixels are overlapped successively in horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. Final-enhanced images generally have the best diagnostic quality and give more detail about the visibility of vessels and structures in capsule endoscopy images.
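A numpy sketch of our reading of this pipeline (not the authors' code; the exact group-overlap scheme in the paper may differ): equal, half-unit-style weighting over a 2x2 neighbourhood gives the preliminary image, and the final image halves the sum of original and preliminary pixels.

```python
import numpy as np

def enhance_channel(c):
    """Average each pixel with its right, lower, and lower-right neighbours
    (equal 0.25 weights, edges clamped), then blend with the original."""
    c = c.astype(float)
    down = np.vstack([c[1:], c[-1:]])
    right = np.hstack([c[:, 1:], c[:, -1:]])
    down_right = np.hstack([down[:, 1:], down[:, -1:]])
    preliminary = 0.25 * (c + down + right + down_right)
    return 0.5 * (c + preliminary)   # halve the sum of original + preliminary

rgb = np.random.rand(8, 8, 3)        # stand-in for a capsule endoscopy frame
enhanced = np.dstack([enhance_channel(rgb[..., k]) for k in range(3)])
```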
Solution to the sign problem in a frustrated quantum impurity model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hann, Connor T., E-mail: connor.hann@yale.edu; Huffman, Emilie; Chandrasekharan, Shailesh
2017-01-15
In this work we solve the sign problem of a frustrated quantum impurity model consisting of three quantum spin-half chains interacting through an anti-ferromagnetic Heisenberg interaction at one end. We first map the model into a repulsive Hubbard model of spin-half fermions hopping on three independent one dimensional chains that interact through a triangular hopping at one end. We then convert the fermion model into an inhomogeneous one dimensional model and express the partition function as a weighted sum over fermion worldline configurations. By imposing a pairing of fermion worldlines in half the space we show that all negative weight configurations can be eliminated. This pairing naturally leads to the original frustrated quantum spin model at half filling and thus solves its sign problem.
Large-Scale Multiantenna Multisine Wireless Power Transfer
NASA Astrophysics Data System (ADS)
Huang, Yang; Clerckx, Bruno
2017-11-01
Wireless Power Transfer (WPT) is expected to be a technology reshaping the landscape of low-power applications such as the Internet of Things, Radio Frequency Identification (RFID) networks, etc. Although there has been some progress towards multi-antenna multi-sine WPT design, the large-scale design of WPT, reminiscent of massive MIMO in communications, remains an open challenge. In this paper, we derive efficient multiuser algorithms based on a generalizable optimization framework, in order to design transmit sinewaves that maximize the weighted-sum/minimum rectenna output DC voltage. The study highlights the significant effect of the nonlinearity introduced by the rectification process on the design of waveforms in multiuser systems. Interestingly, in the single-user case, the optimal spatial domain beamforming, obtained prior to the frequency domain power allocation optimization, turns out to be Maximum Ratio Transmission (MRT). In contrast, in the general weighted sum criterion maximization problem, the spatial domain beamforming optimization and the frequency domain power allocation optimization are coupled. Assuming channel hardening, low-complexity algorithms are proposed based on asymptotic analysis, to maximize the two criteria. The structure of the asymptotically optimal spatial domain precoder can be found prior to the optimization. The performance of the proposed algorithms is evaluated. Numerical results confirm the inefficiency of the linear model-based design for the single and multi-user scenarios. It is also shown that, as nonlinear model-based designs, the proposed algorithms can benefit from an increasing number of sinewaves.
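The single-user finding is easy to illustrate: with a known per-tone channel, the MRT beamformer is just the matched filter. A minimal numpy sketch (random channel, unit-norm weights; the frequency-domain power allocation across tones is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n_tones, n_antennas = 8, 64

# Frequency-domain channel for each sinewave (tone).
H = (rng.standard_normal((n_tones, n_antennas))
     + 1j * rng.standard_normal((n_tones, n_antennas))) / np.sqrt(2)

# Maximum Ratio Transmission: per-tone weights matched to the channel.
W = H.conj() / np.linalg.norm(H, axis=1, keepdims=True)

# Coherent received amplitude per tone equals the channel norm ||h_t||;
# a power allocation step would then split transmit power over tones.
received = np.einsum("ta,ta->t", W, H)
print(np.real(received))
```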
Averaged kick maps: less noise, more signal…and probably less bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor
2009-09-01
Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σ_A) weighted. Analysis shows that they are comparable and correspond better to the final model than σ_A and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
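The AK recipe itself is compact; in the sketch below, map_from_coords is a hypothetical stand-in for the crystallographic map calculation (in practice done by a package such as CCTBX/PHENIX), and the shift magnitude is an illustrative choice.

```python
import numpy as np

def averaged_kick_map(coords, map_from_coords, n_kicks=20, max_shift=0.3, seed=0):
    """Average the maps computed from randomly 'kicked' coordinate sets.

    coords: (n_atoms, 3) array; max_shift: maximum random displacement
    (Angstroms, illustrative); map_from_coords: callable returning a
    density grid for a given coordinate set (hypothetical helper).
    """
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n_kicks):
        kicked = coords + rng.uniform(-max_shift, max_shift, size=coords.shape)
        m = map_from_coords(kicked)
        acc = m if acc is None else acc + m
    return acc / n_kicks
```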
Bergert, F Bryan; Nosofsky, Robert M
2007-01-01
The authors develop and test generalized versions of take-the-best (TTB) and rational (RAT) models of multiattribute paired-comparison inference. The generalized models make allowances for subjective attribute weighting, probabilistic orders of attribute inspection, and noisy decision making. A key new test involves a response-time (RT) approach. TTB predicts that RT is determined solely by the expected time required to locate the 1st discriminating attribute, whereas RAT predicts that RT is determined by the difference in summed evidence between the 2 alternatives. Critical test pairs are used that partially decouple these 2 factors. Under conditions in which ideal observer TTB and RAT strategies yield equivalent decisions, both the RT results and the estimated attribute weights suggest that the vast majority of subjects adopted the generalized TTB strategy. The RT approach is also validated in an experimental condition in which use of a RAT strategy is essentially forced upon subjects. (c) 2007 APA, all rights reserved.
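A toy contrast of the two rules on a single paired comparison (binary attribute values, invented weights) shows how they can agree on the decision while predicting different RT drivers, which is exactly what the critical test pairs exploit:

```python
def ttb_choice(a, b, order):
    """Take-the-best: answer on the first discriminating attribute;
    RT tracks how many attributes must be inspected (steps)."""
    for steps, i in enumerate(order, start=1):
        if a[i] != b[i]:
            return (0 if a[i] > b[i] else 1), steps
    return None, len(order)            # nothing discriminates: guess

def rat_choice(a, b, weights):
    """Rational model: compare weighted evidence sums;
    RT tracks the difference in summed evidence."""
    ev_a = sum(w * x for w, x in zip(weights, a))
    ev_b = sum(w * x for w, x in zip(weights, b))
    return (0 if ev_a > ev_b else 1), abs(ev_a - ev_b)

a, b = (1, 0, 1, 0), (1, 1, 0, 0)
weights = (0.5, 0.3, 0.15, 0.05)       # subjective attribute weights
order = sorted(range(4), key=lambda i: -weights[i])
print(ttb_choice(a, b, order))         # -> (1, 2): decided at the 2nd attribute
print(rat_choice(a, b, weights))       # -> (1, ~0.15): same choice, small margin
```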
Control method and system for hydraulic machines employing a dynamic joint motion model
Danko, George (Reno, NV)
2011-11-22
A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
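The per-link error signal described above is just a weighted sum of a position error and its time-derivative error; a minimal sketch (the coefficient values are invented; in the patent they derive from the constant coefficients of the identified dynamic model):

```python
def link_error(x_meas, x_ref, v_meas, v_ref, w_pos, w_vel):
    """Weighted sum of position and velocity errors for one arm link."""
    return w_pos * (x_meas - x_ref) + w_vel * (v_meas - v_ref)

# Closed negative-feedback use: the command acts to diminish the error.
e = link_error(x_meas=0.42, x_ref=0.40, v_meas=0.10, v_ref=0.08,
               w_pos=3.0, w_vel=0.8)
command = -e
print(command)
```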
Fusing metabolomics data sets with heterogeneous measurement errors
Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.
2018-01-01
Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity, by transformation of the raw data, by weighted filtering before modelling and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches that both estimate a model of the measurement error did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490
QRPA plus phonon coupling model and the photoabsorption cross section for 18,20,22O
NASA Astrophysics Data System (ADS)
Colò, G.; Bortignon, P. F.
2001-12-01
We have calculated the electric dipole strength distributions in the unstable neutron-rich oxygen isotopes 18,20,22O, in a model which includes up to four quasiparticle-type configurations. The model extends, by including the effect of pairing correlations, a previous model that was very successful around closed-shell nuclei, and it is based on the quasiparticle-phonon coupling. Low-lying dipole strength is found, which exhausts between 5 and 10% of the Thomas-Reiche-Kuhn (TRK) energy-weighted sum rule (EWSR) below 15 MeV excitation energy, in rather good agreement with recent experimental data. The role of the phonon coupling is shown to be crucial in order to obtain this result.
NASA Astrophysics Data System (ADS)
Mucha, Piotr B.; Peszek, Jan
2018-01-01
The Cucker-Smale flocking model belongs to a wide class of kinetic models that describe a collective motion of interacting particles that exhibit some specific tendency, e.g. to aggregate, flock or disperse. This paper examines the kinetic Cucker-Smale equation with a singular communication weight. Given a compactly supported measure as an initial datum we construct a global in time weak measure-valued solution in the space C_weak(0,∞; M). The solution is defined as a mean-field limit of the empirical distributions of particles, the dynamics of which is governed by the Cucker-Smale particle system. The studied communication weight is ψ(s) = |s|^(−α) with α ∈ (0, 1/2). This range of singularity admits the sticking of characteristics/trajectories. The second result concerns the weak-atomic uniqueness property stating that a weak solution initiated by a finite sum of atoms, i.e. Dirac deltas of the form m_i δ_{x_i} ⊗ δ_{v_i}, preserves its atomic structure. Hence these coincide with unique solutions to the system of ODEs associated with the Cucker-Smale particle system.
An electrophysiological signature of summed similarity in visual working memory.
van Vugt, Marieke K; Sekuler, Robert; Wilson, Hugh R; Kahana, Michael J
2013-05-01
Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth electrode recordings and scalp electroencephalography. On each experimental trial, participants judged whether a test face had been among a small set of recently studied faces. Consistent with summed-similarity theory, participants' tendency to endorse a test item increased as a function of its summed similarity to the items on the just-studied list. To characterize this behavioral effect of summed similarity, we successfully fit a summed-similarity model to individual participant data from each experiment. Using the parameters determined from fitting the summed-similarity model to the behavioral data, we examined the relation between summed similarity and brain activity. We found that 4-9 Hz theta activity in the medial temporal lobe and 2-4 Hz delta activity recorded from frontal and parietal cortices increased with summed similarity. These findings demonstrate direct neural correlates of the similarity computations that form the foundation of several major cognitive theories of human recognition memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
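The core computation these models posit is small; a toy sketch with an exponential similarity kernel, a common choice in exemplar models (kernel and response parameters invented):

```python
import numpy as np

def summed_similarity(probe, study_items, tau=1.0):
    """Sum of exponential similarities between a probe and each studied item
    (items as feature vectors; tau sets the similarity gradient)."""
    d = np.linalg.norm(study_items - probe, axis=1)
    return float(np.exp(-d / tau).sum())

def p_endorse(probe, study_items, criterion=1.5, slope=4.0):
    """Map summed similarity to the probability of an 'old' response."""
    s = summed_similarity(probe, study_items)
    return 1.0 / (1.0 + np.exp(-slope * (s - criterion)))

faces = np.random.rand(4, 10)            # four studied faces, 10 features each
print(p_endorse(faces[0] + 0.05, faces))  # near-duplicate probe: high endorsement
```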
Charlier, Ruben; Caspers, Maarten; Knaeps, Sara; Mertens, Evelien; Lambrechts, Diether; Lefevre, Johan; Thomis, Martine
2017-03-01
Since both muscle mass and strength performance are polygenic in nature, the current study compared four genetic predisposition scores (GPS) in their ability to predict these phenotypes. Data were gathered within the framework of the first-generation Flemish Policy Research Centre "Sport, Physical Activity and Health" (2002-2004). Results are based on muscle characteristics data of 565 Flemish Caucasians (19-73 yr, 365 men). Skeletal muscle mass (SMM) was determined from bioelectrical impedance. The Biodex dynamometer was used to measure isometric (PT_static120°) and isokinetic strength (PT_dynamic60° and PT_dynamic240°), ballistic movement speed (S_20%), and muscular endurance (Work) of the knee extensors. Genotyping was done for 153 gene variants, selected on the basis of a literature search and the expression quantitative trait loci of selected genes. Four GPS were designed: a total GPS (based on the sum of all 153 variants, each favorable allele = score 1), a data-driven and a weighted GPS [respectively, the sum of favorable alleles of those variants with significant b-coefficients in stepwise regression (GPS_dd), and the sum of these variants weighted with their respective partial r² (GPS_w)], and an elastic net GPS (based on the variants that were selected by an elastic net regularization; GPS_en). It was found that the four different GPS models were able to significantly predict up to ~7% of the variance in strength performance. GPS_en made the best prediction of SMM and Work. However, this was not the case for the remaining strength performance parameters, where the best predictions were made by GPS_dd and GPS_w. Copyright © 2017 the American Physiological Society.
Construction of an Exome-Wide Risk Score for Schizophrenia Based on a Weighted Burden Test.
Curtis, David
2018-01-01
Polygenic risk scores obtained as a weighted sum of associated variants can be used to explore association in additional data sets and to assign risk scores to individuals. The methods used to derive polygenic risk scores from common SNPs are not suitable for variants detected in whole exome sequencing studies. Rare variants, which may have major effects, are seen too infrequently to judge whether they are associated and may not be shared between training and test subjects. A method is proposed whereby variants are weighted according to their frequency, their annotations and the genes they affect. A weighted sum across all variants provides an individual risk score. Scores constructed in this way are used in a weighted burden test and are shown to be significantly different between schizophrenia cases and controls using a five-way cross-validation procedure. This approach represents a first attempt to summarise exome sequence variation into a summary risk score, which could be combined with risk scores from common variants and from environmental factors. It is hoped that the method could be developed further. © 2017 John Wiley & Sons Ltd/University College London.
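A minimal sketch of a score of this general kind: a weighted sum over variants with weights increasing for rarer alleles and more damaging annotations. The inverse square-root frequency weight below is a common rare-variant choice used here for illustration, not necessarily the paper's exact weighting.

```python
import numpy as np

def exome_risk_score(genotypes, allele_freqs, annot_weights):
    """Weighted sum over variants.

    genotypes: counts of the rare allele per variant (0, 1, or 2);
    allele_freqs: population frequencies; annot_weights: per-variant
    annotation/gene weights (illustrative)."""
    w_freq = 1.0 / np.sqrt(allele_freqs * (1.0 - allele_freqs))
    return float(np.sum(genotypes * w_freq * annot_weights))

g = np.array([0, 1, 0, 2])
p = np.array([0.010, 0.001, 0.050, 0.020])
a = np.array([1.0, 2.0, 0.5, 1.5])       # e.g. damaging > benign
print(exome_risk_score(g, p, a))
```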
Sandau, Courtney D; Ayotte, Pierre; Dewailly, Eric; Duffe, Jason; Norstrom, Ross J
2002-01-01
Concentrations of polychlorinated biphenyls (PCBs), hydroxylated metabolites of PCBs (HO-PCBs) and of octachlorostyrene (4-HO-HpCS), and pentachlorophenol (PCP) were determined in umbilical cord plasma samples from three different regions of Québec. The regions studied included two coastal areas where exposure to PCBs is high because of marine-food-based diets, Nunavik (Inuit people) and the Lower North Shore of the Gulf of St. Lawrence (subsistence fishermen), and a southern Québec urban center where PCB exposure is at background levels (Québec City). The main chlorinated phenolic compound in all regions was PCP. Concentrations of PCP were not significantly different among regions (geometric mean concentration 1,670 pg/g, range 628-7,680 pg/g wet weight in plasma). The ratio of PCP to polychlorinated biphenyl congener number 153 (CB153) concentration ranged from 0.72 to 42.3. Sum HO-PCB (ΣHO-PCBs) concentrations differed among regions, with geometric mean concentrations of 553 (range 238-1,750), 286 (103-788), and 234 (147-464) pg/g wet weight plasma for the Lower North Shore, Nunavik, and the southern Québec groups, respectively. Lower North Shore samples also had the highest geometric mean concentration of sum PCBs (sum of 49 congeners; ΣPCBs), 2,710 (525-7,720) pg/g wet weight plasma. ΣPCB concentrations for Nunavik samples and southern samples were 1,510 (309-6,230) and 843 (290-1,650) pg/g wet weight plasma. Concentrations (log transformed) of ΣHO-PCBs and ΣPCBs were significantly correlated (r = 0.62, p < 0.001), as were concentrations of all major individual HO-PCB congeners and individual PCB congeners. In Nunavik and Lower North Shore samples, free thyroxine (T4) concentrations (log transformed) were negatively correlated with the sum of quantitated chlorinated phenolic compounds (sum of PCP and ΣHO-PCBs; r = -0.47, p = 0.01, n = 20) and were not correlated with any PCB congeners or ΣPCBs. This suggests that PCP and HO-PCBs are possibly altering thyroid hormone status in newborns, which could lead to neurodevelopmental effects in infants. Further studies are needed to examine the effects of chlorinated phenolic compounds on thyroid hormone status in newborns. PMID:11940460
Halford, Keith J.
2006-01-01
MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
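The distinctive piece of the objective function, unequal weights on negative and positive residuals so that inequality constraints fold into the least-squares fit, is easy to sketch:

```python
import numpy as np

def weighted_ssq(simulated, observed, w_pos, w_neg):
    """Weighted sum of squares with asymmetric residual weighting.

    With w_neg >> w_pos for, say, water levels, simulated values falling
    below the observations dominate the objective, i.e. a soft
    minimum-level (inequality) constraint."""
    r = simulated - observed
    w = np.where(r >= 0.0, w_pos, w_neg)
    return float(np.sum(w * r ** 2))

obs = np.array([10.0, 12.0, 9.5])    # e.g. minimum allowed water levels
sim = np.array([10.4, 11.2, 9.6])
print(weighted_ssq(sim, obs, w_pos=0.0, w_neg=100.0))  # only shortfalls count
```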
Determination of total dissolved solids in water analysis
Howard, C.S.
1933-01-01
The figure for total dissolved solids, based on the weight of the residue on evaporation after heating for 1 hour at 180°C, is reasonably close to the sum of the determined constituents for most natural waters. Waters of the carbonate type that are high in magnesium may give residues that weigh less than the sum. Natural waters of the sulfate type usually give residues that are too high on account of incomplete drying.
Sykes-Muskett, Bianca J; Prestwich, Andrew; Lawton, Rebecca J; Armitage, Christopher J
2015-01-01
Financial incentives to improve health have received increasing attention, but are subject to ethical concerns. Monetary Contingency Contracts (MCCs), which require individuals to deposit money that is refunded contingent on reaching a goal, are a potential alternative strategy. This review evaluates systematically the evidence for weight loss-related MCCs. Randomised controlled trials testing the effect of weight loss-related MCCs were identified in online databases. Random-effects meta-analyses were used to calculate overall effect sizes for weight loss and participant retention. The association between MCC characteristics and weight loss/participant retention effects was calculated using meta-regression. There was a significant small-to-medium effect of MCCs on weight loss during treatment when one outlier study was removed. Group refunds, deposit not paid as lump sum, participants setting their own deposit size and additional behaviour change techniques were associated with greater weight loss during treatment. Post-treatment, there was no significant effect of MCCs on weight loss. There was a significant small-to-medium effect of MCCs on participant retention during treatment. Researcher-set deposits paid as one lump sum, refunds delivered on an all-or-nothing basis and refunds contingent on attendance at classes were associated with greater retention during treatment. Post-treatment, there was no significant effect of MCCs on participant retention. The results support the use of MCCs to promote weight loss and participant retention up to the point that the incentive is removed and identifies the conditions under which MCCs work best.
40 CFR 60.562-1 - Standards: Process emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... methane and ethane) (TOC) by 98 weight percent, or to a concentration of 20 parts per million by volume (ppmv) on a dry basis, whichever is less stringent. The TOC is expressed as the sum of the actual... Polypropylene and Polyethylene Affected Facilities Procedure /a/ Applicable TOC weight percent range Control/no...
A Decision Support System for Solving Multiple Criteria Optimization Problems
ERIC Educational Resources Information Center
Filatovas, Ernestas; Kurasova, Olga
2011-01-01
In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
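The WS scalarization itself is one line: minimize a weighted sum of the criteria, re-solving as the decision maker adjusts the weights. A minimal illustration with two invented criteria on one variable:

```python
from scipy.optimize import minimize_scalar

f1 = lambda x: (x - 1.0) ** 2     # criterion 1 (illustrative)
f2 = lambda x: (x + 1.0) ** 2     # criterion 2 (illustrative)

def ws_solution(w1, w2):
    """Minimize w1*f1 + w2*f2; sweeping the weights interactively
    traces out (the convex part of) the Pareto front."""
    return minimize_scalar(lambda x: w1 * f1(x) + w2 * f2(x)).x

for w1 in (0.2, 0.5, 0.8):
    print(w1, round(ws_solution(w1, 1.0 - w1), 3))
```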
The Seven Deadly Sins of World University Ranking: A Summary from Several Papers
ERIC Educational Resources Information Center
Soh, Kaycheng
2017-01-01
World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…
Transition probability functions for applications of inelastic electron scattering
Löffler, Stefan; Schattschneider, Peter
2012-01-01
In this work, the transition matrix elements for inelastic electron scattering, which are the central quantity for interpreting experiments, are investigated. The angular part is given by spherical harmonics. For the weighted radial wave function overlap, analytic expressions are derived in the Slater-type and the hydrogen-like orbital models. These expressions are shown to be composed of a finite sum of polynomials and elementary trigonometric functions. Hence, they are easy to use, require little computation time, and are significantly more accurate than commonly used approximations. PMID:22560709
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1981-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
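The proposed parallel-channel structure is essentially a complementary filter: the visual cue is low-passed, the vestibular cue high-passed, and the two sum to the self-velocity estimate. A discrete-time sketch (the first-order filters and time constant are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

def complementary_estimate(visual, vestibular, dt, tau=1.0):
    """Self-rotation velocity estimate from two cue channels."""
    alpha = dt / (tau + dt)
    est = np.zeros_like(visual)
    lp = hp = 0.0
    prev = vestibular[0]
    for k in range(len(visual)):
        lp += alpha * (visual[k] - lp)                    # visual: low-pass
        hp = (1 - alpha) * (hp + vestibular[k] - prev)    # vestibular: high-pass
        prev = vestibular[k]
        est[k] = lp + hp                                  # complementary sum
    return est

t = np.arange(0.0, 10.0, 0.01)
true_v = np.sin(0.5 * t)
est = complementary_estimate(true_v, true_v, dt=0.01)  # both cues veridical
print(float(np.abs(est - true_v).max()))               # small residual error
```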
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-03-16
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-01-01
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem. PMID:25844012
[Levels and distribution of short chain chlorinated paraffins in seafood from Dalian, China].
Yu, Jun-Chao; Wang, Thanh; Wang, Ya-Wei; Meng, Mei; Chen, Ru; Jiang, Gui-Bin
2014-05-01
Seafood samples were collected from Dalian, China to study the accumulation and distribution characteristics of short chain chlorinated paraffins (SCCPs) by GC/ECNI-LRMS. Sums of SCCPs (ΣSCCPs, dry weight) were in the range of 77-8,250 ng·g⁻¹, with the lowest value in Scapharca subcrenata and the highest concentration in Neptunea cumingi. The ΣSCCP concentrations (dry weight) in fish, shrimp/crab and shellfish were in the ranges of 100-3,510, 394-5,440, and 77-8,250 ng·g⁻¹, respectively. Overall, the C10 and C11 homologues were the most predominant carbon groups of SCCPs in seafood from this area, and a relatively higher proportion of C12-13 was observed in seafood with higher ΣSCCP concentrations. With regard to chlorine content, Cl7, Cl8 and Cl6 were the major groups. Significant correlations were found among concentrations of different SCCP homologues (except Cl7 vs. Cl10), which indicated that they might share the same sources and/or have similar accumulation, migration and transformation processes.
Jacob, Mathews; Blu, Thierry; Vaillant, Cedric; Maddocks, John H; Unser, Michael
2006-01-01
We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner-products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well.
Thrane, Jan-Erik; Kyle, Marcia; Striebel, Maren; Haande, Sigrid; Grung, Merete; Rohrlack, Thomas; Andersen, Tom
2015-01-01
The Gauss-peak spectra (GPS) method represents individual pigment spectra as weighted sums of Gaussian functions, and uses these to model absorbance spectra of phytoplankton pigment mixtures. We here present several improvements for this type of methodology, including adaptation to plate reader technology and efficient model fitting by open source software. We use a one-step modeling of both pigment absorption and background attenuation with non-negative least squares, following a one-time instrument-specific calibration. The fitted background is shown to be higher than a solvent blank, with features reflecting contributions from both scatter and non-pigment absorption. We assessed pigment aliasing due to absorption spectra similarity by Monte Carlo simulation, and used this information to select a robust set of identifiable pigments that are also expected to be common in natural samples. To test the method’s performance, we analyzed absorbance spectra of pigment extracts from sediment cores, 75 natural lake samples, and four phytoplankton cultures, and compared the estimated pigment concentrations with concentrations obtained using high performance liquid chromatography (HPLC). The deviance between observed and fitted spectra was generally very low, indicating that measured spectra could successfully be reconstructed as weighted sums of pigment and background components. Concentrations of total chlorophylls and total carotenoids could accurately be estimated for both sediment and lake samples, but individual pigment concentrations (especially carotenoids) proved difficult to resolve due to similarity between their absorbance spectra. In general, our modified-GPS method provides an improvement of the GPS method that is a fast, inexpensive, and high-throughput alternative for screening of pigment composition in samples of phytoplankton material. PMID:26359659
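The one-step fit is a non-negative least squares over pigment components plus a background term; a compact sketch with invented Gaussian parameters (not the calibrated GPS values):

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(400, 700, 301)                 # wavelength grid (nm)
gauss = lambda c, s: np.exp(-0.5 * ((wl - c) / s) ** 2)

# Pigment spectra as weighted sums of Gaussians (numbers invented).
chl_a = gauss(430, 15) + 0.7 * gauss(662, 10)
carotenoid = gauss(450, 20) + 0.6 * gauss(480, 15)
background = np.ones_like(wl)                   # flat attenuation term

A = np.column_stack([chl_a, carotenoid, background])
measured = (0.8 * chl_a + 0.3 * carotenoid + 0.05
            + 0.01 * np.random.default_rng(0).random(wl.size))

coeffs, _ = nnls(A, measured)
print(dict(zip(["chl_a", "carotenoid", "background"], coeffs.round(3))))
```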
Modeling the solute transport by particle-tracing method with variable weights
NASA Astrophysics Data System (ADS)
Jiang, J.
2016-12-01
Particle-tracing method is usually used to simulate the solute transport in fracture media. In this method, the concentration at one point is proportional to the number of particles visiting this point. However, this method is rather inefficient at points with small concentration. Few particles visit these points, which leads to violent oscillation or gives a zero value of concentration. In this paper, we propose a particle-tracing method with variable weights. The concentration at one point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during simulations according to the estimated probabilities of the corresponding walks. If the weight W of a tracking particle is larger than the relative concentration C at the corresponding site, the tracking particle is split into Int(W/C) copies and each copy is simulated independently with weight W/Int(W/C). If the weight W of a tracking particle is less than the relative concentration C at the corresponding site, the tracking particle is continually tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weights scheme, we can eliminate the violent oscillation and increase the accuracy by orders of magnitude.
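The weight-adjustment rule quoted above is a splitting/Russian-roulette scheme and transcribes almost directly into code (the walker representation and helper name are ours, not the paper's):

```python
import random

def adjust_weight(weight, C, rng=random):
    """Split or roulette one tracking particle against the estimated
    relative concentration C at its current site.

    Returns the list of weights of the walkers that continue."""
    if weight > C:
        n = int(weight / C)              # split into Int(W/C) copies...
        return [weight / n] * n          # ...each carrying weight W/Int(W/C)
    if weight < C:
        if rng.random() < weight / C:    # survive with probability W/C...
            return [C]                   # ...and continue with weight C
        return []                        # terminated
    return [weight]

print(adjust_weight(0.9, 0.2))   # heavy walker: split into 4 copies
print(adjust_weight(0.05, 0.2))  # light walker: roulette
```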
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
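An EWMA chart of operative times is a one-line recurrence plus control limits; a sketch with invented numbers (a real chart would use the case-mix- and experience-adjusted times):

```python
import numpy as np

def ewma(x, lam=0.2):
    """z_k = lam * x_k + (1 - lam) * z_{k-1}, seeded at the first value."""
    z = np.empty(len(x))
    z[0] = x[0]
    for k in range(1, len(x)):
        z[k] = lam * x[k] + (1 - lam) * z[k - 1]
    return z

lam = 0.2
times = np.array([182, 175, 169, 160, 151, 140, 133, 126, 118], dtype=float)
center, sigma = times.mean(), times.std(ddof=1)
z = ewma(times, lam)

# Asymptotic three-sigma EWMA control limits (illustrative).
half_width = 3 * sigma * np.sqrt(lam / (2 - lam))
signals = np.where((z > center + half_width) | (z < center - half_width))[0]
print(z.round(1), signals)   # indices where the chart signals a shift
```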
Narasimhalu, Kaavya; Lee, June; Auchus, Alexander P; Chen, Christopher P L H
2008-01-01
Previous work combining the Mini-Mental State Examination (MMSE) and Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) has been conducted in western populations. We ascertained, in an Asian population, (1) the best method of combining the tests, (2) the effects of educational level, and (3) the effect of different dementia etiologies. Data from 576 patients were analyzed (407 nondemented controls, 87 Alzheimer's disease and 82 vascular dementia patients). Sensitivity, specificity and AUC values were obtained using three methods, the 'And' rule, the 'Or' rule, and the 'weighted sum' method. The 'weighted sum' rule had statistically superior AUC and specificity results, while the 'Or' rule had the best sensitivity results. The IQCODE outperformed the MMSE in all analyses. Patients with no education benefited more from combined tests. There was no difference between Alzheimer's disease and vascular dementia populations in the predictive value of any of the combined methods. We recommend that the IQCODE be used to supplement the MMSE whenever available and that the 'weighted sum' method be used to combine the MMSE and the IQCODE, particularly in populations with low education. As the study population selected may not be representative of the general population, further studies are required before generalization to nonclinical samples. (c) 2007 S. Karger AG, Basel.
The Quantification of Consistent Subjective Logic Tree Branch Weights for PSHA
NASA Astrophysics Data System (ADS)
Runge, A. K.; Scherbaum, F.
2012-04-01
The development of quantitative models for the rate of exceedance of seismically generated ground motion parameters is the target of probabilistic seismic hazard analysis (PSHA). In regions of low to moderate seismicity, the selection and evaluation of source- and/or ground-motion models is often a major challenge to hazard analysts and affected by large epistemic uncertainties. In PSHA this type of uncertainty is commonly treated within a logic tree framework in which the branch weights express the degree-of-belief values of an expert in the corresponding set of models. For the calculation of the distribution of hazard curves, these branch weights are subsequently used as subjective probabilities. However, the quality of the results depends strongly on the "quality" of the expert knowledge. A major challenge for experts in this context is to provide weight estimates which are logically consistent (in the sense of Kolmogorov's axioms) and to be aware of and to deal with the multitude of heuristics and biases which affect human judgment under uncertainty. For example, people tend to give smaller weights to each branch of a logic tree the more branches it has, starting with equal weights for all branches and then adjusting this uniform distribution based on their beliefs about how the branches differ. This effect is known as pruning bias.¹ A similar unwanted effect, which may even wrongly suggest robustness of the corresponding hazard estimates, will appear in cases where all models are first judged according to some numerical quality measure approach and the resulting weights are subsequently normalized to sum up to one.² To address these problems, we have developed interactive graphical tools for the determination of logic tree branch weights in the form of logically consistent subjective probabilities, based on the concepts suggested in Curtis and Wood (2004).³ Instead of determining the set of weights for all the models in a single step, the computer-driven elicitation process is performed as a sequence of evaluations of relative weights for small subsets of models which are presented to the analyst. From these, the distribution of logic tree weights for the whole model set is determined as the solution of an optimization problem. The model subset presented to the analyst in each step is designed to maximize the expected information. The result of this process is a set of logically consistent weights together with a measure of confidence determined from the amount of conflicting information which is provided by the expert during the relative weighting process.
Thompson, Amanda L; Adair, Linda S; Bentley, Margaret E
2012-01-01
The prevalence of overweight among infants and toddlers has increased dramatically in the past three decades, highlighting the importance of identifying factors contributing to early excess weight gain, particularly in high-risk groups. Parental feeding styles, the attitudes and behaviors that characterize parental approaches to maintaining or modifying children’s eating behavior, are an important behavioral component shaping early obesity risk. Using longitudinal data from the Infant Care and Risk of Obesity Study, a cohort study of 217 African-American mother-infant pairs with feeding styles, dietary recalls and anthropometry collected from 3-18 months of infant age, we examined the relationship between feeding styles, infant diet and weight-for-age and sum of skinfolds. Longitudinal mixed models indicated that higher pressuring and indulgent feeding style scores were positively associated with greater infant energy intake, reduced odds of breastfeeding and higher levels of age-inappropriate feeding of liquids and solids, while restrictive feeding styles were associated with lower energy intake, higher odds of breastfeeding and reduced odds of inappropriate feeding. Pressuring and restriction were also oppositely related to infant size, with pressuring associated with lower infant weight-for-age and restriction with higher weight-for-age and sum of skinfolds. Infant size also predicted maternal feeding styles in subsequent visits, indicating that the relationship between size and feeding styles is likely bidirectional. Our results suggest that the degree to which parents are pressuring or restrictive during feeding shapes the early feeding environment and, consequently, may be an important environmental factor in the development of obesity. PMID:23592664
NASA Astrophysics Data System (ADS)
Méchi, Rachid; Farhat, Habib; Said, Rachid
2016-01-01
Nongray radiation calculations are carried out for a case problem available in the literature. The problem is a non-isothermal and inhomogeneous CO2-H2O-N2 gas mixture confined within an axisymmetric cylindrical furnace. The numerical procedure is based on the zonal method associated with the weighted sum of gray gases (WSGG) model. The effect of the wall emissivity on the heat flux losses is discussed. It is shown that this property strongly affects the furnace efficiency and that the most important heat fluxes are those leaving through the circumferential boundary. The numerical procedure adopted in this work is found to be effective and may be relied on to simulate coupled turbulent combustion-radiation in fired furnaces.
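For reference, the WSGG closure used here expresses the total emissivity of the gas mixture as a weighted sum of a few gray gases (standard form): ε(T, pL) = Σᵢ aᵢ(T) (1 − e^(−κᵢ pL)), with Σᵢ aᵢ(T) = 1, where the aᵢ are temperature-dependent weights, the κᵢ are gray-gas absorption coefficients (κ₀ = 0 for the transparent windows), p is the partial pressure of the participating species, and L the path length.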
Photonuclear sum rules and the tetrahedral configuration of He4
NASA Astrophysics Data System (ADS)
Gazit, Doron; Barnea, Nir; Bacca, Sonia; Leidemann, Winfried; Orlandini, Giuseppina
2006-12-01
Three well-known photonuclear sum rules (SR), i.e., the Thomas-Reiche-Kuhn, the bremsstrahlungs and the polarizability SR are calculated for He4 with the realistic nucleon-nucleon potential Argonne V18 and the three-nucleon force Urbana IX. The relation between these sum rules and the corresponding energy weighted integrals of the cross section is discussed. Two additional equivalences for the bremsstrahlungs SR are given, which connect it to the proton-neutron and neutron-neutron distances. Using them, together with our result for the bremsstrahlungs SR, we find a deviation from the tetrahedral symmetry of the spatial configuration of He4. The possibility to access this deviation experimentally is discussed.
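For orientation, the classical TRK value referenced above is the energy-weighted integral ∫ σ_γ(E) dE = 60 (NZ/A) (1 + κ) MeV·mb, where κ is the enhancement induced by exchange contributions of the nuclear force; for He4, NZ/A = 1, so the unenhanced baseline is 60 MeV·mb.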
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.
An anthropometric model to estimate neonatal fat mass using air displacement plethysmography
2012-01-01
Background: Current validated neonatal body composition methods are limited/impractical for use outside of a clinical setting because they are labor intensive, time consuming, and require expensive equipment. The purpose of this study was to develop an anthropometric model to estimate neonatal fat mass (kg) using air displacement plethysmography (PEA POD® Infant Body Composition System) as the criterion. Methods: A total of 128 healthy term infants, 60 females and 68 males, from a multiethnic cohort were included in the analyses. Gender, race/ethnicity, gestational age, age (in days), anthropometric measurements of weight, length, abdominal circumference, skin-fold thicknesses (triceps, biceps, subscapular, and thigh), and body composition by PEA POD® were collected within 1-3 days of birth. Backward stepwise linear regression was used to determine the model that best predicted neonatal fat mass. Results: The statistical model that best predicted neonatal fat mass (kg) was: -0.012 - 0.064*gender + 0.024*day of measurement post-delivery - 0.150*weight (kg) + 0.055*weight (kg)² + 0.046*ethnicity + 0.020*sum of three skin-fold thicknesses (triceps, subscapular, and thigh); R² = 0.81, MSE = 0.08 kg. Conclusions: Our anthropometric model explained 81% of the variance in neonatal fat mass. Future studies with a greater variety of neonatal anthropometric measurements may provide equations that explain more of the variance. PMID:22436534
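The published model transcribes directly into a function; note that the 0/1 coding of the gender and ethnicity indicators is not specified in the abstract, so the flags below are assumptions:

```python
def neonatal_fat_mass_kg(gender, day, weight_kg, ethnicity, skinfolds_mm):
    """Fat mass (kg) from the abstract's regression.

    gender, ethnicity: 0/1 indicator codes (assumed; not defined in the
    abstract); day: day of measurement post-delivery; skinfolds_mm: sum
    of triceps, subscapular, and thigh skinfolds."""
    return (-0.012
            - 0.064 * gender
            + 0.024 * day
            - 0.150 * weight_kg
            + 0.055 * weight_kg ** 2
            + 0.046 * ethnicity
            + 0.020 * skinfolds_mm)

print(neonatal_fat_mass_kg(gender=0, day=2, weight_kg=3.4,
                           ethnicity=1, skinfolds_mm=14.0))  # ~0.49 kg
```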
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
29 CFR 1917.71 - Terminals handling intermodal containers or roll-on roll-off operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... pounds; (2) The maximum cargo weight the container is designed to carry, in pounds; and (3) The sum of the weight of the container and the cargo, in pounds. (b) No container shall be hoisted by any crane... any, that such container is empty. Methods of identification may include cargo plans, manifests or...
NASA Astrophysics Data System (ADS)
Finnan, J. M.; Burke, J. I.; Jones, M. B.
A comparison of the performance of different ozone indices in exposure-response functions was made using crop yield and ozone monitoring data from spring wheat studies carried out within the framework of the European open-top chamber programme. Indices were calculated for a twelve-hour period (0900-2100 h, local time). An attempt was made to incorporate a measure of absorbed dose into current indices by weighting with simultaneous sunshine hour values. Both linear and Weibull models were fitted to the exposure-response data in order to evaluate index performance. Cumulative indices which employed continuous weighting functions (allometric or sigmoid) or which censored concentrations above threshold values performed best, as they attributed increasing weight to higher concentrations. Indices which simply summed concentrations greater than or equal to a threshold value did not perform as well, because equal weight was given to all concentrations greater than the threshold value. Model selection was found to be very important in determining the indices that best describe the relationship between exposure and response. In general, weighting hourly ozone concentrations with the corresponding sunshine hour values in an attempt to incorporate this proposed measure of plant activity into current indices did not improve index performance. Ozone exposure indices accounted for a large proportion of the variability in data (91%), and it is suggested that a strong link exists between exposure and dose.
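A hedged sketch of the two index families compared above: a cumulative index with a continuous sigmoid weighting versus a plain threshold sum. The sigmoid constants follow the widely used W126 form and are an assumption; the abstract does not report fitted weighting parameters.

```python
import numpy as np

def sigmoid_weighted_index(c_ppm):
    # Continuous sigmoid weight (W126-style; M=4403, A=126 assumed).
    w = 1.0 / (1.0 + 4403.0 * np.exp(-126.0 * np.asarray(c_ppm)))
    return float(np.sum(w * c_ppm))

def threshold_sum_index(c_ppm, threshold=0.06):
    # Equal weight for every hourly value at or above the threshold.
    c = np.asarray(c_ppm)
    return float(c[c >= threshold].sum())

hourly = np.random.default_rng(0).uniform(0.0, 0.12, size=12)  # one 12-h window
print(sigmoid_weighted_index(hourly), threshold_sum_index(hourly))
```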
Superimposition of protein structures with dynamically weighted RMSD.
Wu, Di; Wu, Zhijun
2010-02-01
In protein modeling, one often needs to superimpose a group of structures for a protein. A common way to do this is to translate and rotate the structures so that the square root of the sum of squares of coordinate differences of the atoms in the structures, called the root-mean-square deviation (RMSD) of the structures, is minimized. While it has provided a general way of aligning a group of structures, this approach has not taken into account the fact that different atoms may have different properties and they should be compared differently. For this reason, when superimposed with RMSD, the coordinate differences of different atoms should be evaluated with different weights. The resulting RMSD is called the weighted RMSD (wRMSD). Here we investigate the use of a special wRMSD for superimposing a group of structures with weights assigned to the atoms according to certain thermal motions of the atoms. We call such an RMSD the dynamically weighted RMSD (dRMSD). We show that the thermal motions of the atoms can be obtained from several sources such as the mean-square fluctuations that can be estimated by Gaussian network model analysis. We show that the superimposition of structures with dRMSD can successfully identify protein domains and protein motions, and that it has important implications in practice, e.g., in aligning the ensemble of structures determined by nuclear magnetic resonance.
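Computationally, the quantity reduces to a weight-normalized RMSD. A minimal sketch, assuming the structures are already superimposed and that weights are taken inversely proportional to per-atom mean-square fluctuations (e.g., from Gaussian network model analysis); both choices are illustrative simplifications:

```python
import numpy as np

def weighted_rmsd(X, Y, w):
    """wRMSD between two (N, 3) coordinate sets in a common frame."""
    X, Y, w = (np.asarray(a, float) for a in (X, Y, w))
    sq = np.sum((X - Y) ** 2, axis=1)           # per-atom squared deviation
    return np.sqrt(np.sum(w * sq) / np.sum(w))  # weight-normalized average

X = np.random.rand(10, 3)
Y = X + 0.05 * np.random.rand(10, 3)
msf = np.random.rand(10) + 0.1                  # hypothetical fluctuations
print(weighted_rmsd(X, Y, 1.0 / msf))           # dRMSD-style weights
```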
Optical implementation of inner product neural associative memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1995-01-01
An optical implementation of an inner-product neural associative memory is realized with a first spatial light modulator for entering an initial two-dimensional N-tuple vector and for entering a thresholded output vector image after each iteration until convergence is reached, and a second spatial light modulator for entering M weighted vectors of inner-product scalars multiplied with each of the M stored vectors, where the inner-product scalars are produced by multiplication of the initial input vector in the first iterative cycle (and thresholded vectors in subsequent iterative cycles) with each of the M stored vectors, and the weighted vectors are produced by multiplication of the scalars with corresponding ones of the stored vectors. A Hughes liquid crystal light valve is used for the dual function of summing the weighted vectors and thresholding the sum vector. The thresholded vector is then entered through the first spatial light modulator for reiteration of the process cycle until convergence is reached.
Optimizing data collection for public health decisions: a data mining approach
2014-01-01
Background Collecting data can be cumbersome and expensive. Lack of relevant, accurate and timely data for research to inform policy may negatively impact public health. The aim of this study was to test if the careful removal of items from two community nutrition surveys, guided by a data mining technique called feature selection, can (a) identify a reduced dataset while (b) not damaging the signal inside that data. Methods The Nutrition Environment Measures Surveys for stores (NEMS-S) and restaurants (NEMS-R) were completed on 885 retail food outlets in two counties in West Virginia between May and November of 2011. A reduced dataset was identified for each outlet type using feature selection. Coefficients from linear regression modeling were used to weight items in the reduced datasets. Weighted item values were summed with the error term to compute reduced item survey scores. Scores produced by the full survey were compared to the reduced item scores using a Wilcoxon rank-sum test. Results Feature selection identified 9 store and 16 restaurant survey items as significant predictors of the score produced from the full survey. The linear regression models built from the reduced feature sets had R² values of 92% and 94% for restaurant and grocery store data, respectively. Conclusions While there are many potentially important variables in any domain, the most useful set may only be a small subset. The use of feature selection in the initial phase of data collection to identify the most influential variables may be a useful tool to greatly reduce the amount of data needed, thereby reducing cost. PMID:24919484
Optimizing data collection for public health decisions: a data mining approach.
Partington, Susan N; Papakroni, Vasil; Menzies, Tim
2014-06-12
Collecting data can be cumbersome and expensive. Lack of relevant, accurate and timely data for research to inform policy may negatively impact public health. The aim of this study was to test if the careful removal of items from two community nutrition surveys, guided by a data mining technique called feature selection, can (a) identify a reduced dataset while (b) not damaging the signal inside that data. The Nutrition Environment Measures Surveys for stores (NEMS-S) and restaurants (NEMS-R) were completed on 885 retail food outlets in two counties in West Virginia between May and November of 2011. A reduced dataset was identified for each outlet type using feature selection. Coefficients from linear regression modeling were used to weight items in the reduced datasets. Weighted item values were summed with the error term to compute reduced item survey scores. Scores produced by the full survey were compared to the reduced item scores using a Wilcoxon rank-sum test. Feature selection identified 9 store and 16 restaurant survey items as significant predictors of the score produced from the full survey. The linear regression models built from the reduced feature sets had R² values of 92% and 94% for restaurant and grocery store data, respectively. While there are many potentially important variables in any domain, the most useful set may only be a small subset. The use of feature selection in the initial phase of data collection to identify the most influential variables may be a useful tool to greatly reduce the amount of data needed, thereby reducing cost.
Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A
2010-03-29
We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth-varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth-variant response as a sum of a few depth-invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method results in much better accuracy than the strata-based approximation scheme that is currently used in the literature. In addition to yielding better accuracy, the proposed methods automatically eliminate the noise in the measured PSFs.
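A rough sketch of the PCA step on a hypothetical stack of PSFs measured at D depths; the SVD right-singular vectors serve as depth-invariant basis PSFs, and the per-depth coefficients play the role of the 1D depth functions:

```python
import numpy as np

def pca_psf_model(psf_stack, K=3):
    """psf_stack: (D, H, W) measured PSFs; returns mean, K basis PSFs,
    and (D, K) depth coefficients."""
    D, H, W = psf_stack.shape
    flat = psf_stack.reshape(D, H * W)
    mean = flat.mean(axis=0)
    U, S, Vt = np.linalg.svd(flat - mean, full_matrices=False)
    basis = Vt[:K].reshape(K, H, W)   # depth-invariant basis functions
    coeffs = U[:, :K] * S[:K]         # 1D depth functions a_k(d)
    return mean.reshape(H, W), basis, coeffs

def psf_at_depth(mean, basis, coeffs, d):
    # PSF at measured depth index d: mean + sum_k a_k(d) * B_k.
    return mean + np.tensordot(coeffs[d], basis, axes=1)

psf_stack = np.random.rand(20, 33, 33)  # placeholder for measured PSFs
mean, basis, coeffs = pca_psf_model(psf_stack, K=3)
approx = psf_at_depth(mean, basis, coeffs, d=10)
```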
Martin, Molly A; Lippert, Adam M; Chandler, Kelly D; Lemmon, Megan
2018-04-01
Women's lives are marked by complex work and family routines - routines that have implications for their children's health. Prior research suggests a link between mothers' work hours and their children's weight, but few studies investigate the child health implications of increasingly common work arrangements, such as telecommuting and flexible work schedules. We examine whether changes in mothers' work arrangements are associated with changes in adolescents' weight, physical activity, and sedentary behavior using longitudinal data and fixed effects models to better account for mothers' social selection into different work arrangements and children's underlying preferences. With data from the National Longitudinal Study of Adolescent to Adult Health (N = 10,518), we find that changes in mothers' work arrangements are not significantly associated with adolescents' weight gain or physical activity but are significantly associated with adolescents' sedentary behavior. Adolescents' sedentary behavior declines when mothers become more available after school and increases when mothers work more hours or become unemployed. In sum, after accounting for unobserved, stable traits, including mothers' selection into jobs with more or less flexibility, mothers' work arrangements are most strongly associated with adolescents' sedentary behavior.
Bjelica, Dusko; Idrizovic, Kemal; Popovic, Stevo; Sisic, Nedim; Sekulic, Damir; Ostojic, Ljerka; Spasic, Miodrag; Zenic, Natasa
2016-01-01
Substance use and misuse (SUM) in adolescence is a significant public health problem and the extent to which adolescents exhibit SUM behaviors differs across ethnicity. This study aimed to explore the ethnicity-specific and gender-specific associations among sports factors, familial factors, and personal satisfaction with physical appearance (i.e., covariates) and SUM in a sample of adolescents from Federation of Bosnia and Herzegovina. In this cross-sectional study the participants were 1742 adolescents (17–18 years of age) from Bosnia and Herzegovina who were in their last year of high school education (high school seniors). The sample comprised 772 Croatian (558 females) and 970 Bosniak (485 females) adolescents. Variables were collected using a previously developed and validated questionnaire that included questions on SUM (alcohol drinking, cigarette smoking, and consumption of other drugs), sport factors, parental education, socioeconomic status, and satisfaction with physical appearance and body weight. The consumption of cigarettes remains high (37% of adolescents smoke cigarettes), with a higher prevalence among Croatians. Harmful drinking is also alarming (evidenced in 28.4% of adolescents). The consumption of illicit drugs remains low with 5.7% of adolescents who consume drugs, with a higher prevalence among Bosniaks. A higher likelihood of engaging in SUM is found among children who quit sports (for smoking and drinking), boys who perceive themselves to be good looking (for smoking), and girls who are not satisfied with their body weight (for smoking). Higher maternal education is systematically found to be associated with greater SUM in Bosniak girls. Information on the associations presented herein could be discretely disseminated as a part of regular school administrative functions. The results warrant future prospective studies that more precisely identify the causality among certain variables. PMID:27690078
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
NASA Astrophysics Data System (ADS)
Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia
2018-04-01
In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
Xiang, Yongqing; Yakushin, Sergei B; Cohen, Bernard; Raphan, Theodore
2006-12-01
A neural network model was developed to explain the gravity-dependent properties of gain adaptation of the angular vestibuloocular reflex (aVOR). Gain changes are maximal at the head orientation where the gain is adapted and decrease as the head is tilted away from that position and can be described by the sum of gravity-independent and gravity-dependent components. The adaptation process was modeled by modifying the weights and bias values of a three-dimensional physiologically based neural network of canal-otolith-convergent neurons that drive the aVOR. Model parameters were trained using experimental vertical aVOR gain values. The learning rule aimed to reduce the error between eye velocities obtained from experimental gain values and model output in the position of adaptation. Although the model was trained only at specific head positions, the model predicted the experimental data at all head positions in three dimensions. Altering the relative learning rates of the weights and bias improved the model-data fits. Model predictions in three dimensions compared favorably with those of a double-sinusoid function, which is a fit that minimized the mean square error at every head position and served as the standard by which we compared the model predictions. The model supports the hypothesis that gravity-dependent adaptation of the aVOR is realized in three dimensions by a direct otolith input to canal-otolith neurons, whose canal sensitivities are adapted by the visual-vestibular mismatch. The adaptation is tuned by how the weights from otolith input to the canal-otolith-convergent neurons are adapted for a given head orientation.
Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons
1989-03-25
error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σ (yi − ȳ)², summed over i = 1, ..., n, can be partitioned into error and regression sums... of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total... sum of squares (i.e., the variance of the yi's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
Canis, Laure; Linkov, Igor; Seager, Thomas P
2010-11-15
The unprecedented uncertainty associated with engineered nanomaterials greatly expands the need for research regarding their potential environmental consequences. However, decision-makers such as regulatory agencies, product developers, or other nanotechnology stakeholders may not find the results of such research directly informative of decisions intended to mitigate environmental risks. To help interpret research findings and prioritize new research needs, there is an acute need for structured decision-analytic aids that are operable in a context of extraordinary uncertainty. Whereas existing stochastic decision-analytic techniques explore uncertainty only in decision-maker preference information, this paper extends model uncertainty to technology performance. As an illustrative example, the framework is applied to the case of single-wall carbon nanotubes. Four different synthesis processes (arc, high pressure carbon monoxide, chemical vapor deposition, and laser) are compared based on five salient performance criteria. A probabilistic rank ordering of preferred processes is determined using outranking normalization and a linear-weighted sum for different weighting scenarios including completely unknown weights and four fixed-weight sets representing hypothetical stakeholder views. No single process pathway dominates under all weight scenarios, but it is likely that some inferior process technologies could be identified as low priorities for further research.
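A simplified illustration of the completely-unknown-weights scenario: weight vectors are sampled uniformly from the simplex and the first-rank frequency of each process is tallied under a linear-weighted sum of min-max-normalized criteria. The normalization and Dirichlet sampling are illustrative choices, not the paper's exact outranking procedure, and the performance matrix is dummy data.

```python
import numpy as np

rng = np.random.default_rng(1)
perf = rng.random((4, 5))   # 4 synthesis processes x 5 criteria (dummy)
norm = (perf - perf.min(0)) / (perf.max(0) - perf.min(0))  # 0-1 per criterion

wins = np.zeros(4)
for _ in range(10_000):
    w = rng.dirichlet(np.ones(5))      # random weights on the simplex
    wins[np.argmax(norm @ w)] += 1     # linear-weighted sum scoring
print("P(rank 1):", wins / wins.sum())
```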
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsky, K.J.; Miller, D.L.; Cernansky, N.P.
1994-09-01
A methodology was introduced for modeling the devolatilization characteristics of refuse-derived fuel (RDF) in terms of temperature-dependent weight loss. The basic premise of the methodology is that RDF is modeled as a combination of select municipal solid waste (MSW) components. Kinetic parameters are derived for each component from thermogravimetric analyzer (TGA) data measured at a specific set of conditions. These experimentally derived parameters, along with user-derived parameters, are input into the model equations for the purpose of calculating thermograms for the components. The component thermograms are summed to create a composite thermogram that is an estimate of the devolatilization for the as-modeled RDF. The methodology has several attractive features as a thermal analysis tool for waste fuels. 7 refs., 10 figs., 3 tabs.
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroids and radii of each metastasis were determined from the treatment planning system. In-house software used this as well as weighted and unweighted versions of the k-means clustering algorithm to group the targets to be treated with a single isocenter, and to position each isocenter. The algorithm results were evaluated using within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
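A bare-bones version of the weighted variant: plain Lloyd iterations with a weighted centroid update, assuming per-target weights such as volumes (the study's exact weighting scheme is not given in this excerpt):

```python
import numpy as np

def weighted_kmeans(points, weights, k, iters=100, seed=0):
    """points: (N, 3) target centroids; weights: (N,), e.g. target volumes."""
    points = np.asarray(points, float)
    weights = np.asarray(weights, float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)                  # nearest isocenter
        for j in range(k):
            m = labels == j
            if m.any():                            # weighted centroid update
                centers[j] = np.average(points[m], axis=0, weights=weights[m])
    return centers, labels

pts = np.random.default_rng(2).random((12, 3))     # 12 metastasis centroids
vol = np.random.default_rng(3).random(12) + 0.1    # hypothetical volumes
centers, labels = weighted_kmeans(pts, vol, k=3)
```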
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clayton, Daniel James; Lipinski, Ronald J.; Bechtel, Ryan D.
As compact and light weight power sources with reliable, long lives, Radioisotope Power Systems (RPSs) have made space missions to explore the solar system possible. Due to the hazardous material that can be released during a launch accident, the potential health risk of an accident must be quantified, so that appropriate launch approval decisions can be made. One part of the risk estimation involves modeling the response of the RPS to potential accident environments. Due to the complexity of modeling the full RPS response deterministically on dynamic variables, the evaluation is performed in a stochastic manner with a Monte Carlo simulation. The potential consequences can be determined by modeling the transport of the hazardous material in the environment and in human biological pathways. The consequence analysis results are summed and weighted by appropriate likelihood values to give a collection of probabilistic results for the estimation of the potential health risk. This information is used to guide RPS designs, spacecraft designs, mission architecture, or launch procedures to potentially reduce the risk, as well as to inform decision makers of the potential health risks resulting from the use of RPSs for space missions.
CSF biomarkers of Alzheimer disease
Fagan, Anne M.; Grant, Elizabeth A.; Holtzman, David M.; Morris, John C.
2013-01-01
Objectives: To test whether CSF Alzheimer disease biomarkers (β-amyloid 42 [Aβ42], tau, phosphorylated tau at threonine 181 [ptau181], tau/Aβ42, and ptau181/Aβ42) predict future decline in noncognitive outcomes among individuals cognitively normal at baseline. Methods: Longitudinal data from participants (N = 430) who donated CSF within 1 year of a clinical assessment indicating normal cognition and were aged 50 years or older were analyzed. Mixed linear models were used to test whether baseline biomarker values predicted future decline in function (instrumental activities of daily living), weight, behavior, and mood. Clinical Dementia Rating Sum of Boxes and Mini-Mental State Examination scores were also examined. Results: Abnormal levels of each biomarker were related to greater impairment with time in behavior (p < 0.035) and mood (p < 0.012) symptoms, and more difficulties with independent activities of daily living (p < 0.012). However, biomarker levels were unrelated to weight change with time (p > 0.115). As expected, abnormal biomarker values also predicted more rapidly changing Mini-Mental State Examination (p < 0.041) and Clinical Dementia Rating Sum of Boxes (p < 0.001) scores compared with normal values. Conclusions: CSF biomarkers among cognitively normal individuals are associated with future decline in some, but not all, noncognitive Alzheimer disease symptoms studied. Additional work is needed to determine the extent to which these findings generalize to other samples. PMID:24212387
CSF biomarkers of Alzheimer disease: "noncognitive" outcomes.
Roe, Catherine M; Fagan, Anne M; Grant, Elizabeth A; Holtzman, David M; Morris, John C
2013-12-03
To test whether CSF Alzheimer disease biomarkers (β-amyloid 42 [Aβ42], tau, phosphorylated tau at threonine 181 [ptau181], tau/Aβ42, and ptau181/Aβ42) predict future decline in noncognitive outcomes among individuals cognitively normal at baseline. Longitudinal data from participants (N = 430) who donated CSF within 1 year of a clinical assessment indicating normal cognition and were aged 50 years or older were analyzed. Mixed linear models were used to test whether baseline biomarker values predicted future decline in function (instrumental activities of daily living), weight, behavior, and mood. Clinical Dementia Rating Sum of Boxes and Mini-Mental State Examination scores were also examined. Abnormal levels of each biomarker were related to greater impairment with time in behavior (p < 0.035) and mood (p < 0.012) symptoms, and more difficulties with independent activities of daily living (p < 0.012). However, biomarker levels were unrelated to weight change with time (p > 0.115). As expected, abnormal biomarker values also predicted more rapidly changing Mini-Mental State Examination (p < 0.041) and Clinical Dementia Rating Sum of Boxes (p < 0.001) scores compared with normal values. CSF biomarkers among cognitively normal individuals are associated with future decline in some, but not all, noncognitive Alzheimer disease symptoms studied. Additional work is needed to determine the extent to which these findings generalize to other samples.
Hybrid feedforward-feedback active noise reduction for hearing protection and communication.
Ray, Laura R; Solbeck, Jason A; Streeter, Alexander D; Collier, Robert D
2006-10-01
A hybrid active noise reduction (ANR) architecture is presented and validated for a circumaural earcup and a communication earplug. The hybrid system combines source-independent feedback ANR with a Lyapunov-tuned leaky LMS filter (LyLMS) improving gain stability margins over feedforward ANR alone. In flat plate testing, the earcup demonstrates an overall C-weighted total noise reduction of 40 dB and 30-32 dB, respectively, for 50-800 Hz sum-of-tones noise and for aircraft or helicopter cockpit noise, improving low frequency (<100 Hz) performance by up to 15 dB over either control component acting individually. For the earplug, a filtered-X implementation of the LyLMS accommodates its nonconstant cancellation path gain. A fast time-domain identification method provides a high-fidelity, computationally efficient, infinite impulse response cancellation path model, which is used for both the filtered-X implementation and communication feedthrough. Insertion loss measurements made with a manikin show overall C-weighted total noise reduction provided by the ANR earplug of 46-48 dB for sum-of-tones 80-2000 Hz and 40-41 dB from 63 to 3000 Hz for UH-60 helicopter noise, with negligible degradation in attenuation during speech communication. For both hearing protectors, a stability metric improves by a factor of 2 to several orders of magnitude through hybrid ANR.
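For orientation, a generic leaky filtered-X LMS loop of the kind the LyLMS builds on. The Lyapunov step-size tuning and the IIR cancellation-path model are not reproduced; the FIR path s_hat, filter length, and gains are assumptions for illustration.

```python
import numpy as np

def fxlms(x, d, s_hat, L=32, mu=1e-3, leak=1e-4):
    """Adapt FIR weights w so the anti-noise, after the cancellation-path
    model s_hat, cancels the disturbance d at the error microphone."""
    M = len(s_hat)
    w = np.zeros(L)
    xf = np.convolve(x, s_hat)[:len(x)]  # reference filtered by s_hat
    y = np.zeros(len(x))                 # anti-noise history
    e = np.zeros(len(x))                 # residual signal
    for n in range(max(L, M), len(x)):
        y[n] = w @ x[n - L + 1:n + 1][::-1]
        e[n] = d[n] - s_hat @ y[n - M + 1:n + 1][::-1]
        w = (1.0 - mu * leak) * w + mu * e[n] * xf[n - L + 1:n + 1][::-1]
    return e

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)                     # reference noise
s_hat = np.array([0.0, 0.6, 0.3, 0.1])             # assumed cancellation path
d = np.convolve(x, [0.0, 0.0, 0.8, 0.4])[:len(x)]  # noise at the error mic
e = fxlms(x, d, s_hat)
print(np.mean(e[:2000] ** 2), np.mean(e[-2000:] ** 2))  # residual power drops
```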
Optimizing the integrated efficiency for water resource utilization: based on an economic perspective
NASA Astrophysics Data System (ADS)
Gao, L.; Yoshikawa, S.; Kanae, S.
2014-12-01
At present, total global water withdrawal is increasing, and water shortage will become a crucial issue around the world. By 2050, water withdrawal will exceed the water that can be drawn from rivers and groundwater. One way of alleviating water scarcity is to increase the efficiency of water use without developing additional water supplies. Previous literature on water use efficiency contains little discussion of temporal efficiency change with the corresponding characteristics of the water resource. The main aim of this paper is to estimate the temporal efficiency of water use during 2011-2020 and to propose how to use the limited water efficiently. This paper uses dynamic Data Envelopment Analysis (DEA) to estimate the efficiency, defined as the ratio of the sum of weighted outputs to the sum of weighted inputs. Our model uses the cost of agricultural production as input indices, the production value of agriculture as the output index, and water withdrawal as the temporal linkage. We mainly work on two problems: first, establishing the value of water use efficiency in each target country; second, adjusting the output values so that countries with inefficient water use become DEA-efficient. The results provide a scientific reference for the rational allocation and sustainable use of water resources.
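As a reference point, the classical static input-oriented CCR model behind this efficiency ratio can be solved with a small linear program; this is a simplified stand-in for the paper's dynamic DEA, and the data are toy values.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, i):
    """Input-oriented CCR efficiency of unit i (multiplier form):
    maximize u.y_i subject to v.x_i = 1 and u.y_j <= v.x_j for all j."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([np.zeros(m), -Y[i]])        # maximize weighted output
    A_eq = [np.concatenate([X[i], np.zeros(s)])]    # normalize weighted input
    A_ub = np.hstack([-X, Y])                       # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s))
    return -res.fun

X = np.array([[2.0, 1.0], [5.0, 4.0], [3.0, 2.0]])  # toy inputs (costs)
Y = np.array([[1.0], [1.8], [1.5]])                 # toy outputs (production value)
print([round(dea_ccr_efficiency(X, Y, i), 3) for i in range(3)])
```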
Complete convergence of randomly weighted END sequences and its application.
Li, Penghua; Li, Xiaoqin; Wu, Kehan
2017-01-01
We investigate the complete convergence of partial sums of randomly weighted extended negatively dependent (END) random variables. Some results of complete moment convergence, complete convergence and the strong law of large numbers for this dependent structure are obtained. As an application, we study the convergence of the state observers of linear-time-invariant systems. Our results extend the corresponding earlier ones.
Analog hardware for delta-backpropagation neural networks
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P. (Inventor)
1992-01-01
This is a fully parallel analog backpropagation learning processor which comprises a plurality of programmable resistive memory elements serving as synapse connections whose values can be weighted during learning with buffer amplifiers, summing circuits, and sample-and-hold circuits arranged in a plurality of neuron layers in accordance with delta-backpropagation algorithms modified so as to control weight changes due to circuit drift.
Least-Squares Analysis of Data with Uncertainty in "y" and "x": Algorithms in Excel and KaleidaGraph
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2018-01-01
For the least-squares analysis of data having multiple uncertain variables, the generally accepted best solution comes from minimizing the sum of weighted squared residuals over all uncertain variables, with, for example, weights in x_i taken as inversely proportional to the variance δ_xi². A complication…
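One standard realization of this criterion for a straight-line fit is the effective-variance iteration, in which the x uncertainty is folded into the weights through the current slope; the sketch below is that common scheme, not necessarily the article's exact algorithm.

```python
import numpy as np

def line_fit_xy_errors(x, y, sx, sy, iters=20):
    """Fit y = a + b*x with uncertainties in both variables using
    effective weights w_i = 1 / (sy_i**2 + (b*sx_i)**2)."""
    b = np.polyfit(x, y, 1)[0]               # initial slope, ordinary LS
    for _ in range(iters):
        w = 1.0 / (sy ** 2 + (b * sx) ** 2)  # effective-variance weights
        xbar = np.sum(w * x) / np.sum(w)
        ybar = np.sum(w * y) / np.sum(w)
        b = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar) ** 2)
    return ybar - b * xbar, b                # intercept, slope

x = np.linspace(0, 10, 20)
sx, sy = np.full(20, 0.1), np.full(20, 0.2)
y = 1.0 + 2.0 * x + np.random.default_rng(4).normal(0, sy)
print(line_fit_xy_errors(x, y, sx, sy))
```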
Luan, Sheng; Luo, Kun; Chai, Zhan; Cao, Baoxiang; Meng, Xianhong; Lu, Xia; Liu, Ning; Xu, Shengyu; Kong, Jie
2015-12-14
Our aim was to estimate the genetic parameters for the direct genetic effect (DGE) and indirect genetic effects (IGE) on adult body weight in the Pacific white shrimp. IGE is the heritable effect of an individual on the trait values of its group mates. To examine IGE on body weight, 4725 shrimp from 105 tagged families were tested in multiple small test groups (MSTG). Each family was separated into three groups (15 shrimp per group) that were randomly assigned to 105 concrete tanks with shrimp from two other families. To estimate breeding values, one large test group (OLTG) in a 300 m² circular concrete tank was used for the communal rearing of 8398 individuals from 105 families. Body weight was measured after a growth-test period of more than 200 days. Variance components for body weight in the MSTG programs were estimated using an animal model excluding or including IGE, whereas variance components in the OLTG programs were estimated using a conventional animal model that included only DGE. The correlation of DGE between MSTG and OLTG programs was estimated by a two-trait animal model that included or excluded IGE. Heritability estimates for body weight from the conventional animal model in MSTG and OLTG programs were 0.26 ± 0.13 and 0.40 ± 0.06, respectively. The log likelihood ratio test revealed significant IGE on body weight. Total heritable variance was the sum of direct genetic variance (43.5%), direct-indirect genetic covariance (2.1%), and indirect genetic variance (54.4%). It represented 73% of the phenotypic variance and was more than two-fold greater than that (32%) obtained by using a classical heritability model for body weight. Correlations of DGE on body weight between MSTG and OLTG programs were intermediate regardless of whether IGE were included or not in the model. Our results suggest that social interactions contributed to a large part of the heritable variation in body weight. Small and non-significant direct-indirect genetic correlations implied that neutral or slightly cooperative heritable interactions, rather than competition, were dominant in this population but this may be due to the low rearing density.
Making objective decisions in mechanical engineering problems
NASA Astrophysics Data System (ADS)
Raicu, A.; Oanta, E.; Sabau, A.
2017-08-01
The decision-making process has a great influence on the development of a given project, the goal being to select an optimal choice in a given context. Because of its importance, decision making has been studied with various scientific methods, culminating in game theory, which is considered the foundation for the science of logical decision making in many fields. The paper presents some basic ideas of game theory in order to provide the background needed to understand multiple-criteria decision making (MCDM) problems in engineering. The approach is to transform the multiple-criteria problem into a one-criterion decision problem using the notion of utility, together with the weighted sum model or the weighted product model, as sketched below. The weighted importance of the criteria is computed using the so-called Step method applied to a relation of preferences between the criteria. Two relevant examples from engineering are also presented. Future directions of research include the use of other types of criteria, the development of computer-based instruments for general decision-making problems, and a software module based on expert-system principles to be included in the already operational Wiki software applications for polymeric materials.
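A minimal sketch of the two aggregation rules named above, assuming utilities already normalized to [0, 1] and criterion weights summing to one (e.g., as produced by the Step method):

```python
import numpy as np

def weighted_sum_model(utilities, weights):
    # WSM: one-criterion score as the weighted sum of utilities.
    return float(np.dot(utilities, weights))

def weighted_product_model(utilities, weights):
    # WPM: utilities raised to their weights and multiplied together.
    return float(np.prod(np.power(utilities, weights)))

u = np.array([0.8, 0.5, 0.9])  # one alternative's utilities on 3 criteria
w = np.array([0.5, 0.3, 0.2])  # criterion weights
print(weighted_sum_model(u, w), weighted_product_model(u, w))
```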
Fusion of classifiers for REIS-based detection of suspicious breast lesions
NASA Astrophysics Data System (ADS)
Lederman, Dror; Wang, Xingwei; Zheng, Bin; Sumkin, Jules H.; Tublin, Mitchell; Gur, David
2011-03-01
After developing a multi-probe resonance-frequency electrical impedance spectroscopy (REIS) system aimed at detecting women with breast abnormalities that may indicate a developing breast cancer, we have been conducting a prospective clinical study to explore the feasibility of applying this REIS system to classify younger women (< 50 years old) into two groups of "higher-than-average risk" and "average risk" of having or developing breast cancer. The system comprises one central probe placed in contact with the nipple, and six additional probes uniformly distributed along an outside circle to be placed in contact with six points on the outer breast skin surface. In this preliminary study, we selected an initial set of 174 examinations on participants that have completed REIS examinations and have clinical status verification. Among these, 66 examinations were recommended for biopsy due to findings of a highly suspicious breast lesion ("positives"), and 108 were determined as negative during imaging based procedures ("negatives"). A set of REIS-based features, extracted using a mirror-matched approach, was computed and fed into five machine learning classifiers. A genetic algorithm was used to select an optimal subset of features for each of the five classifiers. Three fusion rules, namely sum rule, weighted sum rule and weighted median rule, were used to combine the results of the classifiers. Performance evaluation was performed using a leave-one-case-out cross-validation method. The results indicated that REIS may provide a new technology to identify younger women with higher than average risk of having or developing breast cancer. Furthermore, it was shown that fusion rule, such as a weighted median fusion rule and a weighted sum fusion rule may improve performance as compared with the highest performing single classifier.
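The three fusion rules reduce to simple operations on the classifier scores. A sketch with hypothetical scores and weights (in the study, weights would reflect classifier performance):

```python
import numpy as np

def sum_rule(scores):
    return float(np.mean(scores))

def weighted_sum_rule(scores, w):
    return float(np.dot(w, scores) / np.sum(w))

def weighted_median_rule(scores, w):
    # Median with each score counted in proportion to its weight.
    scores, w = np.asarray(scores), np.asarray(w)
    order = np.argsort(scores)
    cdf = np.cumsum(w[order]) / np.sum(w)
    return float(scores[order][np.searchsorted(cdf, 0.5)])

scores = [0.62, 0.55, 0.71, 0.40, 0.66]  # five classifier outputs
w = [2.0, 1.0, 3.0, 0.5, 1.5]            # hypothetical performance weights
print(sum_rule(scores), weighted_sum_rule(scores, w),
      weighted_median_rule(scores, w))
```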
NASA Technical Reports Server (NTRS)
Desai, S. D.; Yuan, D. -N.
2006-01-01
A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
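Schematically, the response formalism is a discrete convolution of the tide-generating potential with a short set of lag weights spanning past, present, and future values. The lag window and weights below are placeholders, and the wrap-around at the series ends is an artifact of using np.roll in a sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
potential = rng.standard_normal(480)      # hypothetical hourly forcing series
lags = np.arange(-2, 3)                   # two past, present, two future values
weights = 0.1 * rng.standard_normal(lags.size)  # placeholder response weights

tide = np.zeros_like(potential)
for w, lag in zip(weights, lags):
    tide += w * np.roll(potential, lag)   # weighted, lagged forcing
```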
Windows(Registered Trademark)-Based Software Models Cyclic Oxidation Behavior
NASA Technical Reports Server (NTRS)
Smialek, J. L.; Auping, J. V.
2004-01-01
Oxidation of high-temperature aerospace materials is a universal issue for combustion-path components in turbine or rocket engines. In addition to the question of the consumption of material due to growth of protective scale at use temperatures, there is also the question of cyclic effects and spallation of scale on cooldown. The spallation results in the removal of part of the protective oxide in a discontinuous step and thereby opens the way for more rapid oxidation upon reheating. In experiments, cyclic oxidation behavior is most commonly characterized by measuring changes in weight during extended time intervals that include hundreds or thousands of heating and cooling cycles. Weight gains occurring during isothermal scale-growth processes have been well characterized as being parabolic or nearly parabolic functions of time because diffusion controls reaction rates. In contrast, the net weight change in cyclic oxidation is the sum of the effects of the growth and spallation of scale. Typically, the net weight gain in cyclic oxidation is determined only empirically (that is, by measurement), with no unique or straightforward mathematical connection to either the rate of growth or the amount of metal consumed. Thus, there is a need for mathematical modeling to infer spallation mechanisms. COSP is a computer program that models the growth and spallation processes of cyclic oxidation on the basis of a few elementary assumptions that were discussed in COSP: A Computer Model of Cyclic Oxidation, Oxidation of Metals, vol. 36, numbers 1 and 2, 1991, pages 81-112. Inputs to the model include the selection of an oxidation-growth law and a spalling geometry, plus oxide-phase, growth-rate, cycle-duration, and spall-constant parameters. (The spalling fraction is often shown to be a constant factor times the existing amount of scale.) The output of COSP includes the net change in weight, the amounts of retained and spalled oxide, the total amounts of oxygen and metal consumed, and the terminal rates of weight loss and metal consumption.
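A toy version of the growth/spall bookkeeping such a model performs, assuming parabolic scale growth, a spall fraction proportional to the retained scale, and an oxygen mass fraction typical of alumina; the parameter values are illustrative, not COSP's.

```python
import numpy as np

def cyclic_oxidation(kp=0.01, q0=0.002, dt=1.0, n_cycles=300, f_oxygen=0.47):
    """Net specimen weight change over heat/cool cycles (arbitrary units)."""
    w_r = 0.0                      # retained oxide weight
    gained_o = lost_ox = 0.0
    net = np.empty(n_cycles)
    for i in range(n_cycles):
        w_new = np.sqrt(w_r ** 2 + kp * dt)   # parabolic growth of the scale
        gained_o += f_oxygen * (w_new - w_r)  # only the oxygen adds weight
        spalled = q0 * w_new ** 2             # spall fraction q0*w times scale w
        lost_ox += spalled                    # spalled oxide leaves the specimen
        w_r = w_new - spalled
        net[i] = gained_o - lost_ox
    return net

print(cyclic_oxidation()[[0, 99, 299]])  # early net gain, eventual net loss
```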
NASA Astrophysics Data System (ADS)
Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.
2016-05-01
Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
Nodal weighting factor method for ex-core fast neutron fluence evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, R. T.
The nodal weighting factor method is developed for evaluating ex-core fast neutron flux in a nuclear reactor by utilizing adjoint neutron flux, a fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV, the unit fission source, and relative assembly nodal powers. The method determines each nodal weighting factor for ex-core fast neutron flux evaluation by solving the steady-state adjoint neutron transport equation with a fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV as the adjoint source, by integrating the unit fission source with a typical fission spectrum to the solved adjoint flux over all energies, all angles and given nodal volume, and by dividing it with the sum of all nodal weighting factors, which is a normalization factor. Then, the fast neutron flux can be obtained by summing the various relative nodal powers times the corresponding nodal weighting factors of the adjacent significantly contributing peripheral assembly nodes and times a proper fast neutron attenuation coefficient over an operating period. A generic set of nodal weighting factors can be used to evaluate neutron fluence at the same location for similar core designs and fuel cycles, but the set of nodal weighting factors needs to be re-calibrated for a transition fuel cycle. This newly developed nodal weighting factor method should be a useful and simplified tool for evaluating fast neutron fluence at selected locations of interest in ex-core components of contemporary nuclear power reactors. (authors)
Bariatric embolization for suppression of the hunger hormone ghrelin in a porcine model.
Paxton, Ben E; Kim, Charles Y; Alley, Christopher L; Crow, Jennifer H; Balmadrid, Bryan; Keith, Christopher G; Kankotia, Ravi J; Stinnett, Sandra; Arepally, Aravind
2013-02-01
To prospectively test in a porcine model the hypothesis that bariatric embolization with commercially available calibrated microspheres can result in substantial suppression of systemic ghrelin levels and affect weight gain over an 8-week period. The institutional animal care and use committee approved this study. Twelve healthy growing swine (mean weight, 38.4 kg; weight range, 30.3-47.0 kg) were evaluated. Bariatric embolization was performed by infusion of 40-μm calibrated microspheres selectively into the gastric arteries that supply the fundus. Six swine underwent bariatric embolization, while six control animals underwent a sham procedure with saline. Weight and fasting plasma ghrelin and glucose levels were obtained in animals at baseline and at weeks 1-8. Statistical testing for differences in serum ghrelin levels and weight at each time point was performed with the Wilcoxon signed rank test for intragroup differences and the Wilcoxon rank sum test for intergroup differences. The pattern of change in ghrelin levels over time was significantly different between control and experimental animals. Weekly ghrelin levels were measured in control and experimental animals as a change from baseline ghrelin values. Average postprocedure ghrelin values increased by 328.9 pg/dL ± 129.0 (standard deviation) in control animals and decreased by 537.9 pg/dL ± 209.6 in experimental animals (P = .004). The pattern of change in weight over time was significantly different between control and experimental animals. The average postprocedure weight gain in experimental animals was significantly lower than that in control animals (3.6 kg ± 3.8 vs 9.4 kg ± 2.8, respectively; P = .025). Bariatric embolization can significantly suppress ghrelin and significantly affect weight gain. Further study is warranted before this technique can be used routinely in humans.
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiao-Wen; Jupp, David L. B.
1991-01-01
The bidirectional radiance or reflectance of a forest or woodland can be modeled using principles of geometric optics and Boolean models for random sets in a three-dimensional space. This model may be defined at two levels. At the tree level, the scene includes four components: sunlit and shadowed canopy, and sunlit and shadowed background. The reflectance of the scene is modeled as the sum of the reflectances of the individual components as weighted by their areal proportions in the field of view. At the leaf level, the canopy envelope is an assemblage of leaves, and thus the reflectance is a function of the areal proportions of sunlit and shadowed leaf, and sunlit and shadowed background. Because the proportions of scene components are dependent upon the directions of irradiance and exitance, the model accounts for the hotspot that is well known in leaf and tree canopies.
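The mixing equation at the scene level is a one-line weighted sum: scene reflectance equals the areal-proportion-weighted sum of the four component signatures. The component values below are invented for illustration.

```python
# Areal proportions (must sum to 1 over the field of view) and component
# reflectances; all numbers are made-up placeholders.
proportions = {"sunlit_canopy": 0.45, "shaded_canopy": 0.20,
               "sunlit_background": 0.25, "shaded_background": 0.10}
reflectance = {"sunlit_canopy": 0.08, "shaded_canopy": 0.03,
               "sunlit_background": 0.15, "shaded_background": 0.05}

scene_r = sum(proportions[c] * reflectance[c] for c in proportions)
print(scene_r)
```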
Modeling of Radiative Heat Transfer in an Electric Arc Furnace
NASA Astrophysics Data System (ADS)
Opitz, Florian; Treffinger, Peter; Wöllenstein, Jürgen
2017-12-01
Radiation is an important means of heat transfer inside an electric arc furnace (EAF). To gain insight into the complex processes of heat transfer inside the EAF vessel, not only radiation from the surfaces but also emission and absorption of the gas phase and the dust cloud need to be considered. Furthermore, the radiative heat exchange depends on the geometrical configuration which is continuously changing throughout the process. The present paper introduces a system model of the EAF which takes into account the radiative heat transfer between the surfaces and the participating medium. This is attained by the development of a simplified geometrical model, the use of a weighted-sum-of-gray-gases model, and a simplified consideration of dust radiation. The simulation results were compared with the data of real EAF plants available in literature.
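For reference, a weighted-sum-of-gray-gases evaluation of total emissivity has the familiar form ε = Σ aᵢ(T)·(1 − e^(−kᵢ·pL)); the coefficients below are placeholders, not a WSGG set fitted for EAF gas and dust conditions.

```python
import numpy as np

a = np.array([0.30, 0.25, 0.15])  # gray-gas weights a_i (would depend on T)
k = np.array([0.05, 0.50, 5.00])  # absorption coefficients, 1/(bar*m)
pL = 2.0                          # partial pressure times path length, bar*m

eps_total = np.sum(a * (1.0 - np.exp(-k * pL)))
# The remaining weight, 1 - a.sum(), represents the transparent "clear gas".
print(eps_total)
```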
The 3D modeling of high numerical aperture imaging in thin films
NASA Technical Reports Server (NTRS)
Flagello, D. G.; Milster, Tom
1992-01-01
A modelling technique is described which is used to explore three dimensional (3D) image irradiance distributions formed by high numerical aperture (NA is greater than 0.5) lenses in homogeneous, linear films. This work uses a 3D modelling approach that is based on a plane-wave decomposition in the exit pupil. Each plane wave component is weighted by factors due to polarization, aberration, and input amplitude and phase terms. This is combined with a modified thin-film matrix technique to derive the total field amplitude at each point in a film by a coherent vector sum over all plane waves. Then the total irradiance is calculated. The model is used to show how asymmetries present in the polarized image change with the influence of a thin film through varying degrees of focus.
Wheeler, David C; Czarnota, Jenna; Jones, Resa M
2017-01-01
Socioeconomic status (SES) is often considered a risk factor for health outcomes. SES is typically measured using individual variables of educational attainment, income, housing, and employment variables or a composite of these variables. Approaches to building the composite variable include using equal weights for each variable or estimating the weights with principal components analysis or factor analysis. However, these methods do not consider the relationship between the outcome and the SES variables when constructing the index. In this project, we used weighted quantile sum (WQS) regression to estimate an area-level SES index and its effect in a model of colonoscopy screening adherence in the Minnesota-Wisconsin Metropolitan Statistical Area. We considered several specifications of the SES index including using different spatial scales (e.g., census block group-level, tract-level) for the SES variables. We found a significant positive association (odds ratio = 1.17, 95% CI: 1.15-1.19) between the SES index and colonoscopy adherence in the best fitting model. The model with the best goodness-of-fit included a multi-scale SES index with 10 variables at the block group-level and one at the tract-level, with home ownership, race, and income among the most important variables. Contrary to previous index construction, our results were not consistent with an assumption of equal importance of variables in the SES index when explaining colonoscopy screening adherence. Our approach is applicable in any study where an SES index is considered as a variable in a regression model and the weights for the SES variables are not known in advance.
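A bare-bones WQS sketch for a binary outcome: the index is a weighted sum of quantile-scored components, with the nonnegativity and sum-to-one constraints enforced here via a softmax reparametrization. Practical WQS adds bootstrapping and a training/validation split, both omitted; the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, softmax

def fit_wqs(Q, y):
    """Q: (n, p) quantile-scored components; y: 0/1 outcome."""
    n, p = Q.shape

    def nll(theta):
        b0, b1, w = theta[0], theta[1], softmax(theta[2:])
        mu = expit(b0 + b1 * (Q @ w))        # logistic model on the index
        return -np.sum(y * np.log(mu + 1e-12) +
                       (1 - y) * np.log(1 - mu + 1e-12))

    res = minimize(nll, x0=np.zeros(2 + p), method="BFGS")
    return res.x[0], res.x[1], softmax(res.x[2:])

rng = np.random.default_rng(0)
Q = rng.integers(0, 4, size=(500, 5)).astype(float)    # quartile scores 0-3
true_w = np.array([0.6, 0.3, 0.1, 0.0, 0.0])
y = (rng.random(500) < expit(-1.0 + 0.4 * (Q @ true_w))).astype(float)
print(fit_wqs(Q, y)[2])   # estimated component weights
```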
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let Sn[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of convergence of Sn[f] in weighted Lp space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-x²/2) is the Hermite weight function, then we obtain sufficient conditions for certain weighted-norm inequalities to hold for k = 0, 1, 2, ..., r. [Displayed inequalities not reproduced in this excerpt.]
Schrempft, Stephanie; van Jaarsveld, Cornelia H M; Fisher, Abigail; Wardle, Jane
2015-01-01
The home environment is thought to play a key role in early weight trajectories, although direct evidence is limited. There is general agreement that multiple factors exert small individual effects on weight-related outcomes, so use of composite measures could demonstrate stronger effects. This study therefore examined whether composite measures reflecting the 'obesogenic' home environment are associated with diet, physical activity, TV viewing, and BMI in preschool children. Families from the Gemini cohort (n = 1096) completed a telephone interview (Home Environment Interview; HEI) when their children were 4 years old. Diet, physical activity, and TV viewing were reported at interview. Child height and weight measurements were taken by the parents (using standard scales and height charts) and reported at interview. Responses to the HEI were standardized and summed to create four composite scores representing the food (sum of 21 variables), activity (sum of 6 variables), media (sum of 5 variables), and overall (food composite/21 + activity composite/6 + media composite/5) home environments. These were categorized into 'obesogenic risk' tertiles. Children in 'higher-risk' food environments consumed less fruit (OR; 95% CI = 0.39; 0.27-0.57) and vegetables (0.47; 0.34-0.64), and more energy-dense snacks (3.48; 2.16-5.62) and sweetened drinks (3.49; 2.10-5.81) than children in 'lower-risk' food environments. Children in 'higher-risk' activity environments were less physically active (0.43; 0.32-0.59) than children in 'lower-risk' activity environments. Children in 'higher-risk' media environments watched more TV (3.51; 2.48-4.96) than children in 'lower-risk' media environments. Neither the individual nor the overall composite measures were associated with BMI. Composite measures of the obesogenic home environment were associated as expected with diet, physical activity, and TV viewing. Associations with BMI were not apparent at this age.
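The scoring scheme is compact: standardize each item, sum within domain, then combine the domain sums scaled by their item counts as in the overall score above. Item counts follow the abstract; the data are random placeholders.

```python
import numpy as np

def zsum(items):
    # Standardize each column, then sum across items within the domain.
    items = np.asarray(items, float)
    z = (items - items.mean(0)) / items.std(0)
    return z.sum(1)

rng = np.random.default_rng(5)
food, activity, media = (rng.random((100, n)) for n in (21, 6, 5))
overall = zsum(food) / 21 + zsum(activity) / 6 + zsum(media) / 5
```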
Optical Oversampled Analog-to-Digital Conversion
1992-06-29
hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact...optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions
Analysis of Environmental Chemical Mixtures and Non-Hodgkin Lymphoma Risk in the NCI-SEER NHL Study
Czarnota, Jenna; Gennings, Chris; Colt, Joanne S.; De Roos, Anneclaire J.; Cerhan, James R.; Severson, Richard K.; Hartge, Patricia; Ward, Mary H.
2015-01-01
Background There are several suspected environmental risk factors for non-Hodgkin lymphoma (NHL). The associations between NHL and environmental chemical exposures have typically been evaluated for individual chemicals (i.e., one-by-one). Objectives We determined the association between a mixture of 27 correlated chemicals measured in house dust and NHL risk. Methods We conducted a population-based case–control study of NHL in four National Cancer Institute–Surveillance, Epidemiology, and End Results centers—Detroit, Michigan; Iowa; Los Angeles County, California; and Seattle, Washington—from 1998 to 2000. We used weighted quantile sum (WQS) regression to model the association of a mixture of chemicals and risk of NHL. The WQS index was a sum of weighted quartiles for 5 polychlorinated biphenyls (PCBs), 7 polycyclic aromatic hydrocarbons (PAHs), and 15 pesticides. We estimated chemical mixture weights and effects for study sites combined and for each site individually, and also for histologic subtypes of NHL. Results The WQS index was statistically significantly associated with NHL overall [odds ratio (OR) = 1.30; 95% CI: 1.08, 1.56; p = 0.006; for one quartile increase] and in the study sites of Detroit (OR = 1.71; 95% CI: 1.02, 2.92; p = 0.045), Los Angeles (OR = 1.44; 95% CI: 1.00, 2.08; p = 0.049), and Iowa (OR = 1.76; 95% CI: 1.23, 2.53; p = 0.002). The index was marginally statistically significant in Seattle (OR = 1.39; 95% CI: 0.97, 1.99; p = 0.071). The most highly weighted chemicals for predicting risk overall were PCB congener 180 and propoxur. Highly weighted chemicals varied by study site; PCBs were more highly weighted in Detroit, and pesticides were more highly weighted in Iowa. Conclusions An index of chemical mixtures was significantly associated with NHL. Our results show the importance of evaluating chemical mixtures when studying cancer risk. Citation Czarnota J, Gennings C, Colt JS, De Roos AJ, Cerhan JR, Severson RK, Hartge P, Ward MH, Wheeler DC. 2015. Analysis of environmental chemical mixtures and non-Hodgkin lymphoma risk in the NCI-SEER NHL Study. Environ Health Perspect 123:965–970; http://dx.doi.org/10.1289/ehp.1408630 PMID:25748701
Optimal trajectories for hypersonic launch vehicles
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas
1994-01-01
In this paper, we derive a near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. Because liquid hydrogen fueled hypersonic aircraft are volume sensitive, as well as weight sensitive, the cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize gross take-off weight for a given payload mass and volume in orbit.
An Electrophysiological Signature of Summed Similarity in Visual Working Memory
ERIC Educational Resources Information Center
van Vugt, Marieke K.; Sekuler, Robert; Wilson, Hugh R.; Kahana, Michael J.
2013-01-01
Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth…
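The decision rule these models posit is simple to write down: the probe's total similarity to all studied items is compared against a criterion. A minimal, generic sketch (the exponential similarity kernel and criterion value are illustrative assumptions, not parameters from these experiments):

    import numpy as np

    def summed_similarity(probe, study_list, tau=1.0):
        """Sum of exponentially decaying similarities between a probe and each
        studied item (items represented as feature vectors)."""
        d = np.linalg.norm(study_list - probe, axis=1)   # feature-space distances
        return np.exp(-d / tau).sum()

    def recognize(probe, study_list, criterion=1.5):
        """Respond 'old' when summed similarity exceeds the decision criterion."""
        return summed_similarity(probe, study_list) > criterion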
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
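The calibration objective referred to here is the weighted sum of squared residuals that PEST-style inverse modeling minimizes; weights put observation groups with different units on a comparable footing. A schematic version, with invented residuals and weights:

    import numpy as np

    def weighted_ssr(residuals_by_group, weights_by_group):
        """Total objective: sum over observation groups of w_g * sum(residual^2)."""
        return sum(w * np.sum(np.square(r))
                   for r, w in zip(residuals_by_group, weights_by_group))

    # Hypothetical residuals (observed - simulated) for three observation types
    yield_res = np.array([0.3, -0.5, 0.1])    # crop yield, Mg/ha
    soil_c_res = np.array([120.0, -80.0])     # soil carbon, kg/ha
    n2o_res = np.array([0.02, 0.05, -0.01])   # N2O flux, kg N/ha/day
    phi = weighted_ssr([yield_res, soil_c_res, n2o_res], [1.0, 1e-4, 400.0])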
Dichromatic State Sum Models for Four-Manifolds from Pivotal Functors
NASA Astrophysics Data System (ADS)
Bärenz, Manuel; Barrett, John
2017-11-01
A family of invariants of smooth, oriented four-dimensional manifolds is defined via handle decompositions and the Kirby calculus of framed link diagrams. The invariants are parametrised by a pivotal functor from a spherical fusion category into a ribbon fusion category. A state sum formula for the invariant is constructed via the chain-mail procedure, so a large class of topological state sum models can be expressed as link invariants. Most prominently, the Crane-Yetter state sum over an arbitrary ribbon fusion category is recovered, including the nonmodular case. It is shown that the Crane-Yetter invariant for nonmodular categories is stronger than signature and Euler invariant. A special case is the four-dimensional untwisted Dijkgraaf-Witten model. Derivations of state space dimensions of TQFTs arising from the state sum model agree with recent calculations of ground state degeneracies in Walker-Wang models. Relations to different approaches to quantum gravity such as Cartan geometry and teleparallel gravity are also discussed.
Meta-analyses of workplace physical activity and dietary behaviour interventions on weight outcomes.
Verweij, L M; Coffeng, J; van Mechelen, W; Proper, K I
2011-06-01
This meta-analytic review critically examines the effectiveness of workplace interventions targeting physical activity, dietary behaviour or both on weight outcomes. Data could be extracted from 22 studies published between 1980 and November 2009 for meta-analyses. The GRADE approach was used to determine the level of evidence for each pooled outcome measure. Results show moderate quality of evidence that workplace physical activity and dietary behaviour interventions significantly reduce body weight (nine studies; mean difference [MD] -1.19 kg [95% CI -1.64 to -0.74]), body mass index (BMI) (11 studies; MD -0.34 kg m⁻² [95% CI -0.46 to -0.22]) and body fat percentage calculated from sum of skin-folds (three studies; MD -1.12% [95% CI -1.86 to -0.38]). There is low quality of evidence that workplace physical activity interventions significantly reduce body weight and BMI. Effects on percentage body fat calculated from bioelectrical impedance or hydrostatic weighing, waist circumference, sum of skin-folds and waist-hip ratio could not be investigated properly because of a lack of studies. Subgroup analyses showed a greater reduction in body weight of physical activity and diet interventions containing an environmental component. As the clinical relevance of the pooled effects may be substantial on a population level, we recommend workplace physical activity and dietary behaviour interventions, including an environment component, in order to prevent weight gain. © 2010 The Authors. Obesity Reviews © 2010 International Association for the Study of Obesity.
Which kind of psychometrics is adequate for patient satisfaction questionnaires?
Konerding, Uwe
2016-01-01
The construction and psychometric analysis of patient satisfaction questionnaires are discussed. The discussion is based upon the classification of multi-item questionnaires into scales or indices. Scales consist of items that describe the effects of the latent psychological variable to be measured, and indices consist of items that describe the causes of this variable. Whether patient satisfaction questionnaires should be constructed and analyzed as scales or as indices depends upon the purpose for which these questionnaires are required. If the final aim is improving care with regard to patients' preferences, then these questionnaires should be constructed and analyzed as indices. This implies two requirements: 1) items for patient satisfaction questionnaires should be selected in such a way that the universe of possible causes of patient satisfaction is covered optimally and 2) Cronbach's alpha, principal component analysis, exploratory factor analysis, confirmatory factor analysis, and analyses with models from item response theory, such as the Rasch Model, should not be applied for psychometric analyses. Instead, multivariate regression analyses with a direct rating of patient satisfaction as the dependent variable and the individual questionnaire items as independent variables should be performed. The coefficients produced by such an analysis can be applied for selecting the best items and for weighting the selected items when a sum score is determined. The lower boundaries of the validity of the unweighted and the weighted sum scores can be estimated by their correlations with the direct satisfaction rating. While the first requirement is fulfilled in the majority of the previous patient satisfaction questionnaires, the second one deviates from previous practice. Hence, if patient satisfaction is actually measured with the final aim of improving care with regard to patients' preferences, then future practice should be changed so that the second requirement is also fulfilled.
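The index-construction approach described here reduces to regressing a direct overall-satisfaction rating on the individual items and using the regression coefficients as item weights. A compact sketch with invented data (variable names and effect sizes are illustrative):

    import numpy as np

    # Hypothetical data: item responses (n patients x k items) and a direct
    # overall-satisfaction rating for the same patients
    rng = np.random.default_rng(2)
    items = rng.integers(1, 6, size=(200, 10)).astype(float)
    direct = items @ np.linspace(0.3, 0.05, 10) + rng.normal(scale=0.5, size=200)

    # Multivariate regression: direct rating on items; coefficients = item weights
    X = np.column_stack([np.ones(200), items])
    beta = np.linalg.lstsq(X, direct, rcond=None)[0]
    weights = beta[1:]

    weighted_score = items @ weights
    unweighted_score = items.sum(axis=1)
    # Lower bounds on validity: correlations with the direct rating
    print(np.corrcoef(weighted_score, direct)[0, 1],
          np.corrcoef(unweighted_score, direct)[0, 1])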
Modeling the directivity of parametric loudspeaker
NASA Astrophysics Data System (ADS)
Shi, Chuang; Gan, Woon-Seng
2012-09-01
The emerging applications of the parametric loudspeaker, such as 3D audio, demand accurate directivity control at the audible frequency (i.e., the difference frequency). Though delay-and-sum beamforming has proven adequate for adjusting the steering angles of the parametric loudspeaker, accurate prediction of the mainlobe and sidelobes remains a challenging problem. This is mainly because of the approximations used to derive the directivity of the difference frequency from the directivity of the primary frequency, and because of mismatches between the theoretical and measured directivities caused by system errors incurred at different stages of the implementation. In this paper, we propose a directivity model of the parametric loudspeaker. The directivity model consists of two tuning vectors corresponding to the spacing error and the weight error for the primary frequency, and it adopts a modified form of the product directivity principle for the difference frequency to further improve the modeling accuracy.
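For reference, a basic delay-and-sum beamformer for a uniform linear array — the building block the paper starts from — can be sketched as follows. The array geometry and phase-shift weights here are generic, not the authors' error-compensated model:

    import numpy as np

    def delay_and_sum_weights(n_elems, spacing, freq, steer_deg, c=343.0):
        """Complex element weights steering a uniform linear array to steer_deg."""
        k = 2 * np.pi * freq / c                   # wavenumber
        x = np.arange(n_elems) * spacing           # element positions (m)
        return np.exp(-1j * k * x * np.sin(np.radians(steer_deg)))

    def array_response(weights, spacing, freq, angles_deg, c=343.0):
        """Far-field directivity: weighted sum of element phase terms."""
        k = 2 * np.pi * freq / c
        x = np.arange(len(weights)) * spacing
        phases = np.exp(1j * k * np.outer(np.sin(np.radians(angles_deg)), x))
        return np.abs(phases @ weights) / len(weights)

    w = delay_and_sum_weights(8, 0.01, 40_000, steer_deg=10)   # 40 kHz primary
    pattern = array_response(w, 0.01, 40_000, np.linspace(-90, 90, 361))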
Black, Beth; Marcoux, Beth C; Stiller, Christine; Qu, Xianggui; Gellish, Ronald
2012-11-01
Physical therapists have been encouraged to engage in health promotion practice. Health professionals who engage in healthy behaviors themselves are more apt to recommend those behaviors, and patients are more motivated to change their behaviors when their health care provider is a credible role model. The purpose of this study was to describe the health behaviors and role-modeling attitudes of physical therapists and physical therapist students. This study was a descriptive cross-sectional survey. A national sample of 405 physical therapists and 329 physical therapist students participated in the survey. Participants' attitudes toward role modeling and behaviors related to physical activity, fruit and vegetable consumption, abstention from smoking, and maintenance of a healthy weight were measured. Wilcoxon rank sum tests were used to examine differences in attitudes and behaviors between physical therapists and physical therapist students. A majority of the participants reported that they engage in regular physical activity (80.8%), eat fruits and vegetables (60.3%), do not smoke (99.4%), and maintain a healthy weight (78.7%). Although there were no differences in behaviors, physical therapist students were more likely to believe that role modeling is a powerful teaching tool, physical therapist professionals should "practice what they preach," physical activity is a desirable behavior, and physical therapist professionals should be role models for nonsmoking and maintaining a healthy weight. Limitations of this study include the potential for response bias and social desirability bias. Physical therapists and physical therapist students engage in health-promoting behaviors at similarly high rates but differ in role-modeling attitudes.
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.
2013-12-15
Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
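Because the femoral head weights are fixed and the four weights must sum to one, the prediction step amounts to a one-variable linear regression for the rectum weight, with the bladder taking the remainder. A schematic of that pipeline (the training data and fitted coefficients below are invented for illustration):

    import numpy as np

    # Hypothetical training data: overlap-volume ratio (rectum vs. bladder with
    # PTV + 1 cm expansion) and the corresponding IOM-derived rectum weights
    ratio = np.array([0.4, 0.7, 1.1, 1.5, 2.0, 2.6])
    rectum_w = np.array([0.20, 0.28, 0.38, 0.46, 0.55, 0.62])

    slope, intercept = np.polyfit(ratio, rectum_w, 1)   # simple linear fit

    def predict_weights(r):
        """Femoral head weights fixed at 1% each; bladder takes the remainder."""
        rectum = slope * r + intercept
        femoral = 0.01, 0.01
        bladder = 1.0 - rectum - sum(femoral)
        return {"rectum": rectum, "bladder": bladder,
                "femoral_left": femoral[0], "femoral_right": femoral[1]}

    print(predict_weights(1.3))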
Kurle, Carolyn M; Bakker, Victoria J; Copeland, Holly; Burnett, Joe; Jones Scherbinski, Jennie; Brandt, Joseph; Finkelstein, Myra E
2016-09-06
The critically endangered California condor (Gymnogyps californianus) has relied intermittently on dead-stranded marine mammals since the Pleistocene, and this food source is considered important for their current recovery. However, contemporary marine mammals contain persistent organic pollutants that could threaten condor health. We used stable carbon and nitrogen isotope, contaminant, and behavioral data in coastal versus noncoastal condors to quantify contaminant transfer from marine mammals and created simulation models to predict the risk of reproductive impairment for condors from exposure to DDE (p,p'-DDE), a major metabolite of the chlorinated pesticide DDT. Coastal condors had higher whole blood isotope values and mean concentrations of contaminants associated with marine mammals, including mercury (whole blood), sum chlorinated pesticides (comprised of ∼95% DDE) (plasma), sum polychlorinated biphenyls (PCBs) (plasma), and sum polybrominated diphenyl ethers (PBDEs) (plasma), 12-100-fold greater than those of noncoastal condors. The mean plasma DDE concentration for coastal condors was 500 ± 670 (standard deviation) (n = 22) versus 24 ± 24 (standard deviation) (n = 8) ng/g of wet weight for noncoastal condors, and simulations predicted ∼40% of breeding-age coastal condors have DDE levels associated with eggshell thinning in other avian species. Our analyses demonstrate potentially harmful levels of marine contaminant transfer to California condors, which could hinder the recovery of this terrestrial species.
QCD Sum Rules and Models for Generalized Parton Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anatoly Radyushkin
2004-10-01
I use QCD sum rule ideas to construct models for generalized parton distributions. To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs, and two models are discussed. One of them takes the double Borel transform at an adjusted value of the Borel parameter as a model for nonforward parton densities, and the other is based on the local duality relation. Possible ways of improving these Ansaetze are briefly discussed.
The spatiotemporal MEG covariance matrix modeled as a sum of Kronecker products.
Bijma, Fetsje; de Munck, Jan C; Heethaar, Rob M
2005-08-15
The single Kronecker product (KP) model for the spatiotemporal covariance of MEG residuals is extended to a sum of Kronecker products. This sum of KPs is estimated such that it approximates the spatiotemporal sample covariance best in matrix norm. Contrary to the single KP, this extension allows for describing multiple, independent phenomena in the ongoing background activity. Whereas the single KP model can be interpreted by assuming that background activity is generated by randomly distributed dipoles with certain spatial and temporal characteristics, the sum model can be physiologically interpreted by assuming a composite of such processes. Taking enough terms into account, the spatiotemporal sample covariance matrix can be described exactly by this extended model. In the estimation of the sum-of-KP model, it appears that the sum of the first two KPs describes between 67% and 93% of the spatiotemporal sample covariance. Moreover, these first two terms describe two physiological processes in the background activity: focal, frequency-specific alpha activity, and more widespread non-frequency-specific activity. Furthermore, temporal nonstationarities due to trial-to-trial variations are not clearly visible in the first two terms, and, hence, play only a minor role in the sample covariance matrix in terms of matrix power. Considering the dipole localization, the single KP model appears to describe around 80% of the noise and seems therefore adequate. The emphasis of further improvement of localization accuracy should be on improving the source model rather than the covariance model.
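A sum-of-Kronecker-products approximation of a covariance matrix can be computed with Van Loan's rearrangement trick: reshape the blocks of the matrix into rows, take a truncated SVD, and reshape the singular vectors back. A minimal numpy sketch under that standard construction (dimensions are illustrative, not the MEG channel/sample counts):

    import numpy as np

    def kron_sum_approx(C, p, q, r=2):
        """Best Frobenius-norm approximation of C (pq x pq) by a sum of r terms
        A_k (x) B_k, with A_k p x p (e.g., spatial) and B_k q x q (e.g., temporal),
        via Van Loan's rearrangement plus a truncated SVD."""
        R = np.empty((p * p, q * q))
        for i in range(p):
            for j in range(p):
                # each q x q block of C becomes one row of the rearranged matrix
                R[i * p + j] = C[i * q:(i + 1) * q, j * q:(j + 1) * q].ravel()
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        return [(np.sqrt(s[k]) * U[:, k].reshape(p, p),
                 np.sqrt(s[k]) * Vt[k].reshape(q, q)) for k in range(r)]

    # Sanity check: a covariance that is exactly one Kronecker product is recovered
    rng = np.random.default_rng(3)
    A = rng.normal(size=(4, 4)); A = A @ A.T
    B = rng.normal(size=(5, 5)); B = B @ B.T
    terms = kron_sum_approx(np.kron(A, B), p=4, q=5, r=1)
    approx = np.kron(*terms[0])   # ~equal to np.kron(A, B)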
Yang, Huayun; Zhou, Shanshan; Li, Weidong; Liu, Qi; Tu, Yunjie
2015-10-01
Sediment samples were analyzed to comprehensively characterize the concentrations, distribution, possible sources and potential biological risk of organochlorine pesticides in Qiandao Lake, China. Concentrations of ΣHCH and ΣDDT in sediments ranged from 0.03 to 5.75 ng/g dry weight and from not detected to 14.39 ng/g dry weight, respectively. The predominance of β-HCH and the α-HCH/γ-HCH ratios indicated that the residues of HCHs were derived not only from historical technical HCH use but also from additional usage of lindane. Ratios of o,p'-DDT/p,p'-DDT and DDD/DDE suggested that both dicofol-type DDT and technical DDT applications may be present in most study areas. Additionally, based on two sediment quality guidelines, γ-HCH, o,p'-DDT and p,p'-DDT could be the main organochlorine pesticide species of ecotoxicological concern in Qiandao Lake.
Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi
2017-10-05
We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized by fixing atoms in the reaction-center part before displacements of the reaction-center atoms. In this method, the surrounding part is described as the weighted sum of multiple surrounding structures that are independently optimized. Then, geometric displacements of the reaction-center atoms are performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or enzyme. In all three cases, MSM gave lower reaction energy profiles than the QM/MM-ONIOM-microiteration method over the entire reaction paths with comparable computational costs. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Jiang, G.; Wong, C. Y.; Lin, S. C. F.; Rahman, M. A.; Ren, T. R.; Kwok, Ngaiming; Shi, Haiyan; Yu, Ying-Hao; Wu, Tonghai
2015-04-01
The enhancement of image contrast and preservation of image brightness are two important but conflicting objectives in image restoration. Previous attempts based on linear histogram equalization achieved contrast enhancement, but exact preservation of brightness was not accomplished. A new perspective is taken here to provide balanced performance of contrast enhancement and brightness preservation simultaneously by casting the quest for such a solution as an optimization problem. Specifically, the non-linear gamma correction method is adopted to enhance the contrast, while a weighted sum approach is employed for brightness preservation. In addition, the efficient golden search algorithm is exploited to determine the required optimal parameters to produce the enhanced images. Experiments are conducted on natural colour images captured under various indoor, outdoor and illumination conditions. Results have shown that the proposed method outperforms currently available methods in contrast enhancement and brightness preservation.
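The optimization described — choose a gamma that trades off contrast gain against brightness drift — can be prototyped with a golden-section search. The objective below (a weighted sum of a contrast term and a brightness-preservation term) is a stand-in for the paper's exact formulation:

    import numpy as np

    def objective(gamma, img, w=0.5):
        """Weighted sum: reward contrast (std) while penalizing brightness drift."""
        out = np.power(img, gamma)            # gamma correction, img in [0, 1]
        contrast_term = -out.std()            # more spread = better contrast
        brightness_term = abs(out.mean() - img.mean())
        return w * contrast_term + (1 - w) * brightness_term

    def golden_search(f, lo, hi, tol=1e-4):
        """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
        g = (np.sqrt(5) - 1) / 2
        a, b = lo, hi
        c, d = b - g * (b - a), a + g * (b - a)
        while b - a > tol:
            if f(c) < f(d):
                b, d = d, c
                c = b - g * (b - a)
            else:
                a, c = c, d
                d = a + g * (b - a)
        return (a + b) / 2

    img = np.clip(np.random.default_rng(4).normal(0.4, 0.15, (64, 64)), 0, 1)
    best_gamma = golden_search(lambda g_: objective(g_, img), 0.2, 3.0)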
Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai
2017-03-01
This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control, and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to perform policy evaluation and policy improvement in the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. The gradient descent algorithm updates the critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of the Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.
Mesopic luminance assessed with minimally distinct border perception
Raphael, Sabine; MacLeod, Donald I. A.
2015-01-01
In photopic vision, the border between two fields is minimally distinct when the two fields are isoluminant; that is, when the achromatic luminance of the two fields is equal. The distinctness of a border between extrafoveal reference and comparison fields was used here as an isoluminance criterion under a variety of adaptation conditions ranging from photopic to scotopic. The adjustment was done by trading off the amount of blue against the amount of red in the comparison field. Results show that isoluminant border settings are linear under all constant adaptation conditions, though varying with state of adaptation. The relative contribution of rods and cones to luminance was modeled such that the linear sum of the suitably weighted scotopic and photopic luminance is constant for the mesopic isoluminant conditions. The relative weights change with adapting intensity in a sigmoid fashion and also depend strongly on the position of the border in the visual field. PMID:26223024
Integrated optimization of planetary rover layout and exploration routes
NASA Astrophysics Data System (ADS)
Lee, Dongoo; Ahn, Jaemyung
2018-01-01
This article introduces an optimization framework for the integrated design of a planetary surface rover and its exploration route that is applicable to the initial phase of a planetary exploration campaign composed of multiple surface missions. The scientific capability and the mobility of a rover are modelled as functions of the science weight fraction, a key parameter characterizing the rover. The proposed problem is formulated as a mixed-integer nonlinear program that maximizes the sum of profits obtained through a planetary surface exploration mission by simultaneously determining the science weight fraction of the rover, the sites to visit and their visiting sequences under resource consumption constraints imposed on each route and collectively on a mission. A solution procedure for the proposed problem composed of two loops (the outer loop and the inner loop) is developed. The results of test cases demonstrating the effectiveness of the proposed framework are presented.
Hardware Implementation of a Bilateral Subtraction Filter
NASA Technical Reports Server (NTRS)
Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven
2009-01-01
A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): (a) an image pixel pipeline with a 9×9-pixel window generator; (b) an array of processing elements; (c) an adder tree; (d) a smoothing-and-delaying unit; and (e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients of the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value/weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of products divided by the sum of weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3×3 subwindow centered in the 9×9 window. After smoothing, the image is delayed by the additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3×3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
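In software terms, the pipeline computes a range-and-space-weighted average over each 9×9 window, a plain 3×3 smoothing, and their difference. A compact numpy sketch of that processing (window sizes follow the article; the Gaussian widths are illustrative assumptions):

    import numpy as np

    def bilateral_subtraction(img, win=9, sigma_s=3.0, sigma_r=0.1):
        """Bilateral smoothing over a win x win window, then subtraction of a
        3x3 box smoothing, mimicking the FPGA pipeline in software."""
        pad = win // 2
        padded = np.pad(img, pad, mode='edge')
        yy, xx = np.mgrid[-pad:pad + 1, -pad:pad + 1]
        spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))  # per-position weight
        bilateral = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                window = padded[i:i + win, j:j + win]
                rng_w = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
                w = spatial * rng_w
                bilateral[i, j] = (w * window).sum() / w.sum()  # divider stage
        # 3x3 smoothed image (simple box average), as in the pipeline
        p3 = np.pad(img, 1, mode='edge')
        box = sum(p3[di:di + img.shape[0], dj:dj + img.shape[1]]
                  for di in range(3) for dj in range(3)) / 9.0
        return box - bilateral    # subtraction-stage output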
1993-02-01
the relative cost effectiveness of Ada and C++ [10]. (An overview of the Air Force report is given in Appendix D.) Surprisingly, the study determined ...support; 5 = excellent support), followed by a total score, a weighted sum of the rankings based on weights determined by an expert panel: Category...International Conference Location: Britannia International Hotel, London Sponsor: Ada Language UK, Ltd. POC: Helen Byard, Administrator, Ada UK, P.O. 322, York
Booth, Pieter N; Law, Sheryl A; Ma, Jane; Buonagurio, John; Boyd, James; Turnley, Jessica
2017-09-01
This paper reviews literature on aesthetics and describes the development of vista and landscape aesthetics models. Spatially explicit variables were chosen to represent physical characteristics of natural landscapes that are important to aesthetic preferences. A vista aesthetics model evaluates the aesthetics of natural landscapes viewed from distances of more than 1000 m, and a landscape aesthetics model evaluates the aesthetic value of wetlands and forests within 1000 m from the viewer. Each of the model variables is quantified using spatially explicit metrics on a pixel-specific basis within EcoAIM™, a geographic information system (GIS)-based ecosystem services (ES) decision analysis support tool. Pixel values are "binned" into ranked categories, and weights are assigned to select variables to represent stakeholder preferences. The final aesthetic score is the weighted sum of all variables and is assigned ranked values from 1 to 10. Ranked aesthetic values are displayed on maps by patch type and integrated within EcoAIM. The response of the aesthetic scoring in the models was tested by comparing current conditions in a discrete area of the facility with a Development scenario in the same area. The Development scenario consisted of two 6-story buildings and a trail replacing natural areas. The results of the vista aesthetic model indicate that the viewshed area variable had the greatest effect on the aesthetics overall score. Results from the landscape aesthetics model indicate a 10% increase in overall aesthetics value, attributed to the increase in landscape diversity. The models are sensitive to the weights assigned to certain variables by the user, and these weights should be set to reflect regional landscape characteristics as well as stakeholder preferences. This demonstration project shows that natural landscape aesthetics can be evaluated as part of a nonmonetary assessment of ES, and a scenario-building exercise provides end users with a tradeoff analysis in support of natural resource management decisions. Integr Environ Assess Manag 2017;13:926-938. © 2017 SETAC.
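A toy version of this scoring chain — bin each pixel-level variable into ranked categories, apply stakeholder weights, and rescale the weighted sum to ranks 1-10 — might look like this (the variables and weights are invented, not EcoAIM's):

    import numpy as np

    rng = np.random.default_rng(5)
    # Hypothetical per-pixel variables for 1000 pixels
    vars_ = {"viewshed": rng.lognormal(size=1000),
             "diversity": rng.uniform(size=1000),
             "water": rng.integers(0, 2, size=1000).astype(float)}
    weights = {"viewshed": 0.5, "diversity": 0.3, "water": 0.2}  # stakeholder weights

    def rank_bins(x, n_bins=10):
        """Bin values into ranked categories 1..n_bins by quantile."""
        edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(x, edges) + 1

    score = sum(weights[k] * rank_bins(v) for k, v in vars_.items())
    aesthetic = rank_bins(score, 10)   # final ranked aesthetic values, 1..10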
Chamberlain, Ryan; Reyes, Denise; Curran, Geoffrey L.; Marjanska, Malgorzata; Wengenack, Thomas M.; Poduslo, Joseph F.; Garwood, Michael; Jack, Clifford R.
2009-01-01
One of the hallmark pathologies of Alzheimer’s disease (AD) is amyloid plaque deposition. Plaques appear hypointense on T2- and T2*-weighted MR images probably due to the presence of endogenous iron, but no quantitative comparison of various imaging techniques has been reported. We estimated the T1, T2, T2*, and proton density values of cortical plaques and normal cortical tissue and analyzed the plaque contrast generated by a collection of T2-, T2*-, and susceptibility-weighted imaging (SWI) methods in ex vivo transgenic mouse specimens. The proton density and T1 values were similar for both cortical plaques and normal cortical tissue. The T2 and T2* values were similar in cortical plaques, which indicates that the iron content of cortical plaques may not be as large as previously thought. Ex vivo plaque contrast was increased compared to a previously reported spin echo sequence by summing multiple echoes and by performing SWI; however, gradient echo and susceptibility weighted imaging was found to be impractical for in vivo imaging due to susceptibility interface-related signal loss in the cortex. PMID:19253386
Zhuo, Lin; Tao, Hong; Wei, Hong; Chengzhen, Wu
2016-01-01
We tried to establish compatible carbon content models of individual trees for a Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantation from Fujian province in southeast China. In general, compatibility requires that the sum of components equal the whole tree, meaning that the sum of percentages calculated from component equations should equal 100%. Thus, we used multiple approaches to simulate carbon content in boles, branches, foliage leaves, roots and the whole individual trees. The approaches included (i) single optimal fitting (SOF), (ii) nonlinear adjustment in proportion (NAP) and (iii) nonlinear seemingly unrelated regression (NSUR). These approaches were used in combination with variables relating diameter at breast height (D) and tree height (H), such as D, D2H, DH and D&H (where D&H means two separate variables in a bivariate model). Power, exponential and polynomial functions were tested, and a new general function model was proposed by this study. Weighted least squares regression models were employed to eliminate heteroscedasticity. Model performances were evaluated by using mean residuals, residual variance, mean square error and the determination coefficient. The results indicated that models using both dimensions (DH, D2H and D&H) were always superior to those with a single variable (D). The D&H variable combination was found to be the most useful predictor. Of all the approaches, SOF could establish a single optimal model separately, but there were deviations in estimating results due to existing incompatibilities, while NAP and NSUR could ensure prediction compatibility. Simultaneously, we found that the new general model had better accuracy than the others. In conclusion, we recommend that the new general model be used to estimate carbon content for Chinese fir and considered for other vegetation types as well. PMID:26982054
Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang
2017-11-01
The identification of the nonlinearity and coupling is crucial in nonlinear target tracking problems in collaborative sensor networks. According to the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as the model noise covariance, and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve reliable covariance measurement, making it impractical for nonlinear systems that are rapidly changing. To deal with the problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor by using the received measurements and state estimates from its connected sensors instead of the time window. A new cost function is set as the weighted sum of the bias and oscillation of the state to estimate the "best" model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting of a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. A sensor selection method is added to the algorithm to decrease the computational load of the filter and increase the scalability of the sensor network. The existence, suboptimality and stability analysis of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF against other filtering algorithms for a large class of systems.
Optimal trajectories for hypersonic launch vehicles
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas
1992-01-01
In this paper, we derive a near-optimal guidance law for the ascent trajectory from Earth surface to Earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. The performance objective is a weighted sum of fuel mass and volume, with the weighting factor selected to give minimum gross take-off weight for a specific payload mass and volume.
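The cost functional in both versions of this work is a scalarized trade-off, J = w·(fuel mass) + (1 − w)·(fuel volume), with the weighting factor w tuned so that the resulting optimum minimizes gross take-off weight. A schematic of sweeping that weight; the candidate profiles and the sizing relation are invented placeholders, not the vehicle model:

    import numpy as np

    def ascent_cost(w, fuel_mass, fuel_volume):
        """Scalarized objective: J = w * fuel mass + (1 - w) * fuel volume."""
        return w * fuel_mass + (1 - w) * fuel_volume

    # Hypothetical candidate ascent profiles: (fuel mass [Mg], fuel volume [m^3])
    candidates = [(55.0, 780.0), (60.0, 700.0), (67.0, 650.0)]

    def gross_takeoff_weight(mass, volume):
        """Placeholder sizing relation; volume drives structure/tankage weight."""
        return 1.6 * mass + 0.05 * volume

    # Sweep w; pick the weighting whose cost-optimal profile minimizes GTOW
    best_w = min(np.linspace(0.0, 1.0, 101),
                 key=lambda w: gross_takeoff_weight(
                     *min(candidates, key=lambda c: ascent_cost(w, *c))))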
Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk
Czarnota, Jenna; Gennings, Chris; Wheeler, David C
2015-01-01
In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
Learning Reward Uncertainty in the Basal Ganglia
Bogacz, Rafal
2016-01-01
Learning the reliability of different sources of rewards is critical for making optimal choices. However, despite the existence of detailed theory describing how the expected reward is learned in the basal ganglia, it is not known how reward uncertainty is estimated in these circuits. This paper presents a class of models that encode both the mean reward and the spread of the rewards, the former in the difference between the synaptic weights of D1 and D2 neurons, and the latter in their sum. In the models, the tendency to seek (or avoid) options with variable reward can be controlled by increasing (or decreasing) the tonic level of dopamine. The models are consistent with the physiology of and synaptic plasticity in the basal ganglia, they explain the effects of dopaminergic manipulations on choices involving risks, and they make multiple experimental predictions. PMID:27589489
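One simple instantiation of such a model updates D1 and D2 weights asymmetrically from positive and negative reward prediction errors, so that their difference tracks the mean reward and their sum grows with its spread. The learning rule below is a generic sketch in this spirit, not the paper's exact equations:

    import numpy as np

    def learn_reward_stats(rewards, alpha=0.05, decay=0.02):
        """D1 weight grows with positive prediction errors, D2 with negative ones;
        weight decay keeps both bounded, so their sum reflects reward spread."""
        g, n = 0.0, 0.0                       # D1 ("go") and D2 ("no-go") weights
        for r in rewards:
            delta = r - (g - n)               # prediction error vs. estimated mean
            g += alpha * max(delta, 0.0) - decay * g
            n += alpha * max(-delta, 0.0) - decay * n
        return g - n, g + n                   # ~mean reward, ~reward spread

    rng = np.random.default_rng(6)
    mean_est, spread_est = learn_reward_stats(rng.normal(1.0, 2.0, 5000))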
Lie algebraic similarity transformed Hamiltonians for lattice model systems
NASA Astrophysics Data System (ADS)
Wahlen-Strothman, Jacob M.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Scuseria, Gustavo E.
2015-01-01
We present a class of Lie algebraic similarity transformations generated by exponentials of two-body on-site Hermitian operators whose Hausdorff series can be summed exactly without truncation. The correlators are defined over the entire lattice and include the Gutzwiller factor n_{i↑}n_{i↓}, and two-site products of density (n_{i↑} + n_{i↓}) and spin (n_{i↑} − n_{i↓}) operators. The resulting non-Hermitian many-body Hamiltonian can be solved in a biorthogonal mean-field approach with polynomial computational cost. The proposed similarity transformation generates locally weighted orbital transformations of the reference determinant. Although the energy of the model is unbound, projective equations in the spirit of coupled cluster theory lead to well-defined solutions. The theory is tested on the one- and two-dimensional repulsive Hubbard model where it yields accurate results for small and medium sized interaction strengths.
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
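Collapsing the three criteria into one objective is plain weighted-sum scalarization. A generic sketch of that reduction; the criterion functions, parameter names, and weights below are placeholders, not the paper's formulation:

    def scalarize(params, criteria, weights):
        """Single objective = weighted sum of the individual criterion values."""
        return sum(w * c(params) for c, w in zip(criteria, weights))

    # Placeholder criteria for a hypothetical kinetic parameter set `p`
    fit_error = lambda p: (p["k1"] - 2.0) ** 2          # conversion-data mismatch
    thermo = lambda p: abs(p["dS"] + 120.0) / 120.0     # thermodynamic consistency
    entropy_prod = lambda p: abs(p["sigma"] - 1.0)      # entropy-production balance

    params = {"k1": 1.8, "dS": -110.0, "sigma": 0.9}
    total = scalarize(params, [fit_error, thermo, entropy_prod], [1.0, 0.5, 0.5])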
González-Benito, J; Castillo, E; Cruz-Caldito, J F
2015-07-28
Nanothermal expansion of poly(ethylene-co-vinylacetate), EVA, and poly(methyl methacrylate), PMMA, in the form of films was measured to finally obtain linear coefficients of thermal expansion, CTEs. The simple deflection of a cantilever in an atomic force microscope, AFM, was used to monitor thermal expansions at the nanoscale. The influences of (a) the structure of EVA in terms of its composition (vinylacetate content) and (b) the size of PMMA chains in terms of the molecular weight were studied. To carry this out, several polymer samples were used: EVA copolymers with different weight percents of the vinylacetate comonomer (12, 18, 25 and 40%) and PMMA polymers with different weight average molecular weights (33.9, 64.8, 75.6 and 360.0 kg mol(-1)). The dependencies of the corresponding CTEs on the vinyl acetate weight fraction of EVA and on the molecular weight of PMMA were analyzed and finally explained using new, intuitive and very simple models based on the rule of mixtures. In the case of EVA copolymers a simple equation considering the weighted contributions of each comonomer was enough to estimate the final CTE above the glass transition temperature. On the other hand, when the molecular weight dependence is considered, the free volume concept was used as a novelty. The expansion of PMMA, at least at the nanoscale, was well and easily described by the sum of the weighted contributions of the occupied and free volumes, respectively.
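Both rule-of-mixtures models described here reduce to weighted sums of component contributions. A minimal sketch of the two forms; the numerical coefficients are invented for illustration, not fitted values from the paper:

    def cte_copolymer(w_va, cte_va, cte_pe):
        """EVA: CTE as the mass-fraction-weighted sum of comonomer contributions."""
        return w_va * cte_va + (1 - w_va) * cte_pe

    def cte_free_volume(phi_free, cte_occupied, cte_free):
        """PMMA: CTE as the weighted sum of occupied- and free-volume expansions;
        phi_free shrinks as molecular weight grows (fewer chain ends)."""
        return (1 - phi_free) * cte_occupied + phi_free * cte_free

    # Illustrative numbers only (units of 1/K)
    print(cte_copolymer(w_va=0.25, cte_va=4.0e-4, cte_pe=2.0e-4))
    print(cte_free_volume(phi_free=0.05, cte_occupied=1.5e-4, cte_free=1.0e-3))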
Dense modifiable interconnections utilizing photorefractive volume holograms
NASA Astrophysics Data System (ADS)
Psaltis, Demetri; Qiao, Yong
1990-11-01
This report describes an experimental two-layer optical neural network built at Caltech. The system uses photorefractive volume holograms to implement dense, modifiable synaptic interconnections and liquid crystal light valves (LCLVs) to perform nonlinear thresholding operations. Kanerva's Sparse, Distributed Memory was implemented using this network, and its ability to recognize handwritten alphabetic characters (A-Z) has been demonstrated experimentally. According to Kanerva's model, the first layer has fixed, random interconnection weights and the second layer is trained by the sum-of-outer-products rule. After training, the recognition rates of the network on the training set (104 patterns) and test set (520 patterns) are 100 and 50 percent, respectively.
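Kanerva's two-layer scheme is easy to sketch in software: a fixed random first layer produces sparse hidden activations, and the second layer accumulates weights with the sum-of-outer-products (Hebbian) rule. A small numpy version under those assumptions (layer sizes and the top-k activation rule are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(7)
    n_in, n_hidden, n_out = 256, 1024, 26

    W1 = rng.choice([-1.0, 1.0], size=(n_hidden, n_in))   # fixed random weights

    def hidden(x, k=50):
        """Sparse hidden layer: activate the k best-matching address units."""
        a = W1 @ x
        h = np.zeros(n_hidden)
        h[np.argsort(a)[-k:]] = 1.0
        return h

    # Training: second-layer weights as a sum of outer products target (x) hidden
    X = rng.choice([-1.0, 1.0], size=(104, n_in))          # 104 training patterns
    T = np.eye(n_out)[rng.integers(0, n_out, size=104)]    # one-hot class labels
    W2 = sum(np.outer(t, hidden(x)) for x, t in zip(X, T))

    predict = lambda x: int(np.argmax(W2 @ hidden(x)))     # 0..25 -> 'A'..'Z'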
Analysis of Aerospike Plume Induced Base-Heating Environment
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1998-01-01
Computational analysis is conducted to study the effect of an aerospike engine plume on the X-33 base-heating environment during ascent flight. To properly account for forebody and aftbody flowfield effects such as shocks, and to allow for potential plume-induced flow separation, the thermo-flowfield is computed at selected trajectory points. The computational methodology is based on a three-dimensional, finite-difference, viscous-flow, chemically reacting, pressure-based computational fluid dynamics formulation, and a three-dimensional, finite-volume, spectral-line-based weighted-sum-of-gray-gases radiation absorption model for the computational heat transfer formulation. The predicted convective and radiative base-heat fluxes are presented.
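In a weighted-sum-of-gray-gases (WSGG) model, the total emissivity is a weighted sum of gray-gas terms, eps = sum_i a_i(T) * (1 - exp(-kappa_i * pL)). A generic evaluation routine follows; the polynomial weight fits and absorption coefficients below are placeholders, not a calibrated spectral-line-based set:

    import numpy as np

    def wsgg_emissivity(T, pL, a_coeffs, kappas):
        """Total emissivity as a weighted sum of gray gases:
        eps = sum_i a_i(T) * (1 - exp(-kappa_i * pL)),
        with temperature-dependent weights a_i(T) from polynomial fits."""
        a = np.array([np.polyval(c, T) for c in a_coeffs])
        return float(np.sum(a * (1.0 - np.exp(-np.array(kappas) * pL))))

    # Placeholder 3-gray-gas fit: a_i(T) = c1*T + c0; kappas in 1/(atm*m)
    a_coeffs = [(1.0e-4, 0.20), (-5.0e-5, 0.35), (2.0e-5, 0.15)]
    kappas = [0.4, 6.0, 120.0]
    eps = wsgg_emissivity(T=1800.0, pL=0.25, a_coeffs=a_coeffs, kappas=kappas)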
Holder, J P; Benedetti, L R; Bradley, D K
2016-11-01
Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
Subsite mapping of enzymes. Depolymerase computer modelling.
Allen, J D; Thoma, J A
1976-01-01
We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629
Hietala, P; Wolfová, M; Wolf, J; Kantanen, J; Juga, J
2014-02-01
Improving the feed efficiency of dairy cattle has a substantial effect on the economic efficiency and on the reduction of harmful environmental effects of dairy production through lower feeding costs and emissions from dairy farming. To assess the economic importance of feed efficiency in the breeding goal for dairy cattle, the economic values for the current breeding goal traits and the additional feed efficiency traits for Finnish Ayrshire cattle under production circumstances in 2011 were determined. The derivation of economic values was based on a bioeconomic model in which the profit of the production system was calculated, using the generated steady state herd structure. Considering beef production from dairy farms, 2 marketing strategies for surplus calves were investigated: (A) surplus calves were sold at a young age and (B) surplus calves were fattened on dairy farms. Both marketing strategies were unprofitable when subsidies were not included in the revenues. When subsidies were taken into account, a positive profitability was observed in both marketing strategies. The marginal economic values for residual feed intake (RFI) of breeding heifers and cows were -25.5 and -55.8 €/kg of dry matter per day per cow and year, respectively. The marginal economic value for RFI of animals in fattening was -29.5 €/kg of dry matter per day per cow and year. To compare the economic importance among traits, the standardized economic weight of each trait was calculated as the product of the marginal economic value and the genetic standard deviation; the standardized economic weight expressed as a percentage of the sum of all standardized economic weights was called relative economic weight. When not accounting for subsidies, the highest relative economic weight was found for 305-d milk yield (34% in strategy A and 29% in strategy B), which was followed by protein percentage (13% in strategy A and 11% in strategy B). The third most important traits were calving interval (9%) and mature weight of cows (11%) in strategy A and B, respectively. The sums of the relative economic weights over categories for RFI were 6 and 7% in strategy A and B, respectively. Under production conditions in 2011, the relative economic weights for the studied feed efficiency traits were low. However, it is possible that the relative importance of feed efficiency traits in the breeding goal will increase in the future due to increasing requirements to mitigate the environmental impact of milk production. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Extension of the Haseman-Elston regression model to longitudinal data.
Won, Sungho; Elston, Robert C; Park, Taesung
2006-01-01
We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model having several random effects. As response variable, we investigate the sibship sample-mean-corrected cross-product (smHE) and the BLUP-mean-corrected cross-product (pmHE), comparing them with the original squared difference (oHE), the overall-mean-corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. Also, the model can test for gene × time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.
[Application of ordinary Kriging method in entomologic ecology].
Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong
2003-01-01
Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. In simulating the variogram over a large range, an optimal simulation cannot always be obtained, but an interactive human-computer simulation method can be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were utilized to fit the one-step spherical model, the two-step spherical model and the linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the constraint of unbiasedness. The sums of squared deviations between estimated and measured values were computed for the various theoretical models, and the corresponding graphs are shown. It was shown that the simulation based on the two-step spherical model was the best, and that the one-step spherical model was better than the linear function model.
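For reference, ordinary kriging obtains its weights by solving a linear system built from the fitted variogram, with a Lagrange multiplier enforcing that the weights sum to one. A compact sketch with a spherical variogram (nugget, sill, range and sample data are illustrative):

    import numpy as np

    def spherical(h, nugget=0.1, sill=1.0, a=10.0):
        """Spherical variogram model gamma(h) with range a."""
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h >= a, sill, np.where(h == 0, 0.0, g))

    def ordinary_kriging(coords, values, target):
        """BLUE at `target` from nearby samples; weights constrained to sum to 1."""
        n = len(values)
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        K = np.empty((n + 1, n + 1))
        K[:n, :n] = spherical(d)
        K[n, :], K[:, n], K[n, n] = 1.0, 1.0, 0.0   # unbiasedness constraint row/col
        rhs = np.append(spherical(np.linalg.norm(coords - target, axis=1)), 1.0)
        w = np.linalg.solve(K, rhs)[:n]             # kriging weights
        return w @ values

    coords = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0], [5.0, 5.0]])
    values = np.array([2.1, 2.5, 1.8, 3.0])
    est = ordinary_kriging(coords, values, target=np.array([2.0, 2.0]))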
ERIC Educational Resources Information Center
Wei, Silin; Liu, Xiufeng; Jia, Yuane
2014-01-01
Scientific models and modeling play an important role in science, and students' understanding of scientific models is essential for their understanding of scientific concepts. The measurement instrument of "Students' Understanding of Models in Science" (SUMS), developed by Treagust, Chittleborough & Mamiala ("International…
Scaling exponent and dispersity of polymers in solution by diffusion NMR.
Williamson, Nathan H; Röding, Magnus; Miklavcic, Stanley J; Nydén, Magnus
2017-05-01
Molecular mass distribution measurements by pulsed gradient spin echo nuclear magnetic resonance (PGSE NMR) spectroscopy currently require prior knowledge of scaling parameters to convert from polymer self-diffusion coefficient to molecular mass. Reversing the problem, we utilize the scaling relation as prior knowledge to uncover the scaling exponent from within the PGSE data. Thus, the scaling exponent (a measure of polymer conformation and solvent quality) and the dispersity (Mw/Mn) are obtainable from one simple PGSE experiment. The method utilizes constraints and parametric distribution models in a two-step fitting routine involving first the mass-weighted signal and second the number-weighted signal. The method is developed using lognormal and gamma distribution models and tested on experimental PGSE attenuation of the terminal methylene signal and on the sum of all methylene signals of polyethylene glycol in D2O. Scaling exponent and dispersity estimates agree with known values in the majority of instances, leading to the potential application of the method to polymers for which characterization is not possible with alternative techniques. Copyright © 2017 Elsevier Inc. All rights reserved.
Optimization and evaluation of a proportional derivative controller for planar arm movement.
Jagodnik, Kathleen M; van den Bogert, Antonie J
2010-04-19
In most clinical applications of functional electrical stimulation (FES), the timing and amplitude of electrical stimuli have been controlled by open-loop pattern generators. The control of upper extremity reaching movements, however, will require feedback control to achieve the required precision. Here we present three controllers using proportional derivative (PD) feedback to stimulate six arm muscles, using two joint angle sensors. Controllers were first optimized and then evaluated on a computational arm model that includes musculoskeletal dynamics. Feedback gains were optimized by minimizing a weighted sum of position errors and muscle forces. Generalizability of the controllers was evaluated by performing movements for which the controller was not optimized, and robustness was tested via model simulations with randomly weakened muscles. Robustness was further evaluated by adding joint friction and doubling the arm mass. After optimization with a properly weighted cost function, all PD controllers performed fast, accurate, and robust reaching movements in simulation. Oscillatory behavior was seen after improper tuning. Performance improved slightly as the complexity of the feedback gain matrix increased. Copyright 2009 Elsevier Ltd. All rights reserved.
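The weighted-sum objective used to tune the feedback gains can be illustrated compactly. A hedged sketch, with toy trajectories and hypothetical weighting constants standing in for the simulated arm model:

```python
import numpy as np

def pd_cost(position_error, muscle_forces, w_err=1.0, w_force=1e-4):
    """Weighted-sum objective of the kind described above: penalize joint
    position errors and muscle forces; the weights are hypothetical tuning
    constants, not the study's values."""
    return (w_err * np.sum(position_error ** 2)
            + w_force * np.sum(muscle_forces ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
err = 0.2 * np.exp(-5.0 * t)          # decaying joint-angle error (rad)
forces = 50.0 * rng.random((100, 6))  # six muscle forces over time (N)
print(pd_cost(err, forces))
```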
NASA Astrophysics Data System (ADS)
Afanas'ev, V. P.; Gryazev, A. S.; Efremenko, D. S.; Kaplya, P. S.; Kuznetcova, A. V.
2017-12-01
Precise knowledge of the differential inverse inelastic mean free path (DIIMFP) and differential surface excitation probability (DSEP) of tungsten is essential for many fields of materials science. In this paper, a fitting algorithm is applied to extract the DIIMFP and DSEP from X-ray photoelectron spectra and electron energy loss spectra. The algorithm uses the partial intensity approach as a forward model, in which a spectrum is given as a weighted sum of cross-convolved DIIMFPs and DSEPs. The weights are obtained as solutions of the Riccati and Lyapunov equations derived from the invariant imbedding principle. The inversion algorithm utilizes a parametrization of DIIMFPs and DSEPs on the basis of a classical Lorentz oscillator. Unknown parameters of the model are found by a fitting procedure that minimizes the residual between measured spectra and forward simulations. It is found that the surface layer of tungsten contains several sublayers with corresponding Langmuir resonances. The thicknesses of these sublayers are proportional to the periods of the corresponding Langmuir oscillations, as predicted by the theory of R.H. Ritchie.
Combining geodiversity with climate and topography to account for threatened species richness.
Tukiainen, Helena; Bailey, Joseph J; Field, Richard; Kangas, Katja; Hjort, Jan
2017-04-01
Understanding threatened species diversity is important for long-term conservation planning. Geodiversity (the diversity of Earth surface materials, forms, and processes) may be a useful biodiversity surrogate for conservation and may have conservation value itself. Geodiversity and species richness relationships have been demonstrated; establishing whether geodiversity relates to threatened species' diversity and distribution patterns is a logical next step for conservation. We used 4 geodiversity variables (rock-type and soil-type richness, geomorphological diversity, and hydrological feature diversity) and 4 climatic and topographic variables to model threatened species diversity across 31 of Finland's national parks. We also analyzed rarity-weighted richness (a measure of site complementarity) of threatened vascular plants, fungi, bryophytes, and all species combined. Our 1-km² resolution data set included 271 threatened species from 16 major taxa. We modeled threatened species richness (raw and rarity weighted) with boosted regression trees. Climatic variables, especially the annual temperature sum above 5 °C, dominated our models, which is consistent with the critical role of temperature in this boreal environment. Geodiversity added significant explanatory power. High geodiversity values were consistently associated with high threatened species richness across taxa. The combined effect of geodiversity variables was even more pronounced in the rarity-weighted richness analyses (except for fungi) than in those for species richness. Geodiversity measures correlated most strongly with species richness (raw and rarity weighted) of threatened vascular plants and bryophytes and most weakly with that of molluscs, lichens, and mammals. Although simple measures of topography improve biodiversity modeling, our results suggest that geodiversity data relating to geology, landforms, and hydrology are also worth including. This reinforces recent arguments that conserving nature's stage is an important principle in conservation. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
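Rarity-weighted richness, the complementarity measure analyzed here, scores each cell by summing species presences weighted by the inverse of each species' range size. A small illustration with a hypothetical presence-absence matrix:

```python
import numpy as np

# Rows = grid cells, columns = threatened species (presence/absence).
# The matrix is hypothetical; the scoring follows the usual definition of
# rarity-weighted richness as a measure of site complementarity.
occ = np.array([[1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 1]])
range_size = occ.sum(axis=0)          # cells occupied per species
rwr = (occ / range_size).sum(axis=1)  # each presence weighted by 1/range
print("raw richness:", occ.sum(axis=1))
print("rarity-weighted richness:", rwr)
```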
Garcia-Hernandez, Alberto
2015-11-01
The comparative evaluation of benefits and risks is one of the most important tasks during the development, market authorization, and post-approval pharmacovigilance of medicinal products. Multi-criteria decision analysis (MCDA) has been recommended to support decision making in the benefit-risk assessment (BRA) of medicines. This paper identifies challenges associated with bias or variability that practitioners may encounter in this field and presents solutions to overcome them. The inclusion of overlapping or preference-complementary criteria, a frequent violation of the model's assumptions, should be avoided. For each criterion, a value function translates the original outcomes into preference-related scores. Applying non-linear value functions to criteria defined as the risk of suffering a certain event during the study introduces specific risk behaviours into this prescriptive, rather than descriptive, model and is therefore a questionable practice. MCDA uses weights to compare the importance of the model criteria with each other; during their elicitation, a frequent situation in which (generally favourable) mild effects are traded off directly against low probabilities of suffering (generally unfavourable) severe effects is known to lead to biased and variable weights and ought to be prevented. The way the outcomes are framed during the elicitation process, positively versus negatively for instance, may also lead to differences in the preference weights, warranting an appropriate justification during each implementation. Finally, extending the weighted-sum MCDA model into a fully inferential tool through a probabilistic sensitivity analysis is desirable. However, this task is troublesome and should not ignore that clinical trial endpoints are generally positively correlated.
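The weighted-sum MCDA model under discussion aggregates criterion scores with elicited importance weights. A minimal sketch with linear value functions and entirely hypothetical criteria, weights, and outcomes:

```python
import numpy as np

# One benefit criterion and two risk criteria for two hypothetical
# treatments; ranges, weights, and outcomes are illustrative only.
lo = np.array([0.0, 0.0, 0.0])   # worst plausible outcome per criterion
hi = np.array([1.0, 0.3, 0.2])   # best plausible outcome per criterion
w = np.array([0.6, 0.25, 0.15])  # elicited importance weights, sum to 1

def overall_value(outcomes, favourable=np.array([True, False, False])):
    v = (outcomes - lo) / (hi - lo)       # linear partial value functions
    v = np.where(favourable, v, 1.0 - v)  # for risks, lower is better
    return float(w @ v)                   # weighted-sum aggregation

print(overall_value(np.array([0.7, 0.10, 0.05])))
print(overall_value(np.array([0.5, 0.02, 0.01])))
```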
Modelling rainfall amounts using mixed-gamma model for Kuantan district
NASA Astrophysics Data System (ADS)
Zakaria, Roslinazairimah; Moslim, Nor Hafizah
2017-05-01
An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The formulae of mean and variance derived for the sum of two and three independent mixed-gamma variables are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the descriptive statistics of the observed sums of rainfall amounts are not significantly different at the 5% significance level from those of the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
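The moment formulas for a single mixed-gamma variable follow from the law of total variance, and for independent variables the means and variances of a sum simply add. A sketch that checks this by Monte Carlo, with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, theta = 0.6, 2.0, 5.0  # rain probability, gamma shape and scale

# Closed-form mean and variance of one mixed-gamma variable
# (zero with probability 1-p, Gamma(k, theta) otherwise):
mean1 = p * k * theta
var1 = p * k * theta**2 + p * (1 - p) * (k * theta)**2

def mixed_gamma(n):
    wet = rng.random(n) < p
    return np.where(wet, rng.gamma(k, theta, n), 0.0)

# For independent variables, means and variances add; check the
# formulas for the sum of three variables by simulation.
s = mixed_gamma(10**6) + mixed_gamma(10**6) + mixed_gamma(10**6)
print("theory:   ", 3 * mean1, 3 * var1)
print("simulated:", s.mean(), s.var())
```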
Analog Delta-Back-Propagation Neural-Network Circuitry
NASA Technical Reports Server (NTRS)
Eberhart, Silvio
1990-01-01
Changes in synapse weights due to circuit drifts suppressed. Proposed fully parallel analog version of electronic neural-network processor based on delta-back-propagation algorithm. Processor able to "learn" when provided with suitable combinations of inputs and enforced outputs. Includes programmable resistive memory elements (corresponding to synapses), conductances (synapse weights) adjusted during learning. Buffer amplifiers, summing circuits, and sample-and-hold circuits arranged in layers of electronic neurons in accordance with delta-back-propagation algorithm.
21 CFR 556.720 - Tetracycline.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS... body weight per day. (b) Tolerances. Tolerances are established for the sum of tetracycline residues in... liver, and 12 ppm in fat and kidney. [63 FR 57246, Oct. 27, 1998] ...
Compton Scattering and Photo-absorption Sum Rules on Nuclei
NASA Astrophysics Data System (ADS)
Gorshteyn, Mikhail; Hobbs, Timothy; Londergan, J. Timothy; Szczepaniak, Adam P.
2012-03-01
We revisit the photo-absorption sum rule for real Compton scattering from the proton and from nuclear targets. In analogy with the Thomas-Reiche-Kuhn sum rule appropriate at low energies, we propose a new "constituent quark model" sum rule that relates the integrated strength of hadronic resonances to the scattering amplitude on constituent quarks. We study the constituent quark model sum rule for several nuclear targets. In addition we extract the J=0 pole contribution for both proton and nuclei. Using the modern high energy proton data we find that the J=0 pole contribution differs significantly from the Thomson term, in contrast with the original findings by Damashek and Gilman. We discuss phenomenological implications of this new result.
NASA Astrophysics Data System (ADS)
Peng, Yahui; Jiang, Yulei; Antic, Tatjana; Giger, Maryellen L.; Eggener, Scott; Oto, Aytekin
2013-02-01
The purpose of this study was to investigate T2-weighted magnetic resonance (MR) image texture features and diffusion-weighted (DW) MR image features in distinguishing prostate cancer (PCa) from normal tissue. We collected two image datasets: 23 PCa patients (25 PCa and 23 normal tissue regions of interest [ROIs]) imaged with Philips MR scanners, and 30 PCa patients (41 PCa and 26 normal tissue ROIs) imaged with GE MR scanners. A radiologist drew ROIs manually via a consensus histology-MR correlation conference with a pathologist. A number of T2-weighted texture features and apparent diffusion coefficient (ADC) features were investigated, and linear discriminant analysis (LDA) was used to combine selected strong image features. Area under the receiver operating characteristic (ROC) curve (AUC) was used to characterize feature effectiveness in distinguishing PCa from normal tissue ROIs. Of the features studied, ADC 10th percentile, ADC average, and T2-weighted sum average yielded AUC values (± standard error) of 0.95 ± 0.03, 0.94 ± 0.03, and 0.85 ± 0.05 on the Philips images, and 0.91 ± 0.04, 0.89 ± 0.04, and 0.70 ± 0.06 on the GE images, respectively. The three-feature combination yielded AUC values of 0.94 ± 0.03 and 0.89 ± 0.04 on the Philips and GE images, respectively. ADC 10th percentile, ADC average, and T2-weighted sum average are effective in distinguishing PCa from normal tissue, and appear robust across images acquired from Philips and GE MR scanners.
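As a rough illustration of the feature-combination step, the following sketch fits an LDA on synthetic stand-ins for the three features and reports a resubstitution AUC (a real study would cross-validate); all numbers are made up:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Synthetic stand-ins for ADC 10th percentile, ADC average, and T2-weighted
# sum average; cancer ROIs are given lower ADC values on average.
n_pca, n_norm = 41, 26
X = np.vstack([
    rng.normal([0.9, 1.1, 5.0], 0.25, (n_pca, 3)),   # PCa ROIs
    rng.normal([1.4, 1.6, 5.8], 0.25, (n_norm, 3)),  # normal ROIs
])
y = np.r_[np.ones(n_pca), np.zeros(n_norm)]

# LDA combines the features into one score; AUC summarizes how well
# that score separates the two ROI classes.
lda = LinearDiscriminantAnalysis().fit(X, y)
print("AUC:", roc_auc_score(y, lda.decision_function(X)))
```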
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horton, Megan K., E-mail: megan.horton@mssm.edu; Blount, Benjamin C.; Valentin-Blasini, Liza
Background: Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid-disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have been shown individually to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies have examined the association between these individual exposures and thyroid function; few studies have examined the effect of this chemical mixture on thyroid function during pregnancy. Objectives: We examined the cross-sectional association between urinary perchlorate, thiocyanate, and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. Methods: We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples, and perchlorate, thiocyanate, nitrate, and iodide in urine samples collected from 284 pregnant women at 12 (±2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS, adjusting for gestational age, urinary iodide, and creatinine. Results: Individual analyte concentrations in urine were significantly correlated (Spearman's r 0.4-0.5, p<0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. The WQS revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Conclusions: Co-exposure to perchlorate, nitrate, and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. Highlights: • Perchlorate, nitrate, thiocyanate, and iodide were measured in maternal urine. • Thyroid function (TSH and Free T4) was measured in maternal blood. • Weighted quantile sum (WQS) regression examined the complex mixture effect. • WQS identified an inverse association between the exposure mixture and maternal TSH. • Perchlorate was indicated as the 'bad actor' of the mixture.
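The core of WQS regression is an index that is a weighted sum of exposure quantiles, with weights constrained to be non-negative and to sum to 1. A minimal sketch on simulated data (not the study's data), using a simple simplex reparametrization:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

rng = np.random.default_rng(2)
n = 284
X = rng.lognormal(size=(n, 3))  # stand-ins for perchlorate, nitrate, SCN
# Convert each exposure to quartile scores 0-3.
q = np.column_stack([np.ceil(4 * rankdata(x) / n) - 1 for x in X.T])
y = 0.4 * q[:, 0] + 0.1 * q[:, 1] + rng.normal(0, 1, n)  # toy TSH-like outcome

# WQS: fit b0 + b1 * (q @ w) with w >= 0 and sum(w) = 1, so the index is a
# constrained weighted sum of exposure quantiles.
def sse(par):
    b0, b1, *w = par
    w = np.abs(w) / np.sum(np.abs(w))  # enforce the simplex constraint
    return np.sum((y - b0 - b1 * (q @ w)) ** 2)

fit = minimize(sse, x0=[0, 1, 1, 1, 1], method="Nelder-Mead")
w = np.abs(fit.x[2:]) / np.sum(np.abs(fit.x[2:]))
print("estimated weights:", w)  # the largest weight flags the 'bad actor'
```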
Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis.
Nieves, Jeri W; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J Americo M; Sorenson, Eric J; D'Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi
2016-12-01
There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). The objective was to evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. Nutrient intake was measured using a modified Block Food Frequency Questionnaire (FFQ). ALS function was measured using the ALS Functional Rating Scale-Revised (ALSFRS-R), and respiratory function was measured using percentage of predicted forced vital capacity (FVC). Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5-68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of "good" micronutrients and "good" food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) were found for selected vitamins in exploratory analyses. Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. Those responsible for nutritional care of the patient with ALS should consider promoting fruit and vegetable intake, since these foods are high in antioxidants and carotenes.
Optimization of joint energy micro-grid with cold storage
NASA Astrophysics Data System (ADS)
Xu, Bin; Luo, Simin; Tian, Yan; Chen, Xianda; Xiong, Botao; Zhou, Bowen
2018-02-01
To reduce curtailment of distributed photovoltaic (PV) generation, to make full use of the joint energy micro-grid with cold storage, and to reduce high operating costs, the economic dispatch of the joint energy micro-grid load is particularly important. Considering the different electricity prices during peak and valley periods, an optimization model is established that takes the minimum production costs and PV curtailment fluctuations as the objectives. A linear weighted-sum method and a genetic-taboo particle swarm optimization (PSO) algorithm are used to solve the optimization model and obtain the optimal power supply output. Taking the garlic market in Henan as an example, the simulation results show that, considering distributed PV and time-varying prices, the optimization strategies are able to reduce operating costs and accommodate PV power efficiently.
An instance theory of associative learning.
Jamieson, Randall K; Crump, Matthew J C; Hannah, Samuel D
2012-03-01
We present and test an instance model of associative learning. The model, Minerva-AL, treats associative learning as cued recall. Memory preserves the events of individual trials in separate traces. A probe presented to memory contacts all traces in parallel and retrieves a weighted sum of the traces, a structure called the echo. Learning of a cue-outcome relationship is measured by the cue's ability to retrieve a target outcome. The theory predicts a number of associative learning phenomena, including acquisition, extinction, reacquisition, conditioned inhibition, external inhibition, latent inhibition, discrimination, generalization, blocking, overshadowing, overexpectation, superconditioning, recovery from blocking, recovery from overshadowing, recovery from overexpectation, backward blocking, backward conditioned inhibition, and second-order retrospective revaluation. We argue that associative learning is consistent with an instance-based approach to learning and memory.
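The retrieval rule can be sketched in a few lines: activation is probe-trace similarity raised to the third power (as in Minerva-style models), and the echo is the activation-weighted sum of traces. The similarity normalization below is a simplification of the published rule:

```python
import numpy as np

def echo(probe, memory):
    """Instance retrieval in the spirit of Minerva-AL: activation is the
    probe-trace similarity cubed, and the echo is the activation-weighted
    sum of all stored traces (simplified similarity normalization)."""
    sim = (memory @ probe) / probe.size
    return (sim ** 3) @ memory

# Traces with features [cue A, cue B, outcome X] stored over trials.
memory = np.array([[1.0, 0.0, 1.0],
                   [1.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0]])
probe = np.array([1.0, 0.0, 0.0])  # present cue A alone
print(echo(probe, memory))         # the outcome element is retrieved
```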
Measuring efficiency of university-industry Ph.D. projects using best worst method.
Salimi, Negin; Rezaei, Jafar
A collaborative Ph.D. project, carried out by a doctoral candidate, is a type of collaboration between university and industry. Due to the importance of such projects, researchers have considered different ways to evaluate their success, with a focus on the outputs of these projects. However, what has been neglected is the other side of the coin: the inputs. The main aim of this study is to incorporate both the inputs and outputs of these projects into a more meaningful measure called efficiency. The ratio of the weighted sum of outputs to the weighted sum of inputs identifies the efficiency of a Ph.D. project. The weights of the inputs and outputs can be identified using a multi-criteria decision-making (MCDM) method. Data on inputs and outputs are collected from 51 Ph.D. candidates who graduated from Eindhoven University of Technology. The weights are identified using a new MCDM method called the Best Worst Method (BWM). Because there may be differences in the opinions of Ph.D. candidates and supervisors on weighting the inputs and outputs, data for BWM are collected from both groups. It is interesting to see that there are differences in the level of efficiency from the two perspectives, because of the weight differences. Moreover, a comparison between the efficiency scores of these projects and their success scores reveals differences that may have significant implications. A sensitivity analysis divulges the most contributing inputs and outputs.
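The efficiency measure itself is just a ratio of two weighted sums. A sketch with hypothetical BWM-derived weights and normalized input/output levels:

```python
import numpy as np

# Hypothetical BWM-style weights for three inputs and three outputs of a
# Ph.D. project (e.g., candidate time, supervisor time, funding; papers,
# patents, dissertation quality). All values are illustrative.
w_in = np.array([0.5, 0.3, 0.2])
w_out = np.array([0.6, 0.1, 0.3])

def efficiency(inputs, outputs):
    """Efficiency = weighted sum of outputs / weighted sum of inputs."""
    return (w_out @ outputs) / (w_in @ inputs)

print(efficiency(np.array([0.7, 0.4, 0.6]), np.array([0.8, 0.2, 0.9])))
```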
Gao, Xiao; Wang, Quanchuan; Jackson, Todd; Zhao, Guang; Liang, Yi; Chen, Hong
2011-04-01
Despite evidence indicating that fatness and thinness information are processed differently by weight-preoccupied and eating-disordered individuals, the exact nature of these attentional biases is not clear. In this research, eye movement (EM) tracking assessed biases in specific component processes of visual attention (i.e., orientation, detection, maintenance, and disengagement of gaze) in relation to body-related stimuli among 20 weight-dissatisfied (WD) and 20 weight-satisfied young women. Eye movements were recorded while participants completed a dot-probe task that featured fatness-neutral and thinness-neutral word pairs. Compared to controls, WD women were more likely to direct their initial gaze toward fatness words, had a shorter mean latency of first fixation on both fatness and thinness words, had longer first fixations on fatness words but shorter first fixations on thinness words, and had shorter total gaze duration on thinness words. Reaction time data showed a maintenance bias towards fatness words among the WD women. Taken together, the results indicate that WD women show initial orienting, speeded detection, and initial maintenance biases towards fat body words, along with a pattern of speeded detection followed by avoidance for thin body words. The results also highlight the utility of EM tracking as a means of identifying subtle attentional biases among weight-dissatisfied women drawn from a non-clinical setting, and the need to assess attentional biases as a dynamic process. Copyright © 2011 Elsevier Ltd. All rights reserved.
Critical weight statistics of the random energy model and of the directed polymer on the Cayley tree
NASA Astrophysics Data System (ADS)
Monthus, Cécile; Garel, Thomas
2007-05-01
We consider the critical point of two mean-field disordered models: (i) the random energy model (REM), introduced by Derrida as a mean-field spin-glass model of N spins, and (ii) the directed polymer of length N on a Cayley tree (DPCT) with random bond energies. Both models are known to exhibit a freezing transition between a high-temperature phase where the entropy is extensive and a low-temperature phase of finite entropy, where the weight statistics coincides with that of Lévy sums with index μ = T/Tc < 1. In this paper, we study the weight statistics at criticality via the entropy S = -Σ_i w_i ln w_i and the generalized moments Y_k = Σ_i w_i^k, where the w_i are the Boltzmann weights of the 2^N configurations. In the REM, we find that the critical weight statistics is governed by the finite-size exponent ν = 2: the disorder-averaged entropy scales as S_N(Tc) ~ N^(1/2), the typical values of Y_k decay as N^(-k/2), and the disorder-averaged values of Y_k are governed by rare events and decay as N^(-1/2) for any k > 1. For the DPCT, we find that the entropy scales similarly as S_N(Tc) ~ N^(1/2), whereas another exponent ν' = 1 governs the Y_k statistics: the typical values of Y_k decay as N^(-k), and the disorder-averaged values of Y_k decay as N^(-1) for any k > 1. As a consequence, the asymptotic probability distribution π_∞(q) of the overlap q, in addition to the delta function δ(q), which bears the whole normalization, contains an isolated point at q = 1, as a memory of the delta peak (1 - T/Tc) δ(q - 1) of the low-temperature phase T < Tc.
The Effects of Walking Speed on Tibiofemoral Loading Estimated Via Musculoskeletal Modeling
Lerner, Zachary F.; Haight, Derek J.; DeMers, Matthew S.; Board, Wayne J.; Browning, Raymond C.
2015-01-01
Net muscle moments (NMMs) have been used as proxy measures of joint loading, but musculoskeletal models can estimate contact forces within joints. The purpose of this study was to use a musculoskeletal model to estimate tibiofemoral forces and to examine the relationship between NMMs and tibiofemoral forces across walking speeds. We collected kinematic, kinetic, and electromyographic data as ten adult participants walked on a dual-belt force-measuring treadmill at 0.75, 1.25, and 1.50 m/s. We scaled a musculoskeletal model to each participant and used OpenSim to calculate the NMMs and muscle forces through inverse dynamics and weighted static optimization, respectively. We determined tibiofemoral forces from the vector sum of intersegmental and muscle forces crossing the knee. Estimated tibiofemoral forces increased with walking speed. Peak early-stance compressive tibiofemoral forces increased 52% as walking speed increased from 0.75 to 1.50 m/s, whereas peak knee extension NMMs increased by 168%. During late stance, peak compressive tibiofemoral forces increased by 18% as speed increased. Although compressive loads at the knee did not increase in direct proportion to NMMs, faster walking resulted in greater compressive forces during weight acceptance and increased compressive and anterior/posterior tibiofemoral loading rates in addition to a greater abduction NMM. PMID:23878264
Li, Gai-Ling; Chen, Hui-Jian; Zhang, Wan-Xia; Tong, Qiang; Yan, You-E
2017-08-10
The effect of maternal omega-3 fatty acids intake on the body composition of the offspring is unclear. The aim of this study was to conduct a systematic review and meta-analysis to confirm the effects of omega-3 fatty acids supplementation during pregnancy and/or lactation on body weight, body length, body mass index (BMI), waist circumference, fat mass and sum of skinfold thicknesses of offspring. Human intervention studies were selected by a systematic search of PubMed, Web of Science, the Cochrane Library and references of related reviews and studies. Randomized controlled trials of maternal omega-3 fatty acids intake during pregnancy or lactation for offspring's growth were included. The data were analyzed with RevMan 5.3 and Stata 12.0. Effect sizes were presented as weighted mean differences (WMD) or standardized mean difference (SMD) with 95% confidence intervals (95% CI). Twenty-six studies comprising 10,970 participants were included. Significant increases were found in birth weight (WMD = 42.55 g, 95% CI: 21.25, 63.85) and waist circumference (WMD = 0.35 cm, 95% CI: 0.04, 0.67) in the omega-3 fatty acids group. There were no effects on birth length (WMD = 0.09 cm, 95% CI: -0.03, 0.21), postnatal length (WMD = 0.13 cm, 95% CI: -0.11, 0.36), postnatal weight (WMD = 0.04 kg, 95% CI: -0.07, 0.14), BMI (WMD = 0.09, 95% CI: -0.05, 0.23), the sum of skinfold thicknesses (WMD = 0.45 mm, 95% CI: -0.30, 1.20), fat mass (WMD = 0.05 kg, 95% CI: -0.01, 0.11) and the percentage of body fat (WMD = 0.04%, 95% CI: -0.38, 0.46). This meta-analysis showed that maternal omega-3 fatty acids supplementation can increase offspring's birth weight and postnatal waist circumference. However, it did not appear to influence children's birth length, postnatal weight/length, BMI, sum of skinfold thicknesses, fat mass and the percentage of body fat during postnatal period. Larger, well-designed studies are recommended to confirm this conclusion. Copyright © 2017 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
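The WMD pooling behind such a meta-analysis is standard fixed-effect inverse-variance weighting. A sketch with invented study results, not the paper's data:

```python
import numpy as np

# Fixed-effect inverse-variance pooling of mean differences, the usual
# machinery behind a WMD with 95% CI; the three study results are made up.
d = np.array([50.0, 30.0, 45.0])   # mean birth-weight differences (g)
se = np.array([15.0, 20.0, 12.0])  # standard errors

w = 1.0 / se**2                    # inverse-variance weights
wmd = np.sum(w * d) / np.sum(w)    # weighted mean difference
se_pooled = np.sqrt(1.0 / np.sum(w))
lo, hi = wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled
print(f"WMD = {wmd:.1f} g, 95% CI = ({lo:.1f}, {hi:.1f})")
```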
ISOFIT - A PROGRAM FOR FITTING SORPTION ISOTHERMS TO EXPERIMENTAL DATA
Isotherm expressions are important for describing the partitioning of contaminants in environmental systems. ISOFIT (ISOtherm FItting Tool) is a software program that fits isotherm parameters to experimental data via the minimization of a weighted sum of squared error (WSSE) objective function.
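A WSSE objective of this general shape can be sketched as follows; the Langmuir isotherm, data, and weighting scheme are all hypothetical stand-ins, not ISOFIT's actual defaults:

```python
import numpy as np
from scipy.optimize import minimize

# Fit a Langmuir isotherm q = Qmax*K*C / (1 + K*C) by minimizing a weighted
# sum of squared errors; the data and weights below are illustrative.
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0])     # aqueous concentrations
q_obs = np.array([1.8, 3.0, 4.4, 6.0, 6.9])  # sorbed concentrations
wts = 1.0 / q_obs**2                         # one possible weighting scheme

def wsse(p):
    qmax, k = p
    q_fit = qmax * k * C / (1.0 + k * C)
    return np.sum(wts * (q_obs - q_fit) ** 2)

fit = minimize(wsse, x0=[8.0, 0.5], method="Nelder-Mead")
print("Qmax, K =", fit.x)
```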
Character expansion methods for matrix models of dually weighted graphs
NASA Astrophysics Data System (ADS)
Kazakov, Vladimir A.; Staudacher, Matthias; Wynter, Thomas
1996-04-01
We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating the equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problems of phase transitions from random to flat lattices.
System Finds Horizontal Location of Center of Gravity
NASA Technical Reports Server (NTRS)
Johnston, Albert S.; Howard, Richard T.; Brewster, Linda L.
2006-01-01
An instrumentation system rapidly and repeatedly determines the horizontal location of the center of gravity of a laboratory vehicle that slides horizontally on three air bearings (see Figure 1). Typically, knowledge of the horizontal center-of-mass location of such a vehicle is needed in order to balance the vehicle properly for an experiment and/or to assess the dynamic behavior of the vehicle. The system includes a load cell above each air bearing, electronic circuits that generate digital readings of the weight on each load cell, and a computer equipped with software that processes the readings. The total weight and, hence, the mass of the vehicle are computed from the sum of the load-cell weight readings. Then the horizontal position of the center of gravity is calculated straightforwardly as the weighted sum of the known position vectors of the air bearings, the contribution of each bearing being proportional to the weight on that bearing. In the initial application for which this system was devised, the center-of-mass calculation is particularly simple because the air bearings are located at the corners of an equilateral triangle. However, the system is not restricted to this simple geometry. The system acquires and processes weight readings at a rate of 800 Hz for each load cell. The total weight and the horizontal location of the center of gravity are updated at a rate of 800/3 ≈ 267 Hz. In a typical application, a technician would use the center-of-mass output of this instrumentation system as a guide to the manual placement of small weights on the vehicle to shift the center of gravity to a desired horizontal position. Usually, the desired horizontal position is that of the geometric center. Alternatively, this instrumentation system could be used to provide position feedback for a control system that would cause weights to be shifted automatically (see Figure 2) in an effort to keep the center of gravity at the geometric center.
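The weighted-sum calculation is compact enough to show directly. A sketch with illustrative bearing positions and load readings:

```python
import numpy as np

# Air-bearing positions at the corners of an equilateral triangle (m) and
# simultaneous load-cell weight readings (N); the numbers are illustrative.
bearings = np.array([[0.0, 1.0],
                     [-0.866, -0.5],
                     [0.866, -0.5]])
weights = np.array([310.0, 295.0, 305.0])

# Horizontal CG = weighted sum of bearing position vectors, each bearing
# contributing in proportion to the weight it carries.
cg = (weights @ bearings) / weights.sum()
print("total weight:", weights.sum(), "N; CG:", cg)
```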
Calculating the nutrient composition of recipes with computers.
Powers, P M; Hoover, L W
1989-02-01
The objective of this research project was to compare the nutrient values computed by four commonly used computerized recipe calculation methods. The four methods compared were the yield factor, retention factor, summing, and simplified retention factor methods. Two versions of the summing method were modeled. Four pork entrée recipes were selected for analysis: roast pork, pork and noodle casserole, pan-broiled pork chops, and pork chops with vegetables. Assumptions were made about changes expected to occur in the ingredients during preparation and cooking. Models were designed to simulate the algorithms of the calculation methods using a microcomputer spreadsheet software package. Identical results were generated in the yield factor, retention factor, and summing-cooked models for roast pork. The retention factor and summing-cooked models also produced identical results for the recipe for pan-broiled pork chops. The summing-raw model gave the highest value for water in all four recipes and the lowest values for most of the other nutrients. A superior method or methods was not identified. However, on the basis of the capabilities provided with the yield factor and retention factor methods, more serious consideration of these two methods is recommended.
Schrempft, Stephanie; van Jaarsveld, Cornelia H. M.; Fisher, Abigail; Wardle, Jane
2015-01-01
Objectives The home environment is thought to play a key role in early weight trajectories, although direct evidence is limited. There is general agreement that multiple factors exert small individual effects on weight-related outcomes, so use of composite measures could demonstrate stronger effects. This study therefore examined whether composite measures reflecting the ‘obesogenic’ home environment are associated with diet, physical activity, TV viewing, and BMI in preschool children. Methods Families from the Gemini cohort (n = 1096) completed a telephone interview (Home Environment Interview; HEI) when their children were 4 years old. Diet, physical activity, and TV viewing were reported at interview. Child height and weight measurements were taken by the parents (using standard scales and height charts) and reported at interview. Responses to the HEI were standardized and summed to create four composite scores representing the food (sum of 21 variables), activity (sum of 6 variables), media (sum of 5 variables), and overall (food composite/21 + activity composite/6 + media composite/5) home environments. These were categorized into ‘obesogenic risk’ tertiles. Results Children in ‘higher-risk’ food environments consumed less fruit (OR; 95% CI = 0.39; 0.27–0.57) and vegetables (0.47; 0.34–0.64), and more energy-dense snacks (3.48; 2.16–5.62) and sweetened drinks (3.49; 2.10–5.81) than children in ‘lower-risk’ food environments. Children in ‘higher-risk’ activity environments were less physically active (0.43; 0.32–0.59) than children in ‘lower-risk’ activity environments. Children in ‘higher-risk’ media environments watched more TV (3.51; 2.48–4.96) than children in ‘lower-risk’ media environments. Neither the individual nor the overall composite measures were associated with BMI. Conclusions Composite measures of the obesogenic home environment were associated as expected with diet, physical activity, and TV viewing. Associations with BMI were not apparent at this age. PMID:26248313
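The composite construction (standardize, sum within domain, then weight each domain by its item count) can be sketched as follows, with random numbers in place of the HEI items:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1096
food = rng.normal(size=(n, 21))     # 21 standardized food-environment items
activity = rng.normal(size=(n, 6))  # 6 activity items
media = rng.normal(size=(n, 5))     # 5 media items

# Domain composites are sums of standardized items; the overall composite
# weights each domain by its item count, matching food/21 + activity/6 +
# media/5 as described in the abstract.
overall = (food.sum(axis=1) / 21 + activity.sum(axis=1) / 6
           + media.sum(axis=1) / 5)
tertiles = np.digitize(overall, np.quantile(overall, [1/3, 2/3]))
print(np.bincount(tertiles))        # lower/middle/higher-risk group sizes
```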
Polder, A; Odland, J O; Tkachev, A; Føreid, S; Savinova, T N; Skaare, J U
2003-05-01
The concentrations of HCB, alpha-, beta- and gamma-HCH, 3 chlordanes (CHLs), p,p'-DDE, p,p'-DDD, p,p'-DDT, and 30 PCBs (polychlorinated biphenyls) were determined in 140 human milk samples from Kargopol (n=19), Severodvinsk (n=50), Arkhangelsk (n=51) and Naryan-Mar (n=20). Pooled samples were used for determination of three toxaphenes (chlorobornanes, CHBs). The concentrations of HCB, beta-HCH and p,p'-DDE in Russian human milk were 2, 10 and 3 times higher than corresponding levels in Norway, respectively, while concentrations of sum-PCBs and sum-TEQs (toxic equivalent quantities) of the mono-ortho substituted PCBs were in the same range as corresponding levels in Norway. The PCB-156 contributed most to the sum-TEQs. Highest mean concentrations of HCB (129 microg/kg milk fat) and sum-PCBs (458 microg/kg milk fat) were detected in Naryan-Mar, while highest mean concentrations of sum-HCHs (408 microg/kg milk fat), sum-CHLs (48 microg/kg milk fat), sum-DDTs (1392 microg/kg milk fat) and sum-toxaphenes (13 microg/kg milk fat) were detected in Arkhangelsk. An eastward geographic trend of increasing ratios of alpha/beta-HCH, gamma/beta-HCH, p,p'-DDT/p,p'-DDE and PCB-180/28 was observed. In all areas the levels of sum-HCHs decreased with parity (number of children born). Considerable variation in levels of the analysed organochlorines (OCs) was found in all the studied areas. Breast milk from mothers nursing their second or third child (multiparas) in Naryan-Mar showed a significant different PCB profile compared to mothers giving birth to their first child (primiparas) from the same area and to primi- and multiparas in the other areas. Both p,p'-DDE and p,p'-DDT showed a significant, but weak, negative correlation with the infants birth weight.
Weaving and neural complexity in symmetric quantum states
NASA Astrophysics Data System (ADS)
Susa, Cristian E.; Girolami, Davide
2018-04-01
We study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
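The proposed weighted statistic is easy to sketch: likelihood ratios maximized under each marker order are averaged with posterior-probability weights before taking the log. All numbers below are hypothetical:

```python
import numpy as np

# Posterior probabilities of three plausible marker orders and the maximized
# lod score at one test position under each order (all values invented).
posterior = np.array([0.6, 0.3, 0.1])
lod_given_order = np.array([3.2, 1.1, 0.4])

# Weighted-sum-of-maximum-likelihoods statistic: average the likelihood
# ratios 10**lod over orders with posterior weights, then take log10.
weighted_lod = np.log10(posterior @ 10.0 ** lod_given_order)
print(f"weighted lod = {weighted_lod:.2f}")
```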
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials C_k^μ(x) with the weight function (1 - x²)^(μ - 1/2) for any constant μ ≥ 0, of an L₁ function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
Systematics of strength function sum rules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Calvin W.
2015-08-28
Sum rules provide useful insights into transition strength functions and are often expressed as expectation values of an operator. In this letter I demonstrate that non-energy-weighted transition sum rules have strong secular dependences on the energy of the initial state. Such non-trivial systematics have consequences: the simplification suggested by the generalized Brink-Axel hypothesis, for example, does not hold for most cases, though it weakly holds in at least some cases for electric dipole transitions. Furthermore, I show the systematics can be understood through spectral distribution theory, calculated via traces of operators and of products of operators. Seen through this lens, violation of the generalized Brink-Axel hypothesis is unsurprising: one expects sum rules to evolve with excitation energy. Moreover, to lowest order the slope of the secular evolution can be traced to a component of the Hamiltonian being positive (repulsive) or negative (attractive).
A two-stage approach to removing noise from recorded music
NASA Astrophysics Data System (ADS)
Berger, Jonathan; Goldberg, Maxim J.; Coifman, Ronald C.
2004-05-01
A two-stage algorithm for removing noise from recorded music signals (first proposed in Berger et al., ICMC, 1995) is described and updated. The first stage selects the ``best'' local trigonometric basis for the signal and models noise as the part having high entropy [see Berger et al., J. Audio Eng. Soc. 42(10), 808-818 (1994)]. In the second stage, the original source and the model of the noise obtained from the first stage are expanded into dyadic trees of smooth local sine bases. The best basis for the source signal is extracted using a relative entropy function (the Kullback-Leibler distance) to compare the sum of the costs of the children nodes to the cost of their parent node; energies of the noise in corresponding nodes of the model noise tree are used as weights. The talk will include audio examples of various stages of the method and proposals for further research.
Mobile Visual Search Based on Histogram Matching and Zone Weight Learning
NASA Astrophysics Data System (ADS)
Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong
2018-01-01
In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging the tf-idf weighted histogram matching and the weighting strategy in compact descriptors for visual search (CDVS). Finally, the global descriptor matching score and the local descriptor similarity score are summed to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
Davis, M E; Rutledge, J J; Cundiff, L V; Hauser, E R
1983-10-01
Several measures of life cycle cow efficiency were calculated using weights and individual feed consumptions recorded on 160 dams of beef, dairy and beef X dairy breeding and their progeny. Ratios of output to input were used to estimate efficiency, where outputs included weaning weights of progeny plus salvage value of the dam and inputs included creep feed consumed by progeny plus feed consumed by the dam over her entire lifetime. In one approach to estimating efficiency, inputs and outputs were weighted by probabilities that were a function of the cow herd age distribution and percentage calf crop in a theoretical herd. The second approach to estimating cow efficiency involved dividing the sum of the weights by the sum of the feed consumption values, with all pieces of information being given equal weighting. Relationships among efficiency estimates and various traits of dams and progeny were examined. Weights, heights, and weight:height ratios of dams at 240 d of age were not correlated significantly with subsequent efficiency of calf production, indicating that indirect selection for lifetime cow efficiency at an early age based on these traits would be ineffective. However, females exhibiting more efficient weight gains from 240 d to first calving tended to become more efficient dams. Correlations of efficiency with weight of dam at calving and at weaning were negative and generally highly significant. Height at withers was negatively related to efficiency. Ratio of weight to height indicated that fatter dams generally were less efficient. The effect of milk production on efficiency depended upon the breed combinations involved. Dams calving for the first time at an early age and continuing to calve at short intervals were superior in efficiency. Weaning rate was closely related to life cycle efficiency. Large negative correlations between efficiency and feed consumption of dams were observed, while correlations of efficiency with progeny weights and feed consumptions in individual parities tended to be positive though nonsignificant. However, correlations of efficiency with accumulative progeny weights and feed consumptions generally were significant.
Smith, Philip L; Sewell, David K; Lilburn, Simon D
2015-11-01
Normalization models of visual sensitivity assume that the response of a visual mechanism is scaled divisively by the sum of the activity in the excitatory and inhibitory mechanisms in its neighborhood. Normalization models of attention assume that the weighting of excitatory and inhibitory mechanisms is modulated by attention. Such models have provided explanations of the effects of attention in both behavioral and single-cell recording studies. We show how normalization models can be obtained as the asymptotic solutions of shunting differential equations, in which stimulus inputs and the activity in the mechanism control growth rates multiplicatively rather than additively. The value of the shunting equation approach is that it characterizes the entire time course of the response, not just its asymptotic strength. We describe two models of attention based on shunting dynamics, the integrated system model of Smith and Ratcliff (2009) and the competitive interaction theory of Smith and Sewell (2013). These models assume that attention, stimulus salience, and the observer's strategy for the task jointly determine the selection of stimuli into visual short-term memory (VSTM) and the way in which stimulus representations are weighted. The quality of the VSTM representation determines the speed and accuracy of the decision. The models provide a unified account of a variety of attentional phenomena found in psychophysical tasks using single-element and multi-element displays. Our results show the generality and utility of the normalization approach to modeling attention. Copyright © 2014 Elsevier B.V. All rights reserved.
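The link between shunting dynamics and normalization can be verified numerically: the asymptote of the shunting equation is exactly the normalization form. A sketch with illustrative parameters:

```python
# Euler integration of a shunting equation dy/dt = -A*y + (B - y)*E - y*I,
# whose steady state is the normalization form B*E / (A + E + I).
# Parameter values are illustrative, not taken from the cited models.
A, B, E, inh = 1.0, 1.0, 4.0, 2.0
dt, y = 0.001, 0.0
for _ in range(10000):  # 10 s of simulated time, far past the transient
    y += dt * (-A * y + (B - y) * E - y * inh)

print("asymptote (simulated):          ", y)
print("normalization form B*E/(A+E+I):", B * E / (A + E + inh))
```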
Recovery in Young Children with Weight Faltering: Child and Household Risk Factors
Black, Maureen M.; Tilton, Nicholas; Bento, Samantha; Cureton, Pamela; Feigelman, Susan
2015-01-01
Objective To examine whether weight recovery among children with weight faltering varied by enrollment age and child and household risk factors. Study design Observational, conducted in an interdisciplinary specialty practice with a skill-building mealtime behavior intervention, including coaching with video-recorded interactions. Eligibility included age 6–36 months with weight/age <5th percentile or crossing of two major percentiles. Children were categorized as <24 months vs ≥24 months. Child and household risk factors were summed into risk indices (top quartile, elevated risks, vs. reference). Outcome was weight/age z-score change over 6 months. Analyses were conducted with longitudinal linear mixed-effects models, including age by risk index interaction terms. Results Enrolled 286 children (mean age 18.8 months, SD 6.8). Significant weight/age recovery occurred regardless of risk index or age. Mean weight/age z-score change was significantly greater among younger, compared with older age (0.29 vs. 0.17, p=0.03); top household risk quartile, compared with reference (0.34 vs. 0.22, p=0.046); and marginally greater among top child risk quartile, compared with reference (0.37 vs. 0.25, p=0.058). Mean weight/age z-score change was not associated with single risk factors, or interactions; greatest weight gain occurred in most underweight children. Conclusions Weight recovery over 6 months was statistically significant, although modest, and greater among younger children and among children with multiple child and household risk factors. Findings support Differential Susceptibility Theory, whereby some children with multiple risk factors are differentially responsive to intervention. Future investigations should evaluate components of the mealtime behavior intervention. PMID:26687578
A longitudinal study of low back pain and daily vibration exposure in professional drivers.
Bovenzi, Massimo
2010-01-01
The aim of this study was to investigate the relation between low back pain (LBP) outcomes and measures of daily exposure to whole-body vibration (WBV) in professional drivers. In a study population of 202 male drivers, who were not affected with LBP at the initial survey, LBP in terms of duration, intensity, and disability was investigated over a two-year follow-up period. Vibration measurements were made on representative samples of machines and vehicles. The following measures of daily WBV exposure were obtained: (i) 8-h energy-equivalent frequency-weighted acceleration (highest axis), A(8)max in m·s⁻² r.m.s.; (ii) A(8)sum (root-sum-of-squares) in m·s⁻² r.m.s.; (iii) Vibration Dose Value (highest axis), VDVmax in m·s⁻¹·⁷⁵; (iv) VDVsum (root-sum-of-quads) in m·s⁻¹·⁷⁵. The cumulative incidence of LBP over the follow-up period was 38.6%. The incidence of high pain intensity and severe disability was 16.8 and 14.4%, respectively. After adjustment for several confounders, VDVmax or VDVsum gave better predictions of LBP outcomes over time than A(8)max or A(8)sum, respectively. Poor predictions were obtained with A(8)max, which is the currently preferred measure of daily WBV exposure in European countries. In multivariate data analysis, physical work load was a significant predictor of LBP outcomes over the follow-up period. Perceived psychosocial work environment was not associated with LBP.
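The exposure measures compared in the study can be sketched as follows; the acceleration record is synthetic, and the multi-axis combinations follow the root-sum-of-squares and root-sum-of-quads definitions given above:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0                              # sample rate (Hz)
T = 4 * 3600.0                          # assumed daily exposure (4 h, in s)
a = rng.normal(0.0, 0.5, int(600 * fs)) # a synthetic 10-min weighted record

rms = np.sqrt(np.mean(a**2))
A8 = rms * np.sqrt(T / (8 * 3600.0))    # A(8) for one axis
vdv_10min = (np.sum(a**4) / fs) ** 0.25 # VDV = (integral of a^4 dt)^(1/4)
VDV = vdv_10min * (T / 600.0) ** 0.25   # scale VDV to the full duration

# Multi-axis combinations (the same axis is repeated here just for shape):
# root-sum-of-squares for A(8), root-sum-of-quads for VDV.
A8_sum = np.sqrt(3 * A8**2)
VDV_sum = (3 * VDV**4) ** 0.25
print(A8, VDV, A8_sum, VDV_sum)
```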
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... measured continuously from the raw exhaust of an engine, its flow-weighted mean concentration is the sum of...
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
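The weighted-sum approach and its blind spot can be demonstrated by brute force on a small instance: each weight yields one supported efficient permutation, while unsupported efficient solutions never minimize any weighted sum. The two criteria below are illustrative stand-ins for objectives from this literature:

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
n = 6
D = rng.random((n, n))
D = (D + D.T) / 2.0                 # symmetric proximity matrix
pos = np.arange(n)
gap = np.abs(pos[:, None] - pos[None, :])

def f1(perm):  # criterion 1: distance-weighted cost after reordering
    return float(np.sum(gap * D[np.ix_(perm, perm)]))

def f2(perm):  # criterion 2: count of anti-Robinson violations
    P = D[np.ix_(perm, perm)]
    v = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                v += int(P[i, j] > P[i, k]) + int(P[j, k] > P[i, k])
    return v

# Weighted-sum scalarization: each weight lam yields one supported
# Pareto-efficient permutation; unsupported efficient points are missed.
perms = list(itertools.permutations(range(n)))
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    best = min(perms, key=lambda p: lam * f1(p) + (1 - lam) * f2(p))
    print(lam, best, round(f1(best), 3), f2(best))
```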
Exploring local regularities for 3D object recognition
NASA Astrophysics Data System (ADS)
Tian, Huaiwen; Qin, Shengfeng
2016-11-01
In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that ‘absolute error’ (SAR and SARD) are superior to ‘square error’ (SSR and SSRD) in calculating objective function for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overly used in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
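A minimal sketch of the four candidate objective functions, with invented numbers, illustrating why a single extreme residual dominates the 'square error' measures:

```python
# SSR vs. SAR (and their relative-deviation variants) on data with one
# extreme value; all numbers are illustrative.
import numpy as np

obs = np.array([1.0, 2.0, 3.0, 4.0, 50.0])   # one extreme observation
sim = np.array([1.1, 2.1, 2.8, 4.2, 40.0])   # the model misses the peak

res  = sim - obs
ssr  = np.sum(res ** 2)                      # sum of squared errors
sar  = np.sum(np.abs(res))                   # sum of absolute errors
ssrd = np.sum((res / obs) ** 2)              # sum of squared relative deviations
sard = np.sum(np.abs(res / obs))             # sum of absolute relative deviations

# The peak residual contributes ~99.9% of SSR but only ~94% of SAR, so an
# optimizer driven by SSR tunes parameters almost exclusively to the peak.
print(f"SSR={ssr:.2f}  SAR={sar:.2f}  SSRD={ssrd:.4f}  SARD={sard:.4f}")
```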
Fischer, H Felix; Rose, Matthias
2016-10-19
Recently, a growing number of Item Response Theory (IRT) models have been published which allow estimation of a common latent variable from data derived with different Patient Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum-score conversion tables, but fitting such models with contemporary IRT software requires substantial proficiency in psychometrics. We developed a web application ( http://www.common-metrics.org ), which allows easier estimation of latent variable scores using IRT models that calibrate different measures on instrument-independent scales. Currently, the application supports six IRT models for Depression, Anxiety, and Physical Function. Based on published item parameters, users of the application can directly obtain latent trait estimates using expected a posteriori (EAP) for sum scores as well as for specific response patterns, Bayes modal (MAP), weighted likelihood estimation (WLE), and maximum likelihood (ML) methods, under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of latent trait estimates across different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center for Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), the PROMIS Anxiety and Depression Short Forms, and others. Advantages of this approach include comparability of data derived with different measures and tolerance against missing values. The validity of the underlying models needs to be investigated in the future.
Multipolar and Composite Ordering in Two-Dimensional Semiclassical Geometrically Frustrated Magnets
NASA Astrophysics Data System (ADS)
Parker, Edward Temchin
Despite the success of QCD at high energies, where perturbative calculations can be carried out because of asymptotic freedom, many fundamental questions regarding the confinement of quarks and gluons, the nuclear forces, and the nucleon mass and structure still remain in the non-perturbative regime. Dispersive sum rules, based on universal principles, provide a data-driven approach to studying the nucleon structure without model dependencies. Among these sum rules, the well-known Gerasimov-Drell-Hearn (GDH) sum rule relates the anomalous magnetic moment to a weighted integral over the photo-absorption cross section. Its generalized form extends to virtual photon absorption at arbitrary four-momentum transfer squared (Q²) and thus provides a unique relation for studying the nucleon spin structure over an experimentally accessible range of Q². The measured integrals can be compared with theoretical predictions for the spin-dependent Compton amplitudes. Such experimental tests at intermediate and low Q² deepen our knowledge of the transition from the asymptotic-freedom regime to the color-confinement regime of QCD. Experiment E97-110 was performed at the Thomas Jefferson National Accelerator Facility to precisely measure the generalized GDH sum rule and the moments of the neutron and 3He spin structure functions in the low-energy region. During the experiment, a longitudinally polarized electron beam with energies from 1.1 to 4.4 GeV was scattered from a 3He gas target polarized longitudinally or transversely at the Hall A center. Inclusive asymmetries and polarized cross-section differences, as well as unpolarized cross sections, were measured in the quasielastic and resonance regions. In this work, the 3He spin-dependent structure functions g1(ν, Q²) and g2(ν, Q²) at Q² = 0.032-0.230 GeV² have been extracted from the experimental data, and the generalized GDH sum rule of 3He is obtained for the first time for Q² < 0.1 GeV². The results exhibit a "turn-over" behavior at Q² = 0.1 GeV², which strongly indicates that the GDH sum rule for real photons will be recovered as Q² → 0.
Manual control of yaw motion with combined visual and vestibular cues
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1977-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
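A minimal sketch, under assumed first-order dynamics rather than the fitted describing functions, of what 'summing in a complementary manner' means: a low-pass visual channel and a high-pass vestibular channel whose transfer functions sum to unity, with the crossover placed near the reported 0.05 Hz:

```python
# Complementary low-pass (visual) / high-pass (vestibular) channel pair.
# The two transfer functions sum to exactly one at every frequency.
import numpy as np

f = np.logspace(-3, 1, 200)                # frequency, Hz
s = 2j * np.pi * f                         # Laplace variable on the jw-axis
tau = 1.0 / (2 * np.pi * 0.05)             # crossover placed at 0.05 Hz

H_visual     = 1.0 / (tau * s + 1.0)       # dominates below the crossover
H_vestibular = tau * s / (tau * s + 1.0)   # dominates above the crossover

assert np.allclose(H_visual + H_vestibular, 1.0)   # complementary by construction
```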
Small sum privacy and large sum utility in data publishing.
Fu, Ada Wai-Chee; Wang, Ke; Wong, Raymond Chi-Wing; Wang, Jia; Jiang, Minhao
2014-08-01
While the study of privacy preserving data publishing has drawn a lot of interest, some recent work has shown that existing mechanisms do not limit all inferences about individuals. This paper is a positive note in response to this finding. We point out that not all inference attacks should be countered, in contrast to all existing works known to us, and based on this we propose a model called SPLU. This model protects sensitive information, by which we refer to answers for aggregate queries with small sums, while queries with large sums are answered with higher accuracy. Using SPLU, we introduce a sanitization algorithm to protect data while maintaining high data utility for queries with large sums. Empirical results show that our method behaves as desired. Copyright © 2014 Elsevier Inc. All rights reserved.
Heterodimer Binding Scaffolds Recognition via the Analysis of Kinetically Hot Residues.
Perišić, Ognjen
2018-03-16
Physical interactions between proteins are often difficult to decipher. The aim of this paper is to present an algorithm designed to recognize binding patches and supporting structural scaffolds of interacting heterodimer proteins using the Gaussian Network Model (GNM). The recognition is based on the (self-)adjustable identification of kinetically hot residues and their connection to possible binding scaffolds. The kinetically hot residues are residues with the lowest entropy, i.e., the highest contribution to the weighted sum of the fastest modes per chain extracted via GNM. The algorithm adjusts the number of fast modes in the GNM's weighted-sum calculation using the ratio of predicted and expected numbers of target residues (contact and neighboring first-layer residues). This approach produces very good results when applied to dimers with high protein sequence length ratios. The protocol's ability to recognize near-native decoys was compared with that of the residue-level statistical potential of Lu and Skolnick using the Sternberg and Vakser decoy dimer sets. The statistical potential produced better overall results, but in a number of cases its predictive ability was comparable, or even inferior, to that of the adjustable GNM approach. The results presented in this paper suggest that in heterodimers at least one protein has an interacting scaffold determined by the immovable, kinetically hot residues. In many cases, interacting proteins (especially if of noticeably different sizes) either behave as a rigid lock and key or, presumably, exhibit the opposite dynamic behavior: while the binding surface of one protein is rigid and stable, its partner's interacting scaffold is more flexible and adaptable.
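A minimal sketch (random stand-in coordinates, and a weighting choice that is an assumption, not the paper's exact recipe) of the GNM step the algorithm builds on: diagonalize the Kirchhoff (connectivity) matrix and score residues by a weighted sum over the k fastest modes; peaks of the profile are candidates for kinetically hot residues:

```python
# Kirchhoff matrix, fastest GNM modes, weighted-sum residue profile.
import numpy as np

def kirchhoff(coords, cutoff=7.3):
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    G = -(d < cutoff).astype(float)          # -1 for contacting residue pairs
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))      # diagonal = contact degree
    return G

def hot_residue_profile(coords, k=5):
    G = kirchhoff(coords)
    vals, vecs = np.linalg.eigh(G)           # eigenvalues in ascending order
    fast_vals, fast_vecs = vals[-k:], vecs[:, -k:]    # k fastest modes
    w = fast_vals / fast_vals.sum()          # weighting choice: an assumption
    return (fast_vecs ** 2 * w).sum(axis=1)  # per-residue weighted amplitude

coords = np.random.default_rng(1).random((100, 3)) * 30.0   # toy CA positions
profile = hot_residue_profile(coords)
print("hot-residue candidates:", np.argsort(profile)[-10:])
```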
Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.
Steel, Ruth Irene
2015-01-01
Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
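The MTP bookkeeping reduces to a few lines, as in the sketch below; the numbers are invented for illustration:

```python
# Weighted-sum daily travel distance from multiple simultaneous paths.
paths = [
    {"distance_m": 850.0, "n_individuals": 24},   # main path
    {"distance_m": 990.0, "n_individuals": 12},   # detour used by a subgroup
    {"distance_m": 700.0, "n_individuals": 4},    # shortcut
]
group_size = sum(p["n_individuals"] for p in paths)
dtd = sum(p["distance_m"] * p["n_individuals"] / group_size for p in paths)
print(f"MTP daily travel distance: {dtd:.0f} m")
```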
NASA Astrophysics Data System (ADS)
Pini, M. G.; Rettori, A.; Bogani, L.; Lascialfari, A.; Mariani, M.; Caneschi, A.; Sessoli, R.
2011-09-01
The static and dynamic properties of the single-chain molecular magnet Co(hfac)2NITPhOMe (CoPhOMe) (hfac = hexafluoroacetylacetonate, NITPhOMe = 4'-methoxy-phenyl-4,4,5,5-tetramethylimidazoline-1-oxyl-3-oxide) are investigated in the framework of the Ising model with Glauber dynamics, in order to take into account both the effect of an applied magnetic field and a finite size of the chains. For static fields of moderate intensity and short chain lengths, the approximation of a monoexponential decay of the magnetization fluctuations is found to be valid at low temperatures; for strong fields and long chains, a multiexponential decay should rather be assumed. The effect of an oscillating magnetic field, with intensity much smaller than that of the static one, is included in the theory in order to obtain the dynamic susceptibility χ(ω). We find that, for an open chain with N spins, χ(ω) can be written as a weighted sum of N frequency contributions, with a sum rule relating the frequency weights to the static susceptibility of the chain. Very good agreement is found between the theoretical dynamic susceptibility and the ac susceptibility measured in moderate static fields (Hdc≤2 kOe), where the approximation of a single dominating frequency for each segment length turns out to be valid. For static fields in this range, data for the relaxation time, τ versus Hdc, of the magnetization of CoPhOMe at low temperature are also qualitatively reproduced by theory, provided that finite-size effects are included.
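In symbols, the weighted-sum form described above can plausibly be written as (a Debye-like transcription assumed here, not quoted from the paper):

```latex
\chi(\omega) \,=\, \sum_{k=1}^{N} \frac{\chi_k}{1 + i\omega\tau_k},
\qquad \sum_{k=1}^{N} \chi_k \,=\, \chi_{\mathrm{static}},
```

where χ_k and τ_k are the weight and relaxation time of the k-th frequency contribution, and the second relation is the sum rule tying the frequency weights to the static susceptibility of the chain.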
Design and test of a four channel motor for electromechanical flight control actuation
NASA Technical Reports Server (NTRS)
1984-01-01
To provide a suitable electromagnetic torque-summing approach to flight control system redundancy, a four-channel motor capable of sustaining full performance after any two credible failures was designed, fabricated, and tested. The design consists of a single samarium-cobalt permanent-magnet rotor with four separate three-phase windings arrayed in individual stator quadrants around the periphery. Trade studies established the sensitivities of weight and performance to such parameters as design speed, winding pattern, number of poles, magnet configuration, and strength. The motor electromagnetically sums the torque of the individual channels on a single rotor and eliminates complex mechanical gearing arrangements.
Sum and mean. Standard programs for activation analysis.
Lindstrom, R M
1994-01-01
Two computer programs in use for over a decade in the Nuclear Methods Group at NIST illustrate the utility of standard software: programs widely available and widely used, in which (ideally) well-tested public algorithms produce results that are well understood, and thereby capable of comparison, within the community of users. Sum interactively computes the position, net area, and uncertainty of the area of spectral peaks, and can give better results than automatic peak search programs when peaks are very small, very large, or unusually shaped. Mean combines unequal measurements of a single quantity, tests for consistency, and obtains the weighted mean and six measures of its uncertainty.
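A minimal sketch of the kind of combination a program like Mean performs: inverse-variance weighting, a chi-square consistency test, and internal versus external uncertainties (two of the several uncertainty measures mentioned above). The data are invented:

```python
# Inverse-variance weighted mean with a chi-square consistency test.
import numpy as np

x = np.array([10.2, 10.5, 9.9, 10.4])        # repeated measurements
u = np.array([0.20, 0.30, 0.25, 0.15])       # their standard uncertainties

w = 1.0 / u**2                               # inverse-variance weights
mean = np.sum(w * x) / np.sum(w)
u_int = np.sqrt(1.0 / np.sum(w))             # internal: from the quoted u_i
chi2 = np.sum(w * (x - mean) ** 2)           # consistency statistic, dof = n-1
u_ext = u_int * np.sqrt(chi2 / (len(x) - 1)) # external: from observed scatter

print(f"mean = {mean:.3f}, u_int = {u_int:.3f}, u_ext = {u_ext:.3f}, chi2 = {chi2:.2f}")
```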
Highly conductive composites for fuel cell flow field plates and bipolar plates
Jang, Bor Z; Zhamu, Aruna; Song, Lulu
2014-10-21
This invention provides a fuel cell flow field plate or bipolar plate having flow channels on faces of the plate, comprising an electrically conductive polymer composite. The composite is composed of (A) at least 50% by weight of a conductive filler, comprising at least 5% by weight reinforcement fibers, expanded graphite platelets, graphitic nano-fibers, and/or carbon nano-tubes; (B) polymer matrix material at 1 to 49.9% by weight; and (C) a polymer binder at 0.1 to 10% by weight; wherein the sum of the conductive filler weight %, polymer matrix weight % and polymer binder weight % equals 100% and the bulk electrical conductivity of the flow field or bipolar plate is at least 100 S/cm. The invention also provides a continuous process for cost-effective mass production of the conductive composite-based flow field or bipolar plate.
Tree Branching: Leonardo da Vinci's Rule versus Biomechanical Models
Minamino, Ryoko; Tateno, Masaki
2014-01-01
This study examined Leonardo da Vinci's rule (i.e., the sum of the cross-sectional area of all tree branches above a branching point at any height is equal to the cross-sectional area of the trunk or the branch immediately below the branching point) using simulations based on two biomechanical models: the uniform stress and elastic similarity models. Model calculations of the daughter/mother ratio (i.e., the ratio of the total cross-sectional area of the daughter branches to the cross-sectional area of the mother branch at the branching point) showed that both biomechanical models agreed with da Vinci's rule when the branching angles of daughter branches and the weights of lateral daughter branches were small; however, the models deviated from da Vinci's rule as the weights and/or the branching angles of lateral daughter branches increased. The calculated values of the two models were largely similar but differed in some ways. Field measurements of Fagus crenata and Abies homolepis also fit this trend, wherein models deviated from da Vinci's rule with increasing relative weights of lateral daughter branches. However, this deviation was small for a branching pattern in nature, where empirical measurements were taken under realistic measurement conditions; thus, da Vinci's rule did not critically contradict the biomechanical models in the case of real branching patterns, though the model calculations described the contradiction between da Vinci's rule and the biomechanical models. The field data for Fagus crenata fit the uniform stress model best, indicating that stress uniformity is the key constraint of branch morphology in Fagus crenata rather than elastic similarity or da Vinci's rule. On the other hand, mechanical constraints are not necessarily significant in the morphology of Abies homolepis branches, depending on the number of daughter branches. Rather, these branches were often in agreement with da Vinci's rule. PMID:24714065
NASA Astrophysics Data System (ADS)
Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur
2017-08-01
Geometric Brownian motion (GBM) is frequently used to model the price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can achieve increased exponential growth compared to a single asset by effectively reducing the noise. The sum of GBM processes is no longer a log-normal process and has complex statistical properties. The nonergodicity of the weighted-average process results in constant degradation of the exponential growth from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, whenever asset values change, it is necessary to rebalance their relative weights, exposing this strategy to fees (transaction costs). Two strategies suggested in the past for cases that involve fees are rebalancing the portfolio periodically and rebalancing it partially. In this paper, we study these two strategies in the presence of correlations and fees. We show that using periodic and partial rebalancing strategies, it is possible to maintain a steady exponential growth while minimizing the losses due to fees. We also demonstrate how well these redistribution strategies perform on real-world market data, despite the fact that not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and for portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.
Weaving and neural complexity in symmetric quantum states
Susa, Cristian E.; Girolami, Davide
2017-12-27
Here, we study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.
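In symbols, the weaving measure described above can plausibly be written as (normalization assumed, not quoted from the paper):

```latex
W(\rho) \,=\, \sum_{k=2}^{N} \omega_k \, C_k(\rho), \qquad \omega_k \propto k,
```

where C_k(ρ) denotes the genuine k-partite correlation of the N-partite state ρ and the weights ω_k grow with the correlation order k.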
14 CFR Appendix E to Part 420 - Tables for Explosive Site Plan
Code of Federal Regulations, 2010 CFR
2010-01-01
.... Table E-2, Liquid Propellant Explosive Equivalents (propellant combination: explosive equivalent). LO2/LH2: the larger of 8W^(2/3), where W is the weight of LO2/LH2, or 14% of W. LO2/LH2 + LO2/RP-1: the sum of (20% for LO2/RP-1) + the larger of 8W^(2/3), where W is the weight of LO2/LH2, or 14% of W. LO2/RP-1: 20% of W up to...
2013-04-01
from the University of Rochester. Marchetti has worked in digital image processing at Eastman Kodak and in digital control systems at Contraves USA...which was based on a weighted sum of the gain for self and the perceived gain of other stakeholder programs. A more recent perception of gains weighs...handled with a weighted formula. To the extent that understanding is incomplete (i.e., knowledge of other's gain is less than 1), a stakeholder program
Exact Maximum-Entropy Estimation with Feynman Diagrams
NASA Astrophysics Data System (ADS)
Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.
2018-02-01
A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.
Co-pyrolysis characteristics of sawdust and coal blend in TGA and a fixed bed reactor.
Park, Dong Kyoo; Kim, Sang Done; Lee, See Hoon; Lee, Jae Goo
2010-08-01
Co-pyrolysis characteristics of a sawdust and coal blend were determined in TGA and a fixed bed reactor. The yield and conversion of co-pyrolysis of the sawdust and coal blend, based on volatile matter, are higher than the sum of those of sawdust and coal individually. From the TGA experiments, the weight-loss rate of the sawdust and coal blend increases above 400 degrees C, and additional weight loss was observed at 700 degrees C. In a fixed bed under isothermal conditions, the synergy to produce more volatiles appears at 500-700 degrees C, and the maximum synergy is exhibited at a sawdust blending ratio of 0.6 at 600 degrees C. The gas product yields increase remarkably in the lower temperature range by reducing the tar yield. The CO yield increases by up to 26% at 400 degrees C and the CH(4) yield by up to 62% at 600 degrees C compared with the values calculated from the additive model. (c) 2010 Elsevier Ltd. All rights reserved.
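The additive (no-interaction) baseline against which the synergy is measured is a simple mass-weighted sum; the sketch below uses invented yields for illustration:

```python
# Additive (no-interaction) baseline and synergy for a binary blend.
x_sawdust = 0.6                        # sawdust blending ratio
y_sawdust, y_coal = 0.72, 0.31         # volatile yields of the pure feedstocks
y_blend_measured = 0.68                # measured yield of the blend

y_additive = x_sawdust * y_sawdust + (1.0 - x_sawdust) * y_coal
synergy = y_blend_measured - y_additive
print(f"additive prediction: {y_additive:.3f}, synergy: {synergy:+.3f}")
```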
Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun
2017-03-01
H ∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics is mostly unknown. Identification of dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and value are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.
Sum rules for quasifree scattering of hadrons
NASA Astrophysics Data System (ADS)
Peterson, R. J.
2018-02-01
The areas dσ/dΩ of fitted quasifree scattering peaks from bound nucleons in continuum hadron-nucleus spectra measuring d²σ/dΩdω are converted to sum rules akin to the Coulomb sums familiar from continuum electron scattering spectra from nuclear charge. Hadronic spectra with or without charge exchange of the beam are considered. These sums are compared to the simple expectations of a nonrelativistic Fermi gas, including a Pauli blocking factor. For scattering without charge exchange, the hadronic sums are below this expectation, as also observed with Coulomb sums. For charge-exchange spectra, the sums are near or above the simple expectation, with larger uncertainties. The strong role of hadron-nucleon in-medium total cross sections is noted from use of the Glauber model.
ERIC Educational Resources Information Center
Kwok, Oi-man; West, Stephen G.; Green, Samuel B.
2007-01-01
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…
NASA Astrophysics Data System (ADS)
Sofiev, Mikhail; Ritenberga, Olga; Albertini, Roberto; Arteta, Joaquim; Belmonte, Jordina; Geller Bernstein, Carmi; Bonini, Maira; Celenk, Sevcan; Damialis, Athanasios; Douros, John; Elbern, Hendrik; Friese, Elmar; Galan, Carmen; Oliver, Gilles; Hrga, Ivana; Kouznetsov, Rostislav; Krajsek, Kai; Magyar, Donat; Parmentier, Jonathan; Plu, Matthieu; Prank, Marje; Robertson, Lennart; Steensen, Birthe Marie; Thibaudon, Michel; Segers, Arjo; Stepanovich, Barbara; Valdebenito, Alvaro M.; Vira, Julius; Vokou, Despoina
2017-10-01
The paper presents the first modelling experiment of European-scale olive pollen dispersion, analyses the quality of the predictions, and outlines the research needs. A six-model ensemble of the Copernicus Atmosphere Monitoring Service (CAMS) was run throughout the olive season of 2014, computing the olive pollen distribution. The simulations have been compared with observations in eight countries which are members of the European Aeroallergen Network (EAN). Analysis was performed for the individual models, the ensemble mean and median, and for a dynamically optimised combination of the ensemble members obtained via fusion of the model predictions with observations. The models, while generally reproducing the olive season of 2014, showed noticeable deviations from both the observations and each other. In particular, the start of the season was predicted 8 days too early on average, and for some models the error amounted to almost 2 weeks. For the end of the season, the disagreement between the models and the observations varied from a nearly perfect match to 2 weeks too late. A series of sensitivity studies carried out to understand the origin of the disagreements revealed the crucial role of ambient temperature and of the consistency of its representation by the meteorological models and the heat-sum-based phenological model. In particular, a simple correction to the heat-sum threshold eliminated the shift of the start of the season, but its validity in other years remains to be checked. The short-term features of the concentration time series were reproduced better, suggesting that precipitation events, cold/warm spells, and the large-scale transport were represented rather well. Ensemble averaging led to more robust results. The best skill scores were obtained with data fusion, which used the previous days' observations to identify the optimal weighting coefficients of the individual model forecasts. Such combinations were tested for forecasting periods of up to 4 days and shown to remain nearly optimal throughout the whole period.
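A minimal sketch (invented data; the CAMS fusion scheme is certainly more elaborate) of the idea behind the data-fusion step: fit nonnegative combination weights for the ensemble members against the previous days' observations, then apply them to today's member forecasts:

```python
# Fit nonnegative fusion weights on past days, apply to today's ensemble.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
obs_past = rng.random(10) * 100.0                  # observed counts, 10 past days
members_past = obs_past[:, None] * rng.uniform(0.5, 1.5, (10, 6))  # 6 models

w, _ = nnls(members_past, obs_past)                # nonnegative least squares
members_today = rng.random(6) * 100.0              # today's 6 member forecasts
fused = members_today @ w                          # fused (weighted) forecast
print("weights:", np.round(w, 3), " fused forecast:", round(fused, 1))
```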
Code of Federal Regulations, 2012 CFR
2012-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2013 CFR
2013-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2011 CFR
2011-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2010 CFR
2010-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2014 CFR
2014-01-01
... to the Act. Administrative controls means the provisions relating to organization and management... agreement under subsection 274b. of the Act. Non-Agreement State means any other State. Alert means events... equivalent means the sum of the products of the dose equivalent to the body organ or tissue and the weighting...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Commission has entered into an effective agreement under subsection 274b. of the Act. Non-agreement State... access control measures that are not related to the safe use of, or security of, radiological materials... equivalent means the sum of the products of the dose equivalent to the organ or tissue and the weighting...
ERIC Educational Resources Information Center
Lohnas, Lynn J.; Kahana, Michael J.
2014-01-01
According to the retrieved context theory of episodic memory, the cue for recall of an item is a weighted sum of recently activated cognitive states, including previously recalled and studied items as well as their associations. We show that this theory predicts there should be compound cuing in free recall. Specifically, the temporal contiguity…
76 FR 44815 - Chlorantraniliprole; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-27
... effects resulting from short-term dosing were observed. Therefore, the aggregate risk is the sum of the... increased liver weight (males only). Incidental oral, short/intermediate-term (1 to 30 days): N/A; no hazard was identified via the oral route over the short- and intermediate-term and therefore, no...
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the... boat. (b) The boat must be loaded with a quantity of weight that, when submerged, is equal to the sum...
Street choice logit model for visitors in shopping districts.
Kawada, Ko; Yamada, Takashi; Kishimoto, Tatsuya
2014-09-01
In this study, we propose two models for predicting people's activity. The first model is the pedestrian distribution prediction (or postdiction) model by multiple regression analysis using space syntax indices of urban fabric and people distribution data obtained from a field survey. The second model is a street choice model for visitors using multinomial logit model. We performed a questionnaire survey on the field to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices in their routes. We proposed a utility function, sum of weighted space syntax indices, and other indices, and estimated the parameters for weights on the basis of maximum likelihood. These models consider both street networks, distance from destination, direction of the street choice and other spatial compositions (numbers of pedestrians, cars, shops, and elevation). The first model explains the characteristics of the street where many people tend to walk or stay. The second model explains the mechanism underlying the street choice of visitors and clarifies the differences in the weights of street choice parameters among the various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH are different for each attribute. People with defined destinations tend to choose streets that "have more shops, and are wider and lower". In contrast, people with undefined destinations tend to choose streets of high Int.V. The choice of males is affected by Int.V, SHOPS, WIDTH (positive) and CARS (negative). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual persons is affected by all variables. The behavior of people visiting in groups is affected by SHOP and WIDTH (positive).
Goshvarpour, Ateke; Goshvarpour, Atefeh
2018-04-30
Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of the data can achieve a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature-level fusion approach was proposed. First, using information theory, two similarity indicators of the signal were extracted: correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbors (kNN), the performance of each index in the classification of meditators' and non-meditators' HRV signals was appraised. Then, three fusion rules, including division, product, and weighted-sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm that defines the weight of each feature based on statistical p-values. The performance of HRV classification using the combined features was compared with that of the non-combined features. Overall, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability and proficiency of the division and weighted-sum rules in improving the classifier accuracies.
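A minimal sketch (invented data, and a p-value-to-weight mapping that is an assumption, not the paper's exact rule) of weighted-sum feature fusion with significance-derived weights:

```python
# Feature weights from p-values, then weighted-sum feature fusion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
correntropy   = np.r_[rng.normal(1.0, 0.2, 50), rng.normal(1.4, 0.2, 50)]
cs_divergence = np.r_[rng.normal(0.5, 0.1, 50), rng.normal(0.6, 0.1, 50)]
labels = np.r_[np.zeros(50), np.ones(50)]     # 0 = non-meditator, 1 = meditator

feats = np.column_stack([correntropy, cs_divergence])
pvals = np.array([stats.ttest_ind(col[labels == 0], col[labels == 1]).pvalue
                  for col in feats.T])
w = -np.log10(pvals)                          # smaller p -> larger weight (assumed rule)
w /= w.sum()

z = (feats - feats.mean(axis=0)) / feats.std(axis=0)   # standardize before fusing
fused = z @ w                                 # one fused feature per signal
print("weights:", np.round(w, 3))
```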
Enhanced linear-array photoacoustic beamforming using modified coherence factor.
Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador
2018-02-01
Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, using the DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. The coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS algebra and propose the use of a delay-multiply-and-sum beamformer in place of the DAS in the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better-detectable targets. The quantitative results of the experiment (using wire targets) show that MCF leads to about 45% and 40% improvement over CF in terms of signal-to-noise ratio and full-width-half-maximum, respectively. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
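A minimal sketch of conventional CF weighting applied to a DAS pixel (channel data and delays are invented). The paper's MCF replaces the DAS algebra in the CF numerator with a delay-multiply-and-sum term; the standard CF shown here is the baseline it improves on:

```python
# Delay-and-sum with conventional coherence-factor weighting, one pixel.
import numpy as np

def das_cf_pixel(rf, delays):
    """rf: (n_elements, n_samples) channel data; delays: per-element sample index."""
    aligned = np.array([rf[i, d] for i, d in enumerate(delays)])
    das = aligned.sum()                               # DAS output
    # CF = |sum s_i|^2 / (N * sum |s_i|^2): 1 for coherent, ~0 for incoherent sums.
    cf = abs(das) ** 2 / (len(aligned) * np.sum(np.abs(aligned) ** 2) + 1e-12)
    return cf * das                                   # CF-weighted pixel value

rng = np.random.default_rng(3)
rf = rng.normal(size=(64, 1024))                      # invented channel data
delays = rng.integers(0, 1024, size=64)               # invented focusing delays
print(das_cf_pixel(rf, delays))
```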
Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations
NASA Astrophysics Data System (ADS)
Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.
2018-07-01
One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We consider simulations in which the number of photon packets is Poisson distributed, while the weight assigned to a single photon packet follows any distribution of choice. We show how to estimate the statistical uncertainty of the sum of weights in each bin from the output of a single radiative-transfer simulation. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for large number of packets show that we generalize existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
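A minimal sketch of the classical large-count estimate that the paper generalizes: for a Poisson number of packets with independent weights, the variance of the per-bin sum of weights is estimated by the sum of squared weights (a compound-Poisson result). The Bayesian treatment in the paper remains valid even for bins with zero packets, where this classical estimate breaks down:

```python
# Classical per-bin uncertainty of a sum of packet weights.
import numpy as np

rng = np.random.default_rng(4)
n_packets = rng.poisson(40)                     # Poisson number of packets in a bin
weights = rng.lognormal(0.0, 0.5, size=n_packets)   # arbitrary weight distribution

total = weights.sum()                           # the per-bin estimate
sigma = np.sqrt(np.sum(weights ** 2))           # compound-Poisson std. deviation
print(f"bin estimate: {total:.2f} +/- {sigma:.2f} ({n_packets} packets)")
```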
An Extension of SIC Predictions to the Wiener Coactive Model
Houpt, Joseph W.; Townsend, James T.
2011-01-01
The survivor interaction contrasts (SIC) is a powerful measure for distinguishing among candidate models of human information processing. One class of models to which SIC analysis can apply are the coactive, or channel summation, models of human information processing. In general, parametric forms of coactive models assume that responses are made based on the first passage time across a fixed threshold of a sum of stochastic processes. Previous work has shown that that the SIC for a coactive model based on the sum of Poisson processes has a distinctive down-up-down form, with an early negative region that is smaller than the later positive region. In this note, we demonstrate that a coactive process based on the sum of two Wiener processes has the same SIC form. PMID:21822333
NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2010-01-01
The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.
NASA Technical Reports Server (NTRS)
Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.
1992-01-01
A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state of the art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a data base and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress is presented for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Item Response Modeling with Sum Scores
ERIC Educational Resources Information Center
Johnson, Timothy R.
2013-01-01
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
NASA Astrophysics Data System (ADS)
Chair, Noureddine
2014-02-01
We have recently developed methods for obtaining the exact two-point resistance of the complete graph minus N edges. We use these methods to obtain closed formulas for certain trigonometrical sums that arise in connection with the one-dimensional lattice, in proving Scott's conjecture on the permanent of the Cauchy matrix, and in the perturbative chiral Potts model. The generalized trigonometrical sums of the chiral Potts model are shown to satisfy recursion formulas that are transparent and direct, and that differ from those of Gervois and Mehta. By making a change of variables in these recursion formulas, the dimension of the space of conformal blocks of the SU(2) and SO(3) WZW models may be computed recursively. Our methods are then extended to compute the corner-to-corner resistance and the Kirchhoff index of the first non-trivial two-dimensional resistor network, 2×N. Finally, we obtain new closed formulas for variants of trigonometrical sums, some of which appear in connection with number theory.
ERIC Educational Resources Information Center
Green, Samuel B.; Yang, Yanyun
2009-01-01
A method is presented for estimating reliability using structural equation modeling (SEM) that allows for nonlinearity between factors and item scores. Assuming the focus is on consistency of summed item scores, this method for estimating reliability is preferred to those based on linear SEM models and to the most commonly reported estimate of…
A simplified model of precipitation enhancement over a heterogeneous surface
NASA Astrophysics Data System (ADS)
Cioni, Guido; Hohenegger, Cathy
2018-06-01
Soil moisture heterogeneities influence the onset of convection and the subsequent evolution of precipitating systems through the triggering of mesoscale circulations. However, local evaporation also plays a role in determining precipitation amounts. Here we aim at disentangling the effects of advection and evaporation on precipitation over the course of a diurnal cycle by formulating a simple conceptual model. The derivation of the model is inspired by the results of simulations performed with a high-resolution (250 m) large-eddy simulation model over a surface with varying degrees of heterogeneity. A key element of the conceptual model is the representation of precipitation as a weighted sum of advection and evaporation, each weighted by its own efficiency. The model is then used to isolate the main parameters that control precipitation variations over a spatially drier patch. It is found that these changes surprisingly do not depend on soil moisture itself but purely on parameters that describe the initial atmospheric state. The likelihood of enhanced precipitation over drier soils is discussed based on these parameters.
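The weighted-sum representation can plausibly be written as (symbols assumed here, not quoted from the paper):

```latex
P \,=\, \eta_{\mathrm{adv}}\, M_{\mathrm{adv}} \,+\, \eta_{\mathrm{evap}}\, E,
```

where M_adv is the moisture advected over the patch by the mesoscale circulation, E the local evaporation, and η_adv, η_evap their respective precipitation efficiencies.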
NASA Astrophysics Data System (ADS)
Nadi, S.; Samiei, M.; Salari, H. R.; Karami, N.
2017-09-01
This paper proposes a new model for multi-criteria evaluation under uncertain conditions. In this model we consider the interaction between criteria as one of the most challenging issues, especially in the presence of uncertainty. In this case, the usual pairwise comparisons and weighted sums cannot be used to calculate the importance of the criteria and to aggregate them. Our model is based on the combination of non-additive fuzzy linguistic preference relation AHP (FLPRAHP), the Choquet integral, and the Sugeno λ-measure. The proposed model captures fuzzy preferences of users and fuzzy values of criteria and uses the Sugeno λ-measure to determine the importance of criteria and their interaction. Then, integrating the Choquet integral and FLPRAHP, all the interactions between criteria are taken into account with the least number of comparisons, and the final score for each alternative is determined. The model thus accounts for a comprehensive set of interactions between criteria, which leads to more reliable results. An illustrative example demonstrates the effectiveness and capability of the proposed model to evaluate different alternatives in a multi-criteria decision problem.
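A minimal sketch (not the paper's FLPRAHP pipeline; densities and scores are invented, and scores are assumed to lie in [0, 1]) of the aggregation core: a Sugeno λ-measure constructed from per-criterion densities, and the discrete Choquet integral of an alternative's criterion scores with respect to it:

```python
# Sugeno lambda-measure + discrete Choquet integral.
import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the measure parameter lam > -1."""
    g = np.asarray(densities, dtype=float)
    if abs(g.sum() - 1.0) < 1e-12:
        return 0.0                            # densities already additive
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    # The non-trivial root is negative when sum(g) > 1, positive otherwise.
    return (brentq(f, -1.0 + 1e-9, -1e-9) if g.sum() > 1.0
            else brentq(f, 1e-9, 1e6))

def lam_measure(subset_g, lam):
    """Measure of a subset from its densities: g(A+{i}) = gA + gi + lam*gA*gi."""
    m = 0.0
    for gi in subset_g:
        m = m + gi + lam * m * gi
    return m

def choquet(scores, densities):
    """Discrete Choquet integral of nonnegative scores w.r.t. the lam-measure."""
    lam = sugeno_lambda(densities)
    order = np.argsort(scores)                # ascending scores
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        coalition = [densities[j] for j in order[pos:]]   # score >= scores[i]
        total += (scores[i] - prev) * lam_measure(coalition, lam)
        prev = scores[i]
    return total

print(choquet(np.array([0.7, 0.4, 0.9]), [0.3, 0.4, 0.5]))
```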
Viscoelastic deformation near active plate boundaries
NASA Technical Reports Server (NTRS)
Ward, S. N.
1986-01-01
Model deformations near the active plate boundaries of Western North America using space-based geodetic measurements as constraints are discussed. The first six months of this project were spent gaining familiarity with space-based measurements, accessing the Crustal Dynamics Data Information Computer, and building time-independent deformation models. The initial goal was to see how well the simplest elastic models can reproduce very long baseline interferometry (VLBI) baseline data. From the Crustal Dynamics Data Information Service, a total of 18 VLBI baselines are available which have been surveyed on four or more occasions. These data were fed into weighted and unweighted inversions to obtain baseline closure rates. Four of the better-quality lines are illustrated. The deformation model assumes that the observed baseline rates result from a combination of rigid plate tectonic motions plus a component resulting from elastic strain build-up due to a failure of the plate boundary to slip at the full plate tectonic rate. The elastic deformation resulting from the locked plate boundary is meant to portray interseismic strain accumulation. During and shortly after a large interplate earthquake, these strains are largely released, and points near the fault which were previously retarded suddenly catch up to the positions predicted by rigid plate models. Researchers judge the quality of fit by the sum of squares of weighted residuals, termed the total variance. The observed baseline closures have a total variance of 99 (cm/yr)². When the RM2 velocities are assumed to model the data, the total variance increases to 154 (cm/yr)².
Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis
Nieves, Jeri W.; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J. Americo M.; Sorenson, Eric J.; D’Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi
2017-01-01
IMPORTANCE There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). OBJECTIVE To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. DESIGN, SETTING, AND PARTICIPANTS A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. EXPOSURES Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). MAIN OUTCOMES AND MEASURES Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale–Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). RESULTS Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5–68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of “good” micronutrients and “good” food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. CONCLUSIONS AND RELEVANCE Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. Those responsible for nutritional care of the patient with ALS should consider promoting fruit and vegetable intake since they are high in antioxidants and carotenes. PMID:27775751
Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights
NASA Astrophysics Data System (ADS)
Mohammed, Husam Jasim; Kasim, Maznah Mat; Shaharanee, Izwan Nizal Mohd
2017-11-01
This paper compares the performances of four rank-based weighting assessment techniques, Rank Sum (RS), Rank Reciprocal (RR), Rank Exponent (RE), and Rank Order Centroid (ROC), on five identified e-learning criteria to select the best weighting method. A total of 35 experts in a public university in Malaysia were asked to rank the criteria and to evaluate five e-learning approaches: blended learning, flipped classroom, ICT-supported face-to-face learning, synchronous learning, and asynchronous learning. The best ranked criteria weights, defined as the weights with the least total absolute difference from the geometric mean of all weights, were then used with the TOPSIS method to select the most suitable e-learning approach (the weighting schemes and selection rule are sketched below). The results show that the RR weights are the best, and that the flipped classroom is the most suitable approach. This paper develops a decision framework to aid decision makers (DMs) in choosing the most suitable weighting method for solving multi-criteria decision-making (MCDM) problems.
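The four rank-based weighting schemes and the paper's selection rule are simple to state; a sketch using the standard formulas (ranks run from 1 = most important to n), with a hypothetical expert ranking:

```python
import numpy as np
from scipy.stats import gmean

def rank_weights(r, method="RS", p=2):
    """Rank-based criteria weights; ranks r = 1 (most important) .. n."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    if method == "RS":
        w = n - r + 1.0                                   # Rank Sum
    elif method == "RR":
        w = 1.0 / r                                       # Rank Reciprocal
    elif method == "RE":
        w = (n - r + 1.0) ** p                            # Rank Exponent
    elif method == "ROC":                                 # Rank Order Centroid
        w = np.array([np.sum(1.0 / np.arange(k, n + 1)) / n
                      for k in r.astype(int)])
    else:
        raise ValueError(method)
    return w / w.sum()

# Selection rule from the paper: keep the method whose weights have the least
# total absolute difference from the geometric mean of all methods' weights.
ranks = [1, 2, 3, 4, 5]                                   # hypothetical ranking
W = {m: rank_weights(ranks, m) for m in ("RS", "RR", "RE", "ROC")}
g = gmean(np.vstack(list(W.values())), axis=0)
best = min(W, key=lambda m: np.abs(W[m] - g).sum())
```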
Using Adjoint Methods to Improve 3-D Velocity Models of Southern California
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.
2006-12-01
We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a non-linear conjugate gradient algorithm to iteratively improve velocity models of southern California.
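Conceptually, the event kernel is just the anomaly-weighted sum of individual Fréchet kernels (in practice it is obtained directly from two simulations rather than by summing precomputed kernels); a schematic sketch:

```python
import numpy as np

def event_kernel(frechet_kernels, traveltime_anomalies):
    """Sum of Fréchet kernels weighted by their traveltime anomalies.

    frechet_kernels: (n_measurements, nx, ny, nz) banana-doughnut kernels
    traveltime_anomalies: (n_measurements,) observed-minus-predicted times
    """
    return np.tensordot(traveltime_anomalies, frechet_kernels, axes=1)

# The 'misfit' kernel is then the sum of event kernels over all earthquakes,
# i.e. a graphical representation of the gradient of the misfit function.
```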
A universal reduced glass transition temperature for liquids
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1979-01-01
Data on the dependence of the glass transition temperature on the molecular structure for low-molecular-weight liquids are analyzed in order to determine whether Boyer's reduced glass transition temperature (1952) is a universal constant as proposed. It is shown that the Boyer ratio varies widely depending on the chemical nature of the molecule. It is pointed out that a characteristic temperature ratio, defined by the ratio of the sum of the melting temperature and the boiling temperature to the sum of the glass transition temperature and the boiling temperature, is a universal constant independent of the molecular structure of the liquid. The average value of the ratio obtained from data for 65 liquids is 1.15.
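The proposed characteristic ratio is a one-line computation; a sketch with hypothetical temperatures in kelvin:

```python
def fedors_ratio(tg, tm, tb):
    """Characteristic ratio (Tm + Tb) / (Tg + Tb); reported average ~1.15."""
    return (tm + tb) / (tg + tb)

# Hypothetical liquid with Tg = 184 K, Tm = 279 K, Tb = 353 K:
print(fedors_ratio(184.0, 279.0, 353.0))  # ~1.18, near the reported average
```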
Calculations of the first frequency moment of the structure factor in the BCS model
NASA Astrophysics Data System (ADS)
Rendell, J. M.; Carbotte, J. P.
1998-03-01
We have calculated the first frequency moment of the dynamical structure factor, S(q,ω), known as the f-sum, using the BCS model of the susceptibility χ(q,ω), with phenomenological models of the normal state dispersion, ε̃_k, and the superconducting energy gap, Δ_k(T). We have found an explicit expression for the f-sum in both the normal state and the superconducting state. Numerically, we show that the f-sum is insensitive to temperature changes in the range 0 to the order of magnitude of T_c, to the state (normal or superconducting) and to the size and type of energy gap, Δ_k(T), in the superconducting state. The f-sum does depend intimately on the normal state dispersion model, ε̃_k, and on the filling in the first Brillouin zone. In addition, we show numerically that the f-sum is nearly constant for the Random Phase Approximation (RPA) of the susceptibility up to pseudo-potentials U ≤ U_c, the critical potential. Thus, a large increase in Im χ(q₀,ω₀) at frequency ω₀ and a potential U > 0 (e.g. examining the 41 meV peak at q₀ = (π,π)) is compensated by a commensurate reduction in Im χ(q₀,ω) at other frequencies.
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performances of all candidate sub-models are monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution (sketched below), so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
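The sum-to-one constrained combination admits a closed-form (Lagrange multiplier) solution of the kind the authors exploit for efficiency; a minimal sketch over a recent data window, with names chosen for illustration:

```python
import numpy as np

def combine_predictions(P, y):
    """Sum-to-one combination weights minimizing the windowed mean square
    error: min ||y - P w||^2 subject to 1'w = 1 (closed form).

    P: (window, M) predictions of the M selected sub-models
    y: (window,) observed outputs
    """
    R, p = P.T @ P, P.T @ y
    ones = np.ones(R.shape[0])
    Ri = np.linalg.pinv(R)                 # pseudo-inverse guards against rank loss
    mu = (ones @ Ri @ p - 1.0) / (ones @ Ri @ ones)
    return Ri @ (p - mu * ones)            # weights sum to one
```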
Combined natural convection and non-gray radiation heat transfer in a horizontal annulus
NASA Astrophysics Data System (ADS)
Sun, Yujia; Zhang, Xiaobing; Howell, John R.
2018-02-01
Natural convection and non-gray radiation in an annulus containing a radiatively participating gas are investigated. To capture the effect of non-gray radiation, the spectral-line-based weighted-sum-of-gray-gases model is adopted for the gas radiative properties. A case with only surface radiation (transparent medium) is also considered to isolate the relative contributions of surface radiation and gas radiation. The finite volume method is used to solve the mass, momentum, energy and radiative transfer equations. Comparisons among pure convection, surface radiation only, and combined gas and surface radiation show that radiation is not negligible and that gas radiation becomes more important with increasing Rayleigh number (and annulus size).
Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis...orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis...problem. Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture
Fernández-López, Juan Ramón; Cámara, Jesús; Maldonado, Sara; Rosique-Gracia, Javier
2013-01-01
The aim of this study was to analyse the association of morphology and functional outcomes during a paddling test with the ranking position (RP) of competitive junior surfers. Ten male surfers (age: mean 17.60, s=2.06 years) performed a maximum incremental test on a modified ergometer (Ergo Vasa Swim, USA) to determine, per unit of weight, the relative heart rate at lactate threshold (RHRLT) and at onset of blood lactate accumulation (RHROBLA) and the relative power output at LT (RWLT) and at OBLA (RWOBLA). Anthropometrics were weight, height and the sum of six skinfolds (subscapular, triceps, supraspinal, abdominal, anterior thigh and calf), plus Heath-Carter anthropometric somatotypes. A stepwise multiple regression was constructed to model and predict RP. Surfers shared a relatively short stature and light weight, with a broader range of skinfold thickness (174.30, s=0.07 cm; 66.73, s=5.91 kg; 57.03, s=12.29 mm), and the mean somatotype was ectomorphic-mesomorph: 2.20-4.36-3.09 (Category 2). Two model equations were possible: (A) RP = −244.550·RWOBLA + 262.787; (B) RP = −217.028·RWOBLA + 31.21·endomorphy + 169.16, with 63.1% and 83% of variance explained, respectively. A hierarchical cluster analysis on the Euclidean distances of the variables in model B also distinguished between upper and lower ranking groups. RWOBLA was more useful than endomorphy, anthropometric measures and the other functional outcomes for predicting RP. RWOBLA and endomorphy should be considered important variables that may influence the success of these young competitive surfers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sambade, Maria J.; Kimple, Randall J.; Camp, J. Terese
2010-06-01
Purpose: To determine whether lapatinib, a dual epidermal growth factor receptor (EGFR)/HER2 kinase inhibitor, can radiosensitize EGFR+ or HER2+ breast cancer xenografts. Methods and Materials: Mice bearing xenografts of basal-like/EGFR+ SUM149 and HER2+ SUM225 breast cancer cells were treated with lapatinib and fractionated radiotherapy and tumor growth inhibition correlated with alterations in ERK1 and AKT activation by immunohistochemistry. Results: Basal-like/EGFR+ SUM149 breast cancer tumors were completely resistant to treatment with lapatinib alone but highly growth impaired with lapatinib plus radiotherapy, exhibiting an enhancement ratio average of 2.75 and a fractional tumor product ratio average of 2.20 during the study period. In contrast, HER2+ SUM225 breast cancer tumors were highly responsive to treatment with lapatinib alone and yielded a relatively lower enhancement ratio average of 1.25 during the study period with lapatinib plus radiotherapy. Durable tumor control in the HER2+ SUM225 model was more effective with the combination treatment than either lapatinib or radiotherapy alone. Immunohistochemical analyses demonstrated that radiosensitization by lapatinib correlated with ERK1/2 inhibition in the EGFR+ SUM149 model and with AKT inhibition in the HER2+ SUM225 model. Conclusion: Our data suggest that lapatinib combined with fractionated radiotherapy may be useful against EGFR+ and HER2+ breast cancers and that inhibition of downstream signaling to ERK1/2 and AKT correlates with sensitization in EGFR+ and HER2+ cells, respectively.
Spin Foam Models of Quantum Gravity
NASA Astrophysics Data System (ADS)
Miković, A.
2005-03-01
We give a short review of the spin foam models of quantum gravity, with an emphasis on the Barrett-Crane model. After explaining the shortcomings of the Barrett-Crane model, we briefly discuss two new approaches, one based on the 3d spin foam state sum invariants for the embedded spin networks, and the other based on representing the string scattering amplitudes as 2d spin foam state sum invariants.
Health-related quality of life, obesity, and fitness in schoolchildren: the Cuenca study.
Morales, Pablo Franquelo; Sánchez-López, Mairena; Moya-Martínez, Pablo; García-Prieto, Jorge Cañete; Martínez-Andrés, María; García, Noelia Lahoz; Martínez-Vizcaíno, Vicente
2013-09-01
The purpose of this study was to analyze the association of weight status and physical fitness with health-related quality of life (HRQoL) and to examine the independent association of body mass index (BMI), cardiorespiratory fitness (CRF) and musculoskeletal fitness (MF) with HRQoL in schoolchildren. Cross-sectional study of 1,158 schoolchildren, 8-11 years, from 20 schools in the Cuenca province, Spain. We measured weight, height, and physical fitness, the latter by CRF (20-m shuttle run test) and an MF index obtained by summing the age- and sex-specific z scores of the handgrip strength test (normalized by body weight) and the standing broad jump test. Self-reported HRQoL was measured by the KIDSCREEN-52 questionnaire. Normal weight boys scored better in the physical well-being, mood and emotions, autonomy, and social support and peers dimensions than overweight/obese boys. The mean in the self-perception dimension was lower in obese girls compared to normal weight or overweight girls. Higher levels of CRF and MF were associated with better physical well-being in both genders. Multiple linear regression models showed that the influence of MF in boys and CRF in girls on HRQoL was greater than that of overweight. This is one of the first studies to assess the association of CRF and MF with HRQoL while controlling for BMI. CRF and MF are closely related to HRQoL, in particular to physical well-being. Improving fitness could be a strategy of particular interest for improving the HRQoL of schoolchildren.
Stratum Weight Determination Using Shortest Path Algorithm
Susan L. King
2005-01-01
Forest Inventory and Analysis uses poststratification to calculate resource estimates. Each county has a different stratification, and the stratification may differ depending on the number of panels of data available. A "5 by 5 sum" filter was passed over the reclassified forest/nonforest Multi-Resolution Landscape Characterization image used in Phase 1, generating an...
9 CFR 381.409 - Nutrition label content.
Code of Federal Regulations, 2010 CFR
2010-01-01
... general factors of 4, 4, and 9 calories per gram for protein, total carbohydrate, and total fat... calories per gram for protein, total carbohydrate less the amount of insoluble dietary fiber, and total fat... subtraction of the sum of the crude protein, total fat, moisture, and ash from the total weight of the product...
9 CFR 317.309 - Nutrition label content.
Code of Federal Regulations, 2010 CFR
2010-01-01
... general factors of 4, 4, and 9 calories per gram for protein, total carbohydrate, and total fat... calories per gram for protein, total carbohydrate less the amount of insoluble dietary fiber, and total fat... subtraction of the sum of the crude protein, total fat, moisture, and ash from the total weight of the product...
7 CFR 760.5 - Fair market value of milk.
Code of Federal Regulations, 2010 CFR
2010-01-01
... market value of the affected farmer's normal marketings, which, for the purposes of this subpart, shall be the sum of the net proceeds such farmer would have received for his normal marketings in each of... affected farmer's normal marketings for each such pay period by the average net price per hundred-weight of...
46 CFR 163.002-5 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... load means the sum of the weights of— (1) The rigid ladder or lift platform, the suspension cables (if... persons capacity of the hoist; (c) Lift height means the distance from the lowest step of the pilot ladder... (2) If the hoist does not have suspension cables, the ladder or lift platform is in its lowest...
46 CFR 163.002-5 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... load means the sum of the weights of— (1) The rigid ladder or lift platform, the suspension cables (if... persons capacity of the hoist; (c) Lift height means the distance from the lowest step of the pilot ladder... (2) If the hoist does not have suspension cables, the ladder or lift platform is in its lowest...
Do Employers Forgive Applicants' Bad Spelling in Résumés?
ERIC Educational Resources Information Center
Martin-Lacroux, Christelle; Lacroux, Alain
2017-01-01
Spelling deficiencies are becoming a growing concern among employers, but few studies have quantified this phenomenon and its impact on recruiters' choice. This article aims to highlight the relative weight of the form (the spelling skills) in application forms, compared with the content (the level of work experience), in recruiters' judgment…
The permeability coefficients of mixed matrix membranes of polydimethylsiloxane (PDMS) and silicalite crystal are taken as the sum of the permeability coefficients of membrane components each weighted by their associated mass fraction. The permeability coefficient of a membrane c...
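For two components, the mass-fraction-weighted mixing rule described in this fragment reduces to a one-liner (a sketch; the symbols are illustrative):

```python
def mixed_matrix_permeability(P_polymer, P_filler, w_filler):
    """Permeability of a PDMS/silicalite membrane as the mass-fraction-weighted
    sum of the component permeability coefficients (linear mixing rule)."""
    return (1.0 - w_filler) * P_polymer + w_filler * P_filler
```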
Code of Federal Regulations, 2014 CFR
2014-01-01
... Definitions. As used in this part: Additional tier 1 capital is defined in § 3.20(c). Advanced approaches... described in § 3.100(b)(1). Advanced approaches total risk-weighted assets means: (1) The sum of: (i) Credit... respect to a company, means any company that controls, is controlled by, or is under common control with...
On the null distribution of Bayes factors in linear regression
USDA-ARS?s Scientific Manuscript database
We show that under the null, 2 log(Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...
2015-12-24
minimizing a weighted sum of the time and control effort needed to collect sensor data. This problem formulation is a modified traveling salesman ... [table-of-contents fragments: 2.5 The Shortest Path Problem; 2.5.1 Traveling Salesman Problem; 3.3.1 Initial Guess by Traveling Salesman Problem Solution]
Gharehkhani, Samira; Nouri-Borujerdi, Ali; Kazi, Salim Newaz; Yarmand, Hooman
2014-01-01
In this study an expression for the soot absorption coefficient is introduced to extend the weighted-sum-of-gray-gases data to a furnace medium containing a gas-soot mixture in a 150 MWe utility boiler. Heat transfer and temperature distribution on the walls and within the furnace space are predicted by the zone method technique. Analyses have been done considering both the presence and absence of soot particles at 100% load. To validate the proposed soot absorption coefficient, the expression is coupled with Taylor and Foster's data as well as Truelove's data for the CO2-H2O mixture, and the total emissivities are calculated and compared with Truelove's parameters for 3-term and 4-term gray gases plus two soot absorption coefficients. In addition, some experiments were conducted at 100% and 75% loads to measure the furnace exit gas temperature as well as the rate of steam production. The predicted results show good agreement with the measured data at the power plant site. PMID:25143981
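The underlying total-emissivity expression — gray-gas terms augmented by a soot absorption coefficient — can be sketched as follows (an illustrative form only; in a full WSGG model the weights a_i are polynomials in temperature, and the coefficient values come from data such as Truelove's):

```python
import numpy as np

def wsgg_total_emissivity(a, k_gas, k_soot, p, L):
    """Total emissivity of a gas-soot mixture: weighted sum of gray gases with
    the soot absorption coefficient added to each term (include a k=0 entry so
    the 'clear' gas still absorbs through soot).

    a: gray-gas weights; k_gas: coefficients [1/(atm*m)]; k_soot: [1/m]
    p: partial pressure of CO2+H2O [atm]; L: mean beam length [m]
    """
    a, k_gas = np.asarray(a, float), np.asarray(k_gas, float)
    return float(np.sum(a * (1.0 - np.exp(-(k_gas * p + k_soot) * L))))
```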
NASA Astrophysics Data System (ADS)
Musdalifah, N.; Handajani, S. S.; Zukhronah, E.
2017-06-01
Competition between homogeneous companies forces each company to maintain production quality. To address this, companies control production using statistical quality control with control charts. The Shewhart control chart is suited to normally distributed data, but production data are often non-normally distributed and exhibit small process shifts. The grand median control chart is a control chart for non-normally distributed data, while the cumulative sum (cusum) control chart is sensitive to small process shifts. The purpose of this research is to compare grand median and cusum control charts on the shuttlecock weight variable at CV Marjoko Kompas dan Domas, generating data that follow the actual distribution. The generated data are used to simulate the standard deviation multiplier for the grand median and cusum control charts. The simulation is calibrated to an average run length (ARL) of 370. The grand median control chart detects ten out-of-control points, while the cusum control chart detects one. It can be concluded that the grand median control chart is better than the cusum control chart.
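For reference, the tabular cusum statistics that make the cusum chart sensitive to small shifts can be sketched as follows (the reference value k and decision interval h are tuned, e.g., toward the target ARL of 370):

```python
import numpy as np

def tabular_cusum(x, target, k, h):
    """One-sided upper/lower tabular cusum; flags samples where either
    statistic exceeds the decision interval h.

    k: reference value (often half the shift of interest, in data units)
    h: decision interval (often 4-5 standard deviations)
    """
    c_plus = c_minus = 0.0
    out = []
    for xi in x:
        c_plus = max(0.0, c_plus + xi - (target + k))
        c_minus = max(0.0, c_minus + (target - k) - xi)
        out.append(c_plus > h or c_minus > h)
    return np.array(out)
```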
NASA Technical Reports Server (NTRS)
Liu, G.
1985-01-01
One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors, in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality, based upon the same performance index, and the total cost of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system model, which includes the unsteady aerodynamics model developed by Stephen Rock, was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
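The index itself is a weighted sum of mean square estimation errors, i.e., of diagonal entries of the estimator's error covariance; a schematic sketch (how the covariance P is obtained for each candidate layout, e.g., from a steady-state Kalman filter, is left abstract):

```python
import numpy as np

def sensor_location_index(P, weights):
    """Weighted sum of mean square estimation errors: the diagonal of the
    estimator's steady-state error covariance P, weighted by importance."""
    return float(np.dot(weights, np.diag(P)))

# Hypothetical usage: evaluate the covariance for each candidate layout and
# keep the layout with the smallest index.
# best = min(layouts, key=lambda loc: sensor_location_index(covariance_of(loc), w))
```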
Toscani, Mariana K; Mario, Fernanda M; Radavelli-Bagatini, Simone; Wiltgen, Denusa; Matos, Maria Cristina; Spritzer, Poli Maria
2011-11-01
The aim of the present study was to assess the effects of a high protein (HP) and a normal protein (NP) diet on patients with polycystic ovary syndrome (PCOS) and body mass index-matched controls in a sample of southern Brazilian women. This 8-week randomized trial was carried out at a university gynecological endocrinology clinic and included 18 patients with PCOS and 22 controls. Changes in weight, body composition, hormone, and metabolic profile were analyzed in women randomized to receive HP (30% protein, 40% carbohydrate, and 30% lipid) or NP (15% protein, 55% carbohydrate, and 30% lipid). The energy content was estimated for each participant at 20-25 kcal/kg current weight/day. Physical activity, blood pressure, homeostasis model assessment (HOMA) index, and fasting and 2-h glucose and insulin remained stable during the intervention in PCOS and controls, even in the presence of weight loss. There were no changes in lipid profile in either group. In contrast, body weight, body mass index (BMI), waist circumference, percent of body fat, and sum of trunk skinfolds decreased significantly after both diets in both groups. Total testosterone also decreased in PCOS and controls regardless of diet. In conclusion, calorie reduction, rather than protein content, seemed to affect body composition and hormonal profile in this short-term study. These findings emphasize the role of non-pharmacological interventions to reduce weight and ameliorate the anthropometric and clinical phenotype in PCOS.
Heterodimer Binding Scaffolds Recognition via the Analysis of Kinetically Hot Residues
Perišić, Ognjen
2018-01-01
Physical interactions between proteins are often difficult to decipher. The aim of this paper is to present an algorithm that is designed to recognize binding patches and supporting structural scaffolds of interacting heterodimer proteins using the Gaussian Network Model (GNM). The recognition is based on the (self) adjustable identification of kinetically hot residues and their connection to possible binding scaffolds. The kinetically hot residues are residues with the lowest entropy, i.e., the highest contribution to the weighted sum of the fastest modes per chain extracted via GNM. The algorithm adjusts the number of fast modes in the GNM’s weighted sum calculation using the ratio of predicted and expected numbers of target residues (contact and the neighboring first-layer residues). This approach produces very good results when applied to dimers with high protein sequence length ratios. The protocol’s ability to recognize near native decoys was compared to the ability of the residue-level statistical potential of Lu and Skolnick using the Sternberg and Vakser decoy dimers sets. The statistical potential produced better overall results, but in a number of cases its predicting ability was comparable, or even inferior, to the prediction ability of the adjustable GNM approach. The results presented in this paper suggest that in heterodimers at least one protein has interacting scaffold determined by the immovable, kinetically hot residues. In many cases, interacting proteins (especially if being of noticeably different sizes) either behave as a rigid lock and key or, presumably, exhibit the opposite dynamic behavior. While the binding surface of one protein is rigid and stable, its partner’s interacting scaffold is more flexible and adaptable. PMID:29547506
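A sketch of the GNM ingredients described above — a Kirchhoff matrix from Cα contacts, the fastest modes, and a weighted sum of squared mode amplitudes to rank kinetically hot residues. The eigenvalue weighting and cutoff here are illustrative assumptions, not the paper's exact (self-adjusting) protocol:

```python
import numpy as np
from scipy.spatial.distance import cdist

def kinetically_hot_residues(coords, n_fast=10, cutoff=7.3):
    """Rank residues by a weighted sum of squared amplitudes in the fastest
    GNM modes. coords: (N, 3) C-alpha coordinates; cutoff in angstroms."""
    contacts = cdist(coords, coords) < cutoff
    gamma = -contacts.astype(float)                  # off-diagonal: -1 per contact
    np.fill_diagonal(gamma, 0.0)
    np.fill_diagonal(gamma, -gamma.sum(axis=1))      # diagonal: contact degree
    vals, vecs = np.linalg.eigh(gamma)               # ascending eigenvalues
    fast_vals, fast_vecs = vals[-n_fast:], vecs[:, -n_fast:]
    score = (fast_vecs ** 2 * fast_vals).sum(axis=1) # eigenvalue-weighted sum
    return np.argsort(score)[::-1]                   # hottest residues first
```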
Analysis of the influencing factors of global energy interconnection development
NASA Astrophysics Data System (ADS)
Zhang, Yi; He, Yongxiu; Ge, Sifan; Liu, Lin
2018-04-01
Against the background of building a global energy interconnection and achieving green, low-carbon development, and in view of a new round of energy restructuring and changing energy technology, this paper establishes an index system for the factors influencing global energy interconnection development, based on the present state of that development globally and in China. Subjective and objective weight analyses of the influencing factors were conducted separately, by network level analysis and by the entropy method, and the two sets of weights were combined by additive weighting, giving comprehensive weights for the influencing factors and a ranking of their influence.
Nash, Mark S; Tractenberg, Rochelle E; Mendez, Armando J; David, Maya; Ljungberg, Inger H; Tinsley, Emily A; Burns-Drecq, Patricia A; Betancourt, Luisa F; Groah, Suzanne L
2016-10-01
To assess cardiometabolic syndrome (CMS) risk definitions in spinal cord injury/disease (SCI/D). Cross-sectional analysis of a pooled sample. Two SCI/D academic medical and rehabilitation centers. Baseline data from subjects in 7 clinical studies were pooled; not all variables were collected in all studies; therefore, participant numbers varied from 119 to 389. The pooled sample included men (79%) and women (21%) with SCI/D >1 year at spinal cord levels spanning C3-T2 (American Spinal Injury Association Impairment Scale [AIS] grades A-D). Not applicable. We computed the prevalence of CMS using the American Heart Association/National Heart, Lung, and Blood Institute guideline (CMS diagnosis as sum of risks ≥3 method) for the following risk components: overweight/obesity, insulin resistance, hypertension, and dyslipidemia. We compared this prevalence with the risk calculated from 2 routinely used nonguideline CMS risk assessments: (1) key cut scores identifying insulin resistance derived from the homeostatic model 2 (HOMA2) method or quantitative insulin sensitivity check index (QUICKI), and (2) a cardioendocrine risk ratio based on an inflammation (C-reactive protein [CRP])-adjusted total cholesterol/high-density lipoprotein cholesterol ratio. After adjustment for multiple comparisons, injury level and AIS grade were unrelated to CMS or risk factors. Of the participants, 13% and 32.1% had CMS when using the sum of risks or HOMA2/QUICKI model, respectively. Overweight/obesity and (pre)hypertension were highly prevalent (83% and 62.1%, respectively), with risk for overweight/obesity being significantly associated with CMS diagnosis (sum of risks, χ²=10.105; adjusted P=.008). Insulin resistance was significantly associated with CMS when using the HOMA2/QUICKI model (χ²₂=21.23, adjusted P<.001). Of the subjects, 76.4% were at moderate to high risk from elevated CRP, which was significantly associated with CMS determination (both methods; sum of risks, χ²₂=10.198; adjusted P=.048 and HOMA2/QUICKI, χ²₂=10.532; adjusted P=.04). As expected, guideline-derived CMS risk factors were prevalent in individuals with SCI/D. Overweight/obesity, hypertension, and elevated CRP were common in SCI/D and, because they may compound risks associated with CMS, should be considered population-specific risk determinants. Heightened surveillance for risk, and adoption of healthy living recommendations specifically directed toward weight reduction, hypertension management, and inflammation control, should be incorporated as a priority for disease prevention and management.
A new approach for computing a flood vulnerability index using cluster analysis
NASA Astrophysics Data System (ADS)
Fernandez, Paulo; Mourato, Sandra; Moreira, Madalena; Pereira, Luísa
2016-08-01
A Flood Vulnerability Index (FloodVI) was developed using Principal Component Analysis (PCA) and a new aggregation method based on Cluster Analysis (CA). PCA simplifies a large number of variables into a few uncorrelated factors representing the social, economic, physical and environmental dimensions of vulnerability. CA groups areas that have the same characteristics in terms of vulnerability into vulnerability classes. The grouping of the areas determines their classification contrary to other aggregation methods in which the areas' classification determines their grouping. While other aggregation methods distribute the areas into classes, in an artificial manner, by imposing a certain probability for an area to belong to a certain class, as determined by the assumption that the aggregation measure used is normally distributed, CA does not constrain the distribution of the areas by the classes. FloodVI was designed at the neighbourhood level and was applied to the Portuguese municipality of Vila Nova de Gaia where several flood events have taken place in the recent past. The FloodVI sensitivity was assessed using three different aggregation methods: the sum of component scores, the first component score and the weighted sum of component scores. The results highlight the sensitivity of the FloodVI to different aggregation methods. Both sum of component scores and weighted sum of component scores have shown similar results. The first component score aggregation method classifies almost all areas as having medium vulnerability and finally the results obtained using the CA show a distinct differentiation of the vulnerability where hot spots can be clearly identified. The information provided by records of previous flood events corroborate the results obtained with CA, because the inundated areas with greater damages are those that are identified as high and very high vulnerability areas by CA. This supports the fact that CA provides a reliable FloodVI.
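A compact sketch of the PCA-plus-clustering aggregation idea (k-means stands in for the clustering step here; the paper's CA variant and the ordering of classes from low to very high vulnerability are analysis choices):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def flood_vulnerability_classes(X, n_factors=4, n_classes=5, seed=0):
    """PCA reduces the indicators to uncorrelated factors; a clustering of the
    factor scores (not fixed thresholds) then defines the vulnerability classes."""
    scores = PCA(n_components=n_factors).fit_transform(
        StandardScaler().fit_transform(X))
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=seed).fit_predict(scores)
    # Relabel the clusters by mean factor score so classes read low -> very high.
    order = np.argsort([scores[labels == k].mean() for k in range(n_classes)])
    return np.argsort(order)[labels]
```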
FIA BioSum: a tool to evaluate financial costs, opportunities and effectiveness of fuel treatments.
Jeremy Fried; Glenn Christensen
2004-01-01
FIA BioSum, a tool developed by the USDA Forest Service's Forest Inventory and Analysis (FIA) Program, generates reliable cost estimates, identifies opportunities and evaluates the effectiveness of fuel treatments in forested landscapes. BioSum is an analytic framework that integrates a suite of widely used computer models with a foundation of attribute-rich,...
Force sum rules for stepped surfaces of jellium
NASA Astrophysics Data System (ADS)
Farjam, Mani
2007-03-01
The Budd-Vannimenus theorem for jellium surface is generalized for stepped surfaces of jellium. Our sum rules show that the average value of the electrostatic potential over the stepped jellium surface equals the value of the potential at the corresponding flat jellium surface. Several sum rules are tested with numerical results obtained within the Thomas-Fermi model of stepped surfaces.
Ho, Lisa M; Nelson, Rendon C; Delong, David M
2007-05-01
To prospectively evaluate the use of lean body weight (LBW) as the main determinant of the volume and rate of contrast material administration during multi-detector row computed tomography of the liver. This HIPAA-compliant study had institutional review board approval. All patients gave written informed consent. Four protocols were compared. Standard protocol involved 125 mL of iopamidol injected at 4 mL/sec. Total body weight (TBW) protocol involved 0.7 g iodine per kilogram of TBW. Calculated LBW and measured LBW protocols involved 0.86 g of iodine per kilogram and 0.92 g of iodine per kilogram calculated or measured LBW for men and women, respectively. Injection rate used for the three experimental protocols was determined proportionally on the basis of the calculated volume of contrast material. Postcontrast attenuation measurements during portal venous phase were obtained in liver, portal vein, and aorta for each group and were summed for each patient. Patient-to-patient enhancement variability in same group was measured with Levene test. Two-tailed t test was used to compare the three experimental protocols with the standard protocol. Data analysis was performed in 101 patients (25 or 26 patients per group), including 56 men and 45 women (mean age, 53 years). Average summed attenuation values for standard, TBW, calculated LBW, and measured LBW protocols were 419 HU +/- 50 (standard deviation), 443 HU +/- 51, 433 HU +/- 50, and 426 HU +/- 33, respectively (P = not significant for all). Levene test results for summed attenuation data for standard, TBW, calculated LBW, and measured LBW protocols were 40 +/- 29, 38 +/- 33 (P = .83), 35 +/- 35 (P = .56), and 26 +/- 19 (P = .05), respectively. By excluding highly variable but poorly perfused adipose tissue from calculation of contrast medium dose, the measured LBW protocol may lessen patient-to-patient enhancement variability while maintaining satisfactory hepatic and vascular enhancement.
An Isotonic Partial Credit Model for Ordering Subjects on the Basis of Their Sum Scores
ERIC Educational Resources Information Center
Ligtvoet, Rudy
2012-01-01
In practice, the sum of the item scores is often used as a basis for comparing subjects. For items that have more than two ordered score categories, only the partial credit model (PCM) and special cases of this model imply that the subjects are stochastically ordered on the common latent variable. However, the PCM is very restrictive with respect…
Swingle, Brian
2013-09-06
We compute the entanglement entropy of a wide class of models that may be characterized as describing matter coupled to gauge fields. Our principal result is an entanglement sum rule that states that the entropy of the full system is the sum of the entropies of the two components. In the context of the models we consider, this result applies to the full entropy, but more generally it is a statement about the additivity of universal terms in the entropy. Our proof simultaneously extends and simplifies previous arguments, with extensions including new models at zero temperature as well as the ability to treat finite temperature crossovers. We emphasize that while the additivity is an exact statement, each term in the sum may still be difficult to compute. Our results apply to a wide variety of phases including Fermi liquids, spin liquids, and some non-Fermi liquid metals. For example, we prove that our model of an interacting Fermi liquid has exactly the log violation of the area law for entanglement entropy predicted by the Widom formula in agreement with earlier arguments.
Silva, T M; de Medeiros, A N; Oliveira, R L; Gonzaga Neto, S; Queiroga, R de C R do E; Ribeiro, R D X; Leão, A G; Bezerra, L R
2016-07-01
This study aimed to determine the impact of replacing soybean meal with peanut cake in the diets of crossbred Boer goats as determined by carcass characteristics and quality and by the fatty acid profile of meat. Forty vaccinated and dewormed crossbred Boer goats were used. Goats had an average age of 5 mo and an average BW of 15.6 ± 2.7 kg. Goats were fed Tifton-85 hay and a concentrate consisting of corn bran, soybean meal, and mineral premix. Peanut cake was substituted for soybean meal at levels of 0.0, 33.33, 66.67, and 100%. Biometric and carcass morphometric measurements of crossbred Boer goats were not affected by replacing soybean meal with peanut cake in the diet. There was no influence of the replacement of soybean meal with peanut cake on weight at slaughter (P = 0.28), HCW (P = 0.26), cold carcass weight (P = 0.23), noncarcass components of weight (P = 0.71), or muscularity index values (P = 0.11). However, regression equations indicated that there would be a reduction of 18 and 11% for loin eye area and muscle:bone ratio, respectively, between the treatment without peanut cake and the treatment with total soybean meal replacement. The weights and yields of the commercial cuts were not affected (P > 0.05) by replacing soybean meal with peanut cake in the diet. Replacing soybean meal with peanut cake did not affect the pH (P = 0.79), color index (P > 0.05), or chemical composition (P > 0.05) of the meat. However, a quadratic trend for the ash content was observed with peanut cake inclusion in the diet (P = 0.09). Peanut cake inclusion in the diet did not affect the concentrations of the sum of SFA (P = 0.29), the sum of unsaturated fatty acids (UFA; P = 0.29), or the sum of PUFA (P = 0.97) or the SFA:UFA ratio (P = 0.23) in goat meat. However, there was a linear decrease (P = 0.01) in the sum of odd-chain fatty acids in the meat with increasing peanut cake in the diet. Soybean meal replacement with peanut cake did not affect the n-6:n-3 ratio (P = 0.13) or the medium-chain fatty acid (P = 0.76), long-chain fatty acid (P = 0.74), or atherogenicity index values (P = 0.60) in the meat. The sensory attributes of the longissimus lumborum did not differ with the inclusion of peanut cake in the diet as a replacement for soybean meal. These results suggest that based on carcass and meat characteristics, peanut cake can completely substitute for soybean meal in the diet of crossbred Boer goats.
NASA Astrophysics Data System (ADS)
Li, Xing; Jia, Li
2014-10-01
Combustion characteristics of methane jet flames in an industrial burner operating in the high temperature combustion regime were investigated experimentally and numerically to clarify the effects of swirling high-temperature air on combustion. The Speziale-Sarkar-Gatski (SSG) Reynolds stress model, the Eddy-Dissipation Model (EDM), and the Discrete Ordinates Method (DOM) combined with the Weighted-Sum-of-Gray-Gases Model (WSGG) were employed for the numerical simulation. Both thermal-NO and prompt-NO mechanisms were considered to evaluate NO formation. Measured and computed temperature distributions and NO emissions in the swirling and non-swirling patterns show that the combustion characteristics of methane jet flames differ completely between the two cases. Non-swirling high-temperature air produced high NO formation, while significant NO suppression was achieved with swirling high-temperature air. Furthermore, computed velocity fields, dimensionless major-species mole fraction distributions, and thermal-NO molar reaction rate profiles indicate that an internal exhaust gas recirculation formed in the combustion zone in the swirling case.
Compton scattering from nuclei and photo-absorption sum rules
NASA Astrophysics Data System (ADS)
Gorchtein, Mikhail; Hobbs, Timothy; Londergan, J. Timothy; Szczepaniak, Adam P.
2011-12-01
We revisit the photo-absorption sum rule for real Compton scattering from the proton and from nuclear targets. In analogy with the Thomas-Reiche-Kuhn sum rule appropriate at low energies, we propose a new “constituent quark model” sum rule that relates the integrated strength of hadronic resonances to the scattering amplitude on constituent quarks. We study the constituent quark model sum rule for several nuclear targets. In addition, we extract the α=0 pole contribution for both proton and nuclei. Using the modern high-energy proton data, we find that the α=0 pole contribution differs significantly from the Thomson term, in contrast with the original findings by Damashek and Gilman.
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
On the joint bimodality of temperature and moisture near stratocumulus cloud tops
NASA Technical Reports Server (NTRS)
Randall, D. A.
1983-01-01
The observed distributions of the thermodynamic variables near stratocumulus top are highly bimodal. Two simple models of sub-grid fractional cloudiness motivated by this observed bimodality are examined. In both models, certain low order moments of two independent, moist-conservative thermodynamic variables are assumed to be known. The first model is based on the assumption of two discrete populations of parcels: a warm-dry population and a cool-moist population. If only the first and second moments are assumed to be known, the number of unknowns exceeds the number of independent equations. If the third moments are assumed to be known as well, the number of independent equations exceeds the number of unknowns. The second model is based on the assumption of a continuous joint bimodal distribution of parcels, obtained as the weighted sum of two binormal distributions. For this model, the third moments are used to obtain 9 independent nonlinear algebraic equations in 11 unknowns. Two additional equations are needed to determine the covariances within the two subpopulations. In case these two internal covariances vanish, the system of equations can be solved analytically.
ERIC Educational Resources Information Center
Shen, Jianping; Xia, Jiangang
2012-01-01
Is the power relationship between public school teachers and principals a win-win situation or a zero-sum game? By applying hierarchical linear modeling to the 1999-2000 nationally representative Schools and Staffing Survey data, we found that both the win-win and zero-sum-game theories had empirical evidence. The decision-making areas…
Additivity in tree biomass components of Pyrenean oak (Quercus pyrenaica Willd.)
Joao P. Carvalho; Bernard R. Parresol
2003-01-01
In tree biomass estimations, it is important to consider the property of additivity, i.e., the total tree biomass should equal the sum of the components. This work presents functions that allow estimation of the stem and crown dry weight components of Pyrenean oak (Quercus pyrenaica Willd.) trees. A procedure that considers additivity of tree biomass...
2013-01-01
Background The role of environmental factors in lumbar intervertebral disc degeneration (DD) in young adults is largely unknown. Therefore, we investigated whether body mass index (BMI), smoking, and physical activity are associated with lumbar DD among young adults. Methods The Oulu Back Study (OBS) is a subpopulation of the 1986 Northern Finland Birth Cohort (NFBC 1986) and it originally included 2,969 children. The OBS subjects received a postal questionnaire, and those who responded (N = 1,987) were invited to the physical examination. The participants (N = 874) were invited to lumbar MRI study. A total of 558 young adults (325 females and 233 males) underwent MRI that used a 1.5-T scanner at the mean age of 21. Each lumbar intervertebral disc was graded as normal (0), mildly (1), moderately (2), or severely (3) degenerated. We calculated a sum score of the lumbar DD, and analyzed the associations between environmental risk factors (smoking, physical activity and weight-related factors assessed at 16 and 19 years) and DD using ordinal logistic regression, the results being expressed as cumulative odds ratios (COR). All analyses were stratified by gender. Results Of the 558 subjects, 256 (46%) had no DD, 117 (21%) had sum score of one, 93 (17%) sum score of two, and 92 (17%) sum score of three or higher. In the multivariate ordinal logistic regression model, BMI at 16 years (highest vs. lowest quartile) was associated with DD sum score among males (COR 2.35; 95% CI 1.19-4.65) but not among females (COR 1.29; 95% CI 0.72-2.32). Smoking of at least four pack-years was associated with DD among males, but not among females (COR 2.41; 95% CI 0.99-5.86 and 1.59; 95% 0.67-3.76, respectively). Self-reported physical activity was not associated with DD. Conclusions High BMI at 16 years was associated with lumbar DD at 21 years among young males but not among females. High pack-years of smoking showed a comparable association in males, while physical activity had no association with DD in either gender. These results suggest that environmental factors are associated with DD among young males. PMID:23497297
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing their side effects with weights. The quantitative scores may measure the dangers of drugs, and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely the quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative powers for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performances, is used as the final quantitative prediction model. Since weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects, and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
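The quantitative score and the average-scoring ensemble are both simple weighted or averaged sums; a sketch (the side-effect weights are empirical inputs, as the paper notes):

```python
import numpy as np

def quantitative_score(profile, weights):
    """Quantitative side-effect score: weighted sum over a drug's binary
    side-effect profile (weights are empirical severity values)."""
    return float(np.dot(profile, weights))

def average_scoring_ensemble(feature_scores):
    """Average-scoring ensemble: mean of the scores predicted separately from
    substructures, targets, and indications."""
    return float(np.mean(feature_scores))
```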
Scoring of Side-Chain Packings: An Analysis of Weight Factors and Molecular Dynamics Structures.
Colbes, Jose; Aguila, Sergio A; Brizuela, Carlos A
2018-02-26
The protein side-chain packing problem (PSCPP) is a central task in computational protein design. The problem is usually modeled as a combinatorial optimization problem, which consists of searching for a set of rotamers, from a given rotamer library, that minimizes a scoring function (SF). The SF is a weighted sum of terms that can be decomposed into physics-based and knowledge-based terms. Although there are many methods to obtain approximate solutions for this problem, all of them have similar performances, and there has not been a significant improvement in recent years. Studies on protein structure prediction and protein design revealed the limitations of current SFs for achieving further improvements on these two problems. In the same line, a recent work reported a similar result for the PSCPP. In this work, we ask whether this negative result regarding further improvements in performance is due to (i) an incorrect weighting of the SF terms or (ii) the constrained conformation resulting from the protein crystallization process. To analyze these questions, we (i) modeled the PSCPP as a bi-objective combinatorial optimization problem, optimizing at the same time the two most important terms of two SFs of state-of-the-art algorithms, and (ii) performed a preprocessing relaxation of the crystal structure through molecular dynamics to simulate the protein in the solvent, evaluating the performance of these two state-of-the-art SFs under these conditions. Our results indicate that (i) no matter what combination of weight factors we use, the current SFs will not lead to better performances, and (ii) the evaluated SFs will not be able to improve performance on relaxed structures. Furthermore, the experiments revealed that the SFs and the methods are biased toward crystallized structures.
Supplier Selection Using Weighted Utility Additive Method
NASA Astrophysics Data System (ADS)
Karande, Prasad; Chakraborty, Shankar
2015-10-01
Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria for choosing the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses the decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives or a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction in an LP problem whose objective function is the minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that the WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real-world examples are illustrated to prove its applicability and appropriateness in solving supplier selection problems.
Rapid parallel semantic processing of numbers without awareness.
Van Opstal, Filip; de Lange, Floris P; Dehaene, Stanislas
2011-07-01
In this study, we investigate whether multiple digits can be processed at a semantic level without awareness, either serially or in parallel. In two experiments, we presented participants with two successive sets of four simultaneous Arabic digits. The first set was masked and served as a subliminal prime for the second, visible target set. According to the instructions, participants had to extract from the target set either the mean or the sum of the digits, and to compare it with a reference value. Results showed that participants applied the requested instruction to the entire set of digits that was presented below the threshold of conscious perception, because their magnitudes jointly affected the participant's decision. Indeed, the response decision could be accurately modeled as a sigmoid logistic function that pooled together the evidence provided by the four targets and, with lower weights, the four primes. In less than 800 ms, participants successfully approximated the addition and mean tasks, although they tended to overweight the large numbers, particularly in the sum task. These findings extend previous observations on ensemble coding by showing that set statistics can be extracted from abstract symbolic stimuli rather than low-level perceptual stimuli, and that an ensemble code can be represented without awareness.
Swanson, David L; Garland, Theodore
2009-01-01
Summit metabolic rate (M(sum), maximum cold-induced metabolic rate) is positively correlated with cold tolerance in birds, suggesting that high M(sum) is important for residency in cold climates. However, the phylogenetic distribution of high M(sum) among birds and the impact of its evolution on current distributions are not well understood. Two potential adaptive hypotheses might explain the phylogenetic distribution of high M(sum) among birds. The cold adaptation hypothesis contends that species wintering in cold climates should have higher M(sum) than species wintering in warmer climates. The flight adaptation hypothesis suggests that volant birds might be capable of generating high M(sum) as a byproduct of their muscular capacity for flight; thus, variation in M(sum) should be associated with capacity for sustained flight, one indicator of which is migration. We collected M(sum) data from the literature for 44 bird species and conducted both conventional and phylogenetically informed statistical analyses to examine the predictors of M(sum) variation. Significant phylogenetic signal was present for log body mass, log mass-adjusted M(sum), and average temperature in the winter range. In multiple regression models, log body mass, winter temperature, and clade were significant predictors of log M(sum). These results are consistent with a role for climate in determining M(sum) in birds, but also indicate that phylogenetic signal remains even after accounting for associations indicative of adaptation to winter temperature. Migratory strategy was never a significant predictor of log M(sum) in multiple regressions, a result that is not consistent with the flight adaptation hypothesis.
Design of pilot studies to inform the construction of composite outcome measures.
Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing
2017-06-01
Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores using the weighted sum of the component measures that maximize the signal-to-noise ratio of the resulting composite score have been proposed. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to reliably estimate optimal weights. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate the question of how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects aged 75 and over with an ApoE ε4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to that of a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
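One standard way to obtain such weights (a hedged sketch; the manuscript's exact formulation may differ) is to maximize the mean-to-standard-deviation ratio of the weighted sum, which yields weights proportional to Σ⁻¹μ, where μ and Σ are the mean-change vector and covariance matrix estimated from pilot data. A toy example with simulated pilot data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pilot data: annual change on 3 component measures, 100 subjects.
n, k = 100, 3
true_mean = np.array([0.30, 0.25, 0.15])          # mean decline per component
cov = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.4],
                [0.2, 0.4, 1.0]])
changes = rng.multivariate_normal(true_mean, cov, size=n)

mu = changes.mean(axis=0)
sigma = np.cov(changes, rowvar=False)

# Weights maximizing mean-to-SD ratio of the weighted sum: w ∝ Σ⁻¹μ.
w = np.linalg.solve(sigma, mu)
w /= np.abs(w).sum()                               # normalize for readability

def snr(w):
    """Signal-to-noise ratio of the weighted composite."""
    return (w @ mu) / np.sqrt(w @ sigma @ w)

print("optimal weights :", np.round(w, 3))
print("SNR optimal     :", round(snr(w), 3))
print("SNR equal weights:", round(snr(np.ones(k) / k), 3))
```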
NASA Astrophysics Data System (ADS)
Kanada-En'yo, Yoshiko
2016-02-01
Isovector and isoscalar dipole excitations in 9Be and 10Be are investigated in the framework of antisymmetrized molecular dynamics, in which angular-momentum and parity projections are performed. In the present method, 1p-1h excitation modes built on the ground state and a large-amplitude α-cluster mode are taken into account. The isovector giant dipole resonance (GDR) at E > 20 MeV shows a two-peak structure, which is understood from the dipole excitation in the 2α core part with the prolate deformation. Because of valence-neutron modes against the 2α core, low-energy E1 resonances appear at E < 20 MeV, exhausting about 20% of the Thomas-Reiche-Kuhn sum rule and 10% of the calculated energy-weighted sum. The dipole resonance at E ~ 15 MeV in 10Be can be interpreted as the parity partner of the ground state having a 6He+α structure and has remarkable E1 strength because of the coherent contribution of two valence neutrons. The isoscalar dipole strength for some low-energy resonances is significantly enhanced by the coupling with the α-cluster mode. For the E1 strength of 9Be, the calculation overestimates the energy-weighted sum (EWS) in the low-energy (E < 20 MeV) and GDR (20
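For reference, the Thomas-Reiche-Kuhn (TRK) sum rule quoted above is conventionally written, in its classical exchange-force-free form, as

\[
S_{\mathrm{TRK}} = \int \sigma_{E1}(E)\, dE \;\simeq\; \frac{2\pi^{2} e^{2}\hbar}{m_{N} c}\,\frac{NZ}{A} \;\approx\; 60\,\frac{NZ}{A}\ \mathrm{MeV\,mb},
\]

so quoting a percentage of the TRK sum rule expresses how much of this model-independent total the low-energy strength carries.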
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations of a statistical process control scheme are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for autocorrelated processes, including Shewhart, Cumulative Sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher noted that these charts are not suitable if the control limits derived for independent observations are used. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the process residuals; this procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
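A minimal sketch of a classical EWMA chart applied to AR(1) data; the parameter values are illustrative, and the modified chart studied in the paper adjusts the limits for the autocorrelation structure, which this sketch deliberately does not do:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """Classical EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1} with
    time-varying control limits; returns the statistic and limit arrays."""
    z = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 - half_width, mu0 + half_width

rng = np.random.default_rng(1)
# AR(1) data: positively autocorrelated, which inflates false alarms
# when i.i.d.-based control limits are used.
x = np.empty(200); x[0] = 0.0
for i in range(1, 200):
    x[i] = 0.7 * x[i - 1] + rng.normal()

z, lcl, ucl = ewma_chart(x)
print("signals at t =", np.where((z < lcl) | (z > ucl))[0])
```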
NASA Astrophysics Data System (ADS)
Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.
1993-08-01
Inspired by the time delays that occur in neurobiological signal transmission, we describe an adaptive time-delay neural network (ATNN), a powerful dynamic learning technique for spatiotemporal pattern transformation and temporal sequence identification. The dynamic properties of this network are formulated through the adaptation of time delays and synapse weights, which are adjusted on-line by gradient-descent rules according to the evolution of observed inputs and outputs. We have applied the ATNN to examples that possess spatiotemporal complexity, including pattern completion, in which temporal sequences are completed by the network. Simulation results show that the ATNN learns the topology of circular and figure-eight trajectories within 500 on-line training iterations, and reproduces each trajectory dynamically with very high accuracy. The ATNN was also trained to model the Fourier series expansion of the sum of different odd harmonics. The resulting network provides more flexibility and efficiency than the TDNN and allows the network to seek optimal values for the time delays as well as optimal synapse weights.
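The forward pass of a single such unit is a weighted sum over delayed inputs. A toy sketch follows; the actual ATNN adapts both the weights and the delays by gradient descent, which is omitted here, and all signal and parameter values are invented:

```python
import numpy as np

def atnn_output(x, weights, delays, t):
    """Output of a single time-delay unit: weighted sum of delayed inputs,
    y(t) = f( sum_i w_i * x(t - tau_i) ); here f = tanh."""
    s = sum(w * x[t - d] for w, d in zip(weights, delays) if t - d >= 0)
    return np.tanh(s)

# Hypothetical signal and a unit with three adaptable (weight, delay) pairs.
t_axis = np.arange(200)
x = np.sin(2 * np.pi * t_axis / 50)
weights = [0.8, -0.4, 0.3]
delays = [1, 5, 12]   # in the ATNN these delays are themselves adapted
print(atnn_output(x, weights, delays, t=100))
```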
Evaluation of health care system reform in Hubei Province, China.
Sang, Shuping; Wang, Zhenkun; Yu, Chuanhua
2014-02-21
This study established a set of indicators for and evaluated the effects of health care system reform in Hubei Province (China) from 2009 to 2011 with the purpose of providing guidance to policy-makers regarding health care system reform. The resulting indicators are based on the "Result Chain" logic model and include the following four domains: Inputs and Processes, Outputs, Outcomes and Impact. Health care system reform was evaluated using the weighted TOPSIS and weighted Rank Sum Ratio methods. Ultimately, the study established a set of indicators including four grade-1 indicators, 16 grade-2 indicators and 76 grade-3 indicators. The effects of the reforms increased year by year from 2009 to 2011 in Hubei Province. The health status of urban and rural populations and the accessibility, equity and quality of health services in Hubei Province were improved after the reforms. This sub-national case can be considered an example of a useful approach to the evaluation of the effects of health care system reform, one that could potentially be applied in other provinces or nationally.
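A compact sketch of the weighted TOPSIS step; the indicator values and weights below are invented, and the study's actual 76-indicator hierarchy is far larger:

```python
import numpy as np

def weighted_topsis(X, w, benefit):
    """Rank alternatives (rows of X) by relative closeness to the ideal
    solution. w: criterion weights summing to 1; benefit: True where
    larger values are better."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each column
    V = R * w                                   # apply weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)              # higher = better

# Hypothetical yearly indicator matrix (rows: 2009-2011; columns: 4 indicators).
X = np.array([[0.62, 0.55, 0.48, 0.70],
              [0.68, 0.61, 0.52, 0.74],
              [0.75, 0.66, 0.59, 0.80]])
w = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([True, True, True, True])
print("closeness 2009-2011:", np.round(weighted_topsis(X, w, benefit), 3))
```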
The giant Gamow-Teller resonance states
NASA Astrophysics Data System (ADS)
Suzuki, Toshio
1982-04-01
The mean energy of the giant Gamow-Teller resonance state (GTS) is studied, defined by the non-energy-weighted and the linearly energy-weighted sums of the strengths for $\sum_{i=1}^{A} \tau_i^- \sigma_i^-$. Using Bohr and Mottelson's Hamiltonian with the $\xi\, l \cdot \sigma$ force, the difference between the mean energies of the GTS and the isobaric analog state (IAS) is expressed as $E_{\mathrm{GTS}} - E_{\mathrm{IAS}} \approx 2\langle \pi | \sum_{i=1}^{A} \xi_i l_i \cdot \sigma_i | \pi \rangle / (3T_0) - 4(k_\tau - k_{\sigma\tau}) T_0$. The observed energy systematics is well explained by $k_\tau - k_{\sigma\tau} \approx 4/A$ MeV. The relationship between the mean energies and the excitation energies of the collective states in the random phase approximation for charge-exchange excitations is discussed in a simple model. From the excitation-energy systematics of the GTS, the values of $k_{\sigma\tau}$ and the Migdal parameter $g'$ are estimated to be about $k_{\sigma\tau} = (16\text{-}24)/A$ MeV and $g' = 0.49\text{-}0.72$, respectively.
Zenic, Natasa; Ostojic, Ljerka; Sisic, Nedim; Pojskic, Haris; Peric, Mia; Uljevic, Ognjen; Sekulic, Damir
2015-01-01
Objective The community of residence (ie, urban vs rural) is one of the known factors of influence on substance use and misuse (SUM). The aim of this study was to explore the community-specific prevalence of SUM and the associations that exist between scholastic, familial, sports and sociodemographic factors with SUM in adolescents from Bosnia and Herzegovina. Methods In this cross-sectional study, which was completed between November and December 2014, the participants were 957 adolescents (aged 17 to 18 years) from Bosnia and Herzegovina (485; 50.6% females). The independent variables were sociodemographic, academic, sport and familial factors. The dependent variables consisted of questions on cigarette smoking and alcohol consumption. We calculated differences between groups of participants (gender, community), while logistic regressions were applied to define associations between the independent and dependent variables. Results In the urban community, cigarette smoking is more prevalent in girls (OR=2.05; 95% CI 1.27 to 3.35), while harmful drinking is more prevalent in boys (OR=2.07; 95% CI 1.59 to 2.73). When data are weighted by gender and community, harmful drinking is more prevalent in urban boys (OR=1.97; 95% CI 1.31 to 2.95), cigarette smoking is more frequent in rural boys (OR=1.61; 95% CI 1.04 to 2.39), and urban girls misuse substances to a greater extent than rural girls (OR=1.70; 95% CI 1.16 to 2.51, OR=2.85; 95% CI 1.88 to 4.31, OR=2.78; 95% CI 1.67 to 4.61 for cigarette smoking, harmful drinking and simultaneous smoking-drinking, respectively). Academic failure is strongly associated with a higher likelihood of SUM. The associations between parental factors and SUM are more evident in urban youth. Sports factors are specifically correlated with SUM for urban girls. Conclusions Living in an urban environment should be considered as a higher risk factor for SUM in girls. Parental variables are more strongly associated with SUM among urban youth, most probably because of the higher parental involvement in children's personal lives in urban communities (eg, college plans). Specific indicators should be monitored in the prevention of SUM. PMID:26546145
Louzada, Martha L; Carrier, Marc; Lazo-Langner, Alejandro; Dao, Vi; Kovacs, Michael J; Ramsay, Timothy O; Rodger, Marc A; Zhang, Jerry; Lee, Agnes Y Y; Meyer, Guy; Wells, Philip S
2012-07-24
Long-term low-molecular-weight heparin (LMWH) is the current standard for treatment of venous thromboembolism (VTE) in cancer patients. Whether treatment strategies should vary according to individual risk of VTE recurrence remains unknown. We performed a retrospective cohort study and a validation study in patients with cancer-associated VTE to derive a clinical prediction rule that stratifies VTE recurrence risk. The cohort study of 543 patients determined that the model with the best classification performance included 4 independent predictors (sex, primary tumor site, stage, and prior VTE) with 100% sensitivity, a wide separation of recurrence rates, 98.1% negative predictive value, and a negative likelihood ratio of 0.16. In this model, the score sum ranged from -3 to 3 points. Patients with a score ≤ 0 had low risk (≤ 4.5%) for recurrence and patients with a score > 1 had a high risk (≥ 19%) for VTE recurrence. Subsequently, we applied and validated the rule in an independent set of 819 patients from 2 randomized, controlled trials comparing low-molecular-weight heparin to coumarin treatment in cancer patients. By identifying VTE recurrence risk in cancer patients with VTE, we may be able to tailor treatment, improving clinical outcomes while minimizing costs.
Parallel interference cancellation for CDMA applications
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Simon, Marvin K. (Inventor); Raphaeli, Dan (Inventor)
1997-01-01
The present invention provides a method of decoding a spread spectrum composite signal, the composite signal comprising plural user signals that have been spread with plural respective codes, wherein each coded signal is despread, averaged to produce a signal value, analyzed to produce a tentative decision, respread, summed with other respread signals to produce combined interference signals, the method comprising scaling the combined interference signals with a weighting factor to produce a scaled combined interference signal, scaling the composite signal with the weighting factor to produce a scaled composite signal, scaling the signal value by the complement of the weighting factor to produce a leakage signal, combining the scaled composite signal, the scaled combined interference signal and the leakage signal to produce an estimate of a respective user signal.
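A rough numerical sketch of the weighted (partial) cancellation idea, not the patented implementation; the codes, bits, and weighting factor below are invented:

```python
import numpy as np

def pic_stage(r, codes, tentative, weight):
    """One stage of weighted parallel interference cancellation. For each
    user k: despread the weighted composite minus the weighted sum of the
    other users' respread tentative signals, then add a 'leakage' term that
    retains the complement-weighted previous estimate."""
    n_users, n_chips = codes.shape
    respread = tentative[:, None] * codes          # respread tentative decisions
    new_stats = np.empty(n_users)
    for k in range(n_users):
        interference = respread[np.arange(n_users) != k].sum(axis=0)
        despread = codes[k] @ (weight * r - weight * interference) / n_chips
        new_stats[k] = despread + (1 - weight) * tentative[k]
    return new_stats

rng = np.random.default_rng(2)
codes = rng.choice([-1.0, 1.0], size=(3, 64))      # 3 users, length-64 codes
bits = np.array([1.0, -1.0, 1.0])
r = bits @ codes + 0.5 * rng.normal(size=64)       # composite received signal

stats = codes @ r / 64                              # matched-filter first stage
for _ in range(3):
    stats = pic_stage(r, codes, np.sign(stats), weight=0.7)
print("decisions:", np.sign(stats))
```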
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Considering the adjacency matrices of n-vertex graphs and the related graph Laplacians, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of the logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.; Newsom, J. R.; Abel, I.
1980-01-01
A direct method of synthesizing a low-order optimal feedback control law for a high-order system is presented. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean-square steady-state responses and control inputs. The controller is shown to be equivalent to a partial state estimator. The method is applied to the problem of active flutter suppression. Numerical results are presented for a 20th-order system representing an aeroelastic wind-tunnel wing model. Low-order controllers (fourth and sixth order) are compared with a full-order (20th-order) optimal controller and found to provide near-optimal performance with adequate stability margins.
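Such a performance index is conventionally of the quadratic form (a generic statement; the paper's exact weighting matrices are not reproduced here)

\[
J \;=\; \lim_{T\to\infty} \frac{1}{T}\, E\!\left[\int_{0}^{T} \left( \mathbf{y}^{\mathsf{T}} Q\, \mathbf{y} + \mathbf{u}^{\mathsf{T}} R\, \mathbf{u} \right) dt \right],
\]

where the diagonal entries of Q and R weight the mean-square responses y against the control inputs u.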
Optimal landing of a helicopter in autorotation
NASA Technical Reports Server (NTRS)
Lee, A. Y. N.
1985-01-01
Gliding descent in autorotation is a maneuver used by helicopter pilots in case of engine failure. The landing of a helicopter in autorotation is formulated as a nonlinear optimal control problem, using the OH-58A helicopter as the study vehicle. Helicopter vertical and horizontal velocities, vertical and horizontal displacements, and the rotor angular speed were modeled. An empirical approximation for the induced velocity in the vortex-ring state was provided. The cost function of the optimal control problem is a weighted sum of the squared horizontal and vertical components of the helicopter velocity at touchdown. Optimal trajectories are calculated for entry conditions well within the horizontal-vertical restriction curve, with the helicopter initially in hover or forward flight. The resulting two-point boundary-value problem with path equality constraints was successfully solved using the Sequential Gradient Restoration Technique.
Soviet Economic Policy Towards Eastern Europe
1988-11-01
high. Without specifying the determinants of Soviet demand for "allegiance" in more detail, the model is not testable; we cannot predict how subsidy...trade inside (Czechoslovakia, Bulgaria). These countries are behaving as predicted by the model. If this hypothesis is true, the pattern of subsidies...also compares the sum of per capita subsidies by country between 1970 and 1982 with the sum of subsidies predicted by the model. Because of the poor
Tripartite equilibrium strategy for a carbon tax setting problem in air passenger transport.
Xu, Jiuping; Qiu, Rui; Tao, Zhimiao; Xie, Heping
2018-03-01
Carbon emissions from air passenger transport have become increasingly serious with the rapid development of the aviation industry. Combined with a tripartite equilibrium strategy, this paper proposes a multi-level multi-objective model for an air passenger transport carbon tax setting problem (CTSP) among an international organization, an airline and passengers under fuzzy uncertainty. The proposed model is simplified to an equivalent crisp model by a weighted-sum procedure and a Karush-Kuhn-Tucker (KKT) transformation method. To solve the equivalent crisp model, a fuzzy-logic-controlled genetic algorithm with entropy-Boltzmann selection (FLC-GA with EBS) is designed as an integrated solution method. Then, a numerical example is provided to demonstrate the practicality and efficiency of the optimization method. Results show that the cap-tax mechanism is an important part of air passenger transport carbon emission mitigation and thus should be effectively applied to air passenger transport. These results also indicate that the proposed method can provide efficient ways of mitigating carbon emissions for air passenger transport, and can therefore assist decision makers in formulating relevant strategies under multiple scenarios.
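The weighted-sum step collapses the multiple objectives into one scalar objective before the KKT transformation. A toy sketch with two invented objective functions; the paper's actual objectives, players, and constraints are far richer:

```python
import numpy as np
from scipy.optimize import minimize

# Two conflicting objectives in an illustrative tax-setting variable x:
# f1 = emissions (decreasing in the tax), f2 = economic cost (increasing).
f1 = lambda x: np.exp(-x[0])        # emissions
f2 = lambda x: x[0] ** 2            # economic cost of the tax

def scalarized(x, w1, w2):
    """Weighted-sum scalarization: collapse the multi-objective problem
    into a single objective before solving."""
    return w1 * f1(x) + w2 * f2(x)

for w1 in (0.2, 0.5, 0.8):
    res = minimize(scalarized, x0=[0.5], args=(w1, 1 - w1), bounds=[(0, 3)])
    print(f"w1={w1}: tax level={res.x[0]:.3f}, emissions={f1(res.x):.3f}")
```

Sweeping the weights traces out different trade-off (Pareto) points, which is why the choice of weights matters for the equilibrium obtained.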
NASA Technical Reports Server (NTRS)
Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.
1992-01-01
The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Tissue microstructure estimation using a deep network inspired by a dictionary-based framework.
Ye, Chuyang
2017-12-01
Diffusion magnetic resonance imaging (dMRI) captures the anisotropic pattern of water displacement in the neuronal tissue and allows noninvasive investigation of the complex tissue microstructure. A number of biophysical models have been proposed to relate the tissue organization with the observed diffusion signals, so that the tissue microstructure can be inferred. The Neurite Orientation Dispersion and Density Imaging (NODDI) model has been a popular choice and has been widely used for many neuroscientific studies. It models the diffusion signal with three compartments that are characterized by distinct diffusion properties, and the parameters in the model describe tissue microstructure. In NODDI, these parameters are estimated in a maximum likelihood framework, where the nonlinear model fitting is computationally intensive. Therefore, efforts have been made to develop efficient and accurate algorithms for NODDI microstructure estimation, which is still an open problem. In this work, we propose a deep network based approach that performs end-to-end estimation of NODDI microstructure, which is named Microstructure Estimation using a Deep Network (MEDN). MEDN comprises two cascaded stages and is motivated by the AMICO algorithm, where the NODDI microstructure estimation is formulated in a dictionary-based framework. The first stage computes the coefficients of the dictionary. It resembles the solution to a sparse reconstruction problem, where the iterative process in conventional estimation approaches is unfolded and truncated, and the weights are learned instead of predetermined by the dictionary. In the second stage, microstructure properties are computed from the output of the first stage, which resembles the weighted sum of normalized dictionary coefficients in AMICO, and the weights are also learned. Because spatial consistency of diffusion signals can be used to reduce the effect of noise, we also propose MEDN+, which is an extended version of MEDN. MEDN+ allows incorporation of neighborhood information by inserting a stage with learned weights before the MEDN structure, where the diffusion signals in the neighborhood of a voxel are processed. The weights in MEDN or MEDN+ are jointly learned from training samples that are acquired with diffusion gradients densely sampling the q-space. We performed MEDN and MEDN+ on brain dMRI scans, where two shells each with 30 gradient directions were used, and measured their accuracy with respect to the gold standard. Results demonstrate that the proposed networks outperform the competing methods.
Comprehensive derivation of bond-valence parameters for ion pairs involving oxygen
Gagné, Olivier Charles; Hawthorne, Frank Christopher
2015-01-01
Published two-body bond-valence parameters for cation–oxygen bonds have been evaluated via the root mean-square deviation (RMSD) from the valence-sum rule for 128 cations, using 180 194 filtered bond lengths from 31 489 coordination polyhedra. Values of the RMSD range from 0.033–2.451 v.u. (1.1–40.9% per unit of charge) with a weighted mean of 0.174 v.u. (7.34% per unit of charge). The set of best published parameters has been determined for 128 ions and used as a benchmark for the determination of new bond-valence parameters in this paper. Two common methods for the derivation of bond-valence parameters have been evaluated: (1) fixing B and solving for R o; (2) the graphical method. On a subset of 90 ions observed in more than one coordination, fixing B at 0.37 Å leads to a mean weighted-RMSD of 0.139 v.u. (6.7% per unit of charge), while graphical derivation gives 0.161 v.u. (8.0% per unit of charge). The advantages and disadvantages of these (and other) methods of derivation have been considered, leading to the conclusion that current methods of derivation of bond-valence parameters are not satisfactory. A new method of derivation is introduced, the GRG (generalized reduced gradient) method, which leads to a mean weighted-RMSD of 0.128 v.u. (6.1% per unit of charge) over the same sample of 90 multiple-coordination ions. The evaluation of 19 two-parameter equations and 7 three-parameter equations to model the bond-valence–bond-length relation indicates that: (1) many equations can adequately describe the relation; (2) a plateau has been reached in the fit for two-parameter equations; (3) the equation of Brown & Altermatt (1985) is sufficiently good that use of any of the other equations tested is not warranted. Improved bond-valence parameters have been derived for 135 ions for the equation of Brown & Altermatt (1985) in terms of both the cation and anion bond-valence sums using the GRG method and our complete data set. PMID:26428406
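A minimal sketch of the valence-sum calculation using the Brown & Altermatt (1985) expression; the bond lengths and the R0 value below are invented stand-ins, not tabulated parameters:

```python
import numpy as np

def bond_valence(R, R0, B=0.37):
    """Brown & Altermatt (1985) expression: s = exp((R0 - R) / B)."""
    return np.exp((R0 - R) / B)

# Hypothetical octahedral site: six cation-oxygen bond lengths in Å,
# with an illustrative R0 for the ion pair.
bond_lengths = np.array([2.05, 2.06, 2.08, 2.10, 2.12, 2.15])
R0 = 1.967  # assumed bond-valence parameter for this cation-O pair

s = bond_valence(bond_lengths, R0)
valence_sum = s.sum()
print(f"bond-valence sum = {valence_sum:.3f} v.u.")
# The deviation from the formal charge (e.g., +3) is what enters the RMSD
# used to evaluate parameter quality across many coordination polyhedra.
print(f"deviation from +3: {valence_sum - 3:.3f} v.u.")
```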
Prioritizing material recovery for end-of-life printed circuit boards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Xue, E-mail: xxw6590@rit.edu; Gaustad, Gabrielle, E-mail: gabrielle.gaustad@rit.edu
2012-10-15
Highlights: • Material recovery driven by composition, choice of ranking, and weighting. • Economic potential for new recycling technologies quantified for several metrics. • Indicators developed for materials incurring high eco-toxicity costs. • Methodology useful for a variety of stakeholders, particularly policy-makers. - Abstract: The increasing growth in generation of electronic waste (e-waste) motivates a variety of waste reduction research. Printed circuit boards (PCBs) are an important sub-set of the overall e-waste stream due to the high value of the materials contained within them and their potential toxicity. This work explores several environmental and economic metrics for prioritizing the recovery of materials from end-of-life PCBs. A weighted sum model is used to investigate the trade-offs among economic value, energy saving potentials, and eco-toxicity. Results show that, given equal weights for these three sustainability criteria, gold has the highest recovery priority, followed by copper, palladium, aluminum, tin, lead, platinum, nickel, zinc, and silver. However, recovery priority will change significantly due to variation in the composition of PCBs, choice of ranking metrics, and weighting factors when scoring multiple metrics. These results can be used by waste management decision-makers to quantify the value and environmental savings potential for recycling technology development and infrastructure. They can also be extended by policy-makers to inform possible penalties for land-filling PCBs or exporting to the informal recycling sector. The importance of weighting factors when examining recovery trade-offs, particularly for policies regarding PCB collection and recycling, is explored further.
Energy Intake and Energy Expenditure for Determining Excess Weight Gain in Pregnant Women
Gilmore, L. Anne; Butte, Nancy F.; Ravussin, Eric; Han, Hongmei; Burton, Jeffrey H.; Redman, Leanne M.
2016-01-01
Objective To conduct a secondary analysis designed to test whether gestational weight gain is due to increased energy intake or adaptive changes in energy expenditures. Methods In this secondary analysis, energy intake and energy expenditure of 45 pregnant women (BMI 18.5–24.9 kg/m2, n=33 and BMI ≥ 25, n=12) were measured preconceptionally and at 22 and 36 weeks of gestation. Energy intake was calculated as the sum of total energy expenditure measured by doubly labeled water and energy deposition determined by the 4-compartment body composition model. Weight, body composition, and metabolic chamber measurements were completed preconceptionally and at 9, 22, and 36 weeks of gestation. Basal metabolic rate was measured by indirect calorimetry in a room calorimeter and activity energy expenditure by doubly labeled water. Results Energy intake from 22 to 36 weeks of gestation was significantly higher in high gainers (n=19) (3437 ± 99 kcal/d) versus low + ideal gainers (n=26) (2687 ± 110 kcal/d; p < .001) within both BMI categories. Basal metabolic rate increased in proportion to gestational weight gain; however, basal metabolic rate adjusted for body composition changes with gestational weight gain was not significantly different between high gainers and low + ideal gainers (151 ± 33 vs. 129 ± 36 kcal/d; p=.66). Activity energy expenditure decreased throughout pregnancy in both groups (low + ideal gainers: −150 ± 70 kcal/d; p=.04 and high gainers: −230 ± 92 kcal/d; p=.01), but there was no difference between high gainers and low + ideal gainers (p=.49). Conclusion Interventions designed to increase adherence to the IOM guidelines for weight gain in pregnancy may have increased efficacy if focused on limiting energy intake while increasing nutrient density and maintaining levels of physical activity. PMID:27054928
Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei
2016-02-01
This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. According to the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, for facilitating the implementation of the ADP method. Convergence properties of the ADP method are developed to show that the system state will converge to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks are also convergent to finite neighborhoods of their optimal ones. Finally, the simulation results will show the effectiveness of the developed data-driven ADP methods.
A Novel Noncircular MUSIC Algorithm Based on the Concept of the Difference and Sum Coarray.
Chen, Zhenhong; Ding, Yingtao; Ren, Shiwei; Chen, Zhiming
2018-01-25
In this paper, we propose a vectorized noncircular MUSIC (VNCM) algorithm based on the concept of the coarray, which can construct the difference and sum (diff-sum) coarray, for direction finding of the noncircular (NC) quasi-stationary sources. Utilizing both the NC property and the concept of the Khatri-Rao product, the proposed method can be applied to not only the ULA but also sparse arrays. In addition, we utilize the quasi-stationary characteristic instead of the spatial smoothing method to solve the coherent issue generated by the Khatri-Rao product operation so that the available degree of freedom (DOF) of the constructed virtual array will not be reduced by half. Compared with the traditional NC virtual array obtained in the NC MUSIC method, the diff-sum coarray achieves a higher number of DOFs as it comprises both the difference set and the sum set. Due to the complementarity between the difference set and the sum set for the coprime array, we choose the coprime array with multiperiod subarrays (CAMpS) as the array model and summarize the properties of the corresponding diff-sum coarray. Furthermore, we develop a diff-sum coprime array with multiperiod subarrays (DsCAMpS) whose diff-sum coarray has a higher DOF. Simulation results validate the effectiveness of the proposed method and the high DOF of the diff-sum coarray.
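The diff-sum coarray itself is simple to enumerate. A sketch with an illustrative coprime geometry follows; the values of M and N and the subarray layout below are assumptions, not the paper's exact CAMpS construction:

```python
import numpy as np

def diff_sum_coarray(positions):
    """Difference and sum (diff-sum) coarray of a sensor array, given its
    integer element positions (in units of half-wavelength)."""
    p = np.asarray(positions)
    diff = {int(a - b) for a in p for b in p}
    summ = {int(a + b) for a in p for b in p}
    # The difference set is symmetric; symmetrize the sum set as well.
    return sorted(diff | summ | {-s for s in summ})

# Illustrative coprime-style layout with M=3, N=5.
M, N = 3, 5
subarray1 = [M * n for n in range(N)]        # 0, 3, 6, 9, 12
subarray2 = [N * m for m in range(2 * M)]    # 0, 5, 10, 15, 20, 25
positions = sorted(set(subarray1) | set(subarray2))

coarray = diff_sum_coarray(positions)
print("physical sensors     :", len(positions))
print("distinct coarray lags:", len(coarray))
```

The count of distinct virtual lags is what drives the available DOF, and the complementarity of the difference and sum sets is why the combined coarray outperforms either set alone.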
Predicting Production Costs for Advanced Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.; Weston, R. P.
2002-01-01
For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and bear little resemblance to the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This paper outlines the development of a process-based cost model in which the physical elements of the vehicle are scored according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high-payoff MDO problems. Another important consideration in this paper is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool.
NASA Astrophysics Data System (ADS)
Gholizadeh, H.; Robeson, S. M.
2015-12-01
Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band-ratio ocean color (OC) algorithms take the form of fourth-order polynomials whose coefficients are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions, despite differing properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is likely not valid.
Simulations of lattice animals and trees
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Nadler, Walter; Grassberger, Peter
2005-01-01
The scaling behaviour of randomly branched polymers in a good solvent is studied in two to nine dimensions, using as microscopic models lattice animals and lattice trees on simple hypercubic lattices. As a stochastic sampling method we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. Essentially we start simulating percolation clusters (either site or bond), re-weigh them according to the animal (tree) ensemble, and prune or branch the further growth according to a heuristic fitness function. In contrast to previous applications of PERM, this fitness function is not the weight with which the actual configuration would contribute to the partition sum, but is closely related to it. We obtain high statistics of animals with up to several thousand sites in all dimensions 2 ≤ d ≤ 9. In addition to the partition sum (number of different animals) we estimate gyration radii and numbers of perimeter sites. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4 and ≥ 8. In addition, we present the hitherto most precise estimates for growth constants in d ≥ 3. For clusters with one site attached to an attractive surface, we verify for d ≥ 3 the superuniversality of the cross-over exponent φ at the adsorption transition predicted by Janssen and Lyssy, but not for d = 2. There, we find φ = 0.480(4) instead of the conjectured φ = 1/2. Finally, we discuss the collapse of animals and trees, arguing that our present version of the algorithm is also efficient for some of the models studied in this context, but showing that it is not very efficient for the 'classical' model for collapsing animals.
An exact sum-rule for the Hubbard model: an historical/pedagogical approach
NASA Astrophysics Data System (ADS)
Di Matteo, S.; Claveau, Y.
2017-07-01
The aim of the present article is to derive an exact integral equation for the Green function of the Hubbard model through an equation-of-motion procedure, as in the original Hubbard papers. Though our exact integral equation does not allow one to solve the Hubbard model, it represents a strong constraint on its approximate solutions. An analogous sum rule has already been obtained in the literature through the use of a spectral-moment technique. We think, however, that our equation-of-motion procedure can be more easily related to the historical procedure of the original Hubbard papers. We also discuss examples of possible applications of the sum rule, and propose and analyse a solution fulfilling it that can be used for a pedagogical introduction to the Mott-Hubbard metal-insulator transition.
Zhao, Longshan; Wu, Faqi
2015-01-01
In this study, a simple travel time-based runoff model was proposed to simulate a runoff hydrograph on soil surfaces with different microtopographies. Three main parameters, i.e., rainfall intensity (I), mean flow velocity (v_m) and ponding time of depression (t_p), were input into this model. The soil surface was divided into numerous grid cells, and the flow length of each grid cell (l_i) was then calculated from a digital elevation model (DEM). The flow velocity in each grid cell (v_i) was derived from the upstream flow accumulation area using v_m. The total flow travel time from each grid cell to the surface outlet was the sum of the flow travel times along the flow path (i.e., the sum of l_i/v_i) and t_p. The runoff rate at the slope outlet for each respective travel time was estimated by summing the rain rate from all contributing cells for all time intervals. The results show positive agreement between the measured and predicted runoff hydrographs. PMID:26103635
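A toy sketch of this travel-time bookkeeping; all grid values are invented, whereas a real application would derive l_i and v_i from a DEM and flow accumulation:

```python
import numpy as np

def runoff_hydrograph(travel_times, rain_rate, cell_area, t_end, dt=1.0):
    """Travel-time runoff sketch: each cell contributes its rainfall to the
    outlet once its total travel time (path time sum(l_i/v_i) plus ponding
    time t_p) has elapsed, so outlet discharge is a sum over contributing
    cells at each time step."""
    times = np.arange(0.0, t_end, dt)
    q = np.zeros_like(times)
    for tt in travel_times:
        q[times >= tt] += rain_rate * cell_area  # cell contributes for t >= tt
    return times, q

rng = np.random.default_rng(3)
# Hypothetical grid of 500 cells: path times plus a common ponding time t_p.
path_times = rng.uniform(10, 120, size=500)      # seconds along each flow path
t_p = 15.0                                       # ponding time of depressions
times, q = runoff_hydrograph(path_times + t_p, rain_rate=1e-5,
                             cell_area=0.01, t_end=200.0)
print("steady-state runoff rate:", q[-1])
```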
Renormalisation group corrections to neutrino mixing sum rules
NASA Astrophysics Data System (ADS)
Gehrlein, J.; Petcov, S. T.; Spinrath, M.; Titov, A. V.
2016-11-01
Neutrino mixing sum rules are common to a large class of models based on the (discrete) symmetry approach to lepton flavour. In this approach the neutrino mixing matrix U is assumed to have an underlying approximate symmetry form Ũν, which is dictated by, or associated with, the employed (discrete) symmetry. In such a setup the cosine of the Dirac CP-violating phase δ can be related to the three neutrino mixing angles in terms of a sum rule which depends on the symmetry form of Ũν. We consider five extensively discussed possible symmetry forms of Ũν: i) bimaximal (BM) and ii) tri-bimaximal (TBM) forms, the forms corresponding to iii) golden ratio type A (GRA) mixing, iv) golden ratio type B (GRB) mixing, and v) hexagonal (HG) mixing. For each of these forms we investigate the renormalisation group corrections to the sum rule predictions for δ in the cases of neutrino Majorana mass term generated by the Weinberg (dimension 5) operator added to i) the Standard Model, and ii) the minimal SUSY extension of the Standard Model.
Deng, Xinyang; Jiang, Wen; Zhang, Jiandong
2017-01-01
The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, sometimes the payoffs received by players may be inexact or uncertain, which requires that the model of matrix games has the ability to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, for the possible computation-intensive issue in the proposed decomposition method, as an alternative solution, a Monte Carlo simulation approach is presented, as well. Finally, the proposed zero-sum matrix games with payoffs of Dempster–Shafer belief structures is illustratively applied to the sensor selection and intrusion detection of sensor networks, which shows its effectiveness and application process. PMID:28430156
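For the crisp subgames produced by such a decomposition, the game value follows from the classical LP formulation; a minimal sketch with an invented payoff matrix:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Value and maximin strategy of a zero-sum matrix game via LP:
    maximize v subject to x^T A >= v for every column, sum(x) = 1, x >= 0."""
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0                   # variables: x_1..x_m, v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])           # -A^T x + v <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Crisp payoff matrix; in the paper's setting each entry would be a belief
# structure, decomposed into a family of crisp games like this one.
A = np.array([[3.0, -1.0], [-2.0, 4.0]])
x, v = solve_zero_sum(A)
print("maximin strategy:", np.round(x, 3), "game value:", round(v, 3))
```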
The Testing of Airplane Fabrics
NASA Technical Reports Server (NTRS)
Schraivogel, Karl
1932-01-01
This report considers the determining factors in the choice of airplane fabrics, describes the customary methods of testing and reports some of the experimental results. To sum up briefly the results obtained with the different fabrics, it may be said that increasing the strength of covering fabrics by using coarser yarns ordinarily offers no difficulty, because the weight increment from doping is relatively smaller.
2015-12-01
issues. A weighted mean can be used in place of the grand mean, and the STATA software automatically handles the assignment of the sums of squares. Thus...between groups (i.e., sphericity) using the multivariate test of means provided in STATA 12.1. This test checks whether or not population variances and
Influence of absorption by environmental water vapor on radiation transfer in wildland fires
D. Frankman; B. W. Webb; B. W. Butler
2008-01-01
The attenuation of radiation transfer from wildland flames to fuel by environmental water vapor is investigated. Emission is tracked from points on an idealized flame to locations along the fuel bed while accounting for absorption by environmental water vapor in the intervening medium. The Spectral Line Weighted-sum-of-gray-gases approach was employed for treating the...
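The underlying weighted-sum-of-gray-gases representation models total emissivity as a weighted sum of gray-gas contributions; in its generic form (the spectral-line variant derives the weights and coefficients from line-by-line data)

\[
\varepsilon(T, pL) \;=\; \sum_{i=1}^{N} a_{i}(T)\left[ 1 - e^{-\kappa_{i}\, p L} \right], \qquad \sum_{i} a_{i}(T) \le 1,
\]

where the a_i(T) are temperature-dependent weights and the κ_i are gray-gas absorption coefficients acting over the pressure path length pL.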
ERIC Educational Resources Information Center
Soh, Kaycheng
2015-01-01
In the various world university ranking schemes, the "Overall" is a sum of the weighted indicator scores. As the indicators are of a different nature from each other, "Overall" conceals important differences. Factor analysis of the data from three prominent ranking schemes reveals that there are two factors in each of the…
Looking for Patterns in OfStEd Judgements about Primary Pupil Achievement in Design and Technology
ERIC Educational Resources Information Center
Cross, Alan
2006-01-01
Respective English governments have placed considerable faith, political weight and not inconsiderable sums of money in a system of school inspection organised and led by the Office for Standards in Education (OfStEd). This article considers a so-called foundation subject, design and technology, and the extent to which we might meaningfully use…
Low-energy isovector and isoscalar dipole response in neutron-rich nuclei
NASA Astrophysics Data System (ADS)
Vretenar, D.; Niu, Y. F.; Paar, N.; Meng, J.
2012-04-01
The self-consistent random-phase approximation, based on the framework of relativistic energy density functionals, is employed in the study of isovector and isoscalar dipole response in 68Ni, 132Sn, and 208Pb. The evolution of pygmy dipole states (PDSs) in the region of low excitation energies is analyzed as a function of the density dependence of the symmetry energy for a set of relativistic effective interactions. The occurrence of PDSs is predicted in the response to both the isovector and the isoscalar dipole operators, and its strength is enhanced with the increase in the symmetry energy at saturation and the slope of the symmetry energy. In both channels, the PDS exhausts a relatively small fraction of the energy-weighted sum rule but a much larger percentage of the inverse energy-weighted sum rule. For the isovector dipole operator, the reduced transition probability B(E1) of the PDSs is generally small because of pronounced cancellation of neutron and proton partial contributions. The isoscalar-reduced transition amplitude is predominantly determined by neutron particle-hole configurations, most of which add coherently, and this results in a collective response of the PDSs to the isoscalar dipole operator.
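The two integral measures compared here are the k = 1 and k = -1 moments of the strength distribution,

\[
m_{k} \;=\; \sum_{n} E_{n}^{\,k}\, B(E1;\, 0 \to n),
\]

with m₁ the energy-weighted and m₋₁ the inverse energy-weighted sum rule; because low-lying states are weighted by 1/Eₙ in m₋₁, a PDS carrying only a small share of m₁ can still account for a large share of m₋₁.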
Dynamic Precursors of Flares in Active Region NOAA 10486
NASA Astrophysics Data System (ADS)
Korsós, M. B.; Gyenge, N.; Baranyi, T.; Ludmány, A.
2015-03-01
Four different methods are applied here to study the precursors of flare activity in the Active Region NOAA 10486. Two approaches track the temporal behaviour of suitably chosen features (one, the weighted horizontal gradient W_GM, is the generalized form of the horizontal gradient of the magnetic field, G_M; the other is the sum of the horizontal gradient of the magnetic field, G_S, for all sunspot pairs). W_GM is a photospheric indicator, that is, a proxy measure of magnetic non-potentiality of a specific area of the active region, i.e., it captures the temporal variation of the weighted horizontal gradient of magnetic flux summed up for the region where opposite magnetic polarities are highly mixed. The third, referred to as the separateness parameter, S_l-f, considers the overall morphology. Further, G_S and S_l-f are photospheric, newly defined quick-look indicators of the polarity mix of the entire active region. The fourth method tracks the temporal variation of small X-ray flares, their times of succession and their energies, observed by the Reuven Ramaty High Energy Solar Spectroscopic Imager instrument. All approaches yield specific precursory signatures for the imminence of flares.
Super (a,d)-H-antimagic covering of Möbius ladder graph
NASA Astrophysics Data System (ADS)
Indriyani, Novia; Sri Martini, Titin
2018-04-01
Let $G = (V(G), E(G))$ be a simple graph. An $H$-covering of $G$ is a family of subgraphs $H_1, H_2, \ldots, H_j$ such that every edge of $G$ is contained in at least one $H_i$, $1 \le i \le j$. If every $H_i$ is isomorphic to a given graph $H$, then $G$ admits an $H$-covering. Furthermore, $G$ admits an $(a,d)$-$H$-antimagic covering if there exists a bijective function $\xi : V(G) \cup E(G) \to \{1, 2, 3, \ldots, |V(G)| + |E(G)|\}$ such that the $H$-weights $\omega(H') = \sum_{v \in V(H')} \xi(v) + \sum_{e \in E(H')} \xi(e)$, taken over all subgraphs $H'$ isomorphic to $H$, constitute an arithmetic progression $\{a, a+d, \ldots, a+(t-1)d\}$, where $a$ and $d$ are positive integers and $t$ is the number of subgraphs of $G$ isomorphic to $H$. If $\xi(V(G)) = \{1, 2, \ldots, |V(G)|\}$, then $\xi$ is called a super $(a,d)$-$H$-antimagic covering. This research provides a super $(a,d)$-$H$-antimagic covering with $d \in \{1, 3\}$ of the Möbius ladder graph $M_n$ for $n > 5$ and $n$ odd.
Fang, Fang; Ni, Bing-Jie; Yu, Han-Qing
2009-06-01
In this study, weighted non-linear least-squares analysis and an accelerating genetic algorithm are integrated to estimate the kinetic parameters of substrate consumption and storage product formation of activated sludge. A storage product formation equation is developed and used to construct the objective function for the determination of its production kinetics. The weighted least-squares analysis is employed to calculate the differences in the storage product concentration between the model predictions and the experimental data as the sum of squared weighted errors. The kinetic parameters for the substrate consumption and the storage product formation are estimated to be a maximum heterotrophic growth rate of 0.121/h, a yield coefficient of 0.44 mg COD_X/mg COD_S (COD, chemical oxygen demand) and a substrate half-saturation constant of 16.9 mg/L, respectively, by minimizing the objective function using a real-coding-based accelerating genetic algorithm. Also, the fraction of substrate electrons diverted to storage product formation is estimated to be 0.43 mg COD_STO/mg COD_S. The validity of our approach is confirmed by the results of independent tests and the kinetic parameter values reported in the literature, suggesting that this approach could be useful for evaluating the product formation kinetics of mixed cultures like activated sludge. More importantly, as this integrated approach can estimate the kinetic parameters rapidly and accurately, it could be applied to other biological processes.
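A toy sketch of a weighted-least-squares objective minimized by an evolutionary optimizer; the kinetic model, data, and weights below are invented stand-ins, and scipy's differential evolution replaces the paper's accelerating genetic algorithm:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical storage-product data (time in h, concentration in mg COD/L).
t_obs = np.array([0.5, 1, 2, 4, 6, 8])
y_obs = np.array([12.0, 21.0, 33.0, 45.0, 50.0, 52.0])
weights = 1.0 / np.maximum(y_obs, 1.0)     # weight small concentrations more

def model(t, p):
    """Illustrative saturating formation curve y = a*(1 - exp(-k*t));
    a stand-in for the paper's storage product formation equation."""
    a, k = p
    return a * (1 - np.exp(-k * t))

def objective(p):
    """Sum of squared weighted errors, as in weighted least squares."""
    r = y_obs - model(t_obs, p)
    return np.sum((weights * r) ** 2)

# Real-coded evolutionary search over bounded parameter ranges.
res = differential_evolution(objective, bounds=[(1, 100), (0.01, 5)], seed=0)
print("estimated a, k:", np.round(res.x, 3))
```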
Suárez, Inmaculada; Coto, Baudilio
2015-08-14
Average molecular weights and polydispersity indexes are among the most important parameters considered in polymer characterization. Usually, gel permeation chromatography (GPC) and multi-angle light scattering (MALS) are used for this determination, but GPC values are overestimated due to the dispersion introduced by the column separation. Several procedures have been proposed to correct this effect, usually involving more complex calibration processes. In this work, a new method of calculation has been considered that includes diffusion effects. An equation for the concentration profile due to diffusion effects along the GPC column was considered to be a Fickian function, and polystyrene narrow standards were used to determine effective diffusion coefficients. The molecular weight distribution function of mono- and polydisperse polymers was interpreted as a sum of several Fickian functions representing a sample formed by only a few kinds of polymer chains with specific molecular weights and diffusion coefficients. The proposed model accurately fits the concentration profile along the whole elution time range, as checked by the computed standard deviation. Molecular weights obtained by this new method are similar to those obtained by MALS or traditional GPC, while polydispersity index values are intermediate between those obtained by traditional GPC combined with the Universal Calibration method and those from the MALS method. Values of the Pearson and Lin coefficients show improvement in the correlation of polydispersity index values determined by GPC and MALS methods when diffusion coefficients and the new method are used.
NASA Astrophysics Data System (ADS)
Richings, Gareth W.; Habershon, Scott
2018-04-01
We present significant algorithmic improvements to a recently proposed direct quantum dynamics method, based upon combining well established grid-based quantum dynamics approaches and expansions of the potential energy operator in terms of a weighted sum of Gaussian functions. Specifically, using a sum of low-dimensional Gaussian functions to represent the potential energy surface (PES), combined with a secondary fitting of the PES using singular value decomposition, we show how standard grid-based quantum dynamics methods can be dramatically accelerated without loss of accuracy. This is demonstrated by on-the-fly simulations (using both standard grid-based methods and multi-configuration time-dependent Hartree) of both proton transfer on the electronic ground state of salicylaldimine and the non-adiabatic dynamics of pyrazine.
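A minimal sketch of the core fitting idea, representing a potential as a weighted sum of Gaussians via linear least squares; the grid, widths, and double-well form are invented, and the paper additionally uses an SVD-based secondary fit and on-the-fly updating that this sketch omits:

```python
import numpy as np

def gaussian_fit_pes(x_train, v_train, centers, width):
    """Fit a 1-D potential as a weighted sum of Gaussians,
    V(x) ~ sum_k c_k exp(-(x - x_k)^2 / (2 sigma^2)), by least squares."""
    Phi = np.exp(-(x_train[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    coeffs, *_ = np.linalg.lstsq(Phi, v_train, rcond=None)
    return coeffs

def evaluate(x, centers, width, coeffs):
    """Evaluate the fitted Gaussian expansion at points x."""
    Phi = np.exp(-(np.atleast_1d(x)[:, None] - centers[None, :]) ** 2
                 / (2 * width ** 2))
    return Phi @ coeffs

# Illustrative double-well potential sampled on a coarse grid.
x = np.linspace(-2, 2, 40)
v = (x ** 2 - 1) ** 2
centers = np.linspace(-2, 2, 15)
c = gaussian_fit_pes(x, v, centers, width=0.35)

x_test = np.linspace(-2, 2, 200)
err = np.max(np.abs(evaluate(x_test, centers, 0.35, c) - (x_test ** 2 - 1) ** 2))
print(f"max fit error: {err:.4f}")
```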
Examination of the first excited state of 4He as a potential breathing mode
NASA Astrophysics Data System (ADS)
Bacca, Sonia; Barnea, Nir; Leidemann, Winfried; Orlandini, Giuseppina
2015-02-01
The isoscalar monopole excitation of 4He is studied within a few-body ab initio approach. We consider the transition density to the low-lying and narrow 0+ resonance, as well as various sum rules and the strength energy distribution itself at different momentum transfers q. Realistic nuclear forces of chiral and phenomenological nature are employed. Various indications for a collective breathing mode are found: (i) the specific shape of the transition density, (ii) the high degree of exhaustion of the non-energy-weighted sum rule at low q, and (iii) the complete dominance of the resonance peak in the excitation spectrum. For the incompressibility K of the α particle, two different definitions give two rather small values (22 and 36 MeV).
Sequence-based model of gap gene regulatory network.
Kozlov, Konstantin; Gursky, Vitaly; Kulakovskiy, Ivan; Samsonova, Maria
2014-01-01
The detailed analysis of transcriptional regulation is crucially important for understanding biological processes. The gap gene network in Drosophila attracts large interest among researchers studying mechanisms of transcriptional regulation. It implements the most upstream regulatory layer of the segmentation gene network. The knowledge of molecular mechanisms involved in gap gene regulation is far less complete than that of the genetics of the system. Mathematical modeling goes beyond insights gained by genetic and molecular approaches. It allows us to reconstruct wild-type gene expression patterns in silico, infer the underlying regulatory mechanism and prove its sufficiency. We developed a new model that provides a dynamical description of gap gene regulatory systems, using detailed DNA-based information, as well as spatial transcription factor concentration data at varying time points. We showed that this model correctly reproduces gap gene expression patterns in wild-type embryos and is able to predict gap expression patterns in Kr mutants and four reporter constructs. We used a four-fold cross-validation test and fitting to a random dataset to validate the model and prove its sufficiency in data description. The identifiability analysis showed that most model parameters are well identifiable. We reconstructed the gap gene network topology and studied the impact of individual transcription factor binding sites on the model output. We measured this impact by calculating the site regulatory weight as a normalized difference between the residual sum of squares error for the set of all annotated sites and for the set with the site of interest excluded. The reconstructed topology of the gap gene network is in agreement with previous modeling results and data from the literature. We showed that 1) the regulatory weights of transcription factor binding sites show very weak correlation with their PWM score; 2) sites with low regulatory weight are important for the model output; 3) functionally important sites are not exclusively located in cis-regulatory elements, but are rather dispersed throughout the regulatory region. Importantly, some of the sites with high functional impact in hb, Kr and kni regulatory regions coincide with strong sites annotated and verified in DNase I footprint assays.
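The site-impact measure described is easy to state in code; a tiny sketch with invented RSS values:

```python
def site_regulatory_weight(rss_all_sites, rss_without_site):
    """Regulatory weight of a binding site: the normalized difference between
    the residual sum of squares with all annotated sites and with the site
    of interest excluded from the model."""
    return (rss_without_site - rss_all_sites) / rss_all_sites

# Hypothetical RSS values from refitting the model with one site excluded.
rss_full = 10.0
for site, rss in [("site_A", 10.2), ("site_B", 14.5), ("site_C", 10.05)]:
    print(site, round(site_regulatory_weight(rss_full, rss), 3))
```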
Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms
NASA Astrophysics Data System (ADS)
Ablinger, Jakob; Blümlein, Johannes; Raab, Clemens; Schneider, Carsten; Wißbrock, Fabian
2014-08-01
We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ = 2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In the case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse-binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of ∼30 square-root-valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N ∈ ℂ. Integrals with a power-like divergence in N-space, ∝ a^N with a ∈ ℝ, a > 1, for large values of N emerge. They still possess a representation in x-space, which in the present case is given in terms of root-valued iterated integrals. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.
Connectotyping: Model Based Fingerprinting of the Functional Connectome
Miranda-Dominguez, Oscar; Mills, Brian D.; Carpenter, Samuel D.; Grant, Kathleen A.; Kroenke, Christopher D.; Nigg, Joel T.; Fair, Damien A.
2014-01-01
A better characterization of how an individual’s brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called “connectotype”, or functional fingerprint, in individual participants. The approach rests on a simple linear model proposing that the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject-specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model’s ability to discriminate an individual is driven by unique connections in higher-order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach. PMID:25386919
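A minimal sketch of this kind of linear model, assuming only that each region's timeseries is regressed on all other regions via ordinary least squares (the published method may differ in regularization and preprocessing details):

```python
import numpy as np

def connectotype(ts):
    """Fit the linear model: each region's timeseries is modeled as the
    weighted sum of all other regions' timeseries.

    ts : array of shape (n_frames, n_regions)
    returns : (n_regions, n_regions) coefficient matrix with zero diagonal
    """
    n_frames, n_regions = ts.shape
    beta = np.zeros((n_regions, n_regions))
    for i in range(n_regions):
        others = np.delete(np.arange(n_regions), i)
        coef, *_ = np.linalg.lstsq(ts[:, others], ts[:, i], rcond=None)
        beta[i, others] = coef
    return beta

rng = np.random.default_rng(0)
ts = rng.standard_normal((300, 20))     # toy data: 300 frames, 20 regions
beta = connectotype(ts)
predicted = ts @ beta.T                 # model-based prediction of each region
```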
Jeremy S. Fried; Theresa B. Jain; Sara Loreno; Robert F. Keefe; Conor K. Bell
2017-01-01
The BioSum modeling framework summarizes current and prospective future forest conditions under alternative management regimes along with their costs, revenues and product yields. BioSum translates Forest Inventory and Analysis (FIA) data for input to the Forest Vegetation Simulator (FVS), summarizes FVS outputs for input to the treatment operations cost model (OpCost...
Antioch, K M; Walsh, M K
2002-01-01
Under Australian casemix funding arrangements that use Diagnosis-Related Groups (DRGs) the average price is policy based, not benchmarked. Cost weights are too low for State-wide chronic disease services. Risk-adjusted Capitation Funding Models (RACFM) are feasible alternatives. A RACFM was developed for public patients with cystic fibrosis treated by an Australian Health Maintenance Organization (AHMO). Adverse selection is of limited concern since patients pay solidarity contributions via the Medicare levy with no premium contributions to the AHMO. Sponsors paying premium subsidies are the State of Victoria and the Federal Government. Cost per patient is the dependent variable in the multiple regression. Data on DRG 173 (cystic fibrosis) patients were assessed for heteroskedasticity, multicollinearity, structural stability and functional form. Stepwise linear regression excluded non-significant variables. Significant variables were 'emergency' (1276.9), 'outlier' (6377.1), 'complexity' (3043.5), 'procedures' (317.4) and the constant (4492.7) (R² = 0.21, SE = 3598.3, F = 14.39, p < 0.0001). Regression coefficients represent the additional per-patient costs summed to the base payment (constant). The model explained 21% of the variance in cost per patient. The payment rate is adjusted by a best-practice annual admission rate per patient. The model is a blended RACFM for in-patient, out-patient, Hospital In The Home, and Fee-For-Service Federal payments for drugs and medical services; lump sum lung transplant payments; and risk sharing through cost (loss) outlier payments. State and Federally funded home and palliative services are 'carved out'. The model, which has national application via Coordinated Care Trials and by Australian States for RACFMs, may be instructive for Germany, which plans to use Australian DRGs for casemix funding. The capitation alternative for chronic disease can improve equity, allocative efficiency and distributional justice. The use of Diagnostic Cost Groups (DCGs) is a promising alternative classification system for capitation arrangements.
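Since the reported coefficients are simply summed onto the base payment, the fitted model can be illustrated directly; a sketch assuming binary coding for 'emergency', 'outlier' and 'complexity' (the actual variable coding is not stated in the abstract):

```python
def predicted_cost(emergency, outlier, complexity, n_procedures):
    """Per-patient cost as the base payment (constant) plus the summed
    regression coefficients reported in the abstract."""
    return (4492.7
            + 1276.9 * emergency      # 1 if emergency admission (assumed binary)
            + 6377.1 * outlier        # 1 if cost (loss) outlier (assumed binary)
            + 3043.5 * complexity     # 1 if complex case (assumed binary)
            + 317.4 * n_procedures)   # number of procedures (assumed count)

# Example: complex emergency admission with 3 procedures, not an outlier
print(predicted_cost(1, 0, 1, 3))  # 9765.3
```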
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaser, L.; Karl, T.; Guenther, A.
2013-01-01
We present the first eddy covariance flux measurements of volatile organic compounds (VOCs) using a proton-transfer-reaction time-of-flight mass spectrometer (PTR-TOF-MS) above a ponderosa pine forest in Colorado, USA. The high mass resolution of the PTR-TOF-MS enabled the identification of chemical sum formulas. During a 30-day measurement period in August and September 2010, 649 different ion mass peaks were detected in the ambient air mass spectrum (including primary ions and mass calibration compounds). Eddy covariance with the vertical wind speed was calculated for all ion mass peaks. On a typical day, 17 ion mass peaks, including protonated parent compounds, their fragments and isotopes as well as VOC-H+-water clusters, showed a significant flux with daytime average emissions above a reliable flux threshold of 0.1 mg compound m⁻² h⁻¹. These ion mass peaks could be assigned to seven compound classes. The main flux contributions during daytime (10:00-18:00 LT) are attributed to the sum of 2-methyl-3-buten-2-ol (MBO) and isoprene (50%), methanol (12%), the sum of acetic acid and glycolaldehyde (10%) and the sum of monoterpenes (10%). The total MBO+isoprene flux was composed of 10% isoprene and 90% MBO. There was good agreement between the light and temperature dependency of the sum of MBO and isoprene observed in this work and those of earlier studies. The above-canopy flux measurements of the sum of MBO and isoprene and the sum of 20 monoterpenes were compared to emissions calculated using the Model of Emissions of Gases and Aerosols from Nature (MEGAN 2.1). The best agreement between MEGAN 2.1 and measurements was reached using emission factors determined from site-specific leaf cuvette measurements. While the modelled and measured MBO+isoprene fluxes agree well, the emissions of the sum of monoterpenes are underestimated by MEGAN 2.1. This is expected as some factors impacting monoterpene emissions, such as physical damage to needles and branches due to storms, are not included in MEGAN 2.1.
A novel three-stage distance-based consensus ranking method
NASA Astrophysics Data System (ADS)
Aghayi, Nazila; Tavana, Madjid
2018-05-01
In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
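The first stage, computing a rank matrix from weighted-sum scores under different importance-weight vectors, can be sketched as follows (toy data; not the authors' implementation):

```python
import numpy as np

def rank_matrix(scores, weight_vectors):
    """First-stage rank matrix: for each importance-weight vector,
    rank the alternatives by their weighted-sum score (1 = best).

    scores : (n_alternatives, n_criteria) performance matrix
    weight_vectors : (n_weightings, n_criteria) importance weights
    """
    totals = weight_vectors @ scores.T              # weighted-sum scores
    order = np.argsort(-totals, axis=1)             # descending score order
    ranks = np.empty_like(order)
    rows = np.arange(order.shape[0])[:, None]
    ranks[rows, order] = np.arange(1, order.shape[1] + 1)
    return ranks                                    # (n_weightings, n_alternatives)

scores = np.array([[0.9, 0.4], [0.4, 0.8], [0.6, 0.5]])
weights = np.array([[0.5, 0.5], [0.8, 0.2]])
print(rank_matrix(scores, weights))   # [[1 2 3], [1 3 2]]
```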
Photo-Spectrometer Realized In A Standard Cmos Ic Process
Simpson, Michael L.; Ericson, M. Nance; Dress, William B.; Jellison, Gerald E.; Sitter, Jr., David N.; Wintenberg, Alan L.
1999-10-12
A spectrometer, comprises: a semiconductor having a silicon substrate, the substrate having integrally formed thereon a plurality of layers forming photo diodes, each of the photo diodes having an independent spectral response to an input spectra within a spectral range of the semiconductor and each of the photo diodes formed only from at least one of the plurality of layers of the semiconductor above the substrate; and, a signal processing circuit for modifying signals from the photo diodes with respective weights, the weighted signals being representative of a specific spectral response. The photo diodes have different junction depths and different polycrystalline silicon and oxide coverings. The signal processing circuit applies the respective weights and sums the weighted signals. In a corresponding method, a spectrometer is manufactured by manipulating only the standard masks, materials and fabrication steps of standard semiconductor processing, and integrating the spectrometer with a signal processing circuit.
An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value
NASA Astrophysics Data System (ADS)
Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu
2018-03-01
Blind image restoration algorithms usually produce ringing that is most evident at edges. The ringing phenomenon is mainly affected by noise, the type of restoration algorithm, and errors in blur kernel estimation during restoration. Based on the physical mechanism of ringing, a method for evaluating ringing in blindly restored images is proposed. The method extracts the overshoot and ripple regions of the ringing image and computes weighted statistics of the regional gradient values. With weights set through multiple experiments, edge information is used to characterize edge detail, determine the weights, and quantify the severity of the ringing effect, yielding an evaluation method for the ringing caused by blind restoration. The experimental results show that the method can effectively evaluate ringing in restored images across different restoration algorithms and restoration parameters. The evaluation results are consistent with visual evaluation results.
The proper weighting function for retrieving temperatures from satellite measured radiances
NASA Technical Reports Server (NTRS)
Arking, A.
1976-01-01
One class of methods for converting satellite-measured radiances into atmospheric temperature profiles involves a linearization of the radiative transfer equation: ΔR = Σᵢ Wᵢ ΔTᵢ, i = 1, ..., s, where ΔTᵢ is the deviation of the temperature in layer i from that of a reference atmosphere, ΔR is the difference in the radiance at satellite altitude from the corresponding radiance for the reference atmosphere, and Wᵢ is the discrete (or vector) form of the T-weighting (i.e., temperature-weighting) function W(P), where P is pressure. The top layer of the atmosphere corresponds to i = 1, the bottom layer to i = s − 1, and i = s refers to the surface. Linearization in temperature (or some function of temperature) is at the heart of all linear or matrix methods. The weighting function that should be used is developed.
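A small numeric illustration of this linearization, with hypothetical weighting-function values:

```python
import numpy as np

# Minimal numeric illustration of  ΔR = Σᵢ Wᵢ ΔTᵢ,  i = 1..s,
# with hypothetical values (layer 1 = top of atmosphere,
# layer s-1 = bottom layer, i = s the surface term).
W = np.array([0.05, 0.15, 0.30, 0.25, 0.25])   # T-weighting vector, sums to 1
dT = np.array([-1.0, 0.5, 2.0, 1.0, 0.0])      # deviations from reference (K)
dR = W @ dT                                     # radiance perturbation
print(dR)  # 0.875
```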
NASA Astrophysics Data System (ADS)
Karrasch, C.; Hauschild, J.; Langer, S.; Heidrich-Meisner, F.
2013-06-01
We revisit the problem of the spin Drude weight D of the integrable spin-1/2 XXZ chain using two complementary approaches, exact diagonalization (ED) and the time-dependent density-matrix renormalization group (tDMRG). We pursue two main goals. First, we present extensive results for the temperature dependence of D. By exploiting time translation invariance within tDMRG, one can extract D for significantly lower temperatures than in previous tDMRG studies. Second, we discuss the numerical quality of the tDMRG data and elaborate on details of the finite-size scaling of the ED results, comparing calculations carried out in the canonical and grand-canonical ensembles. Furthermore, we analyze the behavior of the Drude weight as the point with SU(2)-symmetric exchange is approached and discuss the relative contribution of the Drude weight to the sum rule as a function of temperature.
Relative trace-element concern indexes for eastern Kentucky coals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, S.L.
Coal trace elements that could affect environmental quality were studied in 372 samples (collected and analyzed by the Kentucky Geological Survey and the United States Geological Survey) from 36 coal beds in eastern Kentucky. Relative trace-element concern indexes are defined as the weighted sum of standardized (subtract the mean; divide by the standard deviation) concentrations. Index R is calculated from uranium and thorium, index 1 from elements of minor concern (antimony, barium, bromine, chlorine, cobalt, lithium, manganese, sodium, and strontium), index 2 from elements of moderate concern (chromium, copper, fluorine, nickel, vanadium, and zinc), and index 4 from elements of greatest concern (arsenic, boron, cadmium, lead, mercury, molybdenum, and selenium). The numerals indicate the weights, except that index R is weighted by 1, and index 124 is the unweighted sum of indexes 1, 2, and 4. Contour mapping of the indexes is valid because all indexes have non-nugget-effect variograms. Index 124 is low west of Lee and Bell counties, and in Pike County. Index 124 is high in the area bounded by Boyd, Menifee, Knott, and Martin counties and in Owsley, Clay, and Leslie counties. Coal from some areas of eastern Kentucky is less likely to cause environmental problems than that from other areas. Positive correlations of all indexes with the centered log ratios of ash, and negative correlations with the centered log ratios of carbon, hydrogen, nitrogen, oxygen, and sulfur, indicate that trace elements of concern are predominantly associated with ash. Beneficiation probably would reduce the indexes significantly.
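A minimal sketch of the index construction, a weighted sum of standardized concentrations, with hypothetical data (the actual sample values come from the surveys cited above):

```python
import numpy as np

def concern_index(conc, weights):
    """Weighted sum of standardized concentrations: standardize each
    element across samples (subtract mean, divide by standard deviation),
    then form the weighted sum per sample."""
    z = (conc - conc.mean(axis=0)) / conc.std(axis=0)
    return z @ weights

# Hypothetical data: 4 samples x 3 elements of moderate concern,
# each element weighted by 2 as in index 2 of the abstract
conc = np.array([[12., 30., 5.],
                 [20., 25., 9.],
                 [15., 40., 7.],
                 [18., 35., 6.]])
print(concern_index(conc, np.array([2.0, 2.0, 2.0])))
```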
Separating OR, SUM, and XOR Circuits.
Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H
2016-08-01
Given a boolean n × n matrix A we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: We show how to obtain matrices that admit OR-circuits of size O(n), but require SUM-circuits of size Ω(n^{3/2}/log² n). We consider the task of rewriting a given OR-circuit as a XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis.
Sum rules for the uniform-background model of an atomic-sharp metal corner
NASA Astrophysics Data System (ADS)
Streitenberger, P.
1994-04-01
Analytical results are derived for the electrostatic potential of an atomic-sharp 90° metal corner in the uniform-background model. The electrostatic potential at a free jellium edge and at the jellium corner, respectively, is determined exactly in terms of the energy per electron of the uniform electron gas integrated over the background density. The surface energy, the edge formation energy and the derivative of the corner formation energy with respect to the background density are given as integrals over the electrostatic potential. The present treatment provides a novel approach to such sum rules, including the Budd-Vannimenus sum rules for a free jellium surface, based on general properties of linear response functions.
Diagonalizing Tensor Covariants, Light-Cone Commutators, and Sum Rules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo, C. Y.
We derive fixed-mass sum rules for virtual Compton scattering in the forward direction. We use the methods of both Dicus, Jackiw, and Teplitz (for the absorptive parts) and Heimann, Hey, and Mandula (for the real parts). We find a set of tensor covariants such that the corresponding scalar amplitudes are proportional to simple t-channel parity-conserving helicity amplitudes. We give a relatively complete discussion of the convergence of the sum rules in a Regge model. (auth)
Neuromotor development in relation to birth weight in rabbits.
Harel, S; Shapira, Y; Hartzler, J; Teng, E L; Quiligan, E; Van Der Meulen, J P
1978-01-01
The development of neuromotor patterns in relation to birth weight was studied in the rabbit, a perinatal brain developer. In order to induce intrauterine growth retardation and to increase the number of low-birth-weight rabbits, experimental ischemia to half the fetuses in each doe was achieved by total ligation of approximately 30% of the spiral vessels to the placenta during the last trimester of gestation. Following natural delivery, the rabbit pups were periodically observed for the appearance of eye-opening and the righting reflex, and for the cessation of falling, circling and dragging of the hind limbs. An index of neuromotor development was assigned to each rabbit by summing the ages (in days) of appearance of each of the neuromotor milestones. An association was found between low birth weight and delayed neuromotor development at 2 weeks of age. The most significant correlation was found between low birth weight and delayed disappearance of falling. The latter may represent incoordination as an expression of cerebellar dysfunction.
Structure of the two-neutrino double-β decay matrix elements within perturbation theory
NASA Astrophysics Data System (ADS)
Štefánik, Dušan; Šimkovic, Fedor; Faessler, Amand
2015-06-01
The two-neutrino double-β Gamow-Teller and Fermi transitions are studied within an exactly solvable model, which allows a violation of both spin-isospin SU(4) and isospin SU(2) symmetries, and is expressed with generators of the SO(8) group. It is found that this model reproduces the main features of realistic calculations within the quasiparticle random-phase approximation with isospin symmetry restoration concerning the dependence of the two-neutrino double-β decay matrix elements on isovector and isoscalar particle-particle interactions. By using perturbation theory, an explicit dependence of the two-neutrino double-β decay matrix elements on the like-nucleon pairing, particle-particle T = 0 and T = 1, and particle-hole proton-neutron interactions is obtained. It is found that the double-β decay matrix elements do not depend on the mean-field part of the Hamiltonian and that they are governed by a weak violation of both SU(2) and SU(4) symmetries by the particle-particle interaction of the Hamiltonian. It is pointed out that there is a dominance of the two-neutrino double-β decay transition through a single state of the intermediate nucleus. The energy position of this state relative to the energies of the initial and final ground states is given by a combination of strengths of residual interactions. Further, energy-weighted Fermi and Gamow-Teller sum rules connecting ΔZ = 2 nuclei are discussed. It is proposed that these sum rules can be used to study the residual interactions of the nuclear Hamiltonian, which are relevant for charge-changing nuclear transitions.
Ball, Helen L; Santorelli, Gillian; West, Jane; Barber, Sally E; McEachan, Rosemary RC; Wright, John
2017-01-01
Study Objectives: To examine independent associations of sleep duration with total and abdominal adiposity, and the bidirectionality of these associations, in a young biethnic sample of children from a disadvantaged location. Methods: Child sleep duration (h/day) was parent-reported by questionnaire and indices of total (body weight, body mass index, percent body fat (%BF), sum of skinfolds) and abdominal adiposity (waist circumference) were measured using standard anthropometric procedures at approximately 12, 18, 24, and 36 months of age in 1,338 children (58% South Asian; 42% White). Mixed-effects models were used to quantify independent associations (expressed as standardised β-coefficients (95% confidence interval (CI))) of sleep duration with adiposity indices using data from all four time points. Factors considered for adjustment in models included basic demographics, pregnancy and birth characteristics, and lifestyle behaviours. Results: With the exception of the sum of skinfolds, sleep duration was inversely and independently associated with indices of total and abdominal adiposity in South Asian children. For example, a one standard deviation (SD) higher sleep duration was associated with reduced %BF by −0.029 (95% CI: −0.053, −0.0043) SDs. Higher adiposity was also independently associated with shorter sleep duration in South Asian children (for example, %BF: β = −0.10 (−0.16, −0.028) SDs). There were no significant associations in White children. Conclusions: Associations between sleep duration and adiposity are bidirectional and independent among South Asian children from a disadvantaged location. The results highlight the importance of considering adiposity as both a determinant of decreased sleep and a potential consequence. PMID:28364513
Convex lattice polygons of fixed area with perimeter-dependent weights.
Rajesh, R; Dhar, Deepak
2005-01-01
We study fully convex polygons with a given area and variable perimeter length on square and hexagonal lattices. We attach a weight t^m to a convex polygon of perimeter m and show that the sum of weights of all polygons with a fixed area s varies as s^{−θ_conv} e^{K(t)√s} for large s and t less than a critical threshold t_c, where K(t) is a t-dependent constant and θ_conv is a critical exponent which does not change with t. Using heuristic arguments, we find that θ_conv is 1/4 for the square lattice, but −1/4 for the hexagonal lattice. The reason for this unexpected nonuniversality of θ_conv is traced to the existence of sharp corners in the asymptotic shape of these polygons.
Eriksson, Ulrika; Haglund, Peter; Kärrman, Anna
2017-11-01
Per- and polyfluoroalkyl substances (PFASs) are ubiquitous in sludge and water from waste water treatment plants, as a result of their incorporation in everyday products and industrial processes. In this study, we measured several classes of persistent PFASs, precursors, transformation intermediates, and newly identified PFASs in influent and effluent sewage water and sludge from three municipal waste water treatment plants in Sweden, sampled in 2015. For sludge, samples from 2012 and 2014 were analyzed as well. Levels of precursors in sludge exceeded those of perfluoroalkyl carboxylic and sulfonic acids (PFCAs and PFSAs): in 2015 the sum of polyfluoroalkyl phosphoric acid esters (PAPs) was 15–20 ng/g dry weight, the sum of fluorotelomer sulfonic acids (FTSAs) was 0.8–1.3 ng/g, and the sum of perfluorooctane sulfonamides and ethanols ranged from not detected to 3.2 ng/g. Persistent PFSAs and PFCAs were detected at 1.9–3.9 ng/g and 2.4–7.3 ng/g dry weight, respectively. The influence of precursor compounds was further demonstrated by an observed substantial increase for a majority of the persistent PFCAs and PFSAs in water after waste water treatment. Perfluorohexanoic acid (PFHxA), perfluorooctanoic acid (PFOA), perfluorohexane sulfonic acid (PFHxS), and perfluorooctane sulfonic acid (PFOS) had a net mass increase in all WWTPs, with mean values of 83%, 28%, 37% and 58%, respectively. The load of precursors and intermediates in influent water and sludge, combined with the net mass increase, supports the hypothesis that degradation of precursor compounds is a significant contributor to PFAS contamination in the environment. Copyright © 2017. Published by Elsevier B.V.
Vogel, Heike; Wolf, Stefanie; Rabasa, Cristina; Rodriguez-Pacheco, Francisca; Babaei, Carina S; Stöber, Franziska; Goldschmidt, Jürgen; DiMarchi, Richard D; Finan, Brian; Tschöp, Matthias H; Dickson, Suzanne L; Schürmann, Annette; Skibicka, Karolina P
2016-11-01
The obesity epidemic continues unabated and currently available pharmacological treatments are not sufficiently effective. Combining the gut/brain peptide GLP-1 with estrogen into a conjugate may represent a novel, safe and potent strategy to treat diabesity. Here we demonstrate that the central administration of a GLP-1-estrogen conjugate reduced food reward, food intake, and body weight in rats. In order to determine the brain location of the interaction of GLP-1 with estrogen, we made use of single-photon emission computed tomography imaging of regional cerebral blood flow and pinpoint a brain site unexplored for its role in feeding and reward, the supramammillary nucleus (SUM), as a potential target of the conjugated GLP-1-estrogen. We confirm that conjugated GLP-1 and estrogen directly target the SUM with site-specific microinjections. Additional microinjections of GLP-1-estrogen into classic energy-balance-controlling nuclei, the lateral hypothalamus (LH) and the nucleus of the solitary tract (NTS), revealed that the metabolic benefits resulting from GLP-1-estrogen injections are mediated through the LH and to some extent by the NTS. In contrast, no additional benefit of the conjugate was noted on food reward when the compound was microinjected into the LH or the NTS, identifying the SUM as the only neural substrate identified here to underlie the reward-reducing benefits of the GLP-1 and estrogen conjugate. Collectively we discover a surprising neural substrate underlying the food intake and reward effects of GLP-1 and estrogen and uncover a new brain area capable of regulating energy balance and reward. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Belury, Martha A; Cole, Rachel M; Bailey, Brittney E; Ke, Jia-Yu; Andridge, Rebecca R; Kiecolt-Glaser, Janice K
2016-05-01
Supplementation with linoleic acid (LA; 18:2Ω6)-rich oils increases lean mass and decreases trunk adipose mass in people. Erythrocyte fatty acids reflect the dietary pattern of fatty acid intake and endogenous metabolism of fatty acids. The aim of this study is to determine the relationship of erythrocyte LA with aspects of body composition, insulin resistance, and inflammation. Additionally, we tested for relationships of oleic acid (OA) and the sum of long-chain omega-three fatty acids (LC-Ω3-SUM) with the same outcomes. Men and women (N = 139) were evaluated for body composition, insulin resistance, the serum inflammatory markers IL-6 and C-reactive protein (CRP), and erythrocyte fatty acid composition after an overnight fast. LA was positively related to appendicular lean mass/body mass index and inversely related to trunk adipose mass. Additionally, LA was inversely related to insulin resistance and IL-6. While there was an inverse relationship between OA or LC-Ω3-SUM and markers of inflammation, there were no relationships between OA or LC-Ω3-SUM and body composition or HOMA-IR. Higher erythrocyte LA was associated with improved body composition, insulin resistance, and inflammation. Erythrocyte OA or LC-Ω3-SUM was unrelated to body composition and insulin resistance. There is much controversy about whether all unsaturated fats have the same benefits for metabolic syndrome and weight gain. We sought to test the strength of the relationships between three unsaturated fatty acids in erythrocytes and measurements of body composition, metabolism, and inflammation in healthy adults. Linoleic acid, but not oleic acid or the sum of long-chain omega-3 fatty acids (ω3), was associated with increased appendicular lean mass and decreased trunk adipose mass and insulin resistance. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing an appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of the coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by multiple cores on a GPU that can implement massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated with the benchmark solution of a discrete-sectional method. The simulation results show that the comprehensive approach can attain very favorable improvement in cost without sacrificing computational accuracy.
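The acceptance–rejection idea behind a majorant kernel can be sketched compactly (a toy kernel and uniform pair sampling; the paper's differentially-weighted coagulation rules and GPU parallelism are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

def majorant_accept_reject(v, kernel, k_max):
    """Acceptance-rejection selection of a coagulation pair: a majorant
    rate k_max >= kernel(v_i, v_j) for all pairs bounds the true rates,
    so a uniformly drawn candidate pair is accepted with probability
    kernel(v_i, v_j) / k_max, avoiding a double loop over all pairs."""
    n = len(v)
    while True:
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        if rng.random() < kernel(v[i], v[j]) / k_max:
            return i, j  # accepted coagulation pair

# Toy constant-plus-sum kernel with an easily computed majorant
kernel = lambda a, b: 1.0 + 0.1 * (a + b)
v = rng.uniform(1.0, 10.0, size=1000)        # particle volumes
k_max = 1.0 + 0.1 * (2 * v.max())            # majorant over all pairs
i, j = majorant_accept_reject(v, kernel, k_max)
```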
Schwabe, Inga; Boomsma, Dorret I; van den Berg, Stéphanie M
2017-12-01
Genotype by environment interaction in behavioral traits may be assessed by estimating the proportion of variance that is explained by genetic and environmental influences conditional on a measured moderating variable, such as a known environmental exposure. Behavioral traits of interest are often measured by questionnaires and analyzed as sum scores on the items. However, statistical results on genotype by environment interaction based on sum scores can be biased due to the properties of a scale. This article presents a method that makes it possible to analyze the actually observed (phenotypic) item data rather than a sum score by simultaneously estimating the genetic model and an item response theory (IRT) model. In the proposed model, the estimation of genotype by environment interaction is based on an alternative parametrization that is uniquely identified and therefore to be preferred over standard parametrizations. A simulation study shows good performance of our method compared to analyzing sum scores in terms of bias. Next, we analyzed data of 2,110 12-year-old Dutch twin pairs on mathematical ability. Genetic models were evaluated and genetic and environmental variance components estimated as a function of a family's socio-economic status (SES). Results suggested that common environmental influences are less important in creating individual differences in mathematical ability in families with a high SES than in creating individual differences in mathematical ability in twin pairs with a low or average SES.
Electrostatic interaction energy and factor 1.23
NASA Astrophysics Data System (ADS)
Rubčić, A.; Arp, H.; Rubčić, J.
The factor F ≈ 1.23 was originally found in the redshift of quasars. Recently, it has been found in very different physical phenomena: the lifetime of muonium, the masses of elementary particles (leptons, quarks, ...), the correlation of atomic weight (A) and atomic number (Z), and the correlation of the sum of the masses of all orbiting bodies with the mass of the central body in gravitational systems.
Interface Evaluation for Open System Architectures
2014-03-01
maker (SDM) is responsible for balancing all of the influences of the IPT when making decisions. Coalescing the IPT perspectives for a single IIM...factors are considered in IIM decisions and that decisions are consistent with the preferences of the SDM, ultimately leading to a balance of schedule... board to perform ranking and weighting determinations. Rank sum, rank exponent, rank reciprocal and ROC leverage a subjective assessment of the
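The rank-based weighting schemes named in this excerpt (rank sum, rank exponent, rank reciprocal, and rank order centroid, ROC) have standard closed forms; a minimal sketch:

```python
def rank_weights(n, method="rank_sum", p=2.0):
    """Rank-based criterion weights for ranks r = 1..n (1 = most important).

    rank_sum:        w_r ∝ n - r + 1
    rank_exponent:   w_r ∝ (n - r + 1)**p
    rank_reciprocal: w_r ∝ 1 / r
    roc:             w_r = (1/n) * sum_{k=r..n} 1/k   (rank order centroid)
    """
    if method == "rank_sum":
        raw = [n - r + 1 for r in range(1, n + 1)]
    elif method == "rank_exponent":
        raw = [(n - r + 1) ** p for r in range(1, n + 1)]
    elif method == "rank_reciprocal":
        raw = [1.0 / r for r in range(1, n + 1)]
    elif method == "roc":
        raw = [sum(1.0 / k for k in range(r, n + 1)) / n for r in range(1, n + 1)]
    else:
        raise ValueError(method)
    total = sum(raw)
    return [w / total for w in raw]

print(rank_weights(4, "roc"))  # approx. [0.521, 0.271, 0.146, 0.0625]
```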
David Frankman; Brent W. Webb; Bret W. Butler
2007-01-01
Thermal radiation emission from a simulated black flame surface to a fuel bed is analyzed by a ray-tracing technique, tracking emission from points along the flame to locations along the fuel bed while accounting for absorption by environmental water vapor in the intervening medium. The Spectral Line Weighted-sum-of-gray-gases approach was adopted for treating the...
Optimization of Long Range Major Rehabilitation of Airfield Pavements.
1983-01-01
the network level, the mathematical representation of choosing those projects that maximize the sum of the user-value-weighted structural performance of ... quantitatively be compared. In addition, an estimate of an appropriate level of funding for the entire system can be made. The simple example shows a ... pavement engineers to only working in the present. The designing and comparing of pavement maintenance and rehabilitation alternatives remain directed
ERIC Educational Resources Information Center
Kaycheng, Soh
2015-01-01
World university ranking systems used the weight-and-sum approach to combined indicator scores into overall scores on which the universities are then ranked. This approach assumes that the indicators all independently contribute to the overall score in the specified proportions. In reality, this assumption is doubtful as the indicators tend to…
Prioritizing the Components of Vulnerability: A Genetic Algorithm Minimization of Flood Risk
NASA Astrophysics Data System (ADS)
Bongolan, Vena Pearl; Ballesteros, Florencio; Baritua, Karessa Alexandra; Junne Santos, Marie
2013-04-01
We define a flood-resistant city as an optimal arrangement of communities according to their traits, with the goal of minimizing flooding vulnerability via a genetic algorithm. We prioritize the different components of flooding vulnerability, giving each component a weight, thus expressing vulnerability as a weighted sum. This serves as the fitness function for the genetic algorithm. We also allowed non-linear interactions among related but independent components, viz., poverty and mortality rate, and literacy and radio/TV penetration. The designs produced reflect the relative importance of the components, and we observed a synchronicity between the interacting components, giving us a more consistent design.
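A sketch of how such a weighted-sum fitness with a non-linear interaction term might look (the weights, trait names, and interaction coefficient are hypothetical; the abstract does not give them):

```python
# Hypothetical component weights (priorities)
WEIGHTS = {"poverty": 0.3, "mortality": 0.25, "literacy": 0.25, "exposure": 0.2}

def vulnerability(community, flood_exposure):
    """Weighted sum of vulnerability components for one community placed
    at a site with a given flood exposure, plus a simple non-linear
    interaction between poverty and mortality. Higher literacy is
    assumed to lower vulnerability."""
    score = (WEIGHTS["poverty"] * community["poverty"]
             + WEIGHTS["mortality"] * community["mortality"]
             + WEIGHTS["literacy"] * (1 - community["literacy"])
             + WEIGHTS["exposure"] * flood_exposure)
    score += 0.1 * community["poverty"] * community["mortality"]  # interaction
    return score

def fitness(arrangement, communities, exposures):
    """Total vulnerability of an arrangement (permutation of communities
    over sites); the genetic algorithm seeks the permutation minimizing
    this weighted sum."""
    return sum(vulnerability(communities[c], exposures[site])
               for site, c in enumerate(arrangement))

communities = [{"poverty": 0.7, "mortality": 0.4, "literacy": 0.6},
               {"poverty": 0.2, "mortality": 0.1, "literacy": 0.9}]
exposures = [0.9, 0.3]       # site 0 floods often, site 1 rarely
print(fitness([1, 0], communities, exposures))  # 0.79
```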
High fidelity chemistry and radiation modeling for oxy-combustion scenarios
NASA Astrophysics Data System (ADS)
Abdul Sater, Hassan A.
To account for the thermal and chemical effects associated with the high CO2 concentrations in an oxy-combustion atmosphere, several refined gas-phase chemistry and radiative property models have been formulated for laminar to highly turbulent systems. This thesis examines the accuracies of several chemistry and radiative property models employed in computational fluid dynamics (CFD) simulations of laminar to transitional oxy-methane diffusion flames by comparing their predictions against experimental data. Previous literature on chemistry and radiation modeling in oxy-combustion atmospheres considered turbulent systems, where the predictions are impacted by the interplay and accuracies of the turbulence, radiation and chemistry models. Thus, by considering a laminar system we minimize the impact of turbulence and the uncertainties associated with turbulence models. In the first section of this thesis, an assessment and validation of gray and non-gray formulations of a recently proposed weighted-sum-of-gray-gases model in oxy-combustion scenarios was undertaken. Predictions of gas and wall temperatures and flame lengths were in good agreement with experimental measurements. The temperature and flame length predictions were not sensitive to the radiative property model employed. However, there were significant variations between the gray and non-gray model radiant fraction predictions, with the variations in general increasing with decreasing Reynolds number, possibly attributable to shorter flames and steeper temperature gradients. The results of this section confirm that non-gray model predictions of radiative heat fluxes are more accurate than gray model predictions, especially at steeper temperature gradients. In the second section, the accuracies of three gas-phase chemistry models were assessed by comparing their predictions against experimental measurements of temperature, species concentrations and flame lengths. The chemistry was modeled employing the Eddy Dissipation Concept (EDC) with a 41-step detailed chemistry mechanism, the non-adiabatic extension of the equilibrium Probability Density Function (PDF) based mixture-fraction model, and a two-step global finite-rate chemistry model with modified rate constants proposed to work well in oxy-methane flames. Based on the results from this section, the equilibrium PDF model in conjunction with a high-fidelity non-gray model for the radiative properties of the gas phase may be deemed accurate enough to capture the major gas species concentrations, temperatures and flame lengths in oxy-methane flames. The third section examines the variations in radiative transfer predictions due to the choice of chemistry and gas-phase radiative property models. The radiative properties were estimated employing four weighted-sum-of-gray-gases models (WSGGM) that were formulated employing different spectroscopic/model databases. An average variation of 14–17% in the wall incident radiative fluxes was observed between the EDC and equilibrium mixture-fraction chemistry models, due to differences in their temperature predictions within the flame. One-dimensional, line-of-sight radiation calculations showed a 15–25% reduction in the directional radiative fluxes at lower axial locations as a result of ignoring radiation from CO and CH4. Under the constraints of fixed temperature and species distributions, the flame radiant power estimates and average wall incident radiative fluxes varied by nearly 60% and 11%, respectively, among the different WSGG models.
Sampey, Brante P; Vanhoose, Amanda M; Winfield, Helena M; Freemerman, Alex J; Muehlbauer, Michael J; Fueger, Patrick T; Newgard, Christopher B; Makowski, Liza
2011-06-01
Obesity has reached epidemic proportions worldwide and reports estimate that American children consume up to 25% of calories from snacks. Several animal models of obesity exist, but studies are lacking that compare high-fat diets (HFD) traditionally used in rodent models of diet-induced obesity (DIO) to diets consisting of food regularly consumed by humans, including high-salt, high-fat, low-fiber, energy dense foods such as cookies, chips, and processed meats. To investigate the obesogenic and inflammatory consequences of a cafeteria diet (CAF) compared to a lard-based 45% HFD in rodent models, male Wistar rats were fed HFD, CAF or chow control diets for 15 weeks. Body weight increased dramatically and remained significantly elevated in CAF-fed rats compared to all other diets. Glucose- and insulin-tolerance tests revealed that hyperinsulinemia, hyperglycemia, and glucose intolerance were exaggerated in the CAF-fed rats compared to controls and HFD-fed rats. It is well-established that macrophages infiltrate metabolic tissues at the onset of weight gain and directly contribute to inflammation, insulin resistance, and obesity. Although both high fat diets resulted in increased adiposity and hepatosteatosis, CAF-fed rats displayed remarkable inflammation in white fat, brown fat and liver compared to HFD and controls. In sum, the CAF provided a robust model of human metabolic syndrome compared to traditional lard-based HFD, creating a phenotype of exaggerated obesity with glucose intolerance and inflammation. This model provides a unique platform to study the biochemical, genomic and physiological mechanisms of obesity and obesity-related disease states that are pandemic in western civilization today.
Evaluation of a spatially-distributed Thornthwaite water-balance model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lough, J.A.
1993-03-01
A small watershed of low relief in coastal New Hampshire was divided into hydrologic sub-areas in a geographic information system on the basis of soils, sub-basins and remotely-sensed landcover. Three variables were spatially modeled for input to 49 individual water balances: available water content of the root zone, water input and potential evapotranspiration (PET). The individual balances were weight-summed to generate the aggregate watershed balance, which saw 9% (48–50 mm) less annual actual evapotranspiration (AET) compared to a lumped approach. Analysis of streamflow coefficients suggests that the spatially-distributed approach is more representative of the basin dynamics. Variation of PET by landcover accounted for the majority of the 9% AET reduction. Variation of soils played a near-negligible role. As a consequence of the above points, estimates of landcover proportions and annual PET by landcover are sufficient to correct a lumped water balance in the Northeast. If remote sensing is used to estimate the landcover area, a sensor with a high spatial resolution is required. Finally, while the lower Thornthwaite model has conceptual limitations for distributed application, the upper Thornthwaite model is highly adaptable to distributed problems and may prove useful in many earth-system models.
Neural network application to aircraft control system design
NASA Technical Reports Server (NTRS)
Troudet, Terry; Garg, Sanjay; Merrill, Walter C.
1991-01-01
The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.
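The training objective described, a weighted sum of tracking errors and control input commands and rates, is easy to state in code; a sketch with hypothetical weights:

```python
import numpy as np

def control_objective(y, y_ref, u, du, w_track=1.0, w_u=0.01, w_du=0.1):
    """Weighted sum of tracking errors and control input commands/rates,
    of the kind minimized during neurocontroller training (the weight
    values here are hypothetical, not those of the study)."""
    return (w_track * np.sum((y - y_ref) ** 2)   # tracking error term
            + w_u * np.sum(u ** 2)               # control command magnitude
            + w_du * np.sum(du ** 2))            # control rate magnitude
```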
Truncated Sum Rules and Their Use in Calculating Fundamental Limits of Nonlinear Susceptibilities
NASA Astrophysics Data System (ADS)
Kuzyk, Mark G.
Truncated sum rules have been used to calculate the fundamental limits of the nonlinear susceptibilities and the results have been consistent with all measured molecules. However, given that finite-state models appear to result in inconsistencies in the sum rules, it may seem unclear why the method works. In this paper, the assumptions inherent in the truncation process are discussed and arguments based on physical grounds are presented in support of using truncated sum rules in calculating fundamental limits. The clipped harmonic oscillator is used as an illustration of how the validity of truncation can be tested and several limiting cases are discussed as examples of the nuances inherent in the method.
Learning to represent spatial transformations with factored higher-order Boltzmann machines.
Memisevic, Roland; Hinton, Geoffrey E
2010-06-01
To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
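The factored approximation can be written in a few lines; a sketch assuming numpy and hypothetical patch/filter dimensions:

```python
import numpy as np

def factored_interaction(W_x, W_y, W_h):
    """Reconstruct the three-way interaction tensor as a sum of factors,
    each a three-way outer product:
    T[i,j,k] = sum_f W_x[i,f] * W_y[j,f] * W_h[k,f]."""
    return np.einsum('if,jf,kf->ijk', W_x, W_y, W_h)

# Rank-20 approximation for 64-pixel patches and 32 hidden units:
# (64 + 64 + 32) * 20 = 3200 parameters instead of 64*64*32 = 131072
# in the full cubic tensor.
rng = np.random.default_rng(0)
W_x = rng.standard_normal((64, 20))   # filters on the first image
W_y = rng.standard_normal((64, 20))   # filters on the second image
W_h = rng.standard_normal((32, 20))   # hidden-unit factor weights
T = factored_interaction(W_x, W_y, W_h)   # shape (64, 64, 32)
```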
GalMod: A Galactic Synthesis Population Model
NASA Astrophysics Data System (ADS)
Pasetto, Stefano; Grebel, Eva K.; Chiosi, Cesare; Crnojević, Denija; Zeidler, Peter; Busso, Giorgia; Cassarà, Letizia P.; Piovan, Lorenzo; Tantalo, Rosaria; Brogliato, Claudio
2018-06-01
We present a new Galaxy population synthesis model, GalMod. GalMod is a star-count model featuring an asymmetric bar/bulge as well as spiral arms and related extinction. The model, initially introduced in Pasetto et al., has here been completed with a central bar, a new bulge description, new disk vertical profiles, and several new bolometric corrections. The model can generate synthetic mock catalogs of visible portions of the Milky Way, external galaxies like M31, or N-body simulation initial conditions. At any given time, e.g., at a chosen age of the Galaxy, the model contains a sum of discrete stellar populations, namely the bulge/bar, disk, and halo. These populations are in turn the sum of different components: the disk is the sum of the spiral arms, thin disks, a thick disk, and various gas components, while the halo is the sum of a stellar component, a hot coronal gas, and a dark-matter component. The Galactic potential is computed from these population density profiles and used to generate detailed kinematics by considering up to the first four moments of the collisionless Boltzmann equation. The same density profiles are then used to define the observed color–magnitude diagrams in a user-defined field of view (FoV) from an arbitrary solar location. Several photometric systems have been included and made available online, and no limits on the size of the FoV are imposed, thus allowing full-sky simulations, too. Finally, we model the extinction by adopting a dust model with advanced ray-tracing solutions. The model's Web page (and tutorial) can be accessed at www.GalMod.org and support is provided at Galaxy.Model@yahoo.com.
Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.
Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M
2017-09-01
Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, the Clinical Dementia Rating Scale-Sum of Boxes, and the Alzheimer's Disease Assessment Scale-Cognitive Subscale tests'/components' scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the required sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
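One standard way to construct such optimal weights for a weighted-sum composite (not necessarily the authors' exact procedure) maximizes the non-centrality (w'μ)²/(w'Σw), which is achieved at w ∝ Σ⁻¹μ; a sketch with hypothetical effect sizes and covariances:

```python
import numpy as np

def optimal_weights(mu, sigma):
    """Weights maximizing the power of a weighted-sum composite endpoint:
    for treatment-effect vector mu and covariance sigma of the component
    changes, the non-centrality of w'x is maximized at w ∝ sigma^{-1} mu."""
    w = np.linalg.solve(sigma, mu)
    return w / np.abs(w).sum()        # normalize for interpretability

# Hypothetical effects on three (transformed) component scores
mu = np.array([0.30, 0.15, 0.20])
sigma = np.array([[1.0, 0.4, 0.3],
                  [0.4, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
print(optimal_weights(mu, sigma))
```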
Rasch-built Overall Disability Scale (R-ODS) for immune-mediated peripheral neuropathies.
van Nes, S I; Vanhoutte, E K; van Doorn, P A; Hermans, M; Bakkers, M; Kuitwaard, K; Faber, C G; Merkies, I S J
2011-01-25
To develop a patient-based, linearly weighted scale that captures activity and social participation limitations in patients with Guillain-Barré syndrome (GBS), chronic inflammatory demyelinating polyradiculoneuropathy (CIDP), and gammopathy-related polyneuropathy (MGUSP). A preliminary Rasch-built Overall Disability Scale (R-ODS) containing 146 activity and participation items was constructed, based on the WHO International Classification of Functioning, Disability and Health, literature search, and patient interviews. The preliminary R-ODS was assessed twice (interval: 2-4 weeks; test-retest reliability studies) in 294 patients who experienced GBS in the past (n = 174) or currently have stable CIDP (n = 80) or MGUSP (n = 40). Data were analyzed using the Rasch unidimensional measurement model (RUMM2020). The preliminary R-ODS did not meet the Rasch model expectations. Based on disordered thresholds, misfit statistics, item bias, and local dependency, items were systematically removed to improve the model fit, regularly controlling the class intervals and model statistics. Finally, we succeeded in constructing a 24-item scale that fulfilled all Rasch requirements. "Reading a newspaper/book" and "eating" were the 2 easiest items; "standing for hours" and "running" were the most difficult ones. Good validity and reliability were obtained. The R-ODS is a linearly weighted scale that specifically captures activity and social participation limitations in patients with GBS, CIDP, and MGUSP. Compared to the Overall Disability Sum Score, the R-ODS represents a wider range of item difficulties, thereby better targeting patients with different ability levels. If responsive, the R-ODS will be valuable for future clinical trials and follow-up studies in these conditions.
Carlisle, Daren M.; Bryant, Wade L.
2011-01-01
Many physicochemical factors potentially impair stream ecosystems in urbanizing basins, but few studies have evaluated their relative importance simultaneously, especially in different environmental settings. We used data collected in 25 to 30 streams along a gradient of urbanization in each of 6 metropolitan areas (MAs) to evaluate the relative importance of 11 physicochemical factors on the condition of algal, macroinvertebrate, and fish assemblages. For each assemblage, biological condition was quantified using 2 separate metrics, nonmetric multidimensional scaling ordination site scores and the ratio of observed/expected taxa, both derived in previous studies. Separate linear regression models with 1 or 2 factors as predictors were developed for each MA and assemblage metric. Model parsimony was evaluated based on Akaike’s Information Criterion for small sample size (AICc) and Akaike weights, and variable importance was estimated by summing the Akaike weights across models containing each stressor variable. Few of the factors were strongly correlated (Pearson |r| > 0.7) within MAs. Physicochemical factors explained 17 to 81% of variance in biological condition. Most (92 of 118) of the most plausible models contained 2 predictors, and generally more variance could be explained by the additive effects of 2 factors than by any single factor alone. None of the factors evaluated was universally important for all MAs or biological assemblages. The relative importance of factors varied for different measures of biological condition, biological assemblages, and MA. Our results suggest that the suite of physicochemical factors affecting urban stream ecosystems varies across broad geographic areas, along gradients of urban intensity, and among basins within single MAs.
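Akaike weights and the resulting variable-importance sums follow a standard recipe; a minimal sketch with hypothetical AICc values:

```python
import numpy as np

def akaike_weights(aicc):
    """Akaike weights from AICc values: w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2),
    with Δ_i = AICc_i - min(AICc)."""
    delta = np.asarray(aicc) - np.min(aicc)
    w = np.exp(-delta / 2.0)
    return w / w.sum()

def variable_importance(aicc, contains):
    """Importance of each predictor = sum of the Akaike weights of the
    models containing it (contains[i][j] = True if model i uses var j)."""
    w = akaike_weights(aicc)
    return w @ np.asarray(contains, dtype=float)

aicc = [102.1, 103.4, 107.9]                       # three candidate models
contains = [[True, False], [True, True], [False, True]]
print(variable_importance(aicc, contains))         # importance of 2 stressors
```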
Multisensor Arrays for Greater Reliability and Accuracy
NASA Technical Reports Server (NTRS)
Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff
2004-01-01
Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that approximate each other sufficiently closely to constitute a majority for the purpose of quantifying reliability. This criterion is, simply, that if there do not exist at least three sensors having weights greater than a prescribed minimum acceptable value, then the array as a whole is deemed to have failed.
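The abstract leaves the exact reliability score unspecified; the sketch below is one plausible reading (our scoring rule, not necessarily the patented one): a sensor whose ratios to its peers stay near 1 earns a high weight, and the array output is the reliability-weighted mean.

```python
import numpy as np

def msa_reading(x, running=None, min_weight=0.1):
    """One MSA update: score each sensor's reliability from its reading ratios,
    then return the reliability-weighted average of all readings."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ratios = x[:, None] / x[None, :]                   # ratio of every reading to every other
    dev = np.abs(ratios - 1.0).sum(axis=1) / (n - 1)   # mean deviation from unity
    w = 1.0 / (1.0 + dev)                              # assumed reliability score in (0, 1]
    if running is not None:                            # optional variant: running sum over time steps
        running += w
        w = running
    if np.sum(w > min_weight) < 3:                     # failure criterion from the text
        raise RuntimeError("array failed: fewer than 3 trustworthy sensors")
    return float(np.average(x, weights=w))

print(msa_reading([10.1, 9.9, 10.0, 14.2]))            # the outlier sensor gets a low weight
```

With the running-sum variant, a sensor that repeatedly disagrees accumulates weight more slowly than its peers, so its influence on the output fades, the "learning" behavior described above.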
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, Thomas C. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image, including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value is multiplied by that pixel's x-location).
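A sketch of the per-segment bookkeeping this describes (names and threshold handling are ours, not the patent's): for each run of above-threshold pixels in a scan line, keep the first/last positions, the pixel sum, and the x-weighted sum, from which a sub-pixel centroid follows.

```python
import numpy as np

def segment_stats(row, threshold):
    """Find linear spot segments in one scan line and record, per segment,
    first/last pixel x-positions, sum of pixels, and x-weighted sum."""
    row = np.asarray(row, dtype=float)
    mask = (row > threshold).astype(np.int8)
    edges = np.flatnonzero(np.diff(np.concatenate(([0], mask, [0]))))
    segments = []
    for first, last in zip(edges[::2], edges[1::2] - 1):
        vals = row[first:last + 1]
        xs = np.arange(first, last + 1)
        s, ws = vals.sum(), (vals * xs).sum()
        segments.append({"first": int(first), "last": int(last),
                         "sum": s, "weighted_sum": ws, "centroid_x": ws / s})
    return segments

print(segment_stats([0, 5, 9, 8, 0, 0, 7, 7, 0], threshold=1))
```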
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic
2018-07-01
Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous in modeling radiation transfer in high temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than that of the original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing the spectral construction with prior correlated spectrum FSK method formulations. Further, the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods in three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.
Occupational-Specific Strength Predicts Astronaut-Related Task Performance in a Weighted Suit.
Taylor, Andrew; Kotarsky, Christopher J; Bond, Colin W; Hackney, Kyle J
2018-01-01
Future space missions beyond low Earth orbit will require deconditioned astronauts to perform occupationally relevant tasks within a planetary spacesuit. The prediction of time-to-completion (TTC) of astronaut tasks will be critical for crew safety, autonomous operations, and mission success. This exploratory study determined if the addition of task-specific strength testing to current standard lower body testing would enhance the prediction of TTC in a 1-G test battery. Eight healthy participants completed NASA lower body strength tests, occupationally specific strength tests, and performed six task simulations (hand drilling, construction wrenching, incline walking, collecting weighted samples, and dragging an unresponsive crewmember to safety) in a 48-kg weighted suit. The TTC for each task was recorded and summed to obtain a total TTC for the test battery. Linear regression was used to predict total TTC with two models: 1) NASA lower body strength tests; and 2) NASA lower body strength tests + occupationally specific strength tests. Total TTC of the test battery ranged from 20.2-44.5 min. The lower body strength test alone accounted for 61% of the variability in total TTC. The addition of hand drilling and wrenching strength tests accounted for 99% of the variability in total TTC. Adding occupationally specific strength tests (hand drilling and wrenching) to standard lower body strength tests successfully predicted total TTC in a performance test battery within a weighted suit. Future research should couple these strength tests with higher fidelity task simulations to determine the utility and efficacy of task performance prediction. Taylor A, Kotarsky CJ, Bond CW, Hackney KJ. Occupational-specific strength predicts astronaut-related task performance in a weighted suit. Aerosp Med Hum Perform. 2018; 89(1):58-62.
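A toy version of the two-model comparison (synthetic numbers; the study's data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
lower_body = rng.normal(size=(8, 1))        # stand-in for NASA lower-body strength scores
task_specific = rng.normal(size=(8, 2))     # stand-ins for hand-drilling and wrenching strength
ttc = (30 + 3 * lower_body[:, 0] + 4 * task_specific[:, 0]
       + 2 * task_specific[:, 1] + rng.normal(scale=0.5, size=8))   # total TTC, min

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

print("Model 1 (lower body only):     ", r_squared(lower_body, ttc))
print("Model 2 (+ occupational tests):", r_squared(np.column_stack([lower_body, task_specific]), ttc))
```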
OpCost: an open-source system for estimating costs of stand-level forest operations
Conor K. Bell; Robert F. Keefe; Jeremy S. Fried
2017-01-01
This report describes and documents the OpCost forest operations cost model, a key component of the BioSum analysis framework. OpCost is available in two editions: as a callable module for use with BioSum, and in a stand-alone edition that can be run directly from R. OpCost model logic and assumptions for this open-source tool are explained, references to the...
Fast Inference with Min-Sum Matrix Product.
Felzenszwalb, Pedro F; McAuley, Julian J
2011-12-01
The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected-time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications. This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.
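The operation at the heart of the paper, for reference (the naive cubic-time baseline that the paper's algorithms improve on in expectation):

```python
import numpy as np

def min_sum_product(A, B):
    """Naive O(n^3) min-sum (tropical) matrix product: C[i, j] = min_k (A[i, k] + B[k, j])."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    C = np.empty((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        C[i] = (A[i][:, None] + B).min(axis=0)   # row i of A against every column of B
    return C

A = np.array([[0., 2.], [5., 1.]])
B = np.array([[3., 4.], [0., 7.]])
print(min_sum_product(A, B))   # C[0, 0] = min(0 + 3, 2 + 0) = 2
```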
NASA Astrophysics Data System (ADS)
Narison, Stephan
2004-05-01
About Stephan Narison; Outline of the book; Preface; Acknowledgements; Part I. General Introduction: 1. A short flash on particle physics; 2. The pre-QCD era; 3. The QCD story; 4. Field theory ingredients; Part II. QCD Gauge Theory: 5. Lagrangian and gauge invariance; 6. Quantization using path integral; 7. QCD and its global invariance; Part III. MS scheme for QCD and QED: Introduction; 8. Dimensional regularization; 9. The MS renormalization scheme; 10. Renormalization of operators using the background field method; 11. The renormalization group; 12. Other renormalization schemes; 13. MS scheme for QED; 14. High-precision low-energy QED tests; Part IV. Deep Inelastic Scattering at Hadron Colliders: 15. OPE for deep inelastic scattering; 16. Unpolarized lepton-hadron scattering; 17. The Altarelli-Parisi equation; 18. More on unpolarized deep inelastic scatterings; 19. Polarized deep-inelastic processes; 20. Drell-Yan process; 21. One 'prompt photon' inclusive production; Part V. Hard Processes in e+e- Collisions: Introduction; 22. One hadron inclusive production; 23. γγ scatterings and the 'spin' of the photon; 24. QCD jets; 25. Total inclusive hadron productions; Part VI. Summary of QCD Tests and α_s Measurements; Part VII. Power Corrections in QCD: 26. Introduction; 27. The SVZ expansion; 28. Technologies for evaluating Wilson coefficients; 29. Renormalons; 30. Beyond the SVZ expansion; Part VIII. QCD Two-Point Functions: 31. References guide to original works; 32. (Pseudo)scalar correlators; 33. (Axial-)vector two-point functions; 34. Tensor-quark correlator; 35. Baryonic correlators; 36. Four-quark correlators; 37. Gluonia correlators; 38. Hybrid correlators; 39. Correlators in x-space; Part IX. QCD Non-Perturbative Methods: 40. Introduction; 41. Lattice gauge theory; 42. Chiral perturbation theory; 43. Models of the QCD effective action; 44. Heavy quark effective theory; 45. Potential approaches to quarkonia; 46. On monopole and confinement; Part X. QCD Spectral Sum Rules: 47. Introduction; 48. Theoretical foundations; 49. Survey of QCD spectral sum rules; 50. Weinberg and DMO sum rules; 51. The QCD coupling α_s; 52. The QCD condensates; 53. Light and heavy quark masses, etc.; 54. Hadron spectroscopy; 55. D, B and Bc exclusive weak decays; 56. B⁰(s)-B̄⁰(s) mixing, kaon CP violation; 57. Thermal behaviour of QCD; 58. More on spectral sum rules; Part XI. Appendix A: physical constants and units; Appendix B: weight factors for SU(N)c; Appendix C: coordinates and momenta; Appendix D: Dirac equation and matrices; Appendix E: Feynman rules; Appendix F: Feynman integrals; Appendix G: useful formulae for the sum rules; Bibliography; Index.
SU-F-T-142: An Analytical Model to Correct the Aperture Scattered Dose in Clinical Proton Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, B; Liu, S; Zhang, T
2016-06-15
Purpose: Apertures or collimators are used to laterally shape proton beams in double scattering (DS) delivery and to sharpen the penumbra in pencil beam (PB) delivery. However, aperture-scattered dose is not included in the current dose calculations of treatment planning systems (TPS). The purpose of this study is to provide a method to correct the aperture-scattered dose based on an analytical model. Methods: A DS beam with a non-divergent aperture was delivered using a single-room proton machine. Dose profiles were measured with an ion chamber scanning in water and a 2-D ion chamber matrix with solid-water buildup at various depths. The measured doses were considered as the sum of the non-contaminated dose and the aperture-scattered dose. The non-contaminated dose was calculated by the TPS and subtracted from the measured dose. The aperture-scattered dose was modeled as a 1-D Gaussian distribution. For 2-D fields, to calculate the scattered dose from all edges of the aperture, the model used a distance-weighted sum based on the distance from the calculation point to each aperture edge. The gamma index was calculated between the measured and calculated dose with and without scatter correction. Results: For a beam with a range of 23 cm and an aperture size of 20 cm, the contribution of the scatter horn was ∼8% of the total dose at 4 cm depth and diminished to 0 at 15 cm depth. The amplitude of the scattered dose decreased linearly with increasing depth. The 1-D gamma index (2%/2 mm) between the calculated and measured profiles increased from 63% to 98% at 4 cm depth and from 83% to 98% at 13 cm depth. The 2-D gamma index (2%/2 mm) at 4 cm depth improved from 78% to 94%. Conclusion: Using this simple analytical method, the discrepancy between the measured and calculated dose is significantly reduced.
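A sketch of the scatter-dose shape implied by the numbers above (all parameter values are illustrative, not the authors' fitted ones):

```python
import numpy as np

def aperture_scatter(x, depth, edge=10.0, amp_ref=0.08, depth_ref=4.0,
                     depth_max=15.0, sigma=0.5):
    """Scatter dose as a Gaussian centered on the aperture edge (cm), with
    amplitude falling linearly from ~8% of total dose at 4 cm depth to zero
    at 15 cm depth, as reported in the abstract."""
    amp = amp_ref * max(0.0, (depth_max - depth) / (depth_max - depth_ref))
    return amp * np.exp(-0.5 * ((np.asarray(x, dtype=float) - edge) / sigma) ** 2)

x = np.linspace(8.0, 12.0, 5)
print(aperture_scatter(x, depth=4.0))   # the scatter "horn" peaks at the field edge
```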
Extending the Distributed Lag Model framework to handle chemical mixtures.
Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris
2017-07-01
Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce 2 techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures. Copyright © 2017 Elsevier Inc. All rights reserved.
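A stripped-down illustration of the WQS building block used by Lagged WQS (toy data; real WQS additionally uses bootstrap ensembles and a training/validation split):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                       # 3 exposures (toy data)
y = 0.5 + 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=200)

# Score each exposure into quartiles 0..3, as in WQS regression
q = np.column_stack([np.searchsorted(np.quantile(X[:, j], [0.25, 0.5, 0.75]), X[:, j])
                     for j in range(X.shape[1])])

def loss(p):
    b0, b1, w = p[0], p[1], p[2:]
    return np.sum((y - b0 - b1 * (q @ w)) ** 2)     # index = weighted sum of quantile scores

k = X.shape[1]
res = minimize(loss, x0=np.r_[0.0, 1.0, np.full(k, 1 / k)],
               bounds=[(None, None)] * 2 + [(0, 1)] * k,
               constraints={"type": "eq", "fun": lambda p: p[2:].sum() - 1})
print("estimated mixture weights:", res.x[2:].round(2))
```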
Psychometric functions for informational masking
NASA Astrophysics Data System (ADS)
Lutfi, Robert A.; Kistler, Doris J.; Callahan, Michael R.; Wightman, Frederic L.
2003-04-01
The method of constant stimuli was used to obtain complete psychometric functions (PFs) from 44 normal-hearing listeners in conditions known to produce varying amounts of informational masking. The task was to detect a pure-tone signal in the presence of a broadband noise and in the presence of multitone maskers with frequencies and amplitudes that varied at random from one presentation to the next. Relative to the broadband noise condition, significant reductions were observed in both the slope and the upper asymptote of the PF for multitone maskers producing large amounts of informational masking. Slope was affected more for some listeners while asymptote was affected more for others. Mean slopes and asymptotes varied nonmonotonically with the number of masker components in much the same manner as mean thresholds. The results are consistent with a model that assumes trial-by-trial judgments are based on a weighted sum of dB levels at the output of independent auditory filters. For many listeners, however, the weights appear to reflect how often a nonsignal auditory filter is mistaken for the signal filter. For these listeners adaptive procedures may produce a significant bias in the estimates of threshold for conditions of informational masking. [Work supported by NIDCD.]
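The decision model referred to here, in schematic form (our notation for a standard weighted-sum observer):

```latex
% Trial-by-trial decision variable: L_i = dB level at the output of auditory filter i,
% w_i = the listener's weight on that filter; respond "signal present" if D exceeds a criterion.
D = \sum_i w_i L_i
```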
NASA Astrophysics Data System (ADS)
Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.
2018-05-01
The paper deals with electromagnetic scattering from a perfectly conducting diffractive body of complex shape. The scattering performance of the body is computed by the integral equation method: a Fredholm equation of the second kind is used to calculate the electric current density. The integral equation is solved by the method of moments with piecewise-constant basis functions, with proper treatment of the kernel singularity. Within the Kirchhoff integral approach, the scattered electromagnetic field can then be obtained from the computed electric currents. The observation angles considered lie in the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors use a neural network in which every neuron has a log-sigmoid activation function and a weighted sum as its discriminant function. The paper presents the matrix of weighting factors of the connectionist model and the resulting optimized dimensions of the diffractive body, as well as the basic steps of the calculation technique for diffractive bodies based on the combination of integral equation and neural network methods.
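A minimal sketch of the neuron type described (hypothetical sizes; not the authors' trained network):

```python
import numpy as np

def logsig(x):
    """Log-sigmoid activation: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def layer(x, W, b):
    """Weighted sum as the discriminant function, passed through a log-sigmoid."""
    return logsig(W @ x + b)

# Hypothetical 2-layer network mapping 3 geometry parameters to 1 scattering figure of merit
rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)
print(layer(layer(np.array([0.3, 0.8, 0.1]), W1, b1), W2, b2))
```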
Vegetated land cover near residence is associated with ...
Abstract Background: Greater exposure to urban green spaces has been linked to reduced risks of depression, cardiovascular disease, diabetes and premature death. Alleviation of chronic stress is a hypothesized pathway to improved health. Previous studies linked chronic stress with biomarker-based measures of physiological dysregulation known as allostatic load. This study aimed to assess the relationship between vegetated land cover near residences and allostatic load. Methods: This cross-sectional population-based study involved 204 adult residents of the Durham-Chapel Hill, North Carolina metropolitan area. Exposure was quantified using high-resolution metrics of trees and herbaceous vegetation within 500 m of each residence derived from the U.S. Environmental Protection Agency’s EnviroAtlas land cover dataset. Eighteen biomarkers of immune, neuroendocrine, and metabolic functions were measured in serum or saliva samples. Allostatic load was defined as a sum of biomarker values dichotomized at specific percentiles of sample distribution. Regression analysis was conducted using generalized additive models with two-dimensional spline smoothing function of geographic coordinates, weighted measures of vegetated land cover allowing decay of effects with distance, and geographic and demographic covariates. Results: An inter-quartile range increase in distance-weighted vegetated land cover was associated with 37% (46%; 27%) reduced allostatic load; significantly
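A sketch of the summary measure described (a single percentile cut for all markers; the study dichotomized at biomarker-specific percentiles, which may differ):

```python
import numpy as np

def allostatic_load(biomarkers, pct=90):
    """Allostatic load as the count of a subject's biomarkers falling beyond
    the chosen percentile of the sample distribution (high tail here; some
    markers are conventionally scored on the low tail instead)."""
    B = np.asarray(biomarkers, dtype=float)        # shape: (n_subjects, n_markers)
    cut = np.percentile(B, pct, axis=0)
    return (B > cut).sum(axis=1)

rng = np.random.default_rng(3)
print(allostatic_load(rng.normal(size=(204, 18)))[:5])   # 204 subjects, 18 markers
```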
NASA Astrophysics Data System (ADS)
Luy, N. T.
2018-04-01
The design of distributed cooperative H∞ optimal controllers for multi-agent systems is a major challenge when the agents' models are uncertain multi-input and multi-output nonlinear systems in strict-feedback form in the presence of external disturbances. In this paper, first, the distributed cooperative H∞ optimal tracking problem is transformed into controlling the cooperative tracking error dynamics in affine form. Second, control schemes and online algorithms are proposed via adaptive dynamic programming (ADP) and the theory of zero-sum differential graphical games. The schemes use only one neural network (NN) per agent, instead of the three typically required in ADP, which reduces computational complexity and avoids having to choose initial NN weights for stabilising controllers. It is shown that, despite using no knowledge of the cooperative internal dynamics, the proposed algorithms not only approximate the Nash-equilibrium values but also guarantee that all signals in the closed-loop system, such as the NN weight approximation errors and the cooperative tracking errors, are uniformly ultimately bounded. Finally, the effectiveness of the proposed method is shown by simulation results of an application to wheeled mobile multi-robot systems.
Castagna, Antonella; Csepregi, Kristóf; Neugart, Susanne; Zipoli, Gaetano; Večeřová, Kristýna; Jakab, Gábor; Jug, Tjaša; Llorens, Laura; Martínez-Abaigar, Javier; Martínez-Lüscher, Johann; Núñez-Olivera, Encarnación; Ranieri, Annamaria; Schoedl-Hummel, Katharina; Schreiner, Monika; Teszlák, Péter; Tittmann, Susanne; Urban, Otmar; Verdaguer, Dolors; Jansen, Marcel A K; Hideg, Éva
2017-11-01
A 2-year study explored metabolic and phenotypic plasticity of sun-acclimated Vitis vinifera cv. Pinot noir leaves collected from 12 locations across a 36.69-49.98°N latitudinal gradient. Leaf morphological and biochemical parameters were analysed in the context of meteorological parameters and the latitudinal gradient. We found that leaf fresh weight and area were negatively correlated with both global and ultraviolet (UV) radiation, cumulated global radiation being a stronger correlator. Cumulative UV radiation (sumUVR) was the strongest correlator with most leaf metabolites and pigments. Leaf UV-absorbing pigments, total antioxidant capacities, and phenolic compounds increased with increasing sumUVR, whereas total carotenoids and xanthophylls decreased. Despite this reallocation of metabolic resources from carotenoids to phenolics, an increase in xanthophyll-cycle pigments (the sum of the amounts of three xanthophylls: violaxanthin, antheraxanthin, and zeaxanthin) with increasing sumUVR indicates active, dynamic protection of the photosynthetic apparatus. In addition, increased amounts of flavonoids (quercetin glycosides) and constitutive β-carotene and α-tocopherol pools provide antioxidant protection against reactive oxygen species. However, rather than a continuum of plant acclimation responses, principal component analysis indicates clusters of metabolic states across the explored 1,500-km-long latitudinal gradient. This study emphasizes the physiological component of plant responses to latitudinal gradients and reveals the physiological plasticity that may act to complement genetic adaptations. © 2017 John Wiley & Sons Ltd.
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
A method to construct a two-body potential model for ionic materials from a Fourier series basis is examined. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations to minimize the sum of weighted mean square errors in energy, force, and stress, where first-principles calculation results are used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series, and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
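A sketch of the fitting step (energies only, toy reference data; the paper also matches forces and stresses in the same weighted least-squares problem):

```python
import numpy as np

def fit_cosine_potential(r, E, w, n_terms=8, r_cut=8.0):
    """Expand the pair potential in cosine basis functions phi_k(r) = cos(k*pi*r/r_cut)
    and solve the weighted least-squares problem for the coefficients."""
    k = np.arange(n_terms)
    Phi = np.cos(np.outer(r, k) * np.pi / r_cut)   # design matrix, shape (len(r), n_terms)
    sw = np.sqrt(np.asarray(w, dtype=float))
    coef, *_ = np.linalg.lstsq(Phi * sw[:, None], np.asarray(E) * sw, rcond=None)
    return coef

r = np.linspace(1.5, 7.5, 50)                      # sample separations (angstrom)
E = 10 * np.exp(-2 * r) - 1 / r**6                 # toy reference pair energies
print(fit_cosine_potential(r, E, w=np.ones_like(r))[:4])
```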
Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.
2015-01-01
We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905