Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks
NASA Astrophysics Data System (ADS)
Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.
Dominating sets provide key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
Independent component analysis for brain FMRI does indeed select for maximal independence.
Calhoun, Vince D; Potluru, Vamsi K; Phlypo, Ronald; Silva, Rogers F; Pearlmutter, Barak A; Caprihan, Arvind; Plis, Sergey M; Adalı, Tülay
2013-01-01
A recent paper by Daubechies et al. claims that two independent component analysis (ICA) algorithms, Infomax and FastICA, which are widely used for functional magnetic resonance imaging (fMRI) analysis, select for sparsity rather than independence. The argument was supported by a series of experiments on synthetic data. We show that these experiments fall short of proving this claim and that the ICA algorithms are indeed doing what they are designed to do: identify maximally independent sources.
Response Independence, Matching, and Maximizing: A Reply to Heyman.
ERIC Educational Resources Information Center
Staddon, J. E. R.; Motheral, Susan
1979-01-01
Heyman's major criticism (TM 504 810) of Staddon and Motheral's reinforcement maximization model is that it does not consider "local" and "interchangeover" interresponse times separately. We show that this separation may not be necessary. Heyman's apparent gain in comprehensiveness may not be worth the added complexity. (Author/RD)
The maximally entangled set of 4-qubit states
NASA Astrophysics Data System (ADS)
Spee, C.; de Vicente, J. I.; Kraus, B.
2016-05-01
Entanglement is a resource to overcome the natural restriction of operations used for state manipulation to Local Operations assisted by Classical Communication (LOCC). Hence, a bipartite maximally entangled state is a state which can be transformed deterministically into any other state via LOCC. In the multipartite setting no such state exists. There, rather a whole set, the Maximally Entangled Set of states (MES), which we recently introduced, is required. This set has on the one hand the property that any state outside of this set can be obtained via LOCC from one of the states within the set and on the other hand, no state in the set can be obtained from any other state via LOCC. Recently, we studied LOCC transformations among pure multipartite states and derived the MES for three and generic four qubit states. Here, we consider the non-generic four qubit states and analyze their properties regarding local transformations. As already the most coarse grained classification, due to Stochastic LOCC (SLOCC), of four qubit states is much richer than in case of three qubits, the investigation of possible LOCC transformations is correspondingly more difficult. We prove that most SLOCC classes show a similar behavior as the generic states, however we also identify here three classes with very distinct properties. The first consists of the GHZ and W class, where any state can be transformed into some other state non-trivially. In particular, there exists no isolation. On the other hand, there also exist classes where all states are isolated. Last but not least we identify an additional class of states, whose transformation properties differ drastically from all the other classes. Although the possibility of transforming states into local-unitary inequivalent states by LOCC turns out to be very rare, we identify those states (with exception of the latter class) which are in the MES and those, which can be obtained (transformed) non-trivially from (into) other states
Maximum independent set on diluted triangular lattices
NASA Astrophysics Data System (ADS)
Fay, C. W., IV; Liu, J. W.; Duxbury, P. M.
2006-05-01
Core percolation and maximum independent set on random graphs have recently been characterized using the methods of statistical physics. Here we present a statistical physics study of these problems on bond diluted triangular lattices. Core percolation critical behavior is found to be consistent with the standard percolation values, though there are strong finite size effects. A transfer matrix method is developed and applied to find accurate values of the density and degeneracy of the maximum independent set on lattices of limited width but large length. An extrapolation of these results to the infinite lattice limit yields high precision results, which are tabulated. These results are compared to results found using both vertex based and edge based local probability recursion algorithms, which have proven useful in the analysis of hard computational problems, such as the satisfiability problem.
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J
2009-01-01
The authors consider the problem of counting the number of independent sets or the partition function of a hard-core model in a graph. The problem in general is computationally hard (P hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm can be used to compute fixed points of the Bethe approximation; however, BP is not always guarantee to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed pont for any grapy. They find that their algorithm converges within a multiplicative error 1 + {var_epsilon} of a fixed point in {Omicron}(n{sup 2}E{sup -4} log{sup 3}(nE{sup -1})) iterations for any bounded degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error to the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach by Vhertkov and Chernyak, they establish that for any bounded graph with large enough girth, the error is {Omicron}(n{sup -{gamma}}) for some {gamma} > 0. As an application, they find that for random 3-regular graph, Bethe approximation of log-partition function (log of the number of independent sets) is within o(1) of corret log-partition - this is quite surprising as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating error in Bethe approximation using novel combinatorial techniques.
An inability to set independent attentional control settings by hemifield.
Becker, Mark W; Ravizza, Susan M; Peltier, Chad
2015-11-01
Recent evidence suggests that people can simultaneously activate attentional control setting for two distinct colors. However, it is unclear whether both attentional control settings must operate globally across the visual field or whether each can be constrained to a particular spatial location. Using two different paradigms, we investigated participants' ability to apply independent color attentional control settings to distinct regions of space. In both experiments, participants were told to identify red letters in one hemifield and green letters in the opposite hemifield. Additionally, some trials used a "relevant distractor"-a letter that matched the opposite side's target color. In Experiment 1, eight letters appeared (four per hemifield) simultaneously for a brief amount of time and then were masked. Relevant distractors increased the error rate and resulted in a greater number of distractor intrusions than irrelevant distractors. Similar results were observed in Experiment 2 in which red and green targets were presented in two rapid serial visual presentation streams. Relevant distractors were found to produce an attentional blink similar in magnitude to an actual target. The results of both experiments suggest that letters matching either attentional control setting were selected by attention and were processed as if they were targets, providing strong evidence that both attentional control settings were applied globally, rather than being constrained to a particular location. PMID:26220268
Maximizing Social Model Principles in Residential Recovery Settings
Polcin, Douglas; Mericle, Amy; Howell, Jason; Sheridan, Dave; Christensen, Jeff
2014-01-01
Abstract Peer support is integral to a variety of approaches to alcohol and drug problems. However, there is limited information about the best ways to facilitate it. The “social model” approach developed in California offers useful suggestions for facilitating peer support in residential recovery settings. Key principles include using 12-step or other mutual-help group strategies to create and facilitate a recovery environment, involving program participants in decision making and facility governance, using personal recovery experience as a way to help others, and emphasizing recovery as an interaction between the individual and their environment. Although limited in number, studies have shown favorable outcomes for social model programs. Knowledge about social model recovery and how to use it to facilitate peer support in residential recovery homes varies among providers. This article presents specific, practical suggestions for enhancing social model principles in ways that facilitate peer support in a range of recovery residences. PMID:25364996
Maximizing social model principles in residential recovery settings.
Polcin, Douglas; Mericle, Amy; Howell, Jason; Sheridan, Dave; Christensen, Jeff
2014-01-01
Peer support is integral to a variety of approaches to alcohol and drug problems. However, there is limited information about the best ways to facilitate it. The "social model" approach developed in California offers useful suggestions for facilitating peer support in residential recovery settings. Key principles include using 12-step or other mutual-help group strategies to create and facilitate a recovery environment, involving program participants in decision making and facility governance, using personal recovery experience as a way to help others, and emphasizing recovery as an interaction between the individual and their environment. Although limited in number, studies have shown favorable outcomes for social model programs. Knowledge about social model recovery and how to use it to facilitate peer support in residential recovery homes varies among providers. This article presents specific, practical suggestions for enhancing social model principles in ways that facilitate peer support in a range of recovery residences.
Speeding up Growth: Selection for Mass-Independent Maximal Metabolic Rate Alters Growth Rates.
Downs, Cynthia J; Brown, Jessi L; Wone, Bernard W M; Donovan, Edward R; Hayes, Jack P
2016-03-01
Investigations into relationships between life-history traits, such as growth rate and energy metabolism, typically focus on basal metabolic rate (BMR). In contrast, investigators rarely examine maximal metabolic rate (MMR) as a relevant metric of energy metabolism, even though it indicates the maximal capacity to metabolize energy aerobically, and hence it might also be important in trade-offs. We studied the relationship between energy metabolism and growth in mice (Mus musculus domesticus Linnaeus) selected for high mass-independent metabolic rates. Selection for high mass-independent MMR increased maximal growth rate, increased body mass at 20 weeks of age, and generally altered growth patterns in both male and female mice. In contrast, there was little evidence that the correlated response in mass-adjusted BMR altered growth patterns. The relationship between mass-adjusted MMR and growth rate indicates that MMR is an important mediator of life histories. Studies investigating associations between energy metabolism and life histories should consider MMR because it is potentially as important in understanding life history as BMR.
Agreement Measure Comparisons between Two Independent Sets of Raters.
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W., Jr.
1997-01-01
Describes a FORTRAN software program that calculates the probability of an observed difference between agreement measures obtained from two independent sets of raters. An example illustrates the use of the DIFFER program in evaluating undergraduate essays. (Author/SLD)
State-independent contextuality sets for a qutrit
NASA Astrophysics Data System (ADS)
Xu, Zhen-Peng; Chen, Jing-Ling; Su, Hong-Yi
2015-09-01
We present a generalized set of complex rays for a qutrit in terms of parameter q =e i 2 π / k, a k-th root of unity. Remarkably, when k = 2 , 3, the set reduces to two well-known state-independent contextuality (SIC) sets: the Yu-Oh set and the Bengtsson-Blanchfield-Cabello set. Based on the Ramanathan-Horodecki criterion and the violation of a noncontextuality inequality, we have proven that the sets with k = 3 m and k = 4 are SIC sets, while the set with k = 5 is not. Our generalized set of rays will theoretically enrich the study of SIC proofs, and stimulate the novel application to quantum information processing.
Influence maximization in social networks under an independent cascade-based model
NASA Astrophysics Data System (ADS)
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who make the most of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. When more users became involved in the discussions, users balanced their own opinions and those of their neighbors. The number of users who did not change positive opinions was used to determine positive influence. Corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely, Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
Downs, Cynthia J.; Brown, Jessi L.; Wone, Bernard; Donovan, Edward R.; Hunter, Kenneth; Hayes, Jack P.
2013-01-01
Both appropriate metabolic rates and sufficient immune function are essential for survival. Consequently, eco-immunologists have hypothesized that animals may experience trade-offs between metabolic rates and immune function. Previous work has focused on how basal metabolic rate (BMR) may trade-off with immune function, but maximal metabolic rate (MMR), the upper limit to aerobic activity, might also trade-off with immune function. We used mice artificially selected for high mass-independent MMR to test for trade-offs with immune function. We assessed (i) innate immune function by quantifying cytokine production in response to injection with lipopolysaccharide and (ii) adaptive immune function by measuring antibody production in response to injection with keyhole limpet haemocyanin. Selection for high mass-independent MMR suppressed innate immune function, but not adaptive immune function. However, analyses at the individual level also indicate a negative correlation between MMR and adaptive immune function. By contrast BMR did not affect immune function. Evolutionarily, natural selection may favour increasing MMR to enhance aerobic performance and endurance, but the benefits of high MMR may be offset by impaired immune function. This result could be important in understanding the selective factors acting on the evolution of metabolic rates. PMID:23303541
Existence of independent [1, 2]-sets in caterpillars
NASA Astrophysics Data System (ADS)
Santoso, Eko Budi; Marcelo, Reginaldo M.
2016-02-01
Given a graph G, a subset S ⊆ V (G) is an independent [1, 2]-set if no two vertices in S are adjacent and for every vertex ν ∈ V (G)/S, 1 ≤ |N(ν) ∩ S| ≤ 2, that is, every vertex ν ∈ V (G)/S is adjacent to at least one but not more than two vertices in S. In this paper, we discuss the existence of independent [1, 2]-sets in a family of trees called caterpillars.
Luthy, Sarah K; Marinkovic, Aleksandar; Weiner, Daniel J
2011-06-01
High-frequency chest compression (HFCC) is a therapy for cystic fibrosis (CF). We hypothesized that the resonant frequency (f(res)), as measured by impulse oscillometry, could be used to determine what HFCC vest settings produce maximal airflow or volume in pediatric CF patients. In 45 subjects, we studied: f(res), HFCC vest frequencies that subjects used (f(used)), and the HFCC vest frequencies that generated the greatest volume (f(vol)) and airflow (f(flow)) changes as measured by pneumotachometer. Median f(used) for 32 subjects was 14 Hz (range, 6-30). The rank order of the three most common f(used) was 15 Hz (28%) and 12 Hz (21%); three frequencies tied for third: 10, 11, and 14 Hz (5% each). Median f(res) for 43 subjects was 20.30 Hz (range, 7.85-33.65). Nineteen subjects underwent vest-tuning to determine f(vol) and f(flow). Median f(vol) was 8 Hz (range, 6-30). The rank order of the three most common f(vol) was: 8 Hz (42%), 6 Hz (32%), and 10 Hz (21%). Median f(flow) was 26 Hz (range, 8-30). The rank order of the three most common f(flow) was: 30 Hz (26%) and 28 Hz (21%); three frequencies tied for third: 8, 14, and 18 Hz (11% each). There was no correlation between f(used) and f(flow) (r(2) = -0.12) or f(vol) (r(2) = 0.031). There was no correlation between f(res) and f(flow) (r(2) = 0.19) or f(vol) (r(2) = 0.023). Multivariable analysis showed no independent variables were predictive of f(flow) or f(vol). Vest-tuning may be required to optimize clinical utility of HFCC. Multiple HFCC frequencies may need to be used to incorporate f(flow) and f(vol).
Balance between Noise and Information Flow Maximizes Set Complexity of Network Dynamics
Mäki-Marttunen, Tuomo; Kesseli, Juha; Nykter, Matti
2013-01-01
Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We justify this finding with a use of another measure of complexity, namely, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-to-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical the intrinsic noise required for the tuning is smaller and thus also has the smallest effect in terms of the information processing in the system. Our results suggest that the maximization of complexity near to the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content. PMID:23516395
San Martín, René; Appelbaum, Lawrence G; Pearson, John M; Huettel, Scott A; Woldorff, Marty G
2013-04-17
Success in many decision-making scenarios depends on the ability to maximize gains and minimize losses. Even if an agent knows which cues lead to gains and which lead to losses, that agent could still make choices yielding suboptimal rewards. Here, by analyzing event-related potentials (ERPs) recorded in humans during a probabilistic gambling task, we show that individuals' behavioral tendencies to maximize gains and to minimize losses are associated with their ERP responses to the receipt of those gains and losses, respectively. We focused our analyses on ERP signals that predict behavioral adjustment: the frontocentral feedback-related negativity (FRN) and two P300 (P3) subcomponents, the frontocentral P3a and the parietal P3b. We found that, across participants, gain maximization was predicted by differences in amplitude of the P3b for suboptimal versus optimal gains (i.e., P3b amplitude difference between the least good and the best gains). Conversely, loss minimization was predicted by differences in the P3b amplitude to suboptimal versus optimal losses (i.e., difference between the worst and the least bad losses). Finally, we observed that the P3a and P3b, but not the FRN, predicted behavioral adjustment on subsequent trials, suggesting a specific adaptive mechanism by which prior experience may alter ensuing behavior. These findings indicate that individual differences in gain maximization and loss minimization are linked to individual differences in rapid neural responses to monetary outcomes.
PMCR-Miner: parallel maximal confident association rules miner algorithm for microarray data set.
Zakaria, Wael; Kotb, Yasser; Ghaleb, Fayed F M
2015-01-01
The MCR-Miner algorithm is aimed to mine all maximal high confident association rules form the microarray up/down-expressed genes data set. This paper introduces two new algorithms: IMCR-Miner and PMCR-Miner. The IMCR-Miner algorithm is an extension of the MCR-Miner algorithm with some improvements. These improvements implement a novel way to store the samples of each gene into a list of unsigned integers in order to benefit using the bitwise operations. In addition, the IMCR-Miner algorithm overcomes the drawbacks faced by the MCR-Miner algorithm by setting some restrictions to ignore repeated comparisons. The PMCR-Miner algorithm is a parallel version of the new proposed IMCR-Miner algorithm. The PMCR-Miner algorithm is based on shared-memory systems and task parallelism, where no time is needed in the process of sharing and combining data between processors. The experimental results on real microarray data sets show that the PMCR-Miner algorithm is more efficient and scalable than the counterparts.
Beyond Maximum Independent Set: AN Extended Model for Point-Feature Label Placement
NASA Astrophysics Data System (ADS)
Haunert, Jan-Henrik; Wolff, Alexander
2016-06-01
Map labeling is a classical problem of cartography that has frequently been approached by combinatorial optimization. Given a set of features in the map and for each feature a set of label candidates, a common problem is to select an independent set of labels (that is, a labeling without label-label overlaps) that contains as many labels as possible and at most one label for each feature. To obtain solutions of high cartographic quality, the labels can be weighted and one can maximize the total weight (rather than the number) of the selected labels. We argue, however, that when maximizing the weight of the labeling, interdependences between labels are insufficiently addressed. Furthermore, in a maximum-weight labeling, the labels tend to be densely packed and thus the map background can be occluded too much. We propose extensions of an existing model to overcome these limitations. Since even without our extensions the problem is NP-hard, we cannot hope for an efficient exact algorithm for the problem. Therefore, we present a formalization of our model as an integer linear program (ILP). This allows us to compute optimal solutions in reasonable time, which we demonstrate for randomly generated instances.
Zarzoso, Vicente; Comon, Pierre
2010-02-01
Independent component analysis (ICA) aims at decomposing an observed random vector into statistically independent variables. Deflation-based implementations, such as the popular one-unit FastICA algorithm and its variants, extract the independent components one after another. A novel method for deflationary ICA, referred to as RobustICA, is put forward in this paper. This simple technique consists of performing exact line search optimization of the kurtosis contrast function. The step size leading to the global maximum of the contrast along the search direction is found among the roots of a fourth-degree polynomial. This polynomial rooting can be performed algebraically, and thus at low cost, at each iteration. Among other practical benefits, RobustICA can avoid prewhitening and deals with real- and complex-valued mixtures of possibly noncircular sources alike. The absence of prewhitening improves asymptotic performance. The algorithm is robust to local extrema and shows a very high convergence speed in terms of the computational cost required to reach a given source extraction quality, particularly for short data records. These features are demonstrated by a comparative numerical analysis on synthetic data. RobustICA's capabilities in processing real-world data involving noncircular complex strongly super-Gaussian sources are illustrated by the biomedical problem of atrial activity (AA) extraction in atrial fibrillation (AF) electrocardiograms (ECGs), where it outperforms an alternative ICA-based technique.
Milder, David A; Sutherland, Emily J; Gandevia, Simon C; McNulty, Penelope A
2014-01-01
The repetitive discharges required to produce a sustained muscle contraction results in activity-dependent hyperpolarization of the motor axons and a reduction in the force-generating capacity of the muscle. We investigated the relationship between these changes in the adductor pollicis muscle and the motor axons of its ulnar nerve supply, and the reproducibility of these changes. Ten subjects performed a 1-min maximal voluntary contraction. Activity-dependent changes in axonal excitability were measured using threshold tracking with electrical stimulation at the wrist; changes in the muscle were assessed as evoked and voluntary electromyography (EMG) and isometric force. Separate components of axonal excitability and muscle properties were tested at 5 min intervals after the sustained contraction in 5 separate sessions. The current threshold required to produce the target muscle action potential increased immediately after the contraction by 14.8% (p<0.05), reflecting decreased axonal excitability secondary to hyperpolarization. This was not correlated with the decline in amplitude of muscle force or evoked EMG. A late reversal in threshold current after the initial recovery from hyperpolarization peaked at -5.9% at ∼35 min (p<0.05). This pattern was mirrored by other indices of axonal excitability revealing a previously unreported depolarization of motor axons in the late recovery period. Measures of axonal excitability were relatively stable at rest but less so after sustained activity. The coefficient of variation (CoV) for threshold current increase was higher after activity (CoV 0.54, p<0.05) whereas changes in voluntary (CoV 0.12) and evoked twitch (CoV 0.15) force were relatively stable. These results demonstrate that activity-dependent changes in motor axon excitability are unlikely to contribute to concomitant changes in the muscle after sustained activity in healthy people. The variability in axonal excitability after sustained activity suggests that
Wone, B W M; Madsen, P; Donovan, E R; Labocha, M K; Sears, M W; Downs, C J; Sorensen, D A; Hayes, J P
2015-04-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR, and BMR was slightly, but not significantly, higher (2.5%). Compared with controls, MMR was significantly higher (5.3%) in antagonistically selected lines, and BMR was slightly, but not significantly, lower (4.2%). Analysis of breeding values revealed no positive genetic trend for elevated BMR in high-MMR lines. A weak positive genetic correlation was detected between MMR and BMR. That weak positive genetic correlation supports the aerobic capacity model for the evolution of endothermy in the sense that it fails to falsify a key model assumption. Overall, the results suggest that at least in these mice there is significant capacity for independent evolution of metabolic traits. Whether that is true in the ancestral animals that evolved endothermy remains an important but unanswered question.
Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.
Nath, Abhigyan; Subbiah, Karthikeyan
2015-12-01
Lipocalins are short in sequence length and perform several important biological functions. These proteins are having less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time consuming process. The computational methods based on the sequence similarity for allocating putative members to this family are also far elusive due to the low sequence similarity existing among the members of this family. Consequently, the machine learning methods become a viable alternative for their prediction by using the underlying sequence/structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. A near perfect learning can be achieved by training the model with diverse types of input instances belonging to the different regions of the entire input space. Furthermore, the prediction performance can be improved through balancing the training set as the imbalanced data sets will tend to produce the prediction bias towards majority class and its sub-classes. This paper is aimed to achieve (i) the high generalization ability without any classification bias through the diversified and balanced training sets as well as (ii) enhanced the prediction accuracy by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we have first used the unsupervised Kmeans clustering algorithm to create diversified clusters of input patterns and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, probability based classifier fusion scheme was applied on boosted random forest algorithm (which produced greater sensitivity) and K nearest neighbour algorithm (which produced greater specificity) to achieve the enhanced predictive performance
Independence of the uniformity principle from Church's thesis in intuitionistic set theory
NASA Astrophysics Data System (ADS)
Khakhanyan, V. Kh
2013-12-01
We prove the independence of the strong uniformity principle from Church's thesis with choice in intuitionistic set theory with the axiom of extensionality extended by Markov's principle and the double complement for sets.
Brigantic, R T; Roggemann, M C; Welsh, B M; Bauer, K W
1998-02-10
We present the results of research aimed at optimizing adaptive-optics closed-loop bandwidth settings to maximize imaging-system performance. The optimum closed-loop bandwidth settings are determined as a function of target-object light levels and atmospheric seeing conditions. Our work shows that, for bright objects, the optimum closed-loop bandwidth is near the Greenwood frequency. However, for dim objects without the use of a laser beacon the preferred closed-loop bandwidth settings are a small fraction of the Greenwood frequency. In addition, under low light levels selection of the proper closed-loop bandwidth is more critical for achieving maximum performance than it is under high light levels. We also present a strategy for selecting the closed-loop bandwidth to provide robust system performance for different target-object light levels.
Brodsky, Stanley J.; Wu, Xing-Gang; /SLAC /Chongqing U.
2012-02-16
A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The extended renormalization group equations, which express the invariance of physical observables under both the renormalization scale- and scheme-parameter transformations, provide a convenient way for estimating the scale- and scheme-dependence of the physical process. In this paper, we present a solution for the scale-equation of the extended renormalization group equations at the four-loop level. Using the principle of maximum conformality (PMC)/Brodsky-Lepage-Mackenzie (BLM) scale-setting method, all non-conformal {beta}{sub i} terms in the perturbative expansion series can be summed into the running coupling, and the resulting scale-fixed predictions are independent of the renormalization scheme. Different schemes lead to different effective PMC/BLM scales, but the final results are scheme independent. Conversely, from the requirement of scheme independence, one not only can obtain scheme-independent commensurate scale relations among different observables, but also determine the scale displacements among the PMC/BLM scales which are derived under different schemes. In principle, the PMC/BLM scales can be fixed order-by-order, and as a useful reference, we present a systematic and scheme-independent procedure for setting PMC/BLM scales up to NNLO. An explicit application for determining the scale setting of R{sub e{sup +}e{sup -}}(Q) up to four loops is presented. By using the world average {alpha}{sub s}{sup {ovr MS}}(MZ) = 0.1184 {+-} 0.0007, we obtain the asymptotic scale for the 't Hooft associated with the {ovr MS} scheme, {Lambda}{sub {ovr MS}}{sup 'tH} = 245{sub -10}{sup +9} MeV, and the asymptotic scale for the conventional {ovr MS} scheme, {Lambda}{sub {ovr MS}} = 213{sub -8}{sup +19} MeV.
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
Martín, René San; Appelbaum, Lawrence G.; Pearson, John M.; Huettel, Scott A.; Woldorff, Marty G.
2013-01-01
Success in many decision-making scenarios depends on the ability to maximize gains and minimize losses. Even if an agent knows which cues lead to gains and which lead to losses, that agent could still make choices yielding suboptimal rewards. Here, by analyzing event-related potentials (ERPs) recorded in humans during a probabilistic gambling task, we show that individuals’ behavioral tendencies to maximize gains and to minimize losses are associated with their ERP responses to the receipt of those gains and losses, respectively. We focused our analyses on ERP signals that predict behavioral adjustment: the fronto-central feedback-related negativity (FRN) and two P300 (P3) subcomponents: the fronto-central P3a and the parietal P3b. We found that, across participants, gain-maximization was predicted by differences in amplitude of the P3b for suboptimal versus optimal gains (i.e., P3b amplitude difference between the least good and the best possible gains). Conversely, loss-minimization was predicted by differences in the P3b amplitude to suboptimal versus optimal losses (i.e., difference between the worst and the least bad losses). Finally, we observed that the P3a and P3b, but not the FRN, predicted behavioral adjustment on subsequent trials, suggesting a specific adaptive mechanism by which prior experience may alter ensuing behavior. These findings indicate that individual differences in gain-maximization and loss-minimization are linked to individual differences in rapid neural responses to monetary outcomes. PMID:23595758
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of temperature dependent calibration data set of a six component balance.
NASA Astrophysics Data System (ADS)
Kaloshin, Vadim; Saprykina, Maria
2012-11-01
The famous ergodic hypothesis suggests that for a typical Hamiltonian on a typical energy surface nearly all trajectories are dense. KAM theory disproves it. Ehrenfest (The Conceptual Foundations of the Statistical Approach in Mechanics. Ithaca, NY: Cornell University Press, 1959) and Birkhoff (Collected Math Papers. Vol 2, New York: Dover, pp 462-465, 1968) stated the quasi-ergodic hypothesis claiming that a typical Hamiltonian on a typical energy surface has a dense orbit. This question is wide open. Herman (Proceedings of the International Congress of Mathematicians, Vol II (Berlin, 1998). Doc Math 1998, Extra Vol II, Berlin: Int Math Union, pp 797-808, 1998) proposed to look for an example of a Hamiltonian near {H_0(I)= < I, I rangle/2} with a dense orbit on the unit energy surface. In this paper we construct a Hamiltonian {H_0(I)+\\varepsilon H_1(θ , I , \\varepsilon)} which has an orbit dense in a set of maximal Hausdorff dimension equal to 5 on the unit energy surface.
Pal, Karoly F.; Vertesi, Tamas
2010-08-15
The I{sub 3322} inequality is the simplest bipartite two-outcome Bell inequality beyond the Clauser-Horne-Shimony-Holt (CHSH) inequality, consisting of three two-outcome measurements per party. In the case of the CHSH inequality the maximal quantum violation can already be attained with local two-dimensional quantum systems; however, there is no such evidence for the I{sub 3322} inequality. In this paper a family of measurement operators and states is given which enables us to attain the maximum quantum value in an infinite-dimensional Hilbert space. Further, it is conjectured that our construction is optimal in the sense that measuring finite-dimensional quantum systems is not enough to achieve the true quantum maximum. We also describe an efficient iterative algorithm for computing quantum maximum of an arbitrary two-outcome Bell inequality in any given Hilbert space dimension. This algorithm played a key role in obtaining our results for the I{sub 3322} inequality, and we also applied it to improve on our previous results concerning the maximum quantum violation of several bipartite two-outcome Bell inequalities with up to five settings per party.
Maximally Nonlocal Theories Cannot Be Maximally Random
NASA Astrophysics Data System (ADS)
de la Torre, Gonzalo; Hoban, Matty J.; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio
2015-04-01
Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to study how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what do the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on and show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but also put fundamental restrictions on device-independent protocols based on the no-signaling principle.
Maximally nonlocal theories cannot be maximally random.
de la Torre, Gonzalo; Hoban, Matty J; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio
2015-04-24
Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to study how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what do the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on and show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but also put fundamental restrictions on device-independent protocols based on the no-signaling principle. PMID:25955039
NASA Technical Reports Server (NTRS)
Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)
2002-01-01
The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application is measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information using the multiple data sets in concert as compared to the statistical errors of the spectra information when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.
Reliable pre-eclampsia pathways based on multiple independent microarray data sets.
Kawasaki, Kaoru; Kondoh, Eiji; Chigusa, Yoshitsugu; Ujita, Mari; Murakami, Ryusuke; Mogami, Haruta; Brown, J B; Okuno, Yasushi; Konishi, Ikuo
2015-02-01
Pre-eclampsia is a multifactorial disorder characterized by heterogeneous clinical manifestations. Gene expression profiling of preeclamptic placenta have provided different and even opposite results, partly due to data compromised by various experimental artefacts. Here we aimed to identify reliable pre-eclampsia-specific pathways using multiple independent microarray data sets. Gene expression data of control and preeclamptic placentas were obtained from Gene Expression Omnibus. Single-sample gene-set enrichment analysis was performed to generate gene-set activation scores of 9707 pathways obtained from the Molecular Signatures Database. Candidate pathways were identified by t-test-based screening using data sets, GSE10588, GSE14722 and GSE25906. Additionally, recursive feature elimination was applied to arrive at a further reduced set of pathways. To assess the validity of the pre-eclampsia pathways, a statistically-validated protocol was executed using five data sets including two independent other validation data sets, GSE30186, GSE44711. Quantitative real-time PCR was performed for genes in a panel of potential pre-eclampsia pathways using placentas of 20 women with normal or severe preeclamptic singleton pregnancies (n = 10, respectively). A panel of ten pathways were found to discriminate women with pre-eclampsia from controls with high accuracy. Among these were pathways not previously associated with pre-eclampsia, such as the GABA receptor pathway, as well as pathways that have already been linked to pre-eclampsia, such as the glutathione and CDKN1C pathways. mRNA expression of GABRA3 (GABA receptor pathway), GCLC and GCLM (glutathione metabolic pathway), and CDKN1C was significantly reduced in the preeclamptic placentas. In conclusion, ten accurate and reliable pre-eclampsia pathways were identified based on multiple independent microarray data sets. A pathway-based classification may be a worthwhile approach to elucidate the pathogenesis of pre-eclampsia.
Holocene sea level variations on the basis of integration of independent data sets
Sahagian, D.; Berkman, P. . Dept. of Geological Sciences and Byrd Polar Research Center)
1992-01-01
Variations in sea level through earth history have occurred at a wide variety of time scales. Sea level researchers have attacked the problem of measuring these sea level changes through a variety of approaches, each relevant only to the time scale in question, and usually only relevant to the specific locality from which a specific type of data are derived. There is a plethora of different data types that can and have been used (locally) for the measurement of Holocene sea level variations. The problem of merging different data sets for the purpose of constructing a global eustatic sea level curve for the Holocene has not previously been adequately addressed. The authors direct the efforts to that end. Numerous studies have been published regarding Holocene sea level changes. These have involved exposed fossil reef elevations, elevation of tidal deltas, elevation of depth of intertidal peat deposits, caves, tree rings, ice cores, moraines, eolian dune ridges, marine-cut terrace elevations, marine carbonate species, tide gauges, and lake level variations. Each of these data sets is based on particular set of assumptions, and is valid for a specific set of environments. In order to obtain the most accurate possible sea level curve for the Holocene, these data sets must be merged so that local and other influences can be filtered out of each data set. Since each data set involves very different measurements, each is scaled in order to define the sensitivity of the proxy measurement parameter to sea level, including error bounds. This effectively determines the temporal and spatial resolution of each data set. The level of independence of data sets is also quantified, in order to rule out the possibility of a common non-eustatic factor affecting more than one variety of data. The Holocene sea level curve is considered to be independent of other factors affecting the proxy data, and is taken to represent the relation between global ocean water and basin volumes.
Tesch, Carmen M; de Vivie-Riedle, Regina
2004-12-22
The phase of quantum gates is one key issue for the implementation of quantum algorithms. In this paper we first investigate the phase evolution of global molecular quantum gates, which are realized by optimally shaped femtosecond laser pulses. The specific laser fields are calculated using the multitarget optimal control algorithm, our modification of the optimal control theory relevant for application in quantum computing. As qubit system we use vibrational modes of polyatomic molecules, here the two IR-active modes of acetylene. Exemplarily, we present our results for a Pi gate, which shows a strong dependence on the phase, leading to a significant decrease in quantum yield. To correct for this unwanted behavior we include pressure on the quantum phase in our multitarget approach. In addition the accuracy of these phase corrected global quantum gates is enhanced. Furthermore we could show that in our molecular approach phase corrected quantum gates and basis set independence are directly linked. Basis set independence is also another property highly required for the performance of quantum algorithms. By realizing the Deutsch-Jozsa algorithm in our two qubit molecular model system, we demonstrate the good performance of our phase corrected and basis set independent quantum gates.
Meta-analysis of pathway enrichment: combining independent and dependent omics data sets.
Kaever, Alexander; Landesfeind, Manuel; Feussner, Kirstin; Morgenstern, Burkhard; Feussner, Ivo; Meinicke, Peter
2014-01-01
A major challenge in current systems biology is the combination and integrative analysis of large data sets obtained from different high-throughput omics platforms, such as mass spectrometry based Metabolomics and Proteomics or DNA microarray or RNA-seq-based Transcriptomics. Especially in the case of non-targeted Metabolomics experiments, where it is often impossible to unambiguously map ion features from mass spectrometry analysis to metabolites, the integration of more reliable omics technologies is highly desirable. A popular method for the knowledge-based interpretation of single data sets is the (Gene) Set Enrichment Analysis. In order to combine the results from different analyses, we introduce a methodical framework for the meta-analysis of p-values obtained from Pathway Enrichment Analysis (Set Enrichment Analysis based on pathways) of multiple dependent or independent data sets from different omics platforms. For dependent data sets, e.g. obtained from the same biological samples, the framework utilizes a covariance estimation procedure based on the nonsignificant pathways in single data set enrichment analysis. The framework is evaluated and applied in the joint analysis of Metabolomics mass spectrometry and Transcriptomics DNA microarray data in the context of plant wounding. In extensive studies of simulated data set dependence, the introduced correlation could be fully reconstructed by means of the covariance estimation based on pathway enrichment. By restricting the range of p-values of pathways considered in the estimation, the overestimation of correlation, which is introduced by the significant pathways, could be reduced. When applying the proposed methods to the real data sets, the meta-analysis was shown not only to be a powerful tool to investigate the correlation between different data sets and summarize the results of multiple analyses but also to distinguish experiment-specific key pathways.
ERIC Educational Resources Information Center
Stephenson, Margaret E.
2000-01-01
Discusses the four planes of development and the periods of creation and crystallization within each plane. Identifies the type of independence that should be achieved by the end of the first two planes of development. Maintains that it is through individual work on the environment that one achieves independence. (KB)
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy on a practical system with as fewer electrodes as possible is less concerned. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in music-induced emotion classification problem. Results of this study have evidently validated the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy in second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
ERIC Educational Resources Information Center
Trapp, Georgina; Giles-Corti, Billie; Martin, Karen; Timperio, Anna; Villanueva, Karen
2012-01-01
Background: Schools are an ideal setting in which to involve children in research. Yet for investigators wishing to work in these settings, there are few method papers providing insights into working efficiently in this setting. Objective: The aim of this paper is to describe the five strategies used to increase response rates, data quality and…
Cell Wall Invertase Promotes Fruit Set under Heat Stress by Suppressing ROS-Independent Cell Death.
Liu, Yong-Hua; Offler, Christina E; Ruan, Yong-Ling
2016-09-01
Reduced cell wall invertase (CWIN) activity has been shown to be associated with poor seed and fruit set under abiotic stress. Here, we examined whether genetically increasing native CWIN activity would sustain fruit set under long-term moderate heat stress (LMHS), an important factor limiting crop production, by using transgenic tomato (Solanum lycopersicum) with its CWIN inhibitor gene silenced and focusing on ovaries and fruits at 2 d before and after pollination, respectively. We found that the increase of CWIN activity suppressed LMHS-induced programmed cell death in fruits. Surprisingly, measurement of the contents of H2O2 and malondialdehyde and the activities of a cohort of antioxidant enzymes revealed that the CWIN-mediated inhibition on programmed cell death is exerted in a reactive oxygen species-independent manner. Elevation of CWIN activity sustained Suc import into fruits and increased activities of hexokinase and fructokinase in the ovaries in response to LMHS Compared to the wild type, the CWIN-elevated transgenic plants exhibited higher transcript levels of heat shock protein genes Hsp90 and Hsp100 in ovaries and HspII17.6 in fruits under LMHS, which corresponded to a lower transcript level of a negative auxin responsive factor IAA9 but a higher expression of the auxin biosynthesis gene ToFZY6 in fruits at 2 d after pollination. Collectively, the data indicate that CWIN enhances fruit set under LMHS through suppression of programmed cell death in a reactive oxygen species-independent manner that could involve enhanced Suc import and catabolism, HSP expression, and auxin response and biosynthesis. PMID:27462084
Lee, Wei-Ning; Qian, Zhen; Tosti, Christina L.; Brown, Truman R.; Metaxas, Dimitris N.; Konofagou, Elisa E.
2014-01-01
Myocardial Elastography (ME), a radio-frequency (RF) based speckle tracking technique, was employed in order to image the entire two-dimensional (2D) transmural deformation field in full view, and validated against tagged Magnetic Resonance Imaging (tMRI) in normal as well as reperfused (i.e., treated myocardial infarction (MI)) human left ventricles. RF ultrasound and tMRI frames were acquired at the papillary muscle level in 2D short-axis (SA) views at nominal frame rates of 136 (fps; real time) and 33 fps (electrocardiogram (ECG)-gated), respectively. In ultrasound, in-plane, 2D (lateral and axial) incremental displacements were iteratively estimated using one-dimensional (1D) cross-correlation and recorrelation techniques in a 2D search with a 1D matching kernel. In tMRI, cardiac motion was estimated by a template-matching algorithm on a 2D grid-shaped mesh. In both ME and tMRI, cumulative 2D displacements were estimated and then used to estimate 2D Lagrangian finite systolic strains, from which polar (i.e., radial and circumferential) strains, namely angle-independent measures, were further obtained through coordinate transformation. Principal strains, which are angle-independent and less centroid-dependent than polar strains, were also computed and imaged based on the 2D finite strains with a previously established strategy. Both qualitatively and quantitatively, angle-independent ME is shown to be capable of 1) estimating myocardial deformation in good agreement with tMRI estimates in a clinical setting and of 2) differentiating abnormal from normal myocardium in a full left-ventricular view. Finally, the principal strains are suggested to be an alternative diagnostic tool of detecting cardiac disease with the characteristics of their reduced centroid dependence. PMID:18952364
NASA Astrophysics Data System (ADS)
Hebenstreit, M.; Spee, C.; Kraus, B.
2016-01-01
Entanglement is the resource to overcome the restriction of operations to local operations assisted by classical communication (LOCC). The maximally entangled set (MES) of states is the minimal set of n -partite pure states with the property that any truly n -partite entangled pure state can be obtained deterministically via LOCC from some state in this set. Hence, this set contains the most useful states for applications. In this work, we characterize the MES for generic three-qutrit states. Moreover, we analyze which generic three-qutrit states are reachable (and convertible) under LOCC transformations. To this end, we study reachability via separable operations (SEP), a class of operations that is strictly larger than LOCC. Interestingly, we identify a family of pure states that can be obtained deterministically via SEP but not via LOCC. This gives an affirmative answer to the question of whether there is a difference between SEP and LOCC for transformations among pure states.
ERIC Educational Resources Information Center
Hume, Kara; Plavnick, Joshua; Odom, Samuel L.
2012-01-01
Strategies that promote the independent demonstration of skills across educational settings are critical for improving the accessibility of general education settings for students with ASD. This research assessed the impact of an individual work system on the accuracy of task completion and level of adult prompting across educational setting.…
A Method Defining a Limited Set of Character-Strings with Maximal Coverage of a Sample of Text.
ERIC Educational Resources Information Center
Hultgren, Jan; Larsson, Rolf
This is a progress report on a project attempting to design a system of compacting text for storage appropriate to disc oriented demand searching. After noting a number of previously designed methods of compression, it offers a tentative solution which couples a dictionary of most frequent character-strings with a set of variable-length codes. The…
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N; Kazerooni, Ella A
2009-08-01
The authors are developing a computer-aided detection system for pulmonary emboli (PE) in computed tomographic pulmonary angiography (CTPA) scans. The pulmonary vessel tree is extracted using a 3D expectation-maximization segmentation method based on the analysis of eigen-values of Hessian matrices at multiple scales. A parallel multiprescreening method is applied to the segmented vessels to identify volume of interests (VOIs) that contained suspicious PE. A linear discriminant analysis (LDA) classifier with feature selection is designed to reduce false positives (FPs). Features that characterize the contrast, gray level, and size of PE are extracted as input predictor variables to the LDA classifier. With the IRB approval, 59 CTPA PE cases were collected retrospectively from the patient files (UM cases). With access permission, 69 CTPA PE cases were randomly selected from the data set of the prospective investigation of pulmonary embolism diagnosis (PIOPED) II clinical trial. Extensive lung parenchymal or pleural diseases were present in 22/59 UM and 26/69 PIOPED cases. Experienced thoracic radiologists manually marked 595 and 800 PE as the reference standards in the UM and PIOPED data sets, respectively. PE occlusion of arteries ranged from 5% to 100%, with PE located from the main pulmonary artery to the subsegmental artery levels. Of the 595 PE identified in the UM cases, 245 and 350 PE were located in the subsegmental arteries and the more proximal arteries, respectively. The detection performance was assessed by free response ROC (FROC) analysis. The FROC analysis indicated that the PE detection system could achieve an overall sensitivity of 80% at 18.9 FPs/case for the PIOPED cases when the LDA classifier was trained with the UM cases. The test sensitivity with the UM cases was 80% at 22.6 FPs/cases when the LDA classifier was trained with the PIOPED cases. The detection performance depended on the arterial level where the PE was located and on the
ERIC Educational Resources Information Center
Dunlap, William R.; Iceman, Deanna J.
1985-01-01
This paper reports on the development and content validation of a set of seven assessment instruments designed to be used to estimate a student's current level of functioning in seven independent living skills areas. (Author/LMO)
Strategies for reducing large fMRI data sets for independent component analysis.
Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R
2006-06-01
In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
NASA Astrophysics Data System (ADS)
Douthett, Elwood (Jack) Moser, Jr.
1999-10-01
Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of a white set, minimum (maximum) weight of the a black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one- dimensional antiferromagnetic spin-1/2 Ising model. Next the goodness of continued fractions as applied to musical intervals (frequency ratios and their base 2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine what equal-tempered systems have intervals that most closely approximate the musical fifth (pn/ qn = log2(3/2)). The goodness of exponentiated convergents ( 2pn/qn~3/2 ) is also investigated. It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic Counterpart As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can
Testing Bell's inequality with cosmic photons: closing the setting-independence loophole.
Gallicchio, Jason; Friedman, Andrew S; Kaiser, David I
2014-03-21
We propose a practical scheme to use photons from causally disconnected cosmic sources to set the detectors in an experimental test of Bell's inequality. In current experiments, with settings determined by quantum random number generators, only a small amount of correlation between detector settings and local hidden variables, established less than a millisecond before each experiment, would suffice to mimic the predictions of quantum mechanics. By setting the detectors using pairs of quasars or patches of the cosmic microwave background, observed violations of Bell's inequality would require any such coordination to have existed for billions of years-an improvement of 20 orders of magnitude.
Wang, Hua; Schauer, Nicolas; Usadel, Bjoern; Frasse, Pierre; Zouine, Mohamed; Hernould, Michel; Latché, Alain; Pech, Jean-Claude; Fernie, Alisdair R; Bouzayen, Mondher
2009-05-01
Indole Acetic Acid 9 (IAA9) is a negative auxin response regulator belonging to the Aux/IAA transcription factor gene family whose downregulation triggers fruit set before pollination, thus giving rise to parthenocarpy. In situ hybridization experiments revealed that a tissue-specific gradient of IAA9 expression is established during flower development, the release of which upon pollination triggers the initiation of fruit development. Comparative transcriptome and targeted metabolome analysis uncovered important features of the molecular events underlying pollination-induced and pollination-independent fruit set. Comprehensive transcriptomic profiling identified a high number of genes common to both types of fruit set, among which only a small subset are dependent on IAA9 regulation. The fine-tuning of Aux/IAA and ARF genes and the downregulation of TAG1 and TAGL6 MADS box genes are instrumental in triggering the fruit set program. Auxin and ethylene emerged as the most active signaling hormones involved in the flower-to-fruit transition. However, while these hormones affected only a small number of transcriptional events, dramatic shifts were observed at the metabolic and developmental levels. The activation of photosynthesis and sucrose metabolism-related genes is an integral regulatory component of fruit set process. The combined results allow a far greater comprehension of the regulatory and metabolic events controlling early fruit development both in the presence and absence of pollination/fertilization. PMID:19435935
The Repulsive Lattice Gas, the Independent-Set Polynomial, and the Lovász Local Lemma
NASA Astrophysics Data System (ADS)
Scott, Alexander D.; Sokal, Alan D.
2005-03-01
We elucidate the close connection between the repulsive lattice gas in equilibrium statistical mechanics and the Lovász local lemma in probabilistic combinatorics. We show that the conclusion of the Lovász local lemma holds for dependency graph G and probabilities { p x } if and only if the independent-set polynomial for G is nonvanishing in the polydisc of radii { p x }. Furthermore, we show that the usual proof of the Lovász local lemma - which provides a sufficient condition for this to occur - corresponds to a simple inductive argument for the nonvanishing of the independent-set polynomial in a polydisc, which was discovered implicitly by Shearer(98) and explicitly by Dobrushin.(37,38) We also present some refinements and extensions of both arguments, including a generalization of the Lovász local lemma that allows for ``soft'' dependencies. In addition, we prove some general properties of the partition function of a repulsive lattice gas, most of which are consequences of the alternating-sign property for the Mayer coefficients. We conclude with a brief discussion of the repulsive lattice gas on countably infinite graphs.
2016-01-01
Reduced cell wall invertase (CWIN) activity has been shown to be associated with poor seed and fruit set under abiotic stress. Here, we examined whether genetically increasing native CWIN activity would sustain fruit set under long-term moderate heat stress (LMHS), an important factor limiting crop production, by using transgenic tomato (Solanum lycopersicum) with its CWIN inhibitor gene silenced and focusing on ovaries and fruits at 2 d before and after pollination, respectively. We found that the increase of CWIN activity suppressed LMHS-induced programmed cell death in fruits. Surprisingly, measurement of the contents of H2O2 and malondialdehyde and the activities of a cohort of antioxidant enzymes revealed that the CWIN-mediated inhibition on programmed cell death is exerted in a reactive oxygen species-independent manner. Elevation of CWIN activity sustained Suc import into fruits and increased activities of hexokinase and fructokinase in the ovaries in response to LMHS. Compared to the wild type, the CWIN-elevated transgenic plants exhibited higher transcript levels of heat shock protein genes Hsp90 and Hsp100 in ovaries and HspII17.6 in fruits under LMHS, which corresponded to a lower transcript level of a negative auxin responsive factor IAA9 but a higher expression of the auxin biosynthesis gene ToFZY6 in fruits at 2 d after pollination. Collectively, the data indicate that CWIN enhances fruit set under LMHS through suppression of programmed cell death in a reactive oxygen species-independent manner that could involve enhanced Suc import and catabolism, HSP expression, and auxin response and biosynthesis. PMID:27462084
Ali, Aamir; Veeranki, Sailaja Naga; Tyagi, Shweta
2014-01-01
MLL, the trithorax ortholog, is a well-characterized histone 3 lysine 4 methyltransferase that is crucial for proper regulation of the Hox genes during embryonic development. Chromosomal translocations, disrupting the Mll gene, lead to aggressive leukemia with poor prognosis. However, the functions of MLL in cellular processes like cell-cycle regulation are not well studied. Here we show that the MLL has a regulatory role during multiple phases of the cell cycle. RNAi-mediated knockdown reveals that MLL regulates S-phase progression and, proper segregation and cytokinesis during M phase. Using deletions and mutations, we narrow the cell-cycle regulatory role to the C subunit of MLL. Our analysis reveals that the transactivation domain and not the SET domain is important for the S-phase function of MLL. Surprisingly, disruption of MLL–WRAD interaction is sufficient to disrupt proper mitotic progression. These mitotic functions of WRAD are independent of SET domain of MLL and, therefore, define a new role of WRAD in subset of MLL functions. Finally, we address the overlapping and unique roles of the different SET family members in the cell cycle. PMID:24880690
Ali, Aamir; Veeranki, Sailaja Naga; Tyagi, Shweta
2014-07-01
MLL, the trithorax ortholog, is a well-characterized histone 3 lysine 4 methyltransferase that is crucial for proper regulation of the Hox genes during embryonic development. Chromosomal translocations, disrupting the Mll gene, lead to aggressive leukemia with poor prognosis. However, the functions of MLL in cellular processes like cell-cycle regulation are not well studied. Here we show that the MLL has a regulatory role during multiple phases of the cell cycle. RNAi-mediated knockdown reveals that MLL regulates S-phase progression and, proper segregation and cytokinesis during M phase. Using deletions and mutations, we narrow the cell-cycle regulatory role to the C subunit of MLL. Our analysis reveals that the transactivation domain and not the SET domain is important for the S-phase function of MLL. Surprisingly, disruption of MLL-WRAD interaction is sufficient to disrupt proper mitotic progression. These mitotic functions of WRAD are independent of SET domain of MLL and, therefore, define a new role of WRAD in subset of MLL functions. Finally, we address the overlapping and unique roles of the different SET family members in the cell cycle. PMID:24880690
10 Questions about Independent Reading
ERIC Educational Resources Information Center
Truby, Dana
2012-01-01
Teachers know that establishing a robust independent reading program takes more than giving kids a little quiet time after lunch. But how do they set up a program that will maximize their students' gains? Teachers have to know their students' reading levels inside and out, help them find just-right books, and continue to guide them during…
Resources and energetics determined dinosaur maximal size
McNab, Brian K.
2009-01-01
Some dinosaurs reached masses that were ≈8 times those of the largest, ecologically equivalent terrestrial mammals. The factors most responsible for setting the maximal body size of vertebrates are resource quality and quantity, as modified by the mobility of the consumer, and the vertebrate's rate of energy expenditure. If the food intake of the largest herbivorous mammals defines the maximal rate at which plant resources can be consumed in terrestrial environments and if that limit applied to dinosaurs, then the large size of sauropods occurred because they expended energy in the field at rates extrapolated from those of varanid lizards, which are ≈22% of the rates in mammals and 3.6 times the rates of other lizards of equal size. Of 2 species having the same energy income, the species that uses the most energy for mass-independent maintenance of necessity has a smaller size. The larger mass found in some marine mammals reflects a greater resource abundance in marine environments. The presumptively low energy expenditures of dinosaurs potentially permitted Mesozoic communities to support dinosaur biomasses that were up to 5 times those found in mammalian herbivores in Africa today. The maximal size of predatory theropods was ≈8 tons, which if it reflected the maximal capacity to consume vertebrates in terrestrial environments, corresponds in predatory mammals to a maximal mass less than a ton, which is what is observed. Some coelurosaurs may have evolved endothermy in association with the evolution of feathered insulation and a small mass. PMID:19581600
Parker, Sarah J; Rost, Hannes; Rosenberger, George; Collins, Ben C; Malmström, Lars; Amodei, Dario; Venkatraman, Vidya; Raedschelders, Koen; Van Eyk, Jennifer E; Aebersold, Ruedi
2015-10-01
Accurate knowledge of retention time (RT) in liquid chromatography-based mass spectrometry data facilitates peptide identification, quantification, and multiplexing in targeted and discovery-based workflows. Retention time prediction is particularly important for peptide analysis in emerging data-independent acquisition (DIA) experiments such as SWATH-MS. The indexed RT approach, iRT, uses synthetic spiked-in peptide standards (SiRT) to set RT to a unit-less scale, allowing for normalization of peptide RT between different samples and chromatographic set-ups. The obligatory use of SiRTs can be costly and complicates comparisons and data integration if standards are not included in every sample. Reliance on SiRTs also prevents the inclusion of archived mass spectrometry data for generation of the peptide assay libraries central to targeted DIA-MS data analysis. We have identified a set of peptide sequences that are conserved across most eukaryotic species, termed Common internal Retention Time standards (CiRT). In a series of tests to support the appropriateness of the CiRT-based method, we show: (1) the CiRT peptides normalized RT in human, yeast, and mouse cell lysate derived peptide assay libraries and enabled merging of archived libraries for expanded DIA-MS quantitative applications; (2) CiRTs predicted RT in SWATH-MS data within a 2-min margin of error for the majority of peptides; and (3) normalization of RT using the CiRT peptides enabled the accurate SWATH-MS-based quantification of 340 synthetic isotopically labeled peptides that were spiked into either human or yeast cell lysate. To automate and facilitate the use of these CiRT peptide lists or other custom user-defined internal RT reference peptides in DIA workflows, an algorithm was designed to automatically select a high-quality subset of datapoints for robust linear alignment of RT for use. Implementations of this algorithm are available for the OpenSWATH and Skyline platforms. Thus, CiRT peptides can
MAZZETTI, SCOTT A.; WOLFF, CHRISTOPHER; COLLINS, BRITTANY; KOLANKOWSKI, MICHAEL T.; WILKERSON, BRITTANY; OVERSTREET, MATTHEW; GRUBE, TROY
2011-01-01
With resistance exercise, greater intensity typically elicits increased energy expenditure, but heavier loads require that the lifter perform more sets of fewer repetitions, which alters the kilograms lifted per set. Thus, the effect of exercise-intensity on energy expenditure has yielded varying results, especially with explosive resistance exercise. This study was designed to examine the effect of exercise-intensity and kilograms/set on energy expenditure during explosive resistance exercise. Ten resistance-trained men (22±3.6 years; 84±6.4 kg, 180±5.1 cm, and 13±3.8 %fat) performed squat and bench press protocols once/week using different exercise-intensities including 48% (LIGHT-48), 60% (MODERATE-60), and 72% of 1-repetition-maximum (1-RM) (HEAVY-72), plus a no-exercise protocol (CONTROL). To examine the effects of kilograms/set, an additional protocol using 72% of 1-RM was performed (HEAVY-72MATCHED) with kilograms/set matched with LIGHT-48 and MODERATE-60. LIGHT-48 was 4 sets of 10 repetitions (4×10); MODERATE-60 4×8; HEAVY-72 5×5; and HEAVY-72MATCHED 4×6.5. Eccentric and concentric repetition speeds, ranges-of-motion, rest-intervals, and total kilograms were identical between protocols. Expired air was collected continuously throughout each protocol using a metabolic cart, [Blood lactate] using a portable analyzer, and bench press peak power were measured. Rates of energy expenditure were significantly greater (p≤0.05) with LIGHT-48 and HEAVY-72MATCHED than HEAVY-72 during squat (7.3±0.7; 6.9±0.6 > 6.1±0.7 kcal/min), bench press (4.8±0.3; 4.7±0.3 > 4.0±0.4 kcal/min), and +5min after (3.7±0.1; 3.7±0.2 > 3.3±0.3 kcal/min), but there were no significant differences in total kcal among protocols. Therefore, exercise-intensity may not effect energy expenditure with explosive contractions, but light loads (~50% of 1-RM) may be preferred because of higher rates of energy expenditure, and since heavier loading requires more sets with lower
ERIC Educational Resources Information Center
Velastegui, Pamela J.
2013-01-01
This hypothesis-generating case study investigates the naturally emerging roles of technology brokers and technology leaders in three independent schools in New York involving 92 school educators. A multiple and mixed method design utilizing Social Network Analysis (SNA) and fuzzy set Qualitative Comparative Analysis (FSQCA) involved gathering…
Pelletier, Alexandra; Sunthara, Gajen; Gujral, Nitin; Mittal, Vandna; Bourgeois, Fabienne C
2016-01-01
understanding what features should be built into the app. Phase 3 involved deployment of TaskList on a clinical floor at BCH. Lastly, Phase 4 gathered the lessons learned from the pilot to refine the guideline. Results Fourteen practical recommendations were identified to create the BCH Mobile Application Development Guideline to safeguard custom applications in hospital BYOD settings. The recommendations were grouped into four categories: (1) authentication and authorization, (2) data management, (3) safeguarding app environment, and (4) remote enforcement. Following the guideline, the TaskList app was developed and then was piloted with an inpatient ward team. Conclusions The Mobile Application Development guideline was created and used in the development of TaskList. The guideline is intended for use by developers when addressing integration with hospital information systems, deploying apps in BYOD health care settings, and meeting compliance standards, such as Health Insurance Portability and Accountability Act (HIPAA) regulations. PMID:27169345
Receptor-independent 4D-QSAR analysis of a set of norstatine derived inhibitors of HIV-1 protease.
Senese, Craig L; Hopfinger, A J
2003-01-01
A training set of 27 norstatine derived inhibitors of HIV-1 protease, based on the 3(S)-amino-2(S)-hydroxyl-4-phenylbutanoic acid core (AHPBA), for which the -log IC(50) values were measured, was used to construct 4D-QSAR models. Five unique RI-4D-QSAR models, from two different alignments, were identified (q(2) = 0.86-0.95). These five models were used to map the atom type morphology of the lining of the inhibitor binding site at the HIV protease receptor site as well as predict the inhibition potencies of seven test set compounds for model validation. The five models, overall, predict the -log IC(50) activity values for the test set compounds in a manner consistent with their q(2) values. The models also correctly identify the hydrophobic nature of the HIV protease receptor site, and inferences are made as to further structural modifications to improve the potency of the AHPBA inhibitors of HIV protease. The finding of five unique, and nearly statistically equivalent, RI-4D-QSAR models for the training set demonstrates that there can be more than one way to fit structure-activity data even within a given QSAR methodology. This set of unique, equally good individual models is referred to as the manifold model.
Řezáč, Jan; de la Lande, Aurélien
2015-02-10
Separation of the energetic contribution of charge transfer to interaction energy in noncovalent complexes would provide important insight into the mechanisms of the interaction. However, the calculation of charge-transfer energy is not an easy task. It is not a physically well-defined term, and the results might depend on how it is described in practice. Commonly, the charge transfer is defined in terms of molecular orbitals; in this framework, however, the charge transfer vanishes as the basis set size increases toward the complete basis set limit. This can be avoided by defining the charge transfer in terms of the spatial extent of the electron densities of the interacting molecules, but the schemes used so far do not reflect the actual electronic structure of each particular system and thus are not reliable. We propose a spatial partitioning of the system, which is based on a charge transfer-free reference state, namely superimposition of electron densities of the noninteracting fragments. We show that this method, employing constrained DFT for the calculation of the charge-transfer energy, yields reliable results and is robust with respect to the strength of the charge transfer, the basis set size, and the DFT functional used. Because it is based on DFT, the method is applicable to rather large systems.
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite predicted area - "reference area") and relative error [satellite (predicted area - "reference area")/ "reference area"] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. ?? 2009 ASCE.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean W.; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Wei, Jun; Patel, Smita
2015-03-01
We have developed a computer-aided detection (CAD) system for assisting radiologists in detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images. The CAD system includes stages of pulmonary vessel segmentation, prescreening of PE candidates and false positive (FP) reduction to identify suspicious PEs. The system was trained with 59 CTPA PE cases collected retrospectively from our patient files (UM set) with IRB approval. Five feature groups containing 139 features that characterized the intensity texture, gradient, intensity homogeneity, shape, and topology of PE candidates were initially extracted. Stepwise feature selection guided by simplex optimization was used to select effective features for FP reduction. A linear discriminant analysis (LDA) classifier was formulated to differentiate true PEs from FPs. The purpose of this study is to evaluate the performance of our CAD system using an independent test set of CTPA cases. The test set consists of 50 PE cases from the PIOPED II data set collected by multiple institutions with access permission. A total of 537 PEs were manually marked by experienced thoracic radiologists as reference standard for the test set. The detection performance was evaluated by freeresponse receiver operating characteristic (FROC) analysis. The FP classifier obtained a test Az value of 0.847 and the FROC analysis indicated that the CAD system achieved an overall sensitivity of 80% at 8.6 FPs/case for the PIOPED test set.
Maximally Expressive Task Modeling
NASA Technical Reports Server (NTRS)
Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise-both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.
Maximal combustion temperature estimation
NASA Astrophysics Data System (ADS)
Golodova, E.; Shchepakina, E.
2006-12-01
This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models.
Elliptic functions and maximal unitarity
NASA Astrophysics Data System (ADS)
Søgaard, Mads; Zhang, Yang
2015-04-01
Scattering amplitudes at loop level can be reduced to a basis of linearly independent Feynman integrals. The integral coefficients are extracted from generalized unitarity cuts which define algebraic varieties. The topology of an algebraic variety characterizes the difficulty of applying maximal cuts. In this work, we analyze a novel class of integrals of which the maximal cuts give rise to an algebraic variety with irrational irreducible components. As a phenomenologically relevant example, we examine the two-loop planar double-box contribution with internal massive lines. We derive unique projectors for all four master integrals in terms of multivariate residues along with Weierstrass' elliptic functions. We also show how to generate the leading-topology part of otherwise infeasible integration-by-parts identities analytically from exact meromorphic differential forms.
Bradshaw, P J; Ko, D T; Newman, A M; Donovan, L R
2006-01-01
Objective To determine the validity of the GRACE (Global Registry of Acute Coronary Events) prediction model for death six months after discharge in all forms of acute coronary syndrome in an independent dataset of a community based cohort of patients with acute myocardial infarction (AMI). Design Independent validation study based on clinical data collected retrospectively for a clinical trial in a community based population and record linkage to administrative databases. Setting Study conducted among patients from the EFFECT (enhanced feedback for effective cardiac treatment) study from Ontario, Canada. Patients Randomly selected men and women hospitalised for AMI between 1999 and 2001. Main outcome measure Discriminatory capacity and calibration of the GRACE prediction model for death within six months of hospital discharge in the contemporaneous EFFECT AMI study population. Results Post‐discharge crude mortality at six months for the EFFECT study patients with AMI was 7.0%. The discriminatory capacity of the GRACE model was good overall (C statistic 0.80) and for patients with ST segment elevation AMI (STEMI) (0.81) and non‐STEMI (0.78). Observed and predicted deaths corresponded well in each stratum of risk at six months, although the risk was underestimated by up to 30% in the higher range of scores among patients with non‐STEMI. Conclusions In an independent validation the GRACE risk model had good discriminatory capacity for predicting post‐discharge death at six months and was generally well calibrated, suggesting that it is suitable for clinical use in general populations. PMID:16387810
Polarity related influence maximization in signed social networks.
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986
Polarity Related Influence Maximization in Signed Social Networks
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986
On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites
ERIC Educational Resources Information Center
Penev, Spiridon; Raykov, Tenko
2006-01-01
A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…
Maximization, learning, and economic behavior
Erev, Ido; Roth, Alvin E.
2014-01-01
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Froeschke, John T.; Stunz, Gregory W.; Sterba-Boatwright, Blair; Wildhaber, Mark L.
2010-01-01
Using a long-term fisheries-independent data set, we tested the 'shark nursery area concept' proposed by Heupel et al. (2007) with the suggested working assumptions that a shark nursery habitat would: (1) have an abundance of immature sharks greater than the mean abundance across all habitats where they occur; (2) be used by sharks repeatedly through time (years); and (3) see immature sharks remaining within the habitat for extended periods of time. We tested this concept using young-of-the-year (age 0) and juvenile (age 1+ yr) bull sharks Carcharhinus leucas from gill-net surveys conducted in Texas bays from 1976 to 2006 to estimate the potential nursery function of 9 coastal bays. Of the 9 bay systems considered as potential nursery habitat, only Matagorda Bay satisfied all 3 criteria for young-of-the-year bull sharks. Both Matagorda and San Antonio Bays met the criteria for juvenile bull sharks. Through these analyses we examined the utility of this approach for characterizing nursery areas and we also describe some practical considerations, such as the influence of the temporal or spatial scales considered when applying the nursery role concept to shark populations.
Maximal Outboxes of Quadrilaterals
ERIC Educational Resources Information Center
Zhao, Dongsheng
2011-01-01
An outbox of a quadrilateral is a rectangle such that each vertex of the given quadrilateral lies on one side of the rectangle and different vertices lie on different sides. We first investigate those quadrilaterals whose every outbox is a square. Next, we consider the maximal outboxes of rectangles and those quadrilaterals with perpendicular…
NASA Astrophysics Data System (ADS)
Wang, Y.; Penning de Vries, M.; Xie, P. H.; Beirle, S.; Dörner, S.; Remmers, J.; Li, A.; Wagner, T.
2015-12-01
Multi-axis differential optical absorption spectroscopy (MAX-DOAS) observations of trace gases can be strongly influenced by clouds and aerosols. Thus it is important to identify clouds and characterize their properties. In a recent study Wagner et al. (2014) developed a cloud classification scheme based on the MAX-DOAS measurements themselves with which different "sky conditions" (e.g., clear sky, continuous clouds, broken clouds) can be distinguished. Here we apply this scheme to long-term MAX-DOAS measurements from 2011 to 2013 in Wuxi, China (31.57° N, 120.31° E). The original algorithm has been adapted to the characteristics of the Wuxi instrument, and extended towards smaller solar zenith angles (SZA). Moreover, a method for the determination and correction of instrumental degradation is developed to avoid artificial trends of the cloud classification results. We compared the results of the MAX-DOAS cloud classification scheme to several independent measurements: aerosol optical depth from a nearby Aerosol Robotic Network (AERONET) station and from two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments, visibility derived from a visibility meter and various cloud parameters from different satellite instruments (MODIS, the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment (GOME-2)). Here it should be noted that no quantitative comparison between the MAX-DOAS results and the independent data sets is possible, because (a) not exactly the same quantities are measured, and (b) the spatial and temporal sampling is quite different. Thus our comparison is performed in a semi-quantitative way: the MAX-DOAS cloud classification results are studied as a function of the external quantities. The most important findings from these comparisons are as follows: (1) most cases characterized as clear sky with low or high aerosol load were associated with the respective aerosol optical depth (AOD) ranges obtained by AERONET and MODIS
Generation and Transmission Maximization Model
2001-04-05
GTMax was developed to study complex marketing and system operational issues facing electric utility power systems. The model maximizes the value of the electric system taking into account not only a single system''s limited energy and transmission resources but also firm contracts, independent power producer (IPP) agreements, and bulk power transaction opportunities on the spot market. GTMax maximizes net revenues of power systems by finding a solution that increases income while keeping expenses at amore » minimum. It does this while ensuring that market transactions and system operations are within the physical and institutional limitations of the power system. When multiple systems are simulated, GTMax identifies utilities that can successfully compete on the market by tracking hourly energy transactions, costs, and revenues. Some limitations that are modeled are power plant seasonal capabilities and terms specified in firm and IPP contracts. GTMax also considers detaile operational limitations such as power plant ramp rates and hydropower reservoir constraints.« less
Infrared Maximally Abelian Gauge
Mendes, Tereza; Cucchieri, Attilio; Mihara, Antonio
2007-02-27
The confinement scenario in Maximally Abelian gauge (MAG) is based on the concepts of Abelian dominance and of dual superconductivity. Recently, several groups pointed out the possible existence in MAG of ghost and gluon condensates with mass dimension 2, which in turn should influence the infrared behavior of ghost and gluon propagators. We present preliminary results for the first lattice numerical study of the ghost propagator and of ghost condensation for pure SU(2) theory in the MAG.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum- classical hybrid-quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables, using classical methods. Such optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger value of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the problem of global maximum of an integrable function can be found in polynomial time by using the proposed quantum- classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).
NASA Technical Reports Server (NTRS)
Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris
2004-01-01
The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.
Quantum theory allows for absolute maximal contextuality
NASA Astrophysics Data System (ADS)
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
The Winning Edge: Maximizing Success in College.
ERIC Educational Resources Information Center
Schmitt, David E.
This book offers college students ideas on how to maximize their success in college by examining the personal management techniques a student needs to succeed. Chapters are as follows: "Getting and Staying Motivated"; "Setting Goals and Tapping Your Resources"; "Conquering Time"; "Think Yourself to College Success"; "Understanding and Remembering…
Maximally spaced projection sequencing in electron paramagnetic resonance imaging
Redler, Gage; Epel, Boris; Halpern, Howard J.
2015-01-01
Electron paramagnetic resonance imaging (EPRI) provides 3D images of absolute oxygen concentration (pO2) in vivo with excellent spatial and pO2 resolution. When investigating such physiologic parameters in living animals, the situation is inherently dynamic. Improvements in temporal resolution and experimental versatility are necessary to properly study such a system. Uniformly distributed projections result in efficient use of data for image reconstruction. This has dictated current methods such as equal-solid-angle (ESA) spacing of projections. However, acquisition sequencing must still be optimized to achieve uniformity throughout imaging. An object-independent method for uniform acquisition of projections, using the ESA uniform distribution for the final set of projections, is presented. Each successive projection maximizes the distance in the gradient space between itself and prior projections. This maximally spaced projection sequencing (MSPS) method improves image quality for intermediate images reconstructed from incomplete projection sets, enabling useful real-time reconstruction. This method also provides improved experimental versatility, reduced artifacts, and the ability to adjust temporal resolution post factum to best fit the data and its application. The MSPS method in EPRI provides the improvements necessary to more appropriately study a dynamic system. PMID:26185490
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Richardson, Lea
2004-01-01
Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.
Maximal Transcendentality and Integrability
NASA Astrophysics Data System (ADS)
Lipatov, L. N.
2008-09-01
The Hamiltonian describing possible interactions of the Reggeized gluons in the leading logarithmic approximation (LLA) of the multicolor QCD has the properties of conformal invariance, holomorphic separability and duality. It coincides with the Hamiltonian of the integrable Heisenberg model with the spins being the Möbius group generators. With the use of the Baxter-Sklyanin representation we calculate intercepts of the colorless states constructed from three and four Reggeized gluons and anomalous dimensions of the corresponding high twist operators. The integrability properties of the BFKL equation at a finite temperature are reviewed. Maximal transcendentality is used to construct anomalous dimensions of twist-2 operators up to 4 loops. It is shown that the asymptotic Bethe Ansatz in the 4-loop approximation is not in an agreement with predictions of the BFKL equation in N=4 SUSY.
ERIC Educational Resources Information Center
National Council on Disability, Washington, DC.
The National Council on Disability (NCD) held a National Summit on Disability Policy on April 27-29, 1996 at which 300 grassroots disability leaders gathered to discuss how to achieve independence in the next decade. Following an analysis of disability demographics and disability rights and culture, disability policy is assessed in 11 areas:…
Independent component representations for face recognition
NASA Astrophysics Data System (ADS)
Stewart Bartlett, Marian; Lades, Martin H.; Sejnowski, Terrence J.
1998-07-01
In a task such as face recognition, much of the important information may be contained in the high-order relationships among the image pixels. A number of face recognition algorithms employ principal component analysis (PCA), which is based on the second-order statistics of the image set, and does not address high-order statistical dependencies such as the relationships among three or more pixels. Independent component analysis (ICA) is a generalization of PCA which separates the high-order moments of the input in addition to the second-order moments. ICA was performed on a set of face images by an unsupervised learning algorithm derived from the principle of optimal information transfer through sigmoidal neurons. The algorithm maximizes the mutual information between the input and the output, which produces statistically independent outputs under certain conditions. ICA was performed on the face images under two different architectures. The first architecture provided a statistically independent basis set for the face images that can be viewed as a set of independent facial features. The second architecture provided a factorial code, in which the probability of any combination of features can be obtained from the product of their individual probabilities. Both ICA representations were superior to representations based on principal components analysis for recognizing faces across sessions and changes in expression.
Kemi, Ole J; Rognmo, Oivind; Amundsen, Brage H; Stordahl, Stig; Richardson, Russel S; Helgerud, Jan; Hoff, Jan
2011-01-01
Maximal strength training with a focus on maximal mobilization of force in the concentric phase improves endurance performance that employs a large muscle mass. However, this has not been studied during work with a small muscle mass, which does not challenge convective oxygen supply. We therefore randomized 23 adult females with no arm-training history to either one-arm maximal strength training or a control group. The training group performed five sets of five repetitions of dynamic arm curls against a near-maximal load, 3 days a week for 8 weeks. This training increased maximal strength by 75% and improved rate of force development during both strength and endurance exercise, suggesting that each arm curl became more efficient. This coincided with a 17-18% reduction in oxygen cost at standardized submaximal workloads (work economy), and a 21% higher peak oxygen uptake and 30% higher peak load during maximal arm endurance exercise. Blood flow assessed by Doppler ultrasound in the axillary artery supplying the working biceps brachii and brachialis muscles could not explain the training-induced adaptations. These data suggest that maximal strength training improved work economy and endurance performance in the skeletal muscle, and that these effects are independent of convective oxygen supply.
Origin of constrained maximal CP violation in flavor symmetry
NASA Astrophysics Data System (ADS)
He, Hong-Jian; Rodejohann, Werner; Xu, Xun-Jie
2015-12-01
Current data from neutrino oscillation experiments are in good agreement with δ = -π/2 and θ23 =π/4 under the standard parametrization of the mixing matrix. We define the notion of "constrained maximal CP violation" (CMCPV) for predicting these features and study their origin in flavor symmetry. We derive the parametrization-independent solution of CMCPV and give a set of equivalent definitions for it. We further present a theorem on how the CMCPV can be realized. This theorem takes the advantage of residual symmetries in neutrino and charged lepton mass matrices, and states that, up to a few minor exceptions, (| δ | ,θ23) = (π/2 ,π/4) is generated when those symmetries are real. The often considered μ- τ reflection symmetry, as well as specific discrete subgroups of O(3), is a special case of our theorem.
Maximal switchability of centralized networks
NASA Astrophysics Data System (ADS)
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of N s weakly connected satellites, a property that we call n/N s -centrality. If the hub dynamics is slow, we obtain that the large time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the nework. These new attractors can be periodic or even chaotic. We provide an algorithm, which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
Aberration correction by maximizing generalized sharpness metrics.
Fienup, J R; Miller, J J
2003-04-01
The technique of maximizing sharpness metrics has been used to estimate and compensate for aberrations with adaptive optics, to correct phase errors in synthetic-aperture radar, and to restore images. The largest class of sharpness metrics is the sum over a nonlinear point transformation of the image intensity. How the second derivative of the point nonlinearity varies with image intensity determines the effects of various metrics on the imagery. Some metrics emphasize making shadows darker, and other emphasize making bright points brighter. One can determine the image content needed to pick the best metric by computing the statistics of the image autocorrelation or of the Fourier magnitude, either of which is independent of the phase error. Computationally efficient, closed-form expressions for the gradient make possible efficient search algorithms to maximize sharpness.
Maximized Posttest Contrasts: A Clarification.
ERIC Educational Resources Information Center
Hollingsworth, Holly
1980-01-01
A solution to some problems of maximized contrasts for analysis of variance situations when the cell sizes are unequal is offered. It is demonstrated that a contrast is maximized relative to the analysis used to compute the sum of squares between groups. Interpreting a maximum contrast is discussed. (Author/GK)
NASA Astrophysics Data System (ADS)
Wang, Y.; Penning de Vries, M.; Xie, P. H.; Beirle, S.; Dörner, S.; Remmers, J.; Li, A.; Wagner, T.
2015-05-01
Multi-Axis-Differential Optical Absorption Spectroscopy (MAX-DOAS) observations of trace gases can be strongly influenced by clouds and aerosols. Thus it is important to identify clouds and characterise their properties. In a recent study Wagner et al. (2014) developed a cloud classification scheme based on the MAX-DOAS measurements themselves with which different "sky conditions" (e.g. clear sky, continuous clouds, broken clouds) can be distinguished. Here we apply this scheme to long term MAX-DOAS measurements from 2011 to 2013 in Wuxi, China (31.57° N, 120.31° E). The original algorithm has been modified, in particular in order to account for smaller solar zenith angles (SZA). Instrumental degradation is accounted for to avoid artificial trends of the cloud classification. We compared the results of the MAX-DOAS cloud classification scheme to several independent measurements: aerosol optical depth from a nearby AERONET station and from MODIS, visibility derived from a visibility meter; and various cloud parameters from different satellite instruments (MODIS, OMI, and GOME-2). The most important findings from these comparisons are: (1) most cases characterized as clear sky with low or high aerosol load were associated with the respective AOD ranges obtained by AERONET and MODIS, (2) the observed dependences of MAX-DOAS results on cloud optical thickness and effective cloud fraction from satellite indicate that the cloud classification scheme is sensitive to cloud (optical) properties, (3) separation of cloudy scenes by cloud pressure shows that the MAX-DOAS cloud classification scheme is also capable of detecting high clouds, (4) some clear sky conditions, especially with high aerosol load, classified from MAX-DOAS observations corresponding to the optically thin and low clouds derived by satellite observations probably indicate that the satellite cloud products contain valuable information on aerosols.
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2009-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.8 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
Jacob, Christian P; Nguyen, Thuy Trang; Dempfle, Astrid; Heine, Monika; Windemuth-Kieselbach, Christine; Baumann, Katarina; Jacob, Florian; Prechtl, Julian; Wittlich, Maike; Herrmann, Martin J; Gross-Lesch, Silke; Lesch, Klaus-Peter; Reif, Andreas
2010-06-01
While an interactive effect of genes with adverse life events is increasingly appreciated in current concepts of depression etiology, no data are presently available on interactions between genetic and environmental (G x E) factors with respect to personality and related disorders. The present study therefore aimed to detect main effects as well as interactions of serotonergic candidate genes (coding for the serotonin transporter, 5-HTT; the serotonin autoreceptor, HTR1A; and the enzyme which synthesizes serotonin in the brain, TPH2) with the burden of life events (#LE) in two independent samples consisting of 183 patients suffering from personality disorders and 123 patients suffering from adult attention deficit/hyperactivity disorder (aADHD). Simple analyses ignoring possible G x E interactions revealed no evidence for associations of either #LE or of the considered polymorphisms in 5-HTT and TPH2. Only the G allele of HTR1A rs6295 seemed to increase the risk of emotional-dramatic cluster B personality disorders (p = 0.019, in the personality disorder sample) and to decrease the risk of anxious-fearful cluster C personality disorders (p = 0.016, in the aADHD sample). We extended the initial simple model by taking a G x E interaction term into account, since this approach may better fit the data indicating that the effect of a gene is modified by stressful life events or, vice versa, that stressful life events only have an effect in the presence of a susceptibility genotype. By doing so, we observed nominal evidence for G x E effects as well as main effects of 5-HTT-LPR and the TPH2 SNP rs4570625 on the occurrence of personality disorders. Further replication studies, however, are necessary to validate the apparent complexity of G x E interactions in disorders of human personality.
Excap: maximization of haplotypic diversity of linked markers.
Kahles, André; Sarqume, Fahad; Savolainen, Peter; Arvestad, Lars
2013-01-01
Genetic markers, defined as variable regions of DNA, can be utilized for distinguishing individuals or populations. As long as markers are independent, it is easy to combine the information they provide. For nonrecombinant sequences like mtDNA, choosing the right set of markers for forensic applications can be difficult and requires careful consideration. In particular, one wants to maximize the utility of the markers. Until now, this has mainly been done by hand. We propose an algorithm that finds the most informative subset of a set of markers. The algorithm uses a depth first search combined with a branch-and-bound approach. Since the worst case complexity is exponential, we also propose some data-reduction techniques and a heuristic. We implemented the algorithm and applied it to two forensic caseworks using mitochondrial DNA, which resulted in marker sets with significantly improved haplotypic diversity compared to previous suggestions. Additionally, we evaluated the quality of the estimation with an artificial dataset of mtDNA. The heuristic is shown to provide extensive speedup at little cost in accuracy.
Power Converters Maximize Outputs Of Solar Cell Strings
NASA Technical Reports Server (NTRS)
Frederick, Martin E.; Jermakian, Joel B.
1993-01-01
Microprocessor-controlled dc-to-dc power converters devised to maximize power transferred from solar photovoltaic strings to storage batteries and other electrical loads. Converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. Main points of invention are: single controller used to control and optimize any number of "dumb" tracker units and strings independently; power maximized out of converters; and controller in system is microprocessor.
Hohman, Timothy J; Bush, William S; Jiang, Lan; Brown-Gentry, Kristin D; Torstenson, Eric S; Dudek, Scott M; Mukherjee, Shubhabrata; Naj, Adam; Kunkle, Brian W; Ritchie, Marylyn D; Martin, Eden R; Schellenberg, Gerard D; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Haines, Jonathan L; Thornton-Wells, Tricia A
2016-02-01
Late-onset Alzheimer disease (AD) has a complex genetic etiology, involving locus heterogeneity, polygenic inheritance, and gene-gene interactions; however, the investigation of interactions in recent genome-wide association studies has been limited. We used a biological knowledge-driven approach to evaluate gene-gene interactions for consistency across 13 data sets from the Alzheimer Disease Genetics Consortium. Fifteen single nucleotide polymorphism (SNP)-SNP pairs within 3 gene-gene combinations were identified: SIRT1 × ABCB1, PSAP × PEBP4, and GRIN2B × ADRA1A. In addition, we extend a previously identified interaction from an endophenotype analysis between RYR3 × CACNA1C. Finally, post hoc gene expression analyses of the implicated SNPs further implicate SIRT1 and ABCB1, and implicate CDH23 which was most recently identified as an AD risk locus in an epigenetic analysis of AD. The observed interactions in this article highlight ways in which genotypic variation related to disease may depend on the genetic context in which it occurs. Further, our results highlight the utility of evaluating genetic interactions to explain additional variance in AD risk and identify novel molecular mechanisms of AD pathogenesis.
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-03-01
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-03-01
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
Maximal possible accretion rates for slim disks
NASA Astrophysics Data System (ADS)
Lin, Yiqing; Jiao, Chengliang
2009-12-01
It was proved in the previous work that there must be a maximal possible accretion rate dot M_{max} for a slim disk. Here we discuss how the value of dot M_{max} depends on the two fundamental parameters of the disk, namely the mass of the central black hole M and the viscosity parameter α. It is shown that dot M_{max} increases with decreasing α, but is almost independent of M if dot M_{max} is measured by the Eddington accretion rate dot M_{Edd} , which is in turn proportional to M.
ERIC Educational Resources Information Center
Lange, L. H.
1974-01-01
Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
All maximally entangling unitary operators
Cohen, Scott M.
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d{sub A}{<=}d{sub B}, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d{sub A}=d{sub B}.
NASA Astrophysics Data System (ADS)
Salvio, Alberto; Staub, Florian; Strumia, Alessandro; Urbano, Alfredo
2016-03-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
Maximizing the usefulness of hypnosis in forensic investigative settings.
Hibler, Neil S; Scheflin, Alan W
2012-07-01
This is an article written for mental health professionals interested in using investigative hypnosis with law enforcement agencies in the effort to enhance the memory of witnesses and victims. Discussion focuses on how to work with law enforcement agencies so as to control for factors that can interfere with recall. Specifics include what police need to know about how to conduct case review, to prepare interviewees, to conduct interviews, and what to do with the results. Case examples are used to illustrate applications of this guidance in actual investigations. PMID:22913226
ERIC Educational Resources Information Center
Stickney, Sharon
This workbook is designed to help participants of the Independence Training Program (ITP) to achieve a definition of "independence." The program was developed for teenage girls. The process for developing the concept of independence consists of four steps. Step one instructs the participant to create an imaginary situation where she is completely…
Are Independent Probes Truly Independent?
ERIC Educational Resources Information Center
Camp, Gino; Pecher, Diane; Schmidt, Henk G.; Zeelenberg, Rene
2009-01-01
The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval…
Maximally polarized states for quantum light fields
Sanchez-Soto, Luis L.; Yustas, Eulogio C.; Bjoerk, Gunnar; Klimov, Andrei B.
2007-10-15
The degree of polarization of a quantum field can be defined as its distance to an appropriate set of states. When we take unpolarized states as this reference set, the states optimizing this degree for a fixed average number of photons N present a fairly symmetric, parabolic photon statistic, with a variance scaling as N{sup 2}. Although no standard optical process yields such a statistic, we show that, to an excellent approximation, a highly squeezed vacuum can be taken as maximally polarized. We also consider the distance of a field to the set of its SU(2) transformed, finding that certain linear superpositions of SU(2) coherent states make this degree to be unity.
Algebraic curves of maximal cyclicity
NASA Astrophysics Data System (ADS)
Caubergh, Magdalena; Dumortier, Freddy
2006-01-01
The paper deals with analytic families of planar vector fields, studying methods to detect the cyclicity of a non-isolated closed orbit, i.e. the maximum number of limit cycles that can locally bifurcate from it. It is known that this multi-parameter problem can be reduced to a single-parameter one, in the sense that there exist analytic curves in parameter space along which the maximal cyclicity can be attained. In that case one speaks about a maximal cyclicity curve (mcc) in case only the number is considered and of a maximal multiplicity curve (mmc) in case the multiplicity is also taken into account. In view of obtaining efficient algorithms for detecting the cyclicity, we investigate whether such mcc or mmc can be algebraic or even linear depending on certain general properties of the families or of their associated Bautin ideal. In any case by well chosen examples we show that prudence is appropriate.
The maximization of overall reinforcement rate on concurrent chains.
Houston, A I; Sumida, B H; McNamara, J M
1987-07-01
We model behavioral allocation on concurrent chains in which the initial links are independent variable-interval schedules. We also quantify the relationship between behavior during the initial links and the probability of entering a terminal link. The behavior that maximizes overall reinforcement rate is then considered and compared with published experimental data. Although all the trends in the data are predicted by rate maximization, there are considerable deviations from the predictions of rate maximization when reward magnitudes are unequal. We argue from our results that optimal allocation on concurrent chains, and prey choice as used in the theory of optimal diets, are distinct concepts. We show that the maximization of overall rate can lead to apparent violations of stochastic transitivity.
Maximal strength training improves aerobic endurance performance.
Hoff, J; Gran, A; Helgerud, J
2002-10-01
The aim of this experiment was to examine the effects of maximal strength training with emphasis on neural adaptations on strength- and endurance-performance for endurance trained athletes. Nineteen male cross-country skiers about 19.7 +/- 4.0 years of age and a maximal oxygen uptake (VO(2 max)) of 69.4 +/- 2.2 mL x kg(-1) x min(-1) were randomly assigned to a training group (n = 9) or a control group (n = 10). Strength training was performed, three times a week for 8 weeks, using a cable pulley simulating the movements in double poling in cross-country skiing, and consisted of three sets of six repetitions at a workload of 85% of one repetition maximum emphasizing maximal mobilization of force in the concentric movement. One repetition maximum improved significantly from 40.3 +/- 4.5 to 44.3 +/- 4.9 kg. Time to peak force (TPF) was reduced by 50 and 60% on two different submaximal workloads. Endurance performance measured as time to exhaustion (TTE) on a double poling ski ergometer at maximum aerobic velocity, improved from 6.49 to 10.18 min; 20.5% over the control group. Work economy changed significantly from 1.02 +/- 0.14 to 0.74 +/- 0.10 mL x kg(-0.67) x min(-1). Maximal strength training with emphasis on neural adaptations improves strength, particularly rate of force development, and improves aerobic endurance performance by improved work economy.
ERIC Educational Resources Information Center
Peters, Erin
2004-01-01
Student time on task is the most influential factor in student achievement. High motivation and engagement in learning have consistently been linked to increased levels of student success. At the same time, a lack of interest in schoolwork becomes increasingly common in more and more middle school students. To maximize time on task, teachers need…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... systems advocacy--to maximize the leadership, empowerment, independence and productivity of individuals..., empowerment, independence and productivity of individuals with significant disabilities and to promote...
Optimizing Population Variability to Maximize Benefit
Izu, Leighton T.; Bányász, Tamás; Chen-Izu, Ye
2015-01-01
Variability is inherent in any population, regardless whether the population comprises humans, plants, biological cells, or manufactured parts. Is the variability beneficial, detrimental, or inconsequential? This question is of fundamental importance in manufacturing, agriculture, and bioengineering. This question has no simple categorical answer because research shows that variability in a population can have both beneficial and detrimental effects. Here we ask whether there is a certain level of variability that can maximize benefit to the population as a whole. We answer this question by using a model composed of a population of individuals who independently make binary decisions; individuals vary in making a yes or no decision, and the aggregated effect of these decisions on the population is quantified by a benefit function (e.g. accuracy of the measurement using binary rulers, aggregate income of a town of farmers). Here we show that an optimal variance exists for maximizing the population benefit function; this optimal variance quantifies what is often called the “right mix” of individuals in a population. PMID:26650247
Simple conditions constraining the set of quantum correlations
NASA Astrophysics Data System (ADS)
de Vicente, Julio I.
2015-09-01
The characterization of the set of quantum correlations in Bell scenarios is a problem of paramount importance for both the foundations of quantum mechanics and quantum information processing in the device-independent scenario. However, a clear-cut (physical or mathematical) characterization of this set remains elusive and many of its properties are still unknown. We provide here simple and general analytical conditions that are necessary for an arbitrary bipartite behavior to be quantum. Although the conditions are not sufficient, we illustrate the strength and nontriviality of these conditions with a few examples. Moreover, we provide several applications of this result: we prove a quantitative separation of the quantum set from extremal nonlocal no-signaling behaviors in several general scenarios, we provide a relation to obtain Tsirelson bounds for arbitrary Bell inequalities and a construction of Bell expressions whose maximal quantum value is attained by a maximally entangled state of any given dimension.
Energy expenditure in maximal jumps on sand.
Muramatsu, Shigeru; Fukudome, Akinori; Miyama, Motoyoshi; Arimoto, Morio; Kijima, Akira
2006-01-01
The purpose of this study was to comparatively investigate the energy expenditure of jumping on sand and on a firm surface. Eight male university volleyball players were recruited in this study and performed 3 sets of 10 repetitive jumps on sand (the S condition), and also on a force platform (the F condition). The subjects jumped every two seconds during a set, and the interval between sets was 20 seconds. The subjects performed each jump on sand with maximal exertion while in the F condition they jumped as high as they did on sand. The oxygen requirement for jumping was defined as the total oxygen uptake consecutively measured between the first set of jumps and the point that oxygen uptake recovers to the resting value, and the energy expenditure was calculated. The jump height in the S condition was equivalent to 64.0 +/- 4.4% of the height in the maximal jump on the firm surface. The oxygen requirement was 7.39 +/- 0.33 liters in S condition and 6.24 +/- 0.69 liters in the F condition, and the energy expenditure was 37.0 +/- 1.64 kcal and 31.2 +/- 3.46 kcal respectively. The differences in the two counter values were both statistically significant (p < 0.01). The energy expenditure of jumping in the S condition was equivalent to 119.4 +/- 10.1% of the one in the F condition, which ratio was less than in walking and close to in running. PMID:16617210
NASA Astrophysics Data System (ADS)
Fraser, Gordon
2009-01-01
In his kind review of my biography of the Nobel laureate Abdus Salam (December 2008 pp45-46), John W Moffat wrongly claims that Salam had "independently thought of the idea of parity violation in weak interactions".
ERIC Educational Resources Information Center
Upah-Bant, Marilyn
1978-01-01
Describes the over-all business and production operation of the "Daily Illini" at the University of Illinois to show how this college publication has assumed the burdens and responsibilities of true independence. (GW)
Left atrial strain after maximal exercise in competitive waterpolo players.
Santoro, Amato; Alvino, Federico; Antonelli, Giovanni; Molle, Roberta; Mondillo, Sergio
2016-03-01
Left atrial (LA) function is a determinant of left ventricular (LV) filling. It carries out three main functions: reservoir, conduit, contractile. Aim of this study was to evaluate the role of LA and its deformation properties on LV filling at rest (R) and immediately after a maximal exercise (ME) through the speckle tracking echocardiography. Population enrolled was composed by 23 water polo athletes who performed a ME of six repeats of 100 m freestyle swim sets. At ME peak atrial longitudinal strain was reduced but all strain rate (SR) parameters increased, respectively positive peak SR at reservoir phase, SR negative peak at rapid ventricular filling (SRep) and SR negative peak at late ventricular filling (SRlp), that corresponds to atrial contraction phase. We showed a parallel increase in E and A pulsed Doppler wave and SRep and SRlp; particularly at ME, A wave and SRlp increased more respectively than E wave and SRep. SRlp was related to ejection fraction (EF) (r = -0.47; p < 0.01). At multivariate analysis SRlp was an independent predictor of EF (β: -0.47; p = 0.016). The increased sympathetic tone results into increased late diastolic LV filling with augmented atrial contractility and a decrease in diastolic filling time. During exercise LV filling was probably optimized by an enhanced and rapid LA conduit phase and by a vigorous atrial contraction during late LV filling. PMID:26472580
Multivariate residues and maximal unitarity
NASA Astrophysics Data System (ADS)
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
ERIC Educational Resources Information Center
Nathanson, Jeanne H., Ed.
1994-01-01
This issue of "OSERS" addresses the subject of independent living of individuals with disabilities. The issue includes a message from Judith E. Heumann, the Assistant Secretary of the Office of Special Education and Rehabilitative Services (OSERS), and 10 papers. Papers have the following titles and authors: "Changes in the Rehabilitation Act of…
Maximizing algebraic connectivity in air transportation networks
NASA Astrophysics Data System (ADS)
Wei, Peng
In air transportation networks the robustness of a network regarding node and link failures is a key factor for its design. An experiment based on the real air transportation network is performed to show that the algebraic connectivity is a good measure for network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight routes addition or deletion is first formulated. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near optimal solution with longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound and three rounding techniques are discussed to find the feasible solution. The simulation results present the trade-off among the three methods. The case study on two air transportation networks of Virgin America and Southwest Airlines show that the developed methods can be applied in real world large scale networks. The algebraic connectivity maximization problem is extended by adding the leg number constraint, which considers the traveler's tolerance for the total connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small scale networks and the near optimal solution in large scale networks. The third algebraic connectivity maximization problem with operating cost constraint is formulated. When the total operating cost budget is given, the number of the edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the
Knowledge discovery by accuracy maximization.
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-04-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.
Knowledge discovery by accuracy maximization
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-01-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
Mixtures of maximally entangled pure states
NASA Astrophysics Data System (ADS)
Flores, M. M.; Galapon, E. A.
2016-09-01
We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.
Maximal energy extraction under discrete diffusive exchange
Hay, M. J.; Schiff, J.; Fisch, N. J.
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake
NASA Technical Reports Server (NTRS)
Convertino, V. A.
1997-01-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake.
Convertino, V A
1997-02-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake
NASA Astrophysics Data System (ADS)
Annan, James; Hargreaves, Julia
2016-04-01
In order to perform any Bayesian processing of a model ensemble, we need a prior over the ensemble members. In the case of multimodel ensembles such as CMIP, the historical approach of ``model democracy'' (i.e. equal weight for all models in the sample) is no longer credible (if it ever was) due to model duplication and inbreeding. The question of ``model independence'' is central to the question of prior weights. However, although this question has been repeatedly raised, it has not yet been satisfactorily addressed. Here I will discuss the issue of independence and present a theoretical foundation for understanding and analysing the ensemble in this context. I will also present some simple examples showing how these ideas may be applied and developed.
Enumerating all maximal frequent subtrees in collections of phylogenetic trees
2014-01-01
Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
Reif, Maria M.; Huenenberger, Philippe H.
2011-04-14
The raw single-ion solvation free energies computed from atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [M. A. Kastenholz and P. H. Huenenberger, J. Chem. Phys. 124, 224501 (2006); M. M. Reif and P. H. Huenenberger, J. Chem. Phys. 134, 144103 (2010)], the application of appropriate correction terms permits to obtain methodology-independent results. The corrected values are then exclusively characteristic of the underlying molecular model including in particular the ion-solvent van der Waals interaction parameters, determining the effective ion size and the magnitude of its dispersion interactions. In the present study, the comparison of calculated (corrected) hydration free energies with experimental data (along with the consideration of ionic polarizabilities) is used to calibrate new sets of ion-solvent van der Waals (Lennard-Jones) interaction parameters for the alkali (Li{sup +}, Na{sup +}, K{sup +}, Rb{sup +}, Cs{sup +}) and halide (F{sup -}, Cl{sup -}, Br{sup -}, I{sup -}) ions along with either the SPC or the SPC/E water models. The experimental dataset is defined by conventional single-ion hydration free energies [Tissandier et al., J. Phys. Chem. A 102, 7787 (1998); Fawcett, J. Phys. Chem. B 103, 11181] along with three plausible choices for the (experimentally elusive) value of the absolute (intrinsic) hydration free energy of the proton, namely, {Delta}G{sub hyd} {sup O-minus} [H{sup +}]=-1100, -1075 or -1050 kJ mol{sup -1}, resulting in three sets L, M, and H for the SPC water model and three sets L{sub E}, M{sub E}, and H{sub E} for the SPC/E water model (alternative sets can easily be interpolated to intermediate {Delta}G{sub hyd} {sup O-minus} [H{sup +}] values). The residual sensitivity of the calculated (corrected) hydration free energies on the volume-pressure boundary conditions and on the effective
Maximal acceleration and radiative processes
NASA Astrophysics Data System (ADS)
Papini, Giorgio
2015-08-01
We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜m = 2mc3/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with that for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.
Dimension independence in exterior algebra.
Hawrylycz, M
1995-01-01
The identities between homogeneous expressions in rank 1 vectors and rank n - 1 covectors in a Grassmann-Cayley algebra of rank n, in which one set occurs multilinearly, are shown to represent a set of dimension-independent identities. The theorem yields an infinite set of nontrivial geometric identities from a given identity. PMID:11607520
Many parameter sets in a multicompartment model oscillator are robust to temperature perturbations.
Caplan, Jonathan S; Williams, Alex H; Marder, Eve
2014-04-01
Neurons in cold-blooded animals remarkably maintain their function over a wide range of temperatures, even though the rates of many cellular processes increase twofold, threefold, or many-fold for each 10°C increase in temperature. Moreover, the kinetics of ion channels, maximal conductances, and Ca(2+) buffering each have independent temperature sensitivities, suggesting that the balance of biological parameters can be disturbed by even modest temperature changes. In stomatogastric ganglia of the crab Cancer borealis, the duty cycle of the bursting pacemaker kernel is highly robust between 7 and 23°C (Rinberg et al., 2013). We examined how this might be achieved in a detailed conductance-based model in which exponential temperature sensitivities were given by Q10 parameters. We assessed the temperature robustness of this model across 125,000 random sets of Q10 parameters. To examine how robustness might be achieved across a variable population of animals, we repeated this analysis across six sets of maximal conductance parameters that produced similar activity at 11°C. Many permissible combinations of maximal conductance and Q10 parameters were found over broad regions of parameter space and relatively few correlations among Q10s were observed across successful parameter sets. A significant portion of Q10 sets worked for at least 3 of the 6 maximal conductance sets (∼11.1%). Nonetheless, no Q10 set produced robust function across all six maximal conductance sets, suggesting that maximal conductance parameters critically contribute to temperature robustness. Overall, these results provide insight into principles of temperature robustness in neuronal oscillators.
Many parameter sets in a multicompartment model oscillator are robust to temperature perturbations.
Caplan, Jonathan S; Williams, Alex H; Marder, Eve
2014-04-01
Neurons in cold-blooded animals remarkably maintain their function over a wide range of temperatures, even though the rates of many cellular processes increase twofold, threefold, or many-fold for each 10°C increase in temperature. Moreover, the kinetics of ion channels, maximal conductances, and Ca(2+) buffering each have independent temperature sensitivities, suggesting that the balance of biological parameters can be disturbed by even modest temperature changes. In stomatogastric ganglia of the crab Cancer borealis, the duty cycle of the bursting pacemaker kernel is highly robust between 7 and 23°C (Rinberg et al., 2013). We examined how this might be achieved in a detailed conductance-based model in which exponential temperature sensitivities were given by Q10 parameters. We assessed the temperature robustness of this model across 125,000 random sets of Q10 parameters. To examine how robustness might be achieved across a variable population of animals, we repeated this analysis across six sets of maximal conductance parameters that produced similar activity at 11°C. Many permissible combinations of maximal conductance and Q10 parameters were found over broad regions of parameter space and relatively few correlations among Q10s were observed across successful parameter sets. A significant portion of Q10 sets worked for at least 3 of the 6 maximal conductance sets (∼11.1%). Nonetheless, no Q10 set produced robust function across all six maximal conductance sets, suggesting that maximal conductance parameters critically contribute to temperature robustness. Overall, these results provide insight into principles of temperature robustness in neuronal oscillators. PMID:24695714
Maximizing the optical network capacity.
Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I
2016-03-01
Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity.
Maximizing the optical network capacity
Bayvel, Polina; Maher, Robert; Liga, Gabriele; Shevchenko, Nikita A.; Lavery, Domaniç; Killey, Robert I.
2016-01-01
Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572
Maximizing the optical network capacity.
Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I
2016-03-01
Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Maximal Oxygen Intake and Maximal Work Performance of Active College Women.
ERIC Educational Resources Information Center
Higgs, Susanne L.
Maximal oxygen intake and associated physiological variables were measured during strenuous exercise on women subjects (N=20 physical education majors). Following assessment of maximal oxygen intake, all subjects underwent a performance test at the work level which had elicited their maximal oxygen intake. Mean maximal oxygen intake was 41.32…
Use of a Best Estimate Power Monitoring Tool to Maximize Power Plant Generation
Dziuba, Lindsey L.
2006-07-01
The Best Estimate Power Monitor (BEPM) is a tool that was developed to maximize nuclear power plant generation, while ensuring regulatory compliance in the face of venturi fouling, industry ultra-sonic flowmeter issues and other technical challenges. The BEPM uses ASME approved 'best estimate' methodology described in PTC 19.1-1985, 'Measurement Uncertainty', Section 3.8, 'Weighting Method'. The BEPM method utilizes many different and independent indicators of core thermal power and independently computes the core thermal power (CTP) from each parameter. The uncertainty of each measurement is used to weight the results of the best estimate computation of CTP such that those with lower uncertainties are weighted more heavily in the computed result. The independence of these measurements is used to minimize the uncertainty of the aggregate result, and the overall uncertainty can be much lower than the uncertainties of any of the individual measured parameters. Examples of the Balance of Plant parameters used in the BEPM are turbine first stage pressure, venturi feedwater flow, condensate flow, main steam flow, high pressure turbine exhaust pressure, low pressure turbine inlet pressure, the two highest pressure feedwater heater extraction pressures, and final feedwater temperature. The BEPM typically makes use of installed plant instrumentation that provide data to the plant computer. Therefore, little or no plant modification is required. In order to compute core thermal power from the independent indicators, a set of baseline data is used for comparison. These baseline conditions are taken from a day when confidence in the value of core thermal power is high (i.e., immediately post outage when venturi fouling is not an issue or from a formal tracer test). This provides the reference point on which to base the core thermal power calculations for each of the independent parameters. The BEPM is effective only at the upper end of the power range, where the independent
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
Turnbull, A P; Turnbull, H R
1985-03-01
The transition from living a life as others want (dependence) to living it as the adolescent wants to live it (independence) is extraordinarily difficult for most teen-agers and their families. The difficulty is compounded in the case of adolescents with disabilities. They are often denied access to the same opportunities of life that are accessible to the nondisabled. They face special problems in augmenting their inherent capacities so that they can take fuller advantage of the accommodations that society makes in an effort to grant them access. In particular, they need training designed to increase their capacities to make, communicate, implement, and evaluate their own life-choices. The recommendations made in this paper are grounded in the long-standing tradition of parens patriae and enlightened paternalism; they seek to be deliberately and cautiously careful about the lives of adolescents with disabilities and their families. We based them on the recent tradition of anti-institutionalism and they are also consistent with some of the major policy directions of the past 15-20 years. These include: normalization, integration, and least-restrictive alternatives; the unity and integrity of the family; the importance of opportunities for self-advocacy; the role of consumer consent and choice in consumer-professional relationships; the need for individualized services; the importance of the developmental model as a basis for service delivery; the value of economic productivity of people with disabilities; and the rights of habilitation, amelioration, and prevention. PMID:3156827
Turnbull, A P; Turnbull, H R
1985-03-01
The transition from living a life as others want (dependence) to living it as the adolescent wants to live it (independence) is extraordinarily difficult for most teen-agers and their families. The difficulty is compounded in the case of adolescents with disabilities. They are often denied access to the same opportunities of life that are accessible to the nondisabled. They face special problems in augmenting their inherent capacities so that they can take fuller advantage of the accommodations that society makes in an effort to grant them access. In particular, they need training designed to increase their capacities to make, communicate, implement, and evaluate their own life-choices. The recommendations made in this paper are grounded in the long-standing tradition of parens patriae and enlightened paternalism; they seek to be deliberately and cautiously careful about the lives of adolescents with disabilities and their families. We based them on the recent tradition of anti-institutionalism and they are also consistent with some of the major policy directions of the past 15-20 years. These include: normalization, integration, and least-restrictive alternatives; the unity and integrity of the family; the importance of opportunities for self-advocacy; the role of consumer consent and choice in consumer-professional relationships; the need for individualized services; the importance of the developmental model as a basis for service delivery; the value of economic productivity of people with disabilities; and the rights of habilitation, amelioration, and prevention.
Does mental exertion alter maximal muscle activation?
Rozand, Vianney; Pageaux, Benjamin; Marcora, Samuele M.; Papaxanthis, Charalambos; Lepers, Romuald
2014-01-01
Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 min each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with 10 intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 min). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors. PMID:25309404
Inflation in maximal gauged supergravities
Kodama, Hideo; Nozawa, Masato
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s{sub c}. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n{sub s} of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n{sub s}=0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10{sup −3} and is close to the value in the Starobinsky model.
Reflection quasilattices and the maximal quasilattice
NASA Astrophysics Data System (ADS)
Boyle, Latham; Steinhardt, Paul J.
2016-08-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we explain that reflection quasilattices only exist in dimensions two, three, and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. Unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. We tabulate the complete set of scale factors for all reflection quasilattices in dimension d >2 , and for all those with quadratic irrational scale factors in d =2 .
Inverting Monotonic Nonlinearities by Entropy Maximization
López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results. PMID:27780261
Maximally Expressive Modeling of Operations Tasks
NASA Technical Reports Server (NTRS)
Jaap, John; Richardson, Lea; Davis, Elizabeth
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.
Maximizing TDRS Command Load Lifetime
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2002-01-01
was therefore the key to achieving this goal. This goal was eventually realized through development of an Excel spreadsheet tool called EMMIE (Excel Mean Motion Interactive Estimation). EMMIE utilizes ground ephemeris nodal data to perform a least-squares fit to inferred mean anomaly as a function of time, thus generating an initial estimate for mean motion. This mean motion in turn drives a plot of estimated downtrack position difference versus time. The user can then manually iterate the mean motion, and determine an optimal value that will maximize command load lifetime. Once this optimal value is determined, the mean motion initially calculated by the command builder tool is overwritten with the new optimal value, and the command load is built for uplink to ISS. EMMIE also provides the capability for command load lifetime to be tracked through multiple TORS ephemeris updates. Using EMMIE, TORS command load lifetimes of approximately 30 days have been achieved.
Maximally Entangled Multipartite States: A Brief Survey
NASA Astrophysics Data System (ADS)
Enríquez, M.; Wintrowicz, I.; Życzkowski, K.
2016-03-01
The problem of identifying maximally entangled quantum states of a composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of maximally entangled state depends on the measure used.
Specificity of a Maximal Step Exercise Test
ERIC Educational Resources Information Center
Darby, Lynn A.; Marsh, Jennifer L.; Shewokis, Patricia A.; Pohlman, Roberta L.
2007-01-01
To adhere to the principle of "exercise specificity" exercise testing should be completed using the same physical activity that is performed during exercise training. The present study was designed to assess whether aerobic step exercisers have a greater maximal oxygen consumption (max VO sub 2) when tested using an activity specific, maximal step…
The maximal affinity of ligands
Kuntz, I. D.; Chen, K.; Sharp, K. A.; Kollman, P. A.
1999-01-01
We explore the question of what are the best ligands for macromolecular targets. A survey of experimental data on a large number of the strongest-binding ligands indicates that the free energy of binding increases with the number of nonhydrogen atoms with an initial slope of ≈−1.5 kcal/mol (1 cal = 4.18 J) per atom. For ligands that contain more than 15 nonhydrogen atoms, the free energy of binding increases very little with relative molecular mass. This nonlinearity is largely ascribed to nonthermodynamic factors. An analysis of the dominant interactions suggests that van der Waals interactions and hydrophobic effects provide a reasonable basis for understanding binding affinities across the entire set of ligands. Interesting outliers that bind unusually strongly on a per atom basis include metal ions, covalently attached ligands, and a few well known complexes such as biotin–avidin. PMID:10468550
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-01
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual׳s 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.
The effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake
Oh, Deuk-Ja; Hong, Hyeon-Ok; Lee, Bo-Ae
2016-01-01
The purpose of this study is to investigate the effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake. To achieve the purpose of the study, a total of 30 subjects were selected, including 15 people who performed continued regular exercises and 15 people as the control group. With regard to data processing, the IBM SPSS Statistics ver. 21.0 was used to calculate the mean and standard deviation. The difference of mean change between groups was verified through an independent t-test. As a result, there were significant differences in resting heart rate, maximal heart rate, maximal systolic blood pressure, and maximal oxygen uptake. However, the maximal systolic blood pressure was found to be an exercise-induced high blood pressure. Thus, it is thought that a risk diagnosis for it through a regular exercise stress test is necessary. PMID:26933659
Maximal tree size of few-qubit states
NASA Astrophysics Data System (ADS)
Le, Huy Nguyen; Cai, Yu; Wu, Xingyao; Rabelo, Rafael; Scarani, Valerio
2014-06-01
Tree size (TS) is an interesting measure of complexity for multiqubit states: not only is it in principle computable, but one can obtain lower bounds for it. In this way, it has been possible to identify families of states whose complexity scales superpolynomially in the number of qubits. With the goal of progressing in the systematic study of the mathematical property of TS, in this work we characterize the tree size of pure states for the case where the number of qubits is small, namely, 3 or 4. The study of three qubits does not hold great surprises, insofar as the structure of entanglement is rather simple; the maximal TS is found to be 8, reached for instance by the |W> state. The study of four qubits yields several insights: in particular, the most economic description of a state is found not to be recursive. The maximal TS is found to be 16, reached for instance by a state called |Ψ(4)> which was already discussed in the context of four-photon down-conversion experiments. We also find that the states with maximal tree size form a set of zero measure: a smoothed version of tree size over a neighborhood of a state (ɛ-TS) reduces the maximal values to 6 and 14, respectively. Finally, we introduce a notion of tree size for mixed states and discuss it for a one-parameter family of states.
Maximizing your return on people.
Bassi, Laurie; McMurrer, Daniel
2007-03-01
Though most traditional HR performance metrics don't predict organizational performance, alternatives simply have not existed--until now. During the past ten years, researchers Laurie Bassi and Daniel McMurrer have worked to develop a system that allows executives to assess human capital management (HCM) and to use those metrics both to predict organizational performance and to guide organizations' investments in people. The new framework is based on a core set of HCM drivers that fall into five major categories: leadership practices, employee engagement, knowledge accessibility, workforce optimization, and organizational learning capacity. By employing rigorously designed surveys to score a company on the range of HCM practices across the five categories, it's possible to benchmark organizational HCM capabilities, identify HCM strengths and weaknesses, and link improvements or back-sliding in specific HCM practices with improvements or shortcomings in organizational performance. The process requires determining a "maturity" score for each practice, based on a scale of 1 (low) to 5 (high). Over time, evolving maturity scores from multiple surveys can reveal progress in each of the HCM practices and help a company decide where to focus improvement efforts that will have a direct impact on performance. The authors draw from their work with American Standard, South Carolina's Beaufort County School District, and a bevy of financial firms to show how improving HCM scores led to increased sales, safety, academic test scores, and stock returns. Bassi and McMurrer urge HR departments to move beyond the usual metrics and begin using HCM measurement tools to gauge how well people are managed and developed throughout the organization. In this new role, according to the authors, HR can take on strategic responsibility and ensure that superior human capital management becomes central to the organization's culture.
Are all maximally entangled states pure?
NASA Astrophysics Data System (ADS)
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.
Matching, maximizing, and hill-climbing
Hinson, John M.; Staddon, J. E. R.
1983-01-01
In simple situations, animals consistently choose the better of two alternatives. On concurrent variable-interval variable-interval and variable-interval variable-ratio schedules, they approximately match aggregate choice and reinforcement ratios. The matching law attempts to explain the latter result but does not address the former. Hill-climbing rules such as momentary maximizing can account for both. We show that momentary maximizing constrains molar choice to approximate matching; that molar choice covaries with pigeons' momentary-maximizing estimate; and that the “generalized matching law” follows from almost any hill-climbing rule. PMID:16812350
Purification of Gaussian maximally mixed states
NASA Astrophysics Data System (ADS)
Jeong, Kabgyun; Lim, Youngrong
2016-10-01
We find that the purifications of several Gaussian maximally mixed states (GMMSs) correspond to some Gaussian maximally entangled states (GMESs) in the continuous-variable regime. Here, we consider a two-mode squeezed vacuum (TMSV) state as a purification of the thermal state and construct a general formalism of the Gaussian purification process. Moreover, we introduce other kind of GMESs via the process. All of our purified states of the GMMSs exhibit Gaussian profiles; thus, the states show maximal quantum entanglement in the Gaussian regime.
Are all maximally entangled states pure?
Cavalcanti, D.; Brandao, F.G.S.L.; Terra Cunha, M.O.
2005-10-15
We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.
wannier90: A tool for obtaining maximally-localised Wannier functions
NASA Astrophysics Data System (ADS)
Mostofi, Arash A.; Yates, Jonathan R.; Lee, Young-Su; Souza, Ivo; Vanderbilt, David; Marzari, Nicola
2008-05-01
We present wannier90, a program for calculating maximally-localised Wannier functions (MLWF) from a set of Bloch energy bands that may or may not be attached to or mixed with other bands. The formalism works by minimising the total spread of the MLWF in real space. This is done in the space of unitary matrices that describe rotations of the Bloch bands at each k-point. As a result, wannier90 is independent of the basis set used in the underlying calculation to obtain the Bloch states. Therefore, it may be interfaced straightforwardly to any electronic structure code. The locality of MLWF can be exploited to compute band-structure, density of states and Fermi surfaces at modest computational cost. Furthermore, wannier90 is able to output MLWF for visualisation and other post-processing purposes. Wannier functions are already used in a wide variety of applications. These include analysis of chemical bonding in real space; calculation of dielectric properties via the modern theory of polarisation; and as an accurate and minimal basis set in the construction of model Hamiltonians for large-scale systems, in linear-scaling quantum Monte Carlo calculations, and for efficient computation of material properties, such as the anomalous Hall coefficient. wannier90 is freely available under the GNU General Public License from http://www.wannier.org/. Program summaryProgram title: wannier90 Catalogue identifier: AEAK_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 556 495 No. of bytes in distributed program, including test data, etc.: 5 709 419 Distribution format: tar.gz Programming language: Fortran 90, perl Computer: any architecture with a Fortran 90 compiler Operating system: Linux, Windows, Solaris, AIX, Tru64
Maximal hypersurfaces in asymptotically stationary spacetimes
NASA Astrophysics Data System (ADS)
Chrusciel, Piotr T.; Wald, Robert M.
1992-12-01
The purpose of the work is to extend the results on the existence of maximal hypersurfaces to encompass some situations considered by other authors. The existence of maximal hypersurface in asymptotically stationary spacetimes is proven. Existence of maximal surface and of foliations by maximal hypersurfaces is proven in two classes of asymptotically flat spacetimes which possess a one parameter group of isometries whose orbits are timelike 'near infinity'. The first class consists of strongly causal asymptotically flat spacetimes which contain no 'blackhole or white hole' (but may contain 'ergoregions' where the Killing orbits fail to be timelike). The second class of space times possess a black hole and a white hole, with the black and white hole horizon intersecting in a compact 2-surface S.
AUC-Maximizing Ensembles through Metalearning
LeDell, Erin; van der Laan, Mark J.; Peterson, Maya
2016-01-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximining metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, out-perform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
AUC-Maximizing Ensembles through Metalearning.
LeDell, Erin; van der Laan, Mark J; Peterson, Maya
2016-05-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximining metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, out-perform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.
Continuous Maximal Flows and Wulff Shapes: Application to MRFs
Zach, Christopher; Niethammer, Marc; Frahm, Jan-Michael
2014-01-01
Convex and continuous energy formulations for low level vision problems enable efficient search procedures for the corresponding globally optimal solutions. In this work we extend the well-established continuous, isotropic capacity-based maximal flow framework to the anisotropic setting. By using powerful results from convex analysis, a very simple and efficient minimization procedure is derived. Further, we show that many important properties carry over to the new anisotropic framework, e.g. globally optimal binary results can be achieved simply by thresholding the continuous solution. In addition, we unify the anisotropic continuous maximal flow approach with a recently proposed convex and continuous formulation for Markov random fields, thereby allowing more general smoothness priors to be incorporated. Dense stereo results are included to illustrate the capabilities of the proposed approach. PMID:25729263
NASA Astrophysics Data System (ADS)
Jois, Manjunath Holaykoppa Nanjunda
The conventional Influence Maximization problem is the problem of finding such a team (a small subset) of seed nodes in a social network that would maximize the spread of influence over the whole network. This paper considers a lottery system aimed at maximizing the awareness spread to promote energy conservation behavior as a stochastic Influence Maximization problem with the constraints ensuring lottery fairness. The resulting Multi-Team Influence Maximization problem involves assigning the probabilities to multiple teams of seeds (interpreted as lottery winners) to maximize the expected awareness spread. Such a variation of the Influence Maximization problem is modeled as a Linear Program; however, enumerating all the possible teams is a hard task considering that the feasible team count grows exponentially with the network size. In order to address this challenge, we develop a column generation based approach to solve the problem with a limited number of candidate teams, where new candidates are generated and added to the problem iteratively. We adopt a piecewise linear function to model the impact of including a new team so as to pick only such teams which can improve the existing solution. We demonstrate that with this approach we can solve such influence maximization problems to optimality, and perform computational study with real-world social network data sets to showcase the efficiency of the approach in finding lottery designs for optimal awareness spread. Lastly, we explore other possible scenarios where this model can be utilized to optimally solve the otherwise hard to solve influence maximization problems.
Approximately Independent Features of Languages
NASA Astrophysics Data System (ADS)
Holman, Eric W.
To facilitate the testing of models for the evolution of languages, the present paper offers a set of linguistic features that are approximately independent of each other. To find these features, the adjusted Rand index (R‧) is used to estimate the degree of pairwise relationship among 130 linguistic features in a large published database. Many of the R‧ values prove to be near zero, as predicted for independent features, and a subset of 47 features is found with an average R‧ of -0.0001. These 47 features are recommended for use in statistical tests that require independent units of analysis.
Maximal strength, muscular endurance and inflammatory biomarkers in young adult men.
Vaara, J P; Vasankari, T; Fogelholm, M; Häkkinen, K; Santtila, M; Kyröläinen, H
2014-12-01
The aim was to study associations of maximal strength and muscular endurance with inflammatory biomarkers independent of cardiorespiratory fitness in those with and without abdominal obesity. 686 young healthy men participated (25±5 years). Maximal strength was measured via isometric testing using dynamo-meters to determine maximal strength index. Muscular endurance index consisted of push-ups, sit-ups and repeated squats. An indirect cycle ergometer test until exhaustion was used to estimate maximal aerobic capacity (VO2max). Participants were stratified according to those with (>102 cm) and those without abdominal obesity (<102 cm) based on waist circumference. Inflammatory factors (C-reactive protein, interleukin-6 and tumour necrosis factor alpha) were analysed from serum samples. Maximal strength and muscular endurance were inversely associated with IL-6 in those with (β=-0.49, -0.39, respectively) (p<0.05) and in those without abdominal obesity (β=-0.08, -0.14, respectively) (p<0.05) adjusted for smoking and cardio-respiratory fitness. After adjusting for smoking and cardiorespiratory fitness, maximal strength and muscular endurance were inversely associated with CRP only in those without abdominal obesity (β=-0.11, -0.26, respectively) (p<0.05). This cross-sectional study demonstrated that muscular fitness is inversely associated with C-reactive protein and IL-6 concentrations in young adult men independent of cardiorespi-ratory fitness.
Massive nonplanar two-loop maximal unitarity
NASA Astrophysics Data System (ADS)
Søgaard, Mads; Zhang, Yang
2014-12-01
We explore maximal unitarity for nonplanar two-loop integrals with up to four massive external legs. In this framework, the amplitude is reduced to a basis of master integrals whose coefficients are extracted from maximal cuts. The hepta-cut of the nonplanar double box defines a nodal algebraic curve associated with a multiply pinched genus-3 Riemann surface. All possible configurations of external masses are covered by two distinct topological pictures in which the curve decomposes into either six or eight Riemann spheres. The procedure relies on consistency equations based on vanishing of integrals of total derivatives and Levi-Civita contractions. Our analysis indicates that these constraints are governed by the global structure of the maximal cut. Lastly, we present an algorithm for computing generalized cuts of massive integrals with higher powers of propagators based on the Bezoutian matrix method.
The maximal process of nonlinear shot noise
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2009-05-01
In the nonlinear shot noise system-model shots’ statistics are governed by general Poisson processes, and shots’ decay-dynamics are governed by general nonlinear differential equations. In this research we consider a nonlinear shot noise system and explore the process tracking, along time, the system’s maximal shot magnitude. This ‘maximal process’ is a stationary Markov process following a decay-surge evolution; it is highly robust, and it is capable of displaying both a wide spectrum of statistical behaviors and a rich variety of random decay-surge sample-path trajectories. A comprehensive analysis of the maximal process is conducted, including its Markovian structure, its decay-surge structure, and its correlation structure. All results are obtained analytically and in closed-form.
An information maximization model of eye movements
NASA Technical Reports Server (NTRS)
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
VO2max during successive maximal efforts.
Foster, Carl; Kuffel, Erin; Bradley, Nicole; Battista, Rebecca A; Wright, Glenn; Porcari, John P; Lucia, Alejandro; deKoning, Jos J
2007-12-01
The concept of VO(2)max has been a defining paradigm in exercise physiology for >75 years. Within the last decade, this concept has been both challenged and defended. The purpose of this study was to test the concept of VO(2)max by comparing VO(2) during a second exercise bout following a preliminary maximal effort exercise bout. The study had two parts. In Study #1, physically active non-athletes performed incremental cycle exercise. After 1-min recovery, a second bout was performed at a higher power output. In Study #2, competitive runners performed incremental treadmill exercise and, after 3-min recovery, a second bout at a higher speed. In Study #1 the highest VO(2) (bout 1 vs. bout 2) was not significantly different (3.95 +/- 0.75 vs. 4.06 +/- 0.75 l min(-1)). Maximal heart rate was not different (179 +/- 14 vs. 180 +/- 13 bpm) although maximal V(E) was higher in the second bout (141 +/- 36 vs. 151 +/- 34 l min(-1)). In Study #2 the highest VO(2) (bout 1 vs. bout 2) was not significantly different (4.09 +/- 0.97 vs. 4.03 +/- 1.16 l min(-1)), nor was maximal heart rate (184 + 6 vs. 181 +/- 10 bpm) or maximal V(E) (126 +/- 29 vs. 126 +/- 34 l min(-1)). The results support the concept that the highest VO(2) during a maximal incremental exercise bout is unlikely to change during a subsequent exercise bout, despite higher muscular power output. As such, the results support the "classical" view of VO(2)max. PMID:17891414
VO2max during successive maximal efforts.
Foster, Carl; Kuffel, Erin; Bradley, Nicole; Battista, Rebecca A; Wright, Glenn; Porcari, John P; Lucia, Alejandro; deKoning, Jos J
2007-12-01
The concept of VO(2)max has been a defining paradigm in exercise physiology for >75 years. Within the last decade, this concept has been both challenged and defended. The purpose of this study was to test the concept of VO(2)max by comparing VO(2) during a second exercise bout following a preliminary maximal effort exercise bout. The study had two parts. In Study #1, physically active non-athletes performed incremental cycle exercise. After 1-min recovery, a second bout was performed at a higher power output. In Study #2, competitive runners performed incremental treadmill exercise and, after 3-min recovery, a second bout at a higher speed. In Study #1 the highest VO(2) (bout 1 vs. bout 2) was not significantly different (3.95 +/- 0.75 vs. 4.06 +/- 0.75 l min(-1)). Maximal heart rate was not different (179 +/- 14 vs. 180 +/- 13 bpm) although maximal V(E) was higher in the second bout (141 +/- 36 vs. 151 +/- 34 l min(-1)). In Study #2 the highest VO(2) (bout 1 vs. bout 2) was not significantly different (4.09 +/- 0.97 vs. 4.03 +/- 1.16 l min(-1)), nor was maximal heart rate (184 + 6 vs. 181 +/- 10 bpm) or maximal V(E) (126 +/- 29 vs. 126 +/- 34 l min(-1)). The results support the concept that the highest VO(2) during a maximal incremental exercise bout is unlikely to change during a subsequent exercise bout, despite higher muscular power output. As such, the results support the "classical" view of VO(2)max.
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in across pose scenario. To address this problem, many image matching-based methods are proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) fail to fully exploit face specific structure information in correspondence estimation and 2) fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed as morphable displacement field (MDF), to encode face specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., CMU-PIE, FERET, and MultiPIE databases. Owe to the rational MDF regularization and the usage of novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environment, i.e., labeled face in the wild database. PMID:25163062
Mixed maximal and explosive strength training in recreational endurance runners.
Taipale, Ritva S; Mikkola, Jussi; Salo, Tiina; Hokka, Laura; Vesterinen, Ville; Kraemer, William J; Nummela, Ari; Häkkinen, Keijo
2014-03-01
Supervised periodized mixed maximal and explosive strength training added to endurance training in recreational endurance runners was examined during an 8-week intervention preceded by an 8-week preparatory strength training period. Thirty-four subjects (21-45 years) were divided into experimental groups: men (M, n = 9), women (W, n = 9), and control groups: men (MC, n = 7), women (WC, n = 9). The experimental groups performed mixed maximal and explosive exercises, whereas control subjects performed circuit training with body weight. Endurance training included running at an intensity below lactate threshold. Strength, power, endurance performance characteristics, and hormones were monitored throughout the study. Significance was set at p ≤ 0.05. Increases were observed in both experimental groups that were more systematic than in the control groups in explosive strength (12 and 13% in men and women, respectively), muscle activation, maximal strength (6 and 13%), and peak running speed (14.9 ± 1.2 to 15.6 ± 1.2 and 12.9 ± 0.9 to 13.5 ± 0.8 km Ł h). The control groups showed significant improvements in maximal and explosive strength, but Speak increased only in MC. Submaximal running characteristics (blood lactate and heart rate) improved in all groups. Serum hormones fluctuated significantly in men (testosterone) and in women (thyroid stimulating hormone) but returned to baseline by the end of the study. Mixed strength training combined with endurance training may be more effective than circuit training in recreational endurance runners to benefit overall fitness that may be important for other adaptive processes and larger training loads associated with, e.g., marathon training.
Tanisho, Kei; Hirakawa, Kazufumi
2009-11-01
The purpose of this study was to examine the effects of 2 different training regimens, continuous (CT) and interval (IT), on endurance capacity in maximal intermittent exercise. Eighteen lacrosse players were divided into CT (n = 6), IT (n = 6), and nontraining (n = 6) groups. Both training groups trained for 3 days per week for 15 weeks using bicycle ergometers. Continuous training performed continuous aerobic training for 20-25 minutes, and IT performed high-intensity pedaling comprising 10 sets of 10-second maximal pedaling with 20-second recovery periods. Maximal anaerobic power, maximal oxygen uptake (V(O2max)), and intermittent power output were measured before and after the training period. The intermittent exercise test consisted of a set of ten 10-second maximal sprints with 40-second intervals. Maximal anaerobic power significantly increased in IT (p
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2011-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.8 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
Ehrenfest's Lottery--Time and Entropy Maximization
ERIC Educational Resources Information Center
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
Maximally informative foraging by Caenorhabditis elegans
Calhoun, Adam J; Chalasani, Sreekanth H; Sharpee, Tatyana O
2014-01-01
Animals have evolved intricate search strategies to find new sources of food. Here, we analyze a complex food seeking behavior in the nematode Caenorhabditis elegans (C. elegans) to derive a general theory describing different searches. We show that C. elegans, like many other animals, uses a multi-stage search for food, where they initially explore a small area intensively (‘local search’) before switching to explore a much larger area (‘global search’). We demonstrate that these search strategies as well as the transition between them can be quantitatively explained by a maximally informative search strategy, where the searcher seeks to continuously maximize information about the target. Although performing maximally informative search is computationally demanding, we show that a drift-diffusion model can approximate it successfully with just three neurons. Our study reveals how the maximally informative search strategy can be implemented and adopted to different search conditions. DOI: http://dx.doi.org/10.7554/eLife.04220.001 PMID:25490069
How to Generate Good Profit Maximization Problems
ERIC Educational Resources Information Center
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Maximizing Resource Utilization in Video Streaming Systems
ERIC Educational Resources Information Center
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis; Mezghani, Hanen Mnif, Mohamed
2015-04-15
We study a robust maximization problem from terminal wealth and consumption under a convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
A Model of College Tuition Maximization
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.
2009-01-01
This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. The enrollment decision of student's is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…
Why contextual preference reversals maximize expected value.
Howes, Andrew; Warren, Paul A; Farmer, George; El-Deredy, Wael; Lewis, Richard L
2016-07-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types-including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. (PsycINFO Database Record
Does evolution lead to maximizing behavior?
Lehmann, Laurent; Alger, Ingela; Weibull, Jörgen
2015-07-01
A long-standing question in biology and economics is whether individual organisms evolve to behave as if they were striving to maximize some goal function. We here formalize this "as if" question in a patch-structured population in which individuals obtain material payoffs from (perhaps very complex multimove) social interactions. These material payoffs determine personal fitness and, ultimately, invasion fitness. We ask whether individuals in uninvadable population states will appear to be maximizing conventional goal functions (with population-structure coefficients exogenous to the individual's behavior), when what is really being maximized is invasion fitness at the genetic level. We reach two broad conclusions. First, no simple and general individual-centered goal function emerges from the analysis. This stems from the fact that invasion fitness is a gene-centered multigenerational measure of evolutionary success. Second, when selection is weak, all multigenerational effects of selection can be summarized in a neutral type-distribution quantifying identity-by-descent between individuals within patches. Individuals then behave as if they were striving to maximize a weighted sum of material payoffs (own and others). At an uninvadable state it is as if individuals would freely choose their actions and play a Nash equilibrium of a game with a goal function that combines self-interest (own material payoff), group interest (group material payoff if everyone does the same), and local rivalry (material payoff differences).
Faculty Salaries and the Maximization of Prestige
ERIC Educational Resources Information Center
Melguizo, Tatiana; Strober, Myra H.
2007-01-01
Through the lens of the emerging economic theory of higher education, we look at the relationship between salary and prestige. Starting from the premise that academic institutions seek to maximize prestige, we hypothesize that monetary rewards are higher for faculty activities that confer prestige. We use data from the 1999 National Study of…
Maximizing the Spectacle of Water Fountains
ERIC Educational Resources Information Center
Simoson, Andrew J.
2009-01-01
For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…
Maximizing the Effective Use of Formative Assessments
ERIC Educational Resources Information Center
Riddell, Nancy B.
2016-01-01
In the current age of accountability, teachers must be able to produce tangible evidence of students' concept mastery. This article focuses on implementation of formative assessments before, during, and after instruction in order to maximize teachers' ability to effectively monitor student achievement. Suggested strategies are included to help…
Why contextual preference reversals maximize expected value.
Howes, Andrew; Warren, Paul A; Farmer, George; El-Deredy, Wael; Lewis, Richard L
2016-07-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types-including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. (PsycINFO Database Record PMID:27337391
Employee and independent contractor relationships.
Wren, K R; Wren, T L; Monti, E J; Turco, S J
1999-05-01
Most practitioners find themselves at a disadvantage in dealing with business issues and relationships. As health care continues to change, knowledge of contracts and business relationships will help CRNA practitioners navigate new as well as traditional practice settings. This article discusses the advantages and disadvantages of two business relationships: employee and independent contractor. PMID:10504911
Selective Influence through Conditional Independence.
ERIC Educational Resources Information Center
Dzhafarov, Ehtibar N.
2003-01-01
Presents a generalization and improvement for the definition proposed by E. Dzhafarov (2001) for selectiveness in the dependence of several random variables on several (sets of) external factors. This generalization links the notion of selective influence with that of conditional independence. (SLD)
Understanding violations of Gricean maxims in preschoolers and adults
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018
2012-03-16
Independent Assessments: DOE's Systems Integrator convenes independent technical reviews to gauge progress toward meeting specific technical targets and to provide technical information necessary for key decisions.
Auctions with Dynamic Populations: Efficiency and Revenue Maximization
NASA Astrophysics Data System (ADS)
Said, Maher
We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.
Maximal violation of Bell inequalities by position measurements
Kiukas, J.; Werner, R. F.
2010-07-15
We show that it is possible to find maximal violations of the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality using only position measurements on a pair of entangled nonrelativistic free particles. The device settings required in the CHSH inequality are done by choosing one of two times at which position is measured. For different assignments of the '+' outcome to positions, namely, to an interval, to a half-line, or to a periodic set, we determine violations of the inequalities and states where they are attained. These results have consequences for the hidden variable theories of Bohm and Nelson, in which the two-time correlations between distant particle trajectories have a joint distribution, and hence cannot violate any Bell inequality.
Maximizing Macromolecule Crystal Size for Neutron Diffraction Experiments
NASA Technical Reports Server (NTRS)
Judge, R. A.; Kephart, R.; Leardi, R.; Myles, D. A.; Snell, E. H.; vanderWoerd, M.; Curreri, Peter A. (Technical Monitor)
2002-01-01
A challenge in neutron diffraction experiments is growing large (greater than 1 cu mm) macromolecule crystals. In taking up this challenge we have used statistical experiment design techniques to quickly identify crystallization conditions under which the largest crystals grow. These techniques provide the maximum information for minimal experimental effort, allowing optimal screening of crystallization variables in a simple experimental matrix, using the minimum amount of sample. Analysis of the results quickly tells the investigator what conditions are the most important for the crystallization. These can then be used to maximize the crystallization results in terms of reducing crystal numbers and providing large crystals of suitable habit. We have used these techniques to grow large crystals of Glucose isomerase. Glucose isomerase is an industrial enzyme used extensively in the food industry for the conversion of glucose to fructose. The aim of this study is the elucidation of the enzymatic mechanism at the molecular level. The accurate determination of hydrogen positions, which is critical for this, is a requirement that neutron diffraction is uniquely suited for. Preliminary neutron diffraction experiments with these crystals conducted at the Institute Laue-Langevin (Grenoble, France) reveal diffraction to beyond 2.5 angstrom. Macromolecular crystal growth is a process involving many parameters, and statistical experimental design is naturally suited to this field. These techniques are sample independent and provide an experimental strategy to maximize crystal volume and habit for neutron diffraction studies.
A New Algorithm to Optimize Maximal Information Coefficient
Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate the MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and then removes the restriction of maximal grid size limitation of original ApproxMaxMI algorithm. Computational experiments show that ChiMIC algorithm can maintain same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noise functional relationship, the ChiMIC algorithm can reach the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC can capture the complexity of functional relationships in a better way, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much less than those of ApproxMaxMI. We apply the MIC values tofeature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
A New Algorithm to Optimize Maximal Information Coefficient.
Chen, Yuan; Zeng, Ying; Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate the MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and then removes the restriction of maximal grid size limitation of original ApproxMaxMI algorithm. Computational experiments show that ChiMIC algorithm can maintain same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noise functional relationship, the ChiMIC algorithm can reach the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC can capture the complexity of functional relationships in a better way, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much less than those of ApproxMaxMI. We apply the MIC values tofeature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2006-01-01
Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…
Karbowski, Jan
2015-10-01
The structure and quantitative composition of the cerebral cortex are interrelated with its computational capacity. Empirical data analyzed here indicate a certain hierarchy in local cortical composition. Specifically, neural wire, i.e., axons and dendrites take each about 1/3 of cortical space, spines and glia/astrocytes occupy each about (1/3)(2), and capillaries around (1/3)(4). Moreover, data analysis across species reveals that these fractions are roughly brain size independent, which suggests that they could be in some sense optimal and thus important for brain function. Is there any principle that sets them in this invariant way? This study first builds a model of local circuit in which neural wire, spines, astrocytes, and capillaries are mutually coupled elements and are treated within a single mathematical framework. Next, various forms of wire minimization rule (wire length, surface area, volume, or conduction delays) are analyzed, of which, only minimization of wire volume provides realistic results that are very close to the empirical cortical fractions. As an alternative, a new principle called "spine economy maximization" is proposed and investigated, which is associated with maximization of spine proportion in the cortex per spine size that yields equally good but more robust results. Additionally, a combination of wire cost and spine economy notions is considered as a meta-principle, and it is found that this proposition gives only marginally better results than either pure wire volume minimization or pure spine economy maximization, but only if spine economy component dominates. However, such a combined meta-principle yields much better results than the constraints related solely to minimization of wire length, wire surface area, and conduction delays. Interestingly, the type of spine size distribution also plays a role, and better agreement with the data is achieved for distributions with long tails. In sum, these results suggest that for the
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
The price of anarchy is maximized at the percolation threshold
NASA Astrophysics Data System (ADS)
Skinner, Brian
2015-03-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called ``price of anarchy'' (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly-placed ``congestible'' and ``incongestible'' links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold. PMID:26066138
Price of anarchy is maximized at the percolation threshold
NASA Astrophysics Data System (ADS)
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
Rousanoglou, Elissavet N; Oskouei, Ali E; Herzog, Walter
2007-01-01
Mechanical properties of skeletal muscles are often studied for controlled, electrically induced, maximal, or supra-maximal contractions. However, many mechanical properties, such as the force-length relationship and force enhancement following active muscle stretching, are quite different for maximal and sub-maximal, or electrically induced and voluntary contractions. Force depression, the loss of force observed following active muscle shortening, has been observed and is well documented for electrically induced and maximal voluntary contractions. Since sub-maximal voluntary contractions are arguably the most important for everyday movement analysis and for biomechanical models of skeletal muscle function, it is important to study force depression properties under these conditions. Therefore, the purpose of this study was to examine force depression following sub-maximal, voluntary contractions. Sets of isometric reference and isometric-shortening-isometric test contractions at 30% of maximal voluntary effort were performed with the adductor pollicis muscle. All reference and test contractions were executed by controlling force or activation using a feedback system. Test contractions included adductor pollicis shortening over 10 degrees, 20 degrees, and 30 degrees of thumb adduction. Force depression was assessed by comparing the steady-state isometric forces (activation control) or average electromyograms (EMGs) (force control) following active muscle shortening with those obtained in the corresponding isometric reference contractions. Force was decreased by 20% and average EMG was increased by 18% in the shortening test contractions compared to the isometric reference contractions. Furthermore, force depression was increased with increasing shortening amplitudes, and the relative magnitudes of force depression were similar to those found in electrically stimulated and maximal contractions. We conclude from these results that force depression occurs in sub-maximal
Tian, Guojing; Wu, Xia; Cao, Ya; Gao, Fei; Wen, Qiaoyan
2016-01-01
It is known that there exist two locally operational settings, local operations with one-way and two-way classical communication. And recently, some sets of maximally entangled states have been built in specific dimensional quantum systems, which can be locally distinguished only with two-way classical communication. In this paper, we show the existence of such sets is general, through constructing such sets in all the remaining quantum systems. Specifically, such sets including p or n maximally entangled states will be built in the quantum system of (np - 1) ⊗ (np - 1) with n ≥ 3 and p being a prime number, which completes the picture that such sets do exist in every possible dimensional quantum system.
Tian, Guojing; Wu, Xia; Cao, Ya; Gao, Fei; Wen, Qiaoyan
2016-01-01
It is known that there exist two locally operational settings, local operations with one-way and two-way classical communication. And recently, some sets of maximally entangled states have been built in specific dimensional quantum systems, which can be locally distinguished only with two-way classical communication. In this paper, we show the existence of such sets is general, through constructing such sets in all the remaining quantum systems. Specifically, such sets including p or n maximally entangled states will be built in the quantum system of (np − 1) ⊗ (np − 1) with n ≥ 3 and p being a prime number, which completes the picture that such sets do exist in every possible dimensional quantum system. PMID:27440087
Tian, Guojing; Wu, Xia; Cao, Ya; Gao, Fei; Wen, Qiaoyan
2016-01-01
It is known that there exist two locally operational settings, local operations with one-way and two-way classical communication. And recently, some sets of maximally entangled states have been built in specific dimensional quantum systems, which can be locally distinguished only with two-way classical communication. In this paper, we show the existence of such sets is general, through constructing such sets in all the remaining quantum systems. Specifically, such sets including p or n maximally entangled states will be built in the quantum system of (np - 1) ⊗ (np - 1) with n ≥ 3 and p being a prime number, which completes the picture that such sets do exist in every possible dimensional quantum system. PMID:27440087
Iwata, Kazunori; Ikeda, Kazushi; Sakai, Hideaki
2006-01-01
We discuss an important property called the asymptotic equipartition property on empirical sequences in reinforcement learning. This states that the typical set of empirical sequences has probability nearly one, that all elements in the typical set are nearly equi-probable, and that the number of elements in the typical set is an exponential function of the sum of conditional entropies if the number of time steps is sufficiently large. The sum is referred to as stochastic complexity. Using the property we elucidate the fact that the return maximization depends on two factors, the stochastic complexity and a quantity depending on the parameters of environment. Here, the return maximization means that the best sequences in terms of expected return have probability one. We also examine the sensitivity of stochastic complexity, which is a qualitative guide in tuning the parameters of action-selection strategy, and show a sufficient condition for return maximization in probability.
Maximal temperature in a simple thermodynamical system
NASA Astrophysics Data System (ADS)
Dai, De-Chang; Stojkovic, Dejan
2016-06-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense talking about temperatures higher than the Planck temperature in the absence of the full theory of quantum gravity. Here, we demonstrate that there exist a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity before we reach the realm of quantum gravity. Namely, if two particles with a given center of mass energy come at the distance shorter than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. It is possible to calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
Maximally Nonlocal and Monogamous Quantum Correlations
NASA Astrophysics Data System (ADS)
Barrett, Jonathan; Kent, Adrian; Pironio, Stefano
2006-10-01
We introduce a version of the chained Bell inequality for an arbitrary number of measurement outcomes and use it to give a simple proof that the maximally entangled state of two d-dimensional quantum systems has no local component. That is, if we write its quantum correlations as a mixture of local correlations and general (not necessarily quantum) correlations, the coefficient of the local correlations must be zero. This suggests an experimental program to obtain as good an upper bound as possible on the fraction of local states and provides a lower bound on the amount of classical communication needed to simulate a maximally entangled state in d×d dimensions. We also prove that the quantum correlations violating the inequality are monogamous among nonsignaling correlations and, hence, can be used for quantum key distribution secure against postquantum (but nonsignaling) eavesdroppers.
A theory of maximizing sensory information.
van Hateren, J H
1992-01-01
A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power spectrum, the theory enables the computation of filters accomplishing this maximizing of information. Resulting filters are band-pass or high-pass at high signal-to-noise ratios, and low-pass at low signal-to-noise ratios. In spatial vision this corresponds to lateral inhibition and pooling, respectively. The filters comply with Weber's law over a considerable range of signal-to-noise ratios.
Nonlinear trading models through Sharpe Ratio maximization.
Choey, M; Weigend, A S
1997-08-01
While many trading strategies are based on price prediction, traders in financial markets are typically interested in optimizing risk-adjusted performance such as the Sharpe Ratio, rather than the price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can indeed be achieved with this nonlinear approach.
The “Independent Components” of Natural Scenes are Edge Filters
BELL, ANTHONY J.; SEJNOWSKI, TERRENCE J.
2010-01-01
It has previously been suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and it has been reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features. We show here that a new unsupervised learning algorithm based on information maximization, a nonlinear “infomax” network, when applied to an ensemble of natural scenes produces sets of visual filters that are localized and oriented. Some of these filters are Gabor-like and resemble those produced by the sparseness-maximization network. In addition, the outputs of these filters are as independent as possible, since this infomax network performs Independent Components Analysis or ICA, for sparse (super-gaussian) component distributions. We compare the resulting ICA filters and their associated basis functions, with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). The ICA filters have more sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that these neurons form a natural, information-theoretic coordinate system for natural images. PMID:9425547
Violations of transitivity under fitness maximization.
Houston, Alasdair I; McNamara, John M; Steer, Mark D
2007-08-22
We present a novel demonstration that violations of transitive choice can result from decision strategies that maximize fitness. Our results depend on how the available options, including options not currently chosen, influence a decision-maker's expectations about the future. In particular, they depend on how the presence of an option may act as an insurance against a run of bad luck in the future.
Pulmonary diffusing capacity after maximal exercise.
Manier, G; Moinard, J; Stoïcheff, H
1993-12-01
To determine the effect of maximal exercise on alveolocapillary membrane diffusing capacity (Dm), 12 professional handball players aged 23.4 +/- 3.3 (SD) yr were studied before and during early recovery from a progressive maximal exercise [immediately (t0), 15 min, and 30 min (t30) after exercise]. Lung capillary blood volume and Dm were determined in a one-step maneuver by simultaneous measurement of CO and NO lung transfer (DLCO and DLNO, respectively) with use of the single-breath breath-hold method. At t0, DLCO was elevated (13.1 +/- 12.0%; P < 0.01) but both DLNO and Dm for CO remained unchanged. Between t0 and t30, both DLCO and DLNO decreased significantly. At t30, DLCO was not different from the control resting value. DLNO (and consequently Dm for CO) was significantly lower than the control value at t30 (-8.9 +/- 8.1%; P < 0.01). Lung capillary blood volume was elevated at t0 (18.0 +/- 19.0%; P < 0.01) but progressively decreased to near control resting values at t30. Differences in the postexercise kinetics of both DLCO and DLNO point to a role of the transient increase in pulmonary vascular recruitment during the recovery period. We concluded that Dm was somewhat decreased in the 30 min after maximal exercise of short duration, but the exact pulmonary mechanisms involved remain to be elucidated.
Formation Control for the MAXIM Mission
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today s technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.
ERIC Educational Resources Information Center
Giorgis, Cyndi; Johnson, Nancy J.
2002-01-01
Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)
Electromyographic activity in sprinting at speeds ranging from sub-maximal to supra-maximal.
Mero, A; Komi, P V
1987-06-01
Eleven male and eight female sprinters were filmed when running at five different speeds from sub-maximal to supra-maximal levels over a force platform. Supra-maximal running was performed by a towing system. The electromyographic (EMG) activity of 10 muscles was recorded telemetrically using surface electrodes. Pre-activity (PRA), activity during ground contact, immediate post-contact activity, and minimum activity were the major EMG parameters analyzed from two consecutive strides. Reproducibility of the variables used was rather high (r = 0.85 to 0.90 and coefficient of variation = 6.6 to 9.7%). The results demonstrated increases (P less than 0.001) in PRA and forces in the braking phase when running speed increased to supra-maximum. PRA correlated (P less than 0.01) with the average resultant force in the braking phase. Relative PRA (percentage of maximal value during ipsilateral contact) remained fairly constant (about 50 to 70%) at each speed. In the propulsion phase of contact, integrated EMG activity and forces increased (P less than 0.001) to maximal running, but at supra-maximal speed the forces decreased non-significantly. Post-contact activity and minimum activity increased (P less than 0.001) to maximal running but the supra-maximal running was characterized by lowered integrated EMG activities in these phases. Post-contact activity correlated (P less than 0.05) with average resultant force in the propulsion phase of the male subjects when running velocity increased. It was suggested that PRA increases are needed for increasing muscle stiffness to resist great impact forces at the beginning of contact during sprint running.
Maximal violation of tight Bell inequalities for maximal high-dimensional entanglement
Lee, Seung-Woo; Jaksch, Dieter
2009-07-15
We propose a Bell inequality for high-dimensional bipartite systems obtained by binning local measurement outcomes and show that it is tight. We find a binning method for even d-dimensional measurement outcomes for which this Bell inequality is maximally violated by maximally entangled states. Furthermore, we demonstrate that the Bell inequality is applicable to continuous variable systems and yields strong violations for two-mode squeezed states.
Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L
2015-05-01
This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery-allowing entrepreneurial practitioners to make decisions in an autonomous setting-is changing but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides.
Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L
2015-05-01
This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery-allowing entrepreneurial practitioners to make decisions in an autonomous setting-is changing but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides. PMID:25941139
An information-theoretic analysis of return maximization in reinforcement learning.
Iwata, Kazunori
2011-12-01
We present a general analysis of return maximization in reinforcement learning. This analysis does not require assumptions of Markovianity, stationarity, and ergodicity for the stochastic sequential decision processes of reinforcement learning. Instead, our analysis assumes the asymptotic equipartition property fundamental to information theory, providing a substantially different view from that in the literature. As our main results, we show that return maximization is achieved by the overlap of typical and best sequence sets, and we present a class of stochastic sequential decision processes with the necessary condition for return maximization. We also describe several examples of best sequences in terms of return maximization in the class of stochastic sequential decision processes, which satisfy the necessary condition.
Merging Groups to Maximize Object Partition Comparison.
ERIC Educational Resources Information Center
Klastorin, T. D.
1980-01-01
The problem of objectively comparing two independently determined partitions of N objects or variables is discussed. A similarity measure based on the simple matching coefficient is defined and related to previously suggested measures. (Author/JKS)
RELICA: a method for estimating the reliability of independent components.
Artoni, Fiorenzo; Menicucci, Danilo; Delorme, Arnaud; Makeig, Scott; Micera, Silvestro
2014-12-01
Independent Component Analysis (ICA) is a widely applied data-driven method for parsing brain and non-brain EEG source signals, mixed by volume conduction to the scalp electrodes, into a set of maximally temporally and often functionally independent components (ICs). Many ICs may be identified with a precise physiological or non-physiological origin. However, this process is hindered by partial instability in ICA results that can arise from noise in the data. Here we propose RELICA (RELiable ICA), a novel method to characterize IC reliability within subjects. RELICA first computes IC "dipolarity" a measure of physiological plausibility, plus a measure of IC consistency across multiple decompositions of bootstrap versions of the input data. RELICA then uses these two measures to visualize and cluster the separated ICs, providing a within-subject measure of IC reliability that does not involve checking for its occurrence across subjects. We demonstrate the use of RELICA on EEG data recorded from 14 subjects performing a working memory experiment and show that many brain and ocular artifact ICs are correctly classified as "stable" (highly repeatable across decompositions of bootstrapped versions of the input data). Many stable ICs appear to originate in the brain, while other stable ICs account for identifiable non-brain processes such as line noise. RELICA might be used with any linear blind source separation algorithm to reduce the risk of basing conclusions on unstable or physiologically un-interpretable component processes. PMID:25234117
Electromagnetically induced grating with maximal atomic coherence
Carvalho, Silvania A.; Araujo, Luis E. E. de
2011-10-15
We describe theoretically an atomic diffraction grating that combines an electromagnetically induced grating with a coherence grating in a double-{Lambda} atomic system. With the atom in a condition of maximal coherence between its lower levels, the combined gratings simultaneously diffract both the incident probe beam as well as the signal beam generated through four-wave mixing. A special feature of the atomic grating is that it will diffract any beam resonantly tuned to any excited state of the atom accessible by a dipole transition from its ground state.
Fredriksson, Albin Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
Cormie, Prue; McGuigan, Michael R; Newton, Robert U
2011-02-01
This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and
Maximizing Information Diffusion in the Cyber-physical Integrated Network.
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-01-01
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks. PMID:26569254
Maximizing Information Diffusion in the Cyber-physical Integrated Network.
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-11-11
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks.
Maximally supersymmetric planar Yang-Mills amplitudes at five loops
Bern, Z.; Carrasco, J. J. M.; Johansson, H.; Kosower, D. A.
2007-12-15
We present an Ansatz for the planar five-loop four-point amplitude in maximally supersymmetric Yang-Mills theory in terms of loop integrals. This Ansatz exploits the recently observed correspondence between integrals with simple conformal properties and those found in the four-point amplitudes of the theory through four loops. We explain how to identify all such integrals systematically. We make use of generalized unitarity in both four and D dimensions to determine the coefficients of each of these integrals in the amplitude. Maximal cuts, in which we cut all propagators of a given integral, are an especially effective means for determining these coefficients. The set of integrals and coefficients determined here will be useful for computing the five-loop cusp anomalous dimension of the theory which is of interest for nontrivial checks of the AdS/CFT duality conjecture. It will also be useful for checking a conjecture that the amplitudes have an iterative structure allowing for their all-loop resummation, whose link to a recent string-side computation by Alday and Maldacena opens a new venue for quantitative AdS/CFT comparisons.
Maximally Entangled States of a Two-Qubit System
NASA Astrophysics Data System (ADS)
Singh, Manu P.; Rajput, B. S.
2013-12-01
Entanglement has been explored as one of the key resources required for quantum computation, the functional dependence of the entanglement measures on spin correlation functions has been established, correspondence between evolution of maximally entangled states (MES) of two-qubit system and representation of SU(2) group has been worked out and the evolution of MES under a rotating magnetic field has been investigated. Necessary and sufficient conditions for the general two-qubit state to be maximally entangled state (MES) have been obtained and a new set of MES constituting a very powerful and reliable eigen basis (different from magic bases) of two-qubit systems has been constructed. In terms of the MES constituting this basis, Bell’s States have been generated and all the qubits of two-qubit system have been obtained. It has shown that a MES corresponds to a point in the SO(3) sphere and an evolution of MES corresponds to a trajectory connecting two points on this sphere. Analysing the evolution of MES under a rotating magnetic field, it has been demonstrated that a rotating magnetic field is equivalent to a three dimensional rotation in real space leading to the evolution of a MES.
Theoretical maximal storage of hydrogen in zeolitic frameworks.
Vitillo, Jenny G; Ricchiardi, Gabriele; Spoto, Giuseppe; Zecchina, Adriano
2005-12-01
Physisorption and encapsulation of molecular hydrogen in tailored microporous materials are two of the options for hydrogen storage. Among these materials, zeolites have been widely investigated. In these materials, the attained storage capacities vary widely with structure and composition, leading to the expectation that materials with improved binding sites, together with lighter frameworks, may represent efficient storage materials. In this work, we address the problem of the determination of the maximum amount of molecular hydrogen which could, in principle, be stored in a given zeolitic framework, as limited by the size, structure and flexibility of its pore system. To this end, the progressive filling with H2 of 12 purely siliceous models of common zeolite frameworks has been simulated by means of classical molecular mechanics. By monitoring the variation of cell parameters upon progressive filling of the pores, conclusions are drawn regarding the maximum storage capacity of each framework and, more generally, on framework flexibility. The flexible non-pentasils RHO, FAU, KFI, LTA and CHA display the highest maximal capacities, ranging between 2.86-2.65 mass%, well below the targets set for automotive applications but still in an interesting range. The predicted maximal storage capacities correlate well with experimental results obtained at low temperature. The technique is easily extendable to any other microporous structure, and it can provide a method for the screening of hypothetical new materials for hydrogen storage applications.
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
Steps to Independent Living Series.
ERIC Educational Resources Information Center
Lobb, Nancy
This set of six activity books and a teacher's guide is designed to help students from eighth grade to adulthood with special needs to learn independent living skills. The activity books have a reading level of 2.5 and address: (1) "How to Get Well When You're Sick or Hurt," including how to take a temperature, see a doctor, and use medicines…
Independent EEG Sources Are Dipolar
Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott
2012-01-01
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
Setting up an in-office independent medical examination company.
Hamilton, Martha
2002-04-01
In a time of declining reimbursement for patient care services, establishing an in-office IME company enables orthopedic practices to generate additional revenue to subsidize clinical activities without compromising the credibility and integrity of their physicians; however, the decision to enter the medical/legal consultation business should be considered carefully. A thorough analysis of the industry, applicable laws, costs-versus-benefits, and the local marketplace is critical in helping the practice to evaluate the feasibility of establishing the new business and develop a product that is well differentiated. The practice should approach the day-to-day management of the IME company with the same careful attention that it pays to the management of its orthopedic service. This includes creating a staffing and information technology infrastructure that supports the new business and allows for its growth. An attitude of continuous learning whereby the physician-reviewer seeks out information about the customer's needs and market shifts enables the practice to respond swiftly to these needs and shifts and further position itself as an innovative provider of medical/legal services.
Distinguishing maximally entangled states by one-way local operations and classical communication
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Chao; Feng, Ke-Qin; Gao, Fei; Wen, Qiao-Yan
2015-01-01
In this paper, we mainly study the local indistinguishability of mutually orthogonal bipartite maximally entangled states. We construct sets of fewer than d orthogonal maximally entangled states which are not distinguished by one-way local operations and classical communication (LOCC) in the Hilbert space of d ⊗d . The proof, based on the Fourier transform of an additive group, is very simple but quite effective. Simultaneously, our results give a general unified upper bound for the minimum number of one-way LOCC indistinguishable maximally entangled states. This improves previous results which only showed sets of N ≥d -2 such states. Finally, our results also show that previous conjectures in Zhang et al. [Z.-C. Zhang, Q.-Y. Wen, F. Gao, G.-J. Tian, and T.-Q. Cao, Quant. Info. Proc. 13, 795 (2014), 10.1007/s11128-013-0691-9] are indeed correct.
Spiders Tune Glue Viscosity to Maximize Adhesion.
Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali
2015-11-24
Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find the adhesion of spider glue, from five diverse spider species, maximizes at very different humidities that matches their foraging habitats. By using high-speed imaging and spreading power law, we find that the glue viscosity varies over 5 orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10(5)-10(6) cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulation of polymer-salts interaction to control viscosity can provide a simple mechanism to design humidity responsive smart adhesives.
Maximal liquid bridges between horizontal cylinders
NASA Astrophysics Data System (ADS)
Cooray, Himantha; Huppert, Herbert E.; Neufeld, Jerome A.
2016-08-01
We investigate two-dimensional liquid bridges trapped between pairs of identical horizontal cylinders. The cylinders support forces owing to surface tension and hydrostatic pressure that balance the weight of the liquid. The shape of the liquid bridge is determined by analytically solving the nonlinear Laplace-Young equation. Parameters that maximize the trapping capacity (defined as the cross-sectional area of the liquid bridge) are then determined. The results show that these parameters can be approximated with simple relationships when the radius of the cylinders is small compared with the capillary length. For such small cylinders, liquid bridges with the largest cross-sectional area occur when the centre-to-centre distance between the cylinders is approximately twice the capillary length. The maximum trapping capacity for a pair of cylinders at a given separation is linearly related to the separation when it is small compared with the capillary length. The meniscus slope angle of the largest liquid bridge produced in this regime is also a linear function of the separation. We additionally derive approximate solutions for the profile of a liquid bridge, using the linearized Laplace-Young equation. These solutions analytically verify the above-mentioned relationships obtained for the maximization of the trapping capacity.
Area coverage maximization in service facility siting
NASA Astrophysics Data System (ADS)
Matisziw, Timothy C.; Murray, Alan T.
2009-06-01
Traditionally, models for siting facilities in order to optimize coverage of area demand have made use of discrete space representations to efficiently handle both candidate facility locations and demand. These discretizations of space are often necessary given the linear functional forms of many siting models and the complexities associated with evaluating continuous space. Recently, several spatial optimization approaches have been proposed to address the more general problem of identifying facility sites that maximize regional coverage for the case where candidate sites and demand are continuously distributed across space. One assumption of existing approaches is that only demand falling within a prescribed radius of the facility can be effectively served. In many practical applications, however, service areas are not necessarily circular, as terrain, transportation, and service characteristics of the facility often result in irregular shapes. This paper develops a generalized service coverage approach, allowing a sited facility to have any continuous service area shape, not simply a circle. Given that demand and facility sites are assumed to be continuous throughout a region, geometrical properties of the demand region and the service facility coverage area are exploited to identify a facility site to optimize the correspondence between the two areas. In particular, we consider the case where demand is uniformly distributed and the service area is translated to maximize coverage. A heuristic approach is proposed for efficient model solution. Application results are presented for siting a facility given differently shaped service areas.
Spiders Tune Glue Viscosity to Maximize Adhesion.
Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali
2015-11-24
Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find the adhesion of spider glue, from five diverse spider species, maximizes at very different humidities that matches their foraging habitats. By using high-speed imaging and spreading power law, we find that the glue viscosity varies over 5 orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10(5)-10(6) cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulation of polymer-salts interaction to control viscosity can provide a simple mechanism to design humidity responsive smart adhesives. PMID:26513350
Maximal lactate steady state in Judo
de Azevedo, Paulo Henrique Silva Marques; Pithon-Curi, Tania; Zagatto, Alessandro Moura; Oliveira, João; Perez, Sérgio
2014-01-01
Summary Background: the purpose of this study was to verify the validity of respiratory compensation threshold (RCT) measured during a new single judo specific incremental test (JSIT) for aerobic demand evaluation. Methods: to test the validity of the new test, the JSIT was compared with Maximal Lactate Steady State (MLSS), which is the gold standard procedure for aerobic demand measuring. Eight well-trained male competitive judo players (24.3 ± 7.9 years; height of 169.3 ± 6.7cm; fat mass of 12.7 ± 3.9%) performed a maximal incremental specific test for judo to assess the RCT and performed on 30-minute MLSS test, where both tests were performed mimicking the UchiKomi drills. Results: the intensity at RCT measured on JSIT was not significantly different compared to MLSS (p=0.40). In addition, it was observed high and significant correlation between MLSS and RCT (r=0.90, p=0.002), as well as a high agreement. Conclusions: RCT measured during JSIT is a valid procedure to measure the aerobic demand, respecting the ecological validity of Judo. PMID:25332923
Maximizing strain in miniaturized dielectric elastomer actuators
NASA Astrophysics Data System (ADS)
Rosset, Samuel; Araromi, Oluwaseun; Shea, Herbert
2015-04-01
We present a theoretical model to optimise the unidirectional motion of a rigid object bonded to a miniaturized dielectric elastomer actuator (DEA), a configuration found for example in AMI's haptic feedback devices, or in our tuneable RF phase shifter. Recent work has shown that unidirectional motion is maximized when the membrane is both anistropically prestretched and subjected to a dead load in the direction of actuation. However, the use of dead weights for miniaturized devices is clearly highly impractical. Consequently smaller devices use the membrane itself to generate the opposing force. Since the membrane covers the entire frame, one has the same prestretch condition in the active (actuated) and passive zones. Because the passive zone contracts when the active zone expands, it does not provide a constant restoring force, reducing the maximum achievable actuation strain. We have determined the optimal ratio between the size of the electrode (active zone) and the passive zone, as well as the optimal prestretch in both in-plane directions, in order to maximize the absolute displacement of the rigid object placed at the active/passive border. Our model and experiments show that the ideal active ratio is 50%, with a displacement twice smaller than what can be obtained with a dead load. We expand our fabrication process to also show how DEAs can be laser-post-processed to remove carefully chosen regions of the passive elastomer membrane, thereby increasing the actuation strain of the device.
Symmetry-adapted Wannier functions in the maximal localization procedure
NASA Astrophysics Data System (ADS)
Sakuma, R.
2013-06-01
A procedure to construct symmetry-adapted Wannier functions in the framework of the maximally localized Wannier function approach [Marzari and Vanderbilt, Phys. Rev. BPRBMDO1098-012110.1103/PhysRevB.56.12847 56, 12847 (1997); Souza, Marzari, and Vanderbilt, Phys. Rev. BPRBMDO1098-012110.1103/PhysRevB.65.035109 65, 035109 (2001)] is presented. In this scheme, the minimization of the spread functional of the Wannier functions is performed with constraints that are derived from symmetry properties of the specified set of the Wannier functions and the Bloch functions used to construct them, therefore one can obtain a solution that does not necessarily yield the global minimum of the spread functional. As a test of this approach, results of atom-centered Wannier functions for GaAs and Cu are presented.
Hofer, Scott M; Piccinin, Andrea M
2009-06-01
Replication of research findings across independent longitudinal studies is essential for a cumulative and innovative developmental science. Meta-analysis of longitudinal studies is often limited by the amount of published information on particular research questions, the complexity of longitudinal designs and the sophistication of analyses, and practical limits on full reporting of results. In many cases, cross-study differences in sample composition and measurements impede or lessen the utility of pooled data analysis. A collaborative, coordinated analysis approach can provide a broad foundation for cumulating scientific knowledge by facilitating efficient analysis of multiple studies in ways that maximize comparability of results and permit evaluation of study differences. The goal of such an approach is to maximize opportunities for replication and extension of findings across longitudinal studies through open access to analysis scripts and output for published results, permitting modification, evaluation, and extension of alternative statistical models and application to additional data sets. Drawing on the cognitive aging literature as an example, the authors articulate some of the challenges of meta-analytic and pooled-data approaches and introduce a coordinated analysis approach as an important avenue for maximizing the comparability, replication, and extension of results from longitudinal studies. PMID:19485626
CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design.
Zhang, Shaoqiang; Chen, Yong
2016-01-01
A set of conserved binding sites recognized by a transcription factor is called a motif, which can be found by many applications of comparative genomics for identifying over-represented segments. Moreover, when numerous putative motifs are predicted from a collection of genome-wide data, their similarity data can be represented as a large graph, where these motifs are connected to one another. However, an efficient clustering algorithm is desired for clustering the motifs that belong to the same groups and separating the motifs that belong to different groups, or even deleting an amount of spurious ones. In this work, a new motif clustering algorithm, CLIMP, is proposed by using maximal cliques and sped up by parallelizing its program. When a synthetic motif dataset from the database JASPAR, a set of putative motifs from a phylogenetic foot-printing dataset, and a set of putative motifs from a ChIP dataset are used to compare the performances of CLIMP and two other high-performance algorithms, the results demonstrate that CLIMP mostly outperforms the two algorithms on the three datasets for motif clustering, so that it can be a useful complement of the clustering procedures in some genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html. PMID:27487245
CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design.
Zhang, Shaoqiang; Chen, Yong
2016-01-01
A set of conserved binding sites recognized by a transcription factor is called a motif, which can be found by many applications of comparative genomics for identifying over-represented segments. Moreover, when numerous putative motifs are predicted from a collection of genome-wide data, their similarity data can be represented as a large graph, where these motifs are connected to one another. However, an efficient clustering algorithm is desired for clustering the motifs that belong to the same groups and separating the motifs that belong to different groups, or even deleting an amount of spurious ones. In this work, a new motif clustering algorithm, CLIMP, is proposed by using maximal cliques and sped up by parallelizing its program. When a synthetic motif dataset from the database JASPAR, a set of putative motifs from a phylogenetic foot-printing dataset, and a set of putative motifs from a ChIP dataset are used to compare the performances of CLIMP and two other high-performance algorithms, the results demonstrate that CLIMP mostly outperforms the two algorithms on the three datasets for motif clustering, so that it can be a useful complement of the clustering procedures in some genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html.
CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design
Chen, Yong
2016-01-01
A set of conserved binding sites recognized by a transcription factor is called a motif, which can be found by many applications of comparative genomics for identifying over-represented segments. Moreover, when numerous putative motifs are predicted from a collection of genome-wide data, their similarity data can be represented as a large graph, where these motifs are connected to one another. However, an efficient clustering algorithm is desired for clustering the motifs that belong to the same groups and separating the motifs that belong to different groups, or even deleting an amount of spurious ones. In this work, a new motif clustering algorithm, CLIMP, is proposed by using maximal cliques and sped up by parallelizing its program. When a synthetic motif dataset from the database JASPAR, a set of putative motifs from a phylogenetic foot-printing dataset, and a set of putative motifs from a ChIP dataset are used to compare the performances of CLIMP and two other high-performance algorithms, the results demonstrate that CLIMP mostly outperforms the two algorithms on the three datasets for motif clustering, so that it can be a useful complement of the clustering procedures in some genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html. PMID:27487245
SETS. Set Equation Transformation System
Worrell, R.B.
1992-01-13
SETS is used for symbolic manipulation of Boolean equations, particularly the reduction of equations by the application of Boolean identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze noncoherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system.
Dispatch Scheduling to Maximize Exoplanet Detection
NASA Astrophysics Data System (ADS)
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
Constrained maximal power in small engines.
Gaveau, B; Moreau, M; Schulman, L S
2010-11-01
Efficiency at maximum power is studied for two simple engines (three- and five-state systems). This quantity is found to be sensitive to the variable with respect to which the maximization is implemented. It can be wildly different from the well-known Curzon-Ahlborn bound (one minus the square root of the temperature ratio), or can be even closer than previously realized. It is shown that when the power is optimized with respect to a maximum number of variables the Curzon-Ahlborn bound is a lower bound, accurate at high temperatures, but a rather poor estimate when the cold reservoir temperature approaches zero (at which point the Carnot limit is achieved).
Characterizing maximally singular phase-space distributions
NASA Astrophysics Data System (ADS)
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiations fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.
Critical paths: maximizing patient care coordination.
Spath, P L
1995-01-01
1. With today's emphasis on horizontal and vertical integration of patient care services and the new initiatives prompted by these challenges, OR nurses are considering new methods for managing the perioperative period. One such method is the critical path. 2. A critical path defines an optimal sequencing and timing of interventions by physicians, nurses, and other staff members for a particular diagnosis or procedure, designed to better use resources, maximize quality of care, and minimize delays. 3. Hospitals implementing path-based patient care have reported cost reductions and improved team-work. Critical paths have been shown to reduce patient care costs by improving hospital efficiency, not merely by reducing physician practice variations.
The maximal exercise ECG in asymptomatic men.
Cumming, G R; Borysyk, L; Dufresne, C
1972-03-18
Lead MC5 bipolar exercise ECG was obtained in 510 asymptomatic males, aged 40 to 65, utilizing the bicycle ergometer, with maximal stress in 71% of the subjects. "Ischemic changes" occurred in 61 subjects, the frequency increasing from 4% at age 40 to 45, to 20% at age 50 to 55, to 37% at age 61 to 65. Subjects having an ischemic type ECG change on exercise had more frequent minor resting ECG changes, more resting hypertension, and a greater incidence of high cholesterol values than subjects with a normal ECG response to exercise, but there was no difference in the incidence of obesity, low fitness, or high systolic blood pressure after exercise. Current evidence suggests that asymptomatic male subjects with an abnormal exercise ECG develop clinical coronary heart disease from 2.5 to over 30 times more frequently than those with a normal exercise ECG.
Zagatto, A; Redkva, P; Loures, J; Kalva Filho, C; Franco, V; Kaminagakura, E; Papoti, M
2011-12-01
The aims of this study were: (i) to measure energy system contributions in maximal anaerobic running test (MART); and (ii) to verify any correlation between MART and maximal accumulated oxygen deficit (MAOD). Eleven members of the armed forces were recruited for this study. Participants performed MART and MAOD, both accomplished on a treadmill. MART consisted of intermittent exercise, 20 s effort with 100 s recovery, after each spell of effort exercise. Energy system contributions by MART were also determined by excess post-exercise oxygen consumption, lactate response, and oxygen uptake measurements. MAOD was determined by five submaximal intensities and one supramaximal intensity exercises corresponding to 120% at maximal oxygen uptake intensity. Energy system contributions were 65.4±1.1% to aerobic; 29.5±1.1% to anaerobic a-lactic; and 5.1±0.5% to anaerobic lactic system throughout the whole test, while only during effort periods the anaerobic contribution corresponded to 73.5±1.0%. Maximal power found in MART corresponded to 111.25±1.33 mL/kg/min but did not significantly correlate with MAOD (4.69±0.30 L and 70.85±4.73 mL/kg). We concluded that the anaerobic a-lactic system is the main energy system in MART efforts and this test did not significantly correlate to MAOD.
ERIC Educational Resources Information Center
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that-on a logarithmic scale-the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle-class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset-prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
Romano, Raffaele; Loock, Peter van
2010-07-15
Quantum teleportation enables deterministic and faithful transmission of quantum states, provided a maximally entangled state is preshared between sender and receiver, and a one-way classical channel is available. Here, we prove that these resources are not only sufficient, but also necessary, for deterministically and faithfully sending quantum states through any fixed noisy channel of maximal rank, when a single use of the cannel is admitted. In other words, for this family of channels, there are no other protocols, based on different (and possibly cheaper) sets of resources, capable of replacing quantum teleportation.
ERIC Educational Resources Information Center
McCook, Byron Alexander
2009-01-01
Pennsylvania public school districts are largely funded through basic education subsidy for providing educational services for resident students and non-resident students who are placed in residential programs within the school district boundaries. Non-resident placements occur through, but are not limited to, adjudication proceedings, foster home…
Fostering Musical Independence
ERIC Educational Resources Information Center
Shieh, Eric; Allsup, Randall Everett
2016-01-01
Musical independence has always been an essential aim of musical instruction. But this objective can refer to everything from high levels of musical expertise to more student choice in the classroom. While most conceptualizations of musical independence emphasize the demonstration of knowledge and skills within particular music traditions, this…
Independent vs. Laboratory Papers.
ERIC Educational Resources Information Center
Wilson, Clint C., II
1981-01-01
Comparisons of independent and laboratory newspapers at selected California colleges indicated that (1) the independent newspapers were superior in editorial opinion and leadership characteristics; (2) the laboratory newspapers made better use of photography, art, and graphics; and (3) professional journalists highly rated their laboratory…
Independence of Internal Auditors.
ERIC Educational Resources Information Center
Montondon, Lucille; Meixner, Wilda F.
1993-01-01
A survey of 288 college and university auditors investigated patterns in their appointment, reporting, and supervisory practices as indicators of independence and objectivity. Results indicate a weakness in the positioning of internal auditing within institutions, possibly compromising auditor independence. Because the auditing function is…
American Independence. Fifth Grade.
ERIC Educational Resources Information Center
Crosby, Annette
This fifth grade teaching unit covers early conflicts between the American colonies and Britain, battles of the American Revolutionary War, and the Declaration of Independence. Knowledge goals address the pre-revolutionary acts enforced by the British, the concepts of conflict and independence, and the major events and significant people from the…
A Bayesian optimization approach for wind farm power maximization
NASA Astrophysics Data System (ADS)
Park, Jinkyoo; Law, Kincho H.
2015-03-01
The objective of this study is to develop a model-free optimization algorithm to improve the total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by the upstream wind turbine, due to the reduced wind speed and the increased turbulence intensity inside the wake, would affect and lower the power productions of the downstream wind turbines. To increase the overall wind farm power production, researchers have proposed cooperative wind turbine control approaches to coordinate the actions that mitigate the wake interference among the wind turbines and thus increase the total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method that combines the strengths of Bayesian optimization and trust region optimization algorithms. Using Gaussian Process regression, BA requires only a few number of data points to model the complex target system. Furthermore, due to the use of trust region constraint on sampling procedure, BA tends to increase the target value and converge toward near the optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in a target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize the total power using two scaled wind turbine models.
Expectation-Maximization Binary Clustering for Behavioural Annotation
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631
Action Now for Older Americans: Toward Independent Living.
ERIC Educational Resources Information Center
Thorson, James A., Ed.
The collection of conference papers given by representatives of State, Federal, and voluntary agencies, and university faculty, discusses information and planning strategies aimed at maximizing independent living for the elderly. Introductory and welcoming remarks by James A. Thorson, Virginia Smith, and Frank Groschelle are included along with…
Diffusion Tensor Estimation by Maximizing Rician Likelihood
Landman, Bennett; Bazin, Pierre-Louis; Prince, Jerry
2012-01-01
Diffusion tensor imaging (DTI) is widely used to characterize white matter in health and disease. Previous approaches to the estimation of diffusion tensors have either been statistically suboptimal or have used Gaussian approximations of the underlying noise structure, which is Rician in reality. This can cause quantities derived from these tensors — e.g., fractional anisotropy and apparent diffusion coefficient — to diverge from their true values, potentially leading to artifactual changes that confound clinically significant ones. This paper presents a novel maximum likelihood approach to tensor estimation, denoted Diffusion Tensor Estimation by Maximizing Rician Likelihood (DTEMRL). In contrast to previous approaches, DTEMRL considers the joint distribution of all observed data in the context of an augmented tensor model to account for variable levels of Rician noise. To improve numeric stability and prevent non-physical solutions, DTEMRL incorporates a robust characterization of positive definite tensors and a new estimator of underlying noise variance. In simulated and clinical data, mean squared error metrics show consistent and significant improvements from low clinical SNR to high SNR. DTEMRL may be readily supplemented with spatial regularization or a priori tensor distributions for Bayesian tensor estimation. PMID:23132746
Maximizing exosome colloidal stability following electroporation.
Hood, Joshua L; Scott, Michael J; Wickline, Samuel A
2014-03-01
Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo.
Predicting maximal grip strength using hand circumference.
Li, Ke; Hewson, David J; Duchêne, Jacques; Hogrel, Jean-Yves
2010-12-01
The objective of this study was to analyze the correlations between anthropometric data and maximal grip strength (MGS) in order to establish a simple model to predict "normal" MGS. Randomized bilateral measurement of MGS was performed on a homogeneous population of 100 subjects. MGS was measured according to a standardized protocol with three dynamometers (Jamar, Myogrip and Martin Vigorimeter) for both dominant and non-dominant sides. Several anthropometric data were also measured: height; weight; hand, wrist and forearm circumference; hand and palm length. Among these data, hand circumference had the strongest correlation with MGS for all three dynamometers and for both hands (0.789 and 0.782 for Jamar; 0.829 and 0.824 for Myogrip; 0.663 and 0.730 for Vigorimeter). In addition, the only anthropometric variable systematically selected by a stepwise multiple linear regression analysis was also hand circumference. Based on this parameter alone, a predictive regression model presented good results (r(2) = 0.624 for Jamar; r(2) = 0.683 for Myogrip and r(2) = 0.473 for Vigorimeter; all adjusted r(2)). Moreover a single equation was predictive of MGS for both men and women and for both non-dominant and dominant hands. "Normal" MGS can be predicted using hand circumference alone.
Predicting maximal grip strength using hand circumference.
Li, Ke; Hewson, David J; Duchêne, Jacques; Hogrel, Jean-Yves
2010-12-01
The objective of this study was to analyze the correlations between anthropometric data and maximal grip strength (MGS) in order to establish a simple model to predict "normal" MGS. Randomized bilateral measurement of MGS was performed on a homogeneous population of 100 subjects. MGS was measured according to a standardized protocol with three dynamometers (Jamar, Myogrip and Martin Vigorimeter) for both dominant and non-dominant sides. Several anthropometric data were also measured: height; weight; hand, wrist and forearm circumference; hand and palm length. Among these data, hand circumference had the strongest correlation with MGS for all three dynamometers and for both hands (0.789 and 0.782 for Jamar; 0.829 and 0.824 for Myogrip; 0.663 and 0.730 for Vigorimeter). In addition, the only anthropometric variable systematically selected by a stepwise multiple linear regression analysis was also hand circumference. Based on this parameter alone, a predictive regression model presented good results (r(2) = 0.624 for Jamar; r(2) = 0.683 for Myogrip and r(2) = 0.473 for Vigorimeter; all adjusted r(2)). Moreover a single equation was predictive of MGS for both men and women and for both non-dominant and dominant hands. "Normal" MGS can be predicted using hand circumference alone. PMID:20708427
Evaluating Reanalysis - Independent Observations and Observation Independence
NASA Astrophysics Data System (ADS)
Wahl, S.; Bollmeyer, C.; Danek, C.; Friederichs, P.; Keller, J. D.; Ohlwein, C.
2014-12-01
Reanalyses on global to regional scales are widely used for validation of meteorological or hydrological models and for many climate applications. However, the evaluation of the reanalyses itself is still a crucial task. A major challenge is the lack of independent observations, since most of the available observational data is already included, e. g. by the data assimilation scheme. Here, we focus on the evaluation of dynamical reanalyses which are obtained by using numerical weather prediction models with a fixed data assimilation scheme. Precipitation is generally not assimilated in dynamical reanalyses (except for e.g. latent heat nudging) and thereby provides valuable data for the evaluation of reanalysis. Since precipitation results from the complex dynamical and microphysical atmospheric processes, an accurate representation of precipitation is often used as an indicator for a good model performance. Here, we use independent observations of daily precipitation accumulations from European rain gauges (E-OBS) of the years 2008 and 2009 for the intercomparison of various regional reanalyses products for the European CORDEX domain (Hirlam reanalysis at 0.2°, Metoffice UM reanalysis at 0.11°, COSMO reanalysis at 0.055°). This allows for assessing the benefits of increased horizontal resolution compared to global reanalyses. Furthermore, the effect of latent heat nudging (assimilation of radar-derived rain rates) is investigated using an experimental setup of the COSMO reanalysis with 6km and 2km resolution for summer 2011. Further, we present an observation independent evaluation based on kinetic energy spectra. Such spectra should follow a k-3 dependence of the wave number k for the larger scale, and a k-5/3 dependence on the mesoscale. We compare the spectra of the aforementioned regional reanalyses in order to investigate the general capability of the reanalyses to resolve events on the mesoscale (e.g. effective resolution). The intercomparison and
Maximal and sub-maximal functional lifting performance at different platform heights.
Savage, Robert J; Jaffrey, Mark A; Billing, Daniel C; Ham, Daniel J
2015-01-01
Introducing valid physical employment tests requires identifying and developing a small number of practical tests that provide broad coverage of physical performance across the full range of job tasks. This study investigated discrete lifting performance across various platform heights reflective of common military lifting tasks. Sixteen Australian Army personnel performed a discrete lifting assessment to maximal lifting capacity (MLC) and maximal acceptable weight of lift (MAWL) at four platform heights between 1.30 and 1.70 m. There were strong correlations between platform height and normalised lifting performance for MLC (R(2) = 0.76 ± 0.18, p < 0.05) and MAWL (R(2) = 0.73 ± 0.21, p < 0.05). The developed relationship allowed prediction of lifting capacity at one platform height based on lifting capacity at any of the three other heights, with a standard error of < 4.5 kg and < 2.0 kg for MLC and MAWL, respectively.
NASA Astrophysics Data System (ADS)
Yan, Ming; Vese, Luminita A.
2011-03-01
Computerized tomography (CT) plays an important role in medical imaging, especially for diagnosis and therapy. However, higher radiation dose from CT will result in increasing of radiation exposure in the population. Therefore, the reduction of radiation from CT is an essential issue. Expectation maximization (EM) is an iterative method used for CT image reconstruction that maximizes the likelihood function under Poisson noise assumption. Total variation regularization is a technique used frequently in image restoration to preserve edges, given the assumption that most images are piecewise constant. Here, we propose a method combining expectation maximization and total variation regularization, called EM+TV. This method can reconstruct a better image using fewer views in the computed tomography setting, thus reducing the overall dose of radiation. The numerical results in two and three dimensions show the efficiency of the proposed EM+TV method by comparison with those obtained by filtered back projection (FBP) or by EM only.
Scheinker, Alexander; Baily, Scott; Young, Daniel; Kolski, Jeffrey S.; Prokop, Mark
2014-08-01
In this work, an implementation of a recently developed model-independent adaptive control scheme, for tuning uncertain and time varying systems, is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, ability to handle an arbitrary number of components without increased complexity, and the approach is extremely robust to measurement noise, a property which is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm for simultaneous tuning of two buncher radio frequency (RF) cavities, in order to maximize beam acceptance into the accelerating electromagnetic fieldmore » cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream from the two bunching cavities. The algorithm automatically responds to arbitrary phase shift of the cavity phases, automatically re-tuning the cavity settings and maximizing beam acceptance. Because it is model independent it can be utilized for continuous adaptation to time-variation of a large system, such as due to thermal drift, or damage to components, in which the remaining, functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.« less
Scheinker, Alexander; Baily, Scott; Young, Daniel; Kolski, Jeffrey S.; Prokop, Mark
2014-08-01
In this work, an implementation of a recently developed model-independent adaptive control scheme, for tuning uncertain and time varying systems, is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, ability to handle an arbitrary number of components without increased complexity, and the approach is extremely robust to measurement noise, a property which is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm for simultaneous tuning of two buncher radio frequency (RF) cavities, in order to maximize beam acceptance into the accelerating electromagnetic field cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream from the two bunching cavities. The algorithm automatically responds to arbitrary phase shift of the cavity phases, automatically re-tuning the cavity settings and maximizing beam acceptance. Because it is model independent it can be utilized for continuous adaptation to time-variation of a large system, such as due to thermal drift, or damage to components, in which the remaining, functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.
Demura, Shinichi; Morishita, Koji; Yamada, Takayoshi; Yamaji, Shunsuke; Komatsu, Miho
2011-11-01
L-Ornithine plays an important role in ammonia metabolism via the urea cycle. This study aimed to examine the effect of L-ornithine hydrochloride ingestion on ammonia metabolism and performance after intermittent maximal anaerobic cycle ergometer exercise. Ten healthy young adults (age, 23.8 ± 3.9 year; height, 172.3 ± 5.5 cm; body mass, 67.7 ± 6.1 kg) with regular training experience ingested L-ornithine hydrochloride (0.1 g/kg, body mass) or placebo after 30 s of maximal cycling exercise. Five sets of the same maximal cycling exercise were conducted 60 min after ingestion, and maximal cycling exercise was conducted after a 15 min rest. The intensity of cycling exercise was based on each subject's body mass (0.74 N kg(-1)). Work volume (watt), peak rpm (rpm) before and after intermittent maximal ergometer exercise and the following serum parameters were measured before ingestion, immediately after exercise and 15 min after exercise: ornithine, ammonia, urea, lactic acid and glutamate. Peak rpm was significantly greater with L-ornithine hydrochloride ingestion than with placebo ingestion. Serum ornithine level was significantly greater with L-ornithine hydrochloride ingestion than with placebo ingestion immediately and 15 min after intermittent maximal cycle ergometer exercise. In conclusion, although maximal anaerobic performance may be improved by L-ornithine hydrochloride ingestion before intermittent maximal anaerobic cycle ergometer exercise, the above may not depend on increase of ammonia metabolism with L-ornithine hydrochloride.
Skeletal muscle vasodilatation during maximal exercise in health and disease.
Calbet, Jose A L; Lundby, Carsten
2012-12-15
Maximal exercise vasodilatation results from the balance between vasoconstricting and vasodilating signals combined with the vascular reactivity to these signals. During maximal exercise with a small muscle mass the skeletal muscle vascular bed is fully vasodilated. During maximal whole body exercise, however, vasodilatation is restrained by the sympathetic system. This is necessary to avoid hypotension since the maximal vascular conductance of the musculature exceeds the maximal pumping capacity of the heart. Endurance training and high-intensity intermittent knee extension training increase the capacity for maximal exercise vasodilatation by 20-30%, mainly due to an enhanced vasodilatory capacity, as maximal exercise perfusion pressure changes little with training. The increase in maximal exercise vascular conductance is to a large extent explained by skeletal muscle hypertrophy and vascular remodelling. The vasodilatory capacity during maximal exercise is reduced or blunted with ageing, as well as in chronic heart failure patients and chronically hypoxic humans; reduced vasodilatory responsiveness and increased sympathetic activity (and probably, altered sympatholysis) are potential mechanisms accounting for this effect. Pharmacological counteraction of the sympathetic restraint may result in lower perfusion pressure and reduced oxygen extraction by the exercising muscles. However, at the same time fast inhibition of the chemoreflex in maximally exercising humans may result in increased vasodilatation, further confirming a restraining role of the sympathetic nervous system on exercise-induced vasodilatation. This is likely to be critical for the maintenance of blood pressure in exercising patients with a limited heart pump capacity.
Skeletal muscle vasodilatation during maximal exercise in health and disease
Calbet, Jose A L; Lundby, Carsten
2012-01-01
Maximal exercise vasodilatation results from the balance between vasoconstricting and vasodilating signals combined with the vascular reactivity to these signals. During maximal exercise with a small muscle mass the skeletal muscle vascular bed is fully vasodilated. During maximal whole body exercise, however, vasodilatation is restrained by the sympathetic system. This is necessary to avoid hypotension since the maximal vascular conductance of the musculature exceeds the maximal pumping capacity of the heart. Endurance training and high-intensity intermittent knee extension training increase the capacity for maximal exercise vasodilatation by 20–30%, mainly due to an enhanced vasodilatory capacity, as maximal exercise perfusion pressure changes little with training. The increase in maximal exercise vascular conductance is to a large extent explained by skeletal muscle hypertrophy and vascular remodelling. The vasodilatory capacity during maximal exercise is reduced or blunted with ageing, as well as in chronic heart failure patients and chronically hypoxic humans; reduced vasodilatory responsiveness and increased sympathetic activity (and probably, altered sympatholysis) are potential mechanisms accounting for this effect. Pharmacological counteraction of the sympathetic restraint may result in lower perfusion pressure and reduced oxygen extraction by the exercising muscles. However, at the same time fast inhibition of the chemoreflex in maximally exercising humans may result in increased vasodilatation, further confirming a restraining role of the sympathetic nervous system on exercise-induced vasodilatation. This is likely to be critical for the maintenance of blood pressure in exercising patients with a limited heart pump capacity. PMID:23027820
Matching, Demand, Maximization, and Consumer Choice
ERIC Educational Resources Information Center
Wells, Victoria K.; Foxall, Gordon R.
2013-01-01
The use of behavioral economics and behavioral psychology in consumer choice has been limited. The current study extends the study of consumer behavior analysis, a synthesis between behavioral psychology, economics, and marketing, to a larger data set. This article presents the current work and results from the early analysis of the data. We…
Maximizing Athletic Potential: Integrating Mind and Body.
ERIC Educational Resources Information Center
Harris, Dorothy V.
1982-01-01
An athlete needs to be taught to identify strengths and weaknesses, to concentrate for longer periods of time and regain lost concentration, to develop greater self-discipline and self control, and to deal with performance anxiety. Instructions are given for teaching relaxation methods to athletes, improving concentration, and setting realistic…
Maximal yields from multispecies fisheries systems: rules for systems with multiple trophic levels.
Matsuda, Hiroyuki; Abrams, Peter A
2006-02-01
Increasing centralization of the control of fisheries combined with increased knowledge of food-web relationships is likely to lead to attempts to maximize economic yield from entire food webs. With the exception of predator-prey systems, we lack any analysis of the nature of such yield-maximizing strategies. We use simple food-web models to investigate the nature of yield- or profit-maximizing exploitation of communities including two types of three-species food webs and a variety of six-species systems with as many as five trophic levels. These models show that, for most webs, relatively few species are harvested at equilibrium and that a significant fraction of the species is lost from the web. These extinctions occur for two reasons: (1) indirect effects due to harvesting of species that had positive effects on the extinct species, and (2) intentional eradication of species that are not themselves valuable, but have negative effects on more valuable species. In most cases, the yield-maximizing harvest involves taking only species from one trophic level. In no case was an unharvested top predator part of the yield-maximizing strategy. Analyses reveal that the existence of direct density dependence in consumers has a large effect on the nature of the optimal harvest policy, typically resulting in harvest of a larger number of species. A constraint that all species must be retained in the system (a "constraint of biodiversity conservation") usually increases the number of species and trophic levels harvested at the yield-maximizing policy. The reduction in total yield caused by such a constraint is modest for most food webs but can be over 90% in some cases. Independent harvesting of species within the web can also cause extinctions but is less likely to do so. PMID:16705975
Maximal Torque and Muscle Strength is Affected by Seat Distance from the Steering Wheel when Driving
Yoo, Kyung-Tae; An, Ho-Jung; Lee, Sun-Kyung; Choi, Jung-Hyun
2013-01-01
[Purpose] This research analyzed how seat distance and gender affect maximal torque and muscle strength when driving to present base data for the optimal driving posture. [Subjects and Methods] The subjects were 27 college students in their 20's, 15 males and 12 females. After had been measured, the subjects sat in front of a steering wheel with the distance between the steering wheel and the seat set in turns. at 50, 70, and 90% their arm length, and the maximal torque and muscle strength were measured. [Results] Both the maximal torque and muscle strength were found to be greater in male subjects than female subjects whether they turned the steering wheel clockwise or counterclockwise. The difference was big enough to be statistically significant. Maximal torque was greatest when the seat distance was 50% of arm length, whether turning the steering wheel clockwise or counterclockwise. There were statistically significant differences in maximal torque between seat distances of 50 and 70% and 90% of the arm length. Muscle strength, in contrast, was found to be the greatest at a seat distance of 70% of arm length. [Conclusion] We conclude that greater torque can be obtained when the steering wheel is nearer the seat while greater muscle strength can be obtained when the seat distance from the steering wheel is 70% of the arm length. PMID:24259937
Maximal vasodilation does not eliminate the vascular waterfall in the canine hindlimb.
Shrier, I; Magder, S
1995-11-01
Previous studies have shown that blood flow through skeletal muscle is regulated by changes in an arteriolar vascular waterfall [critical pressure (Pcrit)] and a proximal (arterial) resistance (Ra) element. To determine whether Pcrit still exists during maximal vasodilation, we pump perfused vascularly isolated canine hindlimbs. We set outflow pressure to zero and measured Pcrit, perfusion pressure (Pper), and regional elastic recoil pressure (Pcl; by a stop-flow technique) and calculated both Ra and venous resistance before and after maximal vasodilation with adenosine and nitroprusside. Pcrit was 56.4 +/- 5.1 mmHg before vasodilation and decreased to 11.0 +/- 0.6 mmHg after vasodilation, which was less than the downstream pressure in the venous compliant region (Pel). Therefore, Pcrit should not have affected flow at normal Pper levels under vasodilated conditions. However, we could still measure Pcrit because our technique allowed Pel to decline and Pcrit becomes apparent once Pel < Pcrit. With vasodilation, Ra decreased to < 8.1 +/- 2.6% and Rv decreased to 41 +/- 6% of control values. In contrast to the nonvasodilated vasculature, increases in venous pressure during maximal vasodilation caused immediate increases in Pper. This also suggests that the vascular waterfall is inactive under conditions of maximal vasodilation. We conclude that a small arteriolar Pcrit is still present in the maximally vasodilated hindlimb but is less than the downstream pressure and does not affect flow under these conditions.
1994-12-30
Data-machine independence achieved by using four technologies (ASN.1, XDR, SDS, and ZEBRA) has been evaluated by encoding two different applications in each of the above; and their results compared against the standard programming method using C.
NASA Technical Reports Server (NTRS)
1987-01-01
The work done on the Media Independent Interface (MII) Interface Control Document (ICD) program is described and recommendations based on it were made. Explanations and rationale for the content of the ICD itself are presented.
Maximize, minimize or target - optimization for a fitted response from a designed experiment
Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu
2016-04-01
One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.
Maximality-Based Structural Operational Semantics for Petri Nets
NASA Astrophysics Data System (ADS)
Saīdouni, Djamel Eddine; Belala, Nabil; Bouneb, Messaouda
2009-03-01
The goal of this work is to exploit an implementable model, namely the maximality-based labeled transition system, which permits to express true-concurrency in a natural way without splitting actions on their start and end events. One can do this by giving a maximality-based structural operational semantics for the model of Place/Transition Petri nets in terms of maximality-based labeled transition systems structures.
Modes of independence while informal caregiving.
Tellioğlu, Hilda; Hensely-Schinkinger, Susanne; Pinatti De Carvalho, Aparecido Fabiano
2015-01-01
This paper is about understanding and conceptualizing the notion of independence in the context of caregiving. Based on the current studies and on our ethnographic and design research in an AAL project (TOPIC) we introduce a model of independence consisting of four dimensions: action, finance, decision, and emotion. These interrelated dimensions are described and discussed in the setting of informal caregiving. Some additional examples are shown to illustrate how to reduce the dependence of informal caregivers before concluding the paper. PMID:26294578
Metabolic states with maximal specific rate carry flux through an elementary flux mode.
Wortel, Meike T; Peters, Han; Hulshof, Josephus; Teusink, Bas; Bruggeman, Frank J
2014-03-01
Specific product formation rates and cellular growth rates are important maximization targets in biotechnology and microbial evolution. Maximization of a specific rate (i.e. a rate expressed per unit biomass amount) requires the expression of particular metabolic pathways at optimal enzyme concentrations. In contrast to the prediction of maximal product yields, any prediction of optimal specific rates at the genome scale is currently computationally intractable, even if the kinetic properties of all enzymes are available. In the present study, we characterize maximal-specific-rate states of metabolic networks of arbitrary size and complexity, including genome-scale kinetic models. We report that optimal states are elementary flux modes, which are minimal metabolic networks operating at a thermodynamically-feasible steady state with one independent flux. Remarkably, elementary flux modes rely only on reaction stoichiometry, yet they function as the optimal states of mathematical models incorporating enzyme kinetics. Our results pave the way for the optimization of genome-scale kinetic models because they offer huge simplifications to overcome the concomitant computational problems.
Violating Bell inequalities maximally for two d-dimensional systems
Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin
2006-09-15
We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |{psi}>{sub app} that violate Bell inequality more strongly than the maximally entangled state but are somewhat close to these eigenvectors is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information.
Cardiorespiratory fitness associates with metabolic risk independent of central adiposity.
Silva, G; Aires, L; Martins, C; Mota, J; Oliveira, J; Ribeiro, J C
2013-10-01
This study sought to analyze the associations between cardiorespiratory fitness (CRF), waist circumference (WC) and metabolic risk in children and adolescents. Participants were 633 subjects (58.7% girls) ages 10-18 years. Metabolic risk score (MRS) was calculated from HDL-cholesterol, triglycerides, fasting glucose and mean arterial pressure. MRS was dichotomized into low and high metabolic risk (HMRS). CRF was defined as the maximal oxygen uptake (VO₂max) estimated from the 20 m Shuttle Run Test. The first quartile of CRF was set as the low fitness group. The fourth quartile of WC was defined as high central adiposity. With adjustments for age, sex and WC, CRF was correlated with MRS (r=-0.095; p<0.05). WC was correlated with MRS (r=0.150; p<0.001) after adjustments for age, sex and CRF. Participants who had low fitness levels, presented higher levels of MRS (p<0.001) compared to those who were fit, even after adjustment for age, sex and WC. In comparison with subjects who were fit with normal central adiposity, an increased odds ratio (OR) for being at HMRS was found for participants who were of low fitness level with high central adiposity (OR=2.934; 95%CI= 1.690-5.092) and for those who were of low fitness with normal central adiposity (OR=2.234; 95%CI=1.116-4.279). Results suggest that CRF relates to MRS independently of central adiposity.
Zimmerman, K; Levitis, D; Addicott, E; Pringle, A
2016-02-01
We present a novel algorithm for the design of crossing experiments. The algorithm identifies a set of individuals (a 'crossing-set') from a larger pool of potential crossing-sets by maximizing the diversity of traits of interest, for example, maximizing the range of genetic and geographic distances between individuals included in the crossing-set. To calculate diversity, we use the mean nearest neighbor distance of crosses plotted in trait space. We implement our algorithm on a real dataset of Neurospora crassa strains, using the genetic and geographic distances between potential crosses as a two-dimensional trait space. In simulated mating experiments, crossing-sets selected by our algorithm provide better estimates of underlying parameter values than randomly chosen crossing-sets.
Practice Setting Modification and Skill Acquisition
ERIC Educational Resources Information Center
Coker, Cheryl A.
2005-01-01
One hindrance to maximizing the amount of time students are actively engaged in quality practice is the wait time that results from limited equipment (e.g. basketball goals) and/or facilities (e.g. shot put ring). A possible solution to counteract this problem would be to modify the natural performance setting. Empirical evidence regarding the…
NASA Astrophysics Data System (ADS)
Richman, Barbara T.
A proposal to pull the National Oceanic and Atmospheric Administration (NOAA) out of the Department of Commerce and make it an independent agency was the subject of a recent congressional hearing. Supporters within the science community and in Congress said that an independent NOAA will benefit by being more visible and by not being tied to a cabinet-level department whose main concerns lie elsewhere. The proposal's critics, however, cautioned that making NOAA independent could make it even more vulnerable to the budget axe and would sever the agency's direct access to the President.The separation of NOAA from Commerce was contained in a June 1 proposal by President Ronald Reagan that also called for all federal trade functions under the Department of Commerce to be reorganized into a new Department of International Trade and Industry (DITI).
Independent technical review, handbook
Not Available
1994-02-01
Purpose Provide an independent engineering review of the major projects being funded by the Department of Energy, Office of Environmental Restoration and Waste Management. The independent engineering review will address questions of whether the engineering practice is sufficiently developed to a point where a major project can be executed without significant technical problems. The independent review will focus on questions related to: (1) Adequacy of development of the technical base of understanding; (2) Status of development and availability of technology among the various alternatives; (3) Status and availability of the industrial infrastructure to support project design, equipment fabrication, facility construction, and process and program/project operation; (4) Adequacy of the design effort to provide a sound foundation to support execution of project; (5) Ability of the organization to fully integrate the system, and direct, manage, and control the execution of a complex major project.
Augusiak, Remigiusz; Horodecki, Pawel
2006-07-15
It is shown that Smolin four-qubit bound entangled states [J. A. Smolin, Phys. Rev. A 63, 032306 (2001)] can maximally violate the simple two-setting Bell inequality similar to the standard Clauser-Horne-Shimony-Holt (CHSH) inequality. The simplicity of the setting and the robustness of the entanglement make it promising for current experimental technology. On the other hand, the entanglement does not allow for secure key distillation, so neither entanglement nor maximal violation of Bell inequalities implies directly the presence of a quantum secure key. As a result, one concludes that two tasks--reducing of communication complexity and cryptography--are not (even qualitatively) equivalent in a quantum multipartite scenario.
The maximal runaway temperature of Earth-like planets
NASA Astrophysics Data System (ADS)
Shaviv, Nir J.; Shaviv, Giora; Wehrse, Rainer
2011-12-01
In Simpson’s (Simpson, G.C. [1927]. Mem. R. Meteorol. Soc. II (16), 69-95) classical derivation of the temperature of the Earth in the semi-gray model, the surface temperature diverges as the fourth root of the thermal radiation’s optical depth. No resolution to this apparent paradox was yet obtained under the strict semi-gray approximation. Using this approximation and a simplified approach, we study the saturation of the runaway greenhouse effect. First we generalize the problem of the semi-gray model to cases in which a non-negligible fraction of the stellar radiation falls on the long-wavelength range, and/or that the planetary long-wavelength emission penetrates into the transparent short wavelength domain of the absorption. Second, applying the most general assumptions and independently of any particular properties of an absorber, we show that the greenhouse effect saturates and that any Earth-like planet has a maximal temperature which depends on the type of and distance to its main-sequence star, its albedo and the primary atmospheric components which determine the cutoff frequency below which the atmosphere is optically thick. For example, a hypothetical convection-less planet similar to Venus, that is optically thin in the visible, could have at most a surface temperature of 1200-1300 K irrespective of the nature of the greenhouse gas. We show that two primary mechanisms are responsible for the saturation of the runaway greenhouse effect, depending on the value of λcut, the wavelength above which the atmosphere becomes optically thick. Unless λcut is small and resides in the optical region, saturation is achieved by radiating the thermal flux of the planet through the short wavelength tail of the thermal distribution. This has an interesting observational implication, the radiation from such a planet should be skewed towards the NIR. Otherwise, saturation takes place by radiating through windows in the FIR.
Maximal zero textures in Linear and Inverse seesaw
NASA Astrophysics Data System (ADS)
Sinha, Roopam; Samanta, Rome; Ghosal, Ambar
2016-08-01
We investigate Linear and Inverse seesaw mechanisms with maximal zero textures of the constituent matrices subjected to the assumption of non-zero eigenvalues for the neutrino mass matrix mν and charged lepton mass matrix me. If we restrict to the minimally parametrized non-singular 'me' (i.e., with maximum number of zeros) it gives rise to only 6 possible textures of me. Non-zero determinant of mν dictates six possible textures of the constituent matrices. We ask in this minimalistic approach, what phenomenologically allowed maximum zero textures are possible. It turns out that Inverse seesaw leads to 7 allowed two-zero textures while the Linear seesaw leads to only one. In Inverse seesaw, we show that 2 is the maximum number of independent zeros that can be inserted into μS to obtain all 7 viable two-zero textures of mν. On the other hand, in Linear seesaw mechanism, the minimal scheme allows maximum 5 zeros to be accommodated in 'm' so as to obtain viable effective neutrino mass matrices (mν). Interestingly, we find that our minimalistic approach in Inverse seesaw leads to a realization of all the phenomenologically allowed two-zero textures whereas in Linear seesaw only one such texture is viable. Next, our numerical analysis shows that none of the two-zero textures give rise to enough CP violation or significant δCP. Therefore, if δCP = π / 2 is established, our minimalistic scheme may still be viable provided we allow larger number of parameters in 'me'.
Pace's Maxims for Homegrown Library Projects. Coming Full Circle
ERIC Educational Resources Information Center
Pace, Andrew K.
2005-01-01
This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…
Minimal Length, Maximal Momentum and the Entropic Force Law
NASA Astrophysics Data System (ADS)
Nozari, Kourosh; Pedram, Pouria; Molkara, M.
2012-04-01
Different candidates of quantum gravity proposal such as string theory, noncommutative geometry, loop quantum gravity and doubly special relativity, all predict the existence of a minimum observable length and/or a maximal momentum which modify the standard Heisenberg uncertainty principle. In this paper, we study the effects of minimal length and maximal momentum on the entropic force law formulated recently by E. Verlinde.
Effect of Age and Other Factors on Maximal Heart Rate.
ERIC Educational Resources Information Center
Londeree, Ben R.; Moeschberger, Melvin L.
1982-01-01
To reduce confusion regarding reported effects of age on maximal exercise heart rate, a comprehensive review of the relevant English literature was conducted. Data on maximal heart rate after exercising with a bicycle, a treadmill, and after swimming were analyzed with regard to physical fitness and to age, sex, and racial differences. (Authors/PP)
PERMANENT GENETIC RESOURCES: Development of polymorphic microsatellite markers in Acer mono Maxim.
Kikuchi, S; Shibata, M
2008-03-01
Thirteen polymorphic microsatellite markers were developed for Acer mono Maxim., one of the major components of deciduous forests in Japan. An average of 13.8 alleles were found, with expected heterozygosity ranging from 0.140 to 0.945 in 34 A. mono individuals from the Ogawa Forest Reserve in Ibaraki Prefecture, Japan. This set of microsatellite markers can be used to analyse mating patterns and gene flow in A. mono populations.
Comprehensibility maximization and humanly comprehensible representations
NASA Astrophysics Data System (ADS)
Kamimura, Ryotaro
2012-04-01
In this paper, we propose a new information-theoretic method to measure the comprehensibility of network configurations in competitive learning. Comprehensibility is supposed to be measured by information contained in components in competitive networks. Thus, the increase in information corresponds to the increase in comprehensibility of network configurations. One of the most important characteristics of the method is that parameters can be explicitly determined so as to produce a state where the different types of comprehensibility can be mutually increased. We applied the method to two problems, namely an artificial data set and the ionosphere data from the well-known machine learning database. In both problems, we showed that improved performance could be obtained in terms of all types of comprehensibility and quantization errors. For the topographic errors, we found that updating connection weights prevented them from increasing. Then, the optimal values of comprehensibility could be explicitly determined, and clearer class boundaries were generated.
Criticality Maximizes Complexity in Neural Tissue
Timme, Nicholas M.; Marshall, Najja J.; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M.
2016-01-01
The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity—an information theoretic measure—has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure represents an important tool in complex systems research because it allows for the quantification of the complexity of a neural system. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online. PMID:27729870
Maximal entanglement versus entropy for mixed quantum states
Wei, T.-C.; Goldbart, Paul M.; Kwiat, Paul G.; Nemoto, Kae; Munro, William J.; Verstraete, Frank
2003-02-01
Maximally entangled mixed states are those states that, for a given mixedness, achieve the greatest possible entanglement. For two-qubit systems and for various combinations of entanglement and mixedness measures, the form of the corresponding maximally entangled mixed states is determined primarily analytically. As measures of entanglement, we consider entanglement of formation, relative entropy of entanglement, and negativity; as measures of mixedness, we consider linear and von Neumann entropies. We show that the forms of the maximally entangled mixed states can vary with the combination of (entanglement and mixedness) measures chosen. Moreover, for certain combinations, the forms of the maximally entangled mixed states can change discontinuously at a specific value of the entropy. Along the way, we determine the states that, for a given value of entropy, achieve maximal violation of Bell's inequality.
ERIC Educational Resources Information Center
James, H. Thomas
Independent schools that are of viable size, well managed, and strategically located to meet competition will survive and prosper past the current financial crisis. We live in a complex technological society with insatiable demands for knowledgeable people to keep it running. The future will be marked by the orderly selection of qualified people,…
Independence, Disengagement, and Discipline
ERIC Educational Resources Information Center
Rubin, Ron
2012-01-01
School disengagement is linked to a lack of opportunities for students to fulfill their needs for independence and self-determination. Young people have little say about what, when, where, and how they will learn, the criteria used to assess their success, and the content of school and classroom rules. Traditional behavior management discourages…
Caring about Independent Lives
ERIC Educational Resources Information Center
Christensen, Karen
2010-01-01
With the rhetoric of independence, new cash for care systems were introduced in many developed welfare states at the end of the 20th century. These systems allow local authorities to pay people who are eligible for community care services directly, to enable them to employ their own careworkers. Despite the obvious importance of the careworker's…
Postcard from Independence, Mo.
ERIC Educational Resources Information Center
Archer, Jeff
2004-01-01
This article reports results showing that the Independence, Missori school district failed to meet almost every one of its improvement goals under the No Child Left Behind Act. The state accreditation system stresses improvement over past scores, while the federal law demands specified amounts of annual progress toward the ultimate goal of 100…
Independent School Governance.
ERIC Educational Resources Information Center
Beavis, Allan K.
Findings of a study that examined the role of the governing body in the independent school's self-renewing processes are presented in this paper. From the holistic paradigm, the school is viewed as a self-renewing system that is able to maintain its identity despite environmental changes through existing structures that define and create…
Maximal stochastic transport in the Lorenz equations
NASA Astrophysics Data System (ADS)
Agarwal, Sahil; Wettlaufer, J. S.
2016-01-01
We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh-Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
Kettlebell swing training improves maximal and explosive strength.
Lake, Jason P; Lauder, Mike A
2012-08-01
The aim of this study was to establish the effect that kettlebell swing (KB) training had on measures of maximum (half squat-HS-1 repetition maximum [1RM]) and explosive (vertical jump height-VJH) strength. To put these effects into context, they were compared with the effects of jump squat power training (JS-known to improve 1RM and VJH). Twenty-one healthy men (age = 18-27 years, body mass = 72.58 ± 12.87 kg) who could perform a proficient HS were tested for their HS 1RM and VJH pre- and post-training. Subjects were randomly assigned to either a KB or JS training group after HS 1RM testing and trained twice a week. The KB group performed 12-minute bouts of KB exercise (12 rounds of 30-second exercise, 30-second rest with 12 kg if <70 kg or 16 kg if >70 kg). The JS group performed at least 4 sets of 3 JS with the load that maximized peak power-Training volume was altered to accommodate different training loads and ranged from 4 sets of 3 with the heaviest load (60% 1RM) to 8 sets of 6 with the lightest load (0% 1RM). Maximum strength improved by 9.8% (HS 1RM: 165-181% body mass, p < 0.001) after the training intervention, and post hoc analysis revealed that there was no significant difference between the effect of KB and JS training (p = 0.56). Explosive strength improved by 19.8% (VJH: 20.6-24.3 cm) after the training intervention, and post hoc analysis revealed that the type of training did not significantly affect this either (p = 0.38). The results of this study clearly demonstrate that 6 weeks of biweekly KB training provides a stimulus that is sufficient to increase both maximum and explosive strength offering a useful alternative to strength and conditioning professionals seeking variety for their athletes.
Muscle Damage following Maximal Eccentric Knee Extensions in Males and Females
2016-01-01
Aim To investigate whether there is a sex difference in exercise induced muscle damage. Materials and Method Vastus Lateralis and patella tendon properties were measured in males and females using ultrasonography. During maximal voluntary eccentric knee extensions (12 reps x 6 sets), Vastus Lateralis fascicle lengthening and maximal voluntary eccentric knee extensions torque were recorded every 10° of knee joint angle (20–90°). Isometric torque, Creatine Kinase and muscle soreness were measured pre, post, 48, 96 and 168 hours post damage as markers of exercise induced muscle damage. Results Patella tendon stiffness and Vastus Lateralis fascicle lengthening were significantly higher in males compared to females (p<0.05). There was no sex difference in isometric torque loss and muscle soreness post exercise induced muscle damage (p>0.05). Creatine Kinase levels post exercise induced muscle damage were higher in males compared to females (p<0.05), and remained higher when maximal voluntary eccentric knee extension torque, relative to estimated quadriceps anatomical cross sectional area, was taken as a covariate (p<0.05). Conclusion Based on isometric torque loss, there is no sex difference in exercise induced muscle damage. The higher Creatine Kinase in males could not be explained by differences in maximal voluntary eccentric knee extension torque, Vastus Lateralis fascicle lengthening and patella tendon stiffness. Further research is required to understand the significant sex differences in Creatine Kinase levels following exercise induced muscle damage. PMID:26986066
Erol, Volkan; Ozaydin, Fatih; Altintas, Azmi Ali
2014-06-24
Entanglement has been studied extensively for unveiling the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known measures for quantifying entanglement such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. It was found that for sets of non-maximally entangled states of two qubits, comparing these entanglement measures may lead to different entanglement orderings of the states. On the other hand, although it is not an entanglement measure and not monotonic under local operations, due to its ability of detecting multipartite entanglement, quantum Fisher information (QFI) has recently received an intense attraction generally with entanglement in the focus. In this work, we revisit the state ordering problem of general two qubit states. Generating a thousand random quantum states and performing an optimization based on local general rotations of each qubit, we calculate the maximal QFI for each state. We analyze the maximized QFI in comparison with concurrence, REE and negativity and obtain new state orderings. We show that there are pairs of states having equal maximized QFI but different values for concurrence, REE and negativity and vice versa.
A taxonomic approach to communicating maxims in interstellar messages
NASA Astrophysics Data System (ADS)
Vakoch, Douglas A.
2011-02-01
Previous discussions of interstellar messages that could be sent to extraterrestrial intelligence have focused on descriptions of mathematics, science, and aspects of human culture and civilization. Although some of these depictions of humanity have implicitly referred to our aspirations, this has not clearly been separated from descriptions of our actions and attitudes as they are. In this paper, a methodology is developed for constructing interstellar messages that convey information about our aspirations by developing a taxonomy of maxims that provide guidance for living. Sixty-six maxims providing guidance for living were judged for degree of similarity to each of other. Quantitative measures of the degree of similarity between all pairs of maxims were derived by aggregating similarity judgments across individual participants. These composite similarity ratings were subjected to a cluster analysis, which yielded a taxonomy that highlights perceived interrelationships between individual maxims and that identifies major classes of maxims. Such maxims can be encoded in interstellar messages through three-dimensional animation sequences conveying narratives that highlight interactions between individuals. In addition, verbal descriptions of these interactions in Basic English can be combined with these pictorial sequences to increase intelligibility. Online projects to collect messages such as the SETI Institute's Earth Speaks and La Tierra Habla, can be used to solicit maxims from participants around the world.
Oxygen uptake in maximal effort constant rate and interval running.
Pratt, Daniel; O'Brien, Brendan J; Clark, Bradley
2013-01-01
This study investigated differences in average VO2 of maximal effort interval running to maximal effort constant rate running at lactate threshold matched for time. The average VO2 and distance covered of 10 recreational male runners (VO2max: 4158 ± 390 mL · min(-1)) were compared between a maximal effort constant-rate run at lactate threshold (CRLT), a maximal effort interval run (INT) consisting of 2 min at VO2max speed with 2 minutes at 50% of VO2 repeated 5 times, and a run at the average speed sustained during the interval run (CR submax). Data are presented as mean and 95% confidence intervals. The average VO2 for INT, 3451 (3269-3633) mL · min(-1), 83% VO2max, was not significantly different to CRLT, 3464 (3285-3643) mL · min(-1), 84% VO2max, but both were significantly higher than CR sub-max, 3464 (3285-3643) mL · min(-1), 76% VO2max. The distance covered was significantly greater in CLRT, 4431 (4202-3731) metres, compared to INT and CR sub-max, 4070 (3831-4309) metres. The novel finding was that a 20-minute maximal effort constant rate run uses similar amounts of oxygen as a 20-minute maximal effort interval run despite the greater distance covered in the maximal effort constant-rate run. PMID:24288501
Agent independent task planning
NASA Technical Reports Server (NTRS)
Davis, William S.
1990-01-01
Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.
International exploration by independents
Bertagne, R.G. )
1991-03-01
Recent industry trends indicate that the smaller US independents are looking at foreign exploration opportunities as one of the alternatives for growth in the new age of exploration. It is usually accepted that foreign finding costs per barrel are substantially lower than domestic because of the large reserve potential of international plays. To get involved overseas requires, however, an adaptation to different cultural, financial, legal, operational, and political conditions. Generally foreign exploration proceeds at a slower pace than domestic because concessions are granted by the government, or are explored in partnership with the national oil company. First, a mid- to long-term strategy, tailored to the goals and the financial capabilities of the company, must be prepared; it must be followed by an ongoing evaluation of quality prospects in various sedimentary basins, and a careful planning and conduct of the operations. To successfully explore overseas also requires the presence on the team of a minimum number of explorationists and engineers thoroughly familiar with the various exploratory and operational aspects of foreign work, having had a considerable amount of onsite experience in various geographical and climatic environments. Independents that are best suited for foreign expansion are those that have been financially successful domestically, and have a good discovery track record. When properly approached foreign exploration is well within the reach of smaller US independents and presents essentially no greater risk than domestic exploration; the reward, however, can be much larger and can catapult the company into the big leagues.
International exploration by independent
Bertragne, R.G.
1992-04-01
Recent industry trends indicate that the smaller U.S. independents are looking at foreign exploration opportunities as one of the alternatives for growth in the new age of exploration. Foreign finding costs per barrel usually are accepted to be substantially lower than domestic costs because of the large reserve potential of international plays. To get involved in overseas exploration, however, requires the explorationist to adapt to different cultural, financial, legal, operational, and political conditions. Generally, foreign exploration proceeds at a slower pace than domestic exploration because concessions are granted by a country's government, or are explored in partnership with a national oil company. First, the explorationist must prepare a mid- to long-term strategy, tailored to the goals and the financial capabilities of the company; next, is an ongoing evaluation of quality prospects in various sedimentary basins, and careful planning and conduct of the operations. To successfully explore overseas also requires the presence of a minimum number of explorationists and engineers thoroughly familiar with the various exploratory and operational aspects of foreign work. Ideally, these team members will have had a considerable amount of on-site experience in various countries and climates. Independents best suited for foreign expansion are those who have been financially successful in domestic exploration. When properly approached, foreign exploration is well within the reach of smaller U.S. independents, and presents essentially no greater risk than domestic exploration; however, the reward can be much larger and can catapult the company into the 'big leagues.'
Building hospital TQM teams: effective polarity analysis and maximization.
Hurst, J B
1996-09-01
Building and maintaining teams require careful attention to and maximization of such polar opposites (¿polarities¿) as individual and team, directive and participatory leadership, task and process, and stability and change. Analyzing systematic elements of any polarity and listing blocks, supports, and flexible ways to maximize it will prevent the negative consequences that occur when treating a polarity like a solvable problem. Flexible, well-timed shifts from pole to pole result in the maximization of upside and minimization of downside consequences.
Opportunities to maximize value with integrated palliative care
Bergman, Jonathan; Laviana, Aaron A
2016-01-01
Palliative care involves aggressively addressing and treating psychosocial, spiritual, religious, and family concerns, as well as considering the overall psychosocial structures supporting a patient. The concept of integrated palliative care removes the either/or decision a patient needs to make: they need not decide if they want either aggressive chemotherapy from their oncologist or symptom-guided palliative care but rather they can be comanaged by several clinicians, including a palliative care clinician, to maximize the benefit to them. One common misconception about palliative care, and supportive care in general, is that it amounts to “doing nothing” or “giving up” on aggressive treatments for patients. Rather, palliative care involves very aggressive care, targeted at patient symptoms, quality-of-life, psychosocial needs, family needs, and others. Integrating palliative care into the care plan for individuals with advanced diseases does not necessarily imply that a patient must forego other treatment options, including those aimed at a cure, prolonging of life, or palliation. Implementing interventions to understand patient preferences and to ensure those preferences are addressed, including preferences related to palliative and supportive care, is vital in improving the patient-centeredness and value of surgical care. Given our aging population and the disproportionate cost of end-of-life care, this holds great hope in bending the cost curve of health care spending, ensuring patient-centeredness, and improving quality and value of care. Level 1 evidence supports this model, and it has been achieved in several settings; the next necessary step is to disseminate such models more broadly. PMID:27226721
Maximizing information exchange between complex networks
NASA Astrophysics Data System (ADS)
West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo
2008-10-01
Science is not merely the smooth progressive interaction of hypothesis, experiment and theory, although it sometimes has that form. More realistically the scientific study of any given complex phenomenon generates a number of explanations, from a variety of perspectives, that eventually requires synthesis to achieve a deep level of insight and understanding. One such synthesis has created the field of out-of-equilibrium statistical physics as applied to the understanding of complex dynamic networks. Over the past forty years the concept of complexity has undergone a metamorphosis. Complexity was originally seen as a consequence of memory in individual particle trajectories, in full agreement with a Hamiltonian picture of microscopic dynamics and, in principle, macroscopic dynamics could be derived from the microscopic Hamiltonian picture. The main difficulty in deriving macroscopic dynamics from microscopic dynamics is the need to take into account the actions of a very large number of components. The existence of events such as abrupt jumps, considered by the conventional continuous time random walk approach to describing complexity was never perceived as conflicting with the Hamiltonian view. Herein we review many of the reasons why this traditional Hamiltonian view of complexity is unsatisfactory. We show that as a result of technological advances, which make the observation of single elementary events possible, the definition of complexity has shifted from the conventional memory concept towards the action of non-Poisson renewal events. We show that the observation of crucial processes, such as the intermittent fluorescence of blinking quantum dots as well as the brain’s response to music, as monitored by a set of electrodes attached to the scalp, has forced investigators to go beyond the traditional concept of complexity and to establish closer contact with the nascent field of complex networks. Complex networks form one of the most challenging areas of
Rational maximizing by humans (Homo sapiens) in an ultimatum game.
Smith, Phillip; Silberberg, Alan
2010-07-01
In the human mini-ultimatum game, a proposer splits a sum of money with a responder. If the responder accepts, both are paid. If not, neither is paid. Typically, responders reject inequitable distributions, favoring punishing over maximizing. In Jensen et al.'s (Science 318:107-109, 2007) adaptation with apes, a proposer selects between two distributions of raisins. Despite inequitable offers, responders often accept, thereby maximizing. The rejection response differs between the human and ape versions of this game. For humans, rejection is instantaneous; for apes, it requires 1 min of inaction. We replicate Jensen et al.'s procedure in humans with money. When waiting 1 min to reject, humans favor punishing over maximizing; however, when rejection requires 5 min of inaction, humans, like apes, maximize. If species differences in time horizons are accommodated, Jensen et al.'s ape data are reproducible in humans.
Carnot cycle at finite power: attainability of maximal efficiency.
Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G
2013-08-01
We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at a finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows to construct engines that can extract at a finite power 40% of the maximally possible work reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at a large power.
Maximal slicing of D-dimensional spherically symmetric vacuum spacetime
Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru
2009-10-15
We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D{>=}5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.
Maximizing Your Investment in Building Automation System Technology.
ERIC Educational Resources Information Center
Darnell, Charles
2001-01-01
Discusses how organizational issues and system standardization can be important factors that determine an institution's ability to fully exploit contemporary building automation systems (BAS). Further presented is management strategy for maximizing BAS investments. (GR)
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
If every cycle of a graph is chordal length greater than three then it contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’more » parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.« less
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-01-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph. PMID:25767331
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
If every cycle of a graph is chordal length greater than three then it contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
Quantum state-independent contextuality requires 13 rays
NASA Astrophysics Data System (ADS)
Cabello, Adán; Kleinmann, Matthias; Portillo, José R.
2016-09-01
We show that, regardless of the dimension of the Hilbert space, there exists no set of rays revealing state-independent contextuality with less than 13 rays. This implies that the set proposed by Yu and Oh in dimension three (2012 Phys. Rev. Lett. 108 030402) is actually the minimal set in quantum theory. This contrasts with the case of Kochen–Specker sets, where the smallest set occurs in dimension four.
Evolution of Shanghai STOCK Market Based on Maximal Spanning Trees
NASA Astrophysics Data System (ADS)
Yang, Chunxia; Shen, Ying; Xia, Bingying
2013-01-01
In this paper, using a moving window to scan through every stock price time series over a period from 2 January 2001 to 11 March 2011 and mutual information to measure the statistical interdependence between stock prices, we construct a corresponding weighted network for 501 Shanghai stocks in every given window. Next, we extract its maximal spanning tree and understand the structure variation of Shanghai stock market by analyzing the average path length, the influence of the center node and the p-value for every maximal spanning tree. A further analysis of the structure properties of maximal spanning trees over different periods of Shanghai stock market is carried out. All the obtained results indicate that the periods around 8 August 2005, 17 October 2007 and 25 December 2008 are turning points of Shanghai stock market, at turning points, the topology structure of the maximal spanning tree changes obviously: the degree of separation between nodes increases; the structure becomes looser; the influence of the center node gets smaller, and the degree distribution of the maximal spanning tree is no longer a power-law distribution. Lastly, we give an analysis of the variations of the single-step and multi-step survival ratios for all maximal spanning trees and find that two stocks are closely bonded and hard to be broken in a short term, on the contrary, no pair of stocks remains closely bonded for a long time.
Cary Potter on Independent Education
ERIC Educational Resources Information Center
Potter, Cary
1978-01-01
Cary Potter was President of the National Association of Independent Schools from 1964-1978. As he leaves NAIS he gives his views on education, on independence, on the independent school, on public responsibility, on choice in a free society, on educational change, and on the need for collective action by independent schools. (Author/RK)
Myth or Truth: Independence Day.
ERIC Educational Resources Information Center
Gardner, Traci
Most Americans think of the Fourth of July as Independence Day, but is it really the day the U.S. declared and celebrated independence? By exploring myths and truths surrounding Independence Day, this lesson asks students to think critically about commonly believed stories regarding the beginning of the Revolutionary War and the Independence Day…
Martorelli, André; Bottaro, Martim; Vieira, Amilton; Rocha-Júnior, Valdinar; Cadore, Eduardo; Prestes, Jonato; Wagner, Dale; Martorelli, Saulo
2015-01-01
Studies investigating the effect of rest interval length (RI) between sets on neuromuscular performance and metabolic response during power training are scarce. Therefore, the purpose of this study was to compare maximal power output, muscular activity and blood lactate concentration following 1, 2 or 3 minutes RI between sets during a squat power training protocol. Twelve resistance-trained men (22.7 ± 3.2 years; 1.79 ± 0.08 cm; 81.8 ± 11.3 kg) performed 6 sets of 6 repetitions of squat exercise at 60% of their 1 repetition maximum. Peak and average power were obtained for each repetition and set using a linear position transducer. Muscular activity and blood lactate were measured pre and post-exercise session. There was no significant difference between RI on peak power and average power. However, peak power decreased 5.6%, 1.9%, and 5.9% after 6 sets using 1, 2 and 3 minutes of RI, respectively. Average power also decreased 10.5% (1 min), 2.6% (2 min), and 4.3% (3 min) after 6 sets. Blood lactate increased similarly during the three training sessions (1-min: 5.5 mMol, 2-min: 4.3 mMol, and 3-min: 4.0 mMol) and no significant changes were observed in the muscle activity after multiple sets, independent of RI length (pooled ES for 1-min: 0.47, 2-min: 0.65, and 3-min: 1.39). From a practical point of view, the results suggest that 1 to 2 minute of RI between sets during squat exercise may be sufficient to recover power output in a designed power training protocol. However, if training duration is malleable, we recommend 2 min of RI for optimal recovery and power output maintenance during the subsequent exercise sets. Key points This study demonstrates that 1 minute of RI between sets is sufficient to maintain maximal power output during multiple sets of a power-based exercise when it is composed of few repetitions and the sets are not performed until failure. Therefore, a short RI should be considered when designing training programs for the development of
Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)
NASA Technical Reports Server (NTRS)
Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan
2016-01-01
Maximal oxygen (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determineVO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set-up the equipment. For astronauts, accurate VO2pk measures are important to assess mission critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and logistical limitations of measuring VO2pk, particularly during spaceflight, highlights the need for improved estimation methods.
Douglas, Julie A.; Sandefur, Conner I.
2010-01-01
Summary In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a genetic study in the Old Order Amish of Lancaster County, Pennsylvania, a population isolate derived from a modest number of founders. Given one or more pedigrees, our program automatically and rapidly extracts a fixed number of maximally unrelated individuals. PMID:18321883
Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2peak)
NASA Technical Reports Server (NTRS)
Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan
2016-01-01
Maximal oxygen (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determineVO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set-up the equipment. For astronauts, accurate VO2pk measures are important to assess mission critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and logistical limitations of measuring VO2pk, particularly during spaceflight, highlights the need for improved estimation methods.
An efficient algorithm for maximizing range sum queries in a road network.
Phan, Tien-Khoi; Jung, HaRim; Kim, Ung-Mo
2014-01-01
Given a set of positive-weighted points and a query rectangle r (specified by a client) of given extents, the goal of a maximizing range sum (MaxRS) query is to find the optimal location of r such that the total weights of all the points covered by r are maximized. All existing methods for processing MaxRS queries assume the Euclidean distance metric. In many location-based applications, however, the motion of a client may be constrained by an underlying (spatial) road network; that is, the client cannot move freely in space. This paper addresses the problem of processing MaxRS queries in a road network. We propose the external-memory algorithm that is suited for a large road network database. In addition, in contrast to the existing methods, which retrieve only one optimal location, our proposed algorithm retrieves all the possible optimal locations. Through simulations, we evaluate the performance of the proposed algorithm.
Eleutherococcus senticosus (Rupr. & Maxim.) Maxim. (Araliaceae) as an adaptogen: a closer look.
Davydov, M; Krikorian, A D
2000-10-01
The adaptogen concept is examined from an historical, biological, chemical, pharmacological and medical perspective using a wide variety of primary and secondary literature. The definition of an adaptogen first proposed by Soviet scientists in the late 1950s, namely that an adaptogen is any substance that exerts effects on both sick and healthy individuals by 'correcting' any dysfunction(s) without producing unwanted side effects, was used as a point of departure. We attempted to identify critically what an adaptogen supposedly does and to determine whether the word embodies in and of itself any concept(s) acceptable to western conventional (allopathic) medicine. Special attention was paid to the reported pharmacological effects of the 'adaptogen-containing plant' Eleutherococcus senticosus (Rupr. & Maxim.) Maxim. (Araliaceae), referred to by some as 'Siberian ginseng', and to its secondary chemical composition. We conclude that so far as specific pharmacological activities are concerned there are a number of valid arguments for equating the action of so-called adaptogens with those of medicinal agents that have activities as anti-oxidants, and/or anti-cancerogenic, immunomodulatory and hypocholesteroletic as well as hypoglycemic and choleretic action. However, 'adaptogens' and 'anti-oxidants' etc. also show significant dissimilarities and these are discussed. Significantly, the classical definition of an adaptogen has much in common with views currently being invoked to describe and explain the 'placebo effect'. Nevertheless, the chemistry of the secondary compounds of Eleutherococcus isolated thus far and their pharmacological effects support our hypothesis that the reported beneficial effects of adaptogens derive from their capacity to exert protective and/or inhibitory action against free radicals. An inventory of the secondary substances contained in Eleutherococcus discloses a potential for a wide range of activities reported from work on cultured cell lines
Savage, Robert J; Best, Stuart A; Carstairs, Greg L; Ham, Daniel J
2012-07-01
Psychophysical assessments, such as the maximum acceptable lift, have been used to establish worker capability and set safe load limits for manual handling tasks in occupational settings. However, in military settings, in which task demand is set and capable workers must be selected, subjective measurements are inadequate, and maximal capacity testing must be used to assess lifting capability. The aim of this study was to establish and compare the relationship between maximal lifting capacity and a self-determined tolerable lifting limit, maximum acceptable lift, across a range of military-relevant lifting tasks. Seventy male soldiers (age 23.7 ± 6.1 years) from the Australian Army performed 7 strength-based lifting tasks to determine their maximum lifting capacity and maximum acceptable lift. Comparisons were performed to identify maximum acceptable lift relative to maximum lifting capacity for each individual task. Linear regression was used to identify the relationship across all tasks when the data were pooled. Strong correlations existed between all 7 lifting tasks (rrange = 0.87-0.96, p < 0.05). No differences were found in maximum acceptable lift relative to maximum lifting capacity across all tasks (p = 0.46). When data were pooled, maximum acceptable lift was equal to 84 ± 8% of the maximum lifting capacity. This study is the first to illustrate the strong and consistent relationship between maximum lifting capacity and maximum acceptable lift for multiple single lifting tasks. The relationship developed between these indices may be used to help assess self-selected manual handling capability through occupationally relevant maximal performance tests.
Smith, Des H.V.; Converse, Sarah J.; Gibson, Keith; Moehrenschlager, Axel; Link, William A.; Olsen, Glenn H.; Maguire, Kelly
2011-01-01
Captive breeding is key to management of severely endangered species, but maximizing captive production can be challenging because of poor knowledge of species breeding biology and the complexity of evaluating different management options. In the face of uncertainty and complexity, decision-analytic approaches can be used to identify optimal management options for maximizing captive production. Building decision-analytic models requires iterations of model conception, data analysis, model building and evaluation, identification of remaining uncertainty, further research and monitoring to reduce uncertainty, and integration of new data into the model. We initiated such a process to maximize captive production of the whooping crane (Grus americana), the world's most endangered crane, which is managed through captive breeding and reintroduction. We collected 15 years of captive breeding data from 3 institutions and used Bayesian analysis and model selection to identify predictors of whooping crane hatching success. The strongest predictor, and that with clear management relevance, was incubation environment. The incubation period of whooping crane eggs is split across two environments: crane nests and artificial incubators. Although artificial incubators are useful for allowing breeding pairs to produce multiple clutches, our results indicate that crane incubation is most effective at promoting hatching success. Hatching probability increased the longer an egg spent in a crane nest, from 40% hatching probability for eggs receiving 1 day of crane incubation to 95% for those receiving 30 days (time incubated in each environment varied independently of total incubation period). Because birds will lay fewer eggs when they are incubating longer, a tradeoff exists between the number of clutches produced and egg hatching probability. We developed a decision-analytic model that estimated 16 to be the optimal number of days of crane incubation needed to maximize the number of
Frame independent cosmological perturbations
Prokopec, Tomislav; Weenink, Jan E-mail: j.g.weenink@uu.nl
2013-09-01
We compute the third order gauge invariant action for scalar-graviton interactions in the Jordan frame. We demonstrate that the gauge invariant action for scalar and tensor perturbations on one physical hypersurface only differs from that on another physical hypersurface via terms proportional to the equation of motion and boundary terms, such that the evolution of non-Gaussianity may be called unique. Moreover, we demonstrate that the gauge invariant curvature perturbation and graviton on uniform field hypersurfaces in the Jordan frame are equal to their counterparts in the Einstein frame. These frame independent perturbations are therefore particularly useful in relating results in different frames at the perturbative level. On the other hand, the field perturbation and graviton on uniform curvature hypersurfaces in the Jordan and Einstein frame are non-linearly related, as are their corresponding actions and n-point functions.
Single- vs. Multiple-Set Strength Training in Women.
ERIC Educational Resources Information Center
Schlumberger, Andreas; Stec, Justyna; Schmidtbleicher, Dietmar
2001-01-01
Compared the effects of single- and multiple-set strength training in women with basic experience in resistance training. Both training groups had significant strength improvements in leg extension. In the seated bench press, only the three-set group showed a significant increase in maximal strength. There were higher strength gains overall in the…
Detecting novel associations in large data sets.
Reshef, David N; Reshef, Yakir A; Finucane, Hilary K; Grossman, Sharon R; McVean, Gilean; Turnbaugh, Peter J; Lander, Eric S; Mitzenmacher, Michael; Sabeti, Pardis C
2011-12-16
Identifying interesting relationships between pairs of variables in large data sets is increasingly important. Here, we present a measure of dependence for two-variable relationships: the maximal information coefficient (MIC). MIC captures a wide range of associations both functional and not, and for functional relationships provides a score that roughly equals the coefficient of determination (R(2)) of the data relative to the regression function. MIC belongs to a larger class of maximal information-based nonparametric exploration (MINE) statistics for identifying and classifying relationships. We apply MIC and MINE to data sets in global health, gene expression, major-league baseball, and the human gut microbiota and identify known and novel relationships. PMID:22174245
Detecting novel associations in large data sets.
Reshef, David N; Reshef, Yakir A; Finucane, Hilary K; Grossman, Sharon R; McVean, Gilean; Turnbaugh, Peter J; Lander, Eric S; Mitzenmacher, Michael; Sabeti, Pardis C
2011-12-16
Identifying interesting relationships between pairs of variables in large data sets is increasingly important. Here, we present a measure of dependence for two-variable relationships: the maximal information coefficient (MIC). MIC captures a wide range of associations both functional and not, and for functional relationships provides a score that roughly equals the coefficient of determination (R(2)) of the data relative to the regression function. MIC belongs to a larger class of maximal information-based nonparametric exploration (MINE) statistics for identifying and classifying relationships. We apply MIC and MINE to data sets in global health, gene expression, major-league baseball, and the human gut microbiota and identify known and novel relationships.
Molecular maximizing characterizes choice on Vaughan's (1981) procedure.
Silberberg, A; Ziriax, J M
1985-01-01
Pigeons keypecked on a two-key procedure in which their choice ratios during one time period determined the reinforcement rates assigned to each key during the next period (Vaughan, 1981). During each of four phases, which differed in the reinforcement rates they provided for different choice ratios, the duration of these periods was four minutes, duplicating one condition from Vaughan's study. During the other four phases, these periods lasted six seconds. When these periods were long, the results were similar to Vaughan's and appeared compatible with melioration theory. But when these periods were short, the data were consistent with molecular maximizing (see Silberberg & Ziriax, 1982) and were incompatible with melioration, molar maximizing, and matching. In a simulation, stat birds following a molecular-maximizing algorithm responded on the short- and long-period conditions of this experiment. When the time periods lasted four minutes, the results were similar to Vaughan's and to the results of the four-minute conditions of this study; when the time periods lasted six seconds, the choice data were similar to the data from real subjects for the six-second conditions. Thus, a molecular-maximizing response rule generated choice data comparable to those from the short- and long-period conditions of this experiment. These data show that, among extant accounts, choice on the Vaughan procedure is most compatible with molecular maximizing.
Ventilatory patterns differ between maximal running and cycling.
Tanner, David A; Duke, Joseph W; Stager, Joel M
2014-01-15
To determine the effect of exercise mode on ventilatory patterns, 22 trained men performed two maximal graded exercise tests; one running on a treadmill and one cycling on an ergometer. Tidal flow-volume (FV) loops were recorded during each minute of exercise with maximal loops measured pre and post exercise. Running resulted in a greater VO2peak than cycling (62.7±7.6 vs. 58.1±7.2mLkg(-1)min(-1)). Although maximal ventilation (VE) did not differ between modes, ventilatory equivalents for O2 and CO2 were significantly larger during maximal cycling. Arterial oxygen saturation (estimated via ear oximeter) was also greater during maximal cycling, as were end-expiratory (EELV; 3.40±0.54 vs. 3.21±0.55L) and end-inspiratory lung volumes, (EILV; 6.24±0.88 vs. 5.90±0.74L). Based on these results we conclude that ventilatory patterns differ as a function of exercise mode and these observed differences are likely due to the differences in posture adopted during exercise in these modes.
Do obese children perceive submaximal and maximal exertion differently?
Belanger, Kevin; Breithaupt, Peter; Ferraro, Zachary M; Barrowman, Nick; Rutherford, Jane; Hadjiyannakis, Stasia; Colley, Rachel C; Adamo, Kristi B
2013-01-01
We examined how obese children perceive a maximal cardiorespiratory fitness test compared with a submaximal cardiorespiratory fitness test. Twenty-one obese children (body mass index ≥95th percentile, ages 10-17 years) completed maximal and submaximal cardiorespiratory fitness tests on 2 separate occasions. Oxygen consumption (VO2) and overall perceived exertion (Borg 15-category scale) were measured in both fitness tests. At comparable workloads, perceived exertion was rated significantly higher (P < 0.001) in the submaximal cardiorespiratory fitness test compared with the maximal cardiorespiratory fitness test. The submaximal cardiorespiratory fitness test was significantly longer than the maximal test (14:21 ± 04:04 seconds vs. 12:48 ± 03:27 seconds, P < 0.001). Our data indicate that at the same relative intensity, obese children report comparable or even higher perceived exertion during submaximal fitness testing than during maximal fitness testing. Perceived exertion in a sample of children and youth with obesity may be influenced by test duration and protocol design.
Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision
A Conceptual Examination of Setting Events
ERIC Educational Resources Information Center
Carter, Mark; Driscoll, Coralie
2007-01-01
Setting events are typically seen as antecedent contextual variables that influence behaviour. They are thought to act independently of Skinner's three-term contingency, which consists of a discriminative stimulus, response, and reinforcing consequence. There has been increasing interest in setting events in education from both a theoretical and…
Force Irregularity Following Maximal Effort: The After-Peak Reduction.
Doucet, Barbara M; Mettler, Joni A; Griffin, Lisa; Spirduso, Waneen
2016-08-01
Irregularities in force output are present throughout human movement and can impair task performance. We investigated the presence of a large force discontinuity (after-peak reduction, APR) that appeared immediately following peak in maximal effort ramp contractions performed with the thumb adductor and ankle dorsiflexor muscles in 25 young adult participants (76% males, 24% females; M age 24.4 years, SD = 7.1). The after-peak reduction displayed similar parameters in both muscle groups with comparable drops in force during the after-peak reduction minima (thumb adductor: 27.5 ± 7.5% maximal voluntary contraction; ankle dorsiflexor: 25.8 ± 6.2% maximal voluntary contraction). A trend for the presence of fewer after-peak reductions with successive ramp trials was observed, suggesting a learning effect. Further investigation should explore underlying neural mechanisms contributing to the after-peak reduction. PMID:27502241
Matching and maximizing with concurrent ratio-interval schedules.
Green, L; Rachlin, H; Hanson, J
1983-11-01
Animals exposed to standard concurrent variable-ratio variable-interval schedules could maximize overall reinforcement rate if, in responding, they showed a strong response bias toward the variable-ratio schedule. Tests with the standard schedules have failed to find such a bias and have been widely cited as evidence against maximization as an explanation of animal choice behavior. However, those experiments were confounded in that the value of leisure (behavior other than the instrumental response) partially offsets the value of reinforcement. The present experiment provides another such test using a concurrent procedure in which the confounding effects of leisure were mostly eliminated while the critical aspects of the concurrent variable-ratio variable-interval contingency were maintained: Responding in one component advanced only its ratio schedule while responding in the other component advanced both ratio schedules. The bias toward the latter component predicted by maximization theory was found.
Control of communication networks: welfare maximization and multipath transfers.
Key, Peter B; Massoulié, Laurent
2008-06-13
We discuss control strategies for communication networks such as the Internet. We advocate the goal of welfare maximization as a paradigm for network resource allocation. We explore the application of this paradigm to the case of parallel network paths. We show that welfare maximization requires active balancing across paths by data sources, and potentially requires implementation of novel transport protocols. However, the only requirement from the underlying 'network layer' is to expose the marginal congestion cost of network paths to the 'transport layer'. We further illustrate the versatility of the corresponding layered architecture by describing transport protocols with the following properties: they welfare maximization, each communication may use an arbitrary collection of paths, where paths may be from an overlay, and paths may be combined in series and parallel. We conclude by commenting on incentives, pricing and open problems. PMID:18325871
Multiqubit symmetric states with maximally mixed one-qubit reductions
NASA Astrophysics Data System (ADS)
Baguette, D.; Bastin, T.; Martin, J.
2014-09-01
We present a comprehensive study of maximally entangled symmetric states of arbitrary numbers of qubits in the sense of the maximal mixedness of the one-qubit reduced density operator. A general criterion is provided to easily identify whether given symmetric states are maximally entangled in that respect or not. We show that these maximally entangled symmetric (MES) states are the only symmetric states for which the expectation value of the associated collective spin of the system vanishes, as well as in corollary the dipole moment of the Husimi function. We establish the link between this kind of maximal entanglement, the anticoherence properties of spin states, and the degree of polarization of light fields. We analyze the relationship between the MES states and the classes of states equivalent through stochastic local operations with classical communication (SLOCC). We provide a nonexistence criterion of MES states within SLOCC classes of qubit states and show in particular that the symmetric Dicke state SLOCC classes never contain such MES states, with the only exception of the balanced Dicke state class for even numbers of qubits. The 4-qubit system is analyzed exhaustively and all MES states of this system are identified and characterized. Finally the entanglement content of MES states is analyzed with respect to the geometric and barycentric measures of entanglement, as well as to the generalized N-tangle. We show that the geometric entanglement of MES states is ensured to be larger than or equal to 1/2, but also that MES states are not in general the symmetric states that maximize the investigated entanglement measures.
Projection of two biphoton qutrits onto a maximally entangled state.
Halevy, A; Megidish, E; Shacham, T; Dovrat, L; Eisenberg, H S
2011-04-01
Bell state measurements, in which two quantum bits are projected onto a maximally entangled state, are an essential component of quantum information science. We propose and experimentally demonstrate the projection of two quantum systems with three states (qutrits) onto a generalized maximally entangled state. Each qutrit is represented by the polarization of a pair of indistinguishable photons-a biphoton. The projection is a joint measurement on both biphotons using standard linear optics elements. This demonstration enables the realization of quantum information protocols with qutrits, such as teleportation and entanglement swapping. PMID:21517363
Maximal expiratory flow volume curve in quarry workers.
Subhashini, Arcot Sadagopa; Satchidhanandam, Natesa
2002-01-01
Maximal Expiratory Flow Volume (MEFV) curves were recorded with a computerized Spirometer (Med Spiror). Forced Vital Capacity (FVC), Forced Expiratory Volumes (FEV), mean and maximal flow rates were obtained in 25 quarry workers who were free from respiratory disorders and 20 healthy control subjects. All the functional values are lower in quarry workers than in the control subject, the largest reduction in quarry workers with a work duration of over 15 years, especially for FEF75. The effects are probably due to smoking rather than dust exposure.
Efficient maximal repeat finding using the burrows-wheeler transform and wavelet tree.
Külekci, M Oğuzhan; Vitter, Jeffrey Scott; Xu, Bojian
2012-01-01
Finding repetitive structures in genomes and proteins is important to understand their biological functions. Many data compressors for modern genomic sequences rely heavily on finding repeats in the sequences. The notion of maximal repeats captures all the repeats in the data in a space-efficient way. Prior work on maximal repeat finding used either a suffix tree or a suffix array along with other auxiliary data structures. Their space usage is 19--50 times the text size with the best engineering efforts, prohibiting their usability on massive data. Our technique uses the Burrows-Wheeler Transform and wavelet trees. For data sets consisting of natural language texts, the space usage of our method is no more than three times the text size. For genomic sequences stored using one byte per base, the space usage is less than double the sequence size. Our method is also orders of magnitude faster than the prior methods for processing massive texts, since the prior methods must use external memory. For the first time, our method enables a desktop computer with 8GB internal memory to find all the maximal repeats in the whole human genome in less than 17 hours. We have implemented our method as general-purpose open-source software for public use.
Studying the Independent School Library
ERIC Educational Resources Information Center
Cahoy, Ellysa Stern; Williamson, Susan G.
2008-01-01
In 2005, the American Association of School Librarians' Independent Schools Section conducted a national survey of independent school libraries. This article analyzes the results of the survey, reporting specialized data and information regarding independent school library budgets, collections, services, facilities, and staffing. Additionally, the…
NASA Astrophysics Data System (ADS)
Liu, GaiYun; Chao, Daniel Yuh
2015-08-01
To date, research on the supervisor design for flexible manufacturing systems focuses on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computation burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of minimal set of FBMs to efficiently solve integer linear programming problems while maintaining maximal permissiveness using a vector-covering approach. This paper improves the previous work and achieves the simplest structure with the minimal number of monitors.
UpSet: Visualization of Intersecting Sets
Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter
2016-01-01
Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
Michel, Christian J
2015-09-01
In 1996, a set X of 20 trinucleotides is identified in genes of both prokaryotes and eukaryotes which has in average the highest occurrence in reading frame compared to the two shifted frames (Arquès and Michel, 1996). Furthermore, this set X has an interesting mathematical property as X is a maximal C(3) self-complementary trinucleotide circular code (Arquès and Michel, 1996). In 2014, the number of trinucleotides in prokaryotic genes has been multiplied by a factor of 527. Furthermore, two new gene kingdoms of plasmids and viruses contain enough trinucleotide data to be analysed. The approach used in 1996 for identifying a preferential frame for a trinucleotide is quantified here with a new definition analysing the occurrence probability of a complementary/permutation (CP) trinucleotide set in a gene kingdom. Furthermore, in order to increase the statistical significance of results compared to those of 1996, the circular code X is studied on several gene taxonomic groups in a kingdom. Based on this new statistical approach, the circular code X is strengthened in genes of prokaryotes and eukaryotes, and now also identified in genes of plasmids. A subset of X with 18 or 16 trinucleotides is identified in genes of viruses. Furthermore, a simple probabilistic model based on the independent occurrence of trinucleotides in reading frame of genes explains the circular code frequencies and asymmetries observed in the shifted frames in all studied gene kingdoms. Finally, the developed approach allows to identify variant X codes in genes, i.e. trinucleotide codes which differ from X. In genes of bacteria, eukaryotes and plasmids, 14 among the 47 studied gene taxonomic groups (about 30%) have variant X codes. Seven variant X codes are identified with at least 16 trinucleotides of X. Two variant X codes XA in cyanobacteria and plasmids of cyanobacteria, and XD in birds are self-complementary, without permuted trinucleotides but non-circular. Five variant X codes XB in
Are acute effects of maximal dynamic contractions on upper-body ballistic performance load specific?
Markovic, Goran; Simek, Sanja; Bradic, Asim
2008-11-01
This study investigated the acute effects of upper-body maximal dynamic contractions on maximal throwing speed with 0.55- and 4-kg medicine balls. It was hypothesized that heavy preloading would transiently improve throwing performance only when overcoming the heavier of the two loads. Twenty-three male volunteers were randomly allocated into experimental (n = 11) and control (n = 12) groups. Both groups performed initial and final seated medicine ball throws from the chest, and the maximal medicine ball speed was measured by means of a radar gun. Between the two measurements, the control group rested passively for 15 minutes, and the experimental group performed three sets of three-repetition maximum bench presses. For the 0.55-kg load, a 2 x 2 repeated-measures analysis of variance revealed no significant effect of time x group interaction (p = 0.22), as well as no significant time (p = 0.22) or group (p = 0.72) effects. In contrast, for the 4-kg load, a significant time x group interaction (p = 0.004) and a significant time (p = 0.035) but not group (p = 0.77) effect were observed. Analysis of simple main effects revealed that the experimental group significantly (8.3%; p < 0.01) improved maximal throwing speed with the 4-kg load. These results support our research hypothesis and suggest that the acute effects of heavy preloading on upper-body ballistic performance might be load specific. In a practical sense, our findings suggest that the use of upper-body heavy resistance exercise before ballistic throwing movements against moderate external loads might be an efficient training strategy for improving an athlete's upper-body explosive performance.
Peak oxygen uptake and maximal power output of Olympic wheelchair-dependent athletes.
Veeger, H E; Hadj Yahmed, M; van der Woude, L H; Charpentier, P
1991-10-01
To extend the existing data base on the cardiovascular capacity of wheelchair-dependent athletes, a maximum wheelchair exercise test was conducted by 48 athletes (8 females and 40 males) on a motor driven treadmill. Athletes were selected on availability from the representatives of eight different disciplines. For 36 subjects maximal external power was calculated on the basis of a separate drag test. Maximal oxygen uptake (VO2max) for the male population was 2.23 l.min-1 (32.9 ml.kg-1.min-1). Subjects were divided into functional categories according to the International Stoke Mandeville Classification, with one nonambulatory, nonparaplegic group classified as "LA." The LA group displayed the highest values while the class IC tetraplegic showed the lowest performance level. Classified over sports disciplines, male track and field representatives showed the highest VO2max (2.86 l.min-1, 44.9 ml.kg-1.min-1) and target shooting athletes the lowest (1.32 l.min-1, 16.3 ml.kg-2.min-1). Maximal power output was on average 81.1 W for the male population and varied from 65.8 W for class II athletes to 92.2 W for class LA. Between sports values ranged from 96.8 W for basketball players to 48.2 W for the archery representative. These data are useful for setting standards for maximally attainable performance levels in relation to sport, functional classification, or sex and underline the capability of the wheelchair-dependent to improve cardiovascular fitness.
Glucose transporters and maximal transport are increased in endurance-trained rat soleus
NASA Technical Reports Server (NTRS)
Slentz, C. A.; Gulve, E. A.; Rodnick, K. J.; Henriksen, E. J.; Youn, J. H.; Holloszy, J. O.
1992-01-01
Voluntary wheel running induces an increase in the concentration of the regulatable glucose transporter (GLUT4) in rat plantaris muscle but not in soleus muscle (K. J. Rodnick, J. O. Holloszy, C. E. Mondon, and D. E. James. Diabetes 39: 1425-1429, 1990). Wheel running also causes hypertrophy of the soleus in rats. This study was undertaken to ascertain whether endurance training that induces enzymatic adaptations but no hypertrophy results in an increase in the concentration of GLUT4 protein in rat soleus (slow-twitch red) muscle and, if it does, to determine whether there is a concomitant increase in maximal glucose transport activity. Female rats were trained by treadmill running at 25 m/min up a 15% grade, 90 min/day, 6 days/wk for 3 wk. This training program induced increases of 52% in citrate synthase activity, 66% in hexokinase activity, and 47% in immunoreactive GLUT4 protein concentration in soleus muscles without causing hypertrophy. Glucose transport activity stimulated maximally with insulin plus contractile activity was increased to roughly the same extent (44%) as GLUT4 protein content in soleus muscle by the treadmill exercise training. In a second set of experiments, we examined whether a swim-training program increases glucose transport activity in the soleus in the presence of a maximally effective concentration of insulin. The swimming program induced a 44% increase in immunoreactive GLUT4 protein concentration. Glucose transport activity maximally stimulated with insulin was 62% greater in soleus muscle of the swimmers than in untrained controls. Training did not alter the basal rate of 2-deoxyglucose uptake.(ABSTRACT TRUNCATED AT 250 WORDS).
"Independence" and the nonprofit board: a general counsel's guide.
Peregrine, Michael W; Broccolo, Bernadette M
2006-01-01
In the wake of the Sarbanes-Oxley Act regulations that govern the public company sector, standards are emerging to assure that nonprofit corporate boards are maintaining appropriate levels of independence. This Article provides a summation of the current trends in the development of independence standards for nonprofit corporate governance, from both tax and corporate law perspectives. The authors consider independence standards for nonprofit boards of governance and discuss the evolution of independence standards as they relate to the duty of good faith, and the distinction between independence and conflicts of interest. The authors also seek to examine the evolution of current federal regulations and study state models that have been successfully implemented to insure the independence of nonprofit corporations. Finally, the authors propose a set of core guidelines to be considered when addressing board and committee independence issues. PMID:17402658
Maximizing Thermal Efficiency and Optimizing Energy Management (Fact Sheet)
Not Available
2012-03-01
Researchers at the Thermal Test Facility (TTF) on the campus of the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in Golden, Colorado, are addressing maximizing thermal efficiency and optimizing energy management through analysis of efficient heating, ventilating, and air conditioning (HVAC) strategies, automated home energy management (AHEM), and energy storage systems.
Dynamical generation of maximally entangled states in two identical cavities
Alexanian, Moorad
2011-11-15
The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.
Evidence for surprise minimization over value maximization in choice behavior.
Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents' to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus 'keep their options open'. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
Neural network approach for solving the maximal common subgraph problem.
Shoukry, A; Aboutabl, M
1996-01-01
A new formulation of the maximal common subgraph problem (MCSP), that is implemented using a two-stage Hopfield neural network, is given. Relative merits of this proposed formulation, with respect to current neural network-based solutions as well as classical sequential-search-based solutions, are discussed.
Optimal technique for maximal forward rotating vaults in men's gymnastics.
Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R
2015-08-01
In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum lead to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible.
Emotional Control and Instructional Effectiveness: Maximizing a Timeout
ERIC Educational Resources Information Center
Andrews, Staci R.
2015-01-01
This article provides recommendations for best practices for basketball coaches to maximize the instructional effectiveness of a timeout during competition. Practical applications are derived from research findings linking emotional intelligence to effective coaching behaviors. Additionally, recommendations are based on the implications of the…
Children Use Categories to Maximize Accuracy in Estimation
ERIC Educational Resources Information Center
Duffy, Sean; Huttenlocher, Janellen; Crawford, L. Elizabeth
2006-01-01
The present study tests a model of category effects upon stimulus estimation in children. Prior work with adults suggests that people inductively generalize distributional information about a category of stimuli and use this information to adjust their estimates of individual stimuli in a way that maximizes average accuracy in estimation (see…
Maximally entangled mixed-state generation via local operations
Aiello, A.; Puentes, G.; Voigt, D.; Woerdman, J. P.
2007-06-15
We present a general theoretical method to generate maximally entangled mixed states of a pair of photons initially prepared in the singlet polarization state. This method requires only local operations upon a single photon of the pair and exploits spatial degrees of freedom to induce decoherence. We report also experimental confirmation of these theoretical results.
On Adaptation, Maximization, and Reinforcement Learning among Cognitive Strategies
ERIC Educational Resources Information Center
Erev, Ido; Barron, Greg
2005-01-01
Analysis of binary choice behavior in iterated tasks with immediate feedback reveals robust deviations from maximization that can be described as indications of 3 effects: (a) a payoff variability effect, in which high payoff variability seems to move choice behavior toward random choice; (b) underweighting of rare events, in which alternatives…
Maximizing plant density affects broccoli yield and quality
Technology Transfer Automated Retrieval System (TEKTRAN)
Increased demand for fresh market bunch broccoli (Brassica oleracea L. var. italica) has led to increased production along the United States east coast. Maximizing broccoli yields is a primary concern for quickly expanding southeastern commercial markets. This broccoli plant density study was carr...
Maximizing grain sorghum water use efficiency under deficit irrigation
Technology Transfer Automated Retrieval System (TEKTRAN)
Development and evaluation of sustainable and efficient irrigation strategies is a priority for producers faced with water shortages resulting from aquifer depletion, reduced base flows, and reallocation of water to non-agricultural sectors. Under a limited water supply, yield maximization may not b...
Maximality and Idealized Cognitive Models: The Complementation of Spanish "Tener."
ERIC Educational Resources Information Center
Hilferty, Joseph; Valenzuela, Javier
2001-01-01
Discusses the bare-noun phrase (NP) complementation pattern of the Spanish verb "tener" (have). Shows that the maximality of the complement NP is dependent upon three factors: (1) idiosyncratic valence requirements; (2) encyclopedic knowledge related to possession; and (3) contextualized semantic construal. (Author/VWL)
Matching Pupils and Teachers to Maximize Expected Outcomes.
ERIC Educational Resources Information Center
Ward, Joe H., Jr.; And Others
To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…
Do Speakers and Listeners Observe the Gricean Maxim of Quantity?
ERIC Educational Resources Information Center
Engelhardt, Paul E.; Bailey, Karl G. D.; Ferreira, Fernanda
2006-01-01
The Gricean Maxim of Quantity is believed to govern linguistic performance. Speakers are assumed to provide as much information as required for referent identification and no more, and listeners are believed to expect unambiguous but concise descriptions. In three experiments we examined the extent to which naive participants are sensitive to the…
The Profit-Maximizing Firm: Old Wine in New Bottles.
ERIC Educational Resources Information Center
Felder, Joseph
1990-01-01
Explains and illustrates a simplified use of graphical analysis for analyzing the profit-maximizing firm. Believes that graphical analysis helps college students gain a deeper understanding of marginalism and an increased ability to formulate economic problems in marginalist terms. (DB)
Cognitive Somatic Behavioral Interventions for Maximizing Gymnastic Performance.
ERIC Educational Resources Information Center
Ravizza, Kenneth; Rotella, Robert
Psychological training programs developed and implemented for gymnasts of a wide range of age and varying ability levels are examined. The programs utilized strategies based on cognitive-behavioral intervention. The approach contends that mental training plays a crucial role in maximizing performance for most gymnasts. The object of the training…
Evidence for surprise minimization over value maximization in choice behavior.
Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-11-13
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents' to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus 'keep their options open'. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations.
Nursing Students' Awareness and Intentional Maximization of Their Learning Styles
ERIC Educational Resources Information Center
Mayfield, Linda Riggs
2012-01-01
This small, descriptive, pilot study addressed survey data from four levels of nursing students who had been taught to maximize their learning styles in a first-semester freshman success skills course. Bandura's Agency Theory supports the design. The hypothesis was that without reinforcing instruction, the students' recall and application of that…
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents’ to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
Optimal technique for maximal forward rotating vaults in men's gymnastics.
Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R
2015-08-01
In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum lead to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible. PMID:26026290
PROFIT-MAXIMIZING PRINCIPLES, INSTRUCTIONAL UNITS FOR VOCATIONAL AGRICULTURE.
ERIC Educational Resources Information Center
BARKER, RICHARD L.
THE PURPOSE OF THIS GUIDE IS TO ASSIST VOCATIONAL AGRICULTURE TEACHERS IN STIMULATING JUNIOR AND SENIOR HIGH SCHOOL STUDENT THINKING, UNDERSTANDING, AND DECISION MAKING AS ASSOCIATED WITH PROFIT-MAXIMIZING PRINCIPLES OF FARM OPERATION FOR USE IN FARM MANAGEMENT. IT WAS DEVELOPED UNDER A U.S. OFFICE OF EDUCATION GRANT BY TEACHER-EDUCATORS, A FARM…
Curriculum and Testing Strategies to Maximize Special Education STAAR Achievement
ERIC Educational Resources Information Center
Johnson, William L.; Johnson, Annabel M.; Johnson, Jared W.
2015-01-01
This document is from a presentation at the 2015 annual conference of the Science Teachers Association of Texas (STAT). The two sessions (each listed as feature sessions at the state conference) examined classroom strategies the presenter used in his chemistry classes to maximize Texas end-of-course chemistry test scores for his special population…
Density-metric unimodular gravity: Vacuum maximal symmetry
Abbassi, A.H.; Abbassi, A.M.
2011-05-15
We have investigated the vacuum maximally symmetric solutions of recently proposed density-metric unimodular gravity theory. The results are widely different from inflationary scenario. The exponential dependence on time in deSitter space is substituted by a power law. Open space-times with non-zero cosmological constant are excluded.
Teacher Praise: Maximizing the Motivational Impact. Teaching Strategies.
ERIC Educational Resources Information Center
McVey, Mary D.
2001-01-01
Recognizes the influence of praise on human behavior, and provides specific suggestions on how to maximize the positive effects of praise when intended as positive reinforcement. Examines contingency, specificity, and selectivity aspects of praise. Cautions teachers to avoid the controlling effects of praise and the possibility that praise may…
Modifying Softball for Maximizing Learning Outcomes in Physical Education
ERIC Educational Resources Information Center
Brian, Ali; Ward, Phillip; Goodway, Jacqueline D.; Sutherland, Sue
2014-01-01
Softball is taught in many physical education programs throughout the United States. This article describes modifications that maximize learning outcomes and that address the National Standards and safety recommendations. The modifications focus on tasks and equipment, developmentally appropriate motor-skill acquisition, increasing number of…
Maximizing Data Use: A Focus on the Completion Agenda
ERIC Educational Resources Information Center
Phillips, Brad C.; Horowitz, Jordan E.
2013-01-01
The completion agenda is in full force at the nation's community colleges. To maximize the impact colleges can have on improving completion, colleges must organize around using student progress and outcome data to monitor and track their efforts. Unfortunately, colleges are struggling to identify relevant data and to mobilize staff to review…
Maximization of total genetic variance in breed conservation programmes.
Cervantes, I; Meuwissen, T H E
2011-12-01
The preservation of the maximum genetic diversity in a population is one of the main objectives within a breed conservation programme. We applied the maximum variance total (MVT) method to a unique population in order to maximize the total genetic variance. The function maximization was performed by the annealing algorithm. We have selected the parents and the mating scheme at the same time simply maximizing the total genetic variance (a mate selection problem). The scenario was compared with a scenario of full-sib lines, a MVT scenario with a rate of inbreeding restriction, and with a minimum coancestry selection scenario. The MVT method produces sublines in a population attaining a similar scheme as the full-sib sublining that agrees with other authors that the maximum genetic diversity in a population (the lowest overall coancestry) is attained in the long term by subdividing it in as many isolated groups as possible. The application of a restriction on the rate of inbreeding jointly with the MVT method avoids the consequences of inbreeding depression and maintains the effective size at an acceptable minimum. The scenario of minimum coancestry selection gave higher effective size values, but a lower total genetic variance. A maximization of the total genetic variance ensures more genetic variation for extreme traits, which could be useful in case the population needs to adapt to a new environment/production system.
Maximizing the Online Learning Experience: Suggestions for Educators and Students
ERIC Educational Resources Information Center
Cicco, Gina
2011-01-01
This article will discuss ways of maximizing the online course experience for teachers- and counselors-in-training. The widespread popularity of online instruction makes it a necessary learning experience for future teachers and counselors (Ash, 2011). New teachers and counselors take on the responsibility of preparing their students for real-life…
Using Debate to Maximize Learning Potential: A Case Study
ERIC Educational Resources Information Center
Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda
2007-01-01
Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…
Fertilizer placement to maximize nitrogen use by fescue
Technology Transfer Automated Retrieval System (TEKTRAN)
The method of fertilizer nitrogen(N) application can affect N uptake in tall fescue and therefore its yield and quality. Subsurface-banding (knife) of fertilizer maximizes fescue N uptake in the poorly-drained clay–pan soils of southeastern Kansas. This study was conducted to determine if knifed N r...
Performance of device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhao, Qi; Ma, Xiongfeng
2016-07-01
Quantum key distribution provides information-theoretically-secure communication. In practice, device imperfections may jeopardise the system security. Device-independent quantum key distribution solves this problem by providing secure keys even when the quantum devices are untrusted and uncharacterized. Following a recent security proof of the device-independent quantum key distribution, we improve the key rate by tightening the parameter choice in the security proof. In practice where the system is lossy, we further improve the key rate by taking into account the loss position information. From our numerical simulation, our method can outperform existing results. Meanwhile, we outline clear experimental requirements for implementing device-independent quantum key distribution. The maximal tolerable error rate is 1.6%, the minimal required transmittance is 97.3%, and the minimal required visibility is 96.8 % .
Job Creation and Petroleum Independence with E85 in Texas
Walk, Steve
2014-08-08
Protec Fuel Management project objectives are to help design, build, provide, promote and supply biofuels for the greater energy independence, national security and domestic economic growth through job creations, infrastructure projects and supply chain business stimulants. Protec Fuel has teamed up with station owners to convert 5 existing retail fueling stations to include E85 fuel to service existing large number of fleet FFVs and general public FFVs. The stations are located in high flex fuel vehicle locations in the state of TX. Under the project name, “Job Creation and Petroleum Independence with E85 in Texas,” Protec Fuel identified and successfully opened stations strategically located to maximize e85 fueling success for fleets and public. Protec Fuel and industry affiliates and FFV manufacturers are excited about these stations and the opportunities as they will help reduce emissions, increase jobs, economic stimulus benefits, energy independence and petroleum displacement.
Learning to maximize reward rate: a model based on semi-Markov decision processes.
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R
2014-01-01
WHEN ANIMALS HAVE TO MAKE A NUMBER OF DECISIONS DURING A LIMITED TIME INTERVAL, THEY FACE A FUNDAMENTAL PROBLEM: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to more outcome but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of the several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the value of decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds then is cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model choses high values of decision threshold which lead to sub-optimal performance. With experience, however, the model learns to lower the value of decision thresholds till finally it finds the optimal values.
Learning to maximize reward rate: a model based on semi-Markov decision processes
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.
2014-01-01
When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to more outcome but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of the several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the value of decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds then is cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model choses high values of decision threshold which lead to sub-optimal performance. With experience, however, the model learns to lower the value of decision thresholds till finally it finds the optimal values. PMID:24904252
Weak values and weak coupling maximizing the output of weak measurements
Di Lorenzo, Antonio
2014-06-15
In a weak measurement, the average output 〈o〉 of a probe that measures an observable A{sup -hat} of a quantum system undergoing both a preparation in a state ρ{sub i} and a postselection in a state E{sub f} is, to a good approximation, a function of the weak value A{sub w}=Tr[E{sub f}A{sup -hat} ρ{sub i}]/Tr[E{sub f}ρ{sub i}], a complex number. For a fixed coupling λ, when the overlap Tr[E{sub f}ρ{sub i}] is very small, A{sub w} diverges, but 〈o〉 stays finite, often tending to zero for symmetry reasons. This paper answers the questions: what is the weak value that maximizes the output for a fixed coupling? What is the coupling that maximizes the output for a fixed weak value? We derive equations for the optimal values of A{sub w} and λ, and provide the solutions. The results are independent of the dimensionality of the system, and they apply to a probe having a Hilbert space of arbitrary dimension. Using the Schrödinger–Robertson uncertainty relation, we demonstrate that, in an important case, the amplification 〈o〉 cannot exceed the initial uncertainty σ{sub o} in the observable o{sup -hat}, we provide an upper limit for the more general case, and a strategy to obtain 〈o〉≫σ{sub o}. - Highlights: •We have provided a general framework to find the extremal values of a weak measurement. •We have derived the location of the extremal values in terms of preparation and postselection. •We have devised a maximization strategy going beyond the limit of the Schrödinger–Robertson relation.
Level Set Method for Positron Emission Tomography
Chan, Tony F.; Li, Hongwei; Lysaker, Marius; Tai, Xue-Cheng
2007-01-01
In positron emission tomography (PET), a radioactive compound is injected into the body to promote a tissue-dependent emission rate. Expectation maximization (EM) reconstruction algorithms are iterative techniques which estimate the concentration coefficients that provide the best fitted solution, for example, a maximum likelihood estimate. In this paper, we combine the EM algorithm with a level set approach. The level set method is used to capture the coarse scale information and the discontinuities of the concentration coefficients. An intrinsic advantage of the level set formulation is that anatomical information can be efficiently incorporated and used in an easy and natural way. We utilize a multiple level set formulation to represent the geometry of the objects in the scene. The proposed algorithm can be applied to any PET configuration, without major modifications. PMID:18354724
Yu, Chao; Sharma, Gaurav
2010-08-01
We explore camera scheduling and energy allocation strategies for lifetime optimization in image sensor networks. For the application scenarios that we consider, visual coverage over a monitored region is obtained by deploying wireless, battery-powered image sensors. Each sensor camera provides coverage over a part of the monitored region and a central processor coordinates the sensors in order to gather required visual data. For the purpose of maximizing the network operational lifetime, we consider two problems in this setting: a) camera scheduling, i.e., the selection, among available possibilities, of a set of cameras providing the desired coverage at each time instance, and b) energy allocation, i.e., the distribution of total available energy between the camera sensor nodes. We model the network lifetime as a stochastic random variable that depends upon the coverage geometry for the sensors and the distribution of data requests over the monitored region, two key characteristics that distinguish our problem from other wireless sensor network applications. By suitably abstracting this model of network lifetime and utilizing asymptotic analysis, we propose lifetime-maximizing camera scheduling and energy allocation strategies. The effectiveness of the proposed camera scheduling and energy allocation strategies is validated by simulations. PMID:20350857
The optimal number of lymph nodes removed in maximizing the survival of breast cancer patients
NASA Astrophysics Data System (ADS)
Peng, Lim Fong; Taib, Nur Aishah; Mohamed, Ibrahim; Daud, Noorizam
2014-07-01
The number of lymph nodes removed is one of the important predictors for survival in breast cancer study. Our aim is to determine the optimal number of lymph nodes to be removed for maximizing the survival of breast cancer patients. The study population consists of 873 patients with at least one of axillary nodes involved among 1890 patients from the University of Malaya Medical Center (UMMC) breast cancer registry. For this study, the Chi-square test of independence is performed to determine the significant association between prognostic factors and survival status, while Wilcoxon test is used to compare the estimates of the hazard functions of the two or more groups at each observed event time. Logistic regression analysis is then conducted to identify important predictors of survival. In particular, Akaike's Information Criterion (AIC) are calculated from the logistic regression model for all thresholds of node involved, as an alternative measure for the Wald statistic (χ2), in order to determine the optimal number of nodes that need to be removed to obtain the maximum differential in survival. The results from both measurements are compared. It is recommended that, for this particular group, the minimum of 10 nodes should be removed to maximize survival of breast cancer patients.
Correlated Protein Function Prediction via Maximization of Data-Knowledge Consistency.
Wang, Hua; Huang, Heng; Ding, Chris
2015-06-01
Conventional computational approaches for protein function prediction usually predict one function at a time, fundamentally. As a result, the protein functions are treated as separate target classes. However, biological processes are highly correlated in reality, which makes multiple functions assigned to a protein not independent. Therefore, it would be beneficial to make use of function category correlations when predicting protein functions. In this article, we propose a novel Maximization of Data-Knowledge Consistency (MDKC) approach to exploit function category correlations for protein function prediction. Our approach banks on the assumption that two proteins are likely to have large overlap in their annotated functions if they are highly similar according to certain experimental data. We first establish a new pairwise protein similarity using protein annotations from knowledge perspective. Then by maximizing the consistency between the established knowledge similarity upon annotations and the data similarity upon biological experiments, putative functions are assigned to unannotated proteins. Most importantly, function category correlations are gracefully incorporated into our learning objective through the knowledge similarity. Comprehensive experimental evaluations on the Saccharomyces cerevisiae species have demonstrated promising results that validate the performance of our methods.
Clustering performance comparison using K-means and expectation maximization algorithms
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-01-01
Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
Near maximal atmospheric neutrino mixing in neutrino mass models with two texture zeros
NASA Astrophysics Data System (ADS)
Dev, S.; Gautam, R. R.; Singh, Lal; Gupta, Manmohan
2014-07-01
The implications of a large value of the effective Majorana neutrino mass for a class of two texture zero neutrino mass matrices have been studied in the flavor basis. It is found that these textures predict a near maximal atmospheric neutrino mixing angle in the limit of a large effective Majorana neutrino mass. It is noted that this prediction is independent of the values of solar and reactor neutrino mixing angles. We present the symmetry realization of these textures using the discrete cyclic group Z3. It is found that the texture zeros realized in this work remain stable under renormalization group running of the neutrino mass matrix from the seesaw scale to the electroweak scale, at one-loop level.
The Effects of Vibration During Maximal Graded Cycling Exercise: A Pilot Study
Filingeri, Davide; Jemni, Monèm; Bianco, Antonino; Zeinstra, Edzard; Jimenez, Alfonso
2012-01-01
Whole Body Vibration training is studied and used in different areas, related to sport performance and rehabilitation. However, few studies have investigated the effects of Vibration (Vib) exposure on aerobic performance through the application of this concept to cycling exercise. A specifically designed vibrating cycloergometer, the powerBIKE™, was used to compare the effects of Vib cycling exercise and normal cycling on different physiological parameters during maximal graded exercise test. Twelve recreationally active male adults (25 ± 4.8 yrs; 181.33 ± 5.47 cm; 80.66 ± 11.91 kg) performed two maximal incremental cycling tests with and without Vib in a block-randomized order. The protocol consisted of a 4 min warm up at 70 rev·min-1 followed by incremental steps of 3 min each. Cycling cadence was increased at each step by 10 rev·min-1 until participants reached their volitional exhaustion. Respiratory gases (VO2, VCO2), Heart Rate, Blood Lactate and RPE were collected during the test. Paired t-tests and Cor-relation Coefficients were used for statistical analysis. A significantly greater (P<0.05) response in the VO2, HR, BLa and RPE was observed during the Vib trial compare to normal cycling. No significant differences were found in the maximal aerobic power (Vib 34.32 ± 9.70 ml·kg-1·min-1; no Vib 40.11 ± 9.49 ml·kg-1·min-1). Adding Vib to cycling exercise seems eliciting a quicker energetic demand during maximal exercise. However, mechanical limitations of the vibrating prototype could have affected the final outcomes. Future studies with more comparative setting are recommended to deeply appraise this concept. Key points There is strong evidence to suggest that acute indirect vibrations act on muscle to enhance force, power, flexibility, balance and proprioception. There is a lack of knowledge regarding the effects of applying Vib to dynamic aerobic exercise. Added vibrations to cycling exercise seem producing a quicker energetic demand during
Momentary maximizing in concurrent schedules with a minimum interchangeover interval.
Todorov, J C; Souza, D G; Bori, C M
1993-09-01
Eight pigeons were trained on concurrent variable-interval variable-interval schedules with a minimum interchangeover time programmed as a consequence of changeovers. In Experiment 1 the reinforcement schedules remained constant while the minimum interchangeover time varied from 0 to 200 s. Relative response rates and relative time deviated from relative reinforcement rates toward indifference with long minimum interchangeover times. In Experiment 2 different reinforcement ratios were scheduled in successive experimental conditions with the minimum interchangeover time constant at 0, 2, 10, or 120 s. The exponent of the generalized matching equation was close to 1.0 when the minimum interchangeover time was 0 s (the typical procedure for concurrent schedules without a changeover delay) and decreased as that duration was increased. The data support the momentary maximizing theory and contradict molar maximizing theories and the melioration theory.
Maximizing production rates of the Linde Hampson machine
NASA Astrophysics Data System (ADS)
Maytal, B.-Z.
2006-01-01
In contrast to the ideal case of unlimited size recuperator, any real Linde-Hampson machine of finite size recuperator can be optimized to reach the extreme rates of performance. The group of cryocoolers sharing the same size recuperator is optimized in a closed form by determining the corresponding flow rate which maximizes its rate of cold production. For a similar group of liquefiers an optimal flow rate is derived to maximize the rate of production of liquid cryogen. The group of cryocoolers sharing a constant and given flow rate is optimized by shortening the recuperator for reaching a maximum compactness measured by the cooling power per unit size of the recuperator. The optimum conditions are developed for nitrogen and argon. The relevance of this analysis is discussed in the context of practice of fast cooldown Joule-Thomson cryocooling.
Safety factor maximization for trusses subjected to fatigue stresses
NASA Astrophysics Data System (ADS)
Hedaya, Mohammed Mohammed; Moneeb Elsabbagh, Adel; Hussein, Ahmed Mohamed
2015-08-01
This article presents a mathematical model for sizing optimization of undamped trusses subjected to dynamic loading leading to fatigue. The combined effect of static and dynamic loading, at steady state, is considered. An optimization model, whose objective is the maximization of the safety factor of these trusses, is developed. A new quantity (equivalent fatigue strain energy) combining the effects of static and dynamic stresses is presented. This quantity is used as a global measure of the proximity of fatigue failure. Therefore, the equivalent fatigue strain energy is minimized, and this seems to give a good value for the maximal equivalent static stress. This assumption is verified through two simple examples. The method of moving asymptotes is used in the optimization of trusses. The applicability of the proposed approach is demonstrated through two numerical examples; a 10-bar truss with different loading cases and a helicopter tail subjected to dynamic loading.
Magellan Project: Evolving enhanced operations efficiency to maximize science value
NASA Technical Reports Server (NTRS)
Cheuvront, Allan R.; Neuman, James C.; Mckinney, J. Franklin
1994-01-01
Magellan has been one of NASA's most successful spacecraft, returning more science data than all planetary spacecraft combined. The Magellan Spacecraft Team (SCT) has maximized the science return with innovative operational techniques to overcome anomalies and to perform activities for which the spacecraft was not designed. Commanding the spacecraft was originally time consuming because the standard development process was envisioned as manual tasks. The Program understood that reducing mission operations costs were essential for an extended mission. Management created an environment which encouraged automation of routine tasks, allowing staff reduction while maximizing the science data returned. Data analysis and trending, command preparation, and command reviews are some of the tasks that were automated. The SCT has accommodated personnel reductions by improving operations efficiency while returning the maximum science data possible.
Controlled Dense Coding Using the Maximal Slice States
NASA Astrophysics Data System (ADS)
Liu, Jun; Mo, Zhi-wen; Sun, Shu-qin
2016-04-01
In this paper we investigate the controlled dense coding with the maximal slice states. Three schemes are presented. Our schemes employ the maximal slice states as quantum channel, which consists of the tripartite entangled state from the first party(Alice), the second party(Bob), the third party(Cliff). The supervisor(Cliff) can supervises and controls the channel between Alice and Bob via measurement. Through carrying out local von Neumann measurement, controlled-NOT operation and positive operator-valued measure(POVM), and introducing an auxiliary particle, we can obtain the success probability of dense coding. It is shown that the success probability of information transmitted from Alice to Bob is usually less than one. The average amount of information for each scheme is calculated in detail. These results offer deeper insight into quantum dense coding via quantum channels of partially entangled states.
Maximal radius of the aftershock zone in earthquake networks
NASA Astrophysics Data System (ADS)
Mezentsev, A. Yu.; Hayakawa, M.
2009-09-01
In this paper, several seismoactive regions were investigated (Japan, Southern California and two tectonically distinct Japanese subregions) and structural seismic constants were estimated for each region. Using the method for seismic clustering detection proposed by Baiesi and Paczuski [M. Baiesi, M. Paczuski, Phys. Rev. E 69 (2004) 066106; M. Baiesi, M. Paczuski, Nonlin. Proc. Geophys. (2005) 1607-7946], we obtained the equation of the aftershock zone (AZ). It was shown that the consideration of a finite velocity of seismic signal leads to the natural appearance of maximal possible radius of the AZ. We obtained the equation of maximal radius of the AZ as a function of the magnitude of the main event and estimated its values for each region.
Harnessing gauge fields for maximally entangled state generation
NASA Astrophysics Data System (ADS)
Reyes, S. A.; Morales-Molina, L.; Orszag, M.; Spehner, D.
2014-10-01
We study the generation of entanglement between two species of bosons living on a ring lattice, where each group of particles can be described by a d-dimensional Hilbert space (qudit). Gauge fields are exploited to create an entangled state between the pair of qudits. Maximally entangled eigenstates are found for well-defined values of the Aharonov-Bohm phase, which are zero-energy eigenstates of both the kinetic and interacting parts of the Bose-Hubbard Hamiltonian, making them quite exceptional and robust. We propose a protocol to reach the maximally entangled state (MES) by starting from an initially prepared ground state. Also, an indirect method to detect the MES by measuring the current of the particles is proposed.
Harnessing gauge fields for maximally entangled state generation
NASA Astrophysics Data System (ADS)
Reyes, Sebastian; Morales-Molina, Luis; Orszag, Miguel; Spehner, Dominique
2015-03-01
We study the generation of entanglement between two species of bosons living on a ring lattice, where each group of particles can be described by a d-dimensional Hilbert space (qudit). Gauge fields are exploited to create an entangled state between the pair of qudits. Maximally entangled eigenstates are found for well-defined values of the Aharonov-Bohm phase, which are zero-energy eigenstates of both the kinetic and interacting parts of the Bose-Hubbard Hamiltonian, making them quite exceptional. We propose a protocol to reach the maximally entangled state (MES) by starting from an initially prepared ground state. Also, an indirect method to detect the MES by measuring the current of the particles is proposed.
Mosaics of retinal cells that transmit maximal information
NASA Astrophysics Data System (ADS)
Sharpee, Tatyana
2008-03-01
In the nervous system, visual signals are encoded by retinal ganglion cells into sequences of discrete electrical pulses termed spikes. Response regions of different ganglion cells tile the visual field and are arranged on approximately hexagonal lattice. Here we consider the optimal arrangement of response regions that would collectively allow for maximal information transmitted about the location of a point light source. We find that maximal information can be transmitted when at most three neighboring regions overlap and the average radius of response field is ˜0.67 of the distance between response field centers. This finding was obtained with no adjustable parameters and agrees with experimental measurements of retinal mosaics [1, 2]. [1] D.M. Dacey and S. Brace, Visual Neuroscience 9:279-90 (1992). [2] S.H. Devries and D.A. Baylor, J Neurophysiol. 78:2048-60 (1997).
Maximal anaerobic power in national level Indian players.
Bhanot, J L; Sidhu, L S
1981-12-01
The comparative study of aerobic power in different sports was conducted on 99 National Senior as well as National Junior players specialised in hockey and football, field games; volleyball and basketball, court games. The National Seniors were 27 hockey and 16 volleyball players, whereas, 32 football and 24 basketball players were the National Juniors. The maximal anaerobic power of the players was determined from maximal vertical velocity and body weight by the methods of margaria. The football players have been found to be highest followed by hockey, volleyball and basketball players in vertical velocity. It is observed that field game players are higher than the court game players in vertical velocity and that volleyball players possess higher maximum anaerobic power than football, hockey and basketball players. PMID:7317726
[Chemical constituents from the roots of Angelica polymorpha Maxim].
Yang, Yu; Zhang, Yang; Ren, Feng-Xia; Yu, Neng-Jiang; Xu, Rui; Zhao, Yi-Min
2013-05-01
Angelica polymorpha Maxim. is a plant of the Angelica genus (Umbelliferae). The root and stem of this plant is a folk medicine known to have the actions of relieving rheumatism and cold and subsiding swelling and pains. To investigate the chemical constituents in the root of A. polymorpha Maxim., seven compounds were isolated from an 80% ethanol extract by column chromatography. Their structures were elucidated according to the spectroscopic analysis. Compound 1 is a new sesquiterpene, named as bisabolactone. Its absolute configuration was determined by 1D NOESY and CD analysis. The others were identified as 5-hydroxymethylfurfural (2), hycandinic acid ester 1 (3), ferulic acid (4), isooxypeucedanin (5), noreugenin (6) and cimifugin (7). Compound 2 and 3 were isolated from this genus for the first time and compound 4 was isolated from this plant for the first time.
Maximal anaerobic power in national level Indian players.
Bhanot, J L; Sidhu, L S
1981-12-01
The comparative study of aerobic power in different sports was conducted on 99 National Senior as well as National Junior players specialised in hockey and football, field games; volleyball and basketball, court games. The National Seniors were 27 hockey and 16 volleyball players, whereas, 32 football and 24 basketball players were the National Juniors. The maximal anaerobic power of the players was determined from maximal vertical velocity and body weight by the methods of margaria. The football players have been found to be highest followed by hockey, volleyball and basketball players in vertical velocity. It is observed that field game players are higher than the court game players in vertical velocity and that volleyball players possess higher maximum anaerobic power than football, hockey and basketball players.
[Research advance in rare and endemic plant Tetraena mongolica Maxim].
Zhen, Jiang-Hong; Liu, Guo-Hou
2008-02-01
In this paper, the research advance in rare and endemic plant Tetraena mongolica Maxim. was summarized from the aspects of morphology, anatomy, palynology, cytology, seed-coat micro-morphology, embryology, physiology, biology, ecology, genetic diversity, chemical constituents, endangered causes, and conservation approaches, and the further research directions were prospected. It was considered that population viability, idioplasm conservation and artificial renewal, molecular biology of ecological adaptability, and assessment of habitat suitability should be the main aspects for the future study of T. mongolica.
Maximal anaerobic performance of the knee extensor muscles during growth.
Saavedra, C; Lagassé, P; Bouchard, C; Simoneau, J A
1991-09-01
The extent of the growth changes in maximal work output during 10 s (MWO10), 30 s (MWO30), and 90 s (MWO90) of maximal repetitive knee flexions and extensions assessed on a modified Hydra-Gym machine was investigated in 84 boys and 83 girls, 9-19 yr of age. Body weight, fat mass and fat free mass by underwater weighing, and thigh volume and cross-sectional area were also determined. No difference was observed in the absolute MWO10, MWO30, and MWO90 between girls and boys at 9 and 11 yr of age. However, significant differences appeared between genders from 13 yr of age onward, anaerobic performances of the knee extensor muscles of girls representing about 75% or even less of those of boys. The analysis of variance revealed that maximal work ouput during the three knee extension tests was significantly greater in males as well as in females from 9 to 18 yr, regardless how performance was related to morphological characteristics. Performance in absolute values or expressed per unit of body weight, fat free mass, and thigh cross-sectional area for the MWO10, MWO30, and MWO90 tests were almost always significantly lower in both genders when performances of the 9-yr-old group were compared with those of the 13-yr-old group or older groups. Improvement in maximal work output during the 10-s, 30-s, or 90-s knee extension tests with age occurred mainly between 9 and 15 yr in both genders. The results of the present study show that there are gender differences in predominantly anaerobic performances during growth and reveal that increase in muscle mass does not appear to be the only factor responsible for the age-related increment in the anaerobic working capacity of the knee extensor muscles.
Planning for partnerships: Maximizing surge capacity resources through service learning.
Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B
2015-01-01
Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts.
Networks maximizing the consensus time of voter models
NASA Astrophysics Data System (ADS)
Iwamasa, Yuni; Masuda, Naoki
2014-07-01
We explore the networks that yield the largest mean consensus time of voter models under different update rules. By analytical and numerical means, we show that the so-called lollipop graph, barbell graph, and double-star graph maximize the mean consensus time under the update rules called the link dynamics, voter model, and invasion process, respectively. For each update rule, the largest mean consensus time scales as O (N3), where N is the number of nodes in the network.
Maximizing the clinical utility of comparative effectiveness research.
Umscheid, C A
2010-12-01
Providers, consumers, payers, and policy makers are awash in choices when it comes to medical decision making and need better evidence to inform their decisions. Large federal investments in comparative effectiveness research (CER) aim to fill this need. But how do we ensure the clinical utility of CER? Here, I define comparative effectiveness and clinical utility, outline metrics to evaluate clinical utility, and suggest methods for maximizing the clinical utility of CER for the various stakeholders.
Planning for partnerships: Maximizing surge capacity resources through service learning.
Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B
2015-01-01
Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts. PMID:26750818
Gatterer, Hannes; Klarod, Kultida; Heinrich, Dieter; Schlemmer, Philipp; Dilitz, Stefan; Burtscher, Martin
2015-08-01
The purpose of this study was to investigate the effect of a maximal shuttle-run shock microcycle in hypoxia on repeated sprint ability (RSA, 6 × 40-m (6 × 20 m back and forth, 20" rest in between)), Yo-Yo-intermittent-recovery (YYIR) test performance, and redox-status. Fourteen soccer players (age: 23.9 ± 2.1 years), randomly assigned to hypoxia (∼ 3300 m) or normoxia training, performed 8 maximal shuttle-run training sessions within 12 days. YYIR test performance and RSA fatigue-slope improved independently of the hypoxia stimulus (p < 0.05). Training reduced the oxidative stress level (-7.9%, p < 0.05), and the reduction was associated with performance improvements (r = 0.761, ΔRSA; r = -0.575, ΔYYIR, p < 0.05). PMID:26212372
Hild, Kenneth E.; Attias, Hagai T.; Nagarajan, Srikantan S.
2009-01-01
In this paper, we develop a maximum-likelihood (ML) spatio-temporal blind source separation (BSS) algorithm, where the temporal dependencies are explained by assuming that each source is an autoregressive (AR) process and the distribution of the associated independent identically distributed (i.i.d.) inovations process is described using a mixture of Gaussians. Unlike most ML methods, the proposed algorithm takes into account both spatial and temporal information, optimization is performed using the expectation-maximization (EM) method, the source model is adapted to maximize the likelihood, and the update equations have a simple, analytical form. The proposed method, which we refer to as autoregressive mixture of Gaussians (AR-MOG), outperforms nine other methods for artificial mixtures of real audio. We also show results for using AR-MOG to extract the fetal cardiac signal from real magnetocardiographic (MCG) data. PMID:18334368
Formation Control of the MAXIM L2 Libration Orbit Mission
NASA Technical Reports Server (NTRS)
Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda
2004-01-01
The Micro-Arcsecond X-ray Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imager mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. Currently the mission architecture comprises 25 spacecraft, 24 as optics modules and one as the detector, which will form sparse sub-apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. To achieve these mission goals, the formation is required to cooperatively point at desired targets. Once pointed, the individual elements of the MAXIM formation must remain stable, maintaining their relative positions and attitudes below a critical threshold. These pointing and formation stability requirements impact the control and design of the formation. In this paper, we provide analysis of control efforts that are dependent upon the stability and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions to minimize the control efforts and we address continuous control via input feedback linearization (IFL). Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.
Reference values of maximal oxygen uptake for polish rowers.
Klusiewicz, Andrzej; Starczewski, Michał; Ładyga, Maria; Długołęcka, Barbara; Braksator, Wojciech; Mamcarz, Artur; Sitkowski, Dariusz
2014-12-01
The aim of this study was to characterize changes in maximal oxygen uptake over several years and to elaborate current reference values of this index based on determinations carried out in large and representative groups of top Polish rowers. For this study 81 female and 159 male rowers from the sub-junior to senior categories were recruited from the Polish National Team and its direct backup. All the subjects performed an incremental exercise test on a rowing ergometer. During the test maximal oxygen uptake was measured with the BxB method. The calculated reference values for elite Polish junior and U23 rowers allowed to evaluate the athletes' fitness level against the respective reference group and may aid the coach in controlling the training process. Mean values of VO2max achieved by members of the top Polish rowing crews who over the last five years competed in the Olympic Games or World Championships were also presented. The results of the research on the "trainability" of the maximal oxygen uptake may lead to a conclusion that the growth rate of the index is larger in case of high-level athletes and that the index (in absolute values) increases significantly between the age of 19-22 years (U23 category). PMID:25713672
Munch, G D W; Svendsen, J H; Damsgaard, R; Secher, N H; González-Alonso, J; Mortensen, S P
2014-01-01
In humans, maximal aerobic power () is associated with a plateau in cardiac output (), but the mechanisms regulating the interplay between maximal heart rate (HRmax) and stroke volume (SV) are unclear. To evaluate the effect of tachycardia and elevations in HRmax on cardiovascular function and capacity during maximal exercise in healthy humans, 12 young male cyclists performed incremental cycling and one-legged knee-extensor exercise (KEE) to exhaustion with and without right atrial pacing to increase HR. During control cycling, and leg blood flow increased up to 85% of maximal workload (WLmax) and remained unchanged until exhaustion. SV initially increased, plateaued and then decreased before exhaustion (P < 0.05) despite an increase in right atrial pressure (RAP) and a tendency (P = 0.056) for a reduction in left ventricular transmural filling pressure (LVFP). Atrial pacing increased HRmax from 184 ± 2 to 206 ± 3 beats min−1 (P < 0.05), but remained similar to the control condition at all intensities because of a lower SV and LVFP (P < 0.05). No differences in arterial pressure, peripheral haemodynamics, catecholamines or were observed, but pacing increased the rate pressure product and RAP (P < 0.05). Atrial pacing had a similar effect on haemodynamics during KEE, except that pacing decreased RAP. In conclusion, the human heart can be paced to a higher HR than observed during maximal exercise, suggesting that HRmax and myocardial work capacity do not limit in healthy individuals. A limited left ventricular filling and possibly altered contractility reduce SV during atrial pacing, whereas a plateau in LVFP appears to restrict close to . Key points During high intensity whole-body exercise, systemic and contracting skeletal muscle O2 delivery and uptake (>) are compromised, but the underlying mechanisms remain unclear. We evaluated the effect of a ∼20 beats min−1 increase in heart rate (HR) by right atrial pacing during incremental cycling and knee
Independent Learning Models: A Comparison.
ERIC Educational Resources Information Center
Wickett, R. E. Y.
Five models of independent learning are suitable for use in adult education programs. The common factor is a facilitator who works in some way with the student in the learning process. They display different characteristics, including the extent of independence in relation to content and/or process. Nondirective tutorial instruction and learning…
Swanson, David L.; Thomas, Nathan E.; Liknes, Eric T.; Cooper, Sheldon J.
2012-01-01
The underlying assumption of the aerobic capacity model for the evolution of endothermy is that basal (BMR) and maximal aerobic metabolic rates are phenotypically linked. However, because BMR is largely a function of central organs whereas maximal metabolic output is largely a function of skeletal muscles, the mechanistic underpinnings for their linkage are not obvious. Interspecific studies in birds generally support a phenotypic correlation between BMR and maximal metabolic output. If the aerobic capacity model is valid, these phenotypic correlations should also extend to intraspecific comparisons. We measured BMR, Msum (maximum thermoregulatory metabolic rate) and MMR (maximum exercise metabolic rate in a hop-flutter chamber) in winter for dark-eyed juncos (Junco hyemalis), American goldfinches (Carduelis tristis; Msum and MMR only), and black-capped chickadees (Poecile atricapillus; BMR and Msum only) and examined correlations among these variables. We also measured BMR and Msum in individual house sparrows (Passer domesticus) in both summer, winter and spring. For both raw metabolic rates and residuals from allometric regressions, BMR was not significantly correlated with either Msum or MMR in juncos. Moreover, no significant correlation between Msum and MMR or their mass-independent residuals occurred for juncos or goldfinches. Raw BMR and Msum were significantly positively correlated for black-capped chickadees and house sparrows, but mass-independent residuals of BMR and Msum were not. These data suggest that central organ and exercise organ metabolic levels are not inextricably linked and that muscular capacities for exercise and shivering do not necessarily vary in tandem in individual birds. Why intraspecific and interspecific avian studies show differing results and the significance of these differences to the aerobic capacity model are unknown, and resolution of these questions will require additional studies of potential mechanistic links between
Energetics of kayaking at submaximal and maximal speeds.
Zamparo, P; Capelli, C; Guerrini, G
1999-01-01
The energy cost of kayaking per unit distance (C(k), kJ x m(-1)) was assessed in eight middle- to high-class athletes (three males and five females; 45-76 kg body mass; 1.50-1.88 m height; 15-32 years of age) at submaximal and maximal speeds. At submaximal speeds, C(k) was measured by dividing the steady-state oxygen consumption (VO(2), l x s(-1)) by the speed (v, m x s(-1)), assuming an energy equivalent of 20.9 kJ x l O(-1)(2). At maximal speeds, C(k) was calculated from the ratio of the total metabolic energy expenditure (E, kJ) to the distance (d, m). E was assumed to be the sum of three terms, as originally proposed by Wilkie (1980): E = AnS + alphaVO(2max) x t-alphaVO(2max) x tau(1-e(-t x tau(-1))), were alpha is the energy equivalent of O(2) (20.9 kJ x l O(2)(-1)), tau is the time constant with which VO(2max) is attained at the onset of exercise at the muscular level, AnS is the amount of energy derived from anaerobic energy utilization, t is the performance time, and VO(2max) is the net maximal VO(2). Individual VO(2max) was obtained from the VO(2) measured during the last minute of the 1000-m or 2000-m maximal run. The average metabolic power output (E, kW) amounted to 141% and 102% of the individual maximal aerobic power (VO(2max)) from the shortest (250 m) to the longest (2000 m) distance, respectively. The average (SD) power provided by oxidative processes increased with the distance covered [from 0.64 (0.14) kW at 250 m to 1.02 (0.31) kW at 2000 m], whereas that provided by anaerobic sources showed the opposite trend. The net C(k) was a continuous power function of the speed over the entire range of velocities from 2.88 to 4.45 m x s(-1): C(k) = 0.02 x v(2.26) (r = 0.937, n = 32). PMID:10541920
Hillman, Stanley S; Hancock, Thomas V; Hedrick, Michael S
2013-02-01
Maximal aerobic metabolic rates (MMR) in vertebrates are supported by increased conductive and diffusive fluxes of O(2) from the environment to the mitochondria necessitating concomitant increases in CO(2) efflux. A question that has received much attention has been which step, respiratory or cardiovascular, provides the principal rate limitation to gas flux at MMR? Limitation analyses have principally focused on O(2) fluxes, though the excess capacity of the lung for O(2) ventilation and diffusion remains unexplained except as a safety factor. Analyses of MMR normally rely upon allometry and temperature to define these factors, but cannot account for much of the variation and often have narrow phylogenetic breadth. The unique aspect of our comparative approach was to use an interclass meta-analysis to examine cardio-respiratory variables during the increase from resting metabolic rate to MMR among vertebrates from fish to mammals, independent of allometry and phylogeny. Common patterns at MMR indicate universal principles governing O(2) and CO(2) transport in vertebrate cardiovascular and respiratory systems, despite the varied modes of activities (swimming, running, flying), different cardio-respiratory architecture, and vastly different rates of metabolism (endothermy vs. ectothermy). Our meta-analysis supports previous studies indicating a cardiovascular limit to maximal O(2) transport and also implicates a respiratory system limit to maximal CO(2) efflux, especially in ectotherms. Thus, natural selection would operate on the respiratory system to enhance maximal CO(2) excretion and the cardiovascular system to enhance maximal O(2) uptake. This provides a possible evolutionary explanation for the conundrum of why the respiratory system appears functionally over-designed from an O(2) perspective, a unique insight from previous work focused solely on O(2) fluxes. The results suggest a common gas transport blueprint, or Bauplan, in the vertebrate clade. PMID
Hillman, Stanley S; Hancock, Thomas V; Hedrick, Michael S
2013-02-01
Maximal aerobic metabolic rates (MMR) in vertebrates are supported by increased conductive and diffusive fluxes of O(2) from the environment to the mitochondria necessitating concomitant increases in CO(2) efflux. A question that has received much attention has been which step, respiratory or cardiovascular, provides the principal rate limitation to gas flux at MMR? Limitation analyses have principally focused on O(2) fluxes, though the excess capacity of the lung for O(2) ventilation and diffusion remains unexplained except as a safety factor. Analyses of MMR normally rely upon allometry and temperature to define these factors, but cannot account for much of the variation and often have narrow phylogenetic breadth. The unique aspect of our comparative approach was to use an interclass meta-analysis to examine cardio-respiratory variables during the increase from resting metabolic rate to MMR among vertebrates from fish to mammals, independent of allometry and phylogeny. Common patterns at MMR indicate universal principles governing O(2) and CO(2) transport in vertebrate cardiovascular and respiratory systems, despite the varied modes of activities (swimming, running, flying), different cardio-respiratory architecture, and vastly different rates of metabolism (endothermy vs. ectothermy). Our meta-analysis supports previous studies indicating a cardiovascular limit to maximal O(2) transport and also implicates a respiratory system limit to maximal CO(2) efflux, especially in ectotherms. Thus, natural selection would operate on the respiratory system to enhance maximal CO(2) excretion and the cardiovascular system to enhance maximal O(2) uptake. This provides a possible evolutionary explanation for the conundrum of why the respiratory system appears functionally over-designed from an O(2) perspective, a unique insight from previous work focused solely on O(2) fluxes. The results suggest a common gas transport blueprint, or Bauplan, in the vertebrate clade.
Steding-Ehrenborg, Katarina; Boushel, Robert C; Calbet, José A; Åkeson, Per; Mortensen, Stefan P
2015-12-01
Age-related decline in cardiac function can be prevented or postponed by lifelong endurance training. However, effects of normal ageing as well as of lifelong endurance exercise on longitudinal and radial contribution to stroke volume are unknown. The aim of this study was to determine resting longitudinal and radial pumping in elderly athletes, sedentary elderly and young sedentary subjects. Furthermore, we aimed to investigate determinants of maximal cardiac output in elderly. Eight elderly athletes (63 ± 4 years), seven elderly sedentary (66 ± 4 years) and ten young sedentary subjects (29 ± 4 years) underwent cardiac magnetic resonance imaging. All subjects underwent maximal exercise testing and for elderly subjects maximal cardiac output during cycling was determined using a dye dilution technique. Longitudinal and radial contribution to stroke volume did not differ between groups (longitudinal left ventricle (LV) 52-65%, P = 0.12, right ventricle (RV) 77-87%, P = 0.16, radial 7.9-8.6%, P = 1.0). Left ventricular atrioventricular plane displacement (LVAVPD) was higher in elderly athletes and young sedentary compared with elderly sedentary subjects (14 ± 3, 15 ± 2 and 11 ± 1 mm, respectively, P < 0.05). There was no difference between groups for RVAVPD (P = 0.2). LVAVPD was an independent predictor of maximal cardiac output (R(2) = 0.61, P < 0.01, β = 0.78). Longitudinal and radial contributions to stroke volume did not differ between groups. However, how longitudinal pumping was achieved differed; elderly athletes and young sedentary subjects showed similar AVPD whereas this was significantly lower in elderly sedentary subjects. Elderly sedentary subjects achieved longitudinal pumping through increased short-axis area of the ventricle. Large AVPD was a determinant of maximal cardiac output and exercise capacity.
Predictors of maximal short-term power outputs in basketball players 14-16 years.
Carvalho, Humberto M; Coelho E Silva, Manuel J; Figueiredo, António J; Gonçalves, Carlos E; Philippaerts, Renaat M; Castagna, Carlo; Malina, Robert M
2011-05-01
Relationships between growth, maturation and maximal short-term power outputs were investigated in 94 youth basketball players aged 14-16 years. Data included chronological age (CA), skeletal age (SA), years of training; body dimensions, estimated thigh volume, a running based short-term exercise assessed by the line drill test (LDT), the Bangsbo sprint test (BST) and short-term muscle power outputs with the Wingate anaerobic test (WAnT). Multiple linear regression analyses were used to estimate the effects of CA, skeletal maturity (SA/CA), years of training experience, body size and lower-limb volume on short-term performance in the LDT, BST and WAnT, respectively. Explained variances differed between cycle-ergometry outputs (52-54%) and running test performances (23-46%). The independent effects of predictors were small in the fatigue scores of the WAnT (4%) and the BST (11%). Skeletal maturity, body mass and leg length were primary predictors for all maximal short-term power output measures. Leg length was more relevant as a predictor than stature in the WAnT outputs, while stature and body mass appeared in the model with the running tests as dependent variable. Maximal short-term running abilities were also sensitive to years of training. In summary, skeletal maturation, body size and thigh muscle mass explained moderate to large proportions of the variance on maximal short-term performances of adolescent basketball players. The results highlight the importance of considering maturity status in evaluating the maximal short-term power outputs of adolescent athletes.
Anthony, Christopher J; DiPerna, James C; Lei, Pui-Wa
2016-04-01
Measurement efficiency is an important consideration when developing behavior rating scales for use in research and practice. Although most published scales have been developed within a Classical Test Theory (CTT) framework, Item Response Theory (IRT) offers several advantages for developing scales that maximize measurement efficiency. The current study provides an example of using IRT to maximize rating scale efficiency with the Social Skills Improvement System - Teacher Rating Scale (SSIS - TRS), a measure of student social skills frequently used in practice and research. Based on IRT analyses, 27 items from the Social Skills subscales and 14 items from the Problem Behavior subscales of the SSIS - TRS were identified as maximally efficient. In addition to maintaining similar content coverage to the published version, these sets of maximally efficient items demonstrated similar psychometric properties to the published SSIS - TRS.
Test of independence for generalized Farlie-Gumbel-Morgenstern distributions
NASA Astrophysics Data System (ADS)
Guven, Bilgehan; Kotz, Samual
2008-02-01
Given a pair of absolutely continuous random variables (X,Y) distributed as the generalized Farlie-Gumbel-Morgenstern (GFGM) distribution, we develop a test for testing the hypothesis: X and Y are independent vs. the alternative; X and Y are positively (negatively) quadrant dependent above a preassigned degree of dependence. The proposed test maximizes the minimum power over the alternative hypothesis. Also it possesses a monotone increasing power with respect to the dependence parameter of the GFGM distribution. An asymptotic distribution of the test statistic and an approximate test power are also studied.
Garcia-Tabar, Ibai; Eclache, Jean P.; Aramendi, José F.; Gorostiaga, Esteban M.
2015-01-01
The aim was to examine the drift in the measurements of fractional concentration of oxygen (FO2) and carbon dioxide (FCO2) of a Nafion-using metabolic cart during incremental maximal exercise in 18 young and 12 elderly males, and to propose a way in which the drift can be corrected. The drift was verified by comparing the pre-test calibration values with the immediate post-test verification values of the calibration gases. The system demonstrated an average downscale drift (P < 0.001) in FO2 and FCO2 of −0.18% and −0.05%, respectively. Compared with measured values, corrected average maximal oxygen uptakevalues were 5–6% lower (P < 0.001) whereas corrected maximal respiratory exchange ratio values were 8–9% higher (P < 0.001). The drift was not due to an electronic instability in the analyzers because it was reverted after 20 min of recovery from the end of the exercise. The drift may be related to an incomplete removal of water vapor from the expired gas during transit through the Nafion conducting tube. These data demonstrate the importance of checking FO2 and FCO2 values by regular pre-test calibrations and post-test verifications, and also the importance of correcting a possible shift immediately after exercise. PMID:26578980
Park, Saejong; Kim, Jong Kyung; Choi, Hyun Min; Kim, Hyun Gook; Beekley, Matthew D; Nho, Hosung
2010-07-01
Walk training with blood flow occlusion (OCC-walk) leads to muscle hypertrophy; however, cardiorespiratory endurance in response to OCC-walk is unknown. Ischemia enhances the adaptation to endurance training such as increased maximal oxygen uptake (VO₂(max)) and muscle glycogen content. Thus, we investigated the effects of an OCC-walk on cardiorespiratory endurance, anaerobic power, and muscle strength in elite athletes. College basketball players participated in walk training with (n = 7) and without (n = 5) blood flow occlusion. Five sets of a 3-min walk (4-6 km/h at 5% grade) and a 1-min rest between the walks were performed twice a day, 6 days a week for 2 weeks. Two-way ANOVA with repeated measures (groups x time) was utilized (P < 0.05). Interactions were found in VO₂(max) (P = 0.011) and maximal minute ventilation (VE(max); P = 0.019). VO₂(max) (11.6%) and VE(max) (10.6%) were increased following the OCC-walk. For the cardiovascular adaptations of the OCC-walk, hemodynamic parameters such as stroke volume (SV) and heart rate (HR) at rest and during OCC-walk were compared between the first and the last OCC-walk sessions. Although no change in hemodynamics was found at rest, during the last OCC-walk session SV was increased in all five sets (21.4%) and HR was decreased in the third (12.3%) and fifth (15.0%) sets. With anaerobic power an interaction was found in anaerobic capacity (P = 0.038) but not in peak power. Anaerobic capacity (2.5%) was increased following the OCC-walk. No interaction was found in muscle strength. In conclusion, the 2-week OCC-walk significantly increases VO₂(max) and VE(max) in athletes. The OCC-walk training might be used in the rehabilitation for athletes who intend to maintain or improve endurance.
Experimental measurement-device-independent entanglement detection.
Nawareg, Mohamed; Muhammad, Sadiq; Amselem, Elias; Bourennane, Mohamed
2015-01-01
Entanglement is one of the most puzzling features of quantum theory and of great importance for the new field of quantum information. The determination whether a given state is entangled or not is one of the most challenging open problems of the field. Here we report on the experimental demonstration of measurement-device-independent (MDI) entanglement detection using witness method for general two qubits photon polarization systems. In the MDI settings, there is no requirement to assume perfect implementations or neither to trust the measurement devices. This experimental demonstration can be generalized for the investigation of properties of quantum systems and for the realization of cryptography and communication protocols. PMID:25649664
Arsac, L M; Belli, A; Lacour, J R
1996-01-01
A friction loaded cycle ergometer was instrumented with a strain gauge and an incremental encoder to obtain accurate measurement of human mechanical work output during the acceleration phase of a cycling sprint. This device was used to characterise muscle function in a group of 15 well-trained male subjects, asked to perform six short maximal sprints on the cycle against a constant friction load. Friction loads were successively set at 0.25, 0.35, 0.45, 0.55, 0.65 and 0.75 N.kg-1 body mass. Since the sprints were performed from a standing start, and since the acceleration was not restricted, the greatest attention was paid to the measurement of the acceleration balancing load due to flywheel inertia. Instantaneous pedalling velocity (v) and power output (P) were calculated each 5 ms and then averaged over each downstroke period so that each pedal downstroke provided a combination of v, force and P. Since an 8-s acceleration phase was composed of about 21 to 34 pedal downstrokes, this many v-P combinations were obtained amounting to 137-180 v-P combinations for all six friction loads in one individual, over the widest functional range of pedalling velocities (17-214 rpm). Thus, the individual's muscle function was characterised by the v-P relationships obtained during the six acceleration phases of the six sprints. An important finding of the present study was a strong linear relationship between individual optimal velocity (vopt) and individual maximal power output (Pmax) (n = 15, r = 0.95, P < 0.001) which has never been observed before. Since vopt has been demonstrated to be related to human fibre type composition both vopt, Pmax and their inter-relationship could represent a major feature in characterising muscle function in maximal unrestricted exercise. It is suggested that the present method is well suited to such analyses.
Biographical factors of occupational independence.
Müller, G F
2001-10-01
The present study examined biographical factors of occupational independence including any kind of nonemployed profession. Participants were 59 occupationally independent and 58 employed persons of different age (M = 36.3 yr.), sex, and profession. They were interviewed on variables like family influence, educational background, occupational role models, and critical events for choosing a particular type of occupational career. The obtained results show that occupationally independent people reported stronger family ties, experienced fewer restrictions of formal education, and remembered fewer negative role models than the employed people. Implications of these results are discussed. PMID:11783553
Sequence independent amplification of DNA
Bohlander, Stefan K.
1998-01-01
The present invention is a rapid sequence-independent amplification procedure (SIA). Even minute amounts of DNA from various sources can be amplified independent of any sequence requirements of the DNA or any a priori knowledge of any sequence characteristics of the DNA to be amplified. This method allows, for example the sequence independent amplification of microdissected chromosomal material and the reliable construction of high quality fluorescent in situ hybridization (FISH) probes from YACs or from other sources. These probes can be used to localize YACs on metaphase chromosomes but also--with high efficiency--in interphase nuclei.
Sequence independent amplification of DNA
Bohlander, S.K.
1998-03-24
The present invention is a rapid sequence-independent amplification procedure (SIA). Even minute amounts of DNA from various sources can be amplified independent of any sequence requirements of the DNA or any a priori knowledge of any sequence characteristics of the DNA to be amplified. This method allows, for example, the sequence independent amplification of microdissected chromosomal material and the reliable construction of high quality fluorescent in situ hybridization (FISH) probes from YACs or from other sources. These probes can be used to localize YACs on metaphase chromosomes but also--with high efficiency--in interphase nuclei. 25 figs.
NASA Astrophysics Data System (ADS)
Sai, Toru; Sugimoto, Yasuhiro
By using a quadratic compensation slope, a CMOS current-mode buck DC-DC converter with constant frequency characteristics over wide input and output voltage ranges has been developed. The use of a quadratic slope instead of a conventional linear slope makes both the damping factor in the transfer function and the frequency bandwidth of the current feedback loop independent of the converter's output voltage settings. When the coefficient of the quadratic slope is chosen to be dependent on the input voltage settings, the damping factor in the transfer function and the frequency bandwidth of the current feedback loop both become independent of the input voltage settings. Thus, both the input and output voltage dependences in the current feedback loop are eliminated, the frequency characteristics become constant, and the frequency bandwidth is maximized. To verify the effectiveness of a quadratic compensation slope with a coefficient that is dependent on the input voltage in a buck DC-DC converter, we fabricated a test chip using a 0.18µm high-voltage CMOS process. The evaluation results show that the frequency characteristics of both the total feedback loop and the current feedback loop are constant even when the input and output voltages are changed from 2.5V to 7V and from 0.5V to 5.6V, respectively, using a 3MHz clock.
The Independent Payment Advisory Board.
Manchikanti, Laxmaiah; Falco, Frank J E; Singh, Vijay; Benyamin, Ramsin M; Hirsch, Joshua A
2011-01-01
The Independent Payment Advisory Board (IPAB) is a vastly powerful component of the president's health care reform law, with authority to issue recommendations to reduce the growth in Medicare spending, providing recommendations to be considered by Congress and implemented by the administration on a fast track basis. Ever since its inception, IPAB has been one of the most controversial issues of the Patient Protection and Affordable Care Act (ACA), even though the powers of IPAB are restricted and multiple sectors of health care have been protected in the law. IPAB works by recommending policies to Congress to help Medicare provide better care at a lower cost, which would include ideas on coordinating care, getting rid of waste in the system, providing incentives for best practices, and prioritizing primary care. Congress then has the power to accept or reject these recommendations. However, Congress faces extreme limitations, either to enact policies that achieve equivalent savings, or let the Secretary of Health and Human Services (HHS) follow IPAB's recommendations. IPAB has strong supporters and opponents, leading to arguments in favor of or against to the extreme of introducing legislation to repeal IPAB. The origins of IPAB are found in the ideology of the National Institute for Health and Clinical Excellence (NICE) and the impetus of exploring health care costs, even though IPAB's authority seems to be limited to Medicare only. The structure and operation of IPAB differs from Medicare and has been called the Medicare Payment Advisory Commission (MedPAC) on steroids. The board membership consists of 15 full-time members appointed by the president and confirmed by the Senate with options for recess appointments. The IPAB statute sets target growth rates for Medicare spending. The applicable percent for maximum savings appears to be 0.5% for year 2015, 1% for 2016, 1.25% for 2017, and 1.5% for 2018 and later. The IPAB Medicare proposal process involves
Haddad, S; Charest-Tardif, G; Krishnan, K
2000-10-13
The objective of this study was to predict and validate the theoretically possible, maximal impact of metabolic interactions on the blood concentration profile of each component in mixtures of volatile organic chemicals (VOCs) [dichloromethane (DCM), benzene (BEN), trichloroethylene (TCE), toluene (TOL), tetrachloroethylene (PER), ethylbenzene (EBZ), styrene (STY), as well as para, ortho-, and meta-xylene (p-XYL, o-XYL, m-XYL)] in the rat. The methodology consisted of: (1) obtaining the validated, physiologically based toxicokinetic (PBTK) model for each of the mixture components from the literature, (2) substituting the Michaelis-Menten description of metabolism with an equation based on the hepatic extraction ratio (E) for simulating the maximal impact of metabolic interactions (i.e., by setting E to 0 or 1 for simulating maximal inhibition or induction, respectively), and (3) validating the PBTK model simulations by comparing the predicted boundaries of venous blood concentrations with the experimental data obtained following exposure to various mixtures of VOCs. All experimental venous blood concentration data for 9 of the 10 chemicals investigated in the present study (PER excepted) fell within the boundaries of the maximal impact of metabolic inhibition and induction predicted by the PBTK model. The modeling approach validated in this study represents a potentially useful tool for screening/identifying the chemicals for which metabolic interactions are likely to be important in the context of mixed exposures and mixture risk assessment.
Kreitler, Jason; Stoms, David M; Davis, Frank W
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
Infant Mental Health Home Visitation: Setting and Maintaining Professional Boundaries
ERIC Educational Resources Information Center
Barron, Carla; Paradis, Nichole
2010-01-01
Relationship-based infant mental health home visiting services for infants, toddlers, and their families intensify the connection between the personal and professional. To promote the therapeutic relationship and maximize the effectiveness of the intervention, home visitors must exercise good judgment, in the field and in the moment, to set and…
All to the Center! Maintaining Equilibrium in the Collaborative Setting.
ERIC Educational Resources Information Center
Kurz, Meredith
One of the issues a college writing instructor grapples with in teaching writing is how best to structure collaborative groups to maximize benefit for each student in a multicultural classroom where many students might fairly be considered "marginalized"--to create an environment in which they become "insiders." Criteria sets for forming group…
Energy system interaction and relative contribution during maximal exercise.
Gastin, P B
2001-01-01
There are 3 distinct yet closely integrated processes that operate together to satisfy the energy requirements of muscle. The anaerobic energy system is divided into alactic and lactic components, referring to the processes involved in the splitting of the stored phosphagens, ATP and phosphocreatine (PCr), and the nonaerobic breakdown of carbohydrate to lactic acid through glycolysis. The aerobic energy system refers to the combustion of carbohydrates and fats in the presence of oxygen. The anaerobic pathways are capable of regenerating ATP at high rates yet are limited by the amount of energy that can be released in a single bout of intense exercise. In contrast, the aerobic system has an enormous capacity yet is somewhat hampered in its ability to delivery energy quickly. The focus of this review is on the interaction and relative contribution of the energy systems during single bouts of maximal exercise. A particular emphasis has been placed on the role of the aerobic energy system during high intensity exercise. Attempts to depict the interaction and relative contribution of the energy systems during maximal exercise first appeared in the 1960s and 1970s. While insightful at the time, these representations were based on calculations of anaerobic energy release that now appear questionable. Given repeated reproduction over the years, these early attempts have lead to 2 common misconceptions in the exercise science and coaching professions. First, that the energy systems respond to the demands of intense exercise in an almost sequential manner, and secondly, that the aerobic system responds slowly to these energy demands, thereby playing little role in determining performance over short durations. More recent research suggests that energy is derived from each of the energy-producing pathways during almost all exercise activities. The duration of maximal exercise at which equal contributions are derived from the anaerobic and aerobic energy systems appears to
Energy system interaction and relative contribution during maximal exercise.
Gastin, P B
2001-01-01
There are 3 distinct yet closely integrated processes that operate together to satisfy the energy requirements of muscle. The anaerobic energy system is divided into alactic and lactic components, referring to the processes involved in the splitting of the stored phosphagens, ATP and phosphocreatine (PCr), and the nonaerobic breakdown of carbohydrate to lactic acid through glycolysis. The aerobic energy system refers to the combustion of carbohydrates and fats in the presence of oxygen. The anaerobic pathways are capable of regenerating ATP at high rates yet are limited by the amount of energy that can be released in a single bout of intense exercise. In contrast, the aerobic system has an enormous capacity yet is somewhat hampered in its ability to delivery energy quickly. The focus of this review is on the interaction and relative contribution of the energy systems during single bouts of maximal exercise. A particular emphasis has been placed on the role of the aerobic energy system during high intensity exercise. Attempts to depict the interaction and relative contribution of the energy systems during maximal exercise first appeared in the 1960s and 1970s. While insightful at the time, these representations were based on calculations of anaerobic energy release that now appear questionable. Given repeated reproduction over the years, these early attempts have lead to 2 common misconceptions in the exercise science and coaching professions. First, that the energy systems respond to the demands of intense exercise in an almost sequential manner, and secondly, that the aerobic system responds slowly to these energy demands, thereby playing little role in determining performance over short durations. More recent research suggests that energy is derived from each of the energy-producing pathways during almost all exercise activities. The duration of maximal exercise at which equal contributions are derived from the anaerobic and aerobic energy systems appears to
Metabolic heterogeneity in human calf muscle during maximal exercise
Vandenborne, K. Free Univ. of Brussels ); McCully, K.; Kakihira, H.; Prammer. M.; Bolinger, L.; Detre, J.A.; Walter, G.; Chance, B.; Leigh. J.S. ); De Meirleir, K. )
1991-07-01
Human skeletal muscle is composed of various muscle fiber types. The authors hypothesized that differences in metabolism between fiber types could be detected noninvasively with {sup 31}P nuclear magnetic resonance spectroscopy during maximal exercise. This assumes that during maximal exercise all fiber types are recruited and all vary in the amount of acidosis. The calf muscles of seven subjects were studied. Two different coils were applied: an 11-cm-diameter surface coil and a five-segment meander coil. The meander coil was used to localize the {sup 31}P signal to either the medial or the lateral gastrocnemius. Maximal exercise, consisting of rapid plantar flexions, resulted in an 83.7% {plus minus} 7.8% decrease of the phosphocreatine pool and an 8-fold increase of the inorganic phosphate (P{sub i}) pool. At rest the P{sub i} pool was observed as a single resonance (pH 7.0). Toward the end of the first minute of exercise, three subjects showed three distinct P{sub i} peaks. During the second minute of exercise the pH values stabilized. The same pattern was seen when the signal was collected from the medial or lateral gastrocnemius. In four subjects only two distinct P{sub i} peaks were observed. The P{sub i} peaks had differing relative areas in different subjects, but they were reproducible in each individual. This method allowed is to study the appearance and disappearance of the different P{sub i} peaks, together with the changes in pH. Because multiple P{sub i} peaks were seen in single muscles they most likely identify different muscle fiber types.
Desai, Sachin N; Cravioto, Alejandro; Sur, Dipika; Kanungo, Suman
2014-01-01
When oral vaccines are administered to children in lower- and middle-income countries, they do not induce the same immune responses as they do in developed countries. Although not completely understood, reasons for this finding include maternal antibody interference, mucosal pathology secondary to infection, malnutrition, enteropathy, and previous exposure to the organism (or related organisms). Young children experience a high burden of cholera infection, which can lead to severe acute dehydrating diarrhea and substantial mortality and morbidity. Oral cholera vaccines show variations in their duration of protection and efficacy between children and adults. Evaluating innate and memory immune response is necessary to understand V. cholerae immunity and to improve current cholera vaccine candidates, especially in young children. Further research on the benefits of supplementary interventions and delivery schedules may also improve immunization strategies.
Absolutely maximally entangled states, combinatorial designs, and multiunitary matrices
NASA Astrophysics Data System (ADS)
Goyeneche, Dardo; Alsina, Daniel; Latorre, José I.; Riera, Arnau; Życzkowski, Karol
2015-09-01
Absolutely maximally entangled (AME) states are those multipartite quantum states that carry absolute maximum entanglement in all possible bipartitions. AME states are known to play a relevant role in multipartite teleportation, in quantum secret sharing, and they provide the basis novel tensor networks related to holography. We present alternative constructions of AME states and show their link with combinatorial designs. We also analyze a key property of AME states, namely, their relation to tensors, which can be understood as unitary transformations in all of their bipartitions. We call this property multiunitarity.
RapidMic: Rapid Computation of the Maximal Information Coefficient
Tang, Dongming; Wang, Mingwen; Zheng, Weifan; Wang, Hongjun
2014-01-01
To discover relationships and associations rapidly in large-scale datasets, we propose a cross-platform tool for the rapid computation of the maximal information coefficient based on parallel computing methods. Through parallel processing, the provided tool can effectively analyze large-scale biological datasets with a markedly reduced computing time. The experimental results show that the proposed tool is notably fast, and is able to perform an all-pairs analysis of a large biological dataset using a normal computer. The source code and guidelines can be downloaded from https://github.com/HelloWorldCN/RapidMic. PMID:24526831
Maximizing Shelf Life of Paneer-A Review.
Goyal, Sumit; Goyal, Gyanendra Kumar
2016-06-10
Paneer resembling soft cheese is a well-known heat- and acid-coagulated milk product. It is very popular in the Indian subcontinent and has appeared in the western and Middle East markets. The shelf life of paneer is quite low and it loses freshness after two to three days when stored under refrigeration. Various preservation techniques, including chemical additives, packaging, thermal processing, and low-temperature storage, have been proposed by researchers for enhancing its shelf life. The use of antimicrobial additives is not preferred because of perceived toxicity risks. Modified atmosphere packaging has been recommended as one of the best techniques for maximizing the shelf life of paneer. PMID:25679043
Maximizing the return on taxpayers' investments in fundamental biomedical research
Lorsch, Jon R.
2015-01-01
The National Institute of General Medical Sciences (NIGMS) at the U.S. National Institutes of Health has an annual budget of more than $2.3 billion. The institute uses these funds to support fundamental biomedical research and training at universities, medical schools, and other institutions across the country. My job as director of NIGMS is to work to maximize the scientific returns on the taxpayers' investments. I describe how we are optimizing our investment strategies and funding mechanisms, and how, in the process, we hope to create a more efficient and sustainable biomedical research enterprise. PMID:25926703
Alzheimer's disease care management plan: maximizing patient care.
Treinkman, Anna
2005-03-01
Nurse practitioners have the potential to significantly impact the care of patients with dementia. Healthcare providers can now offer patients medications that will control symptoms and prolong functioning. As a result of ongoing contact with patients, NPs play an important role in assessing and screening patients for AD and educating the patients, families, and caregivers about the disease. Alzheimer's disease is a chronic, progressive illness that requires long-term management. Nurse practitioners should be familiar with available medications and appreciate the need to individualize therapy to maximize efficacy and minimize potential adverse drug reactions.
Method for maximizing shale oil recovery from an underground formation
Sisemore, Clyde J.
1980-01-01
A method for maximizing shale oil recovery from an underground oil shale formation which has previously been processed by in situ retorting such that there is provided in the formation a column of substantially intact oil shale intervening between adjacent spent retorts, which method includes the steps of back filling the spent retorts with an aqueous slurry of spent shale. The slurry is permitted to harden into a cement-like substance which stabilizes the spent retorts. Shale oil is then recovered from the intervening column of intact oil shale by retorting the column in situ, the stabilized spent retorts providing support for the newly developed retorts.
A new tetrahydrofuran lignan diglycoside from Viola tianshanica Maxim.
Qin, Yan; Yin, Chengle; Cheng, Zhihong
2013-11-04
A new lignan glycoside, tianshanoside A (1), together with a known phenylpropanoid glycoside, syringin (2) and two known lignan glycosides, picraquassioside C (3), and aketrilignoside B (4), were isolated from the whole plant of Viola tianshanica Maxim. The structure of the new compound was elucidated by extensive NMR (1H, 13C, COSY, HSQC, HMBC and ROESY) and high resolution mass spectrometry analysis. The three lignans 1, 3, and 4 did not exhibit significant cytotoxicity against human gastric cancer Ags cells or HepG2 liver cancer cells. This is the first report of the isolation of a lignan skeleton from the genus Viola L.
Exact Distribution of the Maximal Height of p Vicious Walkers
NASA Astrophysics Data System (ADS)
Schehr, Grégory; Majumdar, Satya N.; Comtet, Alain; Randon-Furling, Julien
2008-10-01
Using path-integral techniques, we compute exactly the distribution of the maximal height Hp of p nonintersecting Brownian walkers over a unit time interval in one dimension, both for excursions p watermelons with a wall, and bridges p watermelons without a wall, for all integer p≥1. For large p, we show that ⟨Hp⟩˜2p (excursions) whereas ⟨Hp⟩˜p (bridges). Our exact results prove that previous numerical experiments only measured the preasymptotic behaviors and not the correct asymptotic ones. In addition, our method establishes a physical connection between vicious walkers and random matrix theory.
Informatics knowledge: the key to maximizing performance and productivity.
Sinclair, V G
1997-06-01
Nurse managers face a competitive and complex market-place that demands greater attention to customer satisfaction along with continual improvements in quality and cost-effectiveness. Without information technology, the manager cannot hope to deliver ever greater quality at less cost. The nurse managers' effective promotion and use of computer applications can have enormous impact on the performance and productivity of their health care facilities. This article explores the potential of information technology to maximize clinical and cost outcomes, optimize decision making, and enhance administrative productivity. PMID:9220900
Sexually transmitted infections in adolescents: Maximizing opportunities for optimal care
Allen, Upton D; MacDonald, Noni E
2014-01-01
Sexually transmitted infections are a growing public health concern in Canada, with rates of Chlamydia trachomatis infection, gonorrhea and syphilis increasing among adolescents and young adults. The present practice point outlines epidemiology, risk factors, laboratory testing and management for C trachomatis, Neisseria gonorrhoeae and Treponema pallidum, with a lesser focus on HIV. The need for test-of-cure and indications for further investigations are also discussed. The importance of maximizing opportunities to screen for and treat sexually transmitted infections in this age group is highlighted. PMID:25383001
Maximizing the return on taxpayers' investments in fundamental biomedical research.
Lorsch, Jon R
2015-05-01
The National Institute of General Medical Sciences (NIGMS) at the U.S. National Institutes of Health has an annual budget of more than $2.3 billion. The institute uses these funds to support fundamental biomedical research and training at universities, medical schools, and other institutions across the country. My job as director of NIGMS is to work to maximize the scientific returns on the taxpayers' investments. I describe how we are optimizing our investment strategies and funding mechanisms, and how, in the process, we hope to create a more efficient and sustainable biomedical research enterprise.
Weighted EMPCA: Weighted Expectation Maximization Principal Component Analysis
NASA Astrophysics Data System (ADS)
Bailey, Stephen
2016-09-01
Weighted EMPCA performs principal component analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data such that the resulting eigenvectors, when compared to classic PCA, are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data are simply limiting cases of weight = 0. The underlying algorithm is a noise weighted expectation maximization (EM) PCA, which has additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution.
Wave number of maximal growth in viscous ferrofluids.
NASA Astrophysics Data System (ADS)
Lange, A.; Reimann, B.; Richter, R.
2001-09-01
Within the frame of linear stability theory an analytical method is presented for the normal field instability in magnetic fluids. It allows to calculate the maximal growth rate and the corresponding wave number for arbitrary values of the layer thickness and viscosity. Applying this method to magnetic fluids of finite depth, the results are quantitatively compared to the wave number of the transient pattern observed experimentally after a jumplike increase of the field. The wave number grows linearly with increasing induction where the theoretical and the experimental data agree well. Figs 2, Refs 13.
Testing sequential quantum measurements: how can maximal knowledge be extracted?
Nagali, Eleonora; Felicetti, Simone; de Assis, Pierre-Louis; D'Ambrosio, Vincenzo; Filip, Radim; Sciarrino, Fabio
2012-01-01
The extraction of information from a quantum system unavoidably implies a modification of the measured system itself. In this framework partial measurements can be carried out in order to extract only a portion of the information encoded in a quantum system, at the cost of inducing a limited amount of disturbance. Here we analyze experimentally the dynamics of sequential partial measurements carried out on a quantum system, focusing on the trade-off between the maximal information extractable and the disturbance. In particular we implement two sequential measurements observing that, by exploiting an adaptive strategy, is possible to find an optimal trade-off between the two quantities. PMID:22720131
Maximally entangled states in a Bose-Hubbard trimer
NASA Astrophysics Data System (ADS)
Reyes, Sebastian; Morales-Molina, Luis; Orszag, Miguel
2014-03-01
We study the generation of entanglement for interacting cold atoms in a three-site Bose-Hubbard ring. We propose a scheme by which maximally entangled states (MES) between two distinct atomic species can be prepared. Depending on the choice of experimental parameters, we demonstrate that it is possible to obtain different types of MES. Furthermore, we show that these MES are highly protected against experimental noise, making them good candidates for potential applications. S. R. acknowledges the support of FONDECYT grant 11110537.
Deformations with maximal supersymmetries part 2: off-shell formulation
NASA Astrophysics Data System (ADS)
Chang, Chi-Ming; Lin, Ying-Hsuan; Wang, Yifan; Yin, Xi
2016-04-01
Continuing our exploration of maximally supersymmetric gauge theories (MSYM) deformed by higher dimensional operators, in this paper we consider an off-shell approach based on pure spinor superspace and focus on constructing supersymmetric deformations beyond the first order. In particular, we give a construction of the Batalin-Vilkovisky action of an all-order non-Abelian Born-Infeld deformation of MSYM in the non-minimal pure spinor formalism. We also discuss subtleties in the integration over the pure spinor superspace and the relevance of Berkovits-Nekrasov regularization.
Independent Schools: Landscape and Learnings.
ERIC Educational Resources Information Center
Oates, William A.
1981-01-01
Examines American independent schools (parochial, southern segregated, and private institutions) in terms of their funding, expenditures, changing enrollment patterns, teacher-student ratios, and societal functions. Journal available from Daedalus Subscription Department, 1172 Commonwealth Ave., Boston, MA 02132. (AM)
Technology for Independent Living: Sourcebook.
ERIC Educational Resources Information Center
Enders, Alexandra, Ed.
This sourcebook provides information for the practical implementation of independent living technology in the everyday rehabilitation process. "Information Services and Resources" lists databases, clearinghouses, networks, research and development programs, toll-free telephone numbers, consumer protection caveats, selected publications, and…
Independence test for sparse data
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.
2016-06-01
In this paper a new non-parametric independence test is presented. García and González-López (2014) [1] introduced the LIS test for the hypothesis of independence between two continuous random variables, the test proposed in this work is a generalization of the LIS test. The new test does not require the assumption of continuity for the random variables, it test is applied to two datasets and also compared with the Pearson's Chi-squared test.
Grice's cooperative principle in the psychoanalytic setting.
Ephratt, Michal
2014-12-01
Grice's "cooperative principle," including conversational implicatures and maxims, is commonplace in current pragmatics (a subfield of linguistics), and is generally applied in conversational analysis. The author examines the unique contribution of Grice's principle in considering the psychotherapeutic setting and its discourse. Such an investigation is called for chiefly because of the central role of speech in psychoanalytic practice (the "talking cure"). Symptoms and transference, which are characterized as forms of expression that are fundamentally deceptive, must be equivocal and indirect, and must breach all four of Grice's categories and maxims: truth (Quality), relation (Relevance), Manner (be clear), and Quantity. Therapeutic practice, according to Freud's "fundamental rule of psychoanalysis," encourages the parties (analysand and analyst) to breach each and every one of Grice's maxims. Using case reports drawn from the literature, the author shows that these breachings are essential for therapeutic progress. They serve as a unique and important ground for revealing inner (psychic) contents, and demarcating real self from illusive self, which in turn constitutes leverage for integrating these contents with the self.
ERIC Educational Resources Information Center
Coleman, Jennifer
2005-01-01
The impact that visuals can have on the mind of children while reading in the library is discussed. Thus it is important to provide young readers with a visual feast as they begin to choose reading materials to explore independently.
Non-negative matrix factorization by maximizing correntropy for cancer clustering
2013-01-01
Background Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. PMID:23522344
Maximizing Information Diffusion in the Cyber-physical Integrated Network †
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-01-01
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks. PMID:26569254
Dieli-Conwright, Christina M; Spektor, Tanya M; Rice, Judd C; Sattler, Fred R; Schroeder, E Todd
2012-05-01
We sought to evaluate baseline mRNA values and changes in gene expression of myostatin-related factors in postmenopausal women taking hormone therapy (HT) and not taking HT after eccentric exercise. Fourteen postmenopausal women participated including 6 controls not using HT (59 ± 4 years, 63 ± 17 kg) and 8 women using HT (59 ± 4 years, 89 ± 24 kg). The participants performed 10 sets of 10 maximal eccentric repetitions of single-leg extension on a dynamometer. Muscle biopsies from the vastus lateralis were obtained from the exercised leg at baseline and 4 hours after the exercise bout. Gene expression was determined using reverse transcriptase polymerase chain reaction for myostatin, activin receptor IIb (ActRIIb), follistatin, follistatin-related gene (FLRG), follistatin-like-3 (FSTL3), and GDF serum-associated protein-1 (GASP-1). In response to the exercise bout, myostatin and ActRIIb significantly decreased (p < 0.05), and follistatin, FLRG, FSTL3, and GASP-1 significantly increased in both groups (p < 0.05). Significantly greater changes in gene expression of all genes occurred in the HT group than in the control group after the acute eccentric exercise bout (p < 0.05). These data suggest that postmenopausal women using HT express greater myostatin-related gene expression, which may reflect a mechanism by which estrogen influences the preservation of muscle mass. Further, postmenopausal women using HT experienced a profoundly greater myostatin-related response to maximal eccentric exercise. PMID:22395277
Maes, F; Vandermeulen, D; Suetens, P
1999-12-01
Maximization of mutual information of voxel intensities has been demonstrated to be a very powerful criterion for three-dimensional medical image registration, allowing robust and accurate fully automated affine registration of multimodal images in a variety of applications, without the need for segmentation or other preprocessing of the images. In this paper, we investigate the performance of various optimization methods and multiresolution strategies for maximization of mutual information, aiming at increasing registration speed when matching large high-resolution images. We show that mutual information is a continuous function of the affine registration parameters when appropriate interpolation is used and we derive analytic expressions of its derivatives that allow numerically exact evaluation of its gradient. Various multiresolution gradient- and non-gradient-based optimization strategies, such as Powell, simplex, steepest-descent, conjugate-gradient, quasi-Newton and Levenberg-Marquardt methods, are evaluated for registration of computed tomography (CT) and magnetic resonance images of the brain. Speed-ups of a factor of 3 on average compared to Powell's method at full resolution are achieved with similar precision and without a loss of robustness with the simplex, conjugate-gradient and Levenberg-Marquardt method using a two-level multiresolution scheme. Large data sets such as 256(2) x 128 MR and 512(2) x 48 CT images can be registered with subvoxel precision in <5 min CPU time on current workstations. PMID:10709702
Maximizing Conservation and Production with Intensive Forest Management: It's All About Location.
Tittler, Rebecca; Filotas, Élise; Kroese, Jasmin; Messier, Christian
2015-11-01
Functional zoning has been suggested as a way to balance the needs of a viable forest industry with those of healthy ecosystems. Under this system, part of the forest is set aside for protected areas, counterbalanced by intensive and extensive management of the rest of the forest. Studies indicate this may provide adequate timber while minimizing road construction and favoring the development of large mature and old stands. However, it is unclear how the spatial arrangement of intensive management areas may affect the success of this zoning. Should these areas be agglomerated or dispersed throughout the forest landscape? Should managers prioritize (a) proximity to existing roads, (b) distance from protected areas, or (c) site-specific productivity? We use a spatially explicit landscape simulation model to examine the effects of different spatial scenarios on landscape structure, connectivity for native forest wildlife, stand diversity, harvest volume, and road construction: (1) random placement of intensive management areas, and (2-8) all possible combinations of rules (a)-(c). Results favor the agglomeration of intensive management areas. For most wildlife species, connectivity was the highest when intensive management was far from the protected areas. This scenario also resulted in relatively high harvest volumes. Maximizing distance of intensive management areas from protected areas may therefore be the best way to maximize the benefits of intensive management areas while minimizing their potentially negative effects on forest structure and biodiversity.
Maximizing Conservation and Production with Intensive Forest Management: It's All About Location
NASA Astrophysics Data System (ADS)
Tittler, Rebecca; Filotas, Élise; Kroese, Jasmin; Messier, Christian
2015-11-01
Functional zoning has been suggested as a way to balance the needs of a viable forest industry with those of healthy ecosystems. Under this system, part of the forest is set aside for protected areas, counterbalanced by intensive and extensive management of the rest of the forest. Studies indicate this may provide adequate timber while minimizing road construction and favoring the development of large mature and old stands. However, it is unclear how the spatial arrangement of intensive management areas may affect the success of this zoning. Should these areas be agglomerated or dispersed throughout the forest landscape? Should managers prioritize (a) proximity to existing roads, (b) distance from protected areas, or (c) site-specific productivity? We use a spatially explicit landscape simulation model to examine the effects of different spatial scenarios on landscape structure, connectivity for native forest wildlife, stand diversity, harvest volume, and road construction: (1) random placement of intensive management areas, and (2-8) all possible combinations of rules (a)-(c). Results favor the agglomeration of intensive management areas. For most wildlife species, connectivity was the highest when intensive management was far from the protected areas. This scenario also resulted in relatively high harvest volumes. Maximizing distance of intensive management areas from protected areas may therefore be the best way to maximize the benefits of intensive management areas while minimizing their potentially negative effects on forest structure and biodiversity.
Time-Course of Neuromuscular Changes during and after Maximal Eccentric Contractions
Doguet, Valentin; Jubeau, Marc; Dorel, Sylvain; Couturier, Antoine; Lacourpaille, Lilian; Guével, Arnaud; Guilhem, Gaël
2016-01-01
This study tested the relationship between the magnitude of muscle damage and both central and peripheral modulations during and after eccentric contractions of plantar flexors. Eleven participants performed 10 sets of 30 maximal eccentric contractions of the plantar flexors at 45°·s−1. Maximal voluntary torque, evoked torque (peripheral component) and voluntary activation (central component) were assessed before, during, immediately after (POST) and 48 h after (48 h) the eccentric exercise. Voluntary eccentric torque progressively decreased (up to −36%) concomitantly to a significant alteration of evoked torque (up to −34%) and voluntary activation (up to −13%) during the exercise. Voluntary isometric torque (−48 ± 7%), evoked torque (−41 ± 14%) and voluntary activation (−13 ± 11%) decreased at POST, but only voluntary isometric torque (−19 ± 6%) and evoked torque (−10 ± 18%) remained depressed at 48 h. Neither changes in voluntary activation nor evoked torque during the exercise were related to the magnitude of muscle damage markers, but the evoked torque decrement at 48 h was significantly correlated with the changes in voluntary activation (r = −0.71) and evoked torque (r = 0.77) at POST. Our findings show that neuromuscular responses observed during eccentric contractions were not associated with muscle damage. Conversely, central and peripheral impairments observed immediately after the exercise reflect the long-lasting reduction in force-generating capacity. PMID:27148075
Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization.
Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu
2016-01-01
In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of the expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications. PMID:26771830
Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization.
Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu
2016-01-01
In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of the expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications.
Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization
Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu
2016-01-01
In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of the expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications. PMID:26771830
NASA Astrophysics Data System (ADS)
Teutsch, Jason
2007-01-01
It is possible to enumerate all computer programs. In particular, for every partial computable function, there is a shortest program which computes that function. f-MIN is the set of indices for shortest programs. In 1972, Meyer showed that f-MIN is Turing equivalent to 0'', the halting set with halting set oracle. This paper generalizes the notion of shortest programs, and we use various measures from computability theory to describe the complexity of the resulting "spectral sets." We show that under certain Godel numberings, the spectral sets are exactly the canonical sets 0', 0'', 0''', ... up to Turing equivalence. This is probably not true in general, however we show that spectral sets always contain some useful information. We show that immunity, or "thinness" is a useful characteristic for distinguishing between spectral sets. In the final chapter, we construct a set which neither contains nor is disjoint from any infinite arithmetic set, yet it is 0-majorized and contains a natural spectral set. Thus a pathological set becomes a bit more friendly. Finally, a number of interesting open problems are left for the inspired reader.
ERIC Educational Resources Information Center
Baker, Mark; Beltran, Jane; Buell, Jason; Conrey, Brian; Davis, Tom; Donaldson, Brianna; Detorre-Ozeki, Jeanne; Dibble, Leila; Freeman, Tom; Hammie, Robert; Montgomery, Julie; Pickford, Avery; Wong, Justine
2013-01-01
Sets in the game "Set" are lines in a certain four-dimensional space. Here we introduce planes into the game, leading to interesting mathematical questions, some of which we solve, and to a wonderful variation on the game "Set," in which every tableau of nine cards must contain at least one configuration for a player to pick up.
Independent functions in rearrangement invariant spaces and the Kruglov property
NASA Astrophysics Data System (ADS)
Astashkin, S. V.
2008-08-01
Let X be a separable or maximal rearrangement invariant space on [0,1]. It is shown that the inequality \\displaystyle \\biggl\\Vert\\,\\sum_{k=1}^\\infty f_k\\biggr\\Vert _{X}\\le C\\biggl\\Vert\\biggl(\\,\\sum_{k=1}^\\infty f_k^2\\biggl)^{1/2}\\biggr\\Vert _Xholds for an arbitrary sequence of independent functions \\{f_k\\}_{k=1}^\\infty\\subset X, \\displaystyle\\int_0^1f_k(t)\\,dt=0, k=1,2,\\dots, if and only if X has the Kruglov property. As a consequence, it is proved that the same property is necessary and sufficient for a version of Maurey's well-known inequality for vector-valued Rademacher series with independent coefficients to hold in X.Bibliography: 24 titles.