Statistical mechanics of maximal independent sets
NASA Astrophysics Data System (ADS)
Dall'Asta, Luca; Pin, Paolo; Ramezanpour, Abolfazl
2009-12-01
The graph-theoretic concept of a maximal independent set arises in several practical problems in computer science as well as in game theory. A maximal independent set is defined by the set of occupied nodes that satisfy some packing and covering constraints. It is known that finding minimum- and maximum-density maximal independent sets are hard optimization problems. In this paper, we use the cavity method of statistical physics and Monte Carlo simulations to study the corresponding constraint satisfaction problem on random graphs. We obtain the entropy of maximal independent sets within the replica-symmetric and one-step replica-symmetry-breaking frameworks, shedding light on the metric structure of the landscape of solutions and suggesting a class of possible algorithms. This is of particular relevance for the application to the study of strategic interactions in social and economic networks, where maximal independent sets correspond to pure Nash equilibria of a graphical game of public goods allocation.
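The packing and covering constraints that define a maximal independent set can be made concrete with a small greedy sketch (illustrative only; the paper's cavity-method and Monte Carlo machinery is not reproduced here). Different visiting orders reach different solutions in the landscape the paper studies:

```python
def greedy_mis(adj, order=None):
    """Greedily build one maximal independent set of a graph.

    adj maps each node to the set of its neighbours; order fixes the
    visiting order (different orders yield different maximal
    independent sets).
    """
    nodes = list(adj) if order is None else list(order)
    occupied, blocked = set(), set()
    for v in nodes:
        if v not in blocked:
            occupied.add(v)          # packing: occupied nodes are pairwise non-adjacent
            blocked |= {v} | adj[v]  # covering: neighbours can no longer be occupied
    return occupied

# A 5-cycle: the greedy order 0..4 occupies nodes 0 and 2.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = greedy_mis(adj)
assert all(not (adj[v] & mis) for v in mis)            # independence
assert all(adj[v] & mis for v in adj if v not in mis)  # maximality
```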
Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks
NASA Astrophysics Data System (ADS)
Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.
Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating-set selection strategies. We showed that, despite its small set size, the MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrated that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
Efficient parallel algorithms for (Δ + 1)-coloring and maximal independent set problems
Goldberg, A.V.; Plotkin, S.A.
1987-01-01
An efficient technique for breaking symmetry in parallel is described. The technique works especially well on rooted trees and on graphs with a small maximum degree. In particular, a maximal independent set can be found on a constant-degree graph in O(lg* n) time on an EREW PRAM using a linear number of processors. It is shown how to apply this technique to construct more efficient parallel algorithms for several problems, including coloring of planar graphs and (Δ + 1)-coloring of constant-degree graphs. Lower bounds for two related problems are proved.
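The symmetry-breaking idea can be illustrated with the round structure of Luby's classic randomized algorithm (a related randomized technique, not the deterministic O(lg* n) method of the paper): each round, every live node draws a random value, local minima join the independent set, and winners are removed together with their neighbours.

```python
import random

def luby_mis(adj, seed=0):
    """Luby-style randomized maximal independent set (sequential
    simulation of the parallel rounds; each round is one parallel step).
    """
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        val = {v: rng.random() for v in live}
        # A node wins if its value is a local minimum among live nodes.
        winners = {v for v in live
                   if all(val[v] < val[u] for u in adj[v] if u in live)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v]
        live -= removed
    return mis
```

On constant-degree graphs this finishes in O(log n) rounds with high probability, which the paper improves to O(lg* n) deterministically.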
NASA Astrophysics Data System (ADS)
Assad, S. M.; Thearle, O.; Lam, P. K.
2016-07-01
The rates at which a user can generate device-independent quantum random numbers from a Bell-type experiment depend on the measurements that the user performs. By numerically optimizing over these measurements, we present lower bounds on the randomness generation rates for a family of two-qubit states composed from a mixture of partially entangled states and the completely mixed state. We also report on the randomness generation rates from a tomographic measurement. Interestingly in this case, the randomness generation rates are not monotonic functions of entanglement.
Maximal non-classicality in multi-setting Bell inequalities
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Zohren, Stefan; Pawlowski, Marcin
2016-04-01
The discrepancy between maximally entangled states and maximally non-classical quantum correlations is well known but still not well understood. We aim to investigate the relation between quantum correlations and entanglement in a family of Bell inequalities with N settings and d outcomes. Using analytical as well as numerical techniques, we derive both maximal quantum violations and violations obtained from maximally entangled states. Furthermore, we study the most non-classical quantum states in terms of their entanglement entropy for large values of d and many measurement settings. Interestingly, we find that the entanglement entropy behaves very differently depending on whether N = 2 or N > 2: when N = 2 the entanglement entropy is a monotone function of d and the most non-classical state is far from maximally entangled, whereas when N > 2 the entanglement entropy is a non-monotone function of d and converges to that of the maximally entangled state in the limit of large d.
The maximally entangled set of 4-qubit states
NASA Astrophysics Data System (ADS)
Spee, C.; de Vicente, J. I.; Kraus, B.
2016-05-01
Entanglement is a resource to overcome the natural restriction of operations used for state manipulation to Local Operations assisted by Classical Communication (LOCC). Hence, a bipartite maximally entangled state is a state which can be transformed deterministically into any other state via LOCC. In the multipartite setting no such state exists. There, a whole set, the Maximally Entangled Set of states (MES), which we recently introduced, is required. This set has on the one hand the property that any state outside of this set can be obtained via LOCC from one of the states within the set and, on the other hand, no state in the set can be obtained from any other state via LOCC. Recently, we studied LOCC transformations among pure multipartite states and derived the MES for three- and generic four-qubit states. Here, we consider the non-generic four-qubit states and analyze their properties regarding local transformations. As even the most coarse-grained classification of four-qubit states, via Stochastic LOCC (SLOCC), is much richer than in the case of three qubits, the investigation of possible LOCC transformations is correspondingly more difficult. We prove that most SLOCC classes show a similar behavior to the generic states; however, we also identify three classes with very distinct properties. The first consists of the GHZ and W class, where any state can be transformed into some other state non-trivially. In particular, there exists no isolation. On the other hand, there also exist classes where all states are isolated. Last but not least, we identify an additional class of states whose transformation properties differ drastically from all the other classes. Although the possibility of transforming states into local-unitary inequivalent states by LOCC turns out to be very rare, we identify those states (with the exception of the latter class) which are in the MES and those which can be obtained (transformed) non-trivially from (into) other states.
Finding the maximal membership in a fuzzy set of an element from another fuzzy set
NASA Astrophysics Data System (ADS)
Yager, Ronald R.
2010-11-01
The problem of finding the maximal membership grade in a fuzzy set of an element from another fuzzy set is an important class of optimisation problems, manifested in the real world by situations in which we try to find the optimal financial satisfaction we can get from a socially responsible investment. Here, we provide a solution to this problem. We then look at the proposed solution for fuzzy sets with various types of membership grades: ordinal, interval-valued and intuitionistic.
Maximum independent set on diluted triangular lattices.
Fay, C W; Liu, J W; Duxbury, P M
2006-05-01
Core percolation and maximum independent set on random graphs have recently been characterized using the methods of statistical physics. Here we present a statistical physics study of these problems on bond diluted triangular lattices. Core percolation critical behavior is found to be consistent with the standard percolation values, though there are strong finite size effects. A transfer matrix method is developed and applied to find accurate values of the density and degeneracy of the maximum independent set on lattices of limited width but large length. An extrapolation of these results to the infinite lattice limit yields high precision results, which are tabulated. These results are compared to results found using both vertex based and edge based local probability recursion algorithms, which have proven useful in the analysis of hard computational problems, such as the satisfiability problem. PMID:16803003
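On a one-dimensional lattice the transfer-matrix idea reduces to a two-state recursion. The sketch below is an illustrative analogue (a path, not the paper's bond-diluted triangular strips): counting independent sets on a path of n vertices with the transfer matrix T = [[1, 1], [1, 0]] recovers the Fibonacci numbers.

```python
def count_independent_sets_path(n):
    """Count independent sets (including the empty set) on a path of
    n vertices by a 2-state transfer matrix.

    State 0/1 = vertex unoccupied/occupied; two adjacent occupied
    vertices are forbidden, hence T = [[1, 1], [1, 0]].
    """
    if n == 0:
        return 1
    v = [1, 1]                   # one vertex: unoccupied or occupied
    for _ in range(n - 1):
        v = [v[0] + v[1], v[0]]  # apply the transfer matrix once
    return sum(v)

# Path of 3 vertices: {}, {1}, {2}, {3}, {1,3} -> 5 independent sets.
assert count_independent_sets_path(3) == 5
```

The same machinery, with 2^w states for a strip of width w, underlies transfer-matrix computations of the density and degeneracy on wider lattices.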
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n²ε⁻⁴ log³(nε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^-γ) for some γ > 0. As an application, they find that for random 3-regular graphs, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
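On cycle-free graphs the BP fixed point is exact, which the following leaf-to-root recursion makes concrete (a minimal sketch for the λ = 1 hard-core model on a tree, where the partition function equals the number of independent sets; it is not the authors' algorithm for general graphs):

```python
def count_independent_sets_tree(adj, root):
    """Count independent sets of a tree via the recursion that belief
    propagation reproduces exactly on cycle-free graphs.

    z(v) returns (Z_out, Z_in): partition sums of the subtree of v
    with v unoccupied / occupied.
    """
    def z(v, parent):
        z_out, z_in = 1, 1
        for w in adj[v]:
            if w != parent:
                c_out, c_in = z(w, v)
                z_out *= c_out + c_in  # v unoccupied: child unconstrained
                z_in *= c_out          # v occupied: child must be unoccupied
        return z_out, z_in
    return sum(z(root, None))
```

On graphs with cycles the same message passing is only approximate, and the girth condition in the paper controls how far the Bethe estimate can drift from this exact tree computation.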
An inability to set independent attentional control settings by hemifield.
Becker, Mark W; Ravizza, Susan M; Peltier, Chad
2015-11-01
Recent evidence suggests that people can simultaneously activate attentional control settings for two distinct colors. However, it is unclear whether both attentional control settings must operate globally across the visual field or whether each can be constrained to a particular spatial location. Using two different paradigms, we investigated participants' ability to apply independent color attentional control settings to distinct regions of space. In both experiments, participants were told to identify red letters in one hemifield and green letters in the opposite hemifield. Additionally, some trials used a "relevant distractor"-a letter that matched the opposite side's target color. In Experiment 1, eight letters appeared (four per hemifield) simultaneously for a brief amount of time and then were masked. Relevant distractors increased the error rate and resulted in a greater number of distractor intrusions than irrelevant distractors. Similar results were observed in Experiment 2, in which red and green targets were presented in two rapid serial visual presentation streams. Relevant distractors were found to produce an attentional blink similar in magnitude to an actual target. The results of both experiments suggest that letters matching either attentional control setting were selected by attention and were processed as if they were targets, providing strong evidence that both attentional control settings were applied globally, rather than being constrained to a particular location. PMID:26220268
Maximizing Social Model Principles in Residential Recovery Settings
Polcin, Douglas; Mericle, Amy; Howell, Jason; Sheridan, Dave; Christensen, Jeff
2014-01-01
Peer support is integral to a variety of approaches to alcohol and drug problems. However, there is limited information about the best ways to facilitate it. The "social model" approach developed in California offers useful suggestions for facilitating peer support in residential recovery settings. Key principles include using 12-step or other mutual-help group strategies to create and facilitate a recovery environment, involving program participants in decision making and facility governance, using personal recovery experience as a way to help others, and emphasizing recovery as an interaction between the individual and their environment. Although limited in number, studies have shown favorable outcomes for social model programs. Knowledge about social model recovery and how to use it to facilitate peer support in residential recovery homes varies among providers. This article presents specific, practical suggestions for enhancing social model principles in ways that facilitate peer support in a range of recovery residences. PMID:25364996
Speeding up Growth: Selection for Mass-Independent Maximal Metabolic Rate Alters Growth Rates.
Downs, Cynthia J; Brown, Jessi L; Wone, Bernard W M; Donovan, Edward R; Hayes, Jack P
2016-03-01
Investigations into relationships between life-history traits, such as growth rate and energy metabolism, typically focus on basal metabolic rate (BMR). In contrast, investigators rarely examine maximal metabolic rate (MMR) as a relevant metric of energy metabolism, even though it indicates the maximal capacity to metabolize energy aerobically, and hence it might also be important in trade-offs. We studied the relationship between energy metabolism and growth in mice (Mus musculus domesticus Linnaeus) selected for high mass-independent metabolic rates. Selection for high mass-independent MMR increased maximal growth rate, increased body mass at 20 weeks of age, and generally altered growth patterns in both male and female mice. In contrast, there was little evidence that the correlated response in mass-adjusted BMR altered growth patterns. The relationship between mass-adjusted MMR and growth rate indicates that MMR is an important mediator of life histories. Studies investigating associations between energy metabolism and life histories should consider MMR because it is potentially as important in understanding life history as BMR. PMID:26913943
NASA Astrophysics Data System (ADS)
Mahjoub, Dhia; Matula, David W.
The domatic partition problem seeks to maximize the partitioning of the nodes of the network into disjoint dominating sets. These sets represent a series of virtual backbones for wireless sensor networks to be activated successively, resulting in more balanced energy consumption and increased network robustness. In this study, we address the domatic partition problem in random geometric graphs by investigating several vertex coloring algorithms both topology and geometry-aware, color-adaptive and randomized. Graph coloring produces color classes with each class representing an independent set of vertices. The disjoint maximal independent sets constitute a collection of disjoint dominating sets that offer good network coverage. Furthermore, if we relax the full domination constraint then we obtain a partitioning of the network into disjoint dominating and nearly-dominating sets of nearly equal size, providing better redundancy and a near-perfect node coverage yield. In addition, these independent sets can be the basis for clustering a very large sensor network with minimal overlap between the clusters leading to increased efficiency in routing, wireless transmission scheduling and data-aggregation. We also observe that in dense random deployments, certain coloring algorithms yield a packing of the nodes into independent sets each of which is relatively close to the perfect placement in the triangular lattice.
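The link between coloring and domination can be sketched with a textbook observation (not one of the paper's geometry-aware algorithms): under greedy coloring, the first color class is a maximal independent set and therefore a dominating set, since any vertex denied color 0 must already have a neighbour colored 0.

```python
def greedy_coloring(adj):
    """Greedy vertex coloring; returns a dict color -> set of vertices.

    Every color class is an independent set, and class 0 in addition
    dominates the graph (each vertex is in it or adjacent to it).
    """
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    classes = {}
    for v, c in color.items():
        classes.setdefault(c, set()).add(v)
    return classes
```

Higher color classes need not dominate, which is why relaxing full domination, as the study does, buys nearly equal-size nearly-dominating sets.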
On small set of one-way LOCC indistinguishability of maximally entangled states
NASA Astrophysics Data System (ADS)
Wang, Yan-Ling; Li, Mao-Sheng; Zheng, Zhu-Jun; Fei, Shao-Ming
2016-04-01
In this paper, we study the one-way local operations and classical communication (LOCC) distinguishability problem. In ℂ^d ⊗ ℂ^d with d ≥ 4, we construct a set of 3⌈√d⌉ − 1 one-way LOCC indistinguishable maximally entangled states which are generalized Bell states. Moreover, we show that there are four maximally entangled states which cannot be perfectly distinguished by one-way LOCC measurements for any dimension d ≥ 4.
Agreement Measure Comparisons between Two Independent Sets of Raters.
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W., Jr.
1997-01-01
Describes a FORTRAN software program that calculates the probability of an observed difference between agreement measures obtained from two independent sets of raters. An example illustrates the use of the DIFFER program in evaluating undergraduate essays. (Author/SLD)
ERIC Educational Resources Information Center
Green, Robert A.
The doctoral thesis, three-fourths of which consists of appendixes, describes the development and implementation of procedures to maximize the individualized instruction time of speech, hearing, and visually handicapped students in a public school itinerant special education setting in Pennsylvania. A brief review of the Education for All…
Influence maximization in social networks under an independent cascade-based model
NASA Astrophysics Data System (ADS)
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. As more users became involved in the discussions, users balanced their own opinions against those of their neighbors. The number of users who did not change their positive opinions was used to determine positive influence. The corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
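The underlying diffusion process can be sketched as a plain independent cascade simulation (the opinion dynamics that IMIC-OC layers on top is not modelled here; `adj`, `seeds` and `p` are illustrative names): each newly activated node gets a single chance to activate each inactive neighbour with probability p.

```python
import random

def independent_cascade(adj, seeds, p, seed=0):
    """One Monte Carlo run of the independent cascade model.

    Returns the set of nodes activated starting from `seeds`,
    where each activation attempt succeeds with probability p.
    """
    rng = random.Random(seed)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active
```

Influence maximization then amounts to choosing the seed set whose expected spread, averaged over many such runs, is largest.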
Optimal quench for distance-independent entanglement and maximal block entropy
NASA Astrophysics Data System (ADS)
Alkurtass, Bedoor; Banchi, Leonardo; Bose, Sougato
2014-10-01
We optimize a quantum walk of multiple fermions following a quench in a spin chain to generate near-ideal resources for quantum networking. We first prove a useful theorem mapping the correlations evolved from specific quenches to the apparently unrelated problem of quantum state transfer between distinct spins. This mapping is then exploited to optimize the dynamics and produce large amounts of entanglement distributed in very special ways. Two applications are considered: the simultaneous generation of many Bell states between pairs of distant spins (maximal block entropy) and high entanglement between the ends of an arbitrarily long chain (distance-independent entanglement). Thanks to the generality of the result, we study its implementation in different experimental setups using present technology: nuclear magnetic resonance, ion traps, and ultracold atoms in optical lattices.
State-independent contextuality sets for a qutrit
NASA Astrophysics Data System (ADS)
Xu, Zhen-Peng; Chen, Jing-Ling; Su, Hong-Yi
2015-09-01
We present a generalized set of complex rays for a qutrit in terms of the parameter q = e^{i2π/k}, a k-th root of unity. Remarkably, when k = 2, 3, the set reduces to two well-known state-independent contextuality (SIC) sets: the Yu-Oh set and the Bengtsson-Blanchfield-Cabello set. Based on the Ramanathan-Horodecki criterion and the violation of a noncontextuality inequality, we have proven that the sets with k = 3m and k = 4 are SIC sets, while the set with k = 5 is not. Our generalized set of rays will theoretically enrich the study of SIC proofs and stimulate novel applications to quantum information processing.
Existence of independent [1, 2]-sets in caterpillars
NASA Astrophysics Data System (ADS)
Santoso, Eko Budi; Marcelo, Reginaldo M.
2016-02-01
Given a graph G, a subset S ⊆ V(G) is an independent [1, 2]-set if no two vertices in S are adjacent and for every vertex ν ∈ V(G)\S, 1 ≤ |N(ν) ∩ S| ≤ 2; that is, every vertex ν ∈ V(G)\S is adjacent to at least one but not more than two vertices in S. In this paper, we discuss the existence of independent [1, 2]-sets in a family of trees called caterpillars.
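The definition translates directly into a checker (a minimal sketch; the caterpillar below, a 3-vertex spine with two leaves, is an illustrative example rather than one taken from the paper):

```python
def is_independent_12_set(adj, s):
    """Check that S is independent and every vertex outside S has
    at least one and at most two neighbours in S."""
    s = set(s)
    independent = all(not (adj[v] & s) for v in s)
    covered = all(1 <= len(adj[v] & s) <= 2 for v in adj if v not in s)
    return independent and covered

# Caterpillar: spine 0-1-2, leaf 3 on vertex 0, leaf 4 on vertex 2.
caterpillar = {0: {1, 3}, 1: {0, 2}, 2: {1, 4}, 3: {0}, 4: {2}}
assert is_independent_12_set(caterpillar, {0, 2})       # valid
assert not is_independent_12_set(caterpillar, {0})      # leaf 4 uncovered
```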
Shortest Paths between Shortest Paths and Independent Sets
NASA Astrophysics Data System (ADS)
Kamiński, Marcin; Medvedev, Paul; Milanič, Martin
We study problems of reconfiguration of shortest paths in graphs. We prove that the shortest reconfiguration sequence can be exponential in the size of the graph and that it is NP-hard to compute the shortest reconfiguration sequence even when we know that the sequence has polynomial length. Moreover, we also study reconfiguration of independent sets in three different models and analyze relationships between these models, observing that shortest path reconfiguration is a special case of independent set reconfiguration in perfect graphs, under any of the three models. Finally, we give polynomial results for restricted classes of graphs (even-hole-free and P4-free graphs).
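For tiny instances, independent set reconfiguration can be answered by brute-force breadth-first search; the sketch below uses the token-jumping model (one of several models studied: a move deletes one vertex and inserts another while staying independent). The state space is exponential in general, consistent with the hardness results above.

```python
from itertools import combinations
from collections import deque

def reconfigure_token_jumping(adj, start, goal):
    """Length of a shortest token-jumping reconfiguration sequence
    between two independent sets, or None if none exists."""
    def independent(s):
        return all(u not in adj[v] for u, v in combinations(s, 2))
    start, goal = frozenset(start), frozenset(goal)
    dist = {start: 0}
    q = deque([start])
    while q:
        s = q.popleft()
        if s == goal:
            return dist[s]
        for out in s:
            for new in adj:
                if new not in s:
                    t = frozenset(s - {out} | {new})
                    if t not in dist and independent(t):
                        dist[t] = dist[s] + 1
                        q.append(t)
    return None
```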
Balance between noise and information flow maximizes set complexity of network dynamics.
Mäki-Marttunen, Tuomo; Kesseli, Juha; Nykter, Matti
2013-01-01
Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We justify this finding with a use of another measure of complexity, namely, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-to-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical the intrinsic noise required for the tuning is smaller and thus also has the smallest effect in terms of the information processing in the system. Our results suggest that the maximization of complexity near to the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content. PMID:23516395
Sartor, Francesco; Vernillo, Gianluca; de Morree, Helma M; Bonomi, Alberto G; La Torre, Antonio; Kubis, Hans-Peter; Veicsteinas, Arsenio
2013-09-01
Assessment of the functional capacity of the cardiovascular system is essential in sports medicine. For athletes, the maximal oxygen uptake (VO2max) provides valuable information about their aerobic power. In the clinical setting, VO2max provides important diagnostic and prognostic information in several clinical populations, such as patients with coronary artery disease or heart failure. Likewise, VO2max assessment can be very important to evaluate fitness in asymptomatic adults. Although direct determination of VO2max is the most accurate method, it requires a maximal level of exertion, which brings a higher risk of adverse events in individuals with an intermediate to high risk of cardiovascular problems. Estimation of VO2max during submaximal exercise testing can offer a valuable alternative. Over the past decades, many protocols have been developed for this purpose. The present review gives an overview of these submaximal protocols and aims to facilitate appropriate test selection in sports, clinical, and home settings. Several factors must be considered when selecting a protocol: (i) the population being tested and its specific needs in terms of safety, supervision, and accuracy and repeatability of the VO2max estimation; (ii) the parameters upon which the prediction is based (e.g. heart rate, power output, rating of perceived exertion [RPE]), as well as the need for additional clinically relevant parameters (e.g. blood pressure, ECG); (iii) the appropriate test modality, which should meet the above-mentioned requirements, be in line with the functional mobility of the target population, and suit the available equipment. In the sports setting, high repeatability is crucial to track training-induced seasonal changes. In the clinical setting, special attention must be paid to the test modality, because multiple physiological parameters often need to be measured during test execution. When estimating VO2max, one has
Carlson, Christopher S.; Eberle, Michael A.; Rieder, Mark J.; Yi, Qian; Kruglyak, Leonid; Nickerson, Deborah A.
2004-01-01
Common genetic polymorphisms may explain a portion of the heritable risk for common diseases. Within candidate genes, the number of common polymorphisms is finite, but direct assay of all existing common polymorphism is inefficient, because genotypes at many of these sites are strongly correlated. Thus, it is not necessary to assay all common variants if the patterns of allelic association between common variants can be described. We have developed an algorithm to select the maximally informative set of common single-nucleotide polymorphisms (tagSNPs) to assay in candidate-gene association studies, such that all known common polymorphisms either are directly assayed or exceed a threshold level of association with a tagSNP. The algorithm is based on the r2 linkage disequilibrium (LD) statistic, because r2 is directly related to statistical power to detect disease associations with unassayed sites. We show that, at a relatively stringent r2 threshold (r2>0.8), the LD-selected tagSNPs resolve >80% of all haplotypes across a set of 100 candidate genes, regardless of recombination, and tag specific haplotypes and clades of related haplotypes in nonrecombinant regions. Thus, if the patterns of common variation are described for a candidate gene, analysis of the tagSNP set can comprehensively interrogate for main effects from common functional variation. We demonstrate that, although common variation tends to be shared between populations, tagSNPs should be selected separately for populations with different ancestries. PMID:14681826
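The r²-based covering idea behind tagSNP selection can be sketched as a greedy set cover (an illustrative simplification, not the authors' published algorithm; the `r2` input is an assumed table of pairwise LD values): repeatedly pick the SNP that tags the most still-untagged sites, where a site counts as tagged if its r² with a chosen tagSNP meets the threshold (and every site tags itself).

```python
def select_tagsnps(r2, threshold=0.8):
    """Greedy tagSNP selection.

    r2: dict mapping each SNP to a dict of pairwise r^2 values with
    other SNPs. Returns a list of tagSNPs covering all sites.
    """
    snps = list(r2)
    untagged = set(snps)
    tags = []
    while untagged:
        best = max(snps, key=lambda s: len(
            {t for t in untagged if t == s or r2[s].get(t, 0) >= threshold}))
        tags.append(best)
        untagged -= {t for t in untagged
                     if t == best or r2[best].get(t, 0) >= threshold}
    return tags
```

The threshold of 0.8 mirrors the stringent r² > 0.8 setting discussed in the abstract; greedy covering is a common heuristic for this NP-hard selection step.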
Beyond Maximum Independent Set: An Extended Model for Point-Feature Label Placement
NASA Astrophysics Data System (ADS)
Haunert, Jan-Henrik; Wolff, Alexander
2016-06-01
Map labeling is a classical problem of cartography that has frequently been approached by combinatorial optimization. Given a set of features in the map and for each feature a set of label candidates, a common problem is to select an independent set of labels (that is, a labeling without label-label overlaps) that contains as many labels as possible and at most one label for each feature. To obtain solutions of high cartographic quality, the labels can be weighted and one can maximize the total weight (rather than the number) of the selected labels. We argue, however, that when maximizing the weight of the labeling, interdependences between labels are insufficiently addressed. Furthermore, in a maximum-weight labeling, the labels tend to be densely packed and thus the map background can be occluded too much. We propose extensions of an existing model to overcome these limitations. Since even without our extensions the problem is NP-hard, we cannot hope for an efficient exact algorithm for the problem. Therefore, we present a formalization of our model as an integer linear program (ILP). This allows us to compute optimal solutions in reasonable time, which we demonstrate for randomly generated instances.
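For very small instances, the selection problem can be solved by brute force instead of the ILP. The sketch below (with hypothetical `candidates`, `weight` and `overlaps` inputs) enforces the two basic constraints from the paragraph above: at most one label per feature and no overlapping pair; the paper's interdependence extensions are not modelled.

```python
from itertools import combinations, chain

def best_labeling(candidates, weight, overlaps):
    """Maximum-weight label selection by exhaustive search.

    candidates: list of (feature, position) pairs.
    weight: dict mapping each candidate to its weight.
    overlaps: set of frozensets of mutually overlapping candidate pairs.
    """
    def feasible(sel):
        feats = [f for f, _ in sel]
        return (len(feats) == len(set(feats)) and       # one label per feature
                all(frozenset(p) not in overlaps        # no label-label overlap
                    for p in combinations(sel, 2)))
    subsets = chain.from_iterable(
        combinations(candidates, k) for k in range(len(candidates) + 1))
    return max((s for s in subsets if feasible(s)),
               key=lambda s: sum(weight[c] for c in s))
```

An ILP formulation expresses the same feasibility conditions as linear constraints over binary selection variables, which is what lets solvers handle realistic instance sizes.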
Cometti, Carole; Deley, Gaelle; Babault, Nicolas
2011-01-01
The present study investigated the effects of between-set interventions on neuromuscular function of the knee extensors during six sets of 10 isokinetic (120°·s-1) maximal concentric contractions separated by three minutes. Twelve healthy men (age: 23.9 ± 2.4 yrs) were tested under four different between-set recovery conditions applied for two minutes: passive recovery, active recovery (cycling), electromyostimulation and stretching, in a randomized, crossover design. Before, during and at the end of the isokinetic session, torque and thigh muscle electromyographic activity were measured during maximal voluntary contractions and electrically evoked doublets. Activation level was calculated using the twitch interpolation technique. While quadriceps electromyographic activity and activation level were significantly decreased at the end of the isokinetic session (-5.5 ± 14.2% and -2.7 ± 4.8%; p < 0.05), significant decreases in maximal voluntary contractions and doublets were observed after the third set (respectively -0.8 ± 12.1% and -5.9 ± 9.9%; p < 0.05). Whatever the recovery modality applied, torque was back to initial values after each recovery period. The present results showed that fatigue appeared progressively during the isokinetic session, with peripheral alterations occurring first followed by central ones. Recovery interventions between sets did not modify the fatigue time course as compared with passive recovery. It appears that the interval between sets (3 min) was long enough to provide recovery regardless of the interventions. Key points: Allowing three minutes of recovery between sets of 10 maximal concentric contractions would help the subjects to recover from the peripheral fatigue induced by each set and therefore to start each new set at a high intensity. During this type of session, with three minutes between sets, passive recovery is sufficient; there is no need to apply complicated recovery interventions. PMID:24149550
Wone, B W M; Madsen, P; Donovan, E R; Labocha, M K; Sears, M W; Downs, C J; Sorensen, D A; Hayes, J P
2015-01-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR, and BMR was slightly, but not significantly, higher (2.5%). Compared with controls, MMR was significantly higher (5.3%) in antagonistically selected lines, and BMR was slightly, but not significantly, lower (4.2%). Analysis of breeding values revealed no positive genetic trend for elevated BMR in high-MMR lines. A weak positive genetic correlation was detected between MMR and BMR. That weak positive genetic correlation supports the aerobic capacity model for the evolution of endothermy in the sense that it fails to falsify a key model assumption. Overall, the results suggest that at least in these mice there is significant capacity for independent evolution of metabolic traits. Whether that is true in the ancestral animals that evolved endothermy remains an important but unanswered question. PMID:25604947
Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.
Nath, Abhigyan; Subbiah, Karthikeyan
2015-12-01
Lipocalins are short in sequence length and perform several important biological functions. These proteins share less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time-consuming process. Computational methods based on sequence similarity for allocating putative members to this family are also largely ineffective because of the low sequence similarity among the members of this family. Consequently, machine learning methods become a viable alternative for their prediction, using the underlying sequence- and structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near-perfect learning can be achieved by training the model with diverse types of input instances belonging to different regions of the entire input space. Furthermore, the prediction performance can be improved by balancing the training set, as imbalanced data sets tend to bias the prediction towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without classification bias through diversified and balanced training sets, and (ii) enhanced prediction accuracy by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K-nearest neighbour algorithm (which produced greater specificity) to achieve the enhanced predictive performance.
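The cluster-then-sample scheme described above can be sketched as follows. This is a minimal illustration using a toy Lloyd's k-means on 2-D points; the function names and the synthetic blob data are hypothetical (the paper clusters sequence-derived feature vectors), so treat this as a sketch of the idea, not the authors' implementation.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy Lloyd's k-means on 2-D points (illustrative only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return clusters

def balanced_training_set(points, k, per_cluster, seed=0):
    """Draw an equal number of instances from each cluster, so the
    training set is both balanced and spread over the input space."""
    rng = random.Random(seed)
    sample = []
    for cl in kmeans(points, k, seed=seed):
        sample.extend(rng.sample(cl, min(per_cluster, len(cl))))
    return sample

# demo on three synthetic, well-separated blobs of 40 points each
rng = random.Random(42)
points = [(rng.gauss(cx, 0.3), rng.gauss(cy, 0.3))
          for cx, cy in [(0, 0), (10, 0), (0, 10)] for _ in range(40)]
training = balanced_training_set(points, k=3, per_cluster=10)
```

Sampling an equal number of patterns per cluster is what prevents the majority-class bias the abstract warns about.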
Takahashi, Kei-ichiro; Takigawa, Ichigaku; Mamitsuka, Hiroshi
2013-01-01
Detecting biclusters from expression data is useful, since biclusters are coexpressed genes under only part of all given experimental conditions. We present a software called SiBIC, which from a given expression dataset, first exhaustively enumerates biclusters, which are then merged into rather independent biclusters, which finally are used to generate gene set networks, in which a gene set assigned to one node has coexpressed genes. We evaluated each step of this procedure: 1) significance of the generated biclusters biologically and statistically, 2) biological quality of merged biclusters, and 3) biological significance of gene set networks. We emphasize that gene set networks, in which nodes are not genes but gene sets, can be more compact than usual gene networks, meaning that gene set networks are more comprehensible. SiBIC is available at http://utrecht.kuicr.kyoto-u.ac.jp:8080/miami/faces/index.jsp. PMID:24386124
Assessing material flows in urban systems: an approach to maximize the use of incomplete data sets.
Espinosa, G; Otterpohl, R
2014-01-01
Data scarcity and uncertainty are the main limiting factors for an integral evaluation of the urban water and wastewater management system (WWMS) in developing countries. The present research shows an approach for using incomplete data sets to analyse the flows of water and nitrogen and to make an integral evaluation of the WWMS in a case-study city. By means of data validation and model adaptations, the use of literature values is kept to the minimum possible, and the current trends for water consumption and pollution in the city are thus identified. The material flows were calculated as central values with a certain confidence range and met the selected plausibility criteria. Thus, the first essential step needed to identify the challenges and opportunities of future improvement strategies for the WWMS of the city was possible. PMID:25259505
Brodsky, Stanley J.; Wu, Xing-Gang; /SLAC /Chongqing U.
2012-02-16
A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The extended renormalization group equations, which express the invariance of physical observables under both renormalization scale- and scheme-parameter transformations, provide a convenient way to estimate the scale and scheme dependence of a physical process. In this paper, we present a solution for the scale equation of the extended renormalization group equations at the four-loop level. Using the principle of maximum conformality (PMC)/Brodsky-Lepage-Mackenzie (BLM) scale-setting method, all non-conformal β_i terms in the perturbative expansion series can be summed into the running coupling, and the resulting scale-fixed predictions are independent of the renormalization scheme. Different schemes lead to different effective PMC/BLM scales, but the final results are scheme independent. Conversely, from the requirement of scheme independence, one can not only obtain scheme-independent commensurate scale relations among different observables, but also determine the scale displacements among the PMC/BLM scales derived under different schemes. In principle, the PMC/BLM scales can be fixed order by order, and as a useful reference, we present a systematic and scheme-independent procedure for setting PMC/BLM scales up to NNLO. An explicit application to the scale setting of R_{e+e-}(Q) up to four loops is presented. By using the world average α_s^{MS-bar}(M_Z) = 0.1184 ± 0.0007, we obtain the asymptotic scale for the 't Hooft scheme associated with the MS-bar scheme, Λ^{'tH}_{MS-bar} = 245 +9/-10 MeV, and the asymptotic scale for the conventional MS-bar scheme, Λ_{MS-bar} = 213 +19/-8 MeV.
Grignon, Jessica S; Ledikwe, Jenny H; Makati, Ditsapelo; Nyangah, Robert; Sento, Baraedi W; Semo, Bazghina-Werq
2014-01-01
To address health systems challenges in limited-resource settings, global health initiatives, particularly the President's Emergency Plan for AIDS Relief, have seconded health workers to the public sector. Implementation considerations for secondment as a health workforce development strategy are not well documented. The purpose of this article is to present outcomes, best practices, and lessons learned from a President's Emergency Plan for AIDS Relief-funded secondment program in Botswana. Outcomes are documented across four World Health Organization health systems' building blocks. Best practices include documentation of joint stakeholder expectations, collaborative recruitment, and early identification of counterparts. Lessons learned include inadequate ownership, a two-tier employment system, and ill-defined position duration. These findings can inform program and policy development to maximize the benefit of health workforce secondment. Secondment requires substantial investment, and emphasis should be placed on high-level technical positions responsible for building systems, developing health workers, and strengthening government to translate policy into programs. PMID:24876798
Martín, René San; Appelbaum, Lawrence G.; Pearson, John M.; Huettel, Scott A.; Woldorff, Marty G.
2013-01-01
Success in many decision-making scenarios depends on the ability to maximize gains and minimize losses. Even if an agent knows which cues lead to gains and which lead to losses, that agent could still make choices yielding suboptimal rewards. Here, by analyzing event-related potentials (ERPs) recorded in humans during a probabilistic gambling task, we show that individuals’ behavioral tendencies to maximize gains and to minimize losses are associated with their ERP responses to the receipt of those gains and losses, respectively. We focused our analyses on ERP signals that predict behavioral adjustment: the fronto-central feedback-related negativity (FRN) and two P300 (P3) subcomponents: the fronto-central P3a and the parietal P3b. We found that, across participants, gain-maximization was predicted by differences in amplitude of the P3b for suboptimal versus optimal gains (i.e., P3b amplitude difference between the least good and the best possible gains). Conversely, loss-minimization was predicted by differences in the P3b amplitude to suboptimal versus optimal losses (i.e., difference between the worst and the least bad losses). Finally, we observed that the P3a and P3b, but not the FRN, predicted behavioral adjustment on subsequent trials, suggesting a specific adaptive mechanism by which prior experience may alter ensuing behavior. These findings indicate that individual differences in gain-maximization and loss-minimization are linked to individual differences in rapid neural responses to monetary outcomes. PMID:23595758
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of temperature dependent calibration data set of a six component balance.
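The augmentation trick described above (re-using an extra independent variable as an extra dependent variable so the regression result for the gage outputs is unchanged while the variable counts match) can be illustrated with a small least-squares sketch. The dimensions and the noise-free synthetic data are hypothetical, not from the report:

```python
import numpy as np

# Hypothetical setup: 3 strain-gage outputs depend on 4 independent
# variables (3 balance loads plus temperature T). The iterative scheme
# needs equal numbers of dependent and independent variables, so T is
# re-used as a fourth, "missing" dependent variable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # columns: L1, L2, L3, T
true_C = rng.normal(size=(4, 3))
G = X @ true_C                         # noise-free gage outputs (sketch)

D = np.hstack([G, X[:, 3:4]])          # augmented dependent set (now 4)

# Fit every dependent variable as a linear function of all 4 independents;
# the gage-output columns of the fit are unaffected by the augmentation,
# and the added column trivially recovers T with coefficient 1.
C = np.linalg.lstsq(X, D, rcond=None)[0]
```

Because the added column is fitted independently of the others, the regression coefficients for each gage output are identical to what the unaugmented fit would give, which is why they can be converted to data-reduction matrix coefficients as the abstract states.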
NASA Astrophysics Data System (ADS)
Kaloshin, Vadim; Saprykina, Maria
2012-11-01
The famous ergodic hypothesis suggests that for a typical Hamiltonian on a typical energy surface nearly all trajectories are dense. KAM theory disproves it. Ehrenfest (The Conceptual Foundations of the Statistical Approach in Mechanics. Ithaca, NY: Cornell University Press, 1959) and Birkhoff (Collected Math Papers. Vol 2, New York: Dover, pp 462-465, 1968) stated the quasi-ergodic hypothesis claiming that a typical Hamiltonian on a typical energy surface has a dense orbit. This question is wide open. Herman (Proceedings of the International Congress of Mathematicians, Vol II (Berlin, 1998). Doc Math 1998, Extra Vol II, Berlin: Int Math Union, pp 797-808, 1998) proposed to look for an example of a Hamiltonian near H_0(I) = ⟨I, I⟩/2 with a dense orbit on the unit energy surface. In this paper we construct a Hamiltonian H_0(I) + εH_1(θ, I, ε) which has an orbit dense in a set of maximal Hausdorff dimension equal to 5 on the unit energy surface.
Pal, Karoly F.; Vertesi, Tamas
2010-08-15
The I_3322 inequality is the simplest bipartite two-outcome Bell inequality beyond the Clauser-Horne-Shimony-Holt (CHSH) inequality, consisting of three two-outcome measurements per party. In the case of the CHSH inequality the maximal quantum violation can already be attained with local two-dimensional quantum systems; however, there is no such evidence for the I_3322 inequality. In this paper a family of measurement operators and states is given which enables us to attain the maximum quantum value in an infinite-dimensional Hilbert space. Further, it is conjectured that our construction is optimal in the sense that measuring finite-dimensional quantum systems is not enough to achieve the true quantum maximum. We also describe an efficient iterative algorithm for computing the quantum maximum of an arbitrary two-outcome Bell inequality in any given Hilbert space dimension. This algorithm played a key role in obtaining our results for the I_3322 inequality, and we also applied it to improve on our previous results concerning the maximum quantum violation of several bipartite two-outcome Bell inequalities with up to five settings per party.
Maximally nonlocal theories cannot be maximally random.
de la Torre, Gonzalo; Hoban, Matty J; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio
2015-04-24
Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to study how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what do the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on and show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but also put fundamental restrictions on device-independent protocols based on the no-signaling principle. PMID:25955039
NASA Technical Reports Server (NTRS)
Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)
2002-01-01
The Maximum Likelihood (ML) statistical theory required to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information from the combination of data sets. The procedure is of significant value both to existing data sets and to those produced by future astrophysics missions consisting of two or more detectors: it allows instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured by the reduction of the statistical errors (standard deviations) of the spectral information when the data sets are used in concert, compared with the statistical errors when the data sets are considered separately, as well as by the reduction of any biases resulting from poor statistics in one or more of the individual data sets.
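The error-reduction claim can be illustrated with the simplest possible case: a common Poisson rate observed by two instruments with different exposures. This is a hedged sketch; the counts and exposures are made up, and real multi-instrument spectral fits involve many more parameters than a single rate.

```python
import math

def poisson_rate_mle(counts, exposures):
    """Joint ML estimate of a common rate from independent Poisson
    data sets: L(lam) = prod_i Poisson(n_i; lam * t_i)."""
    lam = sum(counts) / sum(exposures)
    # Fisher information I(lam) = sum_i t_i / lam, so var(lam_hat) = lam / sum_i t_i
    std = math.sqrt(lam / sum(exposures))
    return lam, std

# two instruments observing the same source (hypothetical numbers)
lam1, s1 = poisson_rate_mle([480], [100.0])               # instrument A alone
lam2, s2 = poisson_rate_mle([1050], [200.0])              # instrument B alone
lamc, sc = poisson_rate_mle([480, 1050], [100.0, 200.0])  # joint fit
```

The joint estimate pools the Fisher information of both instruments, so its standard error is smaller than either single-instrument error, which is exactly the benefit the abstract quantifies.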
Holocene sea level variations on the basis of integration of independent data sets
Sahagian, D.; Berkman, P. . Dept. of Geological Sciences and Byrd Polar Research Center)
1992-01-01
Variations in sea level through earth history have occurred at a wide variety of time scales. Sea level researchers have attacked the problem of measuring these sea level changes through a variety of approaches, each relevant only to the time scale in question, and usually only to the specific locality from which a specific type of data is derived. There is a plethora of data types that can be and have been used (locally) for the measurement of Holocene sea level variations. The problem of merging different data sets for the purpose of constructing a global eustatic sea level curve for the Holocene has not previously been adequately addressed. The authors direct their efforts to that end. Numerous studies have been published regarding Holocene sea level changes. These have involved exposed fossil reef elevations, the elevation of tidal deltas, the depth of intertidal peat deposits, caves, tree rings, ice cores, moraines, eolian dune ridges, marine-cut terrace elevations, marine carbonate species, tide gauges, and lake level variations. Each of these data sets is based on a particular set of assumptions and is valid for a specific set of environments. In order to obtain the most accurate possible sea level curve for the Holocene, these data sets must be merged so that local and other influences can be filtered out of each data set. Since each data set involves very different measurements, each is scaled in order to define the sensitivity of the proxy measurement parameter to sea level, including error bounds. This effectively determines the temporal and spatial resolution of each data set. The level of independence of the data sets is also quantified, in order to rule out the possibility of a common non-eustatic factor affecting more than one variety of data. The resulting Holocene sea level curve is considered to be independent of other factors affecting the proxy data and is taken to represent the relation between global ocean water and basin volumes.
Composite alignment media for the measurement of independent sets of NMR residual dipolar couplings.
Ruan, Ke; Tolman, Joel R
2005-11-01
The measurement of independent sets of NMR residual dipolar couplings (RDCs) in multiple alignment media can provide a detailed view of biomolecular structure and dynamics, yet remains experimentally challenging. It is demonstrated here that independent sets of RDCs can be measured for ubiquitin using just a single alignment medium composed of aligned bacteriophage Pf1 particles embedded in a strained polyacrylamide gel matrix. Using this composite medium, molecular alignment can be modulated by varying the angle between the directors of ordering for the Pf1 and strained gel matrix, or by varying the ionic strength or concentration of the Pf1 particles. This approach offers significant advantages in that greater experimental control can be exercised over the acquisition of multi-alignment RDC data while a homogeneous chemical environment is maintained across all of the measured RDC data. PMID:16248635
GreedyMAX-type Algorithms for the Maximum Independent Set Problem
NASA Astrophysics Data System (ADS)
Borowiecki, Piotr; Göring, Frank
The maximum independent set problem for a simple graph G = (V,E) is to find a largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard, and it is also hard to approximate. Within this article we introduce a non-negative integer-valued function p defined on the vertex set V(G), called a potential function of a graph G, while P(G) = max_{v ∈ V(G)} p(v) is called the potential of G. For any graph, P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G; moreover, Δ(G) - P(G) may be arbitrarily large. The potential of a vertex gives closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms having the classical GreedyMAX algorithm as their origin. We establish a lower bound of 1/(P + 1) for the performance ratio of GreedyMAX-type algorithms, which compares favorably with the bound of 1/(Δ + 1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least ∑_{v ∈ V(G)} (p(v)+1)^{-1}, which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
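For concreteness, the classical GreedyMAX origin of this family can be sketched as follows: repeatedly delete a vertex of maximum current degree until no edges remain; the surviving vertices form an independent set whose size is at least ∑_v 1/(d(v)+1), the Caro-Wei bound cited above. A minimal Python sketch, assuming an adjacency-set graph representation (not code from the article):

```python
def greedy_max_independent_set(adj):
    """GreedyMAX: repeatedly delete a vertex of maximum current degree;
    once no edges remain, the surviving vertices are pairwise nonadjacent."""
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    while any(adj.values()):                      # while edges remain
        v = max(adj, key=lambda u: len(adj[u]))   # max-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return set(adj)

# demo: a star K_{1,4}; GreedyMAX deletes the hub and keeps all 4 leaves,
# beating the degree bound sum 1/(d(v)+1) = 1/5 + 4*(1/2) = 2.2
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
leaves = greedy_max_independent_set(star)
```

The GreedyMAX-type variants in the article replace the degree-based selection with the potential function p, tightening the guarantee from ∑ 1/(d(v)+1) to ∑ 1/(p(v)+1).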
ERIC Educational Resources Information Center
Trapp, Georgina; Giles-Corti, Billie; Martin, Karen; Timperio, Anna; Villanueva, Karen
2012-01-01
Background: Schools are an ideal setting in which to involve children in research. Yet for investigators wishing to work in these settings, there are few method papers providing insights into working efficiently in this setting. Objective: The aim of this paper is to describe the five strategies used to increase response rates, data quality and…
Scudese, Estevão; Willardson, Jeffrey M; Simão, Roberto; Senna, Gilmar; de Salles, Belmiro F; Miranda, Humberto
2015-11-01
The purpose of this study was to compare different rest intervals between sets on repetition consistency and ratings of perceived exertion (RPE) during consecutive bench press sets with an absolute 3RM (3 repetition maximum) load. Sixteen trained men (23.75 ± 4.21 years; 74.63 ± 5.36 kg; 175 ± 4.64 cm; bench press relative strength: 1.44 ± 0.19 kg/kg of body mass) attended 4 randomly ordered sessions during which 5 consecutive sets of the bench press were performed with an absolute 3RM load and rest intervals of 1, 2, 3, or 5 minutes between sets. The results indicated that significantly more bench press repetitions were completed with 2, 3, and 5 minutes vs. 1 minute of rest between sets (p ≤ 0.05); no significant differences were noted between the 2, 3, and 5 minutes rest conditions. For the 1-minute rest condition, performance reductions (relative to the first set) were observed commencing with the second set; whereas for the other conditions (2, 3, and 5 minutes rest), performance reductions were not evident until the third and fourth sets. The RPE values before each of the successive sets were significantly greater, commencing with the second set, for the 1-minute vs. the 3 and 5 minutes rest conditions. Significant increases were also evident in RPE immediately after each set between the 1 and 5 minutes rest conditions from the second through fifth sets. These findings indicate that when utilizing an absolute 3RM load for the bench press, practitioners may prescribe a time-efficient minimum of 2 minutes rest between sets without significant impairments in repetition performance. However, lower perceived exertion levels may necessitate prescription of a minimum of 3 minutes rest between sets. PMID:24045632
Cell Wall Invertase Promotes Fruit Set under Heat Stress by Suppressing ROS-Independent Cell Death.
Liu, Yong-Hua; Offler, Christina E; Ruan, Yong-Ling
2016-09-01
Reduced cell wall invertase (CWIN) activity has been shown to be associated with poor seed and fruit set under abiotic stress. Here, we examined whether genetically increasing native CWIN activity would sustain fruit set under long-term moderate heat stress (LMHS), an important factor limiting crop production, by using transgenic tomato (Solanum lycopersicum) with its CWIN inhibitor gene silenced and focusing on ovaries and fruits at 2 d before and after pollination, respectively. We found that the increase of CWIN activity suppressed LMHS-induced programmed cell death in fruits. Surprisingly, measurement of the contents of H2O2 and malondialdehyde and the activities of a cohort of antioxidant enzymes revealed that the CWIN-mediated inhibition of programmed cell death is exerted in a reactive oxygen species-independent manner. Elevation of CWIN activity sustained Suc import into fruits and increased activities of hexokinase and fructokinase in the ovaries in response to LMHS. Compared to the wild type, the CWIN-elevated transgenic plants exhibited higher transcript levels of heat shock protein genes Hsp90 and Hsp100 in ovaries and HspII17.6 in fruits under LMHS, which corresponded to a lower transcript level of a negative auxin responsive factor IAA9 but a higher expression of the auxin biosynthesis gene ToFZY6 in fruits at 2 d after pollination. Collectively, the data indicate that CWIN enhances fruit set under LMHS through suppression of programmed cell death in a reactive oxygen species-independent manner that could involve enhanced Suc import and catabolism, HSP expression, and auxin response and biosynthesis. PMID:27462084
NASA Astrophysics Data System (ADS)
Hebenstreit, M.; Spee, C.; Kraus, B.
2016-01-01
Entanglement is the resource to overcome the restriction of operations to local operations assisted by classical communication (LOCC). The maximally entangled set (MES) of states is the minimal set of n -partite pure states with the property that any truly n -partite entangled pure state can be obtained deterministically via LOCC from some state in this set. Hence, this set contains the most useful states for applications. In this work, we characterize the MES for generic three-qutrit states. Moreover, we analyze which generic three-qutrit states are reachable (and convertible) under LOCC transformations. To this end, we study reachability via separable operations (SEP), a class of operations that is strictly larger than LOCC. Interestingly, we identify a family of pure states that can be obtained deterministically via SEP but not via LOCC. This gives an affirmative answer to the question of whether there is a difference between SEP and LOCC for transformations among pure states.
Lee, Wei-Ning; Qian, Zhen; Tosti, Christina L.; Brown, Truman R.; Metaxas, Dimitris N.; Konofagou, Elisa E.
2014-01-01
Myocardial Elastography (ME), a radio-frequency (RF) based speckle tracking technique, was employed in order to image the entire two-dimensional (2D) transmural deformation field in full view, and validated against tagged Magnetic Resonance Imaging (tMRI) in normal as well as reperfused (i.e., treated myocardial infarction (MI)) human left ventricles. RF ultrasound and tMRI frames were acquired at the papillary muscle level in 2D short-axis (SA) views at nominal frame rates of 136 (fps; real time) and 33 fps (electrocardiogram (ECG)-gated), respectively. In ultrasound, in-plane, 2D (lateral and axial) incremental displacements were iteratively estimated using one-dimensional (1D) cross-correlation and recorrelation techniques in a 2D search with a 1D matching kernel. In tMRI, cardiac motion was estimated by a template-matching algorithm on a 2D grid-shaped mesh. In both ME and tMRI, cumulative 2D displacements were estimated and then used to estimate 2D Lagrangian finite systolic strains, from which polar (i.e., radial and circumferential) strains, namely angle-independent measures, were further obtained through coordinate transformation. Principal strains, which are angle-independent and less centroid-dependent than polar strains, were also computed and imaged based on the 2D finite strains with a previously established strategy. Both qualitatively and quantitatively, angle-independent ME is shown to be capable of 1) estimating myocardial deformation in good agreement with tMRI estimates in a clinical setting and of 2) differentiating abnormal from normal myocardium in a full left-ventricular view. Finally, the principal strains are suggested to be an alternative diagnostic tool of detecting cardiac disease with the characteristics of their reduced centroid dependence. PMID:18952364
ERIC Educational Resources Information Center
Hume, Kara; Plavnick, Joshua; Odom, Samuel L.
2012-01-01
Strategies that promote the independent demonstration of skills across educational settings are critical for improving the accessibility of general education settings for students with ASD. This research assessed the impact of an individual work system on the accuracy of task completion and level of adult prompting across educational setting.…
A Method Defining a Limited Set of Character-Strings with Maximal Coverage of a Sample of Text.
ERIC Educational Resources Information Center
Hultgren, Jan; Larsson, Rolf
This is a progress report on a project attempting to design a system of compacting text for storage appropriate to disc oriented demand searching. After noting a number of previously designed methods of compression, it offers a tentative solution which couples a dictionary of most frequent character-strings with a set of variable-length codes. The…
Wang, Ning; Braun, Edward L; Kimball, Rebecca T
2012-02-01
Although many phylogenetic studies have focused on developing hypotheses about relationships, advances in data collection and computation have increased the feasibility of collecting large independent data sets to rigorously test controversial hypotheses or carefully assess artifacts that may be misleading. One such relationship in need of independent evaluation is the position of Passeriformes (perching birds) in avian phylogeny. This order comprises more than half of all extant birds, and it includes one of the most important avian model systems (the zebra finch). Recent large-scale studies using morphology, mitochondrial, and nuclear sequence data have generated very different hypotheses about the sister group of Passeriformes, and all conflict with an older hypothesis generated using DNA-DNA hybridization. We used novel data from 30 nuclear loci, primarily introns, for 28 taxa to evaluate five major a priori hypotheses regarding the phylogenetic position of Passeriformes. Although previous studies have suggested that nuclear introns are ideal for the resolution of ancient avian relationships, introns have also been criticized because of the potential for alignment ambiguities and the loss of signal due to saturation. To examine these issues, we generated multiple alignments using several alignment programs, varying alignment parameters, and using guide trees that reflected the different a priori hypotheses. Although different alignments and analyses yielded slightly different results, our analyses excluded all but one of the five a priori hypotheses. In many cases, the passerines were sister to the Psittaciformes (parrots), and these taxa were members of a larger clade that includes Falconidae (falcons) and Cariamidae (seriemas). However, the position of Coliiformes (mousebirds) was highly unstable in our analyses of 30 loci, and this represented the primary source of incongruence among analyses. Mousebirds were united with passerines or parrots in some analyses
NASA Astrophysics Data System (ADS)
Douthett, Elwood (Jack) Moser, Jr.
1999-10-01
Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and of the black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of the white set, minimum (maximum) weight of the black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next the goodness of continued fractions as applied to musical intervals (frequency ratios and their base-2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine which equal-tempered systems have intervals that most closely approximate the musical fifth (p_n/q_n ≈ log2(3/2)). The goodness of exponentiated convergents (2^{p_n/q_n} ≈ 3/2) is also investigated. It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation.
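The principal convergents mentioned above are easy to compute. The sketch below uses the standard continued-fraction recurrences (it is not code from the dissertation) and recovers the familiar equal temperaments, including 12-tone, whose fifths best approximate log2(3/2):

```python
import math

def convergents(x, n):
    """First n continued-fraction convergents p/q of a real x > 0."""
    quotients, out = [], []
    for _ in range(n):
        a = math.floor(x)
        quotients.append(a)
        # rebuild p/q from the partial quotients collected so far
        p, q = 1, 0
        for aj in reversed(quotients):
            p, q = aj * p + q, p
        out.append((p, q))
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return out

# denominators 1, 2, 5, 12, 41, ... are the equal temperaments whose
# nearest step to a just fifth improves on all smaller temperaments
fifth = convergents(math.log2(3 / 2), 6)
```

The convergent 7/12 corresponds to the perfect fifth of standard 12-tone equal temperament (7 semitones), and 24/41 to the next, more accurate temperament in the sequence.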
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. PMID:23070592
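The variance-inflation argument can be illustrated with a toy simulation (my own sketch, not the paper's benchmark). For m equicorrelated standard-normal gene scores with pairwise correlation rho, the variance of the set mean is (1 + (m-1)·rho)/m rather than the 1/m assumed under independence, so a test calibrated under independence badly understates the null variance.

```python
import random

def var_of_set_mean(m=50, rho=0.3, n_trials=20000, seed=1):
    """Empirical variance of the mean of m equicorrelated standard-normal
    'gene scores'; one shared factor induces pairwise correlation rho."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_trials):
        shared = rng.gauss(0, 1)
        genes = [rho ** 0.5 * shared + (1 - rho) ** 0.5 * rng.gauss(0, 1)
                 for _ in range(m)]
        means.append(sum(genes) / m)
    mu = sum(means) / n_trials
    return sum((x - mu) ** 2 for x in means) / n_trials

# Independence predicts Var = 1/m = 0.02; equicorrelation inflates it to
# (1 + (m-1)*rho)/m, about 0.31 here -- roughly a 15-fold inflation.
print(var_of_set_mean(rho=0.0))
print(var_of_set_mean(rho=0.3))
```

The inflation factor 1 + (m-1)·rho grows with the set size m, which is why the effect cannot be fixed by simply collecting larger gene sets.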
Mangle, Lisa; Phillips, Paula; Pitts, Mark; Laver-Bradbury, Cathy
2014-12-01
Legislative changes that came into effect in the UK in April 2012 gave nurse independent prescribers (NIPs) the power to prescribe schedule 2-5 controlled drugs. Therefore, suitably qualified UK nurses can now independently prescribe any drug for any medical condition within their clinical competence. The potential benefits of independent nurse prescribing include improved access to medications and more efficient use of skills within the National Health Service workforce. This review explores the published literature (to July 2013) to investigate whether the predicted benefits of NIPs in mental health settings can be supported by empirical evidence, with a specific focus on nurse-led management of patients with attention-deficit/hyperactivity disorder (ADHD). The most common pharmacological treatments for ADHD are controlled drugs. Therefore, the 2012 legislative changes allow nurse-led ADHD services to offer holistic packages of care for patients. Evidence suggests that independent prescribing by UK nurses is safe, clinically appropriate and associated with high levels of patient satisfaction. The quality of the nurse-patient relationship and nurses' ability to provide flexible follow-up services suggests that nurse-led ADHD services are well positioned to enhance the outcomes for patients and their parents/carers. However, the empirical evidence available to support the value of NIPs in mental health settings is limited. There is a need for additional high-quality data to verify scientifically the value of nurse-delivered ADHD care. This evidence will be invaluable in supporting the growth of nurse-led ADHD services and for those who support greater remuneration for the expanded role of NIPs. PMID:24744052
10 Questions about Independent Reading
ERIC Educational Resources Information Center
Truby, Dana
2012-01-01
Teachers know that establishing a robust independent reading program takes more than giving kids a little quiet time after lunch. But how do they set up a program that will maximize their students' gains? Teachers have to know their students' reading levels inside and out, help them find just-right books, and continue to guide them during…
A set of ligation-independent expression vectors for co-expression of proteins in Escherichia coli.
Chanda, Pranab K; Edris, Wade A; Kennedy, Jeffrey D
2006-05-01
A set of ligation-independent expression vectors has been developed for co-expression of proteins in Escherichia coli. These vectors contain a strong T7 promoter, different drug-resistance genes, and an origin of DNA replication from a different incompatibility group, allowing combinations of these plasmids to be stably maintained together. In addition, these plasmids also contain the lacI gene, a transcriptional terminator, and a 3' polyhistidine (6x His) affinity tag (H6) for easy purification of target proteins. All of these vectors contain an identical transportable cassette flanked by suitable restriction enzyme cleavage sites for easy cloning and shuttling among different vectors. This cassette incorporates a ligation-independent cloning (LIC) site for LIC manipulations, an optimal ribosome binding site for efficient protein translation, and a 6x His affinity tag for protein purification. Therefore, any E. coli expression vector of choice can be easily converted to a LIC-type expression vector by shuttling the cassette using the restriction enzyme cleavage sites at the ends. We have demonstrated the expression capabilities of these vectors by co-expressing three bacterial proteins (dsbA, dsbG, and Trx) and also two mammalian proteins (KChIP1 and Kv4.3). We further show that co-expressed KChIP1/Kv4.3 forms soluble protein complexes that can be purified for further studies. PMID:16325426
Resources and energetics determined dinosaur maximal size
McNab, Brian K.
2009-01-01
Some dinosaurs reached masses that were ≈8 times those of the largest, ecologically equivalent terrestrial mammals. The factors most responsible for setting the maximal body size of vertebrates are resource quality and quantity, as modified by the mobility of the consumer, and the vertebrate's rate of energy expenditure. If the food intake of the largest herbivorous mammals defines the maximal rate at which plant resources can be consumed in terrestrial environments and if that limit applied to dinosaurs, then the large size of sauropods occurred because they expended energy in the field at rates extrapolated from those of varanid lizards, which are ≈22% of the rates in mammals and 3.6 times the rates of other lizards of equal size. Of two species having the same energy income, the species that uses more energy for mass-independent maintenance necessarily has the smaller size. The larger mass found in some marine mammals reflects a greater resource abundance in marine environments. The presumptively low energy expenditures of dinosaurs potentially permitted Mesozoic communities to support dinosaur biomasses that were up to 5 times those found in mammalian herbivores in Africa today. The maximal size of predatory theropods was ≈8 tons, which, if it reflects the maximal capacity to consume vertebrates in terrestrial environments, corresponds in predatory mammals to a maximal mass of less than a ton, which is what is observed. Some coelurosaurs may have evolved endothermy in association with the evolution of feathered insulation and a small mass. PMID:19581600
Samus, QM; Johnston, D; Black, BS; Hess, E; Lyman, C; Vavilikolanu, A; Pollutra, J; Leoutsakos, J-M; Gitlin, LN; Rabins, PV; Lyketsos, CG
2014-01-01
Objectives To assess whether a dementia care coordination intervention delays time to transition from home and reduces unmet needs in elders with memory disorders. Design 18-month randomized controlled trial of 303 community-living elders. Setting 28 postal code areas of Baltimore, MD. Participants Age 70+, with a cognitive disorder, community-living, English-speaking, and having a study partner available. Intervention 18-month care coordination intervention to systematically identify and address dementia-related care needs through individualized care planning; referral and linkage to services; provision of dementia education and skill building strategies; and care monitoring by an interdisciplinary team. Measurements Primary outcomes were time to transfer from home and total percent of unmet care needs at 18 months. Results Intervention participants had a significant delay in time to all-cause transition from home and the adjusted hazard of leaving the home was decreased by 37% (HR = 0.63, 95% CI 0.42 to 0.94) compared to control participants. While there was no significant group difference in reduction of total percent of unmet needs from baseline to 18 months, the intervention group had significant reductions in the proportion of unmet needs in safety and legal/advance care domains relative to controls. Intervention participants had a significant improvement in self-reported quality of life (QOL) relative to control participants. No group differences were found in proxy-rated QOL, neuropsychiatric symptoms, or depression. Conclusions A home-based dementia care coordination intervention delivered by non-clinical community workers trained and overseen by geriatric clinicians led to delays in transition from home, reduced unmet needs, and improved self-reported QOL. PMID:24502822
Ansari, Elnaz Saberi; Eslahchi, Changiz; Pezeshk, Hamid; Sadeghi, Mehdi
2014-09-01
Decomposition of structural domains is an essential task in classifying protein structures, predicting protein function, and many other proteomics problems. As the number of known protein structures in PDB grows exponentially, the need for accurate automatic domain decomposition methods becomes more essential. In this article, we introduce a bottom-up algorithm for assigning protein domains using a graph theoretical approach. This algorithm is based on a center-based clustering approach. For constructing initial clusters, members of an independent dominating set for the graph representation of a protein are considered as the centers. A distance matrix is then defined for these clusters. To obtain final domains, these clusters are merged using the compactness principle of domains and a method similar to the neighbor-joining algorithm considering some thresholds. The thresholds are computed using a training set consisting of 50 protein chains. The algorithm is implemented in C++ and is named ProDomAs. To assess the performance of ProDomAs, its results are compared with seven automatic methods, against five publicly available benchmarks. The results show that ProDomAs outperforms other methods applied on the mentioned benchmarks. The performance of ProDomAs is also evaluated against 6342 chains obtained from ASTRAL SCOP 1.71. ProDomAs is freely available at http://www.bioinf.cs.ipm.ir/software/prodomas. PMID:24596179
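The cluster-seeding idea above relies on an independent dominating set. Any maximal independent set is one: no two centers are adjacent, and every excluded node has a selected neighbor (otherwise it could still be added). A minimal greedy sketch in Python follows; this is my own illustration of the concept, not the ProDomAs implementation, and the node ordering is an arbitrary choice.

```python
def greedy_mis(adj):
    """Greedy maximal independent set of an undirected graph given as
    adj: node -> set of neighbours.  The result is also a dominating
    set: an undominated node could still have been added."""
    selected, blocked = set(), set()
    for v in sorted(adj):        # deterministic node order (a choice)
        if v not in blocked:
            selected.add(v)
            blocked.add(v)
            blocked |= adj[v]    # neighbours can no longer be selected
    return selected

# Path graph 0-1-2-3-4: the greedy pass selects the even nodes.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_mis(path))  # {0, 2, 4}
```

Selecting such centers before clustering guarantees that every residue-graph node is within one hop of some cluster seed, which is what makes them natural initial cluster centers.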
MAZZETTI, SCOTT A.; WOLFF, CHRISTOPHER; COLLINS, BRITTANY; KOLANKOWSKI, MICHAEL T.; WILKERSON, BRITTANY; OVERSTREET, MATTHEW; GRUBE, TROY
2011-01-01
With resistance exercise, greater intensity typically elicits increased energy expenditure, but heavier loads require that the lifter perform more sets of fewer repetitions, which alters the kilograms lifted per set. Thus, the effect of exercise intensity on energy expenditure has yielded varying results, especially with explosive resistance exercise. This study was designed to examine the effect of exercise intensity and kilograms/set on energy expenditure during explosive resistance exercise. Ten resistance-trained men (22±3.6 years; 84±6.4 kg, 180±5.1 cm, and 13±3.8 %fat) performed squat and bench press protocols once/week using different exercise intensities including 48% (LIGHT-48), 60% (MODERATE-60), and 72% of 1-repetition-maximum (1-RM) (HEAVY-72), plus a no-exercise protocol (CONTROL). To examine the effects of kilograms/set, an additional protocol using 72% of 1-RM was performed (HEAVY-72MATCHED) with kilograms/set matched with LIGHT-48 and MODERATE-60. LIGHT-48 was 4 sets of 10 repetitions (4×10); MODERATE-60 4×8; HEAVY-72 5×5; and HEAVY-72MATCHED 4×6.5. Eccentric and concentric repetition speeds, ranges of motion, rest intervals, and total kilograms were identical between protocols. Expired air was collected continuously throughout each protocol using a metabolic cart, blood lactate concentration was measured with a portable analyzer, and bench press peak power was also measured. Rates of energy expenditure were significantly greater (p≤0.05) with LIGHT-48 and HEAVY-72MATCHED than HEAVY-72 during squat (7.3±0.7; 6.9±0.6 > 6.1±0.7 kcal/min), bench press (4.8±0.3; 4.7±0.3 > 4.0±0.4 kcal/min), and +5min after (3.7±0.1; 3.7±0.2 > 3.3±0.3 kcal/min), but there were no significant differences in total kcal among protocols. Therefore, exercise intensity may not affect energy expenditure with explosive contractions, but light loads (~50% of 1-RM) may be preferred because of higher rates of energy expenditure, and since heavier loading requires more sets with lower
Sabree, Zakee L; Hansen, Allison K; Moran, Nancy A
2012-01-01
Starting in 2003, numerous studies using culture-independent methodologies to characterize the gut microbiota of honey bees have retrieved a consistent and distinctive set of eight bacterial species, based on near identity of the 16S rRNA gene sequences. A recent study [Mattila HR, Rios D, Walker-Sperling VE, Roeselers G, Newton ILG (2012) Characterization of the active microbiotas associated with honey bees reveals healthier and broader communities when colonies are genetically diverse. PLoS ONE 7(3): e32962], using pyrosequencing of the V1-V2 hypervariable region of the 16S rRNA gene, reported finding entirely novel bacterial species in honey bee guts, and used taxonomic assignments from these reads to predict metabolic activities based on known metabolisms of cultivable species. To better understand this discrepancy, we analyzed the Mattila et al. pyrotag dataset. In contrast to the conclusions of Mattila et al., we found that the large majority of pyrotag sequences belonged to clusters for which representative sequences were identical to sequences from previously identified core species of the bee microbiota. On average, they represent 95% of the bacteria in each worker bee in the Mattila et al. dataset, a slightly lower value than that found in other studies. Some colonies contain small proportions of other bacteria, mostly species of Enterobacteriaceae. Reanalysis of the Mattila et al. dataset also did not support a relationship between abundances of Bifidobacterium and of putative pathogens or a significant difference in gut communities between colonies from queens that were singly or multiply mated. Additionally, consistent with previous studies, the dataset supports the occurrence of considerable strain variation within core species, even within single colonies. The roles of these bacteria within bees, or the implications of the strain variation, are not yet clear. PMID:22829932
Pelletier, Alexandra; Sunthara, Gajen; Gujral, Nitin; Mittal, Vandna; Bourgeois, Fabienne C
2016-01-01
understanding what features should be built into the app. Phase 3 involved deployment of TaskList on a clinical floor at BCH. Lastly, Phase 4 gathered the lessons learned from the pilot to refine the guideline. Results Fourteen practical recommendations were identified to create the BCH Mobile Application Development Guideline to safeguard custom applications in hospital BYOD settings. The recommendations were grouped into four categories: (1) authentication and authorization, (2) data management, (3) safeguarding app environment, and (4) remote enforcement. Following the guideline, the TaskList app was developed and then was piloted with an inpatient ward team. Conclusions The Mobile Application Development guideline was created and used in the development of TaskList. The guideline is intended for use by developers when addressing integration with hospital information systems, deploying apps in BYOD health care settings, and meeting compliance standards, such as Health Insurance Portability and Accountability Act (HIPAA) regulations. PMID:27169345
ERIC Educational Resources Information Center
Velastegui, Pamela J.
2013-01-01
This hypothesis-generating case study investigates the naturally emerging roles of technology brokers and technology leaders in three independent schools in New York involving 92 school educators. A multiple and mixed method design utilizing Social Network Analysis (SNA) and fuzzy set Qualitative Comparative Analysis (FSQCA) involved gathering…
ERIC Educational Resources Information Center
Sireci, Stephen G.
Whether item response theory (IRT) is useful to the small-scale testing practitioner is examined. The stability of IRT item parameters is evaluated with respect to the classical item parameters (i.e., p-values, biserials) obtained from the same data set. Previous research investigating the effect of sample size on IRT parameter estimation has…
Řezáč, Jan; de la Lande, Aurélien
2015-02-10
Separation of the energetic contribution of charge transfer to interaction energy in noncovalent complexes would provide important insight into the mechanisms of the interaction. However, the calculation of charge-transfer energy is not an easy task. It is not a physically well-defined term, and the results might depend on how it is described in practice. Commonly, the charge transfer is defined in terms of molecular orbitals; in this framework, however, the charge transfer vanishes as the basis set size increases toward the complete basis set limit. This can be avoided by defining the charge transfer in terms of the spatial extent of the electron densities of the interacting molecules, but the schemes used so far do not reflect the actual electronic structure of each particular system and thus are not reliable. We propose a spatial partitioning of the system, which is based on a charge transfer-free reference state, namely superimposition of electron densities of the noninteracting fragments. We show that this method, employing constrained DFT for the calculation of the charge-transfer energy, yields reliable results and is robust with respect to the strength of the charge transfer, the basis set size, and the DFT functional used. Because it is based on DFT, the method is applicable to rather large systems. PMID:26580910
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite predicted area - "reference area") and relative error [(satellite predicted area - "reference area") / "reference area"] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
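The two error measures defined above are straightforward to compute per sample region; a minimal sketch (the 42/45 hectare figures below are hypothetical, not values from the study):

```python
def error_metrics(predicted, reference):
    """Absolute and relative error as defined in the text:
    absolute = predicted - reference; relative = absolute / reference."""
    absolute = predicted - reference
    return absolute, absolute / reference

# Hypothetical sample region: the satellite product predicts 42 ha of
# impervious surface where the high-resolution reference shows 45 ha.
a, r = error_metrics(42.0, 45.0)
print(a, r)  # -3.0 and about -0.067, i.e. a ~6.7% underestimate
```

A negative relative error corresponds to the underestimation the study reports for the NLCD product.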
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean W.; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Wei, Jun; Patel, Smita
2015-03-01
We have developed a computer-aided detection (CAD) system for assisting radiologists in detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images. The CAD system includes stages of pulmonary vessel segmentation, prescreening of PE candidates and false positive (FP) reduction to identify suspicious PEs. The system was trained with 59 CTPA PE cases collected retrospectively from our patient files (UM set) with IRB approval. Five feature groups containing 139 features that characterized the intensity texture, gradient, intensity homogeneity, shape, and topology of PE candidates were initially extracted. Stepwise feature selection guided by simplex optimization was used to select effective features for FP reduction. A linear discriminant analysis (LDA) classifier was formulated to differentiate true PEs from FPs. The purpose of this study is to evaluate the performance of our CAD system using an independent test set of CTPA cases. The test set consists of 50 PE cases from the PIOPED II data set collected by multiple institutions with access permission. A total of 537 PEs were manually marked by experienced thoracic radiologists as reference standard for the test set. The detection performance was evaluated by free-response receiver operating characteristic (FROC) analysis. The FP classifier obtained a test Az value of 0.847 and the FROC analysis indicated that the CAD system achieved an overall sensitivity of 80% at 8.6 FPs/case for the PIOPED test set.
Maximally Expressive Task Modeling
NASA Technical Reports Server (NTRS)
Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise: both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.
Maximal combustion temperature estimation
NASA Astrophysics Data System (ADS)
Golodova, E.; Shchepakina, E.
2006-12-01
This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models.
Sun, Guangyan; Zhou, Zhipeng; Liu, Xiao; Gai, Kexin; Liu, Qingqing; Cha, Joonseok; Kaleri, Farah Naz; Wang, Ying; He, Qun
2016-05-20
The circadian system in Neurospora is based on the transcriptional/translational feedback loops and rhythmic frequency (frq) transcription requires the WHITE COLLAR (WC) complex. Our previous paper has shown that frq could be transcribed in a WC-independent pathway in a strain lacking the histone H3K36 methyltransferase, SET-2 (su(var)3-9-enhancer-of-zeste-trithorax-2) (1), but the mechanism was unclear. Here we disclose that loss of histone H3K36 methylation, due to either deletion of SET-2 or H3K36R mutation, results in arrhythmic frq transcription and loss of overt rhythmicity. Histone acetylation at frq locus increases in set-2(KO) mutant. Consistent with these results, loss of H3K36 methylation readers, histone deacetylase RPD-3 (reduced potassium dependence 3) or EAF-3 (essential SAS-related acetyltransferase-associated factor 3), also leads to hyperacetylation of histone at frq locus and WC-independent frq expression, suggesting that proper chromatin modification at frq locus is required for circadian clock operation. Furthermore, a mutant strain with three amino acid substitutions (histone H3 lysine 9, 14, and 18 to glutamine) was generated to mimic the strain with hyperacetylation state of histone H3. H3K9QK14QK18Q mutant exhibits the same defective clock phenotype as rpd-3(KO) mutant. Our results support a scenario in which H3K36 methylation is required to establish a permissive chromatin state for circadian frq transcription by maintaining proper acetylation status at frq locus. PMID:27002152
NASA Astrophysics Data System (ADS)
Last, Isidore; Baer, Michael
1992-01-01
Recently we introduced a time-independent approach to treat reactive collisions employing the negative imaginary absorbing potentials and L2 basis sets. The application of these potentials led to the formulation of a method whereby only one arrangement channel has to be considered in a given calculation. In the present work we further extend this approach. (a) We show how this method is capable of yielding reactive state-to-state S-matrix elements. (In the previous versions of this method, these could not be obtained.) (b) We show that by employing contracted vibrational adiabatic and translational Gaussian functions the number of algebraic equations to be solved within this approach is significantly reduced (by a factor of four).
Bradshaw, P J; Ko, D T; Newman, A M; Donovan, L R
2006-01-01
Objective To determine the validity of the GRACE (Global Registry of Acute Coronary Events) prediction model for death six months after discharge in all forms of acute coronary syndrome in an independent dataset of a community based cohort of patients with acute myocardial infarction (AMI). Design Independent validation study based on clinical data collected retrospectively for a clinical trial in a community based population and record linkage to administrative databases. Setting Study conducted among patients from the EFFECT (enhanced feedback for effective cardiac treatment) study from Ontario, Canada. Patients Randomly selected men and women hospitalised for AMI between 1999 and 2001. Main outcome measure Discriminatory capacity and calibration of the GRACE prediction model for death within six months of hospital discharge in the contemporaneous EFFECT AMI study population. Results Post‐discharge crude mortality at six months for the EFFECT study patients with AMI was 7.0%. The discriminatory capacity of the GRACE model was good overall (C statistic 0.80) and for patients with ST segment elevation AMI (STEMI) (0.81) and non‐STEMI (0.78). Observed and predicted deaths corresponded well in each stratum of risk at six months, although the risk was underestimated by up to 30% in the higher range of scores among patients with non‐STEMI. Conclusions In an independent validation the GRACE risk model had good discriminatory capacity for predicting post‐discharge death at six months and was generally well calibrated, suggesting that it is suitable for clinical use in general populations. PMID:16387810
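The C statistic reported above is the probability that a randomly chosen patient who died was assigned a higher risk score than a randomly chosen survivor (ties counted as half). A minimal sketch of the computation follows; the scores are invented for illustration, not GRACE data.

```python
from itertools import product

def c_statistic(scores_events, scores_nonevents):
    """Concordance (C) statistic over all event/non-event pairs:
    a higher score for the event counts 1, a tie counts 0.5."""
    pairs = len(scores_events) * len(scores_nonevents)
    concordant = 0.0
    for d, s in product(scores_events, scores_nonevents):
        if d > s:
            concordant += 1.0
        elif d == s:
            concordant += 0.5
    return concordant / pairs

# Invented risk scores: 3 patients who died vs. 4 survivors.
print(c_statistic([180, 150, 140], [120, 130, 150, 90]))  # 0.875
```

A value of 0.5 means the score is no better than chance at ranking deaths above survivals; the 0.80 reported for the GRACE model indicates good discrimination.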
Maximizing Classroom Participation.
ERIC Educational Resources Information Center
Englander, Karen
2001-01-01
Discusses how to maximize classroom participation in the English-as-a-Second-or-Foreign-Language classroom, and provides a classroom discussion method that is based on real-life problem solving. (Author/VWL)
Polarity Related Influence Maximization in Signed Social Networks
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986
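The greedy strategy with the 1-1/e guarantee can be sketched for the standard (unsigned) IC model. This is a generic illustration, not the authors' IC-P implementation; the uniform edge probability, Monte-Carlo trial count, and toy graph are my assumptions.

```python
import random

def ic_spread(adj, seeds, p=0.2, trials=300):
    """Monte-Carlo estimate of the expected spread of a seed set under
    the standard Independent Cascade model, uniform edge probability p."""
    rng = random.Random(0)  # fixed seed: reproducible estimates
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj.get(u, ()):
                    # each newly active node gets one chance per out-edge
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_im(adj, k, p=0.2):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal spread.  Monotonicity plus submodularity of the
    spread function yields the (1 - 1/e) approximation guarantee."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in adj if v not in seeds),
                   key=lambda v: ic_spread(adj, seeds | {v}, p))
        seeds.add(best)
    return seeds

# Toy directed graph: nodes 0 and 3 are the hubs of two small clusters.
graph = {0: [1, 2], 1: [], 2: [], 3: [4, 5], 4: [], 5: []}
print(greedy_im(graph, 2))  # picks the two hubs: {0, 3}
```

The IC-P model of the paper additionally tracks the polarity (positive or negative) acquired along signed edges; the greedy outer loop is unchanged because the polarity-related influence function remains monotonic and submodular.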
On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites
ERIC Educational Resources Information Center
Penev, Spiridon; Raykov, Tenko
2006-01-01
A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Froeschke, John T.; Stunz, Gregory W.; Sterba-Boatwright, Blair; Wildhaber, Mark L.
2010-01-01
Using a long-term fisheries-independent data set, we tested the 'shark nursery area concept' proposed by Heupel et al. (2007) with the suggested working assumptions that a shark nursery habitat would: (1) have an abundance of immature sharks greater than the mean abundance across all habitats where they occur; (2) be used by sharks repeatedly through time (years); and (3) see immature sharks remaining within the habitat for extended periods of time. We tested this concept using young-of-the-year (age 0) and juvenile (age 1+ yr) bull sharks Carcharhinus leucas from gill-net surveys conducted in Texas bays from 1976 to 2006 to estimate the potential nursery function of 9 coastal bays. Of the 9 bay systems considered as potential nursery habitat, only Matagorda Bay satisfied all 3 criteria for young-of-the-year bull sharks. Both Matagorda and San Antonio Bays met the criteria for juvenile bull sharks. Through these analyses we examined the utility of this approach for characterizing nursery areas and we also describe some practical considerations, such as the influence of the temporal or spatial scales considered when applying the nursery role concept to shark populations.
Maximal Outboxes of Quadrilaterals
ERIC Educational Resources Information Center
Zhao, Dongsheng
2011-01-01
An outbox of a quadrilateral is a rectangle such that each vertex of the given quadrilateral lies on one side of the rectangle and different vertices lie on different sides. We first investigate those quadrilaterals whose every outbox is a square. Next, we consider the maximal outboxes of rectangles and those quadrilaterals with perpendicular…
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2004-01-01
Google is shaking out to be the leading Web search engine, with recent research from Nielsen NetRatings reporting about 40 percent of all U.S. households using the tool at least once in January 2004. This brief article discusses how teachers and students can maximize their use of Google.
NASA Astrophysics Data System (ADS)
Wang, Y.; Penning de Vries, M.; Xie, P. H.; Beirle, S.; Dörner, S.; Remmers, J.; Li, A.; Wagner, T.
2015-12-01
Multi-axis differential optical absorption spectroscopy (MAX-DOAS) observations of trace gases can be strongly influenced by clouds and aerosols. Thus it is important to identify clouds and characterize their properties. In a recent study Wagner et al. (2014) developed a cloud classification scheme based on the MAX-DOAS measurements themselves with which different "sky conditions" (e.g., clear sky, continuous clouds, broken clouds) can be distinguished. Here we apply this scheme to long-term MAX-DOAS measurements from 2011 to 2013 in Wuxi, China (31.57° N, 120.31° E). The original algorithm has been adapted to the characteristics of the Wuxi instrument, and extended towards smaller solar zenith angles (SZA). Moreover, a method for the determination and correction of instrumental degradation is developed to avoid artificial trends of the cloud classification results. We compared the results of the MAX-DOAS cloud classification scheme to several independent measurements: aerosol optical depth from a nearby Aerosol Robotic Network (AERONET) station and from two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments, visibility derived from a visibility meter and various cloud parameters from different satellite instruments (MODIS, the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment (GOME-2)). Here it should be noted that no quantitative comparison between the MAX-DOAS results and the independent data sets is possible, because (a) not exactly the same quantities are measured, and (b) the spatial and temporal sampling is quite different. Thus our comparison is performed in a semi-quantitative way: the MAX-DOAS cloud classification results are studied as a function of the external quantities. The most important findings from these comparisons are as follows: (1) most cases characterized as clear sky with low or high aerosol load were associated with the respective aerosol optical depth (AOD) ranges obtained by AERONET and MODIS
Generation and Transmission Maximization Model
Energy Science and Technology Software Center (ESTSC)
2001-04-05
GTMax was developed to study complex marketing and system operational issues facing electric utility power systems. The model maximizes the value of the electric system, taking into account not only a single system's limited energy and transmission resources but also firm contracts, independent power producer (IPP) agreements, and bulk power transaction opportunities on the spot market. GTMax maximizes net revenues of power systems by finding a solution that increases income while keeping expenses at a minimum. It does this while ensuring that market transactions and system operations are within the physical and institutional limitations of the power system. When multiple systems are simulated, GTMax identifies utilities that can successfully compete on the market by tracking hourly energy transactions, costs, and revenues. Some limitations that are modeled are power plant seasonal capabilities and terms specified in firm and IPP contracts. GTMax also considers detailed operational limitations such as power plant ramp rates and hydropower reservoir constraints.
Infrared Maximally Abelian Gauge
Mendes, Tereza; Cucchieri, Attilio; Mihara, Antonio
2007-02-27
The confinement scenario in Maximally Abelian gauge (MAG) is based on the concepts of Abelian dominance and of dual superconductivity. Recently, several groups pointed out the possible existence in MAG of ghost and gluon condensates with mass dimension 2, which in turn should influence the infrared behavior of ghost and gluon propagators. We present preliminary results for the first lattice numerical study of the ghost propagator and of ghost condensation for pure SU(2) theory in the MAG.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have a higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the traveling salesman problem (TSP).
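The core idea above, treating the function to be maximized as a probability density so that larger values appear more often, can be illustrated with an entirely classical toy (this is not the report's quantum-classical algorithm; the rejection-sampling scheme, function name, and sample counts are assumptions for illustration):

```python
import math
import random

def sample_maximize(f, lo, hi, n=4000, seed=0):
    """Toy sketch: treat the positive function f as an unnormalized
    probability density on [lo, hi], draw samples by rejection
    sampling, and return the best point seen. Regions where f is
    large are visited most often, so the running maximum
    concentrates near the global maximum."""
    rng = random.Random(seed)
    # crude upper bound on f from a coarse grid (an assumption of
    # this sketch; a known bound would be used in practice)
    bound = max(f(lo + (hi - lo) * i / 1000) for i in range(1001))
    best_x = lo
    for _ in range(n):
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, bound) < f(x):   # accept with prob. ~ f(x)
            if f(x) > f(best_x):
                best_x = x
    return best_x
```

The sketch only conveys the "density attracts the solution" intuition; the polynomial-time claim in the report rests on the quantum-inspired dynamics, not on rejection sampling.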
ERIC Educational Resources Information Center
Wells, Ruth Herman
This document is one of eight in a series of guides designed to help teach and counsel troubled youth. This document presents 20 lessons on the social skills necessary to live independently. It includes four lessons designed to help students accurately evaluate their readiness for independent living. Other lessons teach the basic steps for…
NASA Astrophysics Data System (ADS)
Knop, R. A.; Aldering, G.; Amanullah, R.; Astier, P.; Blanc, G.; Burns, M. S.; Conley, A.; Deustua, S. E.; Doi, M.; Ellis, R.; Fabbro, S.; Folatelli, G.; Fruchter, A. S.; Garavini, G.; Garmond, S.; Garton, K.; Gibbons, R.; Goldhaber, G.; Goobar, A.; Groom, D. E.; Hardin, D.; Hook, I.; Howell, D. A.; Kim, A. G.; Lee, B. C.; Lidman, C.; Mendez, J.; Nobili, S.; Nugent, P. E.; Pain, R.; Panagia, N.; Pennypacker, C. R.; Perlmutter, S.; Quimby, R.; Raux, J.; Regnault, N.; Ruiz-Lapuente, P.; Sainton, G.; Schaefer, B.; Schahmaneche, K.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Sullivan, M.; Walton, N. A.; Wang, L.; Wood-Vasey, W. M.; Yasuda, N.
2003-11-01
We report measurements of ΩM, ΩΛ, and w from 11 supernovae (SNe) at z=0.36-0.86 with high-quality light curves measured using WFPC2 on the Hubble Space Telescope (HST). This is an independent set of high-redshift SNe that confirms previous SN evidence for an accelerating universe. The high-quality light curves available from photometry on WFPC2 make it possible for these 11 SNe alone to provide measurements of the cosmological parameters comparable in statistical weight to the previous results. Combined with earlier Supernova Cosmology Project data, the new SNe yield a measurement of the mass density ΩM=0.25+0.07-0.06(statistical)+/-0.04 (identified systematics), or equivalently, a cosmological constant of ΩΛ=0.75+0.06-0.07(statistical)+/-0.04 (identified systematics), under the assumptions of a flat universe and that the dark energy equation-of-state parameter has a constant value w=-1. When the SN results are combined with independent flat-universe measurements of ΩM from cosmic microwave background and galaxy redshift distortion data, they provide a measurement of w=-1.05+0.15-0.20(statistical)+/-0.09 (identified systematic), if w is assumed to be constant in time. In addition to high-precision light-curve measurements, the new data offer greatly improved color measurements of the high-redshift SNe and hence improved host galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host galaxy extinction correction directly for individual SNe without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with P(ΩΛ>0)>0.99, a result consistent with previous and current SN analyses that rely on the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution. Based in part on
NASA Technical Reports Server (NTRS)
Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris
2004-01-01
The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.
Energy Band Calculations for Maximally Even Superlattices
NASA Astrophysics Data System (ADS)
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
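The maximally even distribution of wells and barriers described above can be generated with the standard floor-function construction (a hedged sketch; the function name and the well/barrier reading are assumptions, but the formula is the usual Clough-Douthett one):

```python
def maximally_even(c, d):
    """Clough-Douthett construction of a maximally even set:
    distribute d marks (e.g. wells) among c slots (wells + barriers)
    at positions floor(i * c / d) for i = 0 .. d-1."""
    return [(i * c) // d for i in range(d)]

# 5 marks among 12 slots gives the pentatonic pattern whose rotation
# is the piano's black keys; 7 among 12 gives the white-key pattern.
print(maximally_even(12, 5))   # → [0, 2, 4, 7, 9]
print(maximally_even(12, 7))   # → [0, 1, 3, 5, 6, 8, 10]
```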
Maximally spaced projection sequencing in electron paramagnetic resonance imaging
Redler, Gage; Epel, Boris; Halpern, Howard J.
2015-01-01
Electron paramagnetic resonance imaging (EPRI) provides 3D images of absolute oxygen concentration (pO2) in vivo with excellent spatial and pO2 resolution. When investigating such physiologic parameters in living animals, the situation is inherently dynamic. Improvements in temporal resolution and experimental versatility are necessary to properly study such a system. Uniformly distributed projections result in efficient use of data for image reconstruction. This has dictated current methods such as equal-solid-angle (ESA) spacing of projections. However, acquisition sequencing must still be optimized to achieve uniformity throughout imaging. An object-independent method for uniform acquisition of projections, using the ESA uniform distribution for the final set of projections, is presented. Each successive projection maximizes the distance in the gradient space between itself and prior projections. This maximally spaced projection sequencing (MSPS) method improves image quality for intermediate images reconstructed from incomplete projection sets, enabling useful real-time reconstruction. This method also provides improved experimental versatility, reduced artifacts, and the ability to adjust temporal resolution post factum to best fit the data and its application. The MSPS method in EPRI provides the improvements necessary to more appropriately study a dynamic system. PMID:26185490
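The sequencing rule above, each successive projection maximizing its distance in gradient space to all prior projections, is a greedy farthest-point ordering. A minimal sketch (assumptions: Euclidean distance on gradient-space direction vectors and a fixed starting projection; the actual MSPS implementation details are not given here):

```python
import math

def msps_order(points):
    """Greedy farthest-point ordering: each successive index chosen
    to maximize its minimum distance to all previously chosen ones.
    points: list of gradient-space direction tuples."""
    order = [0]                      # start from the first projection
    remaining = set(range(1, len(points)))
    while remaining:
        nxt = max(remaining,
                  key=lambda i: min(math.dist(points[i], points[j])
                                    for j in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

Any prefix of the returned ordering is itself roughly uniform, which is what allows useful intermediate reconstructions from incomplete projection sets.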
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Richardson, Lea
2004-01-01
Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.
Maximizing your teaching moment
... have assessed the patient's needs and selected the education materials and methods you will use, you will need to: Set up a good learning environment. This may include things such as adjusting the ...
Origin of constrained maximal CP violation in flavor symmetry
NASA Astrophysics Data System (ADS)
He, Hong-Jian; Rodejohann, Werner; Xu, Xun-Jie
2015-12-01
Current data from neutrino oscillation experiments are in good agreement with δ = -π/2 and θ23 = π/4 under the standard parametrization of the mixing matrix. We define the notion of "constrained maximal CP violation" (CMCPV) for predicting these features and study their origin in flavor symmetry. We derive the parametrization-independent solution of CMCPV and give a set of equivalent definitions for it. We further present a theorem on how CMCPV can be realized. This theorem takes advantage of residual symmetries in the neutrino and charged lepton mass matrices, and states that, up to a few minor exceptions, (|δ|, θ23) = (π/2, π/4) is generated when those symmetries are real. The often-considered μ-τ reflection symmetry, as well as specific discrete subgroups of O(3), is a special case of our theorem.
Maximizing Brightness in Photoinjectors
Limborg-Deprey, C.; Tomizawa, H.; /JAERI-RIKEN, Hyogo
2011-11-30
If the laser pulse driving photoinjectors could be arbitrarily shaped, the emittance growth induced by space charge effects could be totally compensated for. In particular, for RF guns, the photo-electron distribution leaving the cathode should be close to a uniform distribution contained in a 3D-ellipsoidal contour; the emittance at the end of the injector could then be as small as the cathode emittance. We explore how the emittance and the brightness can be optimized for photoinjectors based on RF guns, depending on the peak current requirements, and discuss techniques available to produce those ideal laser pulse shapes. For photo-cathodes which have very fast emission times, and assuming a perfectly uniform emitting surface, the ideal distribution could be achieved by shaping the laser into a pulse of constant fluence limited in space by a 3D-ellipsoid contour. Simulations show that in such conditions, with the standard linear emittance compensation, the emittance at the end of the photo-injector beamline approaches the minimum value imposed by the cathode emittance. Brightness, which is expressed as the ratio of the peak current over the product of the two transverse emittances, seems to be maximized for small charges. Numerical simulations also show that for very high charge per bunch (10 nC), emittances as small as 2 mm-mrad could be reached by using 3D-ellipsoidal laser pulses in an S-band gun. The production of 3D-ellipsoidal pulses is very challenging, but seems worth the effort. We briefly discuss some of the present ideas and difficulties of achieving such pulses.
A Method for Maximizing the Internal Consistency Coefficient Alpha.
ERIC Educational Resources Information Center
Pepin, Michel
This paper presents three different ways of computing the internal consistency coefficient alpha for a same set of data. The main objective of the paper is the illustration of a method for maximizing coefficient alpha. The maximization of alpha can be achieved with the aid of a principal component analysis. The relation between alpha max. and the…
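For reference, the quantity being maximized is the familiar internal-consistency coefficient. A minimal sketch of the plain (unweighted) alpha follows; the PCA-based maximization described in the paper is not reproduced here, and the function name and data layout are assumptions:

```python
def cronbach_alpha(items):
    """Unweighted internal-consistency coefficient alpha:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    items: list of per-item score lists over the same respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))
```

Maximizing alpha over composite weights, as the paper does via principal components, starts from this same formula with weighted items in place of raw ones.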
Maximal switchability of centralized networks
NASA Astrophysics Data System (ADS)
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of Ns weakly connected satellites, a property that we call n/Ns-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
Hamiltonian formalism and path entropy maximization
NASA Astrophysics Data System (ADS)
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
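The logical chain above can be summarized schematically (notation assumed for illustration, not taken from the paper):

```latex
% Maximize the path entropy
%   S[P] = -\int \mathcal{D}x \, P[x] \ln P[x]
% subject to instantaneous constraints on position and velocity,
%   \langle f(x, \dot{x}, t) \rangle = F(t).
% The stationary distribution over paths then takes the exponential form
\begin{equation}
  P[x] \propto \exp\!\left( -\int \lambda(t)\, f(x, \dot{x}, t)\, dt \right),
\end{equation}
% so the most probable trajectory extremizes an effective action with
% Lagrangian L = \lambda(t)\, f(x, \dot{x}, t), while fluctuations about
% that maximum obey a Langevin equation and its Fokker-Planck counterpart.
```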
Curiel, José Antonio; de Las Rivas, Blanca; Mancheño, José Miguel; Muñoz, Rosario
2011-03-01
A family of restriction enzyme- and ligation-independent cloning vectors has been developed for producing recombinant His-tagged fusion proteins in Escherichia coli. These are based on the pURI2 and pURI3 expression vectors, which have been previously used for the successful production of recombinant proteins at the milligram scale. The newly designed vectors combine two different promoters (lpp(p)-5 and T7 RNA polymerase Ø10), two different endoprotease recognition sites for His₆-tag removal (enterokinase and tobacco etch virus), different antibiotic selectable markers (ampicillin and erythromycin resistance), and different placements of the His₆-tag (N- and C-terminus). A single gene can be cloned and further expressed in the eight pURI vectors by using six nucleotide primers, avoiding the restriction enzyme and ligation steps. A unique NotI site was introduced to facilitate the selection of the recombinant plasmid. As a case study, the new vectors have been used to clone the gene coding for the phenolic acid decarboxylase from Lactobacillus plantarum. Interestingly, the obtained results revealed markedly different production levels of the target protein, emphasizing the relevance of the cloning strategy on soluble protein production yield. Efficient purification and tag removal steps showed that the affinity tag and the protease cleavage sites functioned properly. The novel family of pURI vectors designed for parallel cloning is a useful and versatile tool for the production and purification of a protein of interest. PMID:21055470
Excap: Maximization of Haplotypic Diversity of Linked Markers
Kahles, André; Sarqume, Fahad; Savolainen, Peter; Arvestad, Lars
2013-01-01
Genetic markers, defined as variable regions of DNA, can be utilized for distinguishing individuals or populations. As long as markers are independent, it is easy to combine the information they provide. For nonrecombinant sequences like mtDNA, choosing the right set of markers for forensic applications can be difficult and requires careful consideration. In particular, one wants to maximize the utility of the markers. Until now, this has mainly been done by hand. We propose an algorithm that finds the most informative subset of a set of markers. The algorithm uses a depth first search combined with a branch-and-bound approach. Since the worst case complexity is exponential, we also propose some data-reduction techniques and a heuristic. We implemented the algorithm and applied it to two forensic caseworks using mitochondrial DNA, which resulted in marker sets with significantly improved haplotypic diversity compared to previous suggestions. Additionally, we evaluated the quality of the estimation with an artificial dataset of mtDNA. The heuristic is shown to provide extensive speedup at little cost in accuracy. PMID:24244403
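The subset-selection objective above can be made concrete with a brute-force baseline (a hedged sketch of the objective only; Excap's branch-and-bound pruning, data-reduction steps, and heuristic are not reproduced, and the function names are assumptions):

```python
from itertools import combinations

def haplotype_diversity(haplotypes, marker_subset):
    """Number of distinct haplotypes induced by a subset of marker
    columns. haplotypes: list of equal-length tuples of marker states."""
    return len({tuple(h[i] for i in marker_subset) for h in haplotypes})

def best_marker_subset(haplotypes, k):
    """Exhaustive search for the k-marker subset maximizing induced
    haplotypic diversity; exponential in the number of markers, which
    is why the paper prunes the search with branch and bound."""
    m = len(haplotypes[0])
    return max(combinations(range(m), k),
               key=lambda s: haplotype_diversity(haplotypes, s))
```

On nonrecombinant sequences such as mtDNA the markers are fully linked, so diversity must be evaluated on whole induced haplotypes rather than summed per marker, which is exactly what the set-based objective captures.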
2010-01-01
Background The genus Neisseria contains two important yet very different pathogens, N. meningitidis and N. gonorrhoeae, in addition to non-pathogenic species, of which N. lactamica is the best characterized. Genomic comparisons of these three bacteria will provide insights into the mechanisms and evolution of pathogenesis in this group of organisms, which are applicable to understanding these processes more generally. Results Non-pathogenic N. lactamica exhibits very similar population structure and levels of diversity to the meningococcus, whilst gonococci are essentially recent descendents of a single clone. All three species share a common core gene set estimated to comprise around 1190 CDSs, corresponding to about 60% of the genome. However, some of the nucleotide sequence diversity within this core genome is particular to each group, indicating that cross-species recombination is rare in this shared core gene set. Other than the meningococcal cps region, which encodes the polysaccharide capsule, relatively few members of the large accessory gene pool are exclusive to one species group, and cross-species recombination within this accessory genome is frequent. Conclusion The three Neisseria species groups represent coherent biological and genetic groupings which appear to be maintained by low rates of inter-species horizontal genetic exchange within the core genome. There is extensive evidence for exchange among positively selected genes and the accessory genome and some evidence of hitch-hiking of housekeeping genes with other loci. It is not possible to define a 'pathogenome' for this group of organisms and the disease causing phenotypes are therefore likely to be complex, polygenic, and different among the various disease-associated phenotypes observed. PMID:21092259
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2009-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.8 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
NASA Astrophysics Data System (ADS)
Wang, Y.; Penning de Vries, M.; Xie, P. H.; Beirle, S.; Dörner, S.; Remmers, J.; Li, A.; Wagner, T.
2015-05-01
Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations of trace gases can be strongly influenced by clouds and aerosols. Thus it is important to identify clouds and characterise their properties. In a recent study, Wagner et al. (2014) developed a cloud classification scheme based on the MAX-DOAS measurements themselves, with which different "sky conditions" (e.g. clear sky, continuous clouds, broken clouds) can be distinguished. Here we apply this scheme to long-term MAX-DOAS measurements from 2011 to 2013 in Wuxi, China (31.57° N, 120.31° E). The original algorithm has been modified, in particular to account for smaller solar zenith angles (SZA). Instrumental degradation is accounted for to avoid artificial trends in the cloud classification. We compared the results of the MAX-DOAS cloud classification scheme to several independent measurements: aerosol optical depth from a nearby AERONET station and from MODIS, visibility derived from a visibility meter, and various cloud parameters from different satellite instruments (MODIS, OMI, and GOME-2). The most important findings from these comparisons are: (1) most cases characterized as clear sky with low or high aerosol load were associated with the respective AOD ranges obtained by AERONET and MODIS; (2) the observed dependences of MAX-DOAS results on cloud optical thickness and effective cloud fraction from satellite indicate that the cloud classification scheme is sensitive to cloud (optical) properties; (3) separation of cloudy scenes by cloud pressure shows that the MAX-DOAS cloud classification scheme is also capable of detecting high clouds; (4) some cases classified as clear sky with high aerosol load from MAX-DOAS observations corresponded to optically thin, low clouds in the satellite data, probably indicating that the satellite cloud products contain valuable information on aerosols.
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-03-01
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169
Hohman, Timothy J; Bush, William S; Jiang, Lan; Brown-Gentry, Kristin D; Torstenson, Eric S; Dudek, Scott M; Mukherjee, Shubhabrata; Naj, Adam; Kunkle, Brian W; Ritchie, Marylyn D; Martin, Eden R; Schellenberg, Gerard D; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Haines, Jonathan L; Thornton-Wells, Tricia A
2016-02-01
Late-onset Alzheimer disease (AD) has a complex genetic etiology, involving locus heterogeneity, polygenic inheritance, and gene-gene interactions; however, the investigation of interactions in recent genome-wide association studies has been limited. We used a biological knowledge-driven approach to evaluate gene-gene interactions for consistency across 13 data sets from the Alzheimer Disease Genetics Consortium. Fifteen single nucleotide polymorphism (SNP)-SNP pairs within 3 gene-gene combinations were identified: SIRT1 × ABCB1, PSAP × PEBP4, and GRIN2B × ADRA1A. In addition, we extend a previously identified interaction from an endophenotype analysis between RYR3 × CACNA1C. Finally, post hoc gene expression analyses of the implicated SNPs further implicate SIRT1 and ABCB1, and implicate CDH23 which was most recently identified as an AD risk locus in an epigenetic analysis of AD. The observed interactions in this article highlight ways in which genotypic variation related to disease may depend on the genetic context in which it occurs. Further, our results highlight the utility of evaluating genetic interactions to explain additional variance in AD risk and identify novel molecular mechanisms of AD pathogenesis. PMID:26827652
ERIC Educational Resources Information Center
Lange, L. H.
1974-01-01
Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
NASA Astrophysics Data System (ADS)
Salvio, Alberto; Staub, Florian; Strumia, Alessandro; Urbano, Alfredo
2016-03-01
Motivated by the 750 GeV diphoton excess found at the LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
All maximally entangling unitary operators
Cohen, Scott M.
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A ≤ d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d_A = d_B.
Maximizing the usefulness of hypnosis in forensic investigative settings.
Hibler, Neil S; Scheflin, Alan W
2012-07-01
This is an article written for mental health professionals interested in using investigative hypnosis with law enforcement agencies in the effort to enhance the memory of witnesses and victims. Discussion focuses on how to work with law enforcement agencies so as to control for factors that can interfere with recall. Specifics include what police need to know about how to conduct case review, to prepare interviewees, to conduct interviews, and what to do with the results. Case examples are used to illustrate applications of this guidance in actual investigations. PMID:22913226
Maximally polarized states for quantum light fields
Sanchez-Soto, Luis L.; Yustas, Eulogio C.; Bjoerk, Gunnar; Klimov, Andrei B.
2007-10-15
The degree of polarization of a quantum field can be defined as its distance to an appropriate set of states. When we take unpolarized states as this reference set, the states optimizing this degree for a fixed average number of photons N present a fairly symmetric, parabolic photon statistic, with a variance scaling as N². Although no standard optical process yields such a statistic, we show that, to an excellent approximation, a highly squeezed vacuum can be taken as maximally polarized. We also consider the distance of a field to the set of its SU(2)-transformed states, finding that certain linear superpositions of SU(2) coherent states make this degree unity.
Are Independent Probes Truly Independent?
ERIC Educational Resources Information Center
Camp, Gino; Pecher, Diane; Schmidt, Henk G.; Zeelenberg, Rene
2009-01-01
The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval…
Basic principles of maximizing dental office productivity.
Mamoun, John
2012-01-01
To maximize office productivity, dentists should focus on performing tasks that only they can perform and not spend office hours performing tasks that can be delegated to non-dentist personnel. An important element of maximizing productivity is to arrange the schedule so that multiple patients are seated simultaneously in different operatories. Doing so allows the dentist to work on one patient in one operatory without needing to wait for local anesthetic to take effect on another patient in another operatory, or for assistants to perform tasks (such as cleaning up, taking radiographs, performing prophylaxis, or transporting and preparing equipment and supplies) in other operatories. Another way to improve productivity is to structure procedures so that fewer steps are needed to set up and implement them. In addition, during procedures, four-handed dental passing methods can be used to provide the dentist with supplies or equipment when needed. This article reviews basic principles of maximizing dental office productivity, based on the author's observations of business logistics used by various dental offices. PMID:22414506
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... systems advocacy--to maximize the leadership, empowerment, independence and productivity of individuals with significant disabilities and to promote...
Optimizing Population Variability to Maximize Benefit
Izu, Leighton T.; Bányász, Tamás; Chen-Izu, Ye
2015-01-01
Variability is inherent in any population, regardless of whether the population comprises humans, plants, biological cells, or manufactured parts. Is the variability beneficial, detrimental, or inconsequential? This question is of fundamental importance in manufacturing, agriculture, and bioengineering. It has no simple categorical answer because research shows that variability in a population can have both beneficial and detrimental effects. Here we ask whether there is a certain level of variability that can maximize benefit to the population as a whole. We answer this question by using a model composed of a population of individuals who independently make binary decisions; individuals vary in making a yes or no decision, and the aggregated effect of these decisions on the population is quantified by a benefit function (e.g. accuracy of the measurement using binary rulers, aggregate income of a town of farmers). Here we show that an optimal variance exists for maximizing the population benefit function; this optimal variance quantifies what is often called the “right mix” of individuals in a population. PMID:26650247
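The "binary rulers" example can be illustrated with a toy simulation (all parameters below are hypothetical, not the paper's model): n comparators with normally distributed thresholds estimate a quantity x by majority count, and an intermediate threshold variance minimizes the measurement error, while zero variance (identical individuals) and very large variance both perform poorly.

```python
import numpy as np

def measurement_error(sigma, n=50, trials=4000, seed=0):
    """Mean squared error of estimating x in (0, 1) from n binary
    comparators whose thresholds are drawn from N(0.5, sigma)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, trials)
    thr = rng.normal(0.5, sigma, (trials, n))
    est = (thr < x[:, None]).mean(axis=1)   # fraction of "yes" votes
    return float(np.mean((est - x) ** 2))

# Error is minimized at an intermediate spread of individual thresholds
sigmas = [0.0, 0.1, 0.3, 1.0, 3.0]
errs = [measurement_error(s) for s in sigmas]
```

With sigma = 0 every comparator answers identically, so the estimate collapses to a step function; with a very large sigma most thresholds fall outside (0, 1) and carry no information, leaving an interior optimum.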
Simple conditions constraining the set of quantum correlations
NASA Astrophysics Data System (ADS)
de Vicente, Julio I.
2015-09-01
The characterization of the set of quantum correlations in Bell scenarios is a problem of paramount importance for both the foundations of quantum mechanics and quantum information processing in the device-independent scenario. However, a clear-cut (physical or mathematical) characterization of this set remains elusive and many of its properties are still unknown. We provide here simple and general analytical conditions that are necessary for an arbitrary bipartite behavior to be quantum. Although the conditions are not sufficient, we illustrate the strength and nontriviality of these conditions with a few examples. Moreover, we provide several applications of this result: we prove a quantitative separation of the quantum set from extremal nonlocal no-signaling behaviors in several general scenarios, we provide a relation to obtain Tsirelson bounds for arbitrary Bell inequalities and a construction of Bell expressions whose maximal quantum value is attained by a maximally entangled state of any given dimension.
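The notion of a maximal quantum (Tsirelson) value can be illustrated on the standard CHSH inequality; this is the textbook construction, not the relation derived in the paper. The maximal quantum value 2√2 appears as the largest eigenvalue of the Bell operator for suitably chosen measurements, attained by a maximally entangled two-qubit state.

```python
import numpy as np

# Pauli observables and the measurement settings known to saturate CHSH
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

# CHSH Bell operator: A0(B0 + B1) + A1(B0 - B1)
bell_op = (np.kron(A0, B0) + np.kron(A0, B1)
           + np.kron(A1, B0) - np.kron(A1, B1))

# Largest eigenvalue = maximal quantum value 2*sqrt(2) (Tsirelson bound),
# compared with the local-hidden-variable bound of 2
tsirelson = np.linalg.eigvalsh(bell_op).max()
```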
Factors affecting maximal acid secretion
Desai, H. G.
1969-01-01
The mechanisms by which different factors affect the maximal acid secretion of the stomach are discussed with particular reference to nationality, sex, age, body weight or lean body mass, procedural details, mode of calculation, the nature, dose and route of administration of a stimulus, the synergistic action of another stimulus, drugs, hormones, electrolyte levels, anaemia or deficiency of the iron-dependent enzyme system, vagal continuity and parietal cell mass. PMID:4898322
RG flows of Quantum Einstein Gravity on maximally symmetric spaces
NASA Astrophysics Data System (ADS)
Demmel, Maximilian; Saueressig, Frank; Zanusso, Omar
2014-06-01
We use the Wetterich equation to study the renormalization group flow of f(R)-gravity in a three-dimensional, conformally reduced setting. Building on the exact heat kernel for maximally symmetric spaces, we obtain a partial differential equation which captures the scale dependence of f(R) for positive and, for the first time, negative scalar curvature. The effects of different background topologies are studied in detail and it is shown that they affect the gravitational RG flow in a way that is not visible in finite-dimensional truncations. Thus, while featuring local background independence, the functional renormalization group equation is sensitive to the topological properties of the background. The detailed analytical and numerical analysis of the partial differential equation reveals two globally well-defined fixed functionals with at most a finite number of relevant deformations. Their properties are remarkably similar to two of the fixed points identified within the R²-truncation of full Quantum Einstein Gravity. As a byproduct, we obtain a nice illustration of how the functional renormalization group realizes the "integrating out" of fluctuation modes on the three-sphere.
NASA Astrophysics Data System (ADS)
Fraser, Gordon
2009-01-01
In his kind review of my biography of the Nobel laureate Abdus Salam (December 2008, pp. 45-46), John W Moffat wrongly claims that Salam had "independently thought of the idea of parity violation in weak interactions".
Sensitivity to conversational maxims in deaf and hearing children.
Surian, Luca; Tedoldi, Mariantonia; Siegal, Michael
2010-09-01
We investigated whether access to a sign language affects the development of pragmatic competence in three groups of deaf children aged 6 to 11 years: native signers from deaf families receiving bimodal/bilingual instruction, native signers from deaf families receiving oralist instruction and late signers from hearing families receiving oralist instruction. The performance of these children was compared to a group of hearing children aged 6 to 7 years on a test designed to assess sensitivity to violations of conversational maxims. Native signers with bimodal/bilingual instruction were as able as the hearing children to detect violations that concern truthfulness (Maxim of Quality) and relevance (Maxim of Relation). On items involving these maxims, they outperformed both the late signers and native signers attending oralist schools. These results dovetail with previous findings on mindreading in deaf children and underscore the role of early conversational experience and instructional setting in the development of pragmatics. PMID:19719886
Maximizing algebraic connectivity in air transportation networks
NASA Astrophysics Data System (ADS)
Wei, Peng
In air transportation networks the robustness of a network regarding node and link failures is a key factor for its design. An experiment based on the real air transportation network is performed to show that the algebraic connectivity is a good measure for network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight routes addition or deletion is first formulated. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution with a longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find the feasible solution. The simulation results present the trade-off among the three methods. The case study on two air transportation networks of Virgin America and Southwest Airlines shows that the developed methods can be applied in real-world, large-scale networks. The algebraic connectivity maximization problem is extended by adding the leg number constraint, which considers the traveler's tolerance for the total number of connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small-scale networks and the near-optimal solution in large-scale networks. The third algebraic connectivity maximization problem with operating cost constraint is formulated. When the total operating cost budget is given, the number of edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the
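The quantity being maximized above can be sketched directly: the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian, and edge addition can only increase it. The exhaustive greedy scan below is an illustrative stand-in similar in spirit to, but not the same as, the MGP heuristic of the paper.

```python
import numpy as np
from itertools import combinations

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def greedy_add_edges(adj, k):
    """Add k edges one at a time, each time choosing the absent edge
    that most increases algebraic connectivity (illustrative greedy)."""
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(k):
        best, best_val = None, -np.inf
        for i, j in combinations(range(n), 2):
            if adj[i, j] == 0:
                adj[i, j] = adj[j, i] = 1.0
                val = algebraic_connectivity(adj)
                adj[i, j] = adj[j, i] = 0.0
                if val > best_val:
                    best, best_val = (i, j), val
        i, j = best
        adj[i, j] = adj[j, i] = 1.0
    return adj

# Toy network: a path on 5 nodes (connected, but with low lambda_2)
path = np.zeros((5, 5))
for i in range(4):
    path[i, i + 1] = path[i + 1, i] = 1.0
before = algebraic_connectivity(path)
after = algebraic_connectivity(greedy_add_edges(path, 2))
```

The brute-force scan costs one eigendecomposition per candidate edge, which is why the paper develops faster perturbation-based and SDP-relaxation methods for large networks.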
Knowledge discovery by accuracy maximization
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-01-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
Maximally coherent mixed states: Complementarity between maximal coherence and mixedness
NASA Astrophysics Data System (ADS)
Singh, Uttam; Bera, Manabendra Nath; Dhar, Himadri Shekhar; Pati, Arun Kumar
2015-05-01
Quantum coherence is a key element in topical research on quantum resource theories and a primary facilitator for the design and implementation of quantum technologies. However, the resourcefulness of quantum coherence is severely restricted by environmental noise, which is indicated by the loss of information in a quantum system, measured in terms of its purity. In this work, we derive the limits imposed by the mixedness of a quantum system on the amount of quantum coherence that it can possess. We obtain an analytical trade-off between the two quantities that upper-bounds the maximum quantum coherence for fixed mixedness in a system. This gives rise to a class of quantum states, "maximally coherent mixed states," whose coherence cannot be increased further under any purity-preserving operation. For the above class of states, quantum coherence and mixedness satisfy a complementarity relation, which is crucial to understanding the interplay between a resource and noise in open quantum systems.
Maximal energy extraction under discrete diffusive exchange
Hay, M. J.; Schiff, J.; Fisch, N. J.
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
Mixtures of maximally entangled pure states
NASA Astrophysics Data System (ADS)
Flores, M. M.; Galapon, E. A.
2016-09-01
We study the conditions under which mixtures of maximally entangled pure states remain entangled. We find that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also find that one can restrict the set of pure states to mix from in order to ensure that the resulting mixed state is genuinely entangled. Finally, we demonstrate how these results can be applied to detect entanglement in mixtures of the entangled pure states with noise.
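The equal-dimension case can be checked for qubits with the partial-transpose (PPT) criterion, which is decisive for 2x2 systems: mixing two Bell states with unequal weights leaves a negative partial-transpose eigenvalue (entangled), while the equal-weight mixture does not. This is a minimal sketch, not the authors' general construction.

```python
import numpy as np

def partial_transpose(rho, d=2):
    """Partial transpose over the second subsystem of a d x d bipartite state."""
    r = rho.reshape(d, d, d, d)          # indices (i, k; j, l) of |ik><jl|
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)

# Two Bell states in the two-qubit computational basis
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

def bell_mixture(p):
    """Mix two maximally entangled pure states with weights p and 1 - p."""
    return p * np.outer(phi, phi) + (1 - p) * np.outer(psi, psi)

def min_pt_eigenvalue(rho):
    """A negative value certifies entanglement (PPT criterion, 2x2 case)."""
    return np.linalg.eigvalsh(partial_transpose(rho)).min()

unequal = min_pt_eigenvalue(bell_mixture(0.7))   # negative: entangled
equal = min_pt_eigenvalue(bell_mixture(0.5))     # zero: separable
```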
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake.
Convertino, V A
1997-02-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake
NASA Technical Reports Server (NTRS)
Convertino, V. A.
1997-01-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake
ERIC Educational Resources Information Center
Kaplan, Suzanne; Wilson, Gordon
1978-01-01
The Independent Human Studies program at Schoolcraft College offers an alternative method of earning academic credits. Students delineate an area of study, pose research questions, gather resources, synthesize the information, state the thesis, choose the method of presentation, set schedules, and take responsibility for meeting deadlines. (MB)
NASA Astrophysics Data System (ADS)
Annan, James; Hargreaves, Julia
2016-04-01
In order to perform any Bayesian processing of a model ensemble, we need a prior over the ensemble members. In the case of multimodel ensembles such as CMIP, the historical approach of "model democracy" (i.e. equal weight for all models in the sample) is no longer credible (if it ever was) due to model duplication and inbreeding. The question of "model independence" is central to the question of prior weights. However, although this question has been repeatedly raised, it has not yet been satisfactorily addressed. Here I will discuss the issue of independence and present a theoretical foundation for understanding and analysing the ensemble in this context. I will also present some simple examples showing how these ideas may be applied and developed.
Maximal acceleration and radiative processes
NASA Astrophysics Data System (ADS)
Papini, Giorgio
2015-08-01
We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜_m = 2mc³/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜_m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with those for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.
Lighting spectrum to maximize colorfulness.
Masuda, Osamu; Nascimento, Sérgio M C
2012-02-01
The spectrum of modern illumination can be computationally tailored considering the visual effects of lighting. We investigated the spectral profiles of white illumination that maximize the theoretical limits of perceivable object colors. A large number of metamers with various degrees of smoothness were generated on and around the Planckian locus, and the volume in the CIELAB space of the optimal colors for each metamer was calculated. The optimal spectrum was found at a color temperature of around 5.7×10³ K, had three peaks, at both ends of the visible band and at around 510 nm, and was 25% better than daylight and 35% better than Thornton's prime color lamp. PMID:22297368
Varieties of maximal line subbundles
NASA Astrophysics Data System (ADS)
Oxbury, W. M.
2000-07-01
The point of this note is to make an observation concerning the variety M(E) parametrizing line subbundles of maximal degree in a generic stable vector bundle E over an algebraic curve C. M(E) is smooth and projective and its dimension is known in terms of the rank and degree of E and the genus of C (see Section 1). Our observation (Theorem 3·1) is that it has exactly the Chern numbers of an étale cover of the symmetric product S^δC, where δ = dim M(E). This suggests looking for a natural map M(E) → S^δC; however, it is not clear what such a map should be. Indeed, we exhibit an example in which M(E) is connected and deforms non-trivially with E, while there are only finitely many isomorphism classes of étale cover of the symmetric product. This shows that for a general deformation in the family M(E) cannot be such a cover (see Section 4). One may conjecture that M(E) is always connected. This would follow from ampleness of a certain Picard-type bundle on the Jacobian and there seems to be some evidence for expecting this, though we do not pursue this question here. Note that by forgetting the inclusion of a maximal line subbundle in E we get a natural map from M(E) to the Jacobian whose image W(E) is analogous to the classical (Brill-Noether) varieties of special line bundles. (In this sense M(E) is precisely a generalization of the symmetric products of C.) In Section 2 we give some results on W(E) which generalise standard Brill-Noether properties. These are due largely to Laumon, to whom the author is grateful for the reference [9].
Reif, Maria M.; Huenenberger, Philippe H.
2011-04-14
The raw single-ion solvation free energies computed from atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [M. A. Kastenholz and P. H. Huenenberger, J. Chem. Phys. 124, 224501 (2006); M. M. Reif and P. H. Huenenberger, J. Chem. Phys. 134, 144103 (2010)], the application of appropriate correction terms makes it possible to obtain methodology-independent results. The corrected values are then exclusively characteristic of the underlying molecular model, including in particular the ion-solvent van der Waals interaction parameters, which determine the effective ion size and the magnitude of its dispersion interactions. In the present study, the comparison of calculated (corrected) hydration free energies with experimental data (along with the consideration of ionic polarizabilities) is used to calibrate new sets of ion-solvent van der Waals (Lennard-Jones) interaction parameters for the alkali (Li⁺, Na⁺, K⁺, Rb⁺, Cs⁺) and halide (F⁻, Cl⁻, Br⁻, I⁻) ions along with either the SPC or the SPC/E water model. The experimental dataset is defined by conventional single-ion hydration free energies [Tissandier et al., J. Phys. Chem. A 102, 7787 (1998); Fawcett, J. Phys. Chem. B 103, 11181] along with three plausible choices for the (experimentally elusive) value of the absolute (intrinsic) hydration free energy of the proton, namely, ΔG⊖_hyd[H⁺] = -1100, -1075, or -1050 kJ mol⁻¹, resulting in three sets L, M, and H for the SPC water model and three sets L_E, M_E, and H_E for the SPC/E water model (alternative sets can easily be interpolated to intermediate ΔG⊖_hyd[H⁺] values). The residual sensitivity of the calculated (corrected) hydration free energies on the volume-pressure boundary conditions and on the effective
Larsen, Filip J; Weitzberg, Eddie; Lundberg, Jon O; Ekblom, Björn
2010-01-15
The anion nitrate, abundant in our diet, has recently emerged as a major pool of nitric oxide (NO) synthase-independent NO production. Nitrate is reduced stepwise in vivo to nitrite and then NO and possibly other bioactive nitrogen oxides. This reductive pathway is enhanced during low oxygen tension and acidosis. A recent study shows a reduction in oxygen consumption during submaximal exercise attributable to dietary nitrate. We went on to study the effects of dietary nitrate on various physiological and biochemical parameters during maximal exercise. Nine healthy, nonsmoking volunteers (age 30 ± 2.3 years, VO2max 3.72 ± 0.33 L/min) participated in this study, which had a randomized, double-blind crossover design. Subjects received dietary supplementation with sodium nitrate (0.1 mmol/kg/day) or placebo (NaCl) for 2 days before the test. This dose corresponds to the amount found in 100-300 g of a nitrate-rich vegetable such as spinach or beetroot. The maximal exercise tests consisted of an incremental exercise to exhaustion with combined arm and leg cranking on two separate ergometers. Dietary nitrate reduced VO2max from 3.72 ± 0.33 to 3.62 ± 0.31 L/min (P < 0.05). Despite the reduction in VO2max, time to exhaustion tended to increase after nitrate supplementation (524 ± 31 vs. 563 ± 30 s, P = 0.13). There was a correlation between the change in time to exhaustion and the change in VO2max (R² = 0.47, P = 0.04). A moderate dietary dose of nitrate significantly reduces VO2max during maximal exercise using a large active muscle mass. This reduction occurred with a trend toward increased time to exhaustion, implying that two separate mechanisms are involved: one that reduces VO2max and another that improves the energetic function of the working muscles. PMID:19913611
Maximal dinucleotide and trinucleotide circular codes.
Michel, Christian J; Pellegrini, Marco; Pirillo, Giuseppe
2016-01-21
We determine here the number and the list of maximal dinucleotide and trinucleotide circular codes. We prove that there is no maximal dinucleotide circular code having strictly fewer than 6 elements (the maximum size of dinucleotide circular codes). On the other hand, a computer calculation shows that there are maximal trinucleotide circular codes with fewer than 20 elements (the maximum size of trinucleotide circular codes). More precisely, there are maximal trinucleotide circular codes with 14, 15, 16, 17, 18 and 19 elements, and no maximal trinucleotide circular code has fewer than 14 elements. We give the same information for the maximal self-complementary dinucleotide and trinucleotide circular codes. The amino acid distribution of maximal trinucleotide circular codes is also determined. PMID:26382231
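The dinucleotide case is small enough to verify by brute force. The sketch below uses the known graph-theoretic characterization of circularity (a dinucleotide code is circular exactly when the digraph with an edge a → b for each word ab is acyclic) and confirms that every maximal dinucleotide circular code has exactly 6 elements; the count of 24 (one per total order of the four letters) follows from that graph picture and is asserted here as a sanity check rather than taken from the abstract.

```python
from itertools import combinations

LETTERS = "ACGT"
# A word with a repeated letter (e.g. "AA") can never belong to a circular
# code, so only the 12 dinucleotides with two distinct letters matter.
WORDS = [a + b for a in LETTERS for b in LETTERS if a != b]

def is_circular(code):
    """Graph test: a dinucleotide code is circular exactly when the digraph
    with an edge a -> b for every word ab in the code is acyclic."""
    edges = {(w[0], w[1]) for w in code}
    nodes = set(LETTERS)
    while True:
        # Repeatedly strip vertices with no outgoing edge; only the edges of
        # a cycle can survive this process.
        sinks = {n for n in nodes if all(u != n for u, v in edges)}
        if not sinks:
            break
        nodes -= sinks
        edges = {(u, v) for u, v in edges if v not in sinks}
    return not edges

# Brute force over all nonempty candidate codes (2**12 - 1 subsets).
circular = [set(c) for r in range(1, len(WORDS) + 1)
            for c in combinations(WORDS, r) if is_circular(c)]

def is_maximal(code):
    """Maximal: adding any further dinucleotide destroys circularity.
    (Adding a repeated-letter word always does, so WORDS suffices.)"""
    return all(not is_circular(code | {w}) for w in WORDS if w not in code)

maximal = [c for c in circular if is_maximal(c)]
```

Every maximal code found this way has 6 elements, in agreement with the result stated above.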
Many parameter sets in a multicompartment model oscillator are robust to temperature perturbations.
Caplan, Jonathan S; Williams, Alex H; Marder, Eve
2014-04-01
Neurons in cold-blooded animals remarkably maintain their function over a wide range of temperatures, even though the rates of many cellular processes increase twofold, threefold, or many-fold for each 10°C increase in temperature. Moreover, the kinetics of ion channels, maximal conductances, and Ca(2+) buffering each have independent temperature sensitivities, suggesting that the balance of biological parameters can be disturbed by even modest temperature changes. In stomatogastric ganglia of the crab Cancer borealis, the duty cycle of the bursting pacemaker kernel is highly robust between 7 and 23°C (Rinberg et al., 2013). We examined how this might be achieved in a detailed conductance-based model in which exponential temperature sensitivities were given by Q10 parameters. We assessed the temperature robustness of this model across 125,000 random sets of Q10 parameters. To examine how robustness might be achieved across a variable population of animals, we repeated this analysis across six sets of maximal conductance parameters that produced similar activity at 11°C. Many permissible combinations of maximal conductance and Q10 parameters were found over broad regions of parameter space and relatively few correlations among Q10s were observed across successful parameter sets. A significant portion of Q10 sets worked for at least 3 of the 6 maximal conductance sets (∼11.1%). Nonetheless, no Q10 set produced robust function across all six maximal conductance sets, suggesting that maximal conductance parameters critically contribute to temperature robustness. Overall, these results provide insight into principles of temperature robustness in neuronal oscillators. PMID:24695714
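The temperature sensitivities in the model follow the standard exponential Q10 law. The helper below is a generic illustration of that scaling (with the 11°C reference temperature used in the study), not the authors' conductance-based model.

```python
def q10_scale(rate_ref, q10, temp_c, temp_ref_c=11.0):
    """Scale a reference rate (measured at temp_ref_c, in deg C) to temp_c
    using the standard exponential Q10 law:
        rate(T) = rate_ref * q10 ** ((T - T_ref) / 10)."""
    return rate_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)

# A Q10 of 2 doubles the rate for every 10 deg C increase:
assert q10_scale(1.0, 2.0, 21.0) == 2.0   # +10 deg C -> x2
assert q10_scale(1.0, 2.0, 31.0) == 4.0   # +20 deg C -> x4
```

Because each channel kinetic, maximal conductance, and buffering process carries its own Q10, even modest temperature shifts rescale the parameters relative to one another, which is exactly the balance the paper probes.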
Maximizing the optical network capacity.
Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I
2016-03-01
Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572
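As a rough illustration of why the intensity-dependent Kerr nonlinearity caps capacity, a widely used simplification (the Gaussian-noise picture) treats nonlinear interference as additive noise growing with the cube of launch power, so the effective SNR, and hence capacity, peaks at a finite power. The noise and nonlinearity coefficients below are invented for illustration and are not taken from the paper.

```python
import math

def capacity_gn(power, ase_noise, eta):
    """Spectral efficiency (bit/s/Hz, one polarization) in a simplified
    Gaussian-noise picture of the fibre channel: Kerr-induced nonlinear
    interference is modelled as additive noise scaling as eta * P**3."""
    snr = power / (ase_noise + eta * power ** 3)
    return math.log2(1.0 + snr)

# Sweep launch power (arbitrary units): capacity rises with power, peaks,
# then falls as the cubic nonlinear-interference term takes over.
powers = [1e-4 * 1.15 ** k for k in range(60)]
caps = [capacity_gn(p, ase_noise=1e-3, eta=1e3) for p in powers]
best = max(range(len(caps)), key=caps.__getitem__)
```

Unlike the linear AWGN channel, where capacity grows without bound in power, the sweep exhibits an interior optimum launch power, which is the "nonlinear limit" the paper discusses techniques to push past.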
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Maximal Oxygen Intake and Maximal Work Performance of Active College Women.
ERIC Educational Resources Information Center
Higgs, Susanne L.
Maximal oxygen intake and associated physiological variables were measured during strenuous exercise on women subjects (N=20 physical education majors). Following assessment of maximal oxygen intake, all subjects underwent a performance test at the work level which had elicited their maximal oxygen intake. Mean maximal oxygen intake was 41.32…
NASA Technical Reports Server (NTRS)
2005-01-01
This is the Spirit 'Independence' panorama, acquired on martian days, or sols, 536 to 543 (July 6 to 13, 2005), from a position in the 'Columbia Hills' near the summit of 'Husband Hill.' The summit of 'Husband Hill' is the peak near the right side of this panorama and is about 100 meters (328 feet) away from the rover and about 30 meters (98 feet) higher in elevation. The rocky outcrops downhill and on the left side of this mosaic include 'Larry's Lookout' and 'Cumberland Ridge,' which Spirit explored in April, May, and June of 2005.
The panorama spans 360 degrees and consists of 108 individual images, each acquired with five filters of the rover's panoramic camera. The approximate true color of the mosaic was generated using the camera's 750-, 530-, and 480-nanometer filters. During the 8 martian days, or sols, that it took to acquire this image, the lighting varied considerably, partly because of imaging at different times of sol, and partly because of small sol-to-sol variations in the dustiness of the atmosphere. These slight changes produced some image seams and rock shadows. These seams have been eliminated from the sky portion of the mosaic to better simulate the vista a person standing on Mars would see. However, it is often not possible or practical to smooth out such seams for regions of rock, soil, rover tracks or solar panels. Such is the nature of acquiring and assembling large panoramas from the rovers.
Moving multiple sinks through wireless sensor networks for lifetime maximization.
Petrioli, Chiara; Carosi, Alessio; Basagni, Stefano; Phillips, Cynthia Ann
2008-01-01
Unattended sensor networks typically watch for some phenomena such as volcanic events, forest fires, pollution, or movements in animal populations. Sensors report to a collection point periodically or when they observe reportable events. When sensors are too far from the collection point to communicate directly, other sensors relay messages for them. If the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then dead. We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized heuristic that produces sink movement schedules that produce network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed heuristic that produces lifetimes at most 25.3% below the upper bound. More specifically, we formulate a linear program (LP) that is a relaxation of the scheduling problem. The variables are naturally continuous, but the LP relaxes some constraints. The LP has an exponential number of constraints, but we can satisfy them all by enforcing only a polynomial number using a separation algorithm. This separation algorithm is a p-median facility location problem, which we can solve efficiently in practice for huge instances using integer programming technology. This LP selects a set of good sensor configurations. Given the solution to the LP, we can find a feasible schedule by selecting a subset of these configurations, ordering them
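The lifetime LP at the core of this approach can be illustrated on a tiny invented instance: the variables are the times spent in each sink configuration, and each node's total relaying energy must fit within its battery. With only two variables we can enumerate the vertices of the feasible polygon instead of calling an LP solver; this sketch is not the authors' formulation or code, and the drain rates are made up.

```python
from itertools import combinations

# Toy instance: two sink configurations, two sensor nodes.
# drain[i][k] = energy per unit time node i spends relaying while the
# sinks sit in configuration k (illustrative numbers only).
drain = [[1.0, 0.2],   # node 0 is heavily loaded in configuration 0
         [0.2, 1.0]]   # node 1 is heavily loaded in configuration 1
battery = [1.0, 1.0]

# LP: maximize t0 + t1  s.t.  drain[i][0]*t0 + drain[i][1]*t1 <= battery[i],
# t_k >= 0.  An LP optimum lies at a vertex of the feasible region, so we
# intersect all pairs of boundary lines (a*t0 + b*t1 = c) and keep the
# feasible intersection points.
lines = [(drain[i][0], drain[i][1], battery[i]) for i in range(2)]
lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # axes t0 = 0 and t1 = 0

def intersect(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel lines
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(pt):
    t0, t1 = pt
    if t0 < -1e-9 or t1 < -1e-9:
        return False
    return all(drain[i][0] * t0 + drain[i][1] * t1 <= battery[i] + 1e-9
               for i in range(2))

vertices = [p for l1, l2 in combinations(lines, 2)
            for p in [intersect(l1, l2)] if p is not None and feasible(p)]
best_lifetime = max(t0 + t1 for t0, t1 in vertices)
```

Parking the sinks in either single configuration yields a lifetime of 1, while splitting time between the two configurations (5/6 each) balances the relay burden and achieves 5/3, the load-balancing effect the paper exploits at scale.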
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
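A maximal random packing such as MPS can be approximated by classic dart throwing, which conveys the two constraints named above: a minimum separation between disk centres, with coverage approaching maximality as rejections accumulate. The stopping rule below is a heuristic, so the result is only approximately maximal; this is a generic sketch, not the authors' tuning framework.

```python
import math
import random

def dart_throwing(radius, tries_without_accept=2000, seed=1):
    """Approximate a maximal Poisson-disk sampling of the unit square by
    rejection ('dart throwing'): accept a uniform random point if it lies
    at least `radius` from every accepted centre, and stop after many
    consecutive rejections.  True MPS algorithms guarantee maximality;
    this heuristic only approaches it."""
    rng = random.Random(seed)
    points, misses = [], 0
    while misses < tries_without_accept:
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= radius for q in points):
            points.append(p)
            misses = 0
        else:
            misses += 1
    return points

pts = dart_throwing(0.1)
```

The density-tuning framework described above starts from such a conflict-free sample and then relocates, injects, or ejects disks while preserving the separation and coverage invariants this sketch only establishes once.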
Predicted maximal heart rate for upper body exercise testing.
Hill, M; Talbot, C; Price, M
2016-03-01
Age-predicted maximal heart rate (HRmax) equations are commonly used for prescribing exercise regimens, as criteria for achieving maximal exertion, and for diagnostic exercise testing. Despite the growing popularity of upper body exercise in both healthy and clinical settings, no recommendations are available for exercise modes using the smaller upper body muscle mass. The purpose of this study was to determine how well commonly used age-adjusted prediction equations for HRmax estimate actual HRmax for upper body exercise in healthy young and older adults. A total of 30 young (age: 20 ± 2 years, height: 171.9 ± 32.8 cm, mass: 77.7 ± 12.6 kg) and 20 elderly adults (age: 66 ± 6 years, height: 162 ± 8.1 cm, mass: 65.3 ± 12.3 kg) undertook maximal incremental exercise tests on a conventional arm crank ergometer. Age-adjusted maximal heart rate was calculated using prediction equations based on leg exercise and compared with measured HRmax data for the arms. Maximal HR for arm exercise was significantly overpredicted by age-adjusted prediction equations in both young and older adults. Subtracting 10-20 beats min⁻¹ from conventional prediction equations provides a reasonable estimate of HRmax for upper body exercise in healthy older and younger adults. PMID:25319169
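The recommended adjustment can be sketched as a small helper. The `220 - age` formula is the conventional leg-exercise prediction (other equations, such as Tanaka's `208 - 0.7 * age`, exist), and the 15-beat default offset is simply the midpoint of the 10-20 beats min⁻¹ range reported here.

```python
def hrmax_legs(age_years):
    """Conventional age-predicted maximal heart rate (beats/min)."""
    return 220 - age_years

def hrmax_arms(age_years, offset=15):
    """Estimated HRmax for upper-body (arm-crank) exercise: the
    conventional prediction minus 10-20 beats/min, per the study's
    recommendation (default: midpoint of that range)."""
    return hrmax_legs(age_years) - offset

# A 20-year-old: ~200 beats/min for leg exercise, ~185 for arm cranking.
```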
Does mental exertion alter maximal muscle activation?
Rozand, Vianney; Pageaux, Benjamin; Marcora, Samuele M.; Papaxanthis, Charalambos; Lepers, Romuald
2014-01-01
Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 min each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with 10 intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 min). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors. PMID:25309404
Comparison of Hardy-Littlewood and dyadic maximal functions on spaces of homogeneous type
NASA Astrophysics Data System (ADS)
Aimar, Hugo; Bernardis, Ana; Iaffei, Bibiana
2005-12-01
We obtain a comparison of the level sets for two maximal functions on a space of homogeneous type: the Hardy-Littlewood maximal function of mean values over balls and the dyadic maximal function of mean values over the dyadic sets introduced by M. Christ in [M. Christ, A T(b) theorem with remarks on analytic capacity and the Cauchy integral, Colloq. Math. 60/61 (1990) 601-628]. As applications to the theory of Ap weights on this setting, we compare the standard and the dyadic Muckenhoupt classes and we give an alternative proof of reverse Hölder type inequalities.
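For the reader's convenience, the two operators being compared are, in standard notation (with μ the measure of the space of homogeneous type, the first supremum over balls B containing x, and the second over Christ's dyadic sets Q containing x):

```latex
Mf(x) \;=\; \sup_{B \ni x} \frac{1}{\mu(B)} \int_{B} |f| \, d\mu ,
\qquad
M_{\mathcal{D}} f(x) \;=\; \sup_{\substack{Q \in \mathcal{D} \\ Q \ni x}} \frac{1}{\mu(Q)} \int_{Q} |f| \, d\mu ,
```

where 𝒟 denotes Christ's family of dyadic sets; the comparison of level sets controls each maximal function by the other up to constants depending only on the geometry of the space.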
Inflation in maximal gauged supergravities
Kodama, Hideo; Nozawa, Masato
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of the recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes its value in the sum of the 36 and 36' representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic form for the SO(3)×SO(3)-invariant subsectors of the SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of the 6-dimensional scalar fields in this sector near the Dall'Agata-Inverso de Sitter critical point, at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall'Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639 ± 0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-to-scalar ratio predicted by this model is around 10⁻³ and is close to the value in the Starobinsky model.
Maximally Expressive Modeling of Operations Tasks
NASA Technical Reports Server (NTRS)
Jaap, John; Richardson, Lea; Davis, Elizabeth
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.
Maximizing TDRS Command Load Lifetime
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2002-01-01
was therefore the key to achieving this goal. This goal was eventually realized through development of an Excel spreadsheet tool called EMMIE (Excel Mean Motion Interactive Estimation). EMMIE utilizes ground ephemeris nodal data to perform a least-squares fit to inferred mean anomaly as a function of time, thus generating an initial estimate for mean motion. This mean motion in turn drives a plot of estimated downtrack position difference versus time. The user can then manually iterate the mean motion and determine an optimal value that will maximize command load lifetime. Once this optimal value is determined, the mean motion initially calculated by the command builder tool is overwritten with the new optimal value, and the command load is built for uplink to ISS. EMMIE also provides the capability for command load lifetime to be tracked through multiple TDRS ephemeris updates. Using EMMIE, TDRS command load lifetimes of approximately 30 days have been achieved.
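The least-squares step described above can be sketched generically: fit a line to (time, mean anomaly) samples and take the slope as the mean-motion estimate. The data below are synthetic and the function is a textbook ordinary-least-squares slope, not the actual tool's code.

```python
def fit_mean_motion(times, mean_anomalies):
    """Ordinary least-squares slope of (unwrapped) mean anomaly vs. time;
    the slope estimates the mean motion (e.g. in rev/day)."""
    n = len(times)
    t_bar = sum(times) / n
    m_bar = sum(mean_anomalies) / n
    num = sum((t - t_bar) * (m - m_bar)
              for t, m in zip(times, mean_anomalies))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

# Synthetic nodal data: roughly one revolution per day plus small errors.
times = [0.0, 1.0, 2.0, 3.0, 4.0]          # days
anomalies = [0.001, 1.0030, 2.0055, 3.0080, 4.0110]  # revolutions
rate = fit_mean_motion(times, anomalies)
```

The user-driven iteration EMMIE adds on top of this initial estimate is what extends the command load's useful lifetime.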
Specificity of a Maximal Step Exercise Test
ERIC Educational Resources Information Center
Darby, Lynn A.; Marsh, Jennifer L.; Shewokis, Patricia A.; Pohlman, Roberta L.
2007-01-01
To adhere to the principle of "exercise specificity," exercise testing should be completed using the same physical activity that is performed during exercise training. The present study was designed to assess whether aerobic step exercisers have a greater maximal oxygen consumption (max VO₂) when tested using an activity specific, maximal step…
The maximal affinity of ligands
Kuntz, I. D.; Chen, K.; Sharp, K. A.; Kollman, P. A.
1999-01-01
We explore the question of what are the best ligands for macromolecular targets. A survey of experimental data on a large number of the strongest-binding ligands indicates that the free energy of binding increases with the number of nonhydrogen atoms with an initial slope of ≈−1.5 kcal/mol (1 cal = 4.18 J) per atom. For ligands that contain more than 15 nonhydrogen atoms, the free energy of binding increases very little with relative molecular mass. This nonlinearity is largely ascribed to nonthermodynamic factors. An analysis of the dominant interactions suggests that van der Waals interactions and hydrophobic effects provide a reasonable basis for understanding binding affinities across the entire set of ligands. Interesting outliers that bind unusually strongly on a per atom basis include metal ions, covalently attached ligands, and a few well known complexes such as biotin–avidin. PMID:10468550
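The trend reported above can be written as a rough piecewise model: roughly -1.5 kcal/mol per non-hydrogen atom up to about 15 atoms, after which the best observed affinities barely improve. The hard plateau below is a caricature of the survey data for illustration, not a fitted equation from the paper.

```python
def max_binding_free_energy(n_heavy_atoms, slope=-1.5, plateau_atoms=15):
    """Rough empirical ceiling on binding free energy (kcal/mol) as a
    function of ligand size: about -1.5 kcal/mol per non-hydrogen atom
    up to ~15 atoms, nearly flat beyond that."""
    return slope * min(n_heavy_atoms, plateau_atoms)

# 10 heavy atoms -> about -15 kcal/mol; beyond 15 atoms the bound flattens.
```

Outliers such as metal ions, covalent ligands, and biotin-avidin bind well below (more favorably than) this per-atom ceiling.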
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-01
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. PMID:24530825
Independent task Fourier filters
NASA Astrophysics Data System (ADS)
Caulfield, H. John
2001-11-01
Since the early 1960s, a major part of optical computing systems has been Fourier pattern recognition, which takes advantage of high-speed filter changes to enable powerful nonlinear discrimination in 'real time.' Because each filter has a task quite independent of the tasks of the other filters, the filters can be applied and evaluated in parallel or, in a simple approach I describe, in sequence very rapidly. Thus I use the name ITFF (independent task Fourier filter). These filters can also break very complex discrimination tasks into easily handled parts, so the wonderful space invariance properties of Fourier filtering need not be sacrificed to achieve high discrimination and good generalizability even for ultracomplex discrimination problems. The training procedure proceeds sequentially, as the task for a given filter is defined a posteriori by declaring it to be the discrimination of particular members of set A from all members of set B with sufficient margin. That is, we set the threshold to achieve the desired margin and note the A members discriminated by that threshold. Discriminating those A members from all members of B becomes the task of that filter. Those A members are then removed from the set A, so no other filter will be asked to perform that already accomplished task.
The effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake.
Oh, Deuk-Ja; Hong, Hyeon-Ok; Lee, Bo-Ae
2016-02-01
The purpose of this study is to investigate the effects of strenuous exercise on resting heart rate, blood pressure, and maximal oxygen uptake. To achieve this purpose, a total of 30 subjects were selected: 15 people who performed continued regular exercise and 15 people as the control group. With regard to data processing, IBM SPSS Statistics ver. 21.0 was used to calculate means and standard deviations. The difference in mean change between groups was verified through an independent t-test. As a result, there were significant differences in resting heart rate, maximal heart rate, maximal systolic blood pressure, and maximal oxygen uptake. However, the maximal systolic blood pressure reached levels indicative of exercise-induced hypertension; risk screening for this condition through regular exercise stress testing therefore appears necessary. PMID:26933659
Maximizing your return on people.
Bassi, Laurie; McMurrer, Daniel
2007-03-01
Though most traditional HR performance metrics don't predict organizational performance, alternatives simply have not existed--until now. During the past ten years, researchers Laurie Bassi and Daniel McMurrer have worked to develop a system that allows executives to assess human capital management (HCM) and to use those metrics both to predict organizational performance and to guide organizations' investments in people. The new framework is based on a core set of HCM drivers that fall into five major categories: leadership practices, employee engagement, knowledge accessibility, workforce optimization, and organizational learning capacity. By employing rigorously designed surveys to score a company on the range of HCM practices across the five categories, it's possible to benchmark organizational HCM capabilities, identify HCM strengths and weaknesses, and link improvements or back-sliding in specific HCM practices with improvements or shortcomings in organizational performance. The process requires determining a "maturity" score for each practice, based on a scale of 1 (low) to 5 (high). Over time, evolving maturity scores from multiple surveys can reveal progress in each of the HCM practices and help a company decide where to focus improvement efforts that will have a direct impact on performance. The authors draw from their work with American Standard, South Carolina's Beaufort County School District, and a bevy of financial firms to show how improving HCM scores led to increased sales, safety, academic test scores, and stock returns. Bassi and McMurrer urge HR departments to move beyond the usual metrics and begin using HCM measurement tools to gauge how well people are managed and developed throughout the organization. In this new role, according to the authors, HR can take on strategic responsibility and ensure that superior human capital management becomes central to the organization's culture. PMID:17348175
Matching, maximizing, and hill-climbing
Hinson, John M.; Staddon, J. E. R.
1983-01-01
In simple situations, animals consistently choose the better of two alternatives. On concurrent variable-interval variable-interval and variable-interval variable-ratio schedules, they approximately match aggregate choice and reinforcement ratios. The matching law attempts to explain the latter result but does not address the former. Hill-climbing rules such as momentary maximizing can account for both. We show that momentary maximizing constrains molar choice to approximate matching; that molar choice covaries with pigeons' momentary-maximizing estimate; and that the “generalized matching law” follows from almost any hill-climbing rule. PMID:16812350
Are all maximally entangled states pure?
NASA Astrophysics Data System (ADS)
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study whether all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions that lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized from any other system.
MAXIM Pathfinder x-ray interferometry mission
NASA Astrophysics Data System (ADS)
Gendreau, Keith C.; Cash, Webster C.; Shipley, Ann F.; White, Nicholas
2003-03-01
The MAXIM Pathfinder (MP) mission is under study as a scientific and technical stepping stone for the full MAXIM x-ray interferometry mission. While the full MAXIM will resolve the event horizons of black holes with 0.1 microarcsecond imaging, MP will address scientific and technical issues as a 100 microarcsecond imager with some capability to resolve microarcsecond structure. We will present the primary science goals of MP, which include resolving stellar coronae and distinguishing between jets and accretion disks in AGN. This paper will also present the baseline design of MP and overview the challenging technical requirements and solutions for formation flying, target acquisition, and metrology.
Maximal hypersurfaces in asymptotically stationary spacetimes
NASA Astrophysics Data System (ADS)
Chrusciel, Piotr T.; Wald, Robert M.
1992-12-01
The purpose of this work is to extend existing results on the existence of maximal hypersurfaces to encompass some situations considered by other authors. The existence of maximal hypersurfaces in asymptotically stationary spacetimes is proven. Existence of maximal surfaces and of foliations by maximal hypersurfaces is proven in two classes of asymptotically flat spacetimes which possess a one-parameter group of isometries whose orbits are timelike 'near infinity'. The first class consists of strongly causal asymptotically flat spacetimes which contain no 'black hole or white hole' (but may contain 'ergoregions' where the Killing orbits fail to be timelike). The second class of spacetimes possesses a black hole and a white hole, with the black and white hole horizons intersecting in a compact 2-surface S.
Gaussian maximally multipartite-entangled states
Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano
2009-12-15
We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n = 2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n ≤ 7.
AUC-Maximizing Ensembles through Metalearning
LeDell, Erin; van der Laan, Mark J.; Peterson, Maya
2016-01-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by using an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms for maximizing the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
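The metalearning step described in this abstract can be illustrated with a minimal sketch: directly searching the convex mixing weight of two base learners' cross-validated predictions to maximize a rank-based AUC. The function names and toy data below are illustrative assumptions, not taken from the paper's software, and a grid search stands in for the nonlinear optimizers the paper evaluates.

```python
# Hedged sketch of AUC-maximizing metalearning: pick the convex mixing weight
# of two base learners' cross-validated predictions that maximizes a
# rank-based AUC. Names and data are illustrative, not the paper's software.

def auc(y_true, scores):
    """Rank-based AUC: P(score_pos > score_neg), counting ties as 1/2."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_weight(y_true, pred_a, pred_b, grid=101):
    """Grid-search w in [0, 1] for the blend w*pred_a + (1-w)*pred_b."""
    best_w, best_auc = 0.0, -1.0
    for i in range(grid):
        w = i / (grid - 1)
        blend = [w * a + (1 - w) * b for a, b in zip(pred_a, pred_b)]
        score = auc(y_true, blend)
        if score > best_auc:
            best_w, best_auc = w, score
    return best_w, best_auc

y = [0, 0, 1, 1, 0, 1]
a = [0.1, 0.4, 0.35, 0.8, 0.2, 0.3]  # base learner A's CV predictions
b = [0.3, 0.2, 0.6, 0.5, 0.1, 0.7]   # base learner B's CV predictions
w, score = best_weight(y, a, b)
print(w, score)
```

Because the endpoints w = 0 and w = 1 reproduce each base learner alone, the blended AUC can never fall below the better base learner's AUC on the same data.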
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits. PMID:25899152
NASA Astrophysics Data System (ADS)
Jois, Manjunath Holaykoppa Nanjunda
The conventional Influence Maximization problem is the problem of finding a team (a small subset) of seed nodes in a social network that would maximize the spread of influence over the whole network. This paper considers a lottery system aimed at maximizing the awareness spread to promote energy conservation behavior as a stochastic Influence Maximization problem with constraints ensuring lottery fairness. The resulting Multi-Team Influence Maximization problem involves assigning probabilities to multiple teams of seeds (interpreted as lottery winners) to maximize the expected awareness spread. This variation of the Influence Maximization problem is modeled as a linear program; however, enumerating all possible teams is hard because the number of feasible teams grows exponentially with the network size. To address this challenge, we develop a column generation based approach that solves the problem with a limited number of candidate teams, where new candidates are generated and added to the problem iteratively. We adopt a piecewise linear function to model the impact of including a new team, so as to pick only teams that can improve the existing solution. We demonstrate that with this approach we can solve such influence maximization problems to optimality, and we perform a computational study with real-world social network data sets to showcase the efficiency of the approach in finding lottery designs for optimal awareness spread. Lastly, we explore other scenarios where this model can be used to optimally solve otherwise hard-to-solve influence maximization problems.
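While the column-generation LP itself is too involved for a short sketch, the expected-spread objective that such formulations optimize can be illustrated with a Monte Carlo estimator under the standard independent-cascade diffusion model. This model, the toy graph, and the activation probability below are assumptions for illustration; the paper's own diffusion and lottery model differ in detail.

```python
import random

def simulate_spread(graph, seeds, p, rng):
    """One independent-cascade run: each newly activated node gets one
    chance to activate each inactive out-neighbor with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, p=0.2, trials=1000, seed=0):
    """Monte Carlo estimate of the expected number of activated nodes."""
    rng = random.Random(seed)
    return sum(simulate_spread(graph, seeds, p, rng) for _ in range(trials)) / trials

# Toy 5-node directed network (adjacency lists), hypothetical.
graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print(expected_spread(graph, {0}))
```

In the Multi-Team setting, an estimator like this would be evaluated for each candidate team, and the LP would assign winning probabilities across teams.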
Tseng, Kuo-Wei; Tseng, Wei-Chin; Lin, Ming-Ju; Chen, Hsin-Lian; Nosaka, Kazunori; Chen, Trevor C
2016-01-01
This study investigated whether maximal voluntary isometric contractions (MVIC) performed before maximal eccentric contractions (MaxEC) would attenuate muscle damage of the knee extensors. Untrained men were assigned to an experimental group that performed 6 sets of 10 MVIC at 90° knee flexion 2 weeks before 6 sets of 10 MaxEC or to a control group that performed MaxEC only (n = 13/group). Changes in muscle damage markers were assessed from before to 5 days after each exercise. Small but significant changes in maximal voluntary concentric contraction torque, range of motion (ROM) and plasma creatine kinase (CK) activity were evident from immediately to 2 days post-MVIC (p < 0.05), but other variables (e.g. thigh girth, myoglobin concentration, B-mode echo intensity) did not change significantly. Changes in all variables after MaxEC were smaller (p < 0.05), by 45% (soreness) to 67% (CK), for the experimental group than for the control group. These results suggest that MVIC conferred a potent protective effect against MaxEC-induced muscle damage. PMID:27366814
Associations of maximal strength and muscular endurance with cardiovascular risk factors.
Vaara, J P; Fogelholm, M; Vasankari, T; Santtila, M; Häkkinen, K; Kyröläinen, H
2014-04-01
The aim was to study the associations of maximal strength and muscular endurance with single and clustered cardiovascular risk factors. Muscular endurance, maximal strength, cardiorespiratory fitness and waist circumference were measured in 686 young men (25±5 years). Cardiovascular risk factors (plasma glucose, serum high- and low-density lipoprotein cholesterol, triglycerides, blood pressure) were determined. The risk factors were transformed to z-scores, and the mean of these z-scores formed a clustered cardiovascular risk factor score. Muscular endurance was inversely associated with triglycerides, s-LDL cholesterol, glucose and blood pressure (β = -0.09 to -0.23, p<0.05), and positively with s-HDL cholesterol (β = 0.17, p<0.001), independent of cardiorespiratory fitness. Muscular endurance was negatively associated with the clustered cardiovascular risk factor independent of cardiorespiratory fitness (β = -0.26, p<0.05), whereas maximal strength was not associated with any of the cardiovascular risk factors or the clustered cardiovascular risk factor independent of cardiorespiratory fitness. Furthermore, cardiorespiratory fitness was inversely associated with triglycerides, s-LDL cholesterol and the clustered cardiovascular risk factor (β = -0.14 to -0.24, p<0.005), and positively with s-HDL cholesterol (β = 0.11, p<0.05), independent of muscular fitness. This cross-sectional study demonstrated that in young men muscular endurance and cardiorespiratory fitness were independently associated with the clustering of cardiovascular risk factors, whereas maximal strength was not. PMID:24022567
Independence Generalizing Monotone and Boolean Independences
NASA Astrophysics Data System (ADS)
Hasebe, Takahiro
2011-01-01
We define conditionally monotone independence in two states, which interpolates between monotone and Boolean independence. This independence is associative and therefore leads to a natural probability theory in a non-commutative algebra.
Caffeine, maximal power output and fatigue.
Williams, J H; Signorile, J F; Barnes, W S; Henrich, T W
1988-01-01
The purpose of this investigation was to determine the effects of caffeine ingestion on maximal power output and fatigue during short term, high intensity exercise. Nine adult males performed 15 s maximal exercise bouts 60 min after ingestion of caffeine (7 mg.kg-1) or placebo. Exercise bouts were carried out on a modified cycle ergometer which allowed power output to be computed for each one-half pedal stroke via microcomputer. Peak power output under caffeine conditions was not significantly different from that obtained following placebo ingestion. Similarly, time to peak power, total work, power fatigue index and power fatigue rate did not differ significantly between caffeine and placebo conditions. These results suggest that caffeine ingestion does not increase one's maximal ability to generate power. Further, caffeine does not alter the rate or magnitude of fatigue during high intensity, dynamic exercise. PMID:3228680
Maximal Holevo Quantity Based on Weak Measurements
Wang, Yao-Kun; Fei, Shao-Ming; Wang, Zhi-Xi; Cao, Jun-Peng; Fan, Heng
2015-01-01
The Holevo bound is a keystone in many applications of quantum information theory. We propose the “maximal Holevo quantity for weak measurements” as a generalization of the maximal Holevo quantity, which is defined by the optimal projective measurements. Weak measurements are relevant in scenarios where only weak measurements can be performed, for example because the system is macroscopic, or where one intentionally uses them so that the disturbance on the measured system can be controlled, for example in quantum key distribution protocols. We systematically evaluate the maximal Holevo quantity for weak measurements for Bell-diagonal states and find a series of results. Furthermore, we find that weak measurements can be realized by noise and projective measurements. PMID:26090962
An information maximization model of eye movements
NASA Technical Reports Server (NTRS)
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
Optimum array design to maximize Fisher information for bearing estimation.
Tuladhar, Saurav R; Buck, John R
2011-11-01
Source bearing estimation is a common application of linear sensor arrays. The Cramér-Rao bound (CRB) sets a lower bound on the achievable mean square error (MSE) of any unbiased bearing estimate. In the spatially white noise case, the CRB is minimized by placing half of the sensors at each end of the array. However, many realistic ocean environments have a mixture of both white noise and spatially correlated noise. In shallow water environments, the correlated ambient noise can be modeled as cylindrically isotropic. This research designs a fixed-aperture linear array to maximize the bearing Fisher information (FI) under these noise conditions. The FI is the inverse of the CRB, so maximizing the FI minimizes the CRB. The elements of the optimum array are located closer to the array ends than uniform spacing, but are not as extreme as in the white noise case. The optimum array results from a trade-off between maximizing the array bearing sensitivity and minimizing output noise power variation over the bearing. Depending on the source bearing, the resulting improvement in MSE performance of the optimized array over a uniform array is equivalent to a gain of 2-5 dB in input signal-to-noise ratio. PMID:22087908
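The white-noise endpoint result quoted in this abstract can be sketched with a toy proxy: under spatially white noise, the bearing Fisher information of a linear array scales with the array's second moment about its centroid, which end-clustered placements maximize. This is a textbook scaling assumed for illustration, and the sensor layouts below are hypothetical, not the paper's optimized designs for mixed noise.

```python
# Toy proxy for the geometry trade-off: in spatially white noise, bearing
# Fisher information scales with the array's second moment about its
# centroid (assumed textbook scaling; layouts below are hypothetical).

def bearing_fi_proxy(positions):
    """Second moment of sensor positions about their centroid."""
    c = sum(positions) / len(positions)
    return sum((x - c) ** 2 for x in positions)

uniform = [i / 7 for i in range(8)]    # 8 sensors spaced uniformly, aperture 1
endpoints = [0.0] * 4 + [1.0] * 4      # half the sensors at each end
print(bearing_fi_proxy(uniform), bearing_fi_proxy(endpoints))
```

With correlated noise added, the optimum shifts away from the endpoints toward the interior, which is the trade-off the paper quantifies.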
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database. PMID:25163062
Flux maximization techniques for compton backscatter depth profilometry.
Lawson, L
1993-01-01
Resolution in x-ray backscatter imaging has often been hampered by low fluxes. But, for a given set of resolution requirements and geometric constraints, it is possible to define a maximization problem in the geometric parameters for which the solution is the maximum flux possible in those circumstances. In this way, resolution in noncritical directions can be traded for improved resolution in a desired direction. Making this the thickness, or surface normal direction, makes practicable the depth profiling of layered structures. Such techniques were applied to the problem of imaging the layered structure of corroding aircraft sheet metal joints using Compton backscatter. PMID:21307450
Quantum-state reconstruction by maximizing likelihood and entropy.
Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk
2011-07-01
Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored. PMID:21797584
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018
Maximal aerobic exercise following prolonged sleep deprivation.
Goodman, J; Radomski, M; Hart, L; Plyley, M; Shephard, R J
1989-12-01
The effect of 60 h without sleep upon maximal oxygen intake was examined in 12 young women, using a cycle ergometer protocol. The arousal of the subjects was maintained by requiring the performance of a sequence of cognitive tasks throughout the experimental period. Well-defined oxygen intake plateaus were obtained both before and after sleep deprivation, and no change of maximal oxygen intake was observed immediately following sleep deprivation. The endurance time for exhausting exercise also remained unchanged, as did such markers of aerobic performance as peak exercise ventilation, peak heart rate, peak respiratory gas exchange ratio, and peak blood lactate. However, as in an earlier study of sleep deprivation with male subjects (in which a decrease of treadmill maximal oxygen intake was observed), the formula of Dill and Costill (4) indicated the development of a substantial (11.6%) increase of estimated plasma volume percentage with corresponding decreases in hematocrit and red cell count. Possible factors sustaining maximal oxygen intake under the conditions of the present experiment include (1) maintained arousal of the subjects with no decrease in peak exercise ventilation or the related respiratory work and (2) use of a cycle ergometer rather than a treadmill test with possible concurrent differences in the impact of hematocrit levels and plasma volume expansion upon peak cardiac output and thus oxygen delivery to the working muscles. PMID:2628360
Does evolution lead to maximizing behavior?
Lehmann, Laurent; Alger, Ingela; Weibull, Jörgen
2015-07-01
A long-standing question in biology and economics is whether individual organisms evolve to behave as if they were striving to maximize some goal function. We here formalize this "as if" question in a patch-structured population in which individuals obtain material payoffs from (perhaps very complex multimove) social interactions. These material payoffs determine personal fitness and, ultimately, invasion fitness. We ask whether individuals in uninvadable population states will appear to be maximizing conventional goal functions (with population-structure coefficients exogenous to the individual's behavior), when what is really being maximized is invasion fitness at the genetic level. We reach two broad conclusions. First, no simple and general individual-centered goal function emerges from the analysis. This stems from the fact that invasion fitness is a gene-centered multigenerational measure of evolutionary success. Second, when selection is weak, all multigenerational effects of selection can be summarized in a neutral type-distribution quantifying identity-by-descent between individuals within patches. Individuals then behave as if they were striving to maximize a weighted sum of material payoffs (own and others). At an uninvadable state it is as if individuals would freely choose their actions and play a Nash equilibrium of a game with a goal function that combines self-interest (own material payoff), group interest (group material payoff if everyone does the same), and local rivalry (material payoff differences). PMID:26082379
How to Generate Good Profit Maximization Problems
ERIC Educational Resources Information Center
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Ehrenfest's Lottery--Time and Entropy Maximization
ERIC Educational Resources Information Center
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed
2015-04-15
We study a robust utility maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Faculty Salaries and the Maximization of Prestige
ERIC Educational Resources Information Center
Melguizo, Tatiana; Strober, Myra H.
2007-01-01
Through the lens of the emerging economic theory of higher education, we look at the relationship between salary and prestige. Starting from the premise that academic institutions seek to maximize prestige, we hypothesize that monetary rewards are higher for faculty activities that confer prestige. We use data from the 1999 National Study of…
Maximizing the Spectacle of Water Fountains
ERIC Educational Resources Information Center
Simoson, Andrew J.
2009-01-01
For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…
A Model of College Tuition Maximization
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.
2009-01-01
This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit-maximizing, price-discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
Maximizing the Phytonutrient Content of Potatoes
Technology Transfer Automated Retrieval System (TEKTRAN)
We are exploring to what extent the rich genetic diversity of potatoes can be used to maximize the nutritional potential of potatoes. Metabolic profiling is being used to screen potatoes for genotypes with elevated amounts of vitamins and phytonutrients. Substantial differences in phytonutrients am...
Maximizing Resource Utilization in Video Streaming Systems
ERIC Educational Resources Information Center
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2011-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.80 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
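The kind of power calculation described in this case study can be sketched with a simple Monte Carlo: simulate noisy log(CPE) series under a fixed annual proportional decline, fit a log-linear trend, and count how often the decline is detected. The noise level, t-threshold, and simulation settings below are illustrative assumptions, not the variance components estimated from the Lake Erie data.

```python
import math
import random

def trend_power(decline, years, sigma=0.5, t_crit=2.0, trials=500, seed=1):
    """Monte Carlo power to detect an annual proportional decline in CPE via
    a one-sided log-linear trend test. sigma and t_crit are illustrative
    assumptions, not variance components estimated from survey data."""
    rng = random.Random(seed)
    xs = list(range(years))
    xbar = sum(xs) / years
    sxx = sum((x - xbar) ** 2 for x in xs)
    hits = 0
    for _ in range(trials):
        # Noisy log-CPE with true slope log(1 - decline) per year.
        ys = [math.log(100.0 * (1.0 - decline) ** x) + rng.gauss(0.0, sigma)
              for x in xs]
        ybar = sum(ys) / years
        slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
        resid = [y - ybar - slope * (x - xbar) for x, y in zip(xs, ys)]
        se = math.sqrt(sum(r * r for r in resid) / (years - 2) / sxx)
        if se > 0 and slope / se < -t_crit:
            hits += 1
    return hits / trials

# Power to detect a 10% annual decline grows with survey duration.
print(trend_power(0.10, 5), trend_power(0.10, 15))
```

Even this toy version reproduces the qualitative finding above: a 10% annual decline is hard to detect over 5 years but readily detected over 15.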
2012-03-16
Independent Assessments: DOE's Systems Integrator convenes independent technical reviews to gauge progress toward meeting specific technical targets and to provide technical information necessary for key decisions.
NASA Astrophysics Data System (ADS)
Wagner, T.; Beirle, S.; Brauers, T.; Deutschmann, T.; Frieß, U.; Hak, C.; Halla, J. D.; Heue, K. P.; Junkermann, W.; Li, X.; Platt, U.; Pundt-Gruber, I.
2011-12-01
We present aerosol and trace gas profiles derived from MAX-DOAS observations. Our inversion scheme is based on simple profile parameterisations used as input for an atmospheric radiative transfer model (forward model). From a least squares fit of the forward model to the MAX-DOAS measurements, two profile parameters are retrieved including integrated quantities (aerosol optical depth or trace gas vertical column density), and parameters describing the height and shape of the respective profiles. From these results, the aerosol extinction and trace gas mixing ratios can also be calculated. We apply the profile inversion to MAX-DOAS observations during a measurement campaign in Milano, Italy, September 2003, which allowed simultaneous observations from three telescopes (directed to north, west, south). Profile inversions for aerosols and trace gases were possible on 23 days. Especially in the middle of the campaign (17-20 September 2003), enhanced values of aerosol optical depth and NO2 and HCHO mixing ratios were found. The retrieved layer heights were typically similar for HCHO and aerosols. For NO2, lower layer heights were found, which increased during the day. The MAX-DOAS inversion results are compared to independent measurements: (1) aerosol optical depth measured at an AERONET station at Ispra; (2) near-surface NO2 and HCHO (formaldehyde) mixing ratios measured by long path DOAS and Hantzsch instruments at Bresso; (3) vertical profiles of HCHO and aerosols measured by an ultra light aircraft. Depending on the viewing direction, the aerosol optical depths from MAX-DOAS are either smaller or larger than those from AERONET observations. Similar comparison results are found for the MAX-DOAS NO2 mixing ratios versus long path DOAS measurements. In contrast, the MAX-DOAS HCHO mixing ratios are generally higher than those from long path DOAS or Hantzsch instruments. The comparison of the HCHO and aerosol profiles from the aircraft showed reasonable agreement with
A New Algorithm to Optimize Maximal Information Coefficient.
Chen, Yuan; Zeng, Ying; Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization, removing the maximal grid size restriction of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships better, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
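The idea of using a chi-square test to stop grid refinement can be sketched as follows. This is a simplified, square-grid illustration of an MIC-style score with a chi-square stopping rule, not the actual ChiMIC or ApproxMaxMI implementation:

```python
import numpy as np
from scipy.stats import chi2_contingency

def grid_mi(x, y, kx, ky):
    """MI (nats) of an equal-frequency kx-by-ky binning of (x, y)."""
    qx = np.quantile(x, np.linspace(0, 1, kx + 1))
    qy = np.quantile(y, np.linspace(0, 1, ky + 1))
    cx = np.clip(np.searchsorted(qx, x, side="right") - 1, 0, kx - 1)
    cy = np.clip(np.searchsorted(qy, y, side="right") - 1, 0, ky - 1)
    counts = np.zeros((kx, ky))
    np.add.at(counts, (cx, cy), 1)
    p = counts / counts.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum(), counts

def mic_like(x, y, k_max=16, alpha=0.05):
    """MIC-style score: normalised grid MI, growing the grid only while a
    chi-square test on the contingency table still rejects independence."""
    best = 0.0
    for k in range(2, k_max + 1):
        mi, counts = grid_mi(x, y, k, k)
        if chi2_contingency(counts + 1e-9)[1] > alpha:
            break  # no significant dependence: stop refining
        best = max(best, mi / np.log(k))
    return best

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
noise = rng.uniform(-1, 1, 2000)
s_dep = mic_like(x, x**3)    # strong functional relationship
s_ind = mic_like(x, noise)   # independent variables
print(round(s_dep, 2), round(s_ind, 2))
```

The chi-square stop keeps the score near zero for independent variables (the grid never grows), which is the qualitative behaviour the abstract attributes to ChiMIC.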
Maximizing Macromolecule Crystal Size for Neutron Diffraction Experiments
NASA Technical Reports Server (NTRS)
Judge, R. A.; Kephart, R.; Leardi, R.; Myles, D. A.; Snell, E. H.; vanderWoerd, M.; Curreri, Peter A. (Technical Monitor)
2002-01-01
A challenge in neutron diffraction experiments is growing large (greater than 1 cu mm) macromolecule crystals. In taking up this challenge we have used statistical experiment design techniques to quickly identify crystallization conditions under which the largest crystals grow. These techniques provide the maximum information for minimal experimental effort, allowing optimal screening of crystallization variables in a simple experimental matrix, using the minimum amount of sample. Analysis of the results quickly tells the investigator what conditions are the most important for the crystallization. These can then be used to maximize the crystallization results in terms of reducing crystal numbers and providing large crystals of suitable habit. We have used these techniques to grow large crystals of glucose isomerase. Glucose isomerase is an industrial enzyme used extensively in the food industry for the conversion of glucose to fructose. The aim of this study is the elucidation of the enzymatic mechanism at the molecular level. The accurate determination of hydrogen positions, which is critical for this, is a requirement that neutron diffraction is uniquely suited for. Preliminary neutron diffraction experiments with these crystals conducted at the Institut Laue-Langevin (Grenoble, France) reveal diffraction to beyond 2.5 angstrom. Macromolecular crystal growth is a process involving many parameters, and statistical experimental design is naturally suited to this field. These techniques are sample independent and provide an experimental strategy to maximize crystal volume and habit for neutron diffraction studies.
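A two-level full factorial screen of the kind described can be sketched as follows. The factor names and ranges are invented for illustration, and the response scores (which would be measured crystal sizes) are random placeholders:

```python
import itertools
import numpy as np

# hypothetical screening factors for a crystallisation trial
factors = {"protein_mg_ml": (5, 15), "salt_M": (0.1, 0.5),
           "peg_pct": (4, 12), "temp_C": (4, 20)}

# full two-level factorial: every combination of low (-1) / high (+1)
design = np.array(list(itertools.product((-1, 1), repeat=len(factors))))
settings = [{name: levels[(code + 1) // 2]
             for (name, levels), code in zip(factors.items(), row)}
            for row in design]

# after scoring each run (e.g. largest crystal volume in mm^3), the main
# effect of a factor is the mean response at +1 minus the mean at -1
scores = np.random.default_rng(2).normal(size=len(design))  # placeholder data
effects = {name: scores[design[:, j] == 1].mean()
                 - scores[design[:, j] == -1].mean()
           for j, name in enumerate(factors)}
print(design.shape, sorted(effects))
```

Ranking the absolute main effects identifies the conditions that matter most, which is exactly the screening step the abstract describes.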
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2006-01-01
Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…
Auctions with Dynamic Populations: Efficiency and Revenue Maximization
NASA Astrophysics Data System (ADS)
Said, Maher
We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.
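The revenue effect of a reserve price can be sketched numerically in the static, textbook setting (this illustrates only the reserve-price logic, not the paper's dynamic VCG or pivot mechanism):

```python
import numpy as np

rng = np.random.default_rng(3)

def second_price_revenue(values, reserve=0.0):
    """Seller revenue from one second-price auction with a reserve."""
    bids = np.sort(values)[::-1]
    if bids[0] < reserve:
        return 0.0                    # highest value below reserve: no sale
    return max(bids[1], reserve)      # winner pays max(runner-up, reserve)

# two bidders with values uniform on [0, 1]; theory: expected revenue is
# 1/3 with no reserve and 5/12 at the optimal reserve of 1/2
vals = rng.uniform(0.0, 1.0, (20000, 2))
rev_no_reserve = np.mean([second_price_revenue(v) for v in vals])
rev_reserve = np.mean([second_price_revenue(v, 0.5) for v in vals])
print(round(rev_no_reserve, 3), round(rev_reserve, 3))
```

The reserve sacrifices some efficient trades but raises expected revenue, the same trade-off that distinguishes the efficient from the revenue-maximizing mechanism in the abstract.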
Maximal violation of Bell inequalities by position measurements
Kiukas, J.; Werner, R. F.
2010-07-15
We show that it is possible to find maximal violations of the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality using only position measurements on a pair of entangled nonrelativistic free particles. The two device settings required in the CHSH inequality are realized by choosing one of two times at which position is measured. For different assignments of the '+' outcome to positions, namely, to an interval, to a half-line, or to a periodic set, we determine violations of the inequalities and states where they are attained. These results have consequences for the hidden variable theories of Bohm and Nelson, in which the two-time correlations between distant particle trajectories have a joint distribution, and hence cannot violate any Bell inequality.
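For reference, the maximal CHSH value of 2√2 (the Tsirelson bound) can be checked numerically in the standard two-qubit setting; the paper realizes the two settings via position measurements at two different times, but the bound being violated is the same:

```python
import numpy as np

# Pauli matrices and the singlet state |psi-> = (|01> - |10>)/sqrt(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def corr(a, b):
    """Two-party correlation <psi| a (x) b |psi>."""
    return float(psi @ np.kron(a, b) @ psi)

# measurement settings that attain the Tsirelson bound
A0, A1 = sz, sx
B0 = -(sz + sx) / np.sqrt(2)
B1 = (sx - sz) / np.sqrt(2)
S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(round(S, 3))  # 2.828 = 2*sqrt(2), above the classical bound of 2
```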
Karbowski, Jan
2015-10-01
The structure and quantitative composition of the cerebral cortex are interrelated with its computational capacity. Empirical data analyzed here indicate a certain hierarchy in local cortical composition. Specifically, neural wire, i.e., axons and dendrites, each take about 1/3 of cortical space, spines and glia/astrocytes each occupy about (1/3)², and capillaries around (1/3)⁴. Moreover, data analysis across species reveals that these fractions are roughly brain size independent, which suggests that they could be in some sense optimal and thus important for brain function. Is there any principle that sets them in this invariant way? This study first builds a model of local circuit in which neural wire, spines, astrocytes, and capillaries are mutually coupled elements and are treated within a single mathematical framework. Next, various forms of wire minimization rule (wire length, surface area, volume, or conduction delays) are analyzed, of which only minimization of wire volume provides realistic results that are very close to the empirical cortical fractions. As an alternative, a new principle called "spine economy maximization" is proposed and investigated, which is associated with maximization of spine proportion in the cortex per spine size that yields equally good but more robust results. Additionally, a combination of wire cost and spine economy notions is considered as a meta-principle, and it is found that this proposition gives only marginally better results than either pure wire volume minimization or pure spine economy maximization, but only if the spine economy component dominates. However, such a combined meta-principle yields much better results than the constraints related solely to minimization of wire length, wire surface area, and conduction delays. Interestingly, the type of spine size distribution also plays a role, and better agreement with the data is achieved for distributions with long tails. In sum, these results suggest that for the
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold. PMID:26066138
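The price of anarchy itself is easy to illustrate on Pigou's classic two-link network (a warm-up example of the POA concept, not the paper's lattice percolation model):

```python
from scipy.optimize import minimize_scalar

# Pigou's example: one unit of traffic, a congestible link with latency x
# (x = fraction of traffic using it) and an incongestible link with
# fixed latency 1.
def total_cost(x):
    return x * x + (1 - x) * 1.0

# selfish users all take the congestible link, whose latency never
# exceeds the fixed alternative, so the Nash flow is x = 1
nash_cost = total_cost(1.0)

# the social optimum splits the flow between the two links
opt = minimize_scalar(total_cost, bounds=(0, 1), method="bounded")
poa = nash_cost / opt.fun
print(round(poa, 3))  # 1.333: the classic 4/3 price of anarchy
```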
An updated version of wannier90: A tool for obtaining maximally-localised Wannier functions
NASA Astrophysics Data System (ADS)
Mostofi, Arash A.; Yates, Jonathan R.; Pizzi, Giovanni; Lee, Young-Su; Souza, Ivo; Vanderbilt, David; Marzari, Nicola
2014-08-01
wannier90 is a program for calculating maximally-localised Wannier functions (MLWFs) from a set of Bloch energy bands that may or may not be attached to or mixed with other bands. The formalism works by minimising the total spread of the MLWFs in real space. This is done in the space of unitary matrices that describe rotations of the Bloch bands at each k-point. As a result, wannier90 is independent of the basis set used in the underlying calculation to obtain the Bloch states. Therefore, it may be interfaced straightforwardly to any electronic structure code. The locality of MLWFs can be exploited to compute band-structure, density of states and Fermi surfaces at modest computational cost. Furthermore, wannier90 is able to output MLWFs for visualisation and other post-processing purposes. Wannier functions are already used in a wide variety of applications. These include analysis of chemical bonding in real space; calculation of dielectric properties via the modern theory of polarisation; and as an accurate and minimal basis set in the construction of model Hamiltonians for large-scale systems, in linear-scaling quantum Monte Carlo calculations, and for efficient computation of material properties, such as the anomalous Hall coefficient. We present here an updated version of wannier90, wannier90 2.0, including minor bug fixes and parallel (MPI) execution for band-structure interpolation and the calculation of properties such as density of states, Berry curvature and orbital magnetisation. wannier90 is freely available under the GNU General Public License from http://www.wannier.org/.
Maximal element theorems in product FC-spaces and generalized games
NASA Astrophysics Data System (ADS)
Ding, Xie Ping
2005-05-01
Let I be a finite or infinite index set, X be a topological space, and (Yi, {φNi})i∈I be a family of finitely continuous topological spaces (in short, FC-spaces). For each i ∈ I, let Ai be a set-valued mapping. Some existence theorems of maximal elements for the family {Ai}i∈I are established in the noncompact setting of FC-spaces. As applications, some equilibrium existence theorems for generalized games with fuzzy constraint correspondences are proved in noncompact FC-spaces. These theorems improve, unify, and generalize many important results in the recent literature.
Tian, Guojing; Wu, Xia; Cao, Ya; Gao, Fei; Wen, Qiaoyan
2016-01-01
It is known that there exist two locally operational settings, local operations with one-way and two-way classical communication. And recently, some sets of maximally entangled states have been built in specific dimensional quantum systems, which can be locally distinguished only with two-way classical communication. In this paper, we show the existence of such sets is general, through constructing such sets in all the remaining quantum systems. Specifically, such sets including p or n maximally entangled states will be built in the quantum system of (np − 1) ⊗ (np − 1) with n ≥ 3 and p being a prime number, which completes the picture that such sets do exist in every possible dimensional quantum system. PMID:27440087
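The basic object in these constructions, a maximally entangled state of a d ⊗ d system, can be written down directly. The sketch below (with an arbitrary choice of d) just builds the canonical state and checks its defining property; the paper's contribution is the choice of sets of such states that are distinguishable only with two-way classical communication:

```python
import numpy as np

def max_entangled(d):
    """|Phi> = sum_i |ii> / sqrt(d) in a d (x) d system."""
    psi = np.zeros(d * d)
    psi[::d + 1] = 1.0 / np.sqrt(d)   # amplitudes on |00>, |11>, ...
    return psi

d = 5                                 # e.g. d = n*p - 1 with n = 3, p = 2
psi = max_entangled(d)

# reduced state of subsystem A: rho_A = Tr_B |Phi><Phi|
M = psi.reshape(d, d)
rho_A = np.einsum("ij,kj->ik", M, M)

# a maximally entangled state has a maximally mixed reduced state I/d
print(np.allclose(rho_A, np.eye(d) / d))  # True
```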
Maximal CP violation in flavor neutrino masses
NASA Astrophysics Data System (ADS)
Kitabayashi, Teruyuki; Yasuè, Masaki
2016-03-01
Since the flavor neutrino masses Mμμ, Mττ, Mμτ can be expressed in terms of Mee, Meμ, Meτ, mutual dependence among Mμμ, Mττ, Mμτ is derived by imposing some constraints on Mee, Meμ, Meτ. For appropriately imposed constraints on Mee, Meμ, Meτ giving rise to both maximal CP violation and maximal atmospheric neutrino mixing, we show various specific textures of neutrino mass matrices, including the texture with Mττ = Mμμ* derived as the simplest solution to the constraint that Mττ − Mμμ be purely imaginary, which is required by the constraint that Meμ cos θ23 − Meτ sin θ23 be real for cos 2θ23 = 0. It is found that Majorana CP violation depends on the phase of Mee.
Nondecoupling of maximal supergravity from the superstring.
Green, Michael B; Ooguri, Hirosi; Schwarz, John H
2007-07-27
We consider the conditions necessary for obtaining perturbative maximal supergravity in d dimensions as a decoupling limit of type II superstring theory compactified on a (10-d) torus. For dimensions d=2 and d=3, it is possible to define a limit in which the only finite-mass states are the 256 massless states of maximal supergravity. However, in dimensions d ≥ 4, there are infinite towers of additional massless and finite-mass states. These correspond to Kaluza-Klein charges, wound strings, Kaluza-Klein monopoles, or branes wrapping around cycles of the toroidal extra dimensions. We conclude that perturbative supergravity cannot be decoupled from string theory in dimensions d ≥ 4. In particular, we conjecture that pure N=8 supergravity in four dimensions is in the Swampland. PMID:17678349
Maximal temperature in a simple thermodynamical system
NASA Astrophysics Data System (ADS)
Dai, De-Chang; Stojkovic, Dejan
2016-06-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of the full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity before we reach the realm of quantum gravity. Namely, if two particles with a given center of mass energy come closer than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. It is possible to calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
Experimental implementation of maximally synchronizable networks
NASA Astrophysics Data System (ADS)
Sevilla-Escoboza, R.; Buldú, J. M.; Boccaletti, S.; Papo, D.; Hwang, D.-U.; Huerta-Cuellar, G.; Gutiérrez, R.
2016-04-01
Maximally synchronizable networks (MSNs) are acyclic directed networks that maximize synchronizability. In this paper, we investigate the feasibility of transforming networks of coupled oscillators into their corresponding MSNs. By tuning the weights of any given network so as to reach the lowest possible eigenratio λN /λ2, the synchronized state is guaranteed to be maintained across the longest possible range of coupling strengths. We check the robustness of the resulting MSNs with an experimental implementation of a network of nonlinear electronic oscillators and study the propagation of the synchronization errors through the network. Importantly, a method to study the effects of topological uncertainties on the synchronizability is proposed and explored both theoretically and experimentally.
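The quantity being minimized, the Laplacian eigenratio λN/λ2, can be computed directly. The sketch below evaluates it on two small undirected examples for illustration only; the MSNs of the paper are weighted acyclic directed networks, which this toy does not construct:

```python
import numpy as np

def eigenratio(adj):
    """lambda_N / lambda_2 of the graph Laplacian: a smaller ratio means
    a wider stable coupling range in the master stability framework."""
    lap = np.diag(adj.sum(axis=1)) - adj
    lam = np.sort(np.linalg.eigvalsh(lap))
    return lam[-1] / lam[1]

# ring of 6 oscillators vs. the complete graph on 6 nodes
ring = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
complete = np.ones((6, 6)) - np.eye(6)
print(round(eigenratio(ring), 2), round(eigenratio(complete), 2))
# 4.0 for the ring, 1.0 for the complete graph (the ideal value)
```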
Formation Control for the MAXIM Mission
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the groundwork for linear formation control designs.
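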
Revenue maximization in survivable WDM networks
NASA Astrophysics Data System (ADS)
Sridharan, Murari; Somani, Arun K.
2000-09-01
Service availability is an indispensable requirement for many current and future applications over the Internet and hence has to be addressed as part of the optical QoS service model. Network service providers can offer varying classes of services based on the choice of protection employed, which can vary from full protection to no protection. Based on the service classes, traffic in the network falls into one of three classes, viz. full protection, no protection, and best-effort. The network typically relies on the best-effort traffic for maximizing revenue. We consider two variations on the best-effort class: (1) all connections are accepted and the network tries to protect as many as possible, and (2) a mix of protected and unprotected connections where the goal is to maximize revenue. In this paper, we present a mathematical formulation, which captures service differentiation based on lightpath protection, for revenue maximization in wavelength-routed backbone networks. Our approach also captures the service disruption aspect in the problem formulation, as there may be a penalty for disrupting currently working connections.
Maximal acceleration is non-rotating
NASA Astrophysics Data System (ADS)
Page, Don N.
1998-06-01
In a stationary axisymmetric spacetime, the angular velocity of a stationary observer whose acceleration vector is Fermi-Walker transported is also the angular velocity that locally extremizes the magnitude of the acceleration of such an observer. The converse is also true if the spacetime is symmetric under reversing both t and φ together. Thus a congruence of non-rotating acceleration worldlines (NAW) is equivalent to a stationary congruence accelerating locally extremely (SCALE). These congruences are defined completely locally, unlike the case of zero angular momentum observers (ZAMOs), which requires knowledge around a symmetry axis. The SCALE subcase of a stationary congruence accelerating maximally (SCAM) is made up of stationary worldlines that may be considered to be locally most nearly at rest in a stationary axisymmetric gravitational field. Formulae for the angular velocity and other properties of the SCALEs are given explicitly on a generalization of an equatorial plane, infinitesimally near a symmetry axis, and in a slowly rotating gravitational field, including the far-field limit, where the SCAM is shown to be counter-rotating relative to infinity. These formulae are evaluated in particular detail for the Kerr-Newman metric. Various other congruences are also defined, such as a stationary congruence rotating at minimum (SCRAM), and stationary worldlines accelerating radially maximally (SWARM), both of which coincide with a SCAM on an equatorial plane of reflection symmetry. Applications are also made to the gravitational fields of maximally rotating stars, the Sun and the Solar System.
The “Independent Components” of Natural Scenes are Edge Filters
BELL, ANTHONY J.; SEJNOWSKI, TERRENCE J.
2010-01-01
It has previously been suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and it has been reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features. We show here that a new unsupervised learning algorithm based on information maximization, a nonlinear “infomax” network, when applied to an ensemble of natural scenes produces sets of visual filters that are localized and oriented. Some of these filters are Gabor-like and resemble those produced by the sparseness-maximization network. In addition, the outputs of these filters are as independent as possible, since this infomax network performs Independent Components Analysis or ICA, for sparse (super-gaussian) component distributions. We compare the resulting ICA filters and their associated basis functions, with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). The ICA filters have more sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that these neurons form a natural, information-theoretic coordinate system for natural images. PMID:9425547
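The infomax learning rule can be sketched in a few lines. This is a minimal batch natural-gradient version of the Bell-Sejnowski update with a logistic nonlinearity; the mixing matrix, learning rate, and iteration count are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# two independent super-gaussian (Laplacian) sources, linearly mixed
n = 5000
sources = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ sources
X -= X.mean(axis=1, keepdims=True)

# infomax ICA, natural-gradient form:
#   dW ~ (I + (1 - 2 g(W X)) (W X)^T / n) W,  with g the logistic function
W = np.eye(2)
for _ in range(1000):
    U = W @ X
    Y = 1.0 / (1.0 + np.exp(-U))                     # logistic squashing
    W += 0.05 * (np.eye(2) + (1.0 - 2.0 * Y) @ U.T / n) @ W

# W @ A should be close to a scaled permutation: each recovered
# component loads on a single source
P = np.abs(W @ A)
P /= P.max(axis=1, keepdims=True)
print(np.round(P, 2))
```

Applied to image patches instead of these toy signals, the same update yields the localized, oriented filters described in the abstract.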
Maximal violation of tight Bell inequalities for maximal high-dimensional entanglement
Lee, Seung-Woo; Jaksch, Dieter
2009-07-15
We propose a Bell inequality for high-dimensional bipartite systems obtained by binning local measurement outcomes and show that it is tight. We find a binning method for even d-dimensional measurement outcomes for which this Bell inequality is maximally violated by maximally entangled states. Furthermore, we demonstrate that the Bell inequality is applicable to continuous variable systems and yields strong violations for two-mode squeezed states.
ERIC Educational Resources Information Center
Giorgis, Cyndi; Johnson, Nancy J.
2002-01-01
Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for evaluating neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated from experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
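A plug-in estimate of the mutual information between spike occurrence and a stimulus variable, the quantity being maximized above, can be sketched as follows. The Gaussian tuning curve and its parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def spike_stim_mi(stim, spikes, n_bins=12):
    """Mutual information (bits) between a binary spike train and an
    equal-frequency binning of a stimulus variable."""
    edges = np.quantile(stim, np.linspace(0, 1, n_bins + 1))
    b = np.clip(np.searchsorted(edges, stim, side="right") - 1, 0, n_bins - 1)
    p_spk = spikes.mean()
    mi = 0.0
    for s in (0, 1):
        for k in range(n_bins):
            pj = np.mean((spikes == s) & (b == k))       # joint probability
            if pj > 0:
                mi += pj * np.log2(pj / ((p_spk if s else 1 - p_spk)
                                         * np.mean(b == k)))
    return mi

# synthetic neuron with a Gaussian tuning curve over eye velocity
stim = rng.uniform(-30, 30, 20000)
rate = 0.8 * np.exp(-((stim - 10.0) ** 2) / 50.0)   # peaks at 10 deg/s
spikes = (rng.uniform(size=stim.size) < rate).astype(int)
shuffled = rng.permutation(spikes)                   # destroys the dependence

mi_tuned = spike_stim_mi(stim, spikes)
mi_null = spike_stim_mi(stim, shuffled)
print(round(mi_tuned, 2), round(mi_null, 2))
```

Shuffling the spike train gives an empirical null against which the tuned MI can be judged, a common sanity check for plug-in MI estimates.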
Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L
2015-05-01
This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery, which allows entrepreneurial practitioners to make decisions in an autonomous setting, is changing but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides. PMID:25941139
Independent Schools - Independent Thinking - Independent Art: Testing Assumptions.
ERIC Educational Resources Information Center
Carnes, Virginia
This study consists of a review of selected educational reform issues from the past 10 years that deal with changing attitudes towards art and art instruction in the context of independent private sector schools. The major focus of the study is in visual arts and examines various programs and initiatives with an art focus. Programs include…
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame. PMID:12416921
Electromagnetically induced grating with maximal atomic coherence
Carvalho, Silvania A.; Araujo, Luis E. E. de
2011-10-15
We describe theoretically an atomic diffraction grating that combines an electromagnetically induced grating with a coherence grating in a double-Λ atomic system. With the atom in a condition of maximal coherence between its lower levels, the combined gratings simultaneously diffract both the incident probe beam as well as the signal beam generated through four-wave mixing. A special feature of the atomic grating is that it will diffract any beam resonantly tuned to any excited state of the atom accessible by a dipole transition from its ground state.
Coloring random graphs and maximizing local diversity.
Bounkong, S; van Mourik, J; Saad, D
2006-11-01
We study a variation of the graph coloring problem on random graphs of finite average connectivity. Given the number of colors, we aim to maximize the number of different colors at neighboring vertices (i.e., one edge distance) of any vertex. Two efficient algorithms, belief propagation and Walksat, are adapted to carry out this task. We present experimental results based on two types of random graphs for different system sizes and identify the critical value of the connectivity for the algorithms to find a perfect solution. The problem and the suggested algorithms have practical relevance since various applications, such as distributed storage, can be mapped onto this problem. PMID:17280022
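As an illustration of the objective, the toy sketch below runs simple hill-climbing recoloring on a 4-cycle. It is a deliberately simplified stand-in for the belief propagation and Walksat adaptations used in the paper; the graph and the scoring loop are illustrative assumptions:

```python
import random

def diversity_score(graph, coloring):
    """Sum over vertices of the number of distinct colors among neighbors."""
    return sum(len({coloring[u] for u in nbrs}) for nbrs in graph.values())

def local_search(graph, q, iters=2000, seed=0):
    """Hill climbing: recolor one vertex at a time whenever it strictly helps."""
    rng = random.Random(seed)
    coloring = {v: rng.randrange(q) for v in graph}
    best = diversity_score(graph, coloring)
    for _ in range(iters):
        v = rng.choice(list(graph))
        old = coloring[v]
        for c in range(q):
            coloring[v] = c
            s = diversity_score(graph, coloring)
            if s > best:
                best = s
                break
        else:
            coloring[v] = old  # no improving color found: revert
    return coloring, best

# Toy instance: a 4-cycle with q = 2 colors. The optimum (score 8) gives every
# vertex both colors among its two neighbors.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coloring, score = local_search(cycle, q=2)
```

On instances near the critical connectivity, such naive local search stalls in local optima, which is why the paper turns to belief propagation and Walksat.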
Using molecular biology to maximize concurrent training.
Baar, Keith
2014-11-01
Very few sports use only endurance or strength. Outside of running long distances on a flat surface and power-lifting, practically all sports require some combination of endurance and strength. Endurance and strength can be developed simultaneously to some degree. However, the development of a high level of endurance seems to prohibit the development or maintenance of muscle mass and strength. This interaction between endurance and strength is called the concurrent training effect. This review specifically defines the concurrent training effect, discusses the potential molecular mechanisms underlying this effect, and proposes strategies to maximize strength and endurance in the high-level athlete. PMID:25355186
Forms and algebras in (half-)maximal supergravity theories
NASA Astrophysics Data System (ADS)
Howe, Paul; Palmkvist, Jakob
2015-05-01
The forms in D-dimensional (half-)maximal supergravity theories are discussed for 3 ≤ D ≤ 11. Superspace methods are used to derive consistent sets of Bianchi identities for all the forms for all degrees, and to show that they are soluble and fully compatible with supersymmetry. The Bianchi identities determine Lie superalgebras that can be extended to Borcherds superalgebras of a special type. It is shown that any Borcherds superalgebra of this type gives the same form spectrum, up to an arbitrary degree, as an associated Kac-Moody algebra. For maximal supergravity up to D-form potentials, this is the very extended Kac-Moody algebra E11. It is also shown how gauging can be carried out in a simple fashion by deforming the Bianchi identities by means of a new algebraic element related to the embedding tensor. In this case the appropriate extension of the form algebra is a truncated version of the so-called tensor hierarchy algebra.
Maximizing Information Diffusion in the Cyber-physical Integrated Network.
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-01-01
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed algorithm for maximizing the probability of information diffusion (DMPID) in the cyber-physical integrated network. Unlike previous approaches that only consider the size of the CDS selection, DMPID also considers the information-spread probability, which depends on the weight of links. To weaken the effects of excessively weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulations show that DMPID can nearly double the information diffusion probability, while keeping a reasonably sized selection with low overhead in different distributed networks. PMID:26569254
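A minimal sketch of the underlying idea, not the authors' DMPID algorithm: greedily grow a dominating set while scoring candidates by the total spread probability of the links they newly cover. The topology and weights below are hypothetical:

```python
def greedy_weighted_dominating_set(nodes, weight):
    """Greedy dominating-set selection that favors high-probability links.

    weight[(u, v)] is the (symmetric) information-spread probability of edge
    u-v; a node's gain is the total spread probability toward the nodes it
    would newly dominate, plus 1 if it is itself not yet dominated.
    """
    neighbors = {v: set() for v in nodes}
    for (u, v) in weight:
        neighbors[u].add(v)
        neighbors[v].add(u)
    dominated, selected = set(), []
    while len(dominated) < len(nodes):
        def gain(v):
            new = ({v} | neighbors[v]) - dominated
            return sum(weight.get((v, u), weight.get((u, v), 0.0))
                       for u in new if u != v) + (v not in dominated)
        best = max(nodes, key=gain)   # any undominated node has gain >= 1
        selected.append(best)
        dominated |= {best} | neighbors[best]
    return selected

# Hypothetical toy topology: a hub (node 0) with strong links to nodes 1-3.
star = greedy_weighted_dominating_set(
    [0, 1, 2, 3], {(0, 1): 0.9, (0, 2): 0.8, (0, 3): 0.7})
```

Enforcing connectivity of the selected set and balancing against over-weighted links, as DMPID does, would require additional machinery beyond this sketch.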
Whose Entropy: A Maximal Entropy Analysis of Phosphorylation Signaling
NASA Astrophysics Data System (ADS)
Remacle, F.; Graeber, T. G.; Levine, R. D.
2011-07-01
High throughput experiments, characteristic of studies in systems biology, produce large output data sets, often at different time points or under a variety of related conditions or for different patients. In several recent papers the data is modeled by using a distribution of maximal information-theoretic entropy. We pose the question 'whose entropy?', meaning: how do we select the variables whose distribution should be compared to that of maximal entropy? The point is that different choices can lead to different answers. Due to the technological advances that allow for the system-wide measurement of hundreds to thousands of events from biological samples, addressing this question is now part of the analysis of systems biology datasets. The analysis of the extent of phosphorylation in reference to the transformation potency of Bcr-Abl fusion oncogene mutants is used as a biological example. The approach taken seeks to use entropy not simply as a statistical measure of dispersion but as a physical, thermodynamic, state function. This highlights the dilemma of which variables describe the state of the signaling network. Are they Boolean, spin-like variables that specify whether a particular phosphorylation site is or is not actually phosphorylated? Or does the actual extent of phosphorylation matter? Last but not least is the possibility that in a signaling network a few specific phosphorylation sites are the key to signal transduction even though these sites are not at any time abundantly phosphorylated in an absolute sense.
Maximally Entangled States of a Two-Qubit System
NASA Astrophysics Data System (ADS)
Singh, Manu P.; Rajput, B. S.
2013-12-01
Entanglement has been explored as one of the key resources required for quantum computation. The functional dependence of the entanglement measures on spin correlation functions has been established, a correspondence between the evolution of maximally entangled states (MES) of a two-qubit system and the representation of the SU(2) group has been worked out, and the evolution of MES under a rotating magnetic field has been investigated. Necessary and sufficient conditions for a general two-qubit state to be a maximally entangled state (MES) have been obtained, and a new set of MES constituting a very powerful and reliable eigenbasis (different from magic bases) of two-qubit systems has been constructed. In terms of the MES constituting this basis, Bell states have been generated and all the qubits of the two-qubit system have been obtained. It has been shown that a MES corresponds to a point on the SO(3) sphere and an evolution of MES corresponds to a trajectory connecting two points on this sphere. Analysing the evolution of MES under a rotating magnetic field, it has been demonstrated that a rotating magnetic field is equivalent to a three-dimensional rotation in real space, leading to the evolution of a MES.
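As a concrete check of what "maximally entangled" means for two qubits, the stdlib-only sketch below (illustrative, not from the paper) builds the Bell state |Φ+⟩ = (|00⟩ + |11⟩)/√2 and verifies that tracing out one qubit leaves a maximally mixed state, i.e. one bit of entanglement entropy:

```python
from math import log2, sqrt

def reduced_entropy(psi):
    """Entanglement entropy (bits) of qubit A for a two-qubit state psi,
    indexed as psi[2*a + b] for the basis state |a b>."""
    # Reduced density matrix rho_A[a][c] = sum_b psi[2a+b] * conj(psi[2c+b])
    rho = [[sum(psi[2*a + b] * psi[2*c + b].conjugate() for b in range(2))
            for c in range(2)] for a in range(2)]
    # Eigenvalues of the 2x2 Hermitian matrix via trace and determinant.
    tr = (rho[0][0] + rho[1][1]).real
    det = (rho[0][0]*rho[1][1] - rho[0][1]*rho[1][0]).real
    disc = sqrt(max(tr*tr - 4*det, 0.0))
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return -sum(p * log2(p) for p in eigs if p > 1e-12)

bell = [1/sqrt(2), 0, 0, 1/sqrt(2)]   # |Phi+> = (|00> + |11>)/sqrt(2)
product = [1, 0, 0, 0]                # |00>, a product state
```

A state is maximally entangled exactly when this reduced entropy reaches its maximum of 1 bit; for the product state it vanishes.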
Conditional independence in quantum many-body systems
NASA Astrophysics Data System (ADS)
Kim, Isaac Hyun
In this thesis, I will discuss how information-theoretic arguments can be used to produce sharp bounds in the studies of quantum many-body systems. The main advantage of this approach, as opposed to the conventional field-theoretic argument, is that it depends very little on the precise form of the Hamiltonian. The main idea behind this thesis lies in a number of results concerning the structure of quantum states that are conditionally independent. Depending on the application, some of these statements are generalized to quantum states that are approximately conditionally independent. These structures can be readily used in the studies of gapped quantum many-body systems, especially those in two spatial dimensions. A number of rigorous results are derived, including (i) a universal upper bound for the maximal number of topologically protected states that is expressed in terms of the topological entanglement entropy, (ii) a first-order perturbation bound for the topological entanglement entropy that decays superpolynomially with the size of the subsystem, and (iii) a correlation bound between an arbitrary local operator and a topological operator constructed from a set of local reduced density matrices. I also introduce exactly solvable models supported on a three-dimensional lattice that can be used as a reliable quantum memory.
Fredriksson, Albin; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
Distinguishing maximally entangled states by one-way local operations and classical communication
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Chao; Feng, Ke-Qin; Gao, Fei; Wen, Qiao-Yan
2015-01-01
In this paper, we mainly study the local indistinguishability of mutually orthogonal bipartite maximally entangled states. We construct sets of fewer than d orthogonal maximally entangled states which cannot be distinguished by one-way local operations and classical communication (LOCC) in the Hilbert space of d ⊗ d. The proof, based on the Fourier transform of an additive group, is very simple but quite effective. Simultaneously, our results give a general unified upper bound for the minimum number of one-way LOCC indistinguishable maximally entangled states. This improves previous results, which only showed sets of N ≥ d - 2 such states. Finally, our results also show that previous conjectures in Zhang et al. [Z.-C. Zhang, Q.-Y. Wen, F. Gao, G.-J. Tian, and T.-Q. Cao, Quant. Info. Proc. 13, 795 (2014), 10.1007/s11128-013-0691-9] are indeed correct.
NASA Astrophysics Data System (ADS)
Adhikari, Dhruba R.; Kartsatos, Athanassios G.
2008-12-01
Let X be a real reflexive Banach space with dual X*. Let L : X ⊃ D(L) → X* be densely defined, linear and maximal monotone. Let T : X ⊃ D(T) → 2^X*, with 0 ∈ D(T) and 0 ∈ T(0), be strongly quasibounded and maximal monotone, and C : X ⊃ D(C) → X* bounded, demicontinuous and of type (S+) w.r.t. D(L). A new topological degree theory has been developed for the sum L + T + C. This degree theory is an extension of the Berkovits-Mustonen theory (for T = 0) and an improvement of the work of Addou and Mermri (for T : X → 2^X* bounded). Unbounded maximal monotone operators satisfying these conditions are strongly quasibounded and may be used with the new degree theory.
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
Factors affecting maximal momentary grip strength.
Martin, S; Neale, G; Elia, M
1985-03-01
Maximal voluntary grip strength has been measured in normal adults aged 18-70 years (17 f, 18 m) and compared with other indices of body muscle mass. Grip strength (dominant side) was directly proportional to creatinine excretion (r = 0.81); to forearm muscle area (r = 0.73); to upper arm muscle area (r = 0.71) and to lean body mass (r = 0.65). Grip strength relative to forearm muscle area decreased with age. The study of a subgroup of normal subjects revealed a small but significant postural and circadian effect on grip strength. The effect on maximal voluntary grip strength of sedatives in elderly subjects undergoing routine endoscopy (n = 6), and of acute infections in otherwise healthy individuals (n = 6), severe illness in patients requiring intensive care (n = 6), chronic renal failure (n = 7) and anorexia nervosa (n = 6) has been assessed. Intravenous diazepam and buscopan produced a 50 per cent reduction in grip strength which returned to normal within the next 2-3 h. Acute infections reduced grip strength by a mean of 35 per cent and severe illness in patients in intensive care by 60 per cent. In patients with chronic renal failure grip strength was 80-85 per cent of that predicted from forearm 'muscle area' (P less than 0.05). In anorectic patients the values were appropriate for their forearm muscle area. Nevertheless nutritional rehabilitation of one anorectic patient did not lead to a consistent improvement in grip strength. PMID:3926728
Spiders Tune Glue Viscosity to Maximize Adhesion.
Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali
2015-11-24
Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find that the adhesion of spider glue, from five diverse spider species, is maximized at very different humidities that match their foraging habitats. By using high-speed imaging and a spreading power law, we find that the glue viscosity varies over 5 orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10^5-10^6 cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulation of polymer-salt interactions to control viscosity can provide a simple mechanism to design humidity-responsive smart adhesives. PMID:26513350
Robust estimation by expectation maximization algorithm
NASA Astrophysics Data System (ADS)
Koch, Karl Rudolf
2013-02-01
A mixture of normal distributions is assumed for the observations of a linear model. The first component of the mixture represents the measurements without gross errors, while each of the remaining components gives the distribution for an outlier. Missing data are introduced to deliver the information as to which observation belongs to which component. The unknown location parameters and the unknown scale parameter of the linear model are estimated by the EM algorithm, which is applied iteratively. The E (expectation) step of the algorithm determines the expected value of the likelihood function given the observations and the current estimate of the unknown parameters, while the M (maximization) step computes new estimates by maximizing the expectation of the likelihood function. In comparison to Huber's M-estimation, the EM algorithm not only identifies outliers by introducing small weights for large residuals but also estimates them. The outliers can then be corrected by the parameters of the linear model, freed from the distortions of gross errors. Monte Carlo methods with random variates from the normal distribution then give expectations, variances, covariances and confidence regions for functions of the parameters estimated by taking care of the outliers. The method is demonstrated by the analysis of measurements with gross errors from a laser scanner.
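The E and M steps described above can be sketched for the simplest case: a one-dimensional two-component mixture with a narrow "inlier" and a broad "outlier" component. This is an illustrative reduction of the linear-model formulation, with hypothetical data:

```python
import math
import random

def em_two_normals(data, iters=200):
    """EM for a two-component 1-D normal mixture (inlier/outlier): returns
    means mu0, mu1, standard deviations s0, s1 and inlier weight pi."""
    mu0, mu1 = min(data), max(data)
    s0 = s1 = (max(data) - min(data)) / 4 or 1.0
    pi = 0.5
    for _ in range(iters):
        # E step: responsibility of the inlier component for each point.
        r = []
        for x in data:
            p0 = pi * math.exp(-(x - mu0)**2 / (2*s0*s0)) / s0
            p1 = (1 - pi) * math.exp(-(x - mu1)**2 / (2*s1*s1)) / s1
            r.append(p0 / (p0 + p1))
        # M step: responsibility-weighted means, scales and mixing weight.
        n0 = sum(r); n1 = len(data) - n0
        mu0 = sum(ri*x for ri, x in zip(r, data)) / n0
        mu1 = sum((1-ri)*x for ri, x in zip(r, data)) / n1
        s0 = math.sqrt(sum(ri*(x-mu0)**2 for ri, x in zip(r, data)) / n0) or 1e-6
        s1 = math.sqrt(sum((1-ri)*(x-mu1)**2 for ri, x in zip(r, data)) / n1) or 1e-6
        pi = n0 / len(data)
    return mu0, s0, mu1, s1, pi

# Hypothetical data: measurements near 10 plus two gross errors near 50.
random.seed(1)
data = [random.gauss(10, 0.5) for _ in range(40)] + [50.1, 49.7]
mu0, s0, mu1, s1, pi = em_two_normals(data)
```

The responsibilities play the role of the "missing data": points with low inlier responsibility are the identified outliers, and the outlier component's parameters estimate them.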
Maximizing strain in miniaturized dielectric elastomer actuators
NASA Astrophysics Data System (ADS)
Rosset, Samuel; Araromi, Oluwaseun; Shea, Herbert
2015-04-01
We present a theoretical model to optimise the unidirectional motion of a rigid object bonded to a miniaturized dielectric elastomer actuator (DEA), a configuration found for example in AMI's haptic feedback devices, or in our tuneable RF phase shifter. Recent work has shown that unidirectional motion is maximized when the membrane is both anisotropically prestretched and subjected to a dead load in the direction of actuation. However, the use of dead weights for miniaturized devices is clearly highly impractical. Consequently, smaller devices use the membrane itself to generate the opposing force. Since the membrane covers the entire frame, one has the same prestretch condition in the active (actuated) and passive zones. Because the passive zone contracts when the active zone expands, it does not provide a constant restoring force, reducing the maximum achievable actuation strain. We have determined the optimal ratio between the size of the electrode (active zone) and the passive zone, as well as the optimal prestretch in both in-plane directions, in order to maximize the absolute displacement of the rigid object placed at the active/passive border. Our model and experiments show that the ideal active ratio is 50%, with a displacement half of what can be obtained with a dead load. We expand our fabrication process to also show how DEAs can be laser-post-processed to remove carefully chosen regions of the passive elastomer membrane, thereby increasing the actuation strain of the device.
Maximal lactate steady state in Judo
de Azevedo, Paulo Henrique Silva Marques; Pithon-Curi, Tania; Zagatto, Alessandro Moura; Oliveira, João; Perez, Sérgio
2014-01-01
Background: The purpose of this study was to verify the validity of the respiratory compensation threshold (RCT) measured during a new single judo-specific incremental test (JSIT) for aerobic demand evaluation. Methods: To test the validity of the new test, the JSIT was compared with the Maximal Lactate Steady State (MLSS), which is the gold-standard procedure for measuring aerobic demand. Eight well-trained male competitive judo players (24.3 ± 7.9 years; height of 169.3 ± 6.7 cm; fat mass of 12.7 ± 3.9%) performed a maximal incremental specific test for judo to assess the RCT and performed a 30-minute MLSS test, with both tests mimicking the Uchi-Komi drills. Results: The intensity at RCT measured on the JSIT was not significantly different from that at MLSS (p = 0.40). In addition, a high and significant correlation between MLSS and RCT was observed (r = 0.90, p = 0.002), as well as high agreement. Conclusions: RCT measured during the JSIT is a valid procedure to measure aerobic demand, respecting the ecological validity of judo. PMID:25332923
Steps to Independent Living Series.
ERIC Educational Resources Information Center
Lobb, Nancy
This set of six activity books and a teacher's guide is designed to help students from eighth grade to adulthood with special needs to learn independent living skills. The activity books have a reading level of 2.5 and address: (1) "How to Get Well When You're Sick or Hurt," including how to take a temperature, see a doctor, and use medicines…
Calculating dispersion interactions using maximally localized Wannier functions.
Andrinopoulos, Lampros; Hine, Nicholas D M; Mostofi, Arash A
2011-10-21
We investigate a recently developed approach [P. L. Silvestrelli, Phys. Rev. Lett. 100, 053002 (2008); J. Phys. Chem. A 113, 5224 (2009)] that uses maximally localized Wannier functions to evaluate the van der Waals contribution to the total energy of a system calculated with density-functional theory. We test it on a set of atomic and molecular dimers of increasing complexity (argon, methane, ethene, benzene, phthalocyanine, and copper phthalocyanine) and demonstrate that the method, as originally proposed, has a number of shortcomings that hamper its predictive power. In order to overcome these problems, we have developed and implemented a number of improvements to the method and show that these modifications give rise to calculated binding energies and equilibrium geometries that are in closer agreement to results of quantum-chemical coupled-cluster calculations. PMID:22029295
ERIC Educational Resources Information Center
Elkins, Aaron J.
1977-01-01
The author questions the extent to which educators have relied on "relevance" and learner participation in objective-setting in the past decade. He describes a useful approach to learner-oriented evaluation in which content relevance was not judged by participants until after they had been exposed to it. (MF)
Alkner, Björn A; Berg, Hans E; Kozlovskaya, Inessa; Sayenko, Dimitri; Tesch, Per A
2003-09-01
The efficacy of a resistance exercise paradigm, using a gravity-independent flywheel principle, was examined in four men subjected to 110 days of confinement (simulation of flight of international crew on space station; SFINCSS-99). Subjects performed six upper- and lower-body exercises (calf raise, squat, back extension, seated row, lateral shoulder raise, biceps curl) 2-3 times weekly during the confinement. The exercise regimen consisted of four sets of ten repetitions of each exercise at estimated 80-100% of maximal effort. Work was measured and recorded in each exercise session. Maximal voluntary isometric force in the calf press, squat and back extension, was assessed at three different joint angles before and after confinement. Overall, the training load (work) increased in all subjects (range 16-108%) over the course of the intervention. Maximal voluntary isometric force was unchanged following confinement. Although the perceived level of strain and comfort varied between exercises and among individuals, the results of the present study suggest this resistance exercise regimen is effective in maintaining or even increasing performance and maximal force output during long-term confinement. These findings should be considered in the design of resistance exercise hardware and prescriptions to be employed on the International Space Station. PMID:12783231
CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design
Zhang, Shaoqiang; Chen, Yong
2016-01-01
A set of conserved binding sites recognized by a transcription factor is called a motif, which can be found by many applications of comparative genomics for identifying over-represented segments. Moreover, when numerous putative motifs are predicted from a collection of genome-wide data, their similarity data can be represented as a large graph, where these motifs are connected to one another. However, an efficient clustering algorithm is desired for clustering the motifs that belong to the same groups and separating the motifs that belong to different groups, or even deleting an amount of spurious ones. In this work, a new motif clustering algorithm, CLIMP, is proposed by using maximal cliques and sped up by parallelizing its program. When a synthetic motif dataset from the database JASPAR, a set of putative motifs from a phylogenetic foot-printing dataset, and a set of putative motifs from a ChIP dataset are used to compare the performances of CLIMP and two other high-performance algorithms, the results demonstrate that CLIMP mostly outperforms the two algorithms on the three datasets for motif clustering, so that it can be a useful complement of the clustering procedures in some genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html. PMID:27487245
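The maximal-clique building block that CLIMP parallelizes can be illustrated with a stdlib-only Bron-Kerbosch enumeration on a toy similarity graph (the graph below is hypothetical; CLIMP itself is the authors' optimized implementation):

```python
def maximal_cliques(adj):
    """Enumerate all maximal cliques of an undirected graph given as an
    adjacency dict {vertex: set_of_neighbors}, using the Bron-Kerbosch
    algorithm with pivoting."""
    cliques = []
    def expand(R, P, X):
        if not P and not X:
            cliques.append(sorted(R))   # R cannot be extended: maximal
            return
        # Pivot on the vertex covering the most of P to prune branches.
        pivot = max(P | X, key=lambda v: len(adj[v] & P))
        for v in list(P - adj[pivot]):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    expand(set(), set(adj), set())
    return sorted(cliques)

# Toy motif-similarity graph: two triangles joined by a bridge edge 3-4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
       4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
cliques = maximal_cliques(adj)
```

In a motif-clustering setting, each maximal clique corresponds to a group of mutually similar motifs; CLIMP's contribution lies in merging such cliques into clusters and parallelizing the enumeration for large graphs.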
Hofer, Scott M; Piccinin, Andrea M
2009-06-01
Replication of research findings across independent longitudinal studies is essential for a cumulative and innovative developmental science. Meta-analysis of longitudinal studies is often limited by the amount of published information on particular research questions, the complexity of longitudinal designs and the sophistication of analyses, and practical limits on full reporting of results. In many cases, cross-study differences in sample composition and measurements impede or lessen the utility of pooled data analysis. A collaborative, coordinated analysis approach can provide a broad foundation for cumulating scientific knowledge by facilitating efficient analysis of multiple studies in ways that maximize comparability of results and permit evaluation of study differences. The goal of such an approach is to maximize opportunities for replication and extension of findings across longitudinal studies through open access to analysis scripts and output for published results, permitting modification, evaluation, and extension of alternative statistical models and application to additional data sets. Drawing on the cognitive aging literature as an example, the authors articulate some of the challenges of meta-analytic and pooled-data approaches and introduce a coordinated analysis approach as an important avenue for maximizing the comparability, replication, and extension of results from longitudinal studies. PMID:19485626
Heart Rate Recovery Is Impaired After Maximal Exercise Testing in Children with Sickle Cell Anemia
Alvarado, Anthony M.; Ward, Kendra M.; Muntz, Devin S.; Thompson, Alexis A.; Rodeghier, Mark; Fernhall, Bo; Liem, Robert I.
2014-01-01
Objective: To examine heart rate recovery (HRR) as an indicator of autonomic nervous system (ANS) dysfunction following maximal exercise testing in children and young adults with sickle cell anemia (SCA). Study design: Recovery-phase heart rate (HR) in the first 5 minutes following maximal exercise testing was assessed in 60 subjects with SCA and 30 matched controls without SCA. The difference between maximal HR and HR at both 1-minute (ΔHR1min) and 2-minute (ΔHR2min) recovery was our primary outcome. Results: Compared with controls, subjects with SCA demonstrated significantly smaller mean ΔHR1min (23 bpm, 95% CI [20, 26] vs. 32 bpm, 95% CI [26, 37], p = 0.006) and ΔHR2min (39 bpm, 95% CI [36, 43] vs. 48 bpm, 95% CI [42, 53], p = 0.011). Subjects with SCA also showed smaller mean changes in HR from peak HR to 1 minute, from 1 minute to 2 minutes, and from 2 through 5 minutes of recovery by repeated-measures testing. In a multivariable regression model, older age was independently associated with smaller ΔHR1min in subjects with SCA. Cardiopulmonary fitness and hydroxyurea use, however, were not independent predictors of ΔHR1min. Conclusions: Children with SCA demonstrate impaired HRR following maximal exercise. Reduced post-exercise HRR in SCA suggests impaired parasympathetic function, which may become progressively worse with age, in this population. PMID:25477159
Seizures and Teens: Maximizing Health and Safety
ERIC Educational Resources Information Center
Sundstrom, Diane
2007-01-01
The job of parents and caregivers is to help their children become happy, healthy, and productive members of society. They try to balance the desire to protect their children with the children's need to become independent young adults. This can be a struggle for parents of teens with seizures, given the many challenges these teens may face. Teenagers…
Dispatch Scheduling to Maximize Exoplanet Detection
NASA Astrophysics Data System (ADS)
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
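The core of a dispatch scheduler of the kind described above, score every target with a weighting function at each decision time and observe the best one, can be sketched as follows. The weight terms used here (a cadence "overdue" factor times a priority) are invented for illustration and are not MINERVA's actual weighting function.

```python
import math

# Hypothetical target records; real schedulers also weight airmass,
# slew time, brightness, etc.
def weight(target, now):
    """Score a target: favor overdue, high-priority, observable targets."""
    if not target["observable"]:
        return -math.inf
    overdue = (now - target["last_obs"]) / target["cadence"]  # >1 means overdue
    return overdue * target["priority"]

def dispatch(targets, now):
    """Pick the highest-weight observable target, or None if nothing qualifies."""
    best = max(targets, key=lambda t: weight(t, now), default=None)
    if best is None or weight(best, now) == -math.inf:
        return None
    return best

targets = [
    {"name": "HD 1", "last_obs": 0.0, "cadence": 1.0, "priority": 1.0, "observable": True},
    {"name": "HD 2", "last_obs": 0.5, "cadence": 1.0, "priority": 2.0, "observable": True},
    {"name": "HD 3", "last_obs": 0.0, "cadence": 1.0, "priority": 5.0, "observable": False},
]
print(dispatch(targets, now=2.0)["name"])  # → HD 2
```

Because the weights are recomputed at every decision time, the queue adapts automatically when weather or target visibility changes, which is the point of dispatch (as opposed to fixed-sequence) scheduling.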
Characterizing maximally singular phase-space distributions
NASA Astrophysics Data System (ADS)
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.
SETS. Set Equation Transformation System
Worrel, R.B.
1992-01-13
SETS is used for symbolic manipulation of Boolean equations, particularly the reduction of equations by the application of Boolean identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze noncoherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system.
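The central reduction SETS performs, simplifying sum-of-products Boolean equations by identities such as absorption (A + A·B = A), can be illustrated with a minimal sketch. This is not SETS itself; it is the textbook step of pruning non-minimal cut sets from a fault-tree solution.

```python
def minimize_cut_sets(cut_sets):
    """Apply the absorption identity A + A*B = A: drop any cut set
    that properly contains another, then deduplicate."""
    sets = [frozenset(s) for s in cut_sets]
    minimal = {s for s in sets if not any(t < s for t in sets)}
    return sorted(minimal, key=sorted)

# {"a","b"} is absorbed by {"a"}; {"b","c","d"} is absorbed by {"b","c"}.
print(minimize_cut_sets([{"a"}, {"a", "b"}, {"b", "c"}, {"b", "c", "d"}]))
```

For coherent fault trees the surviving sets are exactly the minimal cut sets; for noncoherent trees (with complemented events), as the abstract notes, one must instead compute prime implicants, which this sketch does not cover.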
ERIC Educational Resources Information Center
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
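The entropy-maximization correspondence invoked above is the standard one: maximizing Shannon entropy under a fixed second moment yields the Gauss law, while the same maximization under a fixed mean absolute deviation yields the Laplace law.

```latex
\max_{p}\; -\!\int p(x)\ln p(x)\,dx
\quad \text{s.t.} \quad \int x^{2}\,p(x)\,dx = \sigma^{2}
\;\Longrightarrow\;
p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^{2}/2\sigma^{2}} \quad \text{(Gauss)},

\max_{p}\; -\!\int p(x)\ln p(x)\,dx
\quad \text{s.t.} \quad \int |x|\,p(x)\,dx = d
\;\Longrightarrow\;
p(x) = \frac{1}{2d}\, e^{-|x|/d} \quad \text{(Laplace)}.
```

Both follow from the same Lagrange-multiplier calculation; only the constraint functional (x² versus |x|) changes, which is what the "equality-maximization" analogy exploits on the logarithmic scale.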
Distributed instruction set computer
Wang, L.
1989-01-01
The Distributed Instruction Set Computer, or DISC for short, is an experimental computer system for fine-grained parallel processing. DISC employs a new parallel instruction set, an Early Binding and Scheduling data tagging scheme, and a distributed control mechanism to explore a software dataflow control method in a multiple-functional-unit system. With zero system control overhead, multiple instructions are executed in parallel and/or out of order at a peak rate of n instructions/cycle, where n is the number of functional units. The quantitative simulation result indicates that a DISC system with 16 functional units can deliver a maximal 7.7X performance speedup over a single-functional-unit system at the same clock speed. Exploring a new parallel instruction set and distributed control mechanism, DISC represents three major breakthroughs in the domain of fine-grained parallel processing: (1) a fast multiple-instruction issuing mechanism; (2) parallel and/or out-of-order execution; (3) a software dataflow control scheme.
Data compression preserving statistical independence
NASA Technical Reports Server (NTRS)
Morduch, G. E.; Rice, W. M.
1973-01-01
The purpose of this study was to determine the optimum points of evaluation of data compressed by means of polynomial smoothing. It is shown that a set Y of m statistically independent observations Y(t_1), Y(t_2), ..., Y(t_m) of a quantity X(t), which can be described by an (n-1)th degree polynomial in time, may be represented by a set Z of n statistically independent compressed observations Z(tau_1), Z(tau_2), ..., Z(tau_n), such that the compressed set Z has the same information content as the observed set Y. The times tau_1, tau_2, ..., tau_n are the zeros of an nth degree polynomial P_n, to whose definition and properties the bulk of this report is devoted. The polynomials P_n are defined as functions of the observation times t_1, t_2, ..., t_m, and it is interesting to note that if the observation times are continuously distributed, the polynomials P_n degenerate to Legendre polynomials. The proposed data compression scheme is a little more complex than those usually employed, but has the advantage of preserving all the information content of the original observations.
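The continuous-limit case mentioned above (evaluation nodes degenerating to Legendre zeros) can be sketched numerically: fit the degree-(n-1) smoothing polynomial to m noisy samples, evaluate it at the n Gauss-Legendre nodes, and note that those n values reproduce the fit exactly. The particular polynomial and noise level here are illustrative only.

```python
import numpy as np

# Compress m noisy samples of a cubic (n = 4 coefficients) on [-1, 1]
# down to n values at the zeros of the Legendre polynomial P_n.
m, n = 100, 4
t = np.linspace(-1.0, 1.0, m)
rng = np.random.default_rng(0)
y = 1.0 + 2.0 * t - 0.5 * t**2 + 0.3 * t**3 + rng.normal(0, 0.01, m)

coeffs = np.polyfit(t, y, n - 1)                 # least-squares smoothing polynomial
tau = np.polynomial.legendre.leggauss(n)[0]      # zeros of P_n on [-1, 1]
z = np.polyval(coeffs, tau)                      # the n compressed observations

# n point values determine a degree-(n-1) polynomial uniquely, so the
# compressed set carries the full information of the smoothed fit.
recovered = np.polyfit(tau, z, n - 1)
assert np.allclose(recovered, coeffs)
```

For finitely many, non-uniform observation times the report's polynomials P_n differ from the Legendre polynomials, so this sketch covers only the limiting case named in the abstract.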
NASA Astrophysics Data System (ADS)
Zhang, Jun; Nan, Hua; Tao, Yuan-Hong; Fei, Shao-Ming
2016-02-01
The mutual unbiasedness between a maximally entangled basis (MEB) and an unextendible maximally entangled system (UMES) in the bipartite system C^2 ⊗ C^{2k} (k > 1) is introduced and discussed first in this paper. Then two mutually unbiased pairs of a maximally entangled basis and an unextendible maximally entangled system are constructed; lastly, explicit constructions are obtained for mutually unbiased MEBs and UMESs in C^2 ⊗ C^4 and C^2 ⊗ C^8, respectively.
Karbowski, Jan
2015-01-01
The structure and quantitative composition of the cerebral cortex are interrelated with its computational capacity. Empirical data analyzed here indicate a certain hierarchy in local cortical composition. Specifically, neural wire, i.e., axons and dendrites, each take about 1/3 of cortical space, spines and glia/astrocytes each occupy about (1/3)^2, and capillaries around (1/3)^4. Moreover, data analysis across species reveals that these fractions are roughly brain-size independent, which suggests that they could be in some sense optimal and thus important for brain function. Is there any principle that sets them in this invariant way? This study first builds a model of a local circuit in which neural wire, spines, astrocytes, and capillaries are mutually coupled elements treated within a single mathematical framework. Next, various forms of the wire minimization rule (wire length, surface area, volume, or conduction delays) are analyzed, of which only minimization of wire volume provides realistic results that are very close to the empirical cortical fractions. As an alternative, a new principle called “spine economy maximization” is proposed and investigated, which is associated with maximization of spine proportion in the cortex per spine size and yields equally good but more robust results. Additionally, a combination of the wire cost and spine economy notions is considered as a meta-principle, and it is found that this proposition gives only marginally better results than either pure wire volume minimization or pure spine economy maximization, but only if the spine economy component dominates. However, such a combined meta-principle yields much better results than constraints related solely to minimization of wire length, wire surface area, and conduction delays. Interestingly, the type of spine size distribution also plays a role, and better agreement with the data is achieved for distributions with long tails. In sum, these results suggest that for the
Romano, Raffaele; Loock, Peter van
2010-07-15
Quantum teleportation enables deterministic and faithful transmission of quantum states, provided a maximally entangled state is preshared between sender and receiver, and a one-way classical channel is available. Here, we prove that these resources are not only sufficient, but also necessary, for deterministically and faithfully sending quantum states through any fixed noisy channel of maximal rank, when a single use of the channel is admitted. In other words, for this family of channels, there are no other protocols, based on different (and possibly cheaper) sets of resources, capable of replacing quantum teleportation.
Maximal entanglement concentration for a set of (n+1)-qubit states
NASA Astrophysics Data System (ADS)
Banerjee, Anindita; Shukla, Chitra; Pathak, Anirban
2015-12-01
We propose two schemes for concentration of (n+1)-qubit entangled states that can be written in the form (α|φ₀⟩|0⟩ + β|φ₁⟩|1⟩)_{n+1}, where |φ₀⟩ and |φ₁⟩ are mutually orthogonal n-qubit states. The importance of this general form is that entangled states such as Bell, cat, GHZ, GHZ-like, |Ω⟩, |Q₅⟩, and 4-qubit cluster states, as well as specific states from the nine SLOCC-nonequivalent families of 4-qubit entangled states, can be expressed in this form. The proposed entanglement concentration protocol (ECP) is based on local operations and classical communication (LOCC). It is shown that the maximum success probability of the ECP using the quantum nondemolition (QND) technique is 2β² for (n+1)-qubit states of the prescribed form. It is shown that the proposed schemes can be implemented optically; further, it is noted that they can also be implemented using quantum dot and microcavity systems.
ERIC Educational Resources Information Center
McCook, Byron Alexander
2009-01-01
Pennsylvania public school districts are largely funded through basic education subsidy for providing educational services for resident students and non-resident students who are placed in residential programs within the school district boundaries. Non-resident placements occur through, but are not limited to, adjudication proceedings, foster home…
Energy Efficiency Maximization of Practical Wireless Communication Systems
NASA Astrophysics Data System (ADS)
Eraslan, Eren
Energy consumption of modern wireless communication systems is rapidly growing due to the ever-increasing data demand and the advanced solutions employed in order to address this demand, such as multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. These MIMO systems are power hungry; however, they are capable of changing the transmission parameters, such as the number of spatial streams, number of transmitter/receiver antennas, modulation, code rate, and transmit power. They can thus choose the best mode out of possibly thousands of modes in order to optimize an objective function. This problem is referred to as the link adaptation problem. In this work, we focus on link adaptation for the energy efficiency maximization problem, which is defined as choosing the optimal transmission mode to maximize the number of successfully transmitted bits per unit energy consumed by the link. We model the energy consumption and throughput performance of a MIMO-OFDM link and develop a practical link adaptation protocol, which senses the channel conditions and changes its transmission mode in real time. It turns out that the brute-force search, which is usually assumed in previous works, is prohibitively complex, especially when there are large numbers of transmit power levels to choose from. We analyze the relationship between energy efficiency and transmit power, and prove that the energy efficiency of a link is a single-peaked quasiconcave function of transmit power. This leads us to develop a low-complexity algorithm that finds a near-optimal transmit power and takes this dimension out of the search space. We further prune the search space by analyzing the singular value decomposition of the channel and excluding the modes that use a higher number of spatial streams than the channel can support. These algorithms and our novel formulations provide simpler computations and limit the search space into a much smaller set; hence
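The single-peaked (quasiconcave) property is what licenses a low-complexity peak search over transmit power instead of a brute-force scan. A generic sketch, with a toy efficiency model (Shannon-style throughput per total power, a fixed overhead term standing in for circuit power, none of it the paper's actual model):

```python
import math

def argmax_unimodal(f, lo, hi, tol=1e-6):
    """Ternary search: locate the peak of a single-peaked function
    on [lo, hi] in O(log((hi-lo)/tol)) evaluations."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1   # peak lies right of m1
        else:
            hi = m2   # peak lies left of m2
    return (lo + hi) / 2

# Toy efficiency: bits/s per unit power, with unit circuit-power overhead.
# Analytically the peak is at p = e - 1.
eff = lambda p: math.log2(1 + p) / (p + 1.0)
p_star = argmax_unimodal(eff, 0.0, 100.0)
```

Each ternary-search step discards a third of the interval, so even a fine-grained power range costs only a few dozen evaluations, versus one per candidate level for exhaustive search.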
Expectation-Maximization Binary Clustering for Behavioural Annotation.
Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631
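As a minimal illustration of the EM core that EMC builds on (and that EMbC modifies with binary delimiters), here is expectation-maximization for a two-component one-dimensional Gaussian mixture; the initialization and data are arbitrary, and this is not the EMbC algorithm itself.

```python
import math, random

def em_gmm_1d(xs, iters=200):
    """EM for a two-component 1-D Gaussian mixture via maximum likelihood."""
    mu = [min(xs), max(xs)]          # crude but serviceable initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: closed-form parameter updates from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk + 1e-9
            pi[k] = nk / len(xs)
    return mu, var, pi

random.seed(1)
xs = ([random.gauss(0.0, 1.0) for _ in range(300)]
      + [random.gauss(6.0, 1.0) for _ in range(300)])
mu, var, pi = em_gmm_1d(xs)   # means converge near 0 and 6
```

The closed-form M-step is what keeps the per-iteration cost low, the property the abstract cites as making EM-style clustering computationally reasonable for long movement trajectories.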
Maximizing NGL recovery by refrigeration optimization
Baldonedo H., A.H.
1999-07-01
PDVSA--Petroleo y Gas, S.A. has within its facilities in Lake Maracaibo two plants that extract natural gas liquids (NGL); they use mechanical refrigeration combined with absorption with natural gasoline. Each of these plants processes 420 MMscfd at a pressure of 535 psig and 95 F coming from the compression plants PCTJ-2 and PCTJ-3, respectively. About 40 MMscfd of additional rich gas comes from the high-pressure system. Under present conditions these plants produce on the order of 16,800 and 23,800 b/d of NGL, respectively, with a propane recovery of approximately 75%, limited by the capacity of the refrigeration system. To optimize the operation and design of the refrigeration system and to maximize NGL recovery, a conceptual study was developed in which the following aspects of the process were evaluated: capacity of the refrigeration system, refrigeration requirements, identification of limitations, and evaluation of system improvements. Based on the results obtained, it was concluded that by relocating some condensers, refurbishing the main refrigeration-system turbines, and using HIGH FLUX piping in the auxiliary refrigeration system of the evaporators, propane recovery would increase to 85%, with an additional production of 25,000 b/d of NGL and 15 MMscfd of ethane-rich gas.
Maximizing exosome colloidal stability following electroporation.
Hood, Joshua L; Scott, Michael J; Wickline, Samuel A
2014-03-01
Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo. PMID:24333249
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which focuses on both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper a clustering and classification technique (Expectation Maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with between 1% and 10% embedding percentage. Based on the results it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
Independence of Internal Auditors.
ERIC Educational Resources Information Center
Montondon, Lucille; Meixner, Wilda F.
1993-01-01
A survey of 288 college and university auditors investigated patterns in their appointment, reporting, and supervisory practices as indicators of independence and objectivity. Results indicate a weakness in the positioning of internal auditing within institutions, possibly compromising auditor independence. Because the auditing function is…
American Independence. Fifth Grade.
ERIC Educational Resources Information Center
Crosby, Annette
This fifth grade teaching unit covers early conflicts between the American colonies and Britain, battles of the American Revolutionary War, and the Declaration of Independence. Knowledge goals address the pre-revolutionary acts enforced by the British, the concepts of conflict and independence, and the major events and significant people from the…
Fostering Musical Independence
ERIC Educational Resources Information Center
Shieh, Eric; Allsup, Randall Everett
2016-01-01
Musical independence has always been an essential aim of musical instruction. But this objective can refer to everything from high levels of musical expertise to more student choice in the classroom. While most conceptualizations of musical independence emphasize the demonstration of knowledge and skills within particular music traditions, this…
Centering on Independent Study.
ERIC Educational Resources Information Center
Miller, Stephanie
Independent study is an instructional approach that can have enormous power in the classroom. It can be used successfully with students at all ability levels, even though it is often associated with gifted students. Independent study is an opportunity for students to study a subject of their own choosing under the guidance of a teacher. The…
Metabolomics of aerobic metabolism in mice selected for increased maximal metabolic rate
Wone, Bernard; Donovan, Edward R.; Hayes, Jack P.
2014-01-01
Maximal aerobic metabolic rate (MMR) is an important physiological and ecological variable that sets an upper limit to sustained, vigorous activity. How the oxygen cascade from the external environment to the mitochondria may affect MMR has been the subject of much interest, but little is known about the metabolic profiles that underpin variation in MMR. We tested how seven generations of artificial selection for high mass-independent MMR affected metabolite profiles of two skeletal muscles (gastrocnemius and plantaris) and the liver. MMR was 12.3% higher in mice selected for high MMR than in controls. Basal metabolic rate was 3.5% higher in selected mice than in controls. Artificial selection did not lead to detectable changes in the metabolic profiles of plantaris muscle, but in the liver amino acids and tricarboxylic acid cycle (TCA cycle) metabolites were lower in high-MMR mice than in controls. In gastrocnemius, amino acids and TCA cycle metabolites were higher in high-MMR mice than in controls, indicating elevated amino acid and energy metabolism. Moreover, in gastrocnemius free fatty acids and triacylglycerol fatty acids were lower in high-MMR mice than in controls. Because selection for high MMR was associated with changes in the resting metabolic profile of both liver and gastrocnemius, the results suggest a possible mechanistic link between resting metabolism and MMR. In addition, it is well established that diet and exercise affect the composition of fatty acids in muscle. The differences that we found between control lines and lines selected for high MMR demonstrate that the composition of fatty acids in muscle is also affected by genetic factors. PMID:21982590
Scheinker, Alexander; Baily, Scott; Young, Daniel; Kolski, Jeffrey S.; Prokop, Mark
2014-08-01
In this work, an implementation of a recently developed model-independent adaptive control scheme, for tuning uncertain and time varying systems, is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, ability to handle an arbitrary number of components without increased complexity, and the approach is extremely robust to measurement noise, a property which is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm for simultaneous tuning of two buncher radio frequency (RF) cavities, in order to maximize beam acceptance into the accelerating electromagnetic field cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream from the two bunching cavities. The algorithm automatically responds to arbitrary phase shift of the cavity phases, automatically re-tuning the cavity settings and maximizing beam acceptance. Because it is model independent it can be utilized for continuous adaptation to time-variation of a large system, such as due to thermal drift, or damage to components, in which the remaining, functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.
Maximal yields from multispecies fisheries systems: rules for systems with multiple trophic levels.
Matsuda, Hiroyuki; Abrams, Peter A
2006-02-01
Increasing centralization of the control of fisheries combined with increased knowledge of food-web relationships is likely to lead to attempts to maximize economic yield from entire food webs. With the exception of predator-prey systems, we lack any analysis of the nature of such yield-maximizing strategies. We use simple food-web models to investigate the nature of yield- or profit-maximizing exploitation of communities including two types of three-species food webs and a variety of six-species systems with as many as five trophic levels. These models show that, for most webs, relatively few species are harvested at equilibrium and that a significant fraction of the species is lost from the web. These extinctions occur for two reasons: (1) indirect effects due to harvesting of species that had positive effects on the extinct species, and (2) intentional eradication of species that are not themselves valuable, but have negative effects on more valuable species. In most cases, the yield-maximizing harvest involves taking only species from one trophic level. In no case was an unharvested top predator part of the yield-maximizing strategy. Analyses reveal that the existence of direct density dependence in consumers has a large effect on the nature of the optimal harvest policy, typically resulting in harvest of a larger number of species. A constraint that all species must be retained in the system (a "constraint of biodiversity conservation") usually increases the number of species and trophic levels harvested at the yield-maximizing policy. The reduction in total yield caused by such a constraint is modest for most food webs but can be over 90% in some cases. Independent harvesting of species within the web can also cause extinctions but is less likely to do so. PMID:16705975
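For orientation, the single-stock logistic baseline that such multispecies analyses generalize (not the food-web models of this paper) already fixes the classic benchmark: surplus production r·x·(1 − x/K) peaks at stock level K/2, giving maximum sustainable yield rK/4.

```python
# Logistic surplus-production model with illustrative parameters.
r, K = 0.8, 1000.0
surplus = lambda x: r * x * (1 - x / K)   # equilibrium yield at stock level x

# Grid search over stock levels confirms the analytic optimum x* = K/2.
xs = [K * i / 1000 for i in range(1001)]
x_msy = max(xs, key=surplus)
print(x_msy, surplus(x_msy))   # → 500.0 200.0  (K/2 and rK/4)
```

The paper's point is that this one-species logic breaks down in webs: maximizing yield over coupled species can make eradicating low-value competitors or predators "optimal", which is why the biodiversity constraint discussed above changes the harvest policy.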
Maximizing Experiential Learning for Student Success
ERIC Educational Resources Information Center
Coker, Jeffrey Scott; Porter, Desiree Jasmine
2015-01-01
Several years ago, Elon University set out to better understand experiential learning on campus. At the time, there was a pragmatic need to collect data that would inform revisions to the core curriculum, including an experiential-learning requirement (ELR) that had been in place since 1994. The question was whether it made sense to raise the…
Matching, Demand, Maximization, and Consumer Choice
ERIC Educational Resources Information Center
Wells, Victoria K.; Foxall, Gordon R.
2013-01-01
The use of behavioral economics and behavioral psychology in consumer choice has been limited. The current study extends the study of consumer behavior analysis, a synthesis between behavioral psychology, economics, and marketing, to a larger data set. This article presents the current work and results from the early analysis of the data. We…
Glacier Surface Monitoring by Maximizing Mutual Information
NASA Astrophysics Data System (ADS)
Erten, E.; Rossi, C.; Hajnsek, I.
2012-07-01
The contribution of Polarimetric Synthetic Aperture Radar (PolSAR) images, compared with single-channel SAR, to temporal scene characterization has been found in the literature to add valuable information. However, despite a number of recent studies focusing on single-polarized glacier monitoring, the potential of polarimetry to estimate the surface velocity of glaciers has not been explored, due to the complex mechanism of polarization through glacier/snow. In this paper, a new approach to the problem of monitoring glacier surface velocity is proposed by means of temporal PolSAR images, using a basic concept from information theory: Mutual Information (MI). The proposed polarimetric tracking method applies the MI to measure the statistical dependence between temporal polarimetric images, which is assumed to be maximal if the images are geometrically aligned. Since the proposed polarimetric tracking method is very general, it can be applied to any kind of multivariate remote sensing data, such as multi-spectral optical and single-channel SAR images. The proposed polarimetric tracking is then used to retrieve the surface velocity of the Aletsch glacier in Switzerland and of the Inyltshik glacier in Kyrgyzstan with two different SAR sensors: the Envisat C-band (single-polarized) and the DLR airborne L-band (fully polarimetric) systems, respectively. Investigation of the effect of the number of channels (polarimetry) on tracking demonstrated that the presence of snow, as expected, affects the location of the phase center in different polarizations, e.g., glacier tracking with temporal HH compared to temporal VV channels. In short, a change in the polarimetric signature of the scatterer can change the phase center, raising the question of how much of the observed displacement is motion and how much is penetration. In this paper, it is shown that, by considering the multi-channel SAR statistics, it is possible to separate these contributions.
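The MI criterion at the heart of such tracking can be sketched as follows; the histogram-based estimator and the toy integer-shift search below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """MI of two equally sized arrays, estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy "tracking": find the integer offset that maximizes MI between a
# reference patch and windows of the scene; MI peaks where they align.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
ref = scene[16:48, 16:48]  # patch extracted at offset (16, 16)
offsets = [(r, c) for r in range(8, 25) for c in range(8, 25)]
best = max(offsets,
           key=lambda o: mutual_information(ref, scene[o[0]:o[0] + 32,
                                                       o[1]:o[1] + 32]))
print(best)  # (16, 16)
```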
Rare flavor processes in Maximally Natural Supersymmetry
NASA Astrophysics Data System (ADS)
García, Isabel García; March-Russell, John
2015-01-01
We study CP-conserving rare flavor-violating processes in the recently proposed theory of Maximally Natural Supersymmetry (MNSUSY). MNSUSY is an unusual supersymmetric (SUSY) extension of the Standard Model (SM) which, remarkably, is untuned at present LHC limits. It employs Scherk-Schwarz breaking of SUSY by boundary conditions upon compactifying an underlying 5-dimensional (5D) theory down to 4D, and it is not well described by softly broken SUSY, with phenomenology very different from that of the Minimal Supersymmetric Standard Model (MSSM) and its variants. The usual CP-conserving SUSY-flavor problem is automatically solved in MNSUSY due to a residual, almost exact U(1)_R symmetry, naturally heavy and highly degenerate 1st- and 2nd-generation sfermions, and heavy gauginos and Higgsinos. Depending on the exact implementation of MNSUSY there exist important new sources of flavor violation involving gauge boson Kaluza-Klein (KK) excitations. The spatial localization properties of the matter multiplets, in particular the brane localization of the 3rd-generation states, imply that KK-parity is broken and tree-level contributions to flavor-changing neutral currents are present in general. Nevertheless, we show that simple variants of the basic MNSUSY model are safe from present flavor constraints arising from kaon and B-meson oscillations, the rare decays B_{s,d} → μ⁺μ⁻ and μ → ēee, and μ-e conversion in nuclei. We also briefly discuss some special features of the radiative decay μ → eγ. Future experiments, especially those concerned with lepton flavor violation, should see deviations from SM predictions unless one of the MNSUSY variants with enhanced flavor symmetries is realized.
Expectation maximization applied to GMTI convoy tracking
NASA Astrophysics Data System (ADS)
Koch, Wolfgang
2002-08-01
Collectively moving ground targets are typical of a military ground situation and have to be treated as separate aggregated entities. For a long-range ground surveillance application with airborne GMTI radar we address in particular the task of track maintenance for ground-moving convoys consisting of a small number of individual vehicles. In the proposed approach the identity of the individual vehicles within the convoy is no longer stressed. Their kinematical state vectors are instead treated as internal degrees of freedom characterizing the convoy, which is considered as a collective unit. In this context, the Expectation Maximization (EM) technique, originally developed for incomplete-data problems in statistical inference and first applied to tracking applications by Streit et al., seems to be a promising approach. We suggest embedding the EM algorithm into a more traditional Bayesian tracking framework for dealing with false or unwanted sensor returns. The proposed distinction between external and internal data association conflicts (i.e., those among the convoy vehicles) should also enable the application of the sequential track extraction techniques introduced by van Keuk for aircraft formations, providing estimates of the number of individual convoy vehicles involved. Even with sophisticated signal processing methods (STAP: Space-Time Adaptive Processing), ground-moving vehicles can be masked by the sensor-specific clutter notch (Doppler blinding). This physical phenomenon results in interfering fading effects, which can persist over a long series of sensor updates and will therefore seriously affect track quality unless properly handled. Moreover, for ground-moving convoys the phenomenon of Doppler blindness often superposes the effects induced by the finite resolution capability of the sensor. In many practical cases a separate modeling of resolution phenomena for convoy targets can therefore be omitted, provided the GMTI detection model is used.
Maximal exercise performance after adaptation to microgravity.
Levine, B D; Lane, L D; Watenpaugh, D E; Gaffney, F A; Buckey, J C; Blomqvist, C G
1996-08-01
The cardiovascular system appears to adapt well to microgravity but is compromised on reestablishment of gravitational forces leading to orthostatic intolerance and a reduction in work capacity. However, maximal systemic oxygen uptake (Vo2) and transport, which may be viewed as a measure of the functional integrity of the cardiovascular system and its regulatory mechanisms, has not been systematically measured in space or immediately after return to Earth after spaceflight. We studied six astronauts (4 men and 2 women, age 35-50 yr) before, during, and immediately after 9 or 14 days of microgravity on two Spacelab Life Sciences flights (SLS-1 and SLS-2). Peak Vo2 (Vo2peak) was measured with an incremental protocol on a cycle ergometer after prolonged submaximal exercise at 30 and 60% of Vo2peak. We measured gas fractions by mass spectrometer and ventilation via turbine flowmeter for the calculation of breath-by-breath Vo2, heart rate via electrocardiogram, and cardiac output (Qc) via carbon dioxide rebreathing. Peak power and Vo2 were well maintained during spaceflight and not significantly different compared with 2 wk preflight. Vo2peak was reduced by 22% immediately postflight (P < 0.05), entirely because of a decrease in peak stroke volume and Qc. Peak heart rate, blood pressure, and systemic arteriovenous oxygen difference were unchanged. We conclude that systemic Vo2peak is well maintained in the absence of gravity for 9-14 days but is significantly reduced immediately on return to Earth, most likely because of reduced intravascular blood volume, stroke volume, and Qc. PMID:8872635
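The attribution of the postflight drop entirely to stroke volume and Qc follows from the Fick principle, VO2 = Qc × arteriovenous O2 difference; a minimal sketch with hypothetical numbers (not the study's data):

```python
def fick_vo2(cardiac_output_l_min, avo2_diff_ml_per_l):
    """Fick principle: VO2 (ml/min) = cardiac output (l/min) x a-v O2 diff (ml O2/l)."""
    return cardiac_output_l_min * avo2_diff_ml_per_l

# Hypothetical illustration: a 22% fall in peak cardiac output with an
# unchanged arteriovenous O2 difference produces a 22% fall in VO2peak.
pre = fick_vo2(20.0, 160.0)    # 3200 ml/min
post = fick_vo2(15.6, 160.0)   # 2496 ml/min
print(round(1 - post / pre, 2))  # 0.22
```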
Maximize, minimize or target - optimization for a fitted response from a designed experiment
Anderson-Cook, Christine M.; Cao, Yongtao; Michaela, Christine
2016-04-01
One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.
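The workflow described (fit a response model, then locate the factor settings that optimize the fitted response) can be sketched for a single coded factor with a quadratic model; the data and factor levels below are hypothetical:

```python
import numpy as np

# Hypothetical one-factor experiment: fit y = b0 + b1*x + b2*x^2 by least
# squares, then choose the setting that maximizes the fitted response.
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # coded factor levels
y = np.array([1.1, 2.4, 3.0, 2.6, 0.9])     # measured responses
X = np.column_stack([np.ones_like(x), x, x * x])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted quadratic: dy/dx = b1 + 2*b2*x = 0.
assert b2 < 0, "a maximum requires negative curvature"
x_opt = -b1 / (2.0 * b2)
print(round(x_opt, 3))  # near the center of the design
```

For minimization one requires b2 > 0 instead, and for a target value one solves the fitted model for the setting whose prediction hits the target.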
Energy Science and Technology Software Center (ESTSC)
1994-12-30
Data-machine independence achieved by using four technologies (ASN.1, XDR, SDS, and ZEBRA) has been evaluated by encoding two different applications in each of the above; and their results compared against the standard programming method using C.
NASA Technical Reports Server (NTRS)
1987-01-01
The work done on the Media Independent Interface (MII) Interface Control Document (ICD) program is described, and recommendations based on it are made. Explanations and rationale for the content of the ICD itself are presented.
Who are maximizers? Future oriented and highly numerate individuals.
Misuraca, Raffaella; Teuscher, Ursina; Carmeci, Floriana Antonella
2016-08-01
Two studies investigated cognitive mechanisms that may be associated with people's tendency to maximize. Maximizers are individuals who spend a great deal of effort to find the very best option in a decision situation, rather than stopping the decision process when they encounter a satisfying option. These studies show that maximizers are more future oriented than other people, which may motivate them to invest the extra energy in optimal choices. Maximizers also have higher numerical skills, possibly facilitating the cognitive processes involved in decision trade-offs. PMID:25960435
Explanatory Variance in Maximal Oxygen Uptake
Robert McComb, Jacalyn J.; Roh, Daesung; Williams, James S.
2006-01-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females), ages 18-24 years, underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the submaximal water running protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following the 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg⁻¹·min⁻¹) = 56.14 - 0.92 (%BF). Key points: (1) body fat is an important predictor of VO2max; (2) individuals with a low skill level in water running may shorten their stride length to avoid the onset of fatigue at higher workloads, so the net oxygen cost of the exercise cannot be controlled in individuals inexperienced in water running at fatiguing workloads; (3) experiments using water running protocols to predict VO2max should use individuals trained in the mechanics of water running; (4) a submaximal water running protocol for such trained individuals is needed in the research literature, given the popularity of water running in rehabilitative exercise and training programs. PMID:24260003
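The reported prediction equation is a one-variable linear model and can be applied directly; a minimal sketch:

```python
def predict_vo2max(percent_body_fat):
    """VO2max (ml/kg/min) from the paper's fitted equation: 56.14 - 0.92 * %BF."""
    return 56.14 - 0.92 * percent_body_fat

# Example: a participant with 15% body fat.
print(round(predict_vo2max(15.0), 2))  # 42.34
```

Note the stated standard error of estimate (SEE = 3.27 ml/kg/min) applies to any prediction made with this equation.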
Maximizing usability: the principles of universal design.
Story, M F
1998-01-01
The Center for Universal Design at North Carolina State University has developed a set of seven Principles of Universal Design that may be used to guide the design process, to evaluate existing or new designs, and to teach students and practitioners. This article presents preceding design guidelines and evaluation criteria, describes the process of developing the Principles, lists The Principles of Universal Design and provides examples of designs that satisfy each, and suggests future developments that would facilitate applying the Principles to assess the usability of all types of products and environments. PMID:10181150
Zimmerman, K; Levitis, D; Addicott, E; Pringle, A
2016-02-01
We present a novel algorithm for the design of crossing experiments. The algorithm identifies a set of individuals (a 'crossing-set') from a larger pool of potential crossing-sets by maximizing the diversity of traits of interest, for example, maximizing the range of genetic and geographic distances between individuals included in the crossing-set. To calculate diversity, we use the mean nearest neighbor distance of crosses plotted in trait space. We implement our algorithm on a real dataset of Neurospora crassa strains, using the genetic and geographic distances between potential crosses as a two-dimensional trait space. In simulated mating experiments, crossing-sets selected by our algorithm provide better estimates of underlying parameter values than randomly chosen crossing-sets. PMID:26419337
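The selection criterion, mean nearest-neighbour distance of crosses plotted in trait space, can be sketched as follows; the toy candidate pool and the exhaustive search are illustrative assumptions, not the paper's algorithm or data:

```python
import itertools
import math

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbour in trait space."""
    total = 0.0
    for p in points:
        total += min(math.dist(p, q) for q in points if q is not p)
    return total / len(points)

# Toy pool of candidate crosses in a 2-D (genetic, geographic) trait space.
pool = [(0.0, 0.0), (0.1, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]

# Exhaustive search over crossing-sets of size 4 (feasible for small pools);
# the winning set spreads its points apart, maximizing trait diversity.
best = max(itertools.combinations(pool, 4), key=mean_nn_distance)
print(sorted(best))  # the four well-separated corner points
```

For realistically large pools an exhaustive search is infeasible, and a heuristic (e.g. greedy or stochastic) search over candidate crossing-sets would replace the `max` over all combinations.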
A Classroom Tariff-Setting Game
ERIC Educational Resources Information Center
Winchester, Niven
2006-01-01
The author outlines a classroom tariff-setting game that allows students to explore the consequences of import tariffs imposed by large countries (countries able to influence world prices). Groups of students represent countries, which are organized into trading pairs. Each group's objective is to maximize welfare by choosing an appropriate ad…
Maximizing production of Penicillium cyclopium partial acylglycerol lipase.
Vanot, G; Valérie, D; Guilhem, M-C; Phan Tan Luu, R; Comeau, L-C
2002-12-01
Penicillium cyclopium partial acylglycerol lipase production was maximized in shaken batch culture. The effect of inoculum size and substrate concentration on the lipase activity released in the culture medium was visualized using a surface response methodology based on a Doehlert experimental design. The main advantage of this approach is the low number of experiments required to construct a predictive model of the experimental domain. Substrate percentage (corn steep, w/v) ranged from 0.1% to 1.9% and inoculum from 100 spores/ml to 3,200 spores/ml. We determined that an optimal set of experimental conditions for high lipase production was 1.0% substrate and 3,200 spores/ml, with initial pH 5.0, temperature 25 degrees C and shaking speed 120 rpm. Between the conditions giving the minimum and the maximum lipase production, we observed a three-fold increase in both the predicted and the measured values. PMID:12466881
Laboratory stabilization/solidification of tank sludges: maximizing sludge loading.
Spence, R D; Mattus, A J
2004-03-01
Highly radioactive, mixed-waste sludges that have been collected in tanks at Oak Ridge over several decades are being combined for treatment and disposal. Stabilization of the sludges in the different tank sets was tested prior to the proposed combination and treatment. This paper is the third in a series on the laboratory stabilization/solidification of these tank sludges. It discusses efforts to maximize the sludge loading with no strength criterion for the grout formulation. Grout formulations were tested in the laboratory both with surrogates and with actual samples of tank sludge. Hydrogels eliminated free water generation even at sludge loadings of >90 wt%, although strong monoliths did not form at such high loadings. Correlations established that the chromium and mercury performance of the surrogates in the Toxicity Characteristic Leaching Procedure depended on the slag content of the grout, while the lead performance depended on the extract pH. The surrogate sludge loading was limited by the chromate content to about 90 wt% while still meeting Universal Treatment Standard limits. However, tests with actual sludges at such high loadings revealed problems with lead and silver stabilization that were not experienced in the surrogate testing. PMID:15036695
Entropy maximization and the spatial distribution of species.
Haegeman, Bart; Etienne, Rampal S
2010-04-01
Entropy maximization (EM, also known as MaxEnt) is a general inference procedure that originated in statistical mechanics. It has been applied recently to predict ecological patterns, such as species abundance distributions and species-area relationships. It is well known in physics that the EM result strongly depends on how elementary configurations are described. Here we argue that the same issue is also of crucial importance for EM applications in ecology. To illustrate this, we focus on the EM prediction of species-level spatial abundance distributions. We show that the EM outcome depends on (1) the choice of configuration set, (2) the way constraints are imposed, and (3) the scale on which the EM procedure is applied. By varying these choices in the EM model, we obtain a large range of EM predictions. Interestingly, they correspond to spatial abundance distributions that have been derived previously from mechanistic models. We argue that the appropriate choice of the EM model assumptions is nontrivial and can be determined only by comparison with empirical data. PMID:20166816
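The basic MaxEnt recipe, maximizing Shannon entropy subject to constraints, can be illustrated with the textbook case of a fixed mean over discrete states, whose solution is Boltzmann-like; a minimal numerical sketch (not one of the paper's ecological models):

```python
import numpy as np

# Maximum-entropy distribution over states 0..n-1 subject to a mean
# constraint: the solution has exponential form p_k ∝ exp(-lam * k),
# with the Lagrange multiplier lam fixed by the constraint.
n, target_mean = 10, 2.0
states = np.arange(n)

def mean_for(lam):
    w = np.exp(-lam * states)
    p = w / w.sum()
    return float((states * p).sum())

# mean_for is monotonically decreasing in lam, so bisection finds lam.
lo, hi = -5.0, 5.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(round(mean_for(lam), 4))  # 2.0, the imposed constraint
```

The abstract's point is exactly that the outcome hinges on modeling choices made before this optimization: which configurations `states` enumerates, which constraints replace the mean, and at what scale they are imposed.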
Evaluation of anti-hyperglycemic effect of Actinidia kolomikta (Maxim. et Rupr.) Maxim. root extract.
Hu, Xuansheng; Cheng, Delin; Wang, Linbo; Li, Shuhong; Wang, Yuepeng; Li, Kejuan; Yang, Yingnan; Zhang, Zhenya
2015-05-01
This study aimed to evaluate the anti-hyperglycemic effect of an ethanol extract of Actinidia kolomikta (Maxim. et Rupr.) Maxim. root (AKE). An in vitro evaluation was performed using rat intestinal α-glucosidases (maltase and sucrase), key enzymes linked with type 2 diabetes, and an in vivo evaluation was performed by loading maltose, sucrose, or glucose to normal rats. AKE showed concentration-dependent inhibition of rat intestinal maltase and sucrase, with IC50 values of 1.83 and 1.03 mg/mL, respectively. In normal rats loaded with maltose, sucrose, or glucose, administration of AKE significantly reduced postprandial hyperglycemia, similar to acarbose, which is used as an anti-diabetic drug. High contents of total phenolics (80.49 ± 0.05 mg GAE/g extract) and total flavonoids (430.69 ± 0.91 mg RE/g extract) were detected in AKE. In conclusion, AKE possessed anti-hyperglycemic effects, and the possible mechanisms were associated with its inhibition of α-glucosidase and improvement of insulin release and/or insulin sensitivity. The anti-hyperglycemic activity of AKE may be attributable to its high contents of phenolic and flavonoid compounds. PMID:26051735
D2-brane Chern-Simons theories: F -maximization = a-maximization
NASA Astrophysics Data System (ADS)
Fluder, Martin; Sparks, James
2016-01-01
We study a system of N D2-branes probing a generic Calabi-Yau three-fold singularity in the presence of a non-zero quantized Romans mass n. We argue that the low-energy effective N = 2 Chern-Simons quiver gauge theory flows to a superconformal fixed point in the IR, and construct the dual AdS4 solution in massive IIA supergravity. We compute the free energy F of the gauge theory on S^3 using localization. In the large N limit we find F = c (nN)^{1/3} a^{2/3}, where c is a universal constant and a is the a-function of the "parent" four-dimensional N = 1 theory on N D3-branes probing the same Calabi-Yau singularity. It follows that maximizing F over the space of admissible R-symmetries is equivalent to maximizing a for this class of theories. Moreover, we show that the gauge theory result precisely matches the holographic free energy of the supergravity solution, and we provide a similar matching of the VEV of a BPS Wilson loop operator.
Maximal and sub-maximal functional lifting performance at different platform heights.
Savage, Robert J; Jaffrey, Mark A; Billing, Daniel C; Ham, Daniel J
2015-01-01
Introducing valid physical employment tests requires identifying and developing a small number of practical tests that provide broad coverage of physical performance across the full range of job tasks. This study investigated discrete lifting performance across various platform heights reflective of common military lifting tasks. Sixteen Australian Army personnel performed a discrete lifting assessment to maximal lifting capacity (MLC) and maximal acceptable weight of lift (MAWL) at four platform heights between 1.30 and 1.70 m. There were strong correlations between platform height and normalised lifting performance for MLC (R² = 0.76 ± 0.18, p < 0.05) and MAWL (R² = 0.73 ± 0.21, p < 0.05). The developed relationship allowed prediction of lifting capacity at one platform height from lifting capacity at any of the three other heights, with a standard error of < 4.5 kg for MLC and < 2.0 kg for MAWL. PMID:25420678
Anaerobic capacity: a maximal anaerobic running test versus the maximal accumulated oxygen deficit.
Maxwell, N S; Nimmo, M A
1996-02-01
The present investigation evaluates a maximal anaerobic running test (MART) against the maximal accumulated oxygen deficit (MAOD) for the determination of anaerobic capacity. This involved 18 male students performing two randomly assigned supramaximal runs to exhaustion on separate days. Post warm-up and 1, 3, and 6 min postexercise capillary blood samples were taken during both tests for plasma blood lactate (BLa) determination. In the MART only, blood ammonia (BNH3) concentration was measured, while capillary blood samples were additionally taken after every second sprint for BLa determination. Anaerobic capacity, measured as oxygen equivalents in the MART protocol, averaged 112.2 +/- 5.2 ml·kg⁻¹·min⁻¹. Oxygen deficit, representing the anaerobic capacity in the MAOD test, was an average of 74.6 +/- 7.3 ml·kg⁻¹. There was a significant correlation between the MART and MAOD (r = .83, p < .001). BLa values obtained over time in the two tests showed no significant difference, nor was there any difference in the peak BLa recorded. Peak BNH3 concentration recorded was significantly increased from resting levels at exhaustion during the MART. PMID:8664845
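The MAOD construction follows a standard recipe: fit a linear VO2-versus-speed relationship from submaximal runs, extrapolate it to the supramaximal speed to estimate O2 demand, and subtract the accumulated O2 uptake actually measured. The sketch below uses illustrative numbers, not the study's data:

```python
import numpy as np

# 1) Fit a linear VO2-vs-speed relationship from submaximal runs.
speeds = np.array([8.0, 10.0, 12.0, 14.0])   # km/h, submaximal
vo2 = np.array([28.0, 35.0, 42.0, 49.0])     # ml/kg/min measured
slope, intercept = np.polyfit(speeds, vo2, 1)

# 2) Extrapolate to the supramaximal speed to estimate total O2 demand.
supra_speed = 20.0                           # km/h, supramaximal
demand = slope * supra_speed + intercept     # ml/kg/min

# 3) Deficit = demand * time - accumulated O2 uptake during the run.
t_exhaust = 2.5                              # min to exhaustion
accumulated_uptake = 130.0                   # ml/kg, measured during the run
maod = demand * t_exhaust - accumulated_uptake
print(round(maod, 1))  # ml/kg of O2 equivalents
```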
Improving information technology to maximize fenestration energyefficiency
Arasteh, Dariush; Mitchell, Robin; Kohler, Christian; Huizenga,Charlie; Curcija, Dragan
2001-06-06
Improving software for the analysis of fenestration product energy efficiency and developing related information technology products that aid in optimizing the use of fenestration products for energy efficiency are essential steps toward ensuring that more efficient products are developed and that existing and emerging products are utilized in the applications where they will produce the greatest energy savings. Given the diversity of building types and designs and the climates in the U.S., no one fenestration product or set of properties is optimal for all applications. Future tools and procedures to analyze fenestration product energy efficiency will need to both accurately analyze fenestration product performance under a specific set of conditions and to look at whole fenestration product energy performance over the course of a yearly cycle and in the context of whole buildings. Several steps have already been taken toward creating fenestration product software that will provide the information necessary to determine which details of a fenestration product's design can be improved to have the greatest impact on energy efficiency, what effects changes in fenestration product design will have on the comfort parameters that are important to consumers, and how specific fenestration product designs will perform in specific applications. Much work remains to be done, but the energy savings potential justifies the effort. Information is relatively cheap compared to manufacturing. Information technology has already been responsible for many improvements in the global economy--it can similarly facilitate many improvements in fenestration product energy efficiency.
Augusiak, Remigiusz; Horodecki, Pawel
2006-07-15
It is shown that Smolin four-qubit bound entangled states [J. A. Smolin, Phys. Rev. A 63, 032306 (2001)] can maximally violate a simple two-setting Bell inequality similar to the standard Clauser-Horne-Shimony-Holt (CHSH) inequality. The simplicity of the setting and the robustness of the entanglement make it promising for current experimental technology. On the other hand, the entanglement does not allow for secure key distillation, so neither entanglement nor maximal violation of Bell inequalities directly implies the presence of a quantum secure key. One concludes that the two tasks--reduction of communication complexity and cryptography--are not (even qualitatively) equivalent in a quantum multipartite scenario.
Maximal zero textures in Linear and Inverse seesaw
NASA Astrophysics Data System (ADS)
Sinha, Roopam; Samanta, Rome; Ghosal, Ambar
2016-08-01
We investigate Linear and Inverse seesaw mechanisms with maximal zero textures of the constituent matrices, subject to the assumption of non-zero eigenvalues for the neutrino mass matrix mν and the charged lepton mass matrix me. If we restrict to the minimally parametrized non-singular 'me' (i.e., with the maximum number of zeros), only 6 textures of me are possible. A non-zero determinant of mν dictates six possible textures of the constituent matrices. We ask, within this minimalistic approach, which maximal zero textures are phenomenologically allowed. It turns out that the Inverse seesaw leads to 7 allowed two-zero textures while the Linear seesaw leads to only one. In the Inverse seesaw, we show that 2 is the maximum number of independent zeros that can be inserted into μS to obtain all 7 viable two-zero textures of mν. In the Linear seesaw mechanism, on the other hand, the minimal scheme allows a maximum of 5 zeros to be accommodated in 'm' so as to obtain viable effective neutrino mass matrices (mν). Interestingly, we find that our minimalistic approach in the Inverse seesaw leads to a realization of all the phenomenologically allowed two-zero textures, whereas in the Linear seesaw only one such texture is viable. Our numerical analysis further shows that none of the two-zero textures gives rise to significant CP violation or δCP. Therefore, if δCP = π/2 is established, our minimalistic scheme may still be viable provided we allow a larger number of parameters in 'me'.
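The counting behind "two-zero textures" can be made explicit: a complex symmetric 3×3 Majorana mass matrix has 6 independent entries (its upper triangle), so there are C(6,2) = 15 candidate two-zero textures, of which the analysis finds 7 phenomenologically viable. A minimal enumeration sketch:

```python
from itertools import combinations

# Independent entries of a complex symmetric 3x3 matrix: the upper triangle.
entries = [(i, j) for i in range(3) for j in range(i, 3)]

# Every distinct way to set two independent entries to zero.
two_zero_textures = list(combinations(entries, 2))
print(len(entries), len(two_zero_textures))  # 6 15
```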
Note on maximally entangled Eisert-Lewenstein-Wilkens quantum games
NASA Astrophysics Data System (ADS)
Bolonek-Lasoń, Katarzyna; Kosiński, Piotr
2015-12-01
Maximally entangled Eisert-Lewenstein-Wilkens games are analyzed. For a general class of gates defined in previous papers of the first author, general conditions are derived which allow one to determine the form of the gate leading to maximally entangled games. The construction becomes particularly simple provided one does not distinguish between games differing by a relabeling of strategies. Some examples are presented.
Detrimental Relations of Maximization with Academic and Career Attitudes
ERIC Educational Resources Information Center
Dahling, Jason J.; Thompson, Mindi N.
2013-01-01
Maximization refers to a decision-making style that involves seeking the single best option when making a choice, which is generally dysfunctional because people are limited in their ability to rationally evaluate all options and identify the single best outcome. The vocational consequences of maximization are examined in two samples, college…
Pace's Maxims for Homegrown Library Projects. Coming Full Circle
ERIC Educational Resources Information Center
Pace, Andrew K.
2005-01-01
This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam ("I shall either find a way or make one"); (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…
Minimal Length, Maximal Momentum and the Entropic Force Law
NASA Astrophysics Data System (ADS)
Nozari, Kourosh; Pedram, Pouria; Molkara, M.
2012-04-01
Different candidates of quantum gravity proposal such as string theory, noncommutative geometry, loop quantum gravity and doubly special relativity, all predict the existence of a minimum observable length and/or a maximal momentum which modify the standard Heisenberg uncertainty principle. In this paper, we study the effects of minimal length and maximal momentum on the entropic force law formulated recently by E. Verlinde.
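One commonly used one-dimensional deformation that encodes both a minimal length and a maximal momentum is the Ali-Das-Vagenas form; the paper's exact parametrization may differ, so the expressions below are a representative convention rather than the authors' equations:

```latex
% Deformed commutator with minimal-length and maximal-momentum corrections
% (one common convention; parametrizations vary between papers):
[x, p] = i\hbar \left( 1 - \alpha p + 2\alpha^{2} p^{2} \right),
\qquad
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
  \left( 1 - 2\alpha \langle p \rangle + 4\alpha^{2} \langle p^{2} \rangle \right),
% implying a minimal observable length of order \alpha\hbar and a maximal
% momentum of order 1/\alpha.
```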
Effect of Age and Other Factors on Maximal Heart Rate.
ERIC Educational Resources Information Center
Londeree, Ben R.; Moeschberger, Melvin L.
1982-01-01
To reduce confusion regarding reported effects of age on maximal exercise heart rate, a comprehensive review of the relevant English literature was conducted. Data on maximal heart rate after exercising with a bicycle, a treadmill, and after swimming were analyzed with regard to physical fitness and to age, sex, and racial differences. (Authors/PP)
NASA Astrophysics Data System (ADS)
Richman, Barbara T.
A proposal to pull the National Oceanic and Atmospheric Administration (NOAA) out of the Department of Commerce and make it an independent agency was the subject of a recent congressional hearing. Supporters within the science community and in Congress said that an independent NOAA will benefit by being more visible and by not being tied to a cabinet-level department whose main concerns lie elsewhere. The proposal's critics, however, cautioned that making NOAA independent could make it even more vulnerable to the budget axe and would sever the agency's direct access to the President.The separation of NOAA from Commerce was contained in a June 1 proposal by President Ronald Reagan that also called for all federal trade functions under the Department of Commerce to be reorganized into a new Department of International Trade and Industry (DITI).
Independent technical review, handbook
Not Available
1994-02-01
Purpose: To provide an independent engineering review of the major projects being funded by the Department of Energy, Office of Environmental Restoration and Waste Management. The review will address whether engineering practice is sufficiently developed for a major project to be executed without significant technical problems, focusing on questions related to: (1) adequacy of development of the technical base of understanding; (2) status of development and availability of technology among the various alternatives; (3) status and availability of the industrial infrastructure to support project design, equipment fabrication, facility construction, and process and program/project operation; (4) adequacy of the design effort to provide a sound foundation to support execution of the project; and (5) ability of the organization to fully integrate the system and to direct, manage, and control the execution of a complex major project.
Almosnino, Sivan; Dvir, Zeevi; Bardana, Davide D
2016-04-01
The purpose of this investigation was to establish decision rules for determining maximal effort production during isokinetic strength testing of unilateral anterior cruciate ligament-deficient patients, based on the degree of strength curve consistency within a set. Thirty-three participants performed six bilateral knee extension and flexion exertions at maximal effort and at 80% of perceived maximum at testing velocities of 60 and 180°s(-1). Within-set consistency was quantified by computing the variance ratio across strength curves. Tolerance interval-based cutoff scores covering 99% of the population were calculated for declaring efforts as maximal or not at confidence levels of 90%, 95%, and 99%. For the injured knee, sensitivity percentages for both testing velocities ranged between 9.1% and 27.2%, while specificity percentages ranged between 84.8% and 100%. For the non-injured knee, sensitivity values for both testing velocities ranged between 21.2% and 45.0%, while specificity percentages ranged between 97.0% and 100%. The developed decision rules therefore do not effectively discriminate, on an individual-patient basis, between maximal and non-maximal isokinetic knee musculature efforts. Further research is needed to develop methods for ascertaining maximal effort production in this patient population during knee muscle strength testing. PMID:27043046
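The variance-ratio computation lends itself to a short sketch. The abstract does not give the formula, so the definition below (mean between-repetition variance per joint angle, normalized by the pooled variance of all samples) is an assumption for illustration only, as are all the numbers:

```python
import numpy as np

def variance_ratio(curves):
    """Within-set consistency of strength curves.

    curves: (n_reps, n_angles) array of torque values.
    Assumed definition (not taken from the paper): mean across joint
    angles of the between-repetition variance, normalized by the pooled
    variance of all samples. Lower values -> more consistent curves.
    """
    curves = np.asarray(curves, dtype=float)
    within = curves.var(axis=0, ddof=1).mean()  # variance across reps, per angle
    pooled = curves.var(ddof=1)                 # variance of all samples
    return within / pooled

# Consistent (near-identical) strength curves vs. inconsistent ones
rng = np.random.default_rng(0)
base = 100 + 30 * np.sin(np.linspace(0.3, 1.2, 8))      # idealized torque curve
consistent = base + rng.normal(0, 1, size=(6, 8))
inconsistent = base + rng.normal(0, 15, size=(6, 8))
print(variance_ratio(consistent) < variance_ratio(inconsistent))  # True
```

A cutoff on this ratio, derived from tolerance intervals over a reference sample, is then what declares an effort maximal or not.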
Supporting independent inventors
Bernard, M.J. III; Whalley, P. (Loyola Univ., Chicago, IL. Dept. of Sociology)
1989-01-01
Independent inventors contribute products to the marketplace despite the well-financed brain trusts at corporate, university, and federal R and D laboratories. But given the environment in which the basement/garage inventor labors, transferring a worthwhile invention into a commercial product is quite difficult. There is a growing effort by many state and local agencies and organizations to improve the inventor's working environment and begin to routinize the process of developing the ideas and inventions of independent inventors into commercial products. 4 refs.
Maximal entanglement versus entropy for mixed quantum states
Wei, T.-C.; Goldbart, Paul M.; Kwiat, Paul G.; Nemoto, Kae; Munro, William J.; Verstraete, Frank
2003-02-01
Maximally entangled mixed states are those states that, for a given mixedness, achieve the greatest possible entanglement. For two-qubit systems and for various combinations of entanglement and mixedness measures, the form of the corresponding maximally entangled mixed states is determined primarily analytically. As measures of entanglement, we consider entanglement of formation, relative entropy of entanglement, and negativity; as measures of mixedness, we consider linear and von Neumann entropies. We show that the forms of the maximally entangled mixed states can vary with the combination of (entanglement and mixedness) measures chosen. Moreover, for certain combinations, the forms of the maximally entangled mixed states can change discontinuously at a specific value of the entropy. Along the way, we determine the states that, for a given value of entropy, achieve maximal violation of Bell's inequality.
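The entanglement-versus-mixedness trade-off can be illustrated numerically. The sketch below evaluates negativity and (normalized) linear entropy for the one-parameter Werner family of two-qubit states; the Werner states are a convenient example family, not necessarily the maximally entangled mixed states for any of the measure pairs studied in the paper:

```python
import numpy as np

phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)   # |Phi+> = (|00> + |11>)/sqrt(2)
P = np.outer(phi, phi)

def werner(p):
    """Werner family rho(p) = p |Phi+><Phi+| + (1-p) I/4."""
    return p * P + (1 - p) * np.eye(4) / 4

def negativity(rho):
    """Sum of |negative eigenvalues| of the partial transpose."""
    # partial transpose on the second qubit: swap its bra/ket indices
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    ev = np.linalg.eigvalsh(r)
    return float(-ev[ev < 0].sum())

def linear_entropy(rho):
    """Linear entropy, normalized so the maximally mixed state gives 1."""
    return float(4 / 3 * (1 - np.trace(rho @ rho).real))

print(negativity(werner(1.0)))      # pure Bell state: negativity ~ 0.5
print(linear_entropy(werner(0.0)))  # maximally mixed state: ~ 1.0
```

Sweeping p from 1/3 to 1 traces one curve in the entanglement-mixedness plane, against which candidate maximally entangled mixed states can be compared.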
System performance evaluation of the MAXIM concept with integrated modeling
NASA Astrophysics Data System (ADS)
Lieber, Michael D.; Gallagher, Dennis J.; Cash, Webster C.; Shipley, Ann F.
2003-03-01
The MAXIM (Micro-Arcsecond X-Ray Imaging Mission) and the MAXIM Pathfinder, a technology precursor mission, are considered by NASA to be 'visionary missions' in space astronomy. The current MAXIM mission design would fly multiple spacecraft in formation, each carrying precision optics, to direct x-rays from an astronomical source to collector and imaging spacecraft. The mission architecture is complex and poses technical challenges in formation flying, external metrology, and target acquisition. To further develop the concept, an integrated model (IM) of MAXIM and the MAXIM Pathfinder was developed. Individual subsystem models from the disciplines of structural dynamics, optics, controls, signal processing, detector physics, and disturbance modeling are seamlessly integrated into one cohesive model to efficiently support system-level trades and analysis. The optical system design is a unique combination of optical concepts, and results from the IM were therefore extensively compared with the ASAP optical software.
A comparison between laddermill and treadmill maximal oxygen consumption.
Montoliu, M A; Gonzalez, V; Rodriguez, B; Palenciano, L
1997-01-01
Maximal O2 consumption (VO2max) is an index of the capacity for work over an 8 h workshift. Running on a treadmill is the most common method of eliciting it, because it is an easy, natural exercise, and also, by engaging large muscle masses, larger values are obtained than by other exercises. It has been claimed, however, that climbing a laddermill elicits a still higher VO2max, probably because more muscle mass is apparently engaged (legs + arms) than on the treadmill (legs only). However, no data in support of this claim have been presented. To see if differences exist, we conducted progressive tests to exhaustion on 44 active coal miners, on a laddermill (slant angle 75 degrees, vertical separation of rungs 25 cm) and on a treadmill set at a 5% gradient. The subjects' mean (range) age was 37.4 (31-47) years, height 174.3 (164-187) cm, body mass 82.2 (64-103) kg. Mean (range) VO2max on the laddermill was 2.83 (2.31-3.64) l x min(-1) and 2.98 (2.03-4.22) l x min(-1) on the treadmill (P < 0.01, Student's paired t-test). Mean (range) of maximal heart rate f(cmax) (beats x min(-1)) on the laddermill and on the treadmill were 181.0 (161-194) and 181.3 (162-195), respectively (NS). Laddermill:treadmill VO2max was negatively related to both treadmill VO2max x kg body mass(-1) (r = -0.410, P < 0.01) and body mass (r = -0.409, P < 0.01). Laddermill:treadmill f(cmax) was negatively related to treadmill VO2max x kg body mass(-1) (r = -0.367, P < 0.02) but not to body mass (r = -0.166, P = 0.28). Our data would suggest that for fitter subjects (VO2max > 2.6 l x min(-1) or VO2max x kg body mass(-1) > 30 ml x min(-1) x kg(-1)) and/or higher body masses (> 70 kg), exercise on the laddermill is not dynamic enough to elicit a VO2max as high as on the treadmill. For such subjects, treadmill VO2max would overestimate exercise capacity for jobs requiring a fair amount of climbing ladders or ladder-like structures. PMID:9404869
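The paired comparison reported above rests on Student's paired t statistic, which is simple to compute directly. The numbers below are toy values, not the study's data:

```python
import math

def paired_t(x, y):
    """Paired Student's t statistic and degrees of freedom for matched samples."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Toy per-subject VO2max values (l x min(-1)) standing in for laddermill vs. treadmill
ladder = [2.7, 2.9, 2.5, 3.1, 2.8]
tread = [2.9, 3.0, 2.6, 3.3, 3.0]
t, dof = paired_t(ladder, tread)   # negative t: laddermill consistently lower
```

With these toy values t is about -6.53 on 4 degrees of freedom; the study's actual test was run on the 44 miners' paired measurements.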
Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H.; Medeiros, Felipe A.; Zangwill, Linda M.; Weinreb, Robert N.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Bowd, Christopher
2016-01-01
Purpose: To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM progression of patterns (GEM-POP) and VIM-POP), and to compare GEM-POP and VIM-POP with other methods. Methods: GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD) and visual field index (VFI). Results: Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. Conclusions: GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Translational Relevance: Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning. PMID:27152250
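The GEM component rests on expectation-maximization for a Gaussian mixture. As a minimal illustration (the paper's models are multivariate mixtures over visual-field patterns, far richer than this), here is the E/M alternation for a one-dimensional, two-component mixture in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1-D data from two well-separated Gaussians
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

pi = np.array([0.5, 0.5])      # mixing weights
mu = np.array([-1.0, 1.0])     # initial means
sig = np.array([1.0, 1.0])     # initial standard deviations
for _ in range(50):
    # E-step: responsibilities of each component for each point
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.sort(mu))  # components recovered near -2 and 3
```

The paper's GEM applies the same alternation to clusters of visual-field defect patterns and then decomposes the clusters into independent axes.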
Postcard from Independence, Mo.
ERIC Educational Resources Information Center
Archer, Jeff
2004-01-01
This article reports results showing that the Independence, Missouri school district failed to meet almost every one of its improvement goals under the No Child Left Behind Act. The state accreditation system stresses improvement over past scores, while the federal law demands specified amounts of annual progress toward the ultimate goal of 100…
ERIC Educational Resources Information Center
Roha, Thomas Arden
1999-01-01
Foundations affiliated with public higher education institutions can avoid having to open records for public scrutiny, by having independent boards of directors, occupying leased office space or paying market value for university space, using only foundation personnel, retaining legal counsel, being forthcoming with information and use of public…
ERIC Educational Resources Information Center
James, H. Thomas
Independent schools that are of viable size, well managed, and strategically located to meet competition will survive and prosper past the current financial crisis. We live in a complex technological society with insatiable demands for knowledgeable people to keep it running. The future will be marked by the orderly selection of qualified people,…
Caring about Independent Lives
ERIC Educational Resources Information Center
Christensen, Karen
2010-01-01
With the rhetoric of independence, new cash for care systems were introduced in many developed welfare states at the end of the 20th century. These systems allow local authorities to pay people who are eligible for community care services directly, to enable them to employ their own careworkers. Despite the obvious importance of the careworker's…
Independence, Disengagement, and Discipline
ERIC Educational Resources Information Center
Rubin, Ron
2012-01-01
School disengagement is linked to a lack of opportunities for students to fulfill their needs for independence and self-determination. Young people have little say about what, when, where, and how they will learn, the criteria used to assess their success, and the content of school and classroom rules. Traditional behavior management discourages…
Maximal stochastic transport in the Lorenz equations
NASA Astrophysics Data System (ADS)
Agarwal, Sahil; Wettlaufer, J. S.
2016-01-01
We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh-Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
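The stochastic system in question can be integrated with a simple Euler-Maruyama scheme. The sketch below adds additive noise to the standard Lorenz equations; the parameter values, including the noise amplitude eps, are illustrative rather than the paper's:

```python
import numpy as np

def stochastic_lorenz(T=20.0, dt=1e-3, eps=0.5, seed=0):
    """Euler-Maruyama integration of the Lorenz equations with additive noise."""
    rng = np.random.default_rng(seed)
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classic chaotic parameters
    n = int(T / dt)
    xyz = np.empty((n, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), 3)   # Wiener increments
        x += sigma * (y - x) * dt + eps * dW[0]
        y += (x * (rho - z) - y) * dt + eps * dW[1]
        z += (x * y - beta * z) * dt + eps * dW[2]
        xyz[i] = x, y, z
    return xyz

traj = stochastic_lorenz()
```

Averaging a transport-like quantity over many such realizations, and varying eps, is the kind of ensemble computation against which the stochastic upper bounds are compared.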
On the statistical analysis of maximal magnitude
NASA Astrophysics Data System (ADS)
Holschneider, M.; Zöller, G.; Hainzl, S.
2012-04-01
We show how the maximum expected magnitude within a time horizon [0,T] may be estimated from earthquake catalog data within the context of truncated Gutenberg-Richter statistics. We present the results in a frequentist and in a Bayesian setting. Instead of deriving point estimations of this parameter and reporting its performance in terms of expectation value and variance, we focus on the calculation of confidence intervals based on an imposed level of confidence α. We present an estimate of the maximum magnitude within an observational time interval T in the future, given a complete earthquake catalog for a time period Tc in the past and optionally some paleoseismic events. We argue that from a statistical point of view the maximum magnitude in a time window is a reasonable parameter for probabilistic seismic hazard assessment, while the commonly used maximum possible magnitude for all times almost certainly does not allow the calculation of useful (i.e. non-trivial) confidence intervals. In the context of an unbounded GR law we show that the Jeffreys invariant prior distribution yields normalizable posteriors. The predictive distribution based on this prior is explicitly computed.
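The frequentist flavor of this argument is easy to sketch for the unbounded GR case: events with magnitude ≥ m arrive as a Poisson process with rate lam0 · 10^(-b(m - mc)), so P(M_T < m) = exp(-lam0 · T · 10^(-b(m - mc))), and inverting gives an upper confidence bound on the maximum magnitude M_T in a future window of length T. All parameter values below are illustrative:

```python
import math

def max_mag_bound(lam0, b, mc, T, alpha):
    """Magnitude m with P(M_T < m) = alpha, i.e. an alpha-level upper bound.

    lam0: rate of events with magnitude >= mc (per unit time)
    b:    Gutenberg-Richter b-value
    mc:   magnitude of completeness
    T:    length of the future time window
    """
    # Solve exp(-lam0 * T * 10**(-b*(m - mc))) = alpha for m
    return mc + math.log10(lam0 * T / (-math.log(alpha))) / b

# e.g. 10 events/yr above M4, b = 1, 50-year horizon, 95% level
m95 = max_mag_bound(lam0=10.0, b=1.0, mc=4.0, T=50.0, alpha=0.95)
```

Note how the bound grows without limit as alpha approaches 1, which is the practical face of the abstract's point that the all-time maximum magnitude admits no useful confidence interval under an unbounded law.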
Kettlebell swing training improves maximal and explosive strength.
Lake, Jason P; Lauder, Mike A
2012-08-01
The aim of this study was to establish the effect that kettlebell swing (KB) training had on measures of maximum (half squat (HS) 1 repetition maximum [1RM]) and explosive (vertical jump height, VJH) strength. To put these effects into context, they were compared with the effects of jump squat power training (JS; known to improve 1RM and VJH). Twenty-one healthy men (age = 18-27 years, body mass = 72.58 ± 12.87 kg) who could perform a proficient HS were tested for their HS 1RM and VJH pre- and post-training. Subjects were randomly assigned to either a KB or JS training group after HS 1RM testing and trained twice a week. The KB group performed 12-minute bouts of KB exercise (12 rounds of 30-second exercise, 30-second rest, with 12 kg if body mass <70 kg or 16 kg if >70 kg). The JS group performed at least 4 sets of 3 JS with the load that maximized peak power. Training volume was altered to accommodate different training loads and ranged from 4 sets of 3 with the heaviest load (60% 1RM) to 8 sets of 6 with the lightest load (0% 1RM). Maximum strength improved by 9.8% (HS 1RM: 165-181% body mass, p < 0.001) after the training intervention, and post hoc analysis revealed that there was no significant difference between the effect of KB and JS training (p = 0.56). Explosive strength improved by 19.8% (VJH: 20.6-24.3 cm) after the training intervention, and post hoc analysis revealed that the type of training did not significantly affect this either (p = 0.38). The results of this study clearly demonstrate that 6 weeks of twice-weekly KB training provides a stimulus that is sufficient to increase both maximum and explosive strength, offering a useful alternative to strength and conditioning professionals seeking variety for their athletes. PMID:22580981
Muscle Damage following Maximal Eccentric Knee Extensions in Males and Females
2016-01-01
Aim: To investigate whether there is a sex difference in exercise-induced muscle damage. Materials and Method: Vastus Lateralis and patella tendon properties were measured in males and females using ultrasonography. During maximal voluntary eccentric knee extensions (12 reps x 6 sets), Vastus Lateralis fascicle lengthening and maximal voluntary eccentric knee extension torque were recorded every 10° of knee joint angle (20–90°). Isometric torque, Creatine Kinase and muscle soreness were measured pre, post, 48, 96 and 168 hours post damage as markers of exercise-induced muscle damage. Results: Patella tendon stiffness and Vastus Lateralis fascicle lengthening were significantly higher in males compared to females (p<0.05). There was no sex difference in isometric torque loss and muscle soreness post exercise-induced muscle damage (p>0.05). Creatine Kinase levels post exercise-induced muscle damage were higher in males compared to females (p<0.05), and remained higher when maximal voluntary eccentric knee extension torque, relative to estimated quadriceps anatomical cross-sectional area, was taken as a covariate (p<0.05). Conclusion: Based on isometric torque loss, there is no sex difference in exercise-induced muscle damage. The higher Creatine Kinase in males could not be explained by differences in maximal voluntary eccentric knee extension torque, Vastus Lateralis fascicle lengthening and patella tendon stiffness. Further research is required to understand the significant sex differences in Creatine Kinase levels following exercise-induced muscle damage. PMID:26986066
Preschoolers can recognize violations of the Gricean maxims
Eskritt, Michelle; Whalen, Juanita; Lee, Kang
2010-01-01
Grice (Syntax and semantics: Speech acts, 1975, pp. 41–58, Vol. 3) proposed that conversation is guided by a spirit of cooperation that involves adherence to several conversational maxims. Three types of maxims were explored in the current study: 1) Quality, to be truthful; 2) Relation, to say only what is relevant to a conversation; and 3) Quantity, to provide as much information as required. Three- to five-year-olds were tested to determine the age at which an awareness of these Gricean maxims emerges. Children requested the help of one of two puppets in finding a hidden sticker. One puppet always adhered to the maxim being tested, while the other always violated it. Consistently choosing the puppet that adhered to the maxim was considered indicative of an understanding of that maxim. The results indicate that children were initially only successful in the Relation condition. While in general, children performed better at first in the Quantity condition compared with the Quality condition, 3-year-olds never performed above chance in the Quantity condition. The findings of the present study indicate that preschool children are sensitive to the violation of the Relation, Quality, and Quantity maxims at least under some conditions. PMID:20953298
A taxonomic approach to communicating maxims in interstellar messages
NASA Astrophysics Data System (ADS)
Vakoch, Douglas A.
2011-02-01
Previous discussions of interstellar messages that could be sent to extraterrestrial intelligence have focused on descriptions of mathematics, science, and aspects of human culture and civilization. Although some of these depictions of humanity have implicitly referred to our aspirations, this has not been clearly separated from descriptions of our actions and attitudes as they are. In this paper, a methodology is developed for constructing interstellar messages that convey information about our aspirations by developing a taxonomy of maxims that provide guidance for living. Sixty-six maxims providing guidance for living were judged for degree of similarity to each other. Quantitative measures of the degree of similarity between all pairs of maxims were derived by aggregating similarity judgments across individual participants. These composite similarity ratings were subjected to a cluster analysis, which yielded a taxonomy that highlights perceived interrelationships between individual maxims and identifies major classes of maxims. Such maxims can be encoded in interstellar messages through three-dimensional animation sequences conveying narratives that highlight interactions between individuals. In addition, verbal descriptions of these interactions in Basic English can be combined with these pictorial sequences to increase intelligibility. Online projects that collect messages, such as the SETI Institute's Earth Speaks and La Tierra Habla, can be used to solicit maxims from participants around the world.
The independent medical examination.
Ameis, Arthur; Zasler, Nathan D
2002-05-01
The physiatrist, owing to expertise in impairment and disability analysis, is able to offer the medicolegal process considerable assistance. This chapter describes the scope and process of the independent medical examination (IME) and provides an overview of its component parts. Practical guidelines are provided for performing a physiatric IME of professional standard, and for serving as an impartial, expert witness. Caveats are described regarding testifying and medicolegal ethical issues along with practice management advice. PMID:12122847
Agent independent task planning
NASA Technical Reports Server (NTRS)
Davis, William S.
1990-01-01
Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.
Opportunities to maximize value with integrated palliative care
Bergman, Jonathan; Laviana, Aaron A
2016-01-01
Palliative care involves aggressively addressing and treating psychosocial, spiritual, religious, and family concerns, as well as considering the overall psychosocial structures supporting a patient. The concept of integrated palliative care removes the either/or decision a patient needs to make: they need not decide if they want either aggressive chemotherapy from their oncologist or symptom-guided palliative care but rather they can be comanaged by several clinicians, including a palliative care clinician, to maximize the benefit to them. One common misconception about palliative care, and supportive care in general, is that it amounts to “doing nothing” or “giving up” on aggressive treatments for patients. Rather, palliative care involves very aggressive care, targeted at patient symptoms, quality-of-life, psychosocial needs, family needs, and others. Integrating palliative care into the care plan for individuals with advanced diseases does not necessarily imply that a patient must forego other treatment options, including those aimed at a cure, prolonging of life, or palliation. Implementing interventions to understand patient preferences and to ensure those preferences are addressed, including preferences related to palliative and supportive care, is vital in improving the patient-centeredness and value of surgical care. Given our aging population and the disproportionate cost of end-of-life care, this holds great hope in bending the cost curve of health care spending, ensuring patient-centeredness, and improving quality and value of care. Level 1 evidence supports this model, and it has been achieved in several settings; the next necessary step is to disseminate such models more broadly. PMID:27226721
Maximizing information exchange between complex networks
NASA Astrophysics Data System (ADS)
West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo
2008-10-01
Science is not merely the smooth progressive interaction of hypothesis, experiment and theory, although it sometimes has that form. More realistically the scientific study of any given complex phenomenon generates a number of explanations, from a variety of perspectives, that eventually requires synthesis to achieve a deep level of insight and understanding. One such synthesis has created the field of out-of-equilibrium statistical physics as applied to the understanding of complex dynamic networks. Over the past forty years the concept of complexity has undergone a metamorphosis. Complexity was originally seen as a consequence of memory in individual particle trajectories, in full agreement with a Hamiltonian picture of microscopic dynamics and, in principle, macroscopic dynamics could be derived from the microscopic Hamiltonian picture. The main difficulty in deriving macroscopic dynamics from microscopic dynamics is the need to take into account the actions of a very large number of components. The existence of events such as abrupt jumps, considered by the conventional continuous time random walk approach to describing complexity was never perceived as conflicting with the Hamiltonian view. Herein we review many of the reasons why this traditional Hamiltonian view of complexity is unsatisfactory. We show that as a result of technological advances, which make the observation of single elementary events possible, the definition of complexity has shifted from the conventional memory concept towards the action of non-Poisson renewal events. We show that the observation of crucial processes, such as the intermittent fluorescence of blinking quantum dots as well as the brain’s response to music, as monitored by a set of electrodes attached to the scalp, has forced investigators to go beyond the traditional concept of complexity and to establish closer contact with the nascent field of complex networks. Complex networks form one of the most challenging areas of
Independence among People with Disabilities: II. Personal Independence Profile.
ERIC Educational Resources Information Center
Nosek, Margaret A.; And Others
1992-01-01
Developed Personal Independence Profile (PIP) as an instrument to measure aspects of independence beyond physical and cognitive functioning in people with diverse disabilities. PIP was tested for reliability and validity with 185 subjects from 10 independent living centers. Findings suggest that the Personal Independence Profile measures the…
Maximizing Your Investment in Building Automation System Technology.
ERIC Educational Resources Information Center
Darnell, Charles
2001-01-01
Discusses how organizational issues and system standardization can be important factors that determine an institution's ability to fully exploit contemporary building automation systems (BAS). Further presented is management strategy for maximizing BAS investments. (GR)
Maximal slicing of D-dimensional spherically symmetric vacuum spacetime
Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru
2009-10-15
We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D ≥ 5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D = 5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.
Carnot cycle at finite power: attainability of maximal efficiency.
Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G
2013-08-01
We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows one to construct engines that can extract, at finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at large power. PMID:23952379
Interpreting Negative Results in an Angle Maximization Problem.
ERIC Educational Resources Information Center
Duncan, David R.; Litwiller, Bonnie H.
1995-01-01
Presents a situation in which differential calculus is used with inverse trigonometric tangent functions to maximize an angle measure. A negative distance measure ultimately results, requiring a reconsideration of assumptions inherent in the initial figure. (Author/MKR)
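The abstract does not reproduce the problem itself, so the sketch below works a classic angle-maximization instance of the same flavor: a picture with bottom edge a and top edge b above eye level, viewed from distance x, subtends theta(x) = arctan(b/x) - arctan(a/x), and setting the derivative to zero gives x* = sqrt(a*b); naive algebra also produces the negative root -sqrt(a*b), the kind of negative result whose interpretation the article discusses:

```python
import math

def theta(x, a=1.0, b=4.0):
    """Viewing angle subtended by the segment [a, b] seen from distance x."""
    return math.atan(b / x) - math.atan(a / x)

a, b = 1.0, 4.0
x_star = math.sqrt(a * b)   # analytic optimum from d(theta)/dx = 0; here x* = 2
# numeric confirmation: scan a fine grid of positive distances
grid = [i / 1000 for i in range(1, 10000)]
x_num = max(grid, key=theta)
```

The negative root falls outside the geometric domain (a distance cannot be negative), which is exactly the reconsideration of assumptions the article walks students through.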
Survival associated pathway identification with group Lp penalized global AUC maximization
2010-01-01
It has been demonstrated that genes in a cell do not act independently. They interact with one another to complete certain biological processes or to implement certain molecular functions. How to incorporate biological pathways or functional groups into the model and identify survival-associated gene pathways is still a challenging problem. In this paper, we propose a novel iterative gradient based method for survival analysis with group Lp penalized global AUC summary maximization. Unlike LASSO, the Lp (p < 1) penalty (with its special implementation entitled adaptive LASSO) is asymptotically unbiased and has oracle properties [1]. We first extend Lp for individual gene identification to a group Lp penalty for pathway selection, and then develop a novel iterative gradient algorithm for penalized global AUC summary maximization (IGGAUCS). This method incorporates the genetic pathways into global AUC summary maximization and identifies survival-associated pathways instead of individual genes. The tuning parameters are determined using 10-fold cross validation with training data only. The prediction performance is evaluated using test data. We apply the proposed method to survival outcome analysis with gene expression profiles and identify multiple pathways simultaneously. Experimental results with simulation and gene expression data demonstrate that the proposed procedures can be used for identifying important biological pathways that are related to survival phenotype and for building a parsimonious model for predicting the survival times. PMID:20712896
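The core idea of gradient-based AUC maximization can be sketched with a sigmoid surrogate for the pairwise indicator. The code below omits the paper's group Lp penalty and the censoring machinery of survival data, and uses synthetic two-class data; it only illustrates that the smoothed AUC is differentiable in the score weights:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 5
Xpos = rng.normal(0.5, 1, (n, d))   # e.g. longer survivors
Xneg = rng.normal(-0.5, 1, (n, d))  # e.g. shorter survivors

def auc(w):
    """Empirical AUC of the linear score s(x) = w.x: fraction of correctly
    ordered (positive, negative) pairs."""
    diff = Xpos @ w - (Xneg @ w)[:, None]   # all pairwise score gaps
    return (diff > 0).mean()

# Gradient ascent on the sigmoid-smoothed AUC surrogate
w = np.zeros(d)
for _ in range(200):
    diff = Xpos @ w - (Xneg @ w)[:, None]          # (n_neg, n_pos) score gaps
    sig = 1 / (1 + np.exp(-diff))
    grad_pairs = sig * (1 - sig)                   # derivative of sigmoid wrt gap
    grad = (grad_pairs[:, :, None]
            * (Xpos[None, :, :] - Xneg[:, None, :])).mean(axis=(0, 1))
    w += 0.5 * grad
```

The paper's IGGAUCS adds a group Lp term over pathway-defined blocks of w to this gradient, driving whole pathways in or out of the model.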
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
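The "compute a spanning chordal subgraph, then augment" idea can be illustrated naively. The sketch below seeds with a BFS spanning forest (trees are chordal), then repeatedly adds any edge that preserves chordality, testing chordality via maximum cardinality search; it is a sequential, quadratic-time simplification, not the paper's parallel algorithm:

```python
from collections import deque

def is_chordal(n, edges):
    """Chordality test: the reverse of a maximum cardinality search order
    is a perfect elimination ordering iff the graph is chordal."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    weight, seen, order = [0] * n, [False] * n, []
    for _ in range(n):   # MCS: always visit the vertex with most visited neighbors
        v = max((u for u in range(n) if not seen[u]), key=lambda u: weight[u])
        seen[v] = True
        order.append(v)
        for w in adj[v]:
            if not seen[w]:
                weight[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    for v in order:      # verify the elimination property
        earlier = [w for w in adj[v] if pos[w] < pos[v]]
        if earlier:
            u = max(earlier, key=lambda w: pos[w])
            if any(w != u and w not in adj[u] for w in earlier):
                return False
    return True

def maximal_chordal_subgraph(n, edges):
    """Seed with a BFS spanning forest, then greedily augment with any edge
    that keeps the subgraph chordal, looping until no edge can be added."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    kept, seen = set(), [False] * n
    for s in range(n):                      # BFS forest: trees are chordal
        if seen[s]:
            continue
        seen[s] = True
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    kept.add((min(u, w), max(u, w)))
                    q.append(w)
    rest = {(min(u, v), max(u, v)) for u, v in edges} - kept
    changed = True
    while changed:                          # later additions may unlock edges
        changed = False                     # rejected in an earlier pass
        for e in sorted(rest - kept):
            if is_chordal(n, list(kept | {e})):
                kept.add(e)
                changed = True
    return kept

# 4-cycle: any maximal chordal subgraph must drop one edge
c4 = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(len(maximal_chordal_subgraph(4, c4)))  # 3
```

The paper's contribution is precisely that this augmentation step can be restructured to avoid the sequential vertex ordering and run in parallel while still terminating with a maximal chordal subgraph.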
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-01-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph. PMID:25767331
Maximal oxygen uptake during exercise using trained or untrained muscles.
Moreira-da-Costa, M; Russo, A K; Piçarro, I C; Silva, A C; Leite-de-Barros-Neto, T; Tarasantchi, J; Barbosa, A S
1984-01-01
Maximal oxygen uptake, VO2 max, was determined for cyclists, long-distance runners and non-athletes during uphill running (treadmill) and cycling (cycloergometer) to compare trained and untrained muscles. Blood lactate, maximal heart rate and maximal ventilation during work were also measured. VO2 max was higher for runners and non-athletes during exercise on the treadmill and higher for cyclists during exercise on the cycloergometer. For runners and non-athletes, maximal heart rate accompanied the increase in VO2 max, whereas similar values were obtained for cyclists on both ergometers. Maximal ventilation during work accompanied the difference in VO2 max in both groups of athletes but among non-athletes it was similar during exercise on both the cycloergometer and the treadmill. Blood lactate was similar during exercise on both ergometers for all groups. These results suggest that the quantitative effects of training on cardiovascular and respiratory functions may only be properly evaluated by using an ergometer which requires an activity similar to that usually performed by the subjects. Cycle riding may possibly induce significant and specific alterations in the muscles involved in the exercise, thus increasing peripheral O2 uptake even after stabilization of maximal cardiac output, whereas running may well induce an improvement of all factors which are responsible for aerobic work power. PMID:6518340
Evolution of Shanghai Stock Market Based on Maximal Spanning Trees
NASA Astrophysics Data System (ADS)
Yang, Chunxia; Shen, Ying; Xia, Bingying
2013-01-01
In this paper, using a moving window to scan through every stock price time series over a period from 2 January 2001 to 11 March 2011 and mutual information to measure the statistical interdependence between stock prices, we construct a corresponding weighted network for 501 Shanghai stocks in every given window. Next, we extract its maximal spanning tree and study the structure variation of the Shanghai stock market by analyzing the average path length, the influence of the center node and the p-value for every maximal spanning tree. A further analysis of the structure properties of maximal spanning trees over different periods of the Shanghai stock market is carried out. All the obtained results indicate that the periods around 8 August 2005, 17 October 2007 and 25 December 2008 are turning points of the Shanghai stock market. At these turning points, the topology of the maximal spanning tree changes markedly: the degree of separation between nodes increases; the structure becomes looser; the influence of the center node gets smaller; and the degree distribution of the maximal spanning tree is no longer a power-law distribution. Lastly, we analyze the variations of the single-step and multi-step survival ratios for all maximal spanning trees and find that pairs of stocks are closely bonded and hard to break in the short term, but that no pair of stocks remains closely bonded for a long time.
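The extraction step described above — keeping only the strongest links that connect all stocks without forming cycles — is ordinary maximum-weight spanning tree construction. A minimal Kruskal-style sketch (edge weights stand in for the precomputed mutual-information scores; names are ours):

```python
def maximal_spanning_tree(nodes, edges):
    """Kruskal's algorithm on descending weights: greedily keep the
    heaviest edge that joins two components, using union-find."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree = []
    for a, b, w in sorted(edges, key=lambda e: -e[2]):  # heaviest first
        ra, rb = find(a), find(b)
        if ra != rb:       # edge connects two components: keep it
            parent[ra] = rb
            tree.append((a, b, w))
    return tree
```

For a triangle with weights 3, 2, 1 this keeps the two heaviest edges, exactly as the minimum-spanning-tree version would keep the two lightest.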
Knudsen, Steen; Rankinen, Tuomo; Koch, Lauren G.; Sarzynski, Mark; Jensen, Thomas; Keller, Pernille; Scheele, Camilla; Vollaard, Niels B. J.; Nielsen, Søren; Åkerström, Thorbjörn; MacDougald, Ormond A.; Jansson, Eva; Greenhaff, Paul L.; Tarnopolsky, Mark A.; van Loon, Luc J. C.; Pedersen, Bente K.; Sundberg, Carl Johan; Wahlestedt, Claes; Britton, Steven L.; Bouchard, Claude
2010-01-01
A low maximal oxygen consumption (V̇o2max) is a strong risk factor for premature mortality. Supervised endurance exercise training increases V̇o2max with a very wide range of effectiveness in humans. Discovering the DNA variants that contribute to this heterogeneity typically requires substantial sample sizes. In the present study, we first use RNA expression profiling to produce a molecular classifier that predicts V̇o2max training response. We then hypothesized that the classifier genes would harbor DNA variants that contributed to the heterogeneous V̇o2max response. Two independent preintervention RNA expression data sets were generated (n = 41 gene chips) from subjects that underwent supervised endurance training: one identified and the second blindly validated an RNA expression signature that predicted change in V̇o2max (“predictor” genes). The HERITAGE Family Study (n = 473) was used for genotyping. We discovered a 29-RNA signature that predicted V̇o2max training response on a continuous scale; these genes contained ∼6 new single-nucleotide polymorphisms associated with gains in V̇o2max in the HERITAGE Family Study. Three of four novel candidate genes from the HERITAGE Family Study were confirmed as RNA predictor genes (i.e., “reciprocal” RNA validation of a quantitative trait locus genotype), enhancing the performance of the 29-RNA-based predictor. Notably, RNA abundance for the predictor genes was unchanged by exercise training, supporting the idea that expression was preset by genetic variation. Regression analysis yielded a model where 11 single-nucleotide polymorphisms explained 23% of the variance in gains in V̇o2max, corresponding to ∼50% of the estimated genetic variance for V̇o2max. In conclusion, combining RNA profiling with single-gene DNA marker association analysis yields a strongly validated molecular predictor with meaningful explanatory power. V̇o2max responses to endurance training can be predicted by measuring a ∼30
Douglas, Julie A.; Sandefur, Conner I.
2010-01-01
Summary In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a genetic study in the Old Order Amish of Lancaster County, Pennsylvania, a population isolate derived from a modest number of founders. Given one or more pedigrees, our program automatically and rapidly extracts a fixed number of maximally unrelated individuals. PMID:18321883
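The abstract does not give the authors' exact annealing schedule or objective, so the following is only a generic simulated-annealing sketch of the same idea: minimize summed pairwise kinship over size-k subsets, with single-swap moves and an arbitrarily chosen linear cooling schedule; all names and parameters are ours.

```python
import itertools
import math
import random

def total_kinship(subset, kin):
    """Summed pairwise kinship of a candidate subset (kin is a dict of dicts)."""
    return sum(kin[a][b] for a, b in itertools.combinations(subset, 2))

def max_unrelated_subset(kin, k, steps=5000, t0=1.0, seed=0):
    """Pick k maximally unrelated individuals by simulated annealing:
    swap one member in/out per step, always accept improvements, and
    accept worsenings with Boltzmann probability at temperature t."""
    rng = random.Random(seed)
    people = list(kin)
    subset = rng.sample(people, k)
    cost = total_kinship(subset, kin)
    best, best_cost = subset[:], cost
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        i = rng.randrange(k)
        newcomer = rng.choice([p for p in people if p not in subset])
        trial = subset[:i] + [newcomer] + subset[i + 1:]
        trial_cost = total_kinship(trial, kin)
        if trial_cost <= cost or rng.random() < math.exp((cost - trial_cost) / t):
            subset, cost = trial, trial_cost
            if cost < best_cost:
                best, best_cost = subset[:], cost
    return best, best_cost
```

With two families of three (kinship 0.25 within a family, 0 across), asking for k = 2 should return one individual from each family with total kinship 0.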
Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2peak)
NASA Technical Reports Server (NTRS)
Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan
2016-01-01
Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
An Efficient Algorithm for Maximizing Range Sum Queries in a Road Network
Jung, HaRim; Kim, Ung-Mo
2014-01-01
Given a set of positive-weighted points and a query rectangle r (specified by a client) of given extents, the goal of a maximizing range sum (MaxRS) query is to find the optimal location of r such that the total weight of the points covered by r is maximized. All existing methods for processing MaxRS queries assume the Euclidean distance metric. In many location-based applications, however, the motion of a client may be constrained by an underlying (spatial) road network; that is, the client cannot move freely in space. This paper addresses the problem of processing MaxRS queries in a road network. We propose an external-memory algorithm suited to large road network databases. In addition, in contrast to the existing methods, which retrieve only one optimal location, our proposed algorithm retrieves all the possible optimal locations. Through simulations, we evaluate the performance of the proposed algorithm. PMID:25152915
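For intuition about the query itself (the paper's road-network algorithm is not reproduced here), a brute-force Euclidean MaxRS sketch: a standard observation is that some optimal closed rectangle has an input point on its left edge and one on its bottom edge, so only the O(n^2) corner placements built from input coordinates need checking. Names are ours.

```python
def max_range_sum(points, w, h):
    """Brute-force Euclidean MaxRS over weighted points (x, y, weight):
    try every candidate bottom-left corner (x0, y0) drawn from input
    coordinates and sum the weights inside the closed w-by-h rectangle."""
    xs = sorted({x for x, _, _ in points})
    ys = sorted({y for _, y, _ in points})
    best_total, best_corner = 0.0, None
    for x0 in xs:
        for y0 in ys:
            total = sum(wt for x, y, wt in points
                        if x0 <= x <= x0 + w and y0 <= y <= y0 + h)
            if total > best_total:
                best_total, best_corner = total, (x0, y0)
    return best_total, best_corner
```

With three unit-weight points clustered near the origin and one weight-2 point far away, a 2-by-2 rectangle is best placed over the cluster.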
An independent hydrogen source
Kobzenko, G.F.; Chubenko, M.V.; Kobzenko, N.S.; Senkevich, A.I.; Shkola, A.A.
1985-10-01
Descriptions are given of the design and operation of an independent hydrogen source used in purifying and storing hydrogen. If LaNi₅ or TiFe is used as the sorbent, one can store about 500 liters of chemically bound hydrogen in a vessel of 0.9 liter. Molecular purification of the desorbed hydrogen is used. The IHS is a safe hydrogen source, since the hydrogen is trapped in the sorbent in the chemically bound state and in equilibrium with LaNi₅Hₓ at room temperature. If necessary, the IHS can serve as a compressor and provide higher hydrogen pressures. The device is compact and transportable.
Employee vs independent contractor.
Kolender, Ellen
2012-01-01
Finding qualified personnel for the cancer registry department has become increasingly difficult, as experienced abstractors retire and cancer diagnoses increase. Faced with hiring challenges, managers turn to teleworkers to fill positions and accomplish work in a timely manner. Suddenly, the hospital hires new legal staff and all telework agreements are disrupted. The question arises: Are teleworkers employees or independent contractors? Creating telework positions requires approval from the legal department and human resources. Caught off-guard in the last quarter of the year, I found myself again faced with hiring challenges. PMID:23599033
Cary Potter on Independent Education
ERIC Educational Resources Information Center
Potter, Cary
1978-01-01
Cary Potter was President of the National Association of Independent Schools from 1964-1978. As he leaves NAIS he gives his views on education, on independence, on the independent school, on public responsibility, on choice in a free society, on educational change, and on the need for collective action by independent schools. (Author/RK)
Myth or Truth: Independence Day.
ERIC Educational Resources Information Center
Gardner, Traci
Most Americans think of the Fourth of July as Independence Day, but is it really the day the U.S. declared and celebrated independence? By exploring myths and truths surrounding Independence Day, this lesson asks students to think critically about commonly believed stories regarding the beginning of the Revolutionary War and the Independence Day…
Smith, Des H.V.; Converse, Sarah J.; Gibson, Keith; Moehrenschlager, Axel; Link, William A.; Olsen, Glenn H.; Maguire, Kelly
2011-01-01
Captive breeding is key to management of severely endangered species, but maximizing captive production can be challenging because of poor knowledge of species breeding biology and the complexity of evaluating different management options. In the face of uncertainty and complexity, decision-analytic approaches can be used to identify optimal management options for maximizing captive production. Building decision-analytic models requires iterations of model conception, data analysis, model building and evaluation, identification of remaining uncertainty, further research and monitoring to reduce uncertainty, and integration of new data into the model. We initiated such a process to maximize captive production of the whooping crane (Grus americana), the world's most endangered crane, which is managed through captive breeding and reintroduction. We collected 15 years of captive breeding data from 3 institutions and used Bayesian analysis and model selection to identify predictors of whooping crane hatching success. The strongest predictor, and that with clear management relevance, was incubation environment. The incubation period of whooping crane eggs is split across two environments: crane nests and artificial incubators. Although artificial incubators are useful for allowing breeding pairs to produce multiple clutches, our results indicate that crane incubation is most effective at promoting hatching success. Hatching probability increased the longer an egg spent in a crane nest, from 40% hatching probability for eggs receiving 1 day of crane incubation to 95% for those receiving 30 days (time incubated in each environment varied independently of total incubation period). Because birds will lay fewer eggs when they are incubating longer, a tradeoff exists between the number of clutches produced and egg hatching probability. We developed a decision-analytic model that estimated 16 to be the optimal number of days of crane incubation needed to maximize the number of
Savage, Robert J; Best, Stuart A; Carstairs, Greg L; Ham, Daniel J
2012-07-01
Psychophysical assessments, such as the maximum acceptable lift, have been used to establish worker capability and set safe load limits for manual handling tasks in occupational settings. However, in military settings, in which task demand is set and capable workers must be selected, subjective measurements are inadequate, and maximal capacity testing must be used to assess lifting capability. The aim of this study was to establish and compare the relationship between maximal lifting capacity and a self-determined tolerable lifting limit, maximum acceptable lift, across a range of military-relevant lifting tasks. Seventy male soldiers (age 23.7 ± 6.1 years) from the Australian Army performed 7 strength-based lifting tasks to determine their maximum lifting capacity and maximum acceptable lift. Comparisons were performed to identify maximum acceptable lift relative to maximum lifting capacity for each individual task. Linear regression was used to identify the relationship across all tasks when the data were pooled. Strong correlations existed between all 7 lifting tasks (r = 0.87-0.96, p < 0.05). No differences were found in maximum acceptable lift relative to maximum lifting capacity across all tasks (p = 0.46). When data were pooled, maximum acceptable lift was equal to 84 ± 8% of the maximum lifting capacity. This study is the first to illustrate the strong and consistent relationship between maximum lifting capacity and maximum acceptable lift for multiple single lifting tasks. The relationship developed between these indices may be used to help assess self-selected manual handling capability through occupationally relevant maximal performance tests. PMID:22643137
Single- vs. Multiple-Set Strength Training in Women.
ERIC Educational Resources Information Center
Schlumberger, Andreas; Stec, Justyna; Schmidtbleicher, Dietmar
2001-01-01
Compared the effects of single- and multiple-set strength training in women with basic experience in resistance training. Both training groups had significant strength improvements in leg extension. In the seated bench press, only the three-set group showed a significant increase in maximal strength. There were higher strength gains overall in the…
Frame independent cosmological perturbations
Prokopec, Tomislav; Weenink, Jan E-mail: j.g.weenink@uu.nl
2013-09-01
We compute the third order gauge invariant action for scalar-graviton interactions in the Jordan frame. We demonstrate that the gauge invariant action for scalar and tensor perturbations on one physical hypersurface only differs from that on another physical hypersurface via terms proportional to the equation of motion and boundary terms, such that the evolution of non-Gaussianity may be called unique. Moreover, we demonstrate that the gauge invariant curvature perturbation and graviton on uniform field hypersurfaces in the Jordan frame are equal to their counterparts in the Einstein frame. These frame independent perturbations are therefore particularly useful in relating results in different frames at the perturbative level. On the other hand, the field perturbation and graviton on uniform curvature hypersurfaces in the Jordan and Einstein frame are non-linearly related, as are their corresponding actions and n-point functions.
Molecular maximizing characterizes choice on Vaughan's (1981) procedure
Silberberg, Alan; Ziriax, John M.
1985-01-01
Pigeons keypecked on a two-key procedure in which their choice ratios during one time period determined the reinforcement rates assigned to each key during the next period (Vaughan, 1981). During each of four phases, which differed in the reinforcement rates they provided for different choice ratios, the duration of these periods was four minutes, duplicating one condition from Vaughan's study. During the other four phases, these periods lasted six seconds. When these periods were long, the results were similar to Vaughan's and appeared compatible with melioration theory. But when these periods were short, the data were consistent with molecular maximizing (see Silberberg & Ziriax, 1982) and were incompatible with melioration, molar maximizing, and matching. In a simulation, stat birds following a molecular-maximizing algorithm responded on the short- and long-period conditions of this experiment. When the time periods lasted four minutes, the results were similar to Vaughan's and to the results of the four-minute conditions of this study; when the time periods lasted six seconds, the choice data were similar to the data from real subjects for the six-second conditions. Thus, a molecular-maximizing response rule generated choice data comparable to those from the short- and long-period conditions of this experiment. These data show that, among extant accounts, choice on the Vaughan procedure is most compatible with molecular maximizing. PMID:16812409
Ventilatory patterns differ between maximal running and cycling.
Tanner, David A; Duke, Joseph W; Stager, Joel M
2014-01-15
To determine the effect of exercise mode on ventilatory patterns, 22 trained men performed two maximal graded exercise tests; one running on a treadmill and one cycling on an ergometer. Tidal flow-volume (FV) loops were recorded during each minute of exercise with maximal loops measured pre and post exercise. Running resulted in a greater VO2peak than cycling (62.7±7.6 vs. 58.1±7.2 mL·kg⁻¹·min⁻¹). Although maximal ventilation (VE) did not differ between modes, ventilatory equivalents for O2 and CO2 were significantly larger during maximal cycling. Arterial oxygen saturation (estimated via ear oximeter) was also greater during maximal cycling, as were end-expiratory (EELV; 3.40±0.54 vs. 3.21±0.55 L) and end-inspiratory lung volumes (EILV; 6.24±0.88 vs. 5.90±0.74 L). Based on these results we conclude that ventilatory patterns differ as a function of exercise mode and these observed differences are likely due to the differences in posture adopted during exercise in these modes. PMID:24211317
Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision
Column generation algorithms for exact modularity maximization in networks.
Aloise, Daniel; Cafieri, Sonia; Caporossi, Gilles; Hansen, Pierre; Perron, Sylvain; Liberti, Leo
2010-10-01
Finding modules, or clusters, in networks currently attracts much attention in several domains. The most studied criterion for doing so, due to Newman and Girvan [Phys. Rev. E 69, 026113 (2004)], is modularity maximization. Many heuristics have been proposed for maximizing modularity; they rapidly yield near-optimal solutions, or sometimes optimal ones, but without a guarantee of optimality. There are few exact algorithms, prominent among which is that of Xu [Eur. Phys. J. B 60, 231 (2007)]. Modularity maximization can also be expressed as a clique partitioning problem and the row generation algorithm of Grötschel and Wakabayashi [Math. Program. 45, 59 (1989)] applied. We propose to extend both of these algorithms using powerful column generation methods for linear and nonlinear integer programming. The performance of the four resulting algorithms is compared on problems from the literature. Instances with up to 512 entities are solved exactly. Moreover, the computing times for previously solved problems are reduced substantially. PMID:21230350
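The quantity being maximized here is Newman-Girvan modularity, Q = Σ_c (e_c/m − (d_c/2m)²), where e_c is the number of intra-community edges, d_c the total degree inside community c, and m the total edge count. A minimal sketch of evaluating Q for a given partition (the column-generation machinery of the paper is well beyond this snippet; names are ours):

```python
def modularity(adj, communities):
    """Newman-Girvan modularity of a partition of an undirected graph.
    adj: dict {v: set(neighbors)}; communities: iterable of vertex sets."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # total edges
    q = 0.0
    for comm in communities:
        comm = set(comm)
        # each intra-community edge is seen from both endpoints, so halve
        e_c = sum(1 for v in comm for u in adj[v] if u in comm) / 2
        d_c = sum(len(adj[v]) for v in comm)          # total degree in c
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q
```

For two triangles joined by a single bridge, splitting along the bridge gives Q = 2·(3/7 − (7/14)²) = 5/14.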
Force Irregularity Following Maximal Effort: The After-Peak Reduction.
Doucet, Barbara M; Mettler, Joni A; Griffin, Lisa; Spirduso, Waneen
2016-08-01
Irregularities in force output are present throughout human movement and can impair task performance. We investigated the presence of a large force discontinuity (after-peak reduction, APR) that appeared immediately following the peak in maximal-effort ramp contractions performed with the thumb adductor and ankle dorsiflexor muscles in 25 young adult participants (76% males, 24% females; M age 24.4 years, SD = 7.1). The after-peak reduction displayed similar parameters in both muscle groups, with comparable drops in force during the after-peak reduction minima (thumb adductor: 27.5 ± 7.5% maximal voluntary contraction; ankle dorsiflexor: 25.8 ± 6.2% maximal voluntary contraction). A trend for the presence of fewer after-peak reductions with successive ramp trials was observed, suggesting a learning effect. Further investigation should explore underlying neural mechanisms contributing to the after-peak reduction. PMID:27502241
A Conceptual Examination of Setting Events
ERIC Educational Resources Information Center
Carter, Mark; Driscoll, Coralie
2007-01-01
Setting events are typically seen as antecedent contextual variables that influence behaviour. They are thought to act independently of Skinner's three-term contingency, which consists of a discriminative stimulus, response, and reinforcing consequence. There has been increasing interest in setting events in education from both a theoretical and…
Setting up Routines To Support Learners.
ERIC Educational Resources Information Center
Gordon, Dale
1999-01-01
Argues that, in addition to teaching reading skills, teachers need to schedule time for children to practice reading. Presents specific guidelines for implementing a Sustained Quiet Uninterrupted and Independent Reading Time (SQUIRT) program which include how to find time, set a purpose, and set procedures. (NH)
Recruitment of some respiratory muscles during three maximal inspiratory manoeuvres.
Nava, S; Ambrosino, N; Crotti, P; Fracchia, C; Rampulla, C
1993-01-01
BACKGROUND--A study was undertaken to determine the level of recruitment of the muscles used in the generation of respiratory muscle force, and to ascertain whether maximal diaphragmatic force and maximal inspiratory muscle force need to be measured by separate tests. The level of activity of three inspiratory muscles and one expiratory muscle during three maximal respiratory manoeuvres was studied: (1) maximal inspiration against a closed airway (Muller manoeuvre or maximal inspiratory pressure (MIP)); (2) maximal inspiratory manoeuvre followed by a maximal expiratory effort (combined manoeuvre); and (3) maximal inspiratory sniff through the nose (sniff manoeuvre). METHODS--All the manoeuvres were performed from functional residual capacity. The gastric (PGA) and oesophageal (POES) pressures and their difference, transdiaphragmatic pressure (PDI), and the integrated EMG activity of the diaphragm (EDI), the sternomastoid (ESTR), the intercostal parasternals (EIC), and the rectus abdominis muscles (ERA) were recorded. RESULTS--Mean (SD) PDI values for the Muller, combined, and sniff manoeuvres were: 127.6 (19.4), 162.7 (22.2), and 136.6 (24.8) cm H2O, respectively. The pattern of rib cage muscle recruitment (POES/PDI) was similar for the Muller and sniff manoeuvres (88% and 80% respectively), and was 58% in the combined manoeuvre, confirming data previously reported in the literature. Peak EDI amplitude was greater during the sniff manoeuvre in all subjects (100%) than during the combined (88.1%) and Muller (61.1%) manoeuvres. ESTR and EIC were more active in the Muller and the sniff manoeuvres. The contribution of the expiratory muscle (ERA) to the three manoeuvres was 100% in the combined, 26.1% for the sniff, and 11.5% for the Muller manoeuvre. CONCLUSIONS--Each of these three manoeuvres results in different mechanisms of inspiratory and expiratory muscle activation and the intrathoracic and intra-abdominal pressures generated are a reflection of the interaction
Maximal expiratory flow volume curve in quarry workers.
Subhashini, Arcot Sadagopa; Satchidhanandam, Natesa
2002-01-01
Maximal Expiratory Flow Volume (MEFV) curves were recorded with a computerized Spirometer (Med Spiror). Forced Vital Capacity (FVC), Forced Expiratory Volumes (FEV), and mean and maximal flow rates were obtained in 25 quarry workers who were free from respiratory disorders and 20 healthy control subjects. All the functional values were lower in quarry workers than in the control subjects, with the largest reduction in quarry workers with a work duration of over 15 years, especially for FEF75. The effects are probably due to smoking rather than dust exposure. PMID:12024961
Stability region maximization by decomposition-aggregation method. [Skylab stability]
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Cuk, S. M.
1974-01-01
The aim of this work is to improve the estimates of the stability regions by formulating and solving a proper maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability considerations of large-scale linear as well as nonlinear systems.
Projection of two biphoton qutrits onto a maximally entangled state.
Halevy, A; Megidish, E; Shacham, T; Dovrat, L; Eisenberg, H S
2011-04-01
Bell state measurements, in which two quantum bits are projected onto a maximally entangled state, are an essential component of quantum information science. We propose and experimentally demonstrate the projection of two quantum systems with three states (qutrits) onto a generalized maximally entangled state. Each qutrit is represented by the polarization of a pair of indistinguishable photons-a biphoton. The projection is a joint measurement on both biphotons using standard linear optics elements. This demonstration enables the realization of quantum information protocols with qutrits, such as teleportation and entanglement swapping. PMID:21517363
Flouris, Andreas D; Metsios, Giorgos S; Carrillo, Andres E; Jamurtas, Athanasios Z; Stivaktakis, Polychronis D; Tzatzarakis, Manolis N; Tsatsakis, Aristidis M; Koutedakis, Yiannis
2012-01-01
We assessed the cardiorespiratory and immune response to physical exertion following secondhand smoke (SHS) exposure through a randomized crossover experiment. Data were obtained from 16 (8 women) non-smoking adults during and following a maximal oxygen uptake cycling protocol administered at baseline and at 0, 1, and 3 hours following 1 hour of SHS exposure set at bar/restaurant carbon monoxide levels. We found that SHS was associated with a 12% decrease in maximum power output, an 8.2% reduction in maximal oxygen consumption, a 6% increase in perceived exertion, and a 6.7% decrease in time to exhaustion (P<0.05). Moreover, at 0 hours almost all respiratory and immune variables measured were adversely affected (P<0.05). For instance, FEV(1) values at 0 hours dropped by 17.4%, while TNF-α increased by 90.1% (P<0.05). At 3 hours, mean values of cotinine, perceived exertion and recovery systolic blood pressure in both sexes, IL4, TNF-α and IFN-γ in men, as well as FEV(1)/FVC, percent predicted FEV(1), respiratory rate, and tidal volume in women remained different compared to baseline (P<0.05). It is concluded that 1 hour of SHS exposure at bar/restaurant levels adversely affects the cardiorespiratory and immune response to maximal physical exertion in healthy nonsmokers for at least three hours following exposure. PMID:22355401
Steele, Adam; Davis, Alexander; Kim, Joohyung; Loth, Eric; Bayer, Ilker S
2015-06-17
This study presents a new factor that can be used to design materials where desired surface properties must be retained under in-system wear and abrasion. To demonstrate this factor, a synthetic nonwetting coating is presented that retains chemical and geometric performance as material is removed under multiple wear conditions: a coarse vitrified abradant (similar to sanding), a smooth abradant (similar to rubbing), and a mild abradant (a blend of sanding and rubbing). With this approach, such a nonwetting material displays unprecedented mechanical durability while maintaining desired performance under a range of demanding conditions. This performance, herein termed wear independent similarity performance (WISP), is critical because multiple mechanisms and/or modes of wear can be expected to occur in many typical applications, e.g., combinations of abrasion, rubbing, contact fatigue, weathering, particle impact, etc. Furthermore, these multiple wear mechanisms tend to quickly degrade a novel surface's unique performance, and thus many promising surfaces and materials never scale out of research laboratories. Dynamic goniometry and scanning electron microscopy results presented herein provide insight into these underlying mechanisms, which may also be applied to other coatings and materials. PMID:26018058
NASA Astrophysics Data System (ADS)
Liu, GaiYun; Chao, Daniel Yuh
2015-08-01
To date, research on the supervisor design for flexible manufacturing systems focuses on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computational burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of the minimal set of FBMs to efficiently solve integer linear programming problems while maintaining maximal permissiveness using a vector-covering approach. This paper improves the previous work and achieves the simplest structure with the minimal number of monitors.
Studying the Independent School Library
ERIC Educational Resources Information Center
Cahoy, Ellysa Stern; Williamson, Susan G.
2008-01-01
In 2005, the American Association of School Librarians' Independent Schools Section conducted a national survey of independent school libraries. This article analyzes the results of the survey, reporting specialized data and information regarding independent school library budgets, collections, services, facilities, and staffing. Additionally, the…
Edge covers and independence: Algebraic approach
NASA Astrophysics Data System (ADS)
Kalinina, E. A.; Khitrov, G. M.; Pogozhev, S. V.
2016-06-01
In this paper, linear algebra methods are applied to solve some problems of graph theory. For ordinary connected graphs, edge coverings and independent sets are considered. Some results concerning minimum edge covers and maximum matchings are proved with the help of linear algebraic approach. The problem of finding a maximum matching of a graph is fundamental both practically and theoretically, and has numerous applications, e.g., in computational chemistry and mathematical chemistry.
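The link between minimum edge covers and maximum matchings that the abstract alludes to is Gallai's identity: in a graph with no isolated vertices, (minimum edge cover size) = (number of vertices) − (maximum matching size). A brute-force sketch of this relationship (plain Python for a tiny graph; this is an illustration of the identity, not the paper's linear-algebraic method):

```python
from itertools import combinations

def max_matching_size(edges):
    """Brute-force maximum matching size (exponential; fine only for tiny demo graphs)."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):  # edges pairwise vertex-disjoint
                return k
    return 0

# 5-cycle on vertices 0..4 (no isolated vertices)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5
matching = max_matching_size(edges)   # maximum matching of C5 has size 2
edge_cover = n - matching             # Gallai's identity: minimum edge cover has size 3
print(matching, edge_cover)           # 2 3
```

Practical algorithms (augmenting paths, Edmonds' blossom algorithm) find maximum matchings in polynomial time; the brute force above only serves to make the identity concrete.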
UpSet: Visualization of Intersecting Sets
Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter
2016-01-01
Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
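The quantities an UpSet matrix row encodes are exclusive intersection sizes: how many elements belong to exactly a given combination of sets. A minimal sketch of that tally in plain Python (illustrative data; this is not the UpSet implementation itself):

```python
sets = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {4, 5, 6},
}

# For each element, record exactly which sets contain it, then count
# elements per membership pattern (one count per UpSet matrix row).
universe = set().union(*sets.values())
counts = {}
for element in universe:
    membership = frozenset(name for name, s in sets.items() if element in s)
    counts[membership] = counts.get(membership, 0) + 1

for combo in sorted(counts, key=lambda c: (-len(c), sorted(c))):
    print("&".join(sorted(combo)), counts[combo])
```

With the sample sets above, element 4 sits in all three sets, so the pattern A&B&C has count 1, while A alone has count 2 (elements 1 and 2). The combinatorial explosion the abstract mentions is visible here: k sets admit up to 2^k − 1 non-empty membership patterns.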
Glucose transporters and maximal transport are increased in endurance-trained rat soleus
NASA Technical Reports Server (NTRS)
Slentz, C. A.; Gulve, E. A.; Rodnick, K. J.; Henriksen, E. J.; Youn, J. H.; Holloszy, J. O.
1992-01-01
Voluntary wheel running induces an increase in the concentration of the regulatable glucose transporter (GLUT4) in rat plantaris muscle but not in soleus muscle (K. J. Rodnick, J. O. Holloszy, C. E. Mondon, and D. E. James. Diabetes 39: 1425-1429, 1990). Wheel running also causes hypertrophy of the soleus in rats. This study was undertaken to ascertain whether endurance training that induces enzymatic adaptations but no hypertrophy results in an increase in the concentration of GLUT4 protein in rat soleus (slow-twitch red) muscle and, if it does, to determine whether there is a concomitant increase in maximal glucose transport activity. Female rats were trained by treadmill running at 25 m/min up a 15% grade, 90 min/day, 6 days/wk for 3 wk. This training program induced increases of 52% in citrate synthase activity, 66% in hexokinase activity, and 47% in immunoreactive GLUT4 protein concentration in soleus muscles without causing hypertrophy. Glucose transport activity stimulated maximally with insulin plus contractile activity was increased to roughly the same extent (44%) as GLUT4 protein content in soleus muscle by the treadmill exercise training. In a second set of experiments, we examined whether a swim-training program increases glucose transport activity in the soleus in the presence of a maximally effective concentration of insulin. The swimming program induced a 44% increase in immunoreactive GLUT4 protein concentration. Glucose transport activity maximally stimulated with insulin was 62% greater in soleus muscle of the swimmers than in untrained controls. Training did not alter the basal rate of 2-deoxyglucose uptake.(ABSTRACT TRUNCATED AT 250 WORDS).
Li, Sucheng; Anwar, Shahzad; Lu, Weixin; Hang, Zhi Hong; Hou, Bo (E-mail: phyhoubo@gmail.com); Shen, Mingrong; Wang, Chin-Hua
2014-01-15
We study the absorption properties of ultrathin conductive films in the microwave regime, and find a moderate absorption effect which gives rise to a maximal absorbance of 50% if the sheet (square) resistance of the film meets an impedance matching condition. The maximal absorption exhibits a frequency-independent feature and takes place on an extremely subwavelength scale, the film thickness. As a realistic instance, a ∼5 nm thick Au film is predicted to achieve the optimal absorption. In addition, a methodology based on metallic mesh structure is proposed to design the frequency-independent ultrathin absorbers. We perform a design of such absorbers with 50% absorption, which is verified by numerical simulations.
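The 50% ceiling follows from the textbook model of a free-standing resistive sheet at normal incidence, where the reflection and transmission coefficients depend only on the sheet resistance Rs and the free-space impedance Z0 ≈ 376.7 Ω. A short sketch under those assumptions (a generic thin-sheet model, not the paper's specific films):

```python
Z0 = 376.73  # free-space wave impedance, ohms

def absorbance(rs):
    """Absorbance of a free-standing resistive sheet at normal incidence.
    r = -Z0/(Z0 + 2*Rs), t = 2*Rs/(Z0 + 2*Rs); A = 1 - |r|**2 - |t|**2."""
    r = -Z0 / (Z0 + 2 * rs)
    t = 2 * rs / (Z0 + 2 * rs)
    return 1 - r**2 - t**2

# Matching condition Rs = Z0/2 (about 188 ohms per square) gives the 50% maximum:
print(absorbance(Z0 / 2))  # 0.5
```

At Rs = Z0/2 the coefficients are r = −1/2 and t = 1/2, so A = 1 − 1/4 − 1/4 = 1/2; any other sheet resistance absorbs less, which is why a matched ultrathin film saturates at 50% regardless of frequency.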
Maximizing the Online Learning Experience: Suggestions for Educators and Students
ERIC Educational Resources Information Center
Cicco, Gina
2011-01-01
This article will discuss ways of maximizing the online course experience for teachers- and counselors-in-training. The widespread popularity of online instruction makes it a necessary learning experience for future teachers and counselors (Ash, 2011). New teachers and counselors take on the responsibility of preparing their students for real-life…
How Managerial Ownership Affects Profit Maximization in Newspaper Firms.
ERIC Educational Resources Information Center
Busterna, John C.
1989-01-01
Explores whether different levels of a manager's ownership of a newspaper affects the manager's profit maximizing attitudes and behavior. Finds that owner-managers tend to place less emphasis on profits than non-owner-controlled newspapers, contrary to economic theory and empirical evidence from other industries. (RS)
Modifying Softball for Maximizing Learning Outcomes in Physical Education
ERIC Educational Resources Information Center
Brian, Ali; Ward, Phillip; Goodway, Jacqueline D.; Sutherland, Sue
2014-01-01
Softball is taught in many physical education programs throughout the United States. This article describes modifications that maximize learning outcomes and that address the National Standards and safety recommendations. The modifications focus on tasks and equipment, developmentally appropriate motor-skill acquisition, increasing number of…
Price of oil and OPEC behavior: a utility maximization model
Adeinat, M.K.
1985-01-01
There is growing evidence that OPEC has neither behaved as a cartel, at least in the last decade, nor maximized the discounted value of its profits as would be suggested by the theory of exhaustible resources. This dissertation attempts to find a way out of this dead end by proposing a utility maximization model. According to the utility maximization model, the decision of how much crude oil each country produces is determined by that country's budgetary needs. The objective of each country is to choose present consumption and future consumption (which must be financed by its future income which can, in turn, be generated either by its investment out of current income or the proceeds of its oil reserves) at time t to maximize its utility function subject to its budget and absorptive capacity constraints. The model predicted that whenever the amount of savings is greater than the country's absorptive capacity as a result of higher prices of oil, it would respond by cutting back its production of oil. This prediction is supported by the following empirical findings: (1) that the marginal propensity to save (MPS) exceeded the marginal propensity to invest (MPI) during the period of study (1967-1981), implying that OPEC countries were facing an absorptive capacity constraint and (2) the quantity of oil production responded negatively to the permanent income in all three countries, the response being highly significant for those countries with the greatest budget surpluses.
Bernoulli equation and the nonexistence of maximal jets
NASA Astrophysics Data System (ADS)
Zdziarski, Andrzej A.
2016-02-01
We discuss the idea of maximal jets introduced by Falcke & Biermann (1995, A&A, 293, 665). According to it, the maximum possible jet power in its internal energy equals the kinetic power in its rest mass. We show this result is incorrect because of an unfortunate algebraic mistake.
Mentoring as Professional Development for Novice Entrepreneurs: Maximizing the Learning
ERIC Educational Resources Information Center
St-Jean, Etienne
2012-01-01
Mentoring can be seen as relevant if not essential in the continuing professional development of entrepreneurs. In the present study, we seek to understand how to maximize the learning that occurs through the mentoring process. To achieve this, we consider various elements that the literature suggested are associated with successful mentoring and…
Fertilizer placement to maximize nitrogen use by fescue
Technology Transfer Automated Retrieval System (TEKTRAN)
The method of fertilizer nitrogen(N) application can affect N uptake in tall fescue and therefore its yield and quality. Subsurface-banding (knife) of fertilizer maximizes fescue N uptake in the poorly-drained clay–pan soils of southeastern Kansas. This study was conducted to determine if knifed N r...
Density-metric unimodular gravity: Vacuum maximal symmetry
Abbassi, A.H.; Abbassi, A.M.
2011-05-15
We have investigated the vacuum maximally symmetric solutions of recently proposed density-metric unimodular gravity theory. The results are widely different from inflationary scenario. The exponential dependence on time in deSitter space is substituted by a power law. Open space-times with non-zero cosmological constant are excluded.
An effective theory of metrics with maximal proper acceleration
NASA Astrophysics Data System (ADS)
Gallego Torromé, Ricardo
2015-12-01
A geometric theory for spacetimes whose world lines associated with physical particles have an upper bound for the proper acceleration is developed. After some fundamental remarks on the requirements that the classical dynamics for point particles should hold, the notion of a generalized metric and a theory of maximal proper acceleration are introduced. A perturbative approach to metrics of maximal proper acceleration is discussed and we show how it provides a consistent theory where the associated Lorentzian metric corresponds to the limit when the maximal proper acceleration goes to infinity. Then several of the physical and kinematical properties of the maximal acceleration metric are investigated, including a discussion of the rudiments of the causal theory and the introduction of the notions of radar distance and celerity function. We discuss the corresponding modification of the Einstein mass-energy relation when the associated Lorentzian geometry is flat. In such a context it is also proved that the physical dispersion relation is relativistic. Two possible physical scenarios where the modified mass-energy relation could be confronted against the experiment are briefly discussed.
Curriculum and Testing Strategies to Maximize Special Education STAAR Achievement
ERIC Educational Resources Information Center
Johnson, William L.; Johnson, Annabel M.; Johnson, Jared W.
2015-01-01
This document is from a presentation at the 2015 annual conference of the Science Teachers Association of Texas (STAT). The two sessions (each listed as feature sessions at the state conference) examined classroom strategies the presenter used in his chemistry classes to maximize Texas end-of-course chemistry test scores for his special population…
Optimal technique for maximal forward rotating vaults in men's gymnastics.
Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R
2015-08-01
In vaulting a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum led to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible. PMID:26026290
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
Maximizing Thermal Efficiency and Optimizing Energy Management (Fact Sheet)
Not Available
2012-03-01
Researchers at the Thermal Test Facility (TTF) on the campus of the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in Golden, Colorado, are addressing maximizing thermal efficiency and optimizing energy management through analysis of efficient heating, ventilating, and air conditioning (HVAC) strategies, automated home energy management (AHEM), and energy storage systems.
Dynamical generation of maximally entangled states in two identical cavities
Alexanian, Moorad
2011-11-15
The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.
Maximizing Cohesion and Minimizing Conflict in Collaborative Writing Groups.
ERIC Educational Resources Information Center
Nelson, Sandra J.; Smith, Douglas C.
1990-01-01
Presents instructional strategies designed to maximize cohesion and minimize conflict in collaborative writing groups. Argues that an understanding of sources of conflict, conflict management strategies, and group processes allows productive and creative group energy to be channeled into effective business writing. (RS)
Cognitive Somatic Behavioral Interventions for Maximizing Gymnastic Performance.
ERIC Educational Resources Information Center
Ravizza, Kenneth; Rotella, Robert
Psychological training programs developed and implemented for gymnasts of a wide range of age and varying ability levels are examined. The programs utilized strategies based on cognitive-behavioral intervention. The approach contends that mental training plays a crucial role in maximizing performance for most gymnasts. The object of the training…
The Profit-Maximizing Firm: Old Wine in New Bottles.
ERIC Educational Resources Information Center
Felder, Joseph
1990-01-01
Explains and illustrates a simplified use of graphical analysis for analyzing the profit-maximizing firm. Believes that graphical analysis helps college students gain a deeper understanding of marginalism and an increased ability to formulate economic problems in marginalist terms. (DB)
Matching Pupils and Teachers to Maximize Expected Outcomes.
ERIC Educational Resources Information Center
Ward, Joe H., Jr.; And Others
To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…
Estimation of the Maximal Lactate Steady State in Endurance Runners.
Llodio, I; Gorostiaga, E M; Garcia-Tabar, I; Granados, C; Sánchez-Medina, L
2016-06-01
This study aimed to predict the velocity corresponding to the maximal lactate steady state (MLSSV) from non-invasive variables obtained during a maximal multistage running field test (modified University of Montreal Track Test, UMTT), and to determine whether a single constant velocity test (CVT), performed several days after the UMTT, could estimate the MLSSV. Within 4-5 weeks, 20 male runners performed: 1) a modified UMTT, and 2) several 30 min CVTs to determine MLSSV to a precision of 0.25 km·h(-1). Maximal aerobic velocity (MAV) was the best predictor of MLSSV. A regression equation was obtained: MLSSV = 1.425 + (0.756·MAV); R(2)=0.63. Running velocity during the CVT (VCVT) and blood lactate at 6 (La6) and 30 (La30) min further improved the MLSSV prediction: MLSSV = VCVT + 0.503 - 0.266·(La30 - La6); R(2)=0.66. MLSSV can be estimated from MAV during a single maximal multistage running field test among a homogeneous group of trained runners. This estimation can be further improved by performing an additional CVT. In terms of accuracy, simplicity and cost-effectiveness, the reported regression equations can be used for the assessment and training prescription of endurance runners. PMID:27116348
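The two regression equations reported in the abstract are simple enough to apply directly; a sketch (the example MAV and lactate values are hypothetical, and the equations apply only to runners comparable to the study's trained sample):

```python
def mlssv_from_mav(mav):
    """Estimate MLSS velocity (km/h) from maximal aerobic velocity; R^2 = 0.63."""
    return 1.425 + 0.756 * mav

def mlssv_from_cvt(v_cvt, la6, la30):
    """Refined estimate from a constant-velocity test plus blood lactate
    at 6 and 30 min (mmol/L); R^2 = 0.66."""
    return v_cvt + 0.503 - 0.266 * (la30 - la6)

# Hypothetical runner: MAV = 18 km/h, then a CVT at 15 km/h with a
# 1 mmol/L lactate rise between minutes 6 and 30.
print(round(mlssv_from_mav(18.0), 2))            # 15.03
print(round(mlssv_from_cvt(15.0, 3.0, 4.0), 2))  # 15.24
```

Note the second equation rewards lactate stability: a flat lactate profile (La30 ≈ La6) places the MLSSV just above the tested CVT velocity, while a rising profile pulls the estimate below it.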
Nursing Students' Awareness and Intentional Maximization of Their Learning Styles
ERIC Educational Resources Information Center
Mayfield, Linda Riggs
2012-01-01
This small, descriptive, pilot study addressed survey data from four levels of nursing students who had been taught to maximize their learning styles in a first-semester freshman success skills course. Bandura's Agency Theory supports the design. The hypothesis was that without reinforcing instruction, the students' recall and application of that…
Optoelectronic plethysmography compared to spirometry during maximal exercise.
Layton, Aimee M; Moran, Sienna L; Garber, Carol Ewing; Armstrong, Hilary F; Basner, Robert C; Thomashow, Byron M; Bartels, Matthew N
2013-01-15
The purpose of this study was to compare simultaneous measurements of tidal volume (Vt) by optoelectronic plethysmography (OEP) and spirometry during a maximal cycling exercise test to quantify possible differences between methods. Vt measured simultaneously by OEP and spirometry was collected during a maximal exercise test in thirty healthy participants. The two methods were compared by linear regression and Bland-Altman analysis at submaximal and maximal exercise. The average difference between the two methods and the mean percentage discrepancy were calculated. Submaximal exercise (SM) and maximal exercise (M) Vt measured by OEP and spirometry had very good correlation, SM R=0.963 (p<0.001), M R=0.982 (p<0.001), and a high degree of common variance, SM R(2)=0.928, M R(2)=0.983. Bland-Altman analysis demonstrated that during SM, OEP could measure exercise Vt as much as 0.134 L above and 0.025 L below that of spirometry; during M, OEP could measure exercise Vt as much as 0.188 L above and 0.017 L below that of spirometry. The discrepancy between measurements was -2.0 ± 7.2% at SM and -2.4 ± 3.9% at M. In conclusion, Vt measurements during exercise by OEP and spirometry are closely correlated and the difference between measurements was insignificant. PMID:23022440
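The Bland-Altman method used above reduces to computing the mean difference (bias) between paired measurements and the 95% limits of agreement, bias ± 1.96·SD. A minimal sketch with hypothetical paired Vt readings (not the study's data):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical tidal volumes (litres) from two methods on the same breaths
oep   = [1.02, 1.48, 2.10, 2.55, 3.01]
spiro = [1.00, 1.50, 2.05, 2.60, 2.95]
bias, lower, upper = bland_altman(oep, spiro)
print(round(bias, 3), round(lower, 3), round(upper, 3))
```

Unlike correlation, which only shows that two methods rise and fall together, the limits of agreement bound how far an individual OEP reading may sit above or below spirometry, which is the comparison reported in the abstract.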
How to Maximize Learning for Gifted Math Students
ERIC Educational Resources Information Center
Chamberlin, Scott A.
2008-01-01
Having a gifted math or science student in the family or classroom is a fascination as well as a significant challenge and responsibility for many parents and teachers. In order to help maximize student learning, several questions need to be asked. What should be the role of technology? How well do traditional schools serve gifted students? What…
PROFIT-MAXIMIZING PRINCIPLES, INSTRUCTIONAL UNITS FOR VOCATIONAL AGRICULTURE.
ERIC Educational Resources Information Center
Barker, Richard L.
The purpose of this guide is to assist vocational agriculture teachers in stimulating junior and senior high school student thinking, understanding, and decision making as associated with profit-maximizing principles of farm operation for use in farm management. It was developed under a U.S. Office of Education grant by teacher-educators, a farm…
Maximizing plant density affects broccoli yield and quality
Technology Transfer Automated Retrieval System (TEKTRAN)
Increased demand for fresh market bunch broccoli (Brassica oleracea L. var. italica) has led to increased production along the United States east coast. Maximizing broccoli yields is a primary concern for quickly expanding southeastern commercial markets. This broccoli plant density study was carr...
Emotional Control and Instructional Effectiveness: Maximizing a Timeout
ERIC Educational Resources Information Center
Andrews, Staci R.
2015-01-01
This article provides recommendations for best practices for basketball coaches to maximize the instructional effectiveness of a timeout during competition. Practical applications are derived from research findings linking emotional intelligence to effective coaching behaviors. Additionally, recommendations are based on the implications of the…
Using Debate to Maximize Learning Potential: A Case Study
ERIC Educational Resources Information Center
Firmin, Michael W.; Vaughn, Aaron; Dye, Amanda
2007-01-01
Following a review of the literature, an educational case study is provided for the benefit of faculty preparing college courses. In particular, we provide a transcribed debate utilized in a General Psychology course as a best practice example of how to craft a debate which maximizes student learning. The work is presented as a model for the…
Maximality and Idealized Cognitive Models: The Complementation of Spanish "Tener."
ERIC Educational Resources Information Center
Hilferty, Joseph; Valenzuela, Javier
2001-01-01
Discusses the bare-noun phrase (NP) complementation pattern of the Spanish verb "tener" (have). Shows that the maximality of the complement NP is dependent upon three factors: (1) idiosyncratic valence requirements; (2) encyclopedic knowledge related to possession; and (3) contextualized semantic construal. (Author/VWL)
Maximizing grain sorghum water use efficiency under deficit irrigation
Technology Transfer Automated Retrieval System (TEKTRAN)
Development and evaluation of sustainable and efficient irrigation strategies is a priority for producers faced with water shortages resulting from aquifer depletion, reduced base flows, and reallocation of water to non-agricultural sectors. Under a limited water supply, yield maximization may not b...
Do Speakers and Listeners Observe the Gricean Maxim of Quantity?
ERIC Educational Resources Information Center
Engelhardt, Paul E.; Bailey, Karl G. D.; Ferreira, Fernanda
2006-01-01
The Gricean Maxim of Quantity is believed to govern linguistic performance. Speakers are assumed to provide as much information as required for referent identification and no more, and listeners are believed to expect unambiguous but concise descriptions. In three experiments we examined the extent to which naive participants are sensitive to the…
Maximally entangled mixed-state generation via local operations
Aiello, A.; Puentes, G.; Voigt, D.; Woerdman, J. P.
2007-06-15
We present a general theoretical method to generate maximally entangled mixed states of a pair of photons initially prepared in the singlet polarization state. This method requires only local operations upon a single photon of the pair and exploits spatial degrees of freedom to induce decoherence. We also report experimental confirmation of these theoretical results.
"Independence" and the nonprofit board: a general counsel's guide.
Peregrine, Michael W; Broccolo, Bernadette M
2006-01-01
In the wake of the Sarbanes-Oxley Act regulations that govern the public company sector, standards are emerging to assure that nonprofit corporate boards maintain appropriate levels of independence. This article provides a summary of current trends in the development of independence standards for nonprofit corporate governance, from both tax and corporate law perspectives. The authors consider independence standards for nonprofit governing boards and discuss the evolution of independence standards as they relate to the duty of good faith, and the distinction between independence and conflicts of interest. The authors also examine the evolution of current federal regulations and study state models that have been successfully implemented to ensure the independence of nonprofit corporations. Finally, the authors propose a set of core guidelines to be considered when addressing board and committee independence issues. PMID:17402658
Performance of device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhao, Qi; Ma, Xiongfeng
2016-07-01
Quantum key distribution provides information-theoretically secure communication. In practice, device imperfections may jeopardise system security. Device-independent quantum key distribution solves this problem by providing secure keys even when the quantum devices are untrusted and uncharacterized. Following a recent security proof of device-independent quantum key distribution, we improve the key rate by tightening the parameter choice in the security proof. In practice, where the system is lossy, we further improve the key rate by taking the loss position information into account. Our numerical simulations show that our method can outperform existing results. Meanwhile, we outline clear experimental requirements for implementing device-independent quantum key distribution: the maximal tolerable error rate is 1.6%, the minimal required transmittance is 97.3%, and the minimal required visibility is 96.8%.
ERIC Educational Resources Information Center
Redman, Christine
2001-01-01
Points out the potential of the moon as a rich teaching resource for subject areas like astronomy, physics, and biology. Presents historical, scientific, technological, and interesting facts about the moon. Includes suggestions for maximizing student interest and learning about the moon. (YDS)
Learning to maximize reward rate: a model based on semi-Markov decision processes
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.
2014-01-01
When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually yields a better outcome, but less time remains for the other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of the decision threshold for each condition. We propose a model of learning the optimal values of the decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the values of the decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the decision thresholds until it finally finds the optimal values. PMID:24904252
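The per-condition threshold optimization described in this abstract can be illustrated with a minimal sketch, assuming standard closed-form drift-diffusion expressions (unit noise, symmetric thresholds) and invented task parameters. This is a direct grid search over thresholds, not the paper's SMDP learning algorithm:

```python
import numpy as np

def reward_rate(a, drift, reward=1.0, penalty=0.0, iti=2.0):
    # Standard closed forms for a symmetric drift-diffusion decider
    # with unit noise and thresholds at +/- a:
    #   P(correct)         = 1 / (1 + exp(-2 * a * drift))
    #   mean decision time = (a / drift) * tanh(a * drift)
    p = 1.0 / (1.0 + np.exp(-2.0 * a * drift))
    dt = (a / drift) * np.tanh(a * drift)
    # Average reward per unit time, including an inter-trial interval.
    return (p * reward - (1.0 - p) * penalty) / (dt + iti)

def best_threshold(drift):
    # Grid search over candidate thresholds; a learning agent would
    # instead adjust the threshold from trial-by-trial feedback.
    grid = np.linspace(0.05, 3.0, 600)
    rr = reward_rate(grid, drift)
    i = int(np.argmax(rr))
    return grid[i], rr[i]

# Hypothetical easy and hard conditions, differing only in drift.
a_easy, rr_easy = best_threshold(drift=1.2)
a_hard, rr_hard = best_threshold(drift=0.6)
```

A learning model like the one in the abstract would converge toward these grid-search optima from experience rather than computing them directly.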
Weak values and weak coupling maximizing the output of weak measurements
Di Lorenzo, Antonio
2014-06-15
In a weak measurement, the average output ⟨o⟩ of a probe that measures an observable Â of a quantum system undergoing both a preparation in a state ρ_i and a postselection in a state E_f is, to a good approximation, a function of the weak value A_w = Tr[E_f Â ρ_i]/Tr[E_f ρ_i], a complex number. For a fixed coupling λ, when the overlap Tr[E_f ρ_i] is very small, A_w diverges, but ⟨o⟩ stays finite, often tending to zero for symmetry reasons. This paper answers the questions: what is the weak value that maximizes the output for a fixed coupling? What is the coupling that maximizes the output for a fixed weak value? We derive equations for the optimal values of A_w and λ, and provide the solutions. The results are independent of the dimensionality of the system, and they apply to a probe having a Hilbert space of arbitrary dimension. Using the Schrödinger-Robertson uncertainty relation, we demonstrate that, in an important case, the amplification ⟨o⟩ cannot exceed the initial uncertainty σ_o in the observable ô; we provide an upper limit for the more general case, and a strategy to obtain ⟨o⟩ ≫ σ_o. Highlights: • We provide a general framework to find the extremal values of a weak measurement. • We derive the location of the extremal values in terms of preparation and postselection. • We devise a maximization strategy going beyond the limit of the Schrödinger-Robertson relation.
Irregular sets of two-sided Birkhoff averages and hyperbolic sets
NASA Astrophysics Data System (ADS)
Barreira, Luis; Li, Jinjun; Valls, Claudia
2016-04-01
For two-sided topological Markov chains, we show that the set of points for which the two-sided Birkhoff averages of a continuous function diverge is residual. We also show that the set of points for which the Birkhoff averages have a given set of accumulation points other than a singleton is residual. A nontrivial consequence of our results is that the set of points at which the local entropies of an invariant measure on a locally maximal hyperbolic set do not exist is residual. This strongly contrasts with the Shannon-McMillan-Breiman theorem of ergodic theory, which says that local entropies exist on a set of full measure.
Multivariate streamflow forecasting using independent component analysis
NASA Astrophysics Data System (ADS)
Westra, Seth; Sharma, Ashish; Brown, Casey; Lall, Upmanu
2008-02-01
Seasonal forecasting of streamflow provides many benefits to society by improving our ability to plan for and adapt to changing water supplies. A common approach to developing these forecasts is to use statistical methods that link a set of predictors representing climate state to historical streamflow, and then to use this model to project streamflow one or more seasons ahead based on the current or a projected climate state. We present an approach for forecasting multivariate time series that uses independent component analysis (ICA) to transform the multivariate data into a set of mutually independent univariate time series, thereby allowing the much broader class of univariate models to provide seasonal forecasts for each transformed series. Uncertainty is incorporated by bootstrapping the error component of each univariate model so that the probability distribution of the errors is maintained. Although all analyses are performed on univariate time series, the spatial dependence of the streamflow is captured by applying the inverse ICA transform to the predicted univariate series. We demonstrate the technique on a multivariate streamflow data set in Colombia, South America, comparing the results to a range of other commonly used forecasting methods. The results show that the ICA-based technique is significantly better at representing spatial dependence, with no loss of ability in capturing temporal dependence. As such, the ICA-based technique should yield considerable advantages when used in a probabilistic setting to manage large reservoir systems with multiple inflows or data collection points.
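The transform, model, and inverse-transform pipeline this abstract outlines can be sketched on synthetic data. Everything below is invented for illustration: the series, the mixing matrix, a hand-rolled symmetric FastICA, and AR(1) univariate models standing in for the paper's forecasting models; the bootstrap uncertainty step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for two spatially dependent streamflow series:
# independent sources mixed by a matrix A.
n = 2000
S = np.vstack([np.sin(np.linspace(0, 40 * np.pi, n)),   # seasonal-like source
               rng.uniform(-1, 1, n)])                  # irregular source
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whitening.
mean = X.mean(axis=1, keepdims=True)
Xc = X - mean
d, E = np.linalg.eigh(Xc @ Xc.T / n)
K = E @ np.diag(d ** -0.5) @ E.T      # whitening matrix
Z = K @ Xc

# Minimal symmetric FastICA (tanh nonlinearity).
W = np.linalg.qr(rng.normal(size=(2, 2)))[0]
for _ in range(200):
    G = np.tanh(W @ Z)
    W1 = G @ Z.T / n - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W1)
    W1 = U @ Vt                       # symmetric decorrelation
    done = np.max(np.abs(np.abs(np.diag(W1 @ W.T)) - 1.0)) < 1e-9
    W = W1
    if done:
        break

S_hat = W @ Z                         # mutually independent series

# Fit a univariate AR(1) model to each independent component,
# forecast one step ahead, then map back to "streamflow" space.
phi = (S_hat[:, 1:] * S_hat[:, :-1]).sum(axis=1) / (S_hat[:, :-1] ** 2).sum(axis=1)
s_next = phi * S_hat[:, -1]
x_next = np.linalg.pinv(W @ K) @ s_next + mean[:, 0]
```

The inverse transform at the end is what restores the spatial dependence between the forecast series, per the abstract.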
Yu, Chao; Sharma, Gaurav
2010-08-01
We explore camera scheduling and energy allocation strategies for lifetime optimization in image sensor networks. In the application scenarios we consider, visual coverage over a monitored region is obtained by deploying wireless, battery-powered image sensors. Each sensor camera provides coverage over part of the monitored region, and a central processor coordinates the sensors to gather the required visual data. For the purpose of maximizing the network operational lifetime, we consider two problems in this setting: a) camera scheduling, i.e., the selection, among available possibilities, of a set of cameras providing the desired coverage at each time instance, and b) energy allocation, i.e., the distribution of the total available energy among the camera sensor nodes. We model the network lifetime as a random variable that depends upon the coverage geometry of the sensors and the distribution of data requests over the monitored region, two key characteristics that distinguish our problem from other wireless sensor network applications. By suitably abstracting this model of network lifetime and utilizing asymptotic analysis, we propose lifetime-maximizing camera scheduling and energy allocation strategies. The effectiveness of the proposed strategies is validated by simulations. PMID:20350857
The optimal number of lymph nodes removed in maximizing the survival of breast cancer patients
NASA Astrophysics Data System (ADS)
Peng, Lim Fong; Taib, Nur Aishah; Mohamed, Ibrahim; Daud, Noorizam
2014-07-01
The number of lymph nodes removed is one of the important predictors of survival in breast cancer studies. Our aim is to determine the optimal number of lymph nodes to be removed to maximize the survival of breast cancer patients. The study population consists of 873 patients with at least one axillary node involved, among 1890 patients from the University of Malaya Medical Center (UMMC) breast cancer registry. For this study, the Chi-square test of independence is performed to determine significant associations between prognostic factors and survival status, while the Wilcoxon test is used to compare the estimates of the hazard functions of two or more groups at each observed event time. Logistic regression analysis is then conducted to identify important predictors of survival. In particular, Akaike's Information Criterion (AIC) is calculated from the logistic regression model for all thresholds of nodes involved, as an alternative to the Wald statistic (χ2), in order to determine the optimal number of nodes that need to be removed to obtain the maximum differential in survival. The results from both measures are compared. It is recommended that, for this particular group, a minimum of 10 nodes be removed to maximize the survival of breast cancer patients.
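The AIC-over-thresholds selection step can be sketched with invented data (all numbers below are made up, not from the UMMC registry). For a logistic model with a single binary predictor, the maximized log-likelihood is available in closed form, so no iterative fitting is needed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort: nodes removed per patient and survival status,
# constructed so survival genuinely improves once >= 10 nodes are removed.
n = 873
nodes = rng.integers(1, 31, size=n)
surv = rng.random(n) < np.where(nodes >= 10, 0.80, 0.50)

def aic_for_threshold(t):
    # Logistic model with the single binary predictor 1{nodes >= t}.
    # With one binary covariate, the MLE fitted probabilities are just
    # the two group survival rates, so the maximized log-likelihood is
    # available in closed form.  AIC = 2k - 2*logL, with k = 2
    # parameters (intercept and slope).
    ll = 0.0
    g = nodes >= t
    for mask in (g, ~g):
        k, m = surv[mask].sum(), mask.sum()
        p = k / m
        if 0 < p < 1:                 # groups with p in {0, 1} contribute 0
            ll += k * np.log(p) + (m - k) * np.log(1.0 - p)
    return 2 * 2 - 2 * ll

thresholds = np.arange(2, 29)
aics = np.array([aic_for_threshold(t) for t in thresholds])
best_t = int(thresholds[np.argmin(aics)])
```

Scanning AIC across candidate cut-points and taking the minimum picks out a threshold at or near the one built into the synthetic data.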
Gamma loop contributing to maximal voluntary contractions in man.
Hagbarth, K E; Kunesch, E J; Nordin, M; Schmidt, R; Wallin, E U
1986-11-01
A local anaesthetic drug was injected around the peroneal nerve in healthy subjects in order to investigate whether the resulting loss in foot dorsiflexion power in part depended on a gamma-fibre block preventing 'internal' activation of spindle end-organs and thereby depriving the alpha-motoneurones of an excitatory spindle inflow during contraction. The motor outcome of maximal dorsiflexion efforts was assessed by measuring firing rates of individual motor units in the anterior tibial (t.a.) muscle, mean voltage e.m.g. from the pretibial muscles, dorsiflexion force and range of voluntary foot dorsiflexion movements. The tests were performed with and without peripheral conditioning stimuli, such as agonist or antagonist muscle vibration or imposed stretch of the contracting muscles. As compared to control values of t.a. motor unit firing rates in maximal isometric voluntary contractions, the firing rates were lower and more irregular during maximal dorsiflexion efforts performed during subtotal peroneal nerve blocks. During the development of paresis a gradual reduction of motor unit firing rates was observed before the units ceased responding to the voluntary commands. This change in motor unit behaviour was accompanied by a reduction of the mean voltage e.m.g. activity in the pretibial muscles. At a given stage of anaesthesia the e.m.g. responses to maximal voluntary efforts were more affected than the responses evoked by electric nerve stimuli delivered proximal to the block, indicating that impaired impulse transmission in alpha motor fibres was not the sole cause of the paresis. The inability to generate high and regular motor unit firing rates during peroneal nerve blocks was accentuated by vibration applied over the antagonistic calf muscles. By contrast, in eight out of ten experiments agonist stretch or vibration caused an enhancement of motor unit firing during the maximal force tasks. The reverse effects of agonist and antagonist vibration on the
Clustering performance comparison using K-means and expectation maximization algorithms
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-01-01
Clustering is an important data mining technique that separates data into categories based on similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are K-means and the expectation-maximization (EM) algorithm. Logistic regression extends linear regression analysis to a category-type dependent variable, predicting the possibility of occurrence of an event from a linear combination of the independent variables. However, classifying all data by means of logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
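The contrast between the two algorithms can be made concrete with hand-rolled versions on invented two-cluster data (the wine data set and the logistic-regression stage are not reproduced here): K-means makes hard assignments, while EM for an isotropic Gaussian mixture computes soft responsibilities.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated synthetic clusters (stand-in for wine features).
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(4.0, 0.5, (100, 2))])

def kmeans(X, k=2, iters=50):
    # Hard assignments: alternate nearest-centre labelling and
    # centre recomputation.  Deterministic init for reproducibility.
    centres = X[[0, len(X) // 2]].copy()
    for _ in range(iters):
        lab = ((X[:, None] - centres[None]) ** 2).sum(-1).argmin(1)
        centres = np.array([X[lab == j].mean(0) for j in range(k)])
    return lab

def em_gmm(X, k=2, iters=100):
    # Soft assignments: EM for an isotropic Gaussian mixture in 2-D.
    mu = X[[0, len(X) // 2]].copy()
    var = np.ones(k)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        d2 = ((X[:, None] - mu[None]) ** 2).sum(-1)          # (n, k)
        # E-step: log responsibilities (2-D isotropic Gaussian density).
        log_r = np.log(w) - d2 / (2 * var) - np.log(2 * np.pi * var)
        r = np.exp(log_r - log_r.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: weighted means, variances and mixture weights.
        nk = r.sum(0)
        mu = (r.T @ X) / nk[:, None]
        var = np.array([(r[:, j] * ((X - mu[j]) ** 2).sum(1)).sum()
                        / (2 * nk[j]) for j in range(k)])
        w = nk / len(X)
    return r.argmax(1)

lab_km = kmeans(X)
lab_em = em_gmm(X)
```

On well-separated data both methods agree; the paper's point is that the cluster labels from either can then feed a logistic-regression stage.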
Maximal radius of the aftershock zone in earthquake networks
NASA Astrophysics Data System (ADS)
Mezentsev, A. Yu.; Hayakawa, M.
2009-09-01
In this paper, several seismoactive regions (Japan, Southern California and two tectonically distinct Japanese subregions) were investigated and structural seismic constants were estimated for each region. Using the method for seismic clustering detection proposed by Baiesi and Paczuski [M. Baiesi, M. Paczuski, Phys. Rev. E 69 (2004) 066106; M. Baiesi, M. Paczuski, Nonlin. Proc. Geophys. (2005) 1607-7946], we obtained the equation of the aftershock zone (AZ). It is shown that taking into account the finite velocity of the seismic signal leads naturally to a maximal possible radius of the AZ. We obtained the equation for the maximal radius of the AZ as a function of the magnitude of the main event and estimated its values for each region.
Safety factor maximization for trusses subjected to fatigue stresses
NASA Astrophysics Data System (ADS)
Hedaya, Mohammed Mohammed; Moneeb Elsabbagh, Adel; Hussein, Ahmed Mohamed
2015-08-01
This article presents a mathematical model for sizing optimization of undamped trusses subjected to dynamic loading leading to fatigue. The combined effect of static and dynamic loading, at steady state, is considered. An optimization model, whose objective is the maximization of the safety factor of these trusses, is developed. A new quantity (equivalent fatigue strain energy) combining the effects of static and dynamic stresses is presented. This quantity is used as a global measure of the proximity of fatigue failure. Therefore, the equivalent fatigue strain energy is minimized, and this seems to give a good value for the maximal equivalent static stress. This assumption is verified through two simple examples. The method of moving asymptotes is used in the optimization of trusses. The applicability of the proposed approach is demonstrated through two numerical examples; a 10-bar truss with different loading cases and a helicopter tail subjected to dynamic loading.
Magellan Project: Evolving enhanced operations efficiency to maximize science value
NASA Technical Reports Server (NTRS)
Cheuvront, Allan R.; Neuman, James C.; Mckinney, J. Franklin
1994-01-01
Magellan has been one of NASA's most successful spacecraft, returning more science data than all other planetary spacecraft combined. The Magellan Spacecraft Team (SCT) has maximized the science return with innovative operational techniques to overcome anomalies and to perform activities for which the spacecraft was not designed. Commanding the spacecraft was originally time consuming because the standard development process was envisioned as a set of manual tasks. The Program understood that reducing mission operations costs was essential for an extended mission. Management created an environment that encouraged automation of routine tasks, allowing staff reduction while maximizing the science data returned. Data analysis and trending, command preparation, and command reviews are some of the tasks that were automated. The SCT has accommodated personnel reductions by improving operations efficiency while returning the maximum science data possible.
Osthole suppresses seizures in the mouse maximal electroshock seizure model.
Luszczki, Jarogniew J; Andres-Mach, Marta; Cisowski, Wojciech; Mazol, Irena; Glowniak, Kazimierz; Czuczwar, Stanislaw J
2009-04-01
The aim of this study was to determine the anticonvulsant effects of osthole (7-methoxy-8-(3-methyl-2-butenyl)-2H-1-benzopyran-2-one, a natural coumarin derivative) in the mouse maximal electroshock-induced seizure model. The antiseizure effects of osthole were determined at 15, 30, 60, and 120 min after its systemic (i.p.) administration. The time course of the anticonvulsant action of osthole revealed that this natural coumarin derivative produced clear-cut antielectroshock activity in mice, with experimentally derived ED(50) values ranging from 259 to 631 mg/kg. In conclusion, osthole suppresses seizure activity in the mouse maximal electroshock-induced seizure model. It may become a novel treatment option following further investigation in other animal models of epilepsy and preclinical studies. PMID:19236860
Controlled Dense Coding Using the Maximal Slice States
NASA Astrophysics Data System (ADS)
Liu, Jun; Mo, Zhi-wen; Sun, Shu-qin
2016-04-01
In this paper we investigate controlled dense coding with the maximal slice states. Three schemes are presented. Our schemes employ a maximal slice state as the quantum channel, a tripartite entangled state shared among the first party (Alice), the second party (Bob), and the third party (Cliff). The supervisor (Cliff) supervises and controls the channel between Alice and Bob via measurement. By carrying out local von Neumann measurements, a controlled-NOT operation, and a positive operator-valued measure (POVM), and by introducing an auxiliary particle, we obtain the success probability of dense coding. It is shown that the success probability of information transmitted from Alice to Bob is usually less than one. The average amount of information for each scheme is calculated in detail. These results offer deeper insight into quantum dense coding via quantum channels of partially entangled states.
Systematic Independent Validation of Inner Heliospheric Models
NASA Technical Reports Server (NTRS)
MacNeice, P. J.; Takakishvili, Alexandre
2008-01-01
This presentation is the first in a series which will provide independent validation of community models of the outer corona and inner heliosphere. In this work we establish a set of measures to be used in validating this group of models. We use these procedures to generate a comprehensive set of results from the Wang-Sheeley-Arge (WSA) model, which will be used as a baseline, or reference, against which to compare all other models. We also run a test of the validation procedures by applying them to a small set of results produced by the ENLIL magnetohydrodynamic (MHD) model. In future presentations we will validate other models currently hosted by the Community Coordinated Modeling Center (CCMC), including a comprehensive validation of the ENLIL model. The WSA model is widely used to model the solar wind, and is used by a number of agencies to predict solar wind conditions at Earth as much as four days into the future. Because it is so important to both the research and space weather forecasting communities, it is essential that its performance be measured systematically and independently. In this paper we offer just such an independent and systematic validation. We report skill scores for the model's predictions of wind speed and IMF polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests line-of-sight magnetograms. For this study we generated model results for monthly magnetograms from the National Solar Observatory (SOLIS), Mount Wilson Observatory and the GONG network, spanning the Carrington rotation range from 1650 to 2068. We compare the influence of the different magnetogram sources, performance at quiet and active times, and estimate the effect of different empirical wind speed tunings. We also consider the ability of the WSA model to identify sharp transitions in wind speed from slow to fast wind. These results will serve as a baseline against which to compare future
About closedness by convolution of the Tsallis maximizers
NASA Astrophysics Data System (ADS)
Vignat, C.; Hero, A. O., III; Costa, J. A.
2004-09-01
In this paper, we study the stability under convolution of the maximizing distributions of the Tsallis entropy under an energy constraint (hereafter called Tsallis distributions). These distributions are shown to obey three important properties: a stochastic representation property, an orthogonal invariance property and a duality property. As a consequence of these properties, the behavior of Tsallis distributions under convolution is characterized. Finally, a special random convolution, called the Kingman convolution, is shown to ensure the stability of Tsallis distributions.
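For background (standard material, not stated in the abstract itself): under a second-moment (energy) constraint, the Tsallis entropy is maximized by the q-Gaussian family, which reduces to the ordinary Gaussian as q → 1:

```latex
S_q[p] \;=\; \frac{1}{q-1}\left(1 - \int p(x)^q \, dx\right),
\qquad
p_q(x) \;\propto\; \left[\,1 - (1-q)\,\beta\, x^{2}\,\right]_{+}^{\frac{1}{1-q}} .
```

Here β is fixed by the energy constraint, and [·]₊ denotes the positive part.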
Planning for partnerships: Maximizing surge capacity resources through service learning.
Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B
2015-01-01
Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts. PMID:26750818
Letters to the editor: Cosmological constant in broken maximal supergravities.
Chalmers, G.; High Energy Physics
2002-12-01
We examine the form of the cosmological constant in the loop expansion of broken maximally supersymmetric supergravity theories, and after embedding, within superstring and M-theory. Supersymmetry breaking at the TeV scale generates values of the cosmological constant that are in agreement with current astrophysical data. The form of perturbative quantum effects in the loop expansion is consistent with this parameter regime.
Cardiovascular changes during maximal breath-holding in elite divers.
Guaraldi, Pietro; Serra, Maria; Barletta, Giorgio; Pierangeli, Giulia; Terlizzi, Rossana; Calandra-Buonaura, Giovanna; Cialoni, Danilo; Cortelli, Pietro
2009-12-01
During maximal breath-holding six healthy elite breath-hold divers, after an initial "easy-going" phase in which cardiovascular changes resembled the so-called "diving response", exhibited a sudden and severe rise in blood pressure during the "struggle" phase of the maneuver. These changes may represent the first tangible expression of a defense reaction, which overrides the classic diving reflex, aiming to reduce the hypoxic damage and to break the apnea before the loss of consciousness. PMID:19655193
Oncoplastic Breast Reduction: Maximizing Aesthetics and Surgical Margins
Chang, Michelle Milee; Huston, Tara; Ascherman, Jeffrey; Rohde, Christine
2012-01-01
Oncoplastic breast reduction combines oncologically sound concepts of cancer removal with aesthetically maximized approaches for breast reduction. Numerous incision patterns and types of pedicles can be used for purposes of oncoplastic reduction, each tailored for size and location of tumor. A team approach between reconstructive and breast surgeons produces positive long-term oncologic results as well as satisfactory cosmetic and functional outcomes, rendering oncoplastic breast reduction a favorable treatment option for certain patients with breast cancer. PMID:23209890
Gatterer, Hannes; Klarod, Kultida; Heinrich, Dieter; Schlemmer, Philipp; Dilitz, Stefan; Burtscher, Martin
2015-08-01
The purpose of this study was to investigate the effect of a maximal shuttle-run shock microcycle in hypoxia on repeated sprint ability (RSA; 6 × 40 m, run as 6 × 20 m back and forth with 20 s rest in between), Yo-Yo intermittent recovery (YYIR) test performance, and redox status. Fourteen soccer players (age: 23.9 ± 2.1 years), randomly assigned to hypoxia (∼3300 m) or normoxia training, performed 8 maximal shuttle-run training sessions within 12 days. YYIR test performance and the RSA fatigue slope improved independently of the hypoxia stimulus (p < 0.05). Training reduced the oxidative stress level (-7.9%, p < 0.05), and the reduction was associated with performance improvements (r = 0.761, ΔRSA; r = -0.575, ΔYYIR, p < 0.05). PMID:26212372
Hild, Kenneth E.; Attias, Hagai T.; Nagarajan, Srikantan S.
2009-01-01
In this paper, we develop a maximum-likelihood (ML) spatio-temporal blind source separation (BSS) algorithm, in which the temporal dependencies are explained by assuming that each source is an autoregressive (AR) process and the distribution of the associated independent identically distributed (i.i.d.) innovations process is described using a mixture of Gaussians. Unlike most ML methods, the proposed algorithm takes into account both spatial and temporal information, optimization is performed using the expectation-maximization (EM) method, the source model is adapted to maximize the likelihood, and the update equations have a simple, analytical form. The proposed method, which we refer to as autoregressive mixture of Gaussians (AR-MOG), outperforms nine other methods on artificial mixtures of real audio. We also show results for using AR-MOG to extract the fetal cardiac signal from real magnetocardiographic (MCG) data. PMID:18334368
Formation Control of the MAXIM L2 Libration Orbit Mission
NASA Technical Reports Server (NTRS)
Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda
2004-01-01
The Micro-Arcsecond X-ray Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imager mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. Currently the mission architecture comprises 25 spacecraft, 24 as optics modules and one as the detector, which will form sparse sub-apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. To achieve these mission goals, the formation is required to cooperatively point at desired targets. Once pointed, the individual elements of the MAXIM formation must remain stable, maintaining their relative positions and attitudes below a critical threshold. These pointing and formation stability requirements impact the control and design of the formation. In this paper, we provide analysis of control efforts that are dependent upon the stability and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions to minimize the control efforts and we address continuous control via input feedback linearization (IFL). Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.
Formation Control of the MAXIM L2 Libration Orbit Mission
NASA Technical Reports Server (NTRS)
Folta, David; Hartman, Kate; Howell, Kathleen; Marchand, Belinda
2004-01-01
The Micro-Arcsecond Imaging Mission (MAXIM), a proposed concept for the Structure and Evolution of the Universe (SEU) Black Hole Imaging mission, is designed to make a ten million-fold improvement in X-ray image clarity of celestial objects by providing better than 0.1 micro-arcsecond imaging. To achieve mission requirements, MAXIM will have to improve on pointing by orders of magnitude. This pointing requirement impacts the control and design of the formation. Currently the architecture comprises 25 spacecraft, which will form the sparse apertures of a grazing incidence X-ray interferometer covering the 0.3-10 keV bandpass. This configuration will deploy 24 spacecraft as optics modules and one as the detector. The formation must allow for long duration continuous science observations and also for reconfiguration that permits re-pointing of the formation. In this paper, we provide analysis and trades of several control efforts that are dependent upon the pointing requirements and the configuration and dimensions of the MAXIM formation. We emphasize the utilization of natural motions in the Lagrangian regions that minimize the control efforts and we address both continuous and discrete control via LQR and feedback linearization. Results provide control cost, configuration options, and capabilities as guidelines for the development of this complex mission.
Reference Values of Maximal Oxygen Uptake for Polish Rowers
Klusiewicz, Andrzej; Starczewski, Michał; Ładyga, Maria; Długołęcka, Barbara; Braksator, Wojciech; Mamcarz, Artur; Sitkowski, Dariusz
2014-01-01
The aim of this study was to characterize changes in maximal oxygen uptake over several years and to elaborate current reference values of this index based on determinations carried out in large and representative groups of top Polish rowers. For this study, 81 female and 159 male rowers from the sub-junior to senior categories were recruited from the Polish National Team and its direct backup. All subjects performed an incremental exercise test on a rowing ergometer, during which maximal oxygen uptake was measured with the BxB method. The calculated reference values for elite Polish junior and U23 rowers allow an athlete's fitness level to be evaluated against the respective reference group and may aid coaches in controlling the training process. Mean values of VO2max achieved by members of the top Polish rowing crews who competed in the Olympic Games or World Championships over the last five years are also presented. The results of the research on the “trainability” of maximal oxygen uptake suggest that the growth rate of this index is larger for high-level athletes and that the index (in absolute values) increases significantly between the ages of 19-22 years (U23 category). PMID:25713672
Random effects structure for confirmatory hypothesis testing: Keep it maximal
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.
2013-01-01
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately-sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the ‘gold standard’ for confirmatory hypothesis testing in psycholinguistics and beyond. PMID:24403724
Pore space morphology analysis using maximal inscribed spheres
NASA Astrophysics Data System (ADS)
Silin, Dmitriy; Patzek, Tad
2006-11-01
A new robust algorithm for analyzing the geometry and connectivity of the pore space of sedimentary rock is based on fundamental concepts of mathematical morphology. The algorithm distinguishes between the “pore bodies” and “pore throats,” and establishes their respective volumes and connectivity. The proposed algorithm also produces a stick-and-ball diagram of the rock pore space. The tests on a pack of equal spheres, for which the results are verifiable, confirm its stability. The impact of image resolution on the algorithm output is investigated on images of computer-generated pore space. One of the distinctive features of our approach is that no image thinning is applied. Instead, the information about the skeleton is stored through the maximal inscribed balls or spheres (MIS) associated with each voxel. These maximal balls retain information about the entire pore space. Comparison with the results obtained by a thinning procedure preserving some topological properties of the pore space shows that our method produces more realistic estimates of the number and shapes of pore bodies and pore throats, and of the pore coordination numbers. The distribution of maximal inscribed spheres makes it possible to simulate mercury injection and to compute the corresponding dimensionless capillary pressure curve. It turns out that the calculated capillary pressure curve is a robust descriptor of the pore space geometry and, in particular, can be used to determine the quality of computer-based rock reconstruction.
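A common starting point for this kind of analysis is the Euclidean distance transform, which gives, for each pore voxel, the radius of the largest ball inscribed in the pore space and centered at that voxel. The sketch below is a simplified stand-in for the authors' algorithm on a hypothetical 2-D binary image (the paper works on 3-D voxel data), computed with SciPy:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical binary pore image: 1 = pore, 0 = solid grain (2-D for brevity).
pore = np.zeros((9, 9), dtype=int)
pore[2:7, 2:7] = 1  # a 5x5 square pore body

# For each pore voxel, the EDT gives the distance to the nearest solid voxel,
# i.e. the radius of the maximal inscribed ball centered at that voxel.
radii = distance_transform_edt(pore)

# The largest inscribed ball sits at the center of the pore body.
print(radii.max())
print(np.unravel_index(radii.argmax(), radii.shape))
```

Grouping voxels by the maximal balls that cover them is then what separates "pore bodies" from the narrower "pore throats" connecting them.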
Echocardiographic dimensions and maximal oxygen uptake in oarsmen during training.
Wieling, W; Borghols, E A; Hollander, A P; Danner, S A; Dunning, A J
1981-01-01
We studied nine freshman and 14 senior undergraduate oarsmen during seven months of training and compared them with 17 age- and sex-matched sedentary control subjects in order to assess the influence of heavy physical exercise on cardiac dimensions and maximal oxygen uptake. Standard M-mode echocardiographic techniques were used. At the start of the season senior oarsmen had a greater left ventricular end-diastolic dimension, and a thicker interventricular septum and posterior left ventricular wall, than control subjects and freshman oarsmen. The latter two groups did not differ from each other. During the training period there was a slight and gradual increase in left ventricular end-diastolic dimension, and in interventricular septum and posterior wall thickness, in freshmen. In seniors only left ventricular end-diastolic dimension increased significantly. Maximal oxygen uptake showed a distinct increase between the fourth and seventh month, during the period of intensive rowing training. There was no relation between echocardiographic variables and maximal oxygen uptake. A combination of heavy dynamic and static exercise can thus lead to significant changes in both left ventricular wall thickness and chamber size within months. Echocardiographic variables measured at rest cannot be used as a suitable index of performance capacity. PMID:7272130
Swanson, David L; Thomas, Nathan E; Liknes, Eric T; Cooper, Sheldon J
2012-01-01
The underlying assumption of the aerobic capacity model for the evolution of endothermy is that basal (BMR) and maximal aerobic metabolic rates are phenotypically linked. However, because BMR is largely a function of central organs whereas maximal metabolic output is largely a function of skeletal muscles, the mechanistic underpinnings for their linkage are not obvious. Interspecific studies in birds generally support a phenotypic correlation between BMR and maximal metabolic output. If the aerobic capacity model is valid, these phenotypic correlations should also extend to intraspecific comparisons. We measured BMR, M(sum) (maximum thermoregulatory metabolic rate) and MMR (maximum exercise metabolic rate in a hop-flutter chamber) in winter for dark-eyed juncos (Junco hyemalis), American goldfinches (Carduelis tristis; M(sum) and MMR only), and black-capped chickadees (Poecile atricapillus; BMR and M(sum) only) and examined correlations among these variables. We also measured BMR and M(sum) in individual house sparrows (Passer domesticus) in summer, winter, and spring. For both raw metabolic rates and residuals from allometric regressions, BMR was not significantly correlated with either M(sum) or MMR in juncos. Moreover, no significant correlation between M(sum) and MMR or their mass-independent residuals occurred for juncos or goldfinches. Raw BMR and M(sum) were significantly positively correlated for black-capped chickadees and house sparrows, but mass-independent residuals of BMR and M(sum) were not. These data suggest that central organ and exercise organ metabolic levels are not inextricably linked and that muscular capacities for exercise and shivering do not necessarily vary in tandem in individual birds. Why intraspecific and interspecific avian studies show differing results and the significance of these differences to the aerobic capacity model are unknown, and resolution of these questions will require additional studies of potential mechanistic
Worrell, R.B.
1985-05-01
The Set Equation Transformation System (SETS) is used to achieve the symbolic manipulation of Boolean equations. Symbolic manipulation involves changing equations from their original forms into more useful forms - particularly by applying Boolean identities. The SETS program is an interpreter which reads, interprets, and executes SETS user programs. The user writes a SETS user program specifying the processing to be achieved and submits it, along with the required data, for execution by SETS. Because of the general nature of SETS, i.e., the capability to manipulate Boolean equations regardless of their origin, the program has been used for many different kinds of analysis.
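The identity-driven rewriting that SETS performs can be illustrated in miniature. The toy simplifier below is a sketch under simple assumptions (sum-of-products expressions represented as lists of literal sets), not SETS code; it applies the Boolean absorption identity A + A·B = A:

```python
def absorb(terms):
    """Apply the absorption identity A + A*B = A to a sum of products.

    Each term is a set of literal names; a term is dropped when it is a
    strict superset of (i.e. absorbed by) another term.  Exact duplicates
    (idempotence, A + A = A) are removed as well.
    """
    terms = [frozenset(t) for t in terms]
    kept = []
    for t in terms:
        if not any(other < t for other in terms):
            kept.append(t)
    # Deduplicate while preserving order.
    out = []
    for t in kept:
        if t not in out:
            out.append(t)
    return out

# A + A*B + B*C  simplifies to  A + B*C
simplified = absorb([{"A"}, {"A", "B"}, {"B", "C"}])
print(simplified)
```

Fault-tree analysis, one of the uses mentioned for SETS, leans heavily on exactly this kind of absorption to reduce a tree's Boolean equation to its minimal cut sets.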
Marketing Handbook for Independent Schools.
ERIC Educational Resources Information Center
Boarding Schools, Boston, MA.
This publication is a resource to help independent schools attract more families to their institutions and to increase voluntary support from the larger community surrounding the school. The first chapter attempts to dispel misconceptions, define pertinent marketing terms, and relate their importance to independent schools. The rest of the book…
Independent Learning Models: A Comparison.
ERIC Educational Resources Information Center
Wickett, R. E. Y.
Five models of independent learning are suitable for use in adult education programs. The common factor is a facilitator who works in some way with the student in the learning process. They display different characteristics, including the extent of independence in relation to content and/or process. Nondirective tutorial instruction and learning…
Wang, Anran; Yang, Lin; Liu, Chengyu; Cui, Jingxuan; Li, Yao; Yang, Xingxing; Zhang, Song
2015-01-01
This study aimed to investigate differences among athletes in the characteristics of the photoplethysmographic (PPG) pulse shape. 304 athletes were enrolled and divided into three subgroups according to a typical sport classification in terms of the maximal oxygen uptake (MaxO2_low, MaxO2_middle and MaxO2_high groups) or the maximal muscular voluntary contraction (MMVC_low, MMVC_middle, and MMVC_high groups). Finger PPG pulses were digitally recorded and then normalized to derive the pulse area, pulse peak time Tp, dicrotic notch time Tn, and pulse reflection index (RI). The four parameters were finally compared between the three subgroups categorized by MaxO2 or by MMVC. In conclusion, quantifying the characteristics of the PPG pulses in different athletes demonstrated that MaxO2, but not MMVC, had a significant effect on the arterial properties. PMID:25710022
Steding-Ehrenborg, Katarina; Boushel, Robert C; Calbet, José A; Åkeson, Per; Mortensen, Stefan P
2015-12-01
Age-related decline in cardiac function can be prevented or postponed by lifelong endurance training. However, effects of normal ageing as well as of lifelong endurance exercise on longitudinal and radial contribution to stroke volume are unknown. The aim of this study was to determine resting longitudinal and radial pumping in elderly athletes, sedentary elderly and young sedentary subjects. Furthermore, we aimed to investigate determinants of maximal cardiac output in elderly. Eight elderly athletes (63 ± 4 years), seven elderly sedentary (66 ± 4 years) and ten young sedentary subjects (29 ± 4 years) underwent cardiac magnetic resonance imaging. All subjects underwent maximal exercise testing and for elderly subjects maximal cardiac output during cycling was determined using a dye dilution technique. Longitudinal and radial contribution to stroke volume did not differ between groups (longitudinal left ventricle (LV) 52-65%, P = 0.12, right ventricle (RV) 77-87%, P = 0.16, radial 7.9-8.6%, P = 1.0). Left ventricular atrioventricular plane displacement (LVAVPD) was higher in elderly athletes and young sedentary compared with elderly sedentary subjects (14 ± 3, 15 ± 2 and 11 ± 1 mm, respectively, P < 0.05). There was no difference between groups for RVAVPD (P = 0.2). LVAVPD was an independent predictor of maximal cardiac output (R(2) = 0.61, P < 0.01, β = 0.78). Longitudinal and radial contributions to stroke volume did not differ between groups. However, how longitudinal pumping was achieved differed; elderly athletes and young sedentary subjects showed similar AVPD whereas this was significantly lower in elderly sedentary subjects. Elderly sedentary subjects achieved longitudinal pumping through increased short-axis area of the ventricle. Large AVPD was a determinant of maximal cardiac output and exercise capacity. PMID:26496146
Hillman, Stanley S; Hancock, Thomas V; Hedrick, Michael S
2013-02-01
Maximal aerobic metabolic rates (MMR) in vertebrates are supported by increased conductive and diffusive fluxes of O(2) from the environment to the mitochondria necessitating concomitant increases in CO(2) efflux. A question that has received much attention has been which step, respiratory or cardiovascular, provides the principal rate limitation to gas flux at MMR? Limitation analyses have principally focused on O(2) fluxes, though the excess capacity of the lung for O(2) ventilation and diffusion remains unexplained except as a safety factor. Analyses of MMR normally rely upon allometry and temperature to define these factors, but cannot account for much of the variation and often have narrow phylogenetic breadth. The unique aspect of our comparative approach was to use an interclass meta-analysis to examine cardio-respiratory variables during the increase from resting metabolic rate to MMR among vertebrates from fish to mammals, independent of allometry and phylogeny. Common patterns at MMR indicate universal principles governing O(2) and CO(2) transport in vertebrate cardiovascular and respiratory systems, despite the varied modes of activities (swimming, running, flying), different cardio-respiratory architecture, and vastly different rates of metabolism (endothermy vs. ectothermy). Our meta-analysis supports previous studies indicating a cardiovascular limit to maximal O(2) transport and also implicates a respiratory system limit to maximal CO(2) efflux, especially in ectotherms. Thus, natural selection would operate on the respiratory system to enhance maximal CO(2) excretion and the cardiovascular system to enhance maximal O(2) uptake. This provides a possible evolutionary explanation for the conundrum of why the respiratory system appears functionally over-designed from an O(2) perspective, a unique insight from previous work focused solely on O(2) fluxes. The results suggest a common gas transport blueprint, or Bauplan, in the vertebrate clade. PMID
Kirk, Eric A; Copithorne, Dave B; Dalton, Brian H; Rice, Charles L
2016-08-25
The triceps surae comprises an important group of muscles for human posture and gait. The soleus, unlike other limb muscles, shows atypically low firing rates in both old and young adults across various voluntary strength levels, including maximal contractions. The other portion of the triceps surae, the gastrocnemii, has not been explored in aging, and despite anatomic, histochemical and age-related morphological differences, they share many common functions with the soleus. During multiple visits, 10 active young (23-33 years) and 10 active old participants (76-86 years) performed a series of plantar flexor isometric contractions at a range of contraction intensities including maximal voluntary contraction (MVC), with tungsten microelectrodes inserted into the lateral (LG) and medial (MG) gastrocnemius. Despite equal and near maximal voluntary activation (VA) (∼98%), MVC torque was ∼46% lower, twitch tension was ∼34% lower, and contractile speed was ∼15% slower in the old men compared with the young. At all isometric torque levels tested (25, 50, 75 and 100% MVC) there were no statistically significant differences in mean motor unit firing rates (MUFRs) between young and old men. In both groups, the range of mean MU firing rates was similar (∼8 Hz at 25% MVC to ∼22 Hz at 100% MVC). The structural age-related changes in the gastrocnemii are not reflected in neural drive adaptations, indicating that MUFRs may not be a common feature with aging and that other factors such as habitual use or anatomical location may be influential. PMID:27298006
Garcia-Tabar, Ibai; Eclache, Jean P.; Aramendi, José F.; Gorostiaga, Esteban M.
2015-01-01
The aim was to examine the drift in the measurements of fractional concentration of oxygen (FO2) and carbon dioxide (FCO2) of a Nafion-using metabolic cart during incremental maximal exercise in 18 young and 12 elderly males, and to propose a way in which the drift can be corrected. The drift was verified by comparing the pre-test calibration values with the immediate post-test verification values of the calibration gases. The system demonstrated an average downscale drift (P < 0.001) in FO2 and FCO2 of −0.18% and −0.05%, respectively. Compared with measured values, corrected average maximal oxygen uptake values were 5–6% lower (P < 0.001) whereas corrected maximal respiratory exchange ratio values were 8–9% higher (P < 0.001). The drift was not due to an electronic instability in the analyzers because it reverted after 20 min of recovery from the end of the exercise. The drift may be related to an incomplete removal of water vapor from the expired gas during transit through the Nafion conducting tube. These data demonstrate the importance of checking FO2 and FCO2 values by regular pre-test calibrations and post-test verifications, and also the importance of correcting a possible shift immediately after exercise. PMID:26578980
Fast independent component analysis algorithm for quaternion valued signals.
Javidi, Soroush; Took, Clive Cheong; Mandic, Danilo P
2011-12-01
An extension of the fast independent component analysis algorithm is proposed for the blind separation of both Q-proper and Q-improper quaternion-valued signals. This is achieved by maximizing a negentropy-based cost function, and is derived rigorously using the recently developed HR calculus in order to implement Newton optimization in the augmented quaternion statistics framework. It is shown that the use of augmented statistics and the associated widely linear modeling provides theoretical and practical advantages when dealing with general quaternion signals with noncircular (rotation-dependent) distributions. Simulations using both benchmark and real-world quaternion-valued signals support the approach. PMID:22027374
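The quaternion-valued algorithm relies on the HR calculus and augmented statistics, but the fixed-point iteration it extends can be shown in its conventional real-valued form. The sketch below is a standard one-unit FastICA with the tanh nonlinearity applied to a synthetic two-source mixture; it is a real-valued analogue for illustration, not the quaternion algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 8, n)
S = np.vstack([np.sin(2 * np.pi * t),            # source 1: sinusoid
               np.sign(np.sin(3 * np.pi * t))])  # source 2: square wave
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])                       # mixing matrix
X = A @ S                                        # observed mixtures

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# One-unit FastICA fixed-point iteration with the tanh nonlinearity:
# w <- E[z g(w'z)] - E[g'(w'z)] w, then renormalize.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(200):
    g = np.tanh(Z.T @ w)
    w_new = (Z * g).mean(axis=1) - (1 - g ** 2).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-10
    w = w_new
    if converged:
        break

recovered = w @ Z  # estimate of one source, up to sign and scale
```

The paper's contribution is precisely that this update can be derived rigorously for quaternion data, where the naive component-wise treatment fails for noncircular signals.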
Maximal-intensity isometric and dynamic exercise performance after eccentric muscle actions.
Byrne, Christopher; Eston, Roger
2002-12-01
A well-documented observation after eccentric exercise is a reduction in maximal voluntary force. However, little is known about the ability to maintain maximal isometric force or generate and maintain dynamic peak power. These aspects of muscle function were studied in seven participants (5 males, 2 females). Knee extensor isometric strength and rate of fatigue were assessed by a sustained 60 s maximal voluntary contraction at 80 degrees and 40 degrees knee flexion, corresponding to an optimal and a shortened muscle length, respectively. Dynamic peak power and rate of fatigue were assessed during a 30 s Wingate cycle test. Plasma creatine kinase was measured from a fingertip blood sample. These variables were measured before, 1 h after and 1, 2, 3 and 7 days after 100 repetitions of the eccentric phase of the barbell squat exercise (10 sets x 10 reps at 80% concentric one-repetition maximum). Eccentric exercise resulted in elevations in creatine kinase activity above baseline (274+/-109 U x l(-1); mean +/- s(x)) after 1 h (506+/-116 U x l(-1), P < 0.05) and 1 day (808+/-117 U x l(-1), P < 0.05). Isometric strength was reduced (P < 0.05) for 7 days (35% at 1 h, 5% at day 7) and the rate of fatigue was lower (P < 0.05) for 3 days at 80 degrees and for 1 day at 40 degrees. Wingate peak power was reduced to a lesser extent (P < 0.05) than isometric strength at 1 h (13%) and, although the time course of recovery was equal, the two variables differed in their pattern of recovery. Eccentrically exercised muscle was characterized by an inability to generate high force and power, but an improved ability to maintain force and power. Such functional outcomes are consistent with the proposition that type II fibres are selectively recruited or damaged during eccentric exercise. PMID:12477004
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
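The greedy-versus-exact comparison at the heart of the study can be reproduced on a toy instance. The parcels, utilities, costs, and budget below are hypothetical, not the study's data; the point is that a benefit-per-cost greedy rule can lock in a cheap high-ratio parcel and miss the exact optimum, producing the same kind of gap (here about 12%) that the study quantifies.

```python
from itertools import combinations

# Hypothetical parcels: (name, utility, cost) -- not the study's data.
parcels = [("a", 7, 1), ("b", 9, 10), ("c", 1, 1)]
budget = 10

def greedy(parcels, budget):
    """Select parcels by descending utility-per-cost until the budget is spent."""
    chosen, spent, total = [], 0, 0
    for name, u, c in sorted(parcels, key=lambda p: p[1] / p[2], reverse=True):
        if spent + c <= budget:
            chosen.append(name)
            spent += c
            total += u
    return total, chosen

def optimal(parcels, budget):
    """Exhaustively search all feasible subsets (fine for tiny instances)."""
    best, best_set = 0, []
    for r in range(len(parcels) + 1):
        for combo in combinations(parcels, r):
            cost = sum(c for _, _, c in combo)
            util = sum(u for _, u, _ in combo)
            if cost <= budget and util > best:
                best, best_set = util, [n for n, _, _ in combo]
    return best, best_set

# Greedy grabs the high-ratio parcels and can no longer afford "b";
# the exact optimum takes "b" alone.
print(greedy(parcels, budget))
print(optimal(parcels, budget))
```

On realistic problem sizes the exhaustive search is replaced by an integer-programming solver, which is the open-source approach the study compares against the heuristic.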
Infant Mental Health Home Visitation: Setting and Maintaining Professional Boundaries
ERIC Educational Resources Information Center
Barron, Carla; Paradis, Nichole
2010-01-01
Relationship-based infant mental health home visiting services for infants, toddlers, and their families intensify the connection between the personal and professional. To promote the therapeutic relationship and maximize the effectiveness of the intervention, home visitors must exercise good judgment, in the field and in the moment, to set and…
Experimental Measurement-Device-Independent Entanglement Detection
Nawareg, Mohamed; Muhammad, Sadiq; Amselem, Elias; Bourennane, Mohamed
2015-01-01
Entanglement is one of the most puzzling features of quantum theory and of great importance for the new field of quantum information. Determining whether a given state is entangled or not is one of the most challenging open problems of the field. Here we report on the experimental demonstration of measurement-device-independent (MDI) entanglement detection using the witness method for general two-qubit photon polarization systems. In the MDI setting, there is no need to assume perfect implementations or to trust the measurement devices. This experimental demonstration can be generalized for the investigation of properties of quantum systems and for the realization of cryptography and communication protocols. PMID:25649664
Sequence independent amplification of DNA
Bohlander, S.K.
1998-03-24
The present invention is a rapid sequence-independent amplification procedure (SIA). Even minute amounts of DNA from various sources can be amplified independent of any sequence requirements of the DNA or any a priori knowledge of any sequence characteristics of the DNA to be amplified. This method allows, for example, the sequence independent amplification of microdissected chromosomal material and the reliable construction of high quality fluorescent in situ hybridization (FISH) probes from YACs or from other sources. These probes can be used to localize YACs on metaphase chromosomes but also--with high efficiency--in interphase nuclei. 25 figs.
Biographical factors of occupational independence.
Müller, G F
2001-10-01
The present study examined biographical factors of occupational independence, including any kind of nonemployed profession. Participants were 59 occupationally independent and 58 employed persons of different age (M = 36.3 yr.), sex, and profession. They were interviewed on variables such as family influence, educational background, occupational role models, and critical events for choosing a particular type of occupational career. The results show that occupationally independent people reported stronger family ties, experienced fewer restrictions of formal education, and remembered fewer negative role models than the employed people. Implications of these results are discussed. PMID:11783553
Method for maximizing shale oil recovery from an underground formation
Sisemore, Clyde J.
1980-01-01
A method for maximizing shale oil recovery from an underground oil shale formation which has previously been processed by in situ retorting such that there is provided in the formation a column of substantially intact oil shale intervening between adjacent spent retorts, which method includes the steps of back filling the spent retorts with an aqueous slurry of spent shale. The slurry is permitted to harden into a cement-like substance which stabilizes the spent retorts. Shale oil is then recovered from the intervening column of intact oil shale by retorting the column in situ, the stabilized spent retorts providing support for the newly developed retorts.
Maximizing Shelf Life of Paneer-A Review.
Goyal, Sumit; Goyal, Gyanendra Kumar
2016-06-10
Paneer, which resembles a soft cheese, is a well-known heat- and acid-coagulated milk product. It is very popular in the Indian subcontinent and has appeared in Western and Middle Eastern markets. The shelf life of paneer is quite low: it loses freshness after two to three days when stored under refrigeration. Various preservation techniques, including chemical additives, packaging, thermal processing, and low-temperature storage, have been proposed by researchers for enhancing its shelf life. The use of antimicrobial additives is not preferred because of perceived toxicity risks. Modified atmosphere packaging has been recommended as one of the best techniques for maximizing the shelf life of paneer. PMID:25679043
Maximizing the return on taxpayers' investments in fundamental biomedical research
Lorsch, Jon R.
2015-01-01
The National Institute of General Medical Sciences (NIGMS) at the U.S. National Institutes of Health has an annual budget of more than $2.3 billion. The institute uses these funds to support fundamental biomedical research and training at universities, medical schools, and other institutions across the country. My job as director of NIGMS is to work to maximize the scientific returns on the taxpayers' investments. I describe how we are optimizing our investment strategies and funding mechanisms, and how, in the process, we hope to create a more efficient and sustainable biomedical research enterprise. PMID:25926703
Informatics knowledge: the key to maximizing performance and productivity.
Sinclair, V G
1997-06-01
Nurse managers face a competitive and complex marketplace that demands greater attention to customer satisfaction along with continual improvements in quality and cost-effectiveness. Without information technology, the manager cannot hope to deliver ever greater quality at less cost. Nurse managers' effective promotion and use of computer applications can have an enormous impact on the performance and productivity of their health care facilities. This article explores the potential of information technology to maximize clinical and cost outcomes, optimize decision making, and enhance administrative productivity. PMID:9220900
Deformations with maximal supersymmetries part 2: off-shell formulation
NASA Astrophysics Data System (ADS)
Chang, Chi-Ming; Lin, Ying-Hsuan; Wang, Yifan; Yin, Xi
2016-04-01
Continuing our exploration of maximally supersymmetric gauge theories (MSYM) deformed by higher dimensional operators, in this paper we consider an off-shell approach based on pure spinor superspace and focus on constructing supersymmetric deformations beyond the first order. In particular, we give a construction of the Batalin-Vilkovisky action of an all-order non-Abelian Born-Infeld deformation of MSYM in the non-minimal pure spinor formalism. We also discuss subtleties in the integration over the pure spinor superspace and the relevance of Berkovits-Nekrasov regularization.
Maximally exposed offsite individual location determination for NESHAPS compliance
Simpkins, A.A.
2000-03-13
The Environmental Protection Agency (EPA) requires the use of the computer program CAP88 for demonstrating compliance with the National Emission Standard for Hazardous Air Pollutants (NESHAPS). One of the inputs required for CAP88 is the location of the maximally exposed individual (MEI) by sector and distance. Distances to the MEI have been determined for 15 different potential release locations at SRS. These locations were compared with previous work and differences were analyzed. Additionally, the SREL Conference Center was included as a potential offsite location since it may be used as a dormitory in the future. The worst sectors were then determined based on these distances.
[Research advance in rare and endemic plant Tetraena mongolica Maxim].
Zhen, Jiang-Hong; Liu, Guo-Hou
2008-02-01
In this paper, the research advance in rare and endemic plant Tetraena mongolica Maxim. was summarized from the aspects of morphology, anatomy, palynology, cytology, seed-coat micro-morphology, embryology, physiology, biology, ecology, genetic diversity, chemical constituents, endangered causes, and conservation approaches, and the further research directions were prospected. It was considered that population viability, idioplasm conservation and artificial renewal, molecular biology of ecological adaptability, and assessment of habitat suitability should be the main aspects for the future study of T. mongolica. PMID:18464654
Exact Distribution of the Maximal Height of p Vicious Walkers
NASA Astrophysics Data System (ADS)
Schehr, Grégory; Majumdar, Satya N.; Comtet, Alain; Randon-Furling, Julien
2008-10-01
Using path-integral techniques, we compute exactly the distribution of the maximal height Hp of p nonintersecting Brownian walkers over a unit time interval in one dimension, both for excursions (p watermelons with a wall) and bridges (p watermelons without a wall), for all integer p≥1. For large p, we show that ⟨Hp⟩ ∼ √(2p) (excursions) whereas ⟨Hp⟩ ∼ √p (bridges). Our exact results prove that previous numerical experiments only measured the preasymptotic behaviors and not the correct asymptotic ones. In addition, our method establishes a physical connection between vicious walkers and random matrix theory.
Wave number of maximal growth in viscous ferrofluids.
NASA Astrophysics Data System (ADS)
Lange, A.; Reimann, B.; Richter, R.
2001-09-01
Within the framework of linear stability theory, an analytical method is presented for the normal field instability in magnetic fluids. It allows one to calculate the maximal growth rate and the corresponding wave number for arbitrary values of the layer thickness and viscosity. Applying this method to magnetic fluids of finite depth, the results are quantitatively compared to the wave number of the transient pattern observed experimentally after a jump-like increase of the field. The wave number grows linearly with increasing induction, and the theoretical and experimental data agree well.
How to Maximize Patient Safety When Prescribing Opioids.
Kirpalani, Dhiruj
2015-11-01
Opioid prescribing and deaths in the United States have increased exponentially in the past couple of decades. This increase has occurred amidst growing awareness of the lack of long-term efficacy of opioids, as well as the significant long- and short-term risks associated with these medications. The scope of the opioid epidemic has led to the development of extensive clinical screening and monitoring tools recommended for health care providers who prescribe opioids to patients for chronic nonmalignant pain. The purpose of this review is to summarize the latest guidelines and evidence that will assist in maximizing patient safety while using chronic opioid therapy as part of pain management. PMID:26568502
Krantz, R J; Douthett, J
2000-05-01
Using recent developments in music theory, which are generalizations of the well-known properties of the familiar 12-tone, equal-tempered musical scale, an approach is described for constructing equal-tempered musical scales (with "diatonic" scales and the associated chord structure) based on good-fitting intervals and a generalization of the modulation properties of the circle of fifths. An analysis of the usual 12-tone equal-tempered system is provided as a vehicle to introduce the mathematical details of these recent music-theoretic developments and to articulate the approach for constructing musical scales. The formalism is extended to describe equal-tempered musical scales with nonoctave closure. Application of the formalism to a system with closure at an octave plus a perfect fifth generates the Bohlen-Pierce scale originally developed for harmonic properties similar to traditional chords but without the perceptual biases of these familiar chords. Subsequently, the formalism is applied to the group-theory-based 20-fold microtonal system of Balzano. It is shown that with an appropriate choice of nonoctave closure (6:1 in this case), determined by the formalism combined with continued fraction analysis, that this group-theoretic-generated system may be interpreted in terms of the frequency ratios 21:56:88:126. Although contrary to the spirit of the group-theoretic approach to generating scales, this analysis may be applicable for discovering the ratio basis of unusual tunings common in non-Western music. PMID:10830394
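The step-fitting arithmetic behind such nonoctave scales is easy to illustrate. The following Python snippet is a sketch only (the function name and interface are mine, not the authors' formalism): it divides a closure ratio into n equal steps and finds the step count that best approximates a just interval, measuring the error in cents.

```python
import math

def equal_tempered_steps(n_steps, closure_ratio, target_ratio):
    """Best-fitting step count for `target_ratio` in an n-fold equal
    division of `closure_ratio`, plus the fitting error in cents."""
    step_cents = 1200 * math.log2(closure_ratio) / n_steps
    target_cents = 1200 * math.log2(target_ratio)
    k = round(target_cents / step_cents)
    return k, k * step_cents - target_cents

# Bohlen-Pierce: 13 equal steps closing at the "tritave" (3:1).
for ratio in (5 / 3, 7 / 3):
    k, err = equal_tempered_steps(13, 3, ratio)
    print(ratio, k, round(err, 2))
```

For the Bohlen-Pierce case this recovers the well-known fits of 6 steps to 5:3 and 10 steps to 7:3, each within about 7 cents.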
The Independent Payment Advisory Board.
Manchikanti, Laxmaiah; Falco, Frank J E; Singh, Vijay; Benyamin, Ramsin M; Hirsch, Joshua A
2011-01-01
The Independent Payment Advisory Board (IPAB) is a vastly powerful component of the president's health care reform law, with authority to issue recommendations to reduce the growth in Medicare spending, providing recommendations to be considered by Congress and implemented by the administration on a fast track basis. Ever since its inception, IPAB has been one of the most controversial issues of the Patient Protection and Affordable Care Act (ACA), even though the powers of IPAB are restricted and multiple sectors of health care have been protected in the law. IPAB works by recommending policies to Congress to help Medicare provide better care at a lower cost, which would include ideas on coordinating care, getting rid of waste in the system, providing incentives for best practices, and prioritizing primary care. Congress then has the power to accept or reject these recommendations. However, Congress faces extreme limitations: it must either enact policies that achieve equivalent savings or let the Secretary of Health and Human Services (HHS) follow IPAB's recommendations. IPAB has strong supporters and opponents, leading to arguments for and against it, extending even to the introduction of legislation to repeal IPAB. The origins of IPAB are found in the ideology of the National Institute for Health and Clinical Excellence (NICE) and the impetus of exploring health care costs, even though IPAB's authority seems to be limited to Medicare only. The structure and operation of IPAB differ from those of the Medicare Payment Advisory Commission (MedPAC), and IPAB has been called "MedPAC on steroids." The board membership consists of 15 full-time members appointed by the president and confirmed by the Senate, with options for recess appointments. The IPAB statute sets target growth rates for Medicare spending. The applicable percent for maximum savings appears to be 0.5% for year 2015, 1% for 2016, 1.25% for 2017, and 1.5% for 2018 and later. The IPAB Medicare proposal process involves
Maximizing Conservation and Production with Intensive Forest Management: It's All About Location.
Tittler, Rebecca; Filotas, Élise; Kroese, Jasmin; Messier, Christian
2015-11-01
Functional zoning has been suggested as a way to balance the needs of a viable forest industry with those of healthy ecosystems. Under this system, part of the forest is set aside for protected areas, counterbalanced by intensive and extensive management of the rest of the forest. Studies indicate this may provide adequate timber while minimizing road construction and favoring the development of large mature and old stands. However, it is unclear how the spatial arrangement of intensive management areas may affect the success of this zoning. Should these areas be agglomerated or dispersed throughout the forest landscape? Should managers prioritize (a) proximity to existing roads, (b) distance from protected areas, or (c) site-specific productivity? We use a spatially explicit landscape simulation model to examine the effects of different spatial scenarios on landscape structure, connectivity for native forest wildlife, stand diversity, harvest volume, and road construction: (1) random placement of intensive management areas, and (2-8) all possible combinations of rules (a)-(c). Results favor the agglomeration of intensive management areas. For most wildlife species, connectivity was the highest when intensive management was far from the protected areas. This scenario also resulted in relatively high harvest volumes. Maximizing distance of intensive management areas from protected areas may therefore be the best way to maximize the benefits of intensive management areas while minimizing their potentially negative effects on forest structure and biodiversity. PMID:26076893
Maximizing Information Diffusion in the Cyber-physical Integrated Network †
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-01-01
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks. PMID:26569254
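To make the trade-off between set size and link weight concrete, here is a minimal centralized greedy sketch of weighted dominating-set selection. It is not the paper's distributed DMPID algorithm (and it does not enforce connectivity of the selected set); it only illustrates scoring candidates by coverage plus the diffusion probability of the covering links. All names are illustrative.

```python
def greedy_weighted_dominating_set(adj, weight):
    """Greedy dominating-set selection favoring nodes whose links to
    still-uncovered neighbors carry high diffusion probability.
    `adj`: {node: set(neighbors)}; `weight`: {(u, v): probability}.
    Illustrative sketch only, not the DMPID algorithm."""
    uncovered = set(adj)
    selected = []
    while uncovered:
        def gain(u):
            covered = ({u} | adj[u]) & uncovered
            # reward both coverage and the strength of the covering links
            return len(covered) + sum(weight.get((u, v), weight.get((v, u), 0))
                                      for v in covered if v != u)
        best = max(adj, key=gain)
        selected.append(best)
        uncovered -= {best} | adj[best]
    return selected

# Star graph: the hub dominates everything, so one node suffices.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
w = {(0, 1): 0.9, (0, 2): 0.8, (0, 3): 0.7}
print(greedy_weighted_dominating_set(adj, w))  # -> [0]
```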
CSMMI: class-specific maximization of mutual information for action and gesture recognition.
Wan, Jun; Athitsos, Vassilis; Jangyodsuk, Pat; Escalante, Hugo Jair; Ruan, Qiuqi; Guyon, Isabelle
2014-07-01
In this paper, we propose a novel approach called class-specific maximization of mutual information (CSMMI) using a submodular method, which aims at learning a compact and discriminative dictionary for each class. Unlike traditional dictionary-based algorithms, which typically learn a shared dictionary for all of the classes, we unify the intraclass and interclass mutual information (MI) into a single objective function to optimize a class-specific dictionary. The objective function has two aims: 1) maximizing the MI between dictionary items within a specific class (intrinsic structure) and 2) minimizing the MI between the dictionary items in a given class and those of the other classes (extrinsic structure). We significantly reduce the computational complexity of CSMMI by introducing a novel submodular method, which is one of the important contributions of this paper. This paper also contributes a state-of-the-art end-to-end system for action and gesture recognition incorporating CSMMI, with feature extraction, learning an initial dictionary for each class by sparse coding, CSMMI via submodularity, and classification based on reconstruction errors. We performed extensive experiments on synthetic data and eight benchmark data sets. Our experimental results show that CSMMI outperforms shared dictionary methods and that our end-to-end system is competitive with other state-of-the-art approaches. PMID:24983106
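The greedy skeleton underlying submodular dictionary selection can be sketched generically. The snippet below is an illustrative skeleton assuming a caller-supplied set score (in the CSMMI spirit, the score would reward intraclass MI and penalize interclass MI); it is not the paper's algorithm, and the names are mine.

```python
def greedy_submodular_select(items, score, k):
    """Greedy selection of up to k items for an (approximately) submodular
    set score. `score` is any callable on a tuple of items; stops early
    when no candidate adds marginal gain. Illustrative skeleton only."""
    selected = []
    for _ in range(k):
        remaining = [it for it in items if it not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda it: score(tuple(selected) + (it,)))
        if score(tuple(selected) + (best,)) <= score(tuple(selected)):
            break  # no marginal gain left
        selected.append(best)
    return selected
```

For a submodular score, this greedy procedure carries the classical (1 − 1/e) approximation guarantee, which is what makes submodular formulations computationally attractive.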
Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization
Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu
2016-01-01
In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of the expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications. PMID:26771830
Time-Course of Neuromuscular Changes during and after Maximal Eccentric Contractions
Doguet, Valentin; Jubeau, Marc; Dorel, Sylvain; Couturier, Antoine; Lacourpaille, Lilian; Guével, Arnaud; Guilhem, Gaël
2016-01-01
This study tested the relationship between the magnitude of muscle damage and both central and peripheral modulations during and after eccentric contractions of plantar flexors. Eleven participants performed 10 sets of 30 maximal eccentric contractions of the plantar flexors at 45°·s(-1). Maximal voluntary torque, evoked torque (peripheral component) and voluntary activation (central component) were assessed before, during, immediately after (POST) and 48 h after (48 h) the eccentric exercise. Voluntary eccentric torque progressively decreased (up to −36%) concomitantly with a significant alteration of evoked torque (up to −34%) and voluntary activation (up to −13%) during the exercise. Voluntary isometric torque (−48 ± 7%), evoked torque (−41 ± 14%) and voluntary activation (−13 ± 11%) decreased at POST, but only voluntary isometric torque (−19 ± 6%) and evoked torque (−10 ± 18%) remained depressed at 48 h. Neither changes in voluntary activation nor evoked torque during the exercise were related to the magnitude of muscle damage markers, but the evoked torque decrement at 48 h was significantly correlated with the changes in voluntary activation (r = −0.71) and evoked torque (r = 0.77) at POST. Our findings show that neuromuscular responses observed during eccentric contractions were not associated with muscle damage. Conversely, central and peripheral impairments observed immediately after the exercise reflect the long-lasting reduction in force-generating capacity. PMID:27148075
Dieli-Conwright, Christina M; Spektor, Tanya M; Rice, Judd C; Sattler, Fred R; Schroeder, E Todd
2012-05-01
We sought to evaluate baseline mRNA values and changes in gene expression of myostatin-related factors in postmenopausal women taking hormone therapy (HT) and not taking HT after eccentric exercise. Fourteen postmenopausal women participated including 6 controls not using HT (59 ± 4 years, 63 ± 17 kg) and 8 women using HT (59 ± 4 years, 89 ± 24 kg). The participants performed 10 sets of 10 maximal eccentric repetitions of single-leg extension on a dynamometer. Muscle biopsies from the vastus lateralis were obtained from the exercised leg at baseline and 4 hours after the exercise bout. Gene expression was determined using reverse transcriptase polymerase chain reaction for myostatin, activin receptor IIb (ActRIIb), follistatin, follistatin-related gene (FLRG), follistatin-like-3 (FSTL3), and GDF serum-associated protein-1 (GASP-1). In response to the exercise bout, myostatin and ActRIIb significantly decreased (p < 0.05), and follistatin, FLRG, FSTL3, and GASP-1 significantly increased in both groups (p < 0.05). Significantly greater changes in gene expression of all genes occurred in the HT group than in the control group after the acute eccentric exercise bout (p < 0.05). These data suggest that postmenopausal women using HT express greater myostatin-related gene expression, which may reflect a mechanism by which estrogen influences the preservation of muscle mass. Further, postmenopausal women using HT experienced a profoundly greater myostatin-related response to maximal eccentric exercise. PMID:22395277
Reliability of heart rate measures during walking before and after running maximal efforts.
Boullosa, D A; Barros, E S; del Rosso, S; Nakamura, F Y; Leicht, A S
2014-11-01
Previous studies on HR recovery (HRR) measures have utilized the supine and the seated postures. However, the most common recovery mode in sport and clinical settings after running exercise is active walking. The aim of the current study was to examine the reliability of HR measures during walking (4 km · h(-1)) before and following a maximal test. Twelve endurance athletes performed an incremental running test on 2 days separated by 48 h. Absolute (coefficient of variation, CV, %) and relative [Intraclass correlation coefficient, (ICC)] reliability of time domain and non-linear measures of HR variability (HRV) from 3 min recordings, and HRR parameters over 5 min were assessed. Moderate to very high reliability was identified for most HRV indices with short-term components of time domain and non-linear HRV measures demonstrating the greatest reliability before (CV: 12-22%; ICC: 0.73-0.92) and after exercise (CV: 14-32%; ICC: 0.78-0.91). Most HRR indices and parameters of HRR kinetics demonstrated high to very high reliability with HR values at a given point and the asymptotic value of HR being the most reliable (CV: 2.5-10.6%; ICC: 0.81-0.97). These findings demonstrate these measures as reliable tools for the assessment of autonomic control of HR during walking before and after maximal efforts. PMID:24841837
Conditional entropy maximization for PET image reconstruction using adaptive mesh model.
Zhu, Hongqing; Shu, Huazhong; Zhou, Jian; Dai, Xiubin; Luo, Limin
2007-04-01
Iterative image reconstruction algorithms have been widely used in the field of positron emission tomography (PET). However, such algorithms are sensitive to noise artifacts, so the reconstruction begins to degrade when the number of iterations is high. In this paper, we propose a new algorithm to reconstruct an image from PET emission projection data by using conditional entropy maximization and an adaptive mesh model. In a traditional tomography reconstruction method, the reconstructed image is computed directly in the pixel domain. Unlike such methods, the proposed approach estimates the nodal values from the observed projection data in a mesh domain. In our method, the initial Delaunay triangulation mesh is generated from a set of randomly selected pixel points and is then modified according to the pixel intensity values of the estimated image at each iteration step, in which conditional entropy maximization is used. The advantage of using the adaptive mesh model for image reconstruction is that it provides a natural, spatially adaptive smoothness mechanism. In experiments using synthetic and clinical data, the proposed algorithm is found to be more robust to noise than the common pixel-based MLEM algorithm and mesh-based MLEM with a fixed mesh structure. PMID:17368841
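For reference, the pixel-based MLEM baseline the authors compare against has a standard multiplicative update, x ← x / (Aᵀ1) · Aᵀ(y / Ax). Below is a minimal numpy sketch of that baseline, assuming a dense system matrix A (detectors × pixels) and sinogram y; the paper's mesh-domain method is not shown.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Classical pixel-based MLEM for system matrix A and sinogram y:
        x <- x / (A^T 1) * A^T (y / (A x))
    Illustrative baseline sketch; not the paper's mesh-based method."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        ratio = y / np.maximum(proj, eps)  # measured / estimated
        x = x / np.maximum(sens, eps) * (A.T @ ratio)
    return x
```

The multiplicative form keeps the estimate nonnegative, but with no regularization the iterates progressively fit measurement noise, which is the degradation at high iteration counts that the abstract refers to.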
Grice's cooperative principle in the psychoanalytic setting.
Ephratt, Michal
2014-12-01
Grice's "cooperative principle," including conversational implicatures and maxims, is commonplace in current pragmatics (a subfield of linguistics) and is generally applied in conversational analysis. The author examines the unique contribution of Grice's principle to understanding the psychotherapeutic setting and its discourse. Such an investigation is called for chiefly because of the central role of speech in psychoanalytic practice (the "talking cure"). Symptoms and transference, which are characterized as forms of expression that are fundamentally deceptive, must be equivocal and indirect, and must breach all four of Grice's categories of maxims: Quality (be truthful), Relation (be relevant), Manner (be clear), and Quantity (be appropriately informative). Therapeutic practice, according to Freud's "fundamental rule of psychoanalysis," encourages the parties (analysand and analyst) to breach each and every one of Grice's maxims. Using case reports drawn from the literature, the author shows that these breaches are essential for therapeutic progress. They serve as a unique and important ground for revealing inner (psychic) contents and for demarcating the real self from the illusive self, which in turn constitutes leverage for integrating these contents with the self. PMID:25490077
ERIC Educational Resources Information Center
Coleman, Jennifer
2005-01-01
The impact that visuals can have on the minds of children while reading in the library is discussed. It is thus important to provide young readers with a visual feast as they begin to choose reading materials to explore independently.
Independent Schools: Landscape and Learnings.
ERIC Educational Resources Information Center
Oates, William A.
1981-01-01
Examines American independent schools (parochial, southern segregated, and private institutions) in terms of their funding, expenditures, changing enrollment patterns, teacher-student ratios, and societal functions. Journal available from Daedalus Subscription Department, 1172 Commonwealth Ave., Boston, MA 02132. (AM)
Whatever Happened to Independent Inventors?
ERIC Educational Resources Information Center
Douglas, John H.
1976-01-01
Discusses the increasing problems facing the private innovator, which may seriously affect the progress of American technology. Statistics show a decrease in patents issued to independent inventors. Information concerning patents, suggested cautions, and useful publications is cited. (Author/EB)
Technology for Independent Living: Sourcebook.
ERIC Educational Resources Information Center
Enders, Alexandra, Ed.
This sourcebook provides information for the practical implementation of independent living technology in the everyday rehabilitation process. "Information Services and Resources" lists databases, clearinghouses, networks, research and development programs, toll-free telephone numbers, consumer protection caveats, selected publications, and…
ERIC Educational Resources Information Center
Baker, Mark; Beltran, Jane; Buell, Jason; Conrey, Brian; Davis, Tom; Donaldson, Brianna; Detorre-Ozeki, Jeanne; Dibble, Leila; Freeman, Tom; Hammie, Robert; Montgomery, Julie; Pickford, Avery; Wong, Justine
2013-01-01
Sets in the game "Set" are lines in a certain four-dimensional space. Here we introduce planes into the game, leading to interesting mathematical questions, some of which we solve, and to a wonderful variation on the game "Set," in which every tableau of nine cards must contain at least one configuration for a player to pick up.
Independent functions in rearrangement invariant spaces and the Kruglov property
NASA Astrophysics Data System (ADS)
Astashkin, S. V.
2008-08-01
Let X be a separable or maximal rearrangement invariant space on [0,1]. It is shown that the inequality $\bigl\|\sum_{k=1}^\infty f_k\bigr\|_X \le C\,\bigl\|\bigl(\sum_{k=1}^\infty f_k^2\bigr)^{1/2}\bigr\|_X$ holds for an arbitrary sequence of independent functions $\{f_k\}_{k=1}^\infty \subset X$ with $\int_0^1 f_k(t)\,dt = 0$, $k=1,2,\dots$, if and only if X has the Kruglov property. As a consequence, it is proved that the same property is necessary and sufficient for a version of Maurey's well-known inequality for vector-valued Rademacher series with independent coefficients to hold in X. Bibliography: 24 titles.
A Geometric-Structure Theory for Maximally Random Jammed Packings
NASA Astrophysics Data System (ADS)
Tian, Jianxiang; Xu, Yaopengxiao; Jiao, Yang; Torquato, Salvatore
2015-11-01
Maximally random jammed (MRJ) particle packings can be viewed as prototypical glasses in that they are maximally disordered while simultaneously being mechanically rigid. The prediction of the MRJ packing density ϕMRJ, among other packing properties of frictionless particles, still poses many theoretical challenges, even for congruent spheres or disks. Using the geometric-structure approach, we derive for the first time a highly accurate formula for MRJ densities for a very wide class of two-dimensional frictionless packings, namely, binary convex superdisks, with shapes that continuously interpolate between circles and squares. By incorporating specific attributes of MRJ states and a novel organizing principle, our formula yields predictions of ϕMRJ that are in excellent agreement with corresponding computer-simulation estimates in almost the entire α-x plane with semi-axis ratio α and small-particle relative number concentration x. Importantly, in the monodisperse circle limit, the predicted ϕMRJ = 0.834 agrees very well with the very recently numerically discovered MRJ density of 0.827, which distinguishes it from high-density “random-close packing” polycrystalline states and hence provides a stringent test on the theory. Similarly, for non-circular monodisperse superdisks, we predict MRJ states with densities that are appreciably smaller than is conventionally thought to be achievable by standard packing protocols.
Numerical analysis of maximal bat performance in baseball.
Nicholls, Rochelle L; Miller, Karol; Elliott, Bruce C
2006-01-01
Metal baseball bats have been experimentally demonstrated to produce higher ball exit velocity (BEV) than wooden bats. In the United States, all bats are subject to BEV tests using hitting machines that rotate the bat in a horizontal plane. In this paper, a model of bat-ball impact was developed based on 3-D translational and rotational kinematics of a swing performed by high-level players. The model was designed to simulate the maximal performance of specific models of a wooden bat and a metal bat when swung by a player, and included material properties and kinematics specific to each bat. Impact dynamics were quantified using the finite element method (ANSYS/LSDYNA, version 6.1). Maximum BEV from both a metal (61.5 m/s) and a wooden (50.9 m/s) bat exceeded the 43.1 m/s threshold by which bats are certified as appropriate for commercial sale. The lower BEV from the wooden bat was attributed to a lower pre-impact bat linear velocity, and a more oblique impact that resulted in a greater proportion of BEV being lost to lateral and vertical motion. The results demonstrate the importance of factoring bat linear velocity and spatial orientation into tests of maximal bat performance, and have implications for the design of metal baseball bats. PMID:15878593
Do Brain Networks Evolve by Maximizing Their Information Flow Capacity?
Antonopoulos, Chris G.; Srivastava, Shambhavi; Pinto, Sandro E. de S.; Baptista, Murilo S.
2015-01-01
We propose a working hypothesis, supported by numerical simulations, that brain networks evolve based on the principle of maximization of their internal information flow capacity. We find that the synchronous behavior and information flow capacity of the evolved networks reproduce well the same behaviors observed in the brain dynamical networks of Caenorhabditis elegans and humans, i.e., networks of Hindmarsh-Rose neurons with graphs given by these brain networks. We make a strong case to verify our hypothesis by showing that the neural networks with the closest graph distance to the brain networks of Caenorhabditis elegans and humans are the Hindmarsh-Rose neural networks evolved with coupling strengths that maximize information flow capacity. Surprisingly, we find that global neural synchronization levels decrease during brain evolution, reflecting an underlying global non-Hebbian-like evolution process, which is driven by non-Hebbian-like learning behaviors for some of the clusters during evolution, and by Hebbian-like learning rules for clusters where neurons increase their synchronization. PMID:26317592
On the configuration of supercapacitors for maximizing electrochemical performance.
Zhang, Jintao; Zhao, X S
2012-05-01
Supercapacitors, which are attracting rapidly growing interest from both academia and industry, are important energy-storage devices for acquiring sustainable energy. Recent years have seen a number of significant breakthroughs in the research and development of supercapacitors. The emergence of innovative electrode materials (e.g., graphene) has clearly provided great opportunities for advancing the science in the field of electrochemical energy storage. Conversely, smart configurations of electrode materials and new designs of supercapacitor devices have, in many cases, boosted the electrochemical performance of the materials. We attempt to summarize recent research progress towards the design and configuration of electrode materials to maximize supercapacitor performance in terms of energy density, power density, and cycle stability. With a brief description of the structure, energy-storage mechanism, and electrode configuration of supercapacitor devices, the design and configuration of symmetric supercapacitors are discussed, followed by that of asymmetric and hybrid supercapacitors. Emphasis is placed on the rational design and configuration of supercapacitor electrodes to maximize the electrochemical performance of the device. PMID:22550045
Maximizing Photoluminescence Extraction in Silicon Photonic Crystal Slabs.
Mahdavi, Ali; Sarau, George; Xavier, Jolly; Paraïso, Taofiq K; Christiansen, Silke; Vollmer, Frank
2016-01-01
Photonic crystal modes can be tailored to increase light-matter interactions and light extraction efficiencies. These PhC properties have been explored to improve the device performance of LEDs, solar cells, and precision biosensors. Tuning the extended band structure of a 2D PhC provides a means of increasing light extraction throughout a planar device. This requires careful design and fabrication of PhC with a desirable mode structure overlapping the spectral region of emission. We show a method for predicting and maximizing light extraction from 2D photonic crystal slabs, exemplified by maximizing silicon photoluminescence (PL). By systematically varying the lattice constant and filling factor, we predict the increases in PL intensity from band structure calculations and confirm the predictions in micro-PL experiments. With near-optimal PhC design parameters, we demonstrate a more than 500-fold increase in PL intensity, measured near the band edge of silicon at room temperature, an enhancement an order of magnitude larger than previously reported. PMID:27113674
Maximizing the efficiency of a flexible propulsor using experimental optimization
NASA Astrophysics Data System (ADS)
Quinn, Daniel; Lauder, George; Smits, Alexander
2014-11-01
Experimental gradient-based optimization is used to maximize the propulsive efficiency of a heaving and pitching flexible panel. Optimum and near-optimum conditions are studied via direct force measurements and Particle Image Velocimetry (PIV). The net thrust and power are found to scale predictably with the frequency and amplitude of the leading edge, but the efficiency shows a complex multimodal response. Optimum pitch and heave motions are found to produce nearly twice the efficiencies of optimum heave-only motions. Efficiency is globally optimized when (1) the Strouhal number is within an optimal range that varies weakly with amplitude and boundary conditions; (2) the panel is actuated at a resonant frequency of the fluid-propulsor system; (3) heave amplitude is tuned such that trailing edge amplitude is maximized while flow along the body remains attached; and (4) the maximum pitch angle and phase lag are chosen so that the effective angle of attack is minimized. This work was supported by the Office of Naval Research under MURI Grant Number N00014-08-1-0642 (Program Director Dr. Bob Brizzolara), and the National Science Foundation under Grant DBI 1062052 (PI Lisa Fauci) and Grant EFRI-0938043 (PI George Lauder).
Strategies for maximizing clinical effectiveness in the treatment of schizophrenia.
Tandon, Rajiv; Targum, Steven D; Nasrallah, Henry A; Ross, Ruth
2006-11-01
The ultimate clinical objective in the treatment of schizophrenia is to enable affected individuals to lead maximally productive and personally meaningful lives. As with other chronic diseases that lack a definitive cure, the individual's service/recovery plan must include treatment interventions directed towards decreasing manifestations of the illness, rehabilitative services directed towards enhancing adaptive skills, and social support mobilization aimed at optimizing function and quality of life. In this review, we provide a conceptual framework for considering approaches for maximizing the effectiveness of the array of treatments and other services towards promoting recovery of persons with schizophrenia. We discuss pharmacological, psychological, and social strategies that decrease the burden of the disease of schizophrenia on affected individuals and their families while adding the least possible burden of treatment. In view of the multitude of treatments necessary to optimize outcomes for individuals with schizophrenia, effective coordination of these services is essential. In addition to providing best possible clinical assessment and pharmacological treatment, the psychiatrist must function as an effective leader of the treatment team. To do so, however, the psychiatrist must be knowledgeable about the range of available services, must have skills in clinical-administrative leadership, and must accept the responsibility of coordinating the planning and delivery of this multidimensional array of treatments and services. Finally, the effectiveness of providing optimal individualized treatment/rehabilitation is best gauged by measuring progress on multiple effectiveness domains. Approaches for efficient and reliable assessment are discussed. PMID:17122696
Critical behavior of large maximally informative neural populations
NASA Astrophysics Data System (ADS)
Berkowitz, John; Sharpee, Tatyana
We consider maximally informative encoding of scalar signals by neural populations. In a small time window, neural responses are binary, with spiking probability that follows a sigmoidal tuning curve. The width of the tuning curve represents effective noise in neural transmission. Previous analyses of this problem for relatively small numbers of neurons with identical noise parameters indicated the presence of multiple bifurcations that occurred with decreasing noise value. For very high noise values, maximal information is achieved when all neurons have the same threshold values. With decreasing noise, the threshold values split into two or more groups via a series of bifurcations, until finally each neuron has a different threshold. Analyzing this problem in the large N limit, we found instead that there is a single phase transition from redundant coding to coding based on distributed thresholds. The order parameter of this transition is the threshold standard deviation across the population; differences in noise parameter from the mean are analogous to local magnetic fields. Near the bifurcation point, information transmitted follows a Landau expansion. We use this expansion to quantify the scaling of the order parameter with noise and effective magnetic field. NSF CAREER Award IIS-1254123, NSF Ideas Lab Collaborative Research IOS 1556388.
Modularity-maximizing graph communities via mathematical programming
NASA Astrophysics Data System (ADS)
Agarwal, G.; Kempe, D.
2008-12-01
In many networks, it is of great interest to identify communities, unusually densely knit groups of individuals. Such communities often shed light on the function of the networks or underlying properties of the individuals. Recently, Newman suggested modularity as a natural measure of the quality of a network partitioning into communities. Since then, various algorithms have been proposed for (approximately) maximizing the modularity of the resulting partition. In this paper, we introduce the technique of rounding mathematical programs to the problem of modularity maximization, presenting two novel algorithms. More specifically, the algorithms round solutions to linear and vector programs. Importantly, the linear programming algorithm comes with an a posteriori approximation guarantee: by comparing the solution quality to the fractional solution of the linear program, a bound on the available “room for improvement” can be obtained. The vector programming algorithm provides a similar bound for the best partition into two communities. We evaluate both algorithms using experiments on several standard test cases for network partitioning algorithms, and find that they perform comparably or better than past algorithms, while being more efficient than exhaustive techniques.
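As background to the abstract above, Newman's modularity for an undirected graph can be computed directly from a partition as Q = Σ_c [e_c/m − (d_c/2m)²], where e_c is the number of intra-community edges, d_c the total degree in community c, and m the edge count. The sketch below is a generic illustration (not the authors' LP-rounding code); the toy graph is two triangles joined by a bridge:

```python
def modularity(edges, partition):
    """Newman modularity Q for an undirected simple graph.

    edges: list of (u, v) pairs; partition: dict node -> community id.
    """
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for c in set(partition.values()):
        nodes = {n for n, cc in partition.items() if cc == c}
        e_c = sum(1 for u, v in edges if u in nodes and v in nodes)  # intra-community edges
        d_c = sum(deg[n] for n in nodes)                             # total degree in c
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, part)  # 5/14, about 0.357
```

The natural two-community split scores Q = 5/14; the algorithms in the paper search for partitions maximizing this quantity.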
Entropic anomaly and maximal efficiency of microscopic heat engines.
Bo, Stefano; Celani, Antonio
2013-05-01
The efficiency of microscopic heat engines in a thermally heterogeneous environment is considered. We show that, as a consequence of the recently discovered entropic anomaly, quasistatic engines, whose efficiency is maximal in a fluid at uniform temperature, in fact have vanishing efficiency in the presence of temperature gradients. For slow cycles the efficiency falls off as the inverse of the period. The maximum efficiency is reached at a finite value of the cycle period that is inversely proportional to the square root of the gradient intensity. The relative loss in maximal efficiency with respect to the thermally homogeneous case grows as the square root of the gradient. As an illustration of these general results, we construct an explicit, analytically solvable example of a Carnot stochastic engine. In this thought experiment, a Brownian particle is confined by a harmonic trap and immersed in a fluid with a linear temperature profile. This example may serve as a template for the design of real experiments in which the effect of the entropic anomaly can be measured. PMID:23767467
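The scaling statements in this abstract can be collected compactly as follows. This is only a paraphrase of the abstract's claims, with τ the cycle period, g the temperature-gradient intensity, and system-dependent constants left unspecified:

```latex
\eta(\tau) \sim \frac{C}{\tau} \quad (\tau \to \infty),
\qquad
\tau^{*} \propto g^{-1/2},
\qquad
\frac{\eta_{\mathrm{hom}} - \eta(\tau^{*})}{\eta_{\mathrm{hom}}} \propto \sqrt{g},
```

where η_hom denotes the maximal efficiency attainable in the thermally homogeneous case and τ* the period at which efficiency peaks.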
Optimal Thresholding of Classifiers to Maximize F1 Measure
Lipton, Zachary C.; Elkan, Charles; Naryanaswamy, Balakrishnan
2015-01-01
This paper provides new insight into maximizing F1 measures in the contexts of binary and multilabel classification. The F1 measure, the harmonic mean of precision and recall, is widely used to evaluate the success of a binary classifier when one class is rare. Micro-average, macro-average, and per-instance average F1 measures are used in multilabel classification. For any classifier that produces a real-valued output, we derive the relationship between the best achievable F1 value and the decision-making threshold that achieves this optimum. As a special case, if the classifier outputs are well-calibrated conditional probabilities, then the optimal threshold is half the optimal F1 value. As another special case, if the classifier is completely uninformative, then the optimal behavior is to classify all examples as positive. When the actual prevalence of positive examples is low, this behavior can be undesirable. As a case study, we discuss the results, which can be surprising, of maximizing F1 when predicting 26,853 labels for Medline documents. PMID:26023687
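The threshold search the paper analyzes can be implemented as a single sweep over the sorted scores. This is a generic sketch (the function name and toy data are illustrative, not from the paper); the docstring restates the paper's calibrated-probability result:

```python
import numpy as np

def best_f1_threshold(scores, labels):
    """Return (threshold, F1) maximizing F1 over cuts 'score >= threshold'.

    For well-calibrated probability outputs, the paper shows the optimal
    threshold equals half the optimal F1 value.
    """
    order = np.argsort(-scores)
    s, y = scores[order], labels[order]
    tp = np.cumsum(y)          # true positives when cutting after each item
    fp = np.cumsum(1 - y)      # false positives at the same cuts
    fn = y.sum() - tp          # positives left below each cut
    f1 = 2 * tp / (2 * tp + fp + fn)
    k = int(np.argmax(f1))
    return float(s[k]), float(f1[k])

scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1])
labels = np.array([1, 1, 0, 1, 0])
thr, f1 = best_f1_threshold(scores, labels)  # thr = 0.2, F1 = 6/7
```

Only the n distinct scores need to be tried as thresholds, since F1 changes only when a prediction flips, so the sweep costs O(n log n) for the sort.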
Maximizing mutagenesis with solubilized CRISPR-Cas9 ribonucleoprotein complexes.
Burger, Alexa; Lindsay, Helen; Felker, Anastasia; Hess, Christopher; Anders, Carolin; Chiavacci, Elena; Zaugg, Jonas; Weber, Lukas M; Catena, Raul; Jinek, Martin; Robinson, Mark D; Mosimann, Christian
2016-06-01
CRISPR-Cas9 enables efficient sequence-specific mutagenesis for creating somatic or germline mutants of model organisms. Key constraints in vivo remain the expression and delivery of active Cas9-sgRNA ribonucleoprotein complexes (RNPs) with minimal toxicity, variable mutagenesis efficiencies depending on targeting sequence, and high mutation mosaicism. Here, we apply in vitro assembled, fluorescent Cas9-sgRNA RNPs in solubilizing salt solution to achieve maximal mutagenesis efficiency in zebrafish embryos. MiSeq-based sequence analysis of targeted loci in individual embryos using CrispRVariants, a customized software tool for mutagenesis quantification and visualization, reveals efficient bi-allelic mutagenesis that reaches saturation at several tested gene loci. Such virtually complete mutagenesis exposes loss-of-function phenotypes for candidate genes in somatic mutant embryos for subsequent generation of stable germline mutants. We further show that targeting of non-coding elements in gene regulatory regions using saturating mutagenesis uncovers functional control elements in transgenic reporters and endogenous genes in injected embryos. Our results establish that optimally solubilized, in vitro assembled fluorescent Cas9-sgRNA RNPs provide a reproducible reagent for direct and scalable loss-of-function studies and applications beyond zebrafish experiments that require maximal DNA cutting efficiency in vivo. PMID:27130213
Entangled states close to the maximally mixed state
Hildebrand, Roland
2007-06-15
This paper deals with the radius of the largest ball of separable mixed states around the maximally mixed state for multiqubit systems. This radius determines how close entangled states can be to the maximally mixed state. In Aubrun and Szarek (e-print arXiv:quant-ph/0503221) an upper bound on the radius was given, while in Gurvits and Barnum (e-print arXiv:quant-ph/0409095) a lower bound was provided. In this paper we improve both the upper and the lower bound, bringing the ratio of these bounds down to a constant c = √(34/27) ≈ 1.122, as opposed to a term of order √(m log m) for the best bounds known previously, where m is the number of qubits in the system. We construct concrete examples of separable states on the boundary to entanglement which realize the upper bounds. As a by-product, we compute the radii of the largest balls that fit into the projective tensor product of three and four unit balls in R^3 and in the projective tensor product of an arbitrary number of unit balls in R^n for n = 2, 4, and 8.
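The improved bound ratio quoted in the abstract is easy to check numerically:

```python
import math

# Ratio of the improved upper bound to the improved lower bound.
c = math.sqrt(34 / 27)
print(round(c, 3))  # 1.122
```

A constant ratio means the gap between the bounds no longer grows with the number of qubits m, unlike the previous √(m log m) gap.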
A Geometric-Structure Theory for Maximally Random Jammed Packings
Tian, Jianxiang; Xu, Yaopengxiao; Jiao, Yang; Torquato, Salvatore
2015-01-01
Maximally random jammed (MRJ) particle packings can be viewed as prototypical glasses in that they are maximally disordered while simultaneously being mechanically rigid. The prediction of the MRJ packing density ϕMRJ, among other packing properties of frictionless particles, still poses many theoretical challenges, even for congruent spheres or disks. Using the geometric-structure approach, we derive for the first time a highly accurate formula for MRJ densities for a very wide class of two-dimensional frictionless packings, namely, binary convex superdisks, with shapes that continuously interpolate between circles and squares. By incorporating specific attributes of MRJ states and a novel organizing principle, our formula yields predictions of ϕMRJ that are in excellent agreement with corresponding computer-simulation estimates in almost the entire α-x plane with semi-axis ratio α and small-particle relative number concentration x. Importantly, in the monodisperse circle limit, the predicted ϕMRJ = 0.834 agrees very well with the very recently numerically discovered MRJ density of 0.827, which distinguishes it from high-density “random-close packing” polycrystalline states and hence provides a stringent test on the theory. Similarly, for non-circular monodisperse superdisks, we predict MRJ states with densities that are appreciably smaller than is conventionally thought to be achievable by standard packing protocols. PMID:26568437
Maximizing cochlear implant patients' performance with advanced speech training procedures.
Fu, Qian-Jie; Galvin, John J
2008-08-01
Advances in implant technology and speech processing have provided great benefit to many cochlear implant patients. However, some patients receive little benefit from the latest technology, even after many years' experience with the device. Moreover, even the best cochlear implant performers have great difficulty understanding speech in background noise, and music perception and appreciation remain major challenges. Recent studies have shown that targeted auditory training can significantly improve cochlear implant patients' speech recognition performance. Such benefits are not only observed in poorly performing patients, but also in good performers under difficult listening conditions (e.g., speech noise, telephone speech, music, etc.). Targeted auditory training has also been shown to enhance performance gains provided by new implant devices and/or speech processing strategies. These studies suggest that cochlear implantation alone may not fully meet the needs of many patients, and that additional auditory rehabilitation may be needed to maximize the benefits of the implant device. Continuing research will aid in the development of efficient and effective training protocols and materials, thereby minimizing the costs (in terms of time, effort and resources) associated with auditory rehabilitation while maximizing the benefits of cochlear implantation for all recipients. PMID:18295992
Managing up to maximize Medicare reimbursement for outpatient care.
Bowden, K
2001-10-01
An example of managing up through participation in a multidisciplinary team tasked with maximizing reimbursement under the Medicare ambulatory patient classification (APC) system is described. Medicare's new system of payment for hospital outpatient services replaces the cost-based reimbursement model of the past with a technical payment based on the outpatient evaluation and management level. Individual institutions are responsible for developing criteria for defining technical visit levels. Managers at the New England Medical Center formed a team to develop these criteria. The team outlined components of the patient visit that qualified as technical costs, such as the use of space at a facility, medical and surgical supplies, and nonphysician professional services. Team members then contacted each of the center's clinics to identify specific services that met these criteria. After formulating the technical visit level criteria, the team determined who would assign the technical visit level, wrote policies and procedures, and trained staff. The APC team also assessed billing procedures, focusing particularly on the accuracy of the charge master and the use of proper codes and billing units for pass-through drugs. The team continues to monitor its results by reviewing payments received from Medicare and auditing high-risk areas. The APC team used the principles of managing up to maximize Medicare reimbursement for outpatient visits. PMID:11592351
Effect of gender on maximal breath-hold time.
Cherouveim, Evgenia D; Botonis, Petros G; Koskolou, Maria D; Geladas, Nickos D
2013-05-01
This study examined the effect of gender on breath-hold time (BHT). Sixteen healthy subjects, eight males (M) and eight females (F), aged 18-30 years, without breath-hold (BH) experience, performed: (a) a pulmonary function test, (b) an incremental cycle ergometer test to exhaustion and (c) a BH protocol, which included eight repeated maximal efforts separated by 2-min intervals on two occasions: without (BHFOI) and with face immersion (BHFI) in cool water (14.8 ± 0.07 °C). Cardiovascular, ventilatory and hematological responses were studied before, during and after BH efforts. Maximal BHT was similar between genders (M: 103.90 ± 25.68 s; F: 104.97 ± 32.71 s, p > 0.05) and unaffected by face immersion (BHFOI: 105.13 ± 28.68 s; BHFI: 103.74 ± 31.19 s, p > 0.05). The aerobic capacity, lung volumes and hematological indexes were higher in males compared to females. BHT was predicted (r² = 0.98, p = 0.005) by aerobic capacity, total lung volume, hematocrit and hemoglobin concentration only in males. It was concluded that despite gender differences in physiological and anthropometrical traits, BH ability was not different between males and females, both not experienced in apneas. PMID:23187428
Using Empirical Data to Set Cutoff Scores.
ERIC Educational Resources Information Center
Hills, John R.
Six experimental approaches to the problems of setting cutoff scores and choosing proper test length are briefly mentioned. Most of these methods share the premise that a test is a random sample of items, from a domain associated with a carefully specified objective. Each item is independent and is scored zero or one, with no provision for…
Multidimensional set switching.
Hahn, Sowon; Andersen, George J; Kramer, Arthur F
2003-06-01
The present study examined the organization of preparatory processes that underlie set switching and, more specifically, switch costs. On each trial, subjects performed one of two perceptual judgment tasks, color or shape discrimination. Subjects also responded with one of two different response sets. The task set and/or the response set switched from one to the other after 2-6 repeated trials. Response set, task set, and double set switches were performed in both blocked and randomized conditions. Subjects performed with short (100-msec) and long (800-msec) preparatory intervals. Task and response set switches had an additive effect on reaction times (RTs) in the blocked condition. Such a pattern of results suggests a serial organization of preparatory processes when the nature of switches is predictable. However, task and response set switches had an underadditive effect on RTs in the random condition when subjects performed with a brief cue-to-target interval. This pattern of results suggests overlapping task and response set preparation. These findings are discussed in terms of strategic control of preparatory processes in set switching. PMID:12921431
NASA Astrophysics Data System (ADS)
Díaz-González, Edgar C.; López-Rentería, Jorge-Antonio; Campos-Cantón, Eric; Aguirre-Hernández, Baltazar
2016-07-01
In this paper, we present families of piecewise linear systems controlled by a continuous piecewise monoparametric control function for the generation of monoparametric families of multi-scroll attractors. We find the maximum range of values that the parameter set can take while preserving the dynamics useful for generating multi-scroll attractors, which we call the maximal robust dynamics interval. This class of dynamical systems is the result of combining two or more unstable "one-spiral" trajectories. We give necessary and sufficient conditions for preserving multi-scroll attractors in terms of a parameter, i.e., a family of multi-scroll attractors is generated by means of a family of switching systems with multiple monoparametric companion matrices. Lastly, we provide an example to show how the developed theory works.
Self-assisted complete maximally hyperentangled state analysis via the cross-Kerr nonlinearity
NASA Astrophysics Data System (ADS)
Li, Xi-Han; Ghose, Shohini
2016-02-01
We present two complete maximally hyperentangled state analysis protocols for photons entangled in the polarization and spatial-mode degrees of freedom. The first protocol is a hyperentangled Bell state analysis scheme for two photons, and the second is a hyperentangled Greenberger-Horne-Zeilinger (GHZ) state analysis scheme for three photons. In each scheme, a set of mutually orthogonal hyperentangled basis states is completely and deterministically discriminated with the aid of cross-Kerr nonlinearities and linear optics. We also generalize the schemes to unambiguously analyze the N-photon hyperentangled GHZ state. Compared with previous protocols, our schemes greatly simplify the discrimination process and reduce the requirements on nonlinearities by using the measured spatial-mode state to assist in the analysis of the polarization state. These advantages make our schemes useful for practical applications in long-distance high-capacity quantum communication.
Silva, Adão; Gameiro, Atílio
2014-01-01
We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem, and its main goal is to create a set of quasi-orthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be easily computed, in such a way that the null-space projection power can be approximated efficiently. Based on the derived metrics, we present a mathematical model that describes the dynamics of the user selection process and casts the user selection problem as an integer linear program. Numerical results show that our approach is highly efficient at forming groups of quasi-orthogonal users compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion (90%) of the optimum user selection sum rate for a moderate number of active users. PMID:24574928
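The idea of building a set of quasi-orthogonal users can be illustrated with a simple greedy sketch. This is not the paper's algorithm (its channel metrics and ILP formulation are not reproduced here), and the real-valued channels and all names are illustrative assumptions: each step picks the user whose channel retains the most power after projecting out the directions of the already-selected users, approximating the null-space projection power:

```python
import numpy as np

def greedy_quasi_orthogonal(H, n_select):
    """Greedily select quasi-orthogonal user channels.

    H: (n_users, n_tx) matrix of channel row vectors (real-valued here
    for clarity). Returns the indices of the selected users.
    """
    selected = [int(np.argmax(np.linalg.norm(H, axis=1)))]
    basis = [H[selected[0]] / np.linalg.norm(H[selected[0]])]
    while len(selected) < n_select:
        best_u, best_gain, best_res = -1, -1.0, None
        for u in range(H.shape[0]):
            if u in selected:
                continue
            res = H[u].astype(float)
            for b in basis:               # project out selected directions
                res = res - (res @ b) * b
            gain = np.linalg.norm(res)    # approximate null-space power
            if gain > best_gain:
                best_u, best_gain, best_res = u, gain, res
        selected.append(best_u)
        basis.append(best_res / best_gain)
    return selected

# Users 1 and 2 are nearly orthogonal; user 0 is almost parallel to user 1.
H = np.array([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0]])
chosen = greedy_quasi_orthogonal(H, 2)  # selects users 1 and 2
```

Greedy selection of this kind costs O(K · n_users) projections for K selected users, which is the sense in which such schemes avoid exhaustive search over user subsets.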