Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers
ERIC Educational Resources Information Center
Dragojlovic, Veljko
2015-01-01
Preparation of an inexpensive model set from whiteboard markers and either an HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor set for use in lectures.
The prevalence of terraced treescapes in analyses of phylogenetic data sets.
Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J
2018-04-04
The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
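Taxon coverage density, as used above, is straightforward to compute from a taxon-by-gene presence/absence matrix. A minimal sketch (the matrix and variable names are illustrative, not taken from the study's data):

```python
import numpy as np

def taxon_coverage_density(presence):
    """Proportion of taxon-by-gene cells with any data present.

    presence: 2-D boolean array, rows = taxa, columns = genes/loci,
    True where at least one site is sampled for that taxon-gene pair.
    """
    return np.asarray(presence, dtype=bool).mean()

# toy example: 4 taxa x 3 genes with two empty cells -> density 10/12 ~ 0.83
matrix = np.array([[1, 1, 0],
                   [1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=bool)
print(taxon_coverage_density(matrix))
```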
The motion of a charged particle on a Riemannian surface under a non-zero magnetic field
NASA Astrophysics Data System (ADS)
Castilho, Cesar Augusto Rodrigues
In this thesis we study the motion of a charged particle on a Riemannian surface under the influence of a positive magnetic field B. Using Moser's Twist Theorem and ideas from classical perturbation theory we find sufficient conditions to perpetually trap the motion of a particle with a sufficiently large charge in a neighborhood of a level set of the magnetic field. The conditions on the level set of the magnetic field that guarantee the trapping are local and hold near all non-degenerate critical local minima or maxima of B. Using symplectic reduction we apply the results of our work to certain S1-invariant magnetic fields on R3.
The Motion of a Charged Particle on a Riemannian Surface under a Non-Zero Magnetic Field
NASA Astrophysics Data System (ADS)
Castilho, César
2001-03-01
In this paper we study the motion of a charged particle on a Riemannian surface under the influence of a positive magnetic field B. Using Moser's Twist Theorem and ideas from classical perturbation theory we find sufficient conditions to perpetually trap the motion of a particle with a sufficiently large charge in a neighborhood of a level set of the magnetic field. The conditions on the level set of the magnetic field that guarantee the trapping are local and hold near all non-degenerate critical local minima or maxima of B. Using symplectic reduction we apply the results of our work to certain S1-invariant magnetic fields on R3.
Quantum resource theories in the single-shot regime
NASA Astrophysics Data System (ADS)
Gour, Gilad
2017-06-01
One of the main goals of any resource theory such as entanglement, quantum thermodynamics, quantum coherence, and asymmetry, is to find necessary and sufficient conditions that determine whether one resource can be converted to another by the set of free operations. Here we find such conditions for a large class of quantum resource theories which we call affine resource theories. Affine resource theories include the resource theories of athermality, asymmetry, and coherence, but not entanglement. Remarkably, the necessary and sufficient conditions can be expressed as a family of inequalities between resource monotones (quantifiers) that are given in terms of the conditional min-entropy. The set of free operations is taken to be (1) the maximal set (i.e., consists of all resource nongenerating quantum channels) or (2) the self-dual set of free operations (i.e., consists of all resource nongenerating maps for which the dual map is also resource nongenerating). As an example, we apply our results to quantum thermodynamics with Gibbs preserving operations, and several other affine resource theories. Finally, we discuss the applications of these results to resource theories that are not affine and, along the way, provide necessary and sufficient conditions under which a quantum resource theory admits a resource-destroying map.
Chapter 2. Selecting Key Habitat Attributes for Monitoring
Gregory D. Hayward; Lowell H. Suring
2013-01-01
The success of habitat monitoring programs depends, to a large extent, on carefully selecting key habitat attributes to monitor. The challenge of choosing a limited but sufficient set of attributes will differ depending on the objectives of the monitoring program. In some circumstances, such as managing National Forest System lands for threatened and endangered species...
Big Data Analytics for a Smart Green Infrastructure Strategy
NASA Astrophysics Data System (ADS)
Barrile, Vincenzo; Bonfa, Stefano; Bilotta, Giuliana
2017-08-01
As is well known, Big Data is a term for data sets so large or complex that traditional data processing applications are not sufficient to process them. The term “Big Data” often refers to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from data, and rarely to a particular size of data set. This is especially true for the huge amount of Earth Observation data that satellites constantly orbiting the Earth transmit every day.
Melen, Miranda K; Herman, Julie A; Lucas, Jessica; O'Malley, Rachel E; Parker, Ingrid M; Thom, Aaron M; Whittall, Justen B
2016-11-01
Self-incompatibility (SI) in rare plants presents a unique challenge: SI protects plants from inbreeding depression, but requires a sufficient number of mates and xenogamous pollination. Does SI persist in an endangered polyploid? Is pollinator visitation sufficient to ensure reproductive success? Is there evidence of inbreeding/outbreeding depression? We characterized the mating system, primary pollinators, pollen limitation, and inbreeding/outbreeding depression in Erysimum teretifolium to guide conservation efforts. We compared seed production following self-pollination and within- and between-population crosses. Pollen tubes were visualized after self-pollinations and between-population pollinations. Pollen limitation was tested in the field. Pollinator observations were quantified using digital video. Inbreeding/outbreeding depression was assessed in progeny from self- and outcross pollinations at early and later developmental stages. Self-pollination reduced seed set by 6.5× and quadrupled reproductive failure compared with outcross pollination. Pollen tubes of some self-pollinations were arrested at the stigmatic surface. Seed-set data indicated strong SI, and fruit-set data suggested partial SI. Pollinator diversity and visitation rates were high, and there was no evidence of pollen limitation. Inbreeding depression (δ) was weak for early developmental stages and strong for later developmental stages, with no evidence of outbreeding depression. The rare hexaploid E. teretifolium is largely self-incompatible and suffers from late-acting inbreeding depression. Reproductive success in natural populations was accomplished through high pollinator visitation rates consistent with a lack of pollen limitation. Future reproductive health for this species will require large population sizes with sufficient mates and a robust pollinator community. © 2016 Melen et al. Published by the Botanical Society of America. This work is licensed under a Creative Commons Attribution License (CC-BY).
Crops and food security--experiences and perspectives from Taiwan.
Huang, Chen-Te; Fu, Tzu-Yu Richard; Chang, Su-San
2009-01-01
Food security is an important issue of concern for all countries around the world. Many factors may cause food insecurity, including increasing demand, shortage of supply, trade conditions, other countries' food policies, lack of money, high food and oil prices, decelerating productivity, speculation, etc. The food self-sufficiency ratio of Taiwan was only 30.6%, weighted by energy, in 2007. Total agricultural and cereal imports have increased significantly due to the expansion of the livestock and fishery industries and improved living standards. The agriculture sector of Taiwan is facing many challenges, such as a low level of food self-sufficiency, aging farmers, large acreage of set-aside farmlands, small-scale farming, soaring fertilizer prices, natural disasters accelerated by climate change, and rapid changes in the world food economy. To cope with these challenges, the present agricultural policy is based on three guidelines: "Healthfulness, Efficiency, and Sustainability." A program entitled "Turning Small Landlords into Large Tenants" was launched to make effective use of idle lands. Facing globalization and the food crisis, Taiwan will secure a stable food supply through revitalization of its set-aside farmlands and international markets, and provide technical assistance to developing countries, in particular for staple food crops.
A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corynen, G.C.
1987-11-01
An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
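The SUBSET step described above reduces a family of cut sets by discarding any set that contains another. The sketch below shows the naive version of that reduction using Python sets; it only illustrates the operation, not the report's algorithm, whose point is to perform this efficiently for very large trees:

```python
def minimize_cut_sets(cut_sets):
    """Keep only minimal cut sets: drop any set that is a superset of another.

    cut_sets: iterable of sets of basic-event identifiers.
    """
    candidates = sorted({frozenset(cs) for cs in cut_sets}, key=len)
    minimal = []
    for cs in candidates:
        if not any(kept <= cs for kept in minimal):  # some kept set is contained in cs
            minimal.append(cs)
    return minimal

# {A} absorbs {A, B}; {B, C} survives
print(minimize_cut_sets([{"A"}, {"A", "B"}, {"B", "C"}]))
```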
Sufficient Forecasting Using Factor Models
Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei
2017-01-01
We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by the principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
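A rough sketch of the two-stage idea: extract factors from the predictor panel by principal components, then run a sliced-inverse-regression step on the factors to obtain predictive indices. This is only a caricature under simplifying assumptions (ordinary PCA, balanced slices); the paper's estimator and its projected-PCA refinement are more involved:

```python
import numpy as np

def extract_factors(X, n_factors):
    """PCA factor estimates for a T x p predictor panel (rows = time periods)."""
    Xc = X - X.mean(axis=0)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return np.sqrt(X.shape[0]) * U[:, :n_factors]         # T x K estimated factors

def sufficient_predictive_indices(F, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression on the factors F (T x K) for the target y."""
    T = F.shape[0]
    L = np.linalg.cholesky(np.atleast_2d(np.cov(F.T)))
    Fz = (F - F.mean(axis=0)) @ np.linalg.inv(L).T        # whitened factors
    M = np.zeros((F.shape[1], F.shape[1]))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Fz[idx].mean(axis=0)
        M += (len(idx) / T) * np.outer(m, m)              # weighted slice means
    _, vecs = np.linalg.eigh(M)
    dirs = vecs[:, ::-1][:, :n_dirs]                      # leading directions
    return Fz @ dirs                                      # sufficient predictive indices
```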
Phenomenology-Based Inverse Scattering for Sensor Information Fusion
2006-09-15
abilities in the past. Rule-based systems and mathematics of logic implied significant similarities between the two: Thoughts, words, and phrases...all are logical statements. The situation has changed, in part due to the fact that logic-rule systems have not been sufficiently powerful to explain...references]. Language mechanisms of our mind include abilities to acquire a large vocabulary, rules of grammar, and to use the finite set of
On sufficient statistics of least-squares superposition of vector sets.
Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M
2015-06-01
The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
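The sufficient statistics in question are just counts, coordinate sums and the cross-term matrix of the two vector sets, all of which add when fragments are concatenated. A generic Kabsch-style sketch of how such statistics can be accumulated, merged and turned into the optimal rotation (illustrative only, not the authors' library; the function and field names are invented):

```python
import numpy as np

def suff_stats(X, Y):
    """Additive statistics for superposing corresponding point sets X, Y (n x 3)."""
    return {"n": len(X), "sx": X.sum(axis=0), "sy": Y.sum(axis=0),
            "xy": X.T @ Y,                                # sum of outer products x_i y_i^T
            "xx": (X * X).sum(), "yy": (Y * Y).sum()}     # needed for the residual

def merge(a, b):
    """Statistics of the union of two fragment pairs: every entry is additive."""
    return {k: a[k] + b[k] for k in a}

def optimal_rotation(s):
    """Least-squares rotation taking X onto Y, computed from the statistics alone."""
    C = s["xy"] - np.outer(s["sx"], s["sy"]) / s["n"]     # centered cross-term matrix
    U, _, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(U @ Vt))                    # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```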
Seidl, Roman; Moser, Corinne; Blumer, Yann
2017-01-01
Many countries have some kind of energy-system transformation either planned or ongoing for various reasons, such as to curb carbon emissions or to compensate for the phasing out of nuclear energy. One important component of these transformations is the overall reduction in energy demand. It is generally acknowledged that the domestic sector represents a large share of total energy consumption in many countries. Increased energy efficiency is one factor that reduces energy demand, but behavioral approaches (known as "sufficiency") and their respective interventions also play important roles. In this paper, we address citizens' heterogeneity regarding both their current behaviors and their willingness to realize their sufficiency potentials, that is, to reduce their energy consumption through behavioral change. We collaborated with three Swiss cities for this study. A survey conducted in the three cities yielded thematic sets of energy-consumption behavior that various groups of participants rated differently. Using this data, we identified four groups of participants with different patterns of both current behaviors and sufficiency potentials. The paper discusses intervention types and addresses citizens' heterogeneity and behaviors from a city-based perspective.
Classification of large-sized hyperspectral imagery using fast machine learning algorithms
NASA Astrophysics Data System (ADS)
Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira
2017-07-01
We present a framework of fast machine learning algorithms in the context of large-sized hyperspectral image classification, from the theoretical to the practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), and the ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. To provide a quantitative analysis, we pay attention to comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as the computational cost and robustness against noise are also discussed.
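Of the classifiers compared, the extreme learning machine is the easiest to sketch: a random hidden layer followed by one least-squares solve for the output weights. The snippet below is a generic illustration for pixel-wise spectral classification, not the authors' implementation; the hidden-layer size and tanh activation are arbitrary choices:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random features + linear output layer."""
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)                    # fixed random feature map

    def fit(self, X, y):
        # X: (n_pixels, n_bands) spectra; y: integer class labels 0..C-1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        targets = np.eye(y.max() + 1)[y]                        # one-hot encoding
        self.beta = np.linalg.pinv(self._hidden(X)) @ targets   # single least-squares solve
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```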
Development of Pulsar Detection Methods for a Galactic Center Search
NASA Astrophysics Data System (ADS)
Thornton, Stephen; Wharton, Robert; Cordes, James; Chatterjee, Shami
2018-01-01
Finding pulsars within the inner parsec of the galactic center would be incredibly beneficial: for pulsars sufficiently close to Sagittarius A*, extremely precise tests of general relativity in the strong field regime could be performed through measurement of post-Keplerian parameters. Binary pulsar systems with sufficiently short orbital periods could provide the same laboratories with which to test existing theories. Fast and efficient methods are needed to parse large sets of time-domain data from different telescopes to search for periodicity in signals and differentiate radio frequency interference (RFI) from pulsar signals. Here we demonstrate several techniques to reduce red noise (low-frequency interference), generate signals from pulsars in binary orbits, and create plots that allow for fast detection of both RFI and pulsars.
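One common first step for the red-noise problem is to subtract a running median from the time series before examining its power spectrum. The sketch below illustrates that idea with NumPy/SciPy on a toy signal; the window length and the toy parameters are placeholders, and a real pulsar search adds dedispersion, harmonic summing and acceleration searches on top of this:

```python
import numpy as np
from scipy.signal import medfilt

def dereddened_power_spectrum(x, dt, kernel=1001):
    """Suppress slow (red-noise) trends with a running median, then return
    frequencies and Fourier power of the detrended series."""
    detrended = x - medfilt(x, kernel_size=kernel)
    power = np.abs(np.fft.rfft(detrended)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs, power

# toy example: weak 5 Hz periodic signal on top of a red-noise drift
rng = np.random.default_rng(1)
t = np.arange(0.0, 30.0, 1e-3)
x = (0.2 * np.sin(2 * np.pi * 5.0 * t)
     + np.cumsum(rng.normal(0, 0.01, t.size))    # random-walk "red" component
     + rng.normal(0, 1.0, t.size))
f, p = dereddened_power_spectrum(x, dt=1e-3)
print(f[1 + np.argmax(p[1:])])                   # strongest non-DC peak, ~5 Hz
```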
Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.
2014-01-01
The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
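For reference, the Maxwell-Garnett mixing rule mentioned at the end gives the effective permittivity of a host (permittivity ε_h) carrying a volume fraction f of small spherical inclusions (permittivity ε_i); the effective refractive index is its square root. A small sketch, with the example numbers chosen arbitrarily:

```python
import numpy as np

def maxwell_garnett_index(m_host, m_incl, f):
    """Effective complex refractive index from the Maxwell-Garnett rule.

    Solves (eps_eff - eps_h)/(eps_eff + 2 eps_h) = f (eps_i - eps_h)/(eps_i + 2 eps_h).
    """
    eps_h, eps_i = m_host ** 2, m_incl ** 2
    eps_eff = eps_h * (eps_i + 2 * eps_h + 2 * f * (eps_i - eps_h)) \
                    / (eps_i + 2 * eps_h - f * (eps_i - eps_h))
    return np.sqrt(eps_eff)

# e.g. a weakly absorbing host with a 10% volume fraction of absorbing inclusions
print(maxwell_garnett_index(1.31 + 0.0j, 1.75 + 0.44j, 0.10))
```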
DataWarrior: an open-source program for chemistry aware data visualization and analysis.
Sander, Thomas; Freyss, Joel; von Korff, Modest; Rufener, Christian
2015-02-23
Drug discovery projects in the pharmaceutical industry accumulate thousands of chemical structures and tens of thousands of data points from a dozen or more biological and pharmacological assays. A sufficient interpretation of the data requires understanding which molecular families are present, which structural motifs correlate with measured properties, and which tiny structural changes cause large property changes. Data visualization and analysis software with sufficient chemical intelligence to support chemists in this task is rare. In an attempt to contribute to filling the gap, we released our in-house developed chemistry-aware data analysis program DataWarrior for free public use. This paper gives an overview of DataWarrior's functionality and architecture. As an example, a new unsupervised, 2-dimensional scaling algorithm is presented, which employs vector-based or nonvector-based descriptors to visualize the chemical or pharmacophore space of even large data sets. DataWarrior uses this method to interactively explore chemical space, activity landscapes, and activity cliffs.
Good, Andrew C; Hermsmeier, Mark A
2007-01-01
Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
Heavy-flavor parton distributions without heavy-flavor matching prescriptions
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Glazov, Alexandre; Mitov, Alexander; Papanastasiou, Andrew S.; Ubiali, Maria
2018-04-01
We show that the well-known obstacle for working with the zero-mass variable flavor number scheme, namely, the omission of O(1) mass power corrections close to the conventional heavy flavor matching point (HFMP) μ_b = m, can be easily overcome. For this it is sufficient to take advantage of the freedom in choosing the position of the HFMP. We demonstrate that by choosing a sufficiently large HFMP, which could be as large as 10 times the mass of the heavy quark, one can achieve the following improvements: 1) above the HFMP the size of missing power corrections O(m) is restricted by the value of μ_b and, therefore, the error associated with their omission can be made negligible; 2) additional prescriptions for the definition of cross-sections are not required; 3) the resummation accuracy is maintained and 4) contrary to the common lore we find that the discontinuity of α_s and PDFs across thresholds leads to improved continuity in predictions for observables. We have considered a large set of proton-proton and electron-proton collider processes, many through NNLO QCD, that demonstrate the broad applicability of our proposal.
Data-driven confounder selection via Markov and Bayesian networks.
Häggström, Jenny
2018-06-01
To unbiasedly estimate a causal effect on an outcome, unconfoundedness is often assumed. If there is sufficient knowledge on the underlying causal structure then existing confounder selection criteria can be used to select subsets of the observed pretreatment covariates, X, sufficient for unconfoundedness, if such subsets exist. Here, estimation of these target subsets is considered when the underlying causal structure is unknown. The proposed method is to model the causal structure by a probabilistic graphical model, for example, a Markov or Bayesian network, estimate this graph from observed data and select the target subsets given the estimated graph. The approach is evaluated by simulation both in a high-dimensional setting where unconfoundedness holds given X and in a setting where unconfoundedness only holds given subsets of X. Several common target subsets are investigated and the selected subsets are compared with respect to accuracy in estimating the average causal effect. The proposed method is implemented with existing software that can easily handle high-dimensional data, in terms of large samples and a large number of covariates. The results from the simulation study show that, if unconfoundedness holds given X, this approach is very successful in selecting the target subsets, outperforming alternative approaches based on random forests and LASSO, and that the subset estimating the target subset containing all causes of outcome yields smallest MSE in the average causal effect estimation. © 2017, The International Biometric Society.
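Once a graph has been estimated, by whatever structure-learning method, the usual target subsets reduce to simple graph queries. A sketch with networkx, in which the DAG and variable names are purely hypothetical stand-ins for an estimated network over treatment T, outcome Y and covariates:

```python
import networkx as nx

# hypothetical estimated DAG over treatment T, outcome Y and covariates X1..X4
g = nx.DiGraph([("X1", "T"), ("X1", "Y"), ("X2", "Y"),
                ("X3", "T"), ("T", "Y"), ("X4", "X1")])
covariates = {"X1", "X2", "X3", "X4"}

parents_of_treatment = set(g.predecessors("T")) & covariates   # direct causes of treatment
causes_of_outcome = nx.ancestors(g, "Y") & covariates          # all causes of outcome
union_target = parents_of_treatment | causes_of_outcome

# the simulation study reports that adjusting for all causes of the outcome
# gave the smallest MSE for the average causal effect
print(parents_of_treatment, causes_of_outcome, union_target)
```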
Indirect addressing and load balancing for faster solution to Mandelbrot Set on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1989-01-01
SIMD computers with local indirect addressing allow programs to have queues and buffers, making certain kinds of problems much more efficient. Examined here are a class of problems characterized by computations on data points where the computation is identical, but the convergence rate is data dependent. Normally, in this situation, the algorithm time is governed by the maximum number of iterations required by each point. Using indirect addressing allows a processor to proceed to the next data point when it is done, reducing the overall number of iterations required to approach the mean convergence rate when a sufficiently large problem set is solved. Load balancing techniques can be applied for additional performance improvement. Simulations of this technique applied to solving Mandelbrot Sets indicate significant performance gains.
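The scheduling idea can be caricatured without any SIMD machinery: each lane pulls the next point from a shared queue the moment its current point converges, so a batch finishes in time governed by the mean rather than the maximum iteration count. A serial Python sketch of that idea (the original work relies on hardware indirect addressing, which this does not model):

```python
from collections import deque

def mandelbrot_iters(c, max_iter=1000):
    """Iterations before divergence; the work per point is data dependent."""
    z = 0j
    for k in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return k
    return max_iter

def process_with_queue(points, n_lanes=8):
    """Assign each finished lane the next queued point (greedy load balancing)."""
    queue = deque(points)
    lane_work = [0] * n_lanes
    results = {}
    while queue:
        lane = min(range(n_lanes), key=lane_work.__getitem__)   # least-loaded lane
        c = queue.popleft()
        results[c] = mandelbrot_iters(c)
        lane_work[lane] += results[c]
    # with many points the makespan tracks the mean, not the maximum, work per point
    return results, max(lane_work)

points = [complex(re, im) for re in (-2.0, -1.0, -0.5, 0.0, 0.3)
                          for im in (0.0, 0.3, 0.6)]
_, makespan = process_with_queue(points)
print(makespan)
```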
A posteriori noise estimation in variable data sets. With applications to spectra and light curves
NASA Astrophysics Data System (ADS)
Czesla, S.; Molle, T.; Schmitt, J. H. M. M.
2018-01-01
Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines to apply the procedure in situations not explicitly considered here to promote its adoption in data analysis.
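For its default settings, the DER_SNR estimator referred to above reduces to a scaled median of second differences of the flux, which makes it insensitive to smooth signal structure. A small sketch assuming Gaussian, independent noise and equidistant sampling:

```python
import numpy as np

def der_snr_noise(flux):
    """1-sigma noise estimate: 1.482602/sqrt(6) * median(|2 f_i - f_{i-2} - f_{i+2}|)."""
    f = np.asarray(flux, dtype=float)
    return 1.482602 / np.sqrt(6.0) * np.median(np.abs(2.0 * f[2:-2] - f[:-4] - f[4:]))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)
smooth_signal = 1.0 + 0.3 * np.sin(2 * np.pi * 3 * x)
print(der_snr_noise(smooth_signal + rng.normal(0, 0.05, x.size)))   # ~0.05
```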
Catch of channel catfish with tandem-set hoop nets and gill nets in lentic systems of Nebraska
Richters, Lindsey K.; Pope, Kevin L.
2011-01-01
Twenty-six Nebraska water bodies representing two ecosystem types (small standing waters and large standing waters) were surveyed during 2008 and 2009 with tandem-set hoop nets and experimental gill nets to determine if similar trends existed in catch rates and size structures of channel catfish Ictalurus punctatus captured with these gears. Gear efficiency was assessed as the number of sets (nets) that would be required to capture 100 channel catfish given observed catch per unit effort (CPUE). Efficiency of gill nets was not correlated with efficiency of hoop nets for capturing channel catfish. Small sample sizes prohibited estimation of proportional size distributions in most surveys; in the four surveys for which sample size was sufficient to quantify length-frequency distributions of captured channel catfish, distributions differed between gears. The CPUE of channel catfish did not differ between small and large water bodies for either gear. While catch rates of hoop nets were lower than rates recorded in previous studies, this gear was more efficient than gill nets at capturing channel catfish. However, comparisons of size structure between gears may be problematic.
Control landscapes are almost always trap free: a geometric assessment
NASA Astrophysics Data System (ADS)
Russell, Benjamin; Rabitz, Herschel; Wu, Re-Bing
2017-05-01
A proof is presented that almost all closed, finite dimensional quantum systems have trap free (i.e. free from local optima) landscapes for a large and physically general class of circumstances, which includes qubit evolutions in quantum computing. This result offers an explanation for why gradient-based methods succeed so frequently in quantum control. The role of singular controls is analyzed using geometric tools in the case of the control of the propagator, and thus in the case of observables as well. Singular controls have been implicated as a source of landscape traps. The conditions under which singular controls can introduce traps, and thus interrupt the progress of a control optimization, are discussed and a geometrical characterization of the issue is presented. It is shown that a control being singular is not sufficient to cause control optimization progress to halt, and sufficient conditions for a trap free landscape are presented. It is further shown that the local surjectivity (full rank) assumption of landscape analysis can be refined to the condition that the end-point map is transverse to each of the level sets of the fidelity function. This mild condition is shown to be sufficient for a quantum system’s landscape to be trap free. The control landscape is shown to be trap free for all but a null set of Hamiltonians using a geometric technique based on the parametric transversality theorem. Numerical evidence confirming this analysis is also presented. This new result is the analogue of the work of Altafini, wherein it was shown that controllability holds for all but a null set of quantum systems in the dipole approximation. These collective results indicate that the availability of adequate control resources remains the most physically relevant issue for achieving high fidelity control performance while also avoiding landscape traps.
Design of sEMG assembly to detect external anal sphincter activity: a proof of concept.
Shiraz, Arsam; Leaker, Brian; Mosse, Charles Alexander; Solomon, Eskinder; Craggs, Michael; Demosthenous, Andreas
2017-10-31
Conditional trans-rectal stimulation of the pudendal nerve could provide a viable solution to treat hyperreflexive bladder in spinal cord injury. A set threshold of the amplitude estimate of the external anal sphincter surface electromyography (sEMG) may be used as the trigger signal. The efficacy of such a device should be tested in a large scale clinical trial. As such, a probe should remain in situ for several hours while patients attend to their daily routine; the recording electrodes should be designed to be large enough to maintain good contact while observing design constraints. The objective of this study was to arrive at a design for intra-anal sEMG recording electrodes for the subsequent clinical trials while deriving the possible recording and processing parameters. Having in mind existing solutions and based on theoretical and anatomical considerations, a set of four multi-electrode probes were designed and developed. These were tested in a healthy subject and the measured sEMG traces were recorded and appropriately processed. It was shown that while comparatively large electrodes record sEMG traces that are not sufficiently correlated with the external anal sphincter contractions, smaller electrodes may not maintain a stable electrode tissue contact. It was shown that 3 mm wide and 1 cm long electrodes with 5 mm inter-electrode spacing, in agreement with Nyquist sampling, placed 1 cm from the orifice may intra-anally record a sEMG trace sufficiently correlated with external anal sphincter activity. The outcome of this study can be used in any biofeedback, treatment or diagnostic application where the activity of the external anal sphincter sEMG should be detected for an extended period of time.
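The trigger concept amounts to thresholding an amplitude estimate of the external anal sphincter sEMG. A schematic moving-RMS detector in NumPy; the sampling rate, window length and threshold below are placeholders rather than values from the study:

```python
import numpy as np

def rms_envelope(emg, fs, window_s=0.2):
    """Moving root-mean-square amplitude estimate of an sEMG trace."""
    n = max(1, int(window_s * fs))
    return np.sqrt(np.convolve(emg ** 2, np.ones(n) / n, mode="same"))

def eas_trigger(emg, fs, threshold):
    """Boolean trigger: True wherever the amplitude estimate exceeds the threshold."""
    return rms_envelope(emg, fs) > threshold

# toy trace: 2 s of baseline noise with a 0.5 s burst of stronger activity
fs = 2000
rng = np.random.default_rng(0)
trace = rng.normal(0, 5e-6, 2 * fs)
trace[fs:fs + fs // 2] += rng.normal(0, 40e-6, fs // 2)
print(eas_trigger(trace, fs, threshold=15e-6).any())
```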
ESR paper on the proper use of mobile devices in radiology.
2018-04-01
Mobile devices (smartphones, tablets, etc.) have become key methods of communication, data access and data sharing for the population in the past decade. The technological capabilities of these devices have expanded very rapidly; for example, their in-built cameras have largely replaced conventional cameras. Their processing power is often sufficient to handle the large data sets of radiology studies and to manipulate images and studies directly on hand-held devices. Thus, they can be used to transmit and view radiology studies, often in locations remote from the source of the imaging data. They are not recommended for primary interpretation of radiology studies, but they facilitate sharing of studies for second opinions, viewing of studies and reports by clinicians at the bedside, etc. Other potential applications include remote participation in educational activity (e.g. webinars) and consultation of online educational content, e-books, journals and reference sources. Social-networking applications can be used for exchanging professional information and teaching. Users of mobile device must be aware of the vulnerabilities and dangers of their use, in particular regarding the potential for inappropriate sharing of confidential patient information, and must take appropriate steps to protect confidential data. • Mobile devices have revolutionized communication in the past decade, and are now ubiquitous. • Mobile devices have sufficient processing power to manipulate and display large data sets of radiological images. • Mobile devices allow transmission & sharing of radiologic studies for purposes of second opinions, bedside review of images, teaching, etc. • Mobile devices are currently not recommended as tools for primary interpretation of radiologic studies. • The use of mobile devices for image and data transmission carries risks, especially regarding confidentiality, which must be considered.
A minimal standardization setting for language mapping tests: an Italian example.
Rofes, Adrià; de Aguiar, Vânia; Miceli, Gabriele
2015-07-01
During awake surgery, picture-naming tests are administered to identify brain structures related to language function (language mapping), and to avoid iatrogenic damage. Before and after surgery, naming tests and other neuropsychological procedures aim at charting naming abilities, and at detecting which items the subject can respond to correctly. To achieve this goal, sufficiently large samples of normed and standardized stimuli must be available for preoperative and postoperative testing, and to prepare intraoperative tasks, the latter only including items named flawlessly preoperatively. To discuss design, norming and presentation of stimuli, and to describe the minimal standardization setting used to develop two sets of Italian stimuli, one for object naming and one for verb naming, respectively. The setting includes a naming study (to obtain picture-name agreement ratings), two on-line questionnaires (to acquire age-of-acquisition and imageability ratings for all test items), and the norming of other relevant language variables. The two sets of stimuli have >80 % picture-name agreement, high levels of internal consistency and reliability for imageability and age of acquisition ratings. They are normed for psycholinguistic variables known to affect lexical access and retrieval, and are validated in a clinical population. This framework can be used to increase the probability of reliably detecting language impairments before and after surgery, to prepare intraoperative tests based on sufficient knowledge of pre-surgical language abilities in each patient, and to decrease the probability of false positives during surgery. Examples of data usage are provided. Normative data can be found in the supplementary materials.
NASA Technical Reports Server (NTRS)
Majda, G.
1985-01-01
A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
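As a concrete illustration of the one-leg versus linear multistep distinction discussed here, the trapezoidal rule and its one-leg twin (the implicit midpoint rule) use the same coefficients but place them either on the function values or inside a single function evaluation; for a variable-coefficient problem y' = A(t)y the two schemes generally differ:

```latex
\text{linear multistep (trapezoidal rule):}\quad
  y_{n+1} = y_n + \tfrac{h}{2}\,\bigl[f(t_{n+1},y_{n+1}) + f(t_n,y_n)\bigr],
\\[4pt]
\text{one-leg twin (implicit midpoint rule):}\quad
  y_{n+1} = y_n + h\, f\!\Bigl(\tfrac{t_n+t_{n+1}}{2},\ \tfrac{y_n+y_{n+1}}{2}\Bigr).
```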
NASA Astrophysics Data System (ADS)
Langhoff, P. W.; Winstead, C. L.
Early studies of the electronically excited states of molecules by John A. Pople and coworkers employing ab initio single-excitation configuration interaction (SECI) calculations helped to stimulate related applications of these methods to the partial-channel photoionization cross sections of polyatomic molecules. The Gaussian representations of molecular orbitals adopted by Pople and coworkers can describe SECI continuum states when sufficiently large basis sets are employed. Minimal-basis virtual Fock orbitals stabilized in the continuous portions of such SECI spectra are generally associated with strong photoionization resonances. The spectral attributes of these resonance orbitals are illustrated here by revisiting previously reported experimental and theoretical studies of molecular formaldehyde (H2CO) in combination with recently calculated continuum orbital amplitudes.
NASA Astrophysics Data System (ADS)
Kuroki, Nahoko; Mori, Hirotoshi
2018-02-01
Effective fragment potential version 2 - molecular dynamics (EFP2-MD) simulations, where EFP2 is a polarizable force field based on ab initio electronic structure calculations, were applied to a water-methanol binary mixture. Comparing EFP2s defined with (aug-)cc-pVXZ (X = D,T) basis sets, it was found that large basis sets are necessary to generate a sufficiently accurate EFP2 for predicting mixture properties. It was shown that EFP2-MD could predict the excess molar volume. Since the computational cost of EFP2-MD is far less than that of ab initio MD, the results presented herein demonstrate that EFP2-MD is promising for predicting physicochemical properties of novel mixed solvents.
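For context, the excess molar volume predicted by such simulations is the deviation of the mixture's molar volume from the ideal (mole-fraction weighted) value; with mole fractions x_i, molar masses M_i, mixture density ρ_mix and pure-component densities ρ_i:

```latex
V^{E} \;=\; \frac{x_1 M_1 + x_2 M_2}{\rho_{\mathrm{mix}}}
        \;-\; \frac{x_1 M_1}{\rho_1} \;-\; \frac{x_2 M_2}{\rho_2}
```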
Multilingual Twitter Sentiment Classification: The Role of Human Annotators
Mozetič, Igor; Grčar, Miha; Smailović, Jasmina
2016-01-01
What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered. PMID:27149621
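A minimal way to relate model performance to annotator agreement is to score both with the same chance-corrected statistic. The sketch below uses Cohen's kappa from scikit-learn on made-up labels; the paper employs additional agreement measures and treats the classes as ordered, so this is only an illustration of the comparison:

```python
from sklearn.metrics import cohen_kappa_score

# sentiment labels for the same tweets from two annotators and from a model
annotator_a = ["neg", "neu", "pos", "pos", "neu", "neg", "pos", "neu"]
annotator_b = ["neg", "neu", "pos", "neu", "neu", "neg", "pos", "pos"]
model_preds = ["neg", "neu", "pos", "pos", "neg", "neg", "pos", "neu"]

inter_annotator = cohen_kappa_score(annotator_a, annotator_b)
model_vs_human = cohen_kappa_score(annotator_a, model_preds)

# with a sufficiently large training set, model_vs_human should approach
# inter_annotator, which acts as a practical ceiling on model performance
print(round(inter_annotator, 2), round(model_vs_human, 2))
```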
Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.
Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf
2014-01-01
Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
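The pipeline can be caricatured on a 2-D slice with scikit-image: detect seed points, apply a single global Otsu threshold, and keep only the connected components that contain a seed. This rough sketch omits the gradient/normal-based image transform that is the paper's actual contribution:

```python
import numpy as np
from scipy import ndimage
from skimage import feature, filters

def rough_nucleus_segmentation(image, min_sigma=3, max_sigma=8):
    """Toy nucleus segmentation: smooth, Otsu-threshold, label, keep seeded blobs."""
    smoothed = filters.gaussian(image, sigma=2)
    mask = smoothed > filters.threshold_otsu(smoothed)        # single global threshold
    labels, _ = ndimage.label(mask)
    seeds = feature.blob_log(smoothed, min_sigma=min_sigma, max_sigma=max_sigma)
    keep = {labels[int(r), int(c)] for r, c, _ in seeds} - {0}
    return np.where(np.isin(labels, list(keep)), labels, 0)
```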
An Oracle-based co-training framework for writer identification in offline handwriting
NASA Astrophysics Data System (ADS)
Porwal, Utkarsh; Rajan, Sreeranga; Govindaraju, Venu
2012-01-01
State-of-the-art techniques for writer identification have centered primarily on enhancing the performance of the identification system. Machine learning algorithms have been used extensively to improve the accuracy of such a system, assuming a sufficient amount of data is available for training. Little attention has been paid to the prospect of harnessing the information trapped in a large amount of un-annotated data. This paper focuses on a co-training based framework that can be used for iterative labeling of the unlabeled data set, exploiting the independence between the multiple views (features) of the data. This paradigm relaxes the assumption of sufficiency of the available data and tries to generate labeled data from the unlabeled data set while improving the accuracy of the system. However, the performance of a co-training based framework depends on the effectiveness of the algorithm used for selecting the data points to be added to the labeled set. We propose an Oracle-based approach for data selection that learns the patterns in the score distribution of classes for labeled data points and then predicts the labels (writers) of the unlabeled data points. This selection method statistically learns the class distribution and predicts the most probable class, unlike traditional selection algorithms based on heuristic approaches. We conducted experiments on the publicly available IAM dataset and illustrate the efficacy of the proposed approach.
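A generic co-training loop over two feature views is sketched below; the Oracle-based selection proposed in the paper replaces the naive "most confident predictions" rule used in the selection step here. The classifier choice, round count and batch size are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xl1, Xl2, y, Xu1, Xu2, rounds=10, per_round=20):
    """Iteratively pseudo-label unlabeled samples using two independent views."""
    Xl1, Xl2, y = Xl1.copy(), Xl2.copy(), y.copy()
    Xu1, Xu2 = Xu1.copy(), Xu2.copy()
    for _ in range(rounds):
        view1 = LogisticRegression(max_iter=1000).fit(Xl1, y)
        view2 = LogisticRegression(max_iter=1000).fit(Xl2, y)
        if len(Xu1) == 0:
            break
        # naive selection: samples on which both views are jointly most confident
        conf = view1.predict_proba(Xu1).max(axis=1) * view2.predict_proba(Xu2).max(axis=1)
        picked = np.argsort(conf)[-per_round:]
        pseudo = view1.predict(Xu1[picked])            # (agreement checks omitted)
        Xl1 = np.vstack([Xl1, Xu1[picked]])
        Xl2 = np.vstack([Xl2, Xu2[picked]])
        y = np.concatenate([y, pseudo])
        Xu1 = np.delete(Xu1, picked, axis=0)
        Xu2 = np.delete(Xu2, picked, axis=0)
    return view1, view2
```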
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
Reflections on experimental research in medical education.
Cook, David A; Beckman, Thomas J
2010-08-01
As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest-posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest-posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.
Provenance Challenges for Earth Science Dataset Publication
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2011-01-01
Modern science is increasingly dependent on computational analysis of very large data sets. Organizing, referencing, publishing those data has become a complex problem. Published research that depends on such data often fails to cite the data in sufficient detail to allow an independent scientist to reproduce the original experiments and analyses. This paper explores some of the challenges related to data identification, equivalence and reproducibility in the domain of data intensive scientific processing. It will use the example of Earth Science satellite data, but the challenges also apply to other domains.
On Tree-Based Phylogenetic Networks.
Zhang, Louxin
2016-07-01
A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks. We present a simple necessary and sufficient condition for tree-based networks and prove that a universal tree-based network exists for any number of taxa that contains as its base every phylogenetic tree on the same set of taxa. This answers two problems posted by Francis and Steel recently. A byproduct is a computer program for generating random binary phylogenetic networks under the uniform distribution model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, T.F.; Lee, A.Y.; Ruck, G.W.
A feasible compact poloidal divertor system has been designed as an impurity control and vacuum vessel first-wall protection option for the TNS tokamak. The divertor coils are inside the TF coil array and vacuum vessel. The poloidal divertor is formed by a pair of coil sets with zero net current. Each set consists of a number of coils forming a dish-shaped washer-like ring. The magnetic flux in the space between the coil sets is compressed vertically to limit the height and to expand the horizontal width of the particle and energy burial chamber which is located in the gap between the coil sets. The intensity of the poloidal field is increased to make the pitch angle of the flux lines very large so that the diverted particles can be intercepted by a large number of panels oriented at a small angle with respect to the flux lines. They are carefully shaped and designed such that the entire surfaces are exposed to the incident particles and are not shadowed by each other. Large collecting surface areas can be obtained. Flowing liquid lithium film and solid metal panels have been considered as the particle collectors. The power density for the former is designed at 1 MW/m² and for the latter 0.5 MW/m². The major mechanical, thermal, and vacuum problems have been evaluated in sufficient detail so that the advantages and difficulties are identified. A complete functional picture is presented.
Can multi-subpopulation reference sets improve the genomic predictive ability for pigs?
Fangmann, A; Bergfelder-Drüing, S; Tholen, E; Simianer, H; Erbe, M
2015-12-01
In most countries and for most livestock species, genomic evaluations are obtained from within-breed analyses. To achieve reliable breeding values, however, a sufficient reference sample size is essential. To increase this size, the use of multibreed reference populations for small populations is considered a suitable option in other species. Over decades, the separate breeding work of different pig breeding organizations in Germany has led to stratified subpopulations in the breed German Large White. Due to this fact and the limited number of Large White animals available in each organization, there was a pressing need for ascertaining if multi-subpopulation genomic prediction is superior compared with within-subpopulation prediction in pigs. Direct genomic breeding values were estimated with genomic BLUP for the trait "number of piglets born alive" using genotype data (Illumina Porcine 60K SNP BeadChip) from 2,053 German Large White animals from five different commercial pig breeding companies. To assess the prediction accuracy of within- and multi-subpopulation reference sets, a random 5-fold cross-validation with 20 replications was performed. The five subpopulations considered were only slightly differentiated from each other. However, the prediction accuracy of the multi-subpopulations approach was not better than that of the within-subpopulation evaluation, for which the predictive ability was already high. Reference sets composed of closely related multi-subpopulation sets performed better than sets of distantly related subpopulations but not better than the within-subpopulation approach. Despite the low differentiation of the five subpopulations, the genetic connectedness between these different subpopulations seems to be too small to improve the prediction accuracy by applying multi-subpopulation reference sets. Consequently, resources should be used for enlarging the reference population within subpopulation, for example, by adding genotyped females.
Chemotaxis with logistic source
NASA Astrophysics Data System (ADS)
Winkler, Michael
2008-12-01
We consider the chemotaxis system in a smooth bounded domain Ω, where χ > 0 and g generalizes the logistic function g(u) = Au - bu^α with α > 1, A ≥ 0 and b > 0. A concept of very weak solutions is introduced, and global existence of such solutions for any nonnegative initial data u0 ∈ L^1(Ω) is proved under a suitable assumption on the parameters. Moreover, boundedness properties of the constructed solutions are studied. Inter alia, it is shown that if b is sufficiently large and u0 ∈ L^∞(Ω) has small norm in L^γ(Ω) for some suitable γ, then the solution is globally bounded. Finally, in the case that an additional condition holds, a bounded set in L^∞(Ω) can be found which eventually attracts very weak solutions emanating from arbitrary L^1 initial data. The paper closes with numerical experiments that illustrate some of the theoretically established results.
"Size-Independent" Single-Electron Tunneling.
Zhao, Jianli; Sun, Shasha; Swartz, Logan; Riechers, Shawn; Hu, Peiguang; Chen, Shaowei; Zheng, Jie; Liu, Gang-Yu
2015-12-17
Incorporating single-electron tunneling (SET) of metallic nanoparticles (NPs) into modern electronic devices offers great promise to enable new properties; however, it is technically very challenging due to the necessity to integrate ultrasmall (<10 nm) particles into the devices. The nanosize requirements are intrinsic for NPs to exhibit quantum or SET behaviors, for example, 10 nm or smaller, at room temperature. This work represents the first observation of SET that defies the well-known size restriction. Using polycrystalline Au NPs synthesized via our newly developed solid-state glycine matrices method, a Coulomb Blockade was observed for particles as large as tens of nanometers, and the blockade voltage exhibited little dependence on the size of the NPs. These observations are counterintuitive at first glance. Further investigations reveal that each observed SET arises from the ultrasmall single crystalline grain(s) within the polycrystal NP, which is (are) sufficiently isolated from the nearest neighbor grains. This work demonstrates the concept and feasibility to overcome orthodox spatial confinement requirements to achieve quantum effects.
Nagle, Brian J; Holub, Christina K; Barquera, Simón; Sánchez-Romero, Luz María; Eisenberg, Christina M; Rivera-Dommarco, Juan A; Mehta, Setoo M; Lobelo, Felipe; Arredondo, Elva M; Elder, John P
2013-01-01
The objective of this systematic literature review was to identify evidence-based strategies associated with effective healthcare interventions for prevention or treatment of childhood obesity in Latin America. A systematic review of peer-reviewed, obesity-related interventions implemented in the healthcare setting was conducted. Inclusion criteria included: implementation in Latin America, aimed at overweight or obese children, and evaluation of at least one obesity-related outcome (e.g., body mass index (BMI), z-score, weight, waist circumference, and body fat). Five interventions in the healthcare setting targeting obese children in Latin America were identified. All five studies showed significant changes in BMI, and the majority produced sufficient to large effect sizes through emphasizing physical activity and healthy eating. Despite the limited number of intervention studies that treat obesity in the healthcare setting, there is evidence that interventions in this setting can be effective in creating positive anthropometric changes in overweight and obese children.
Telling plant species apart with DNA: from barcodes to genomes
Li, De-Zhu; van der Bank, Michelle
2016-01-01
Land plants underpin a multitude of ecosystem functions, support human livelihoods and represent a critically important component of terrestrial biodiversity—yet many tens of thousands of species await discovery, and plant identification remains a substantial challenge, especially where material is juvenile, fragmented or processed. In this opinion article, we tackle two main topics. Firstly, we provide a short summary of the strengths and limitations of plant DNA barcoding for addressing these issues. Secondly, we discuss options for enhancing current plant barcodes, focusing on increasing discriminatory power via either gene capture of nuclear markers or genome skimming. The former has the advantage of establishing a defined set of target loci maximizing efficiency of sequencing effort, data storage and analysis. The challenge is developing a probe set for large numbers of nuclear markers that works over sufficient phylogenetic breadth. Genome skimming has the advantage of using existing protocols and being backward compatible with existing barcodes; and the depth of sequence coverage can be increased as sequencing costs fall. Its non-targeted nature does, however, present a major informatics challenge for upscaling to large sample sets. This article is part of the themed issue ‘From DNA barcodes to biomes’. PMID:27481790
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, Bienvenido; Novo, Vicente
We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Frechet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given.
Deep learning with non-medical training used for chest pathology identification
NASA Astrophysics Data System (ADS)
Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit
2015-03-01
In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural network (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for Right Pleural Effusion detection, 0.89 for Enlarged heart detection, and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment that shows that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
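The fusion-and-evaluation step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: cnn_feats and low_level_feats are assumed to be precomputed arrays (one row of features per image), with scikit-learn supplying a generic classifier and the AUC computation.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def fuse_and_score(cnn_feats, low_level_feats, labels, train_idx, test_idx):
    # Feature-level fusion: concatenate deep and hand-crafted descriptors per image.
    X = np.hstack([cnn_feats, low_level_feats])
    clf = SVC(probability=True).fit(X[train_idx], labels[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]   # probability of the pathology class
    return roc_auc_score(labels[test_idx], scores)  # area under the ROC curve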
NASA Astrophysics Data System (ADS)
Rees, S. J.; Jones, Bryan F.
1992-11-01
Once feature extraction has occurred in a processed image, the recognition problem becomes one of defining a set of features which maps sufficiently well onto one of the defined shape/object models to permit a claimed recognition. This process is usually handled by aggregating features until a large enough weighting is obtained to claim membership, or an adequate number of located features are matched to the reference set. A requirement has existed for an operator or measure capable of a more direct assessment of membership/occupancy between feature sets, particularly where the feature sets may be defective representations. Such feature set errors may be caused by noise, by overlapping of objects, and by partial obscuration of features. These problems occur at the point of acquisition: repairing the data would then assume a priori knowledge of the solution. The technique described in this paper offers a set theoretical measure for partial occupancy defined in terms of the set of minimum additions to permit full occupancy and the set of locations of occupancy if such additions are made. As is shown, this technique permits recognition of partial feature sets with quantifiable degrees of uncertainty. A solution to the problems of obscuration and overlapping is therefore available.
Naor, Michael; Heyman, Samuel N; Bader, Tarif; Merin, Ofer
2017-01-01
The Israeli Defense Force (IDF) Medical Corps developed a model of an airborne field hospital. This model was structured to deal with disaster settings, requiring self-sufficiency, innovation and a flexible operative mode in the setup of large margins of uncertainty regarding the disaster environment. The current study aims to critically analyze the experience gathered in ten such missions worldwide. Interviews with physicians who actively participated in the missions from 1988 until 2015 as chief medical officers were combined with a literature review of principal medical and auxiliary publications in order to assess and integrate information about the assembly of these missions. A body of knowledge was accumulated over the years by the IDF Medical Corps from deploying numerous relief missions to both natural (earthquake, typhoon, and tsunami) and man-made disasters occurring in nine countries (Armenia, Rwanda, Kosovo, Turkey, India, Haiti, Japan, Philippines, and Nepal). This study shows an evolutionary pattern with improvements implemented from one mission to the next, with special adaptations (creativity and improvisation) to accommodate logistics barriers. The principles and operative functions for deploying a medical relief system, proposed over 20 years ago, were challenged and validated in the subsequent IDF missions outlined in the current study. These principles, with the advantage of the military infrastructure and the expertise of drafted civilian medical professionals, enable the rapid assembly and allocation of highly competent medical facilities in disaster settings. This structural model is to a large extent self-sufficient, with substantial operative flexibility that permits early deployment upon request while the disaster assessment and definition of needs are preliminary.
Video Salient Object Detection via Fully Convolutional Networks.
Wang, Wenguan; Shen, Jianbing; Shao, Ling
This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
Habitat and environment of islands: primary and supplemental island sets
Matalas, Nicholas C.; Grossling, Bernardo F.
2002-01-01
The original intent of the study was to develop a first-order synopsis of island hydrology with an integrated geologic basis on a global scale. As the study progressed, the aim was broadened to provide a framework for subsequent assessments on large regional or global scales of island resources and impacts on those resources that are derived from global changes. Fundamental to the study was the development of a comprehensive framework: a wide range of parameters that describe a set of 'saltwater' islands sufficiently large to characterize the spatial distribution of the world's islands; account for all major archipelagos; account for almost all oceanically isolated islands; and account collectively for a very large proportion of the total area of the world's islands, whereby additional islands would only marginally contribute to the representativeness and accountability of the island set. The comprehensive framework, which is referred to as the 'Primary Island Set,' is built on 122 parameters that describe 1,000 islands. To complement the investigations based on the Primary Island Set, two supplemental island sets, Set A (Other Islands, not in the Primary Island Set) and Set B (Lagoonal Atolls), are included in the study. The Primary Island Set, together with the Supplemental Island Sets A and B, provides a framework that can be used in various scientific disciplines for their island-based studies on broad regional or global scales. The study uses an informal, coherent, geophysical organization of the islands that belong to the three island sets. The organization is in the form of a global island chain, which is a particular sequential ordering of the islands referred to as the 'Alisida.' The Alisida was developed through a trial-and-error procedure by seeking to strike a balance between 'minimizing the length of the global chain' and 'maximizing the chain's geophysical coherence.' The fact that an objective function cannot be minimized and maximized simultaneously indicates that the Alisida is not unique. Global island chains other than the Alisida may better serve disciplines other than those of hydrology and geology.
Moser, Corinne; Blumer, Yann
2017-01-01
Many countries have some kind of energy-system transformation either planned or ongoing for various reasons, such as to curb carbon emissions or to compensate for the phasing out of nuclear energy. One important component of these transformations is the overall reduction in energy demand. It is generally acknowledged that the domestic sector represents a large share of total energy consumption in many countries. Increased energy efficiency is one factor that reduces energy demand, but behavioral approaches (known as “sufficiency”) and their respective interventions also play important roles. In this paper, we address citizens’ heterogeneity regarding both their current behaviors and their willingness to realize their sufficiency potentials—that is, to reduce their energy consumption through behavioral change. We collaborated with three Swiss cities for this study. A survey conducted in the three cities yielded thematic sets of energy-consumption behavior that various groups of participants rated differently. Using this data, we identified four groups of participants with different patterns of both current behaviors and sufficiency potentials. The paper discusses intervention types and addresses citizens’ heterogeneity and behaviors from a city-based perspective. PMID:29016642
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
Financial performance and managed care trends of health centers.
Martin, Brian C; Shi, Leiyu; Ward, Ryan D
2009-01-01
Data were analyzed from the 1998-2004 Uniform Data System (UDS) to identify trends and predictors of financial performance (costs, productivity, and overall financial health) for health centers (HCs). Several differences were noted regarding revenues, self-sufficiency, service offerings, and urban/rural setting. Urban centers with larger numbers of clients, centers that treated high numbers of patients with chronic diseases, and centers with large numbers of prenatal care users were the most fiscally sound. Positive financial performance can be targeted through strategies that generate positive revenue, strive to decrease costs, and target services that are in demand.
Remote Earth Sciences data collection using ACTS
NASA Technical Reports Server (NTRS)
Evans, Robert H.
1992-01-01
Given the focus on global change and the attendant scope of such research, we anticipate significant growth of requirements for investigator interaction, processing system capabilities, and availability of data sets. The increased complexity of global processes requires interdisciplinary teams to address them; the investigators will need to interact on a regular basis; however, it is unlikely that a single institution will house sufficient investigators with the required breadth of skills. The complexity of the computations may also require resources beyond those located within a single institution; this lack of sufficient computational resources leads to a distributed system located at geographically dispersed institutions. Finally, the combination of long-term data sets like the Pathfinder datasets and the data to be gathered by new generations of satellites such as SeaWiFS and MODIS-N yields extraordinarily large amounts of data. All of these factors combine to increase demands on the communications facilities available; the demands are generating requirements for highly flexible, high-capacity networks. We have been examining the applicability of the Advanced Communications Technology Satellite (ACTS) to address the scientific, computational, and, primarily, communications questions resulting from global change research. As part of this effort, three scenarios for oceanographic use of ACTS have been developed; a full discussion of this is contained in Appendix B.
Gardiner, Laura-Jayne; Gawroński, Piotr; Olohan, Lisa; Schnurbusch, Thorsten; Hall, Neil; Hall, Anthony
2014-12-01
Mapping-by-sequencing analyses have largely required a complete reference sequence and employed whole genome re-sequencing. In species such as wheat, no finished genome reference sequence is available. Additionally, because of its large genome size (17 Gb), re-sequencing at sufficient depth of coverage is not practical. Here, we extend the utility of mapping by sequencing, developing a bespoke pipeline and algorithm to map an early-flowering locus in einkorn wheat (Triticum monococcum L.) that is closely related to the bread wheat genome A progenitor. We have developed a genomic enrichment approach using the gene-rich regions of hexaploid bread wheat to design a 110-Mbp NimbleGen SeqCap EZ in-solution capture probe set, representing the majority of genes in wheat. Here, we use the capture probe set to enrich and sequence an F2 mapping population of the mutant. The mutant locus was identified in T. monococcum, which lacks a complete genome reference sequence, by mapping the enriched data set onto pseudo-chromosomes derived from the capture probe target sequence, with a long-range order of genes based on synteny of wheat with Brachypodium distachyon. Using this approach we are able to map the region and identify a set of deleted genes within the interval. © 2014 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
Fast Segmentation of Stained Nuclei in Terabyte-Scale, Time Resolved 3D Microscopy Image Stacks
Stegmaier, Johannes; Otte, Jens C.; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G. Ulrich; Strähle, Uwe; Mikut, Ralf
2014-01-01
Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu’s method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm’s superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results. PMID:24587204
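The final global-thresholding step mentioned above (Otsu's method) is simple enough to sketch directly; the seed-point detection and gradient-based image transform are omitted. This is a generic NumPy implementation of Otsu's criterion, not the authors' parallelized pipeline.

import numpy as np

def otsu_threshold(image, nbins=256):
    # Choose the threshold that maximizes the between-class variance of the histogram.
    counts, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(counts)                         # pixels at or below each candidate
    w1 = w0[-1] - w0                               # pixels above each candidate
    csum = np.cumsum(counts * centers)
    m0 = csum / np.maximum(w0, 1)                  # mean of the lower class
    m1 = (csum[-1] - csum) / np.maximum(w1, 1)     # mean of the upper class
    between = w0[:-1] * w1[:-1] * (m0[:-1] - m1[:-1]) ** 2
    return centers[np.argmax(between)]

# Usage on a (transformed) image: mask = img > otsu_threshold(img)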
Evaluating information content of SNPs for sample-tagging in re-sequencing projects.
Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F
2015-05-15
Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as that of the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In the simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proves robust for large sample sizes and different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
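The discrimination criterion used above, pairwise Hamming distance between SNP tags, is easy to check on simulated genotypes. The sketch below uses a random 30-SNP panel with assumed allele frequencies rather than the paper's optimized panel; a minimum distance of zero would correspond to a 'duality' (two individuals sharing an identical tag).

import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_snps = 2_000, 30
freqs = rng.uniform(0.3, 0.7, n_snps)                                    # assumed allele frequencies
genotypes = (rng.random((n_individuals, n_snps, 2)) < freqs).sum(axis=2)  # 0/1/2 genotype calls

def min_pairwise_hamming(g):
    # Smallest number of differing SNP calls between any two individuals.
    best = g.shape[1]
    for i in range(len(g) - 1):
        best = min(best, int((g[i + 1:] != g[i]).sum(axis=1).min()))
    return best

print(min_pairwise_hamming(genotypes))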
A MapReduce approach to diminish imbalance parameters for big deoxyribonucleic acid dataset.
Kamal, Sarwar; Ripon, Shamim Hasnat; Dey, Nilanjan; Ashour, Amira S; Santhi, V
2016-07-01
In the age of information superhighway, big data play a significant role in information processing, extractions, retrieving and management. In computational biology, the continuous challenge is to manage the biological data. Data mining techniques are sometimes imperfect for new space and time requirements. Thus, it is critical to process massive amounts of data to retrieve knowledge. The existing software and automated tools to handle big data sets are not sufficient. As a result, an expandable mining technique that enfolds the large storage and processing capability of distributed or parallel processing platforms is essential. In this analysis, a contemporary distributed clustering methodology for imbalance data reduction using k-nearest neighbor (K-NN) classification approach has been introduced. The pivotal objective of this work is to illustrate real training data sets with reduced amount of elements or instances. These reduced amounts of data sets will ensure faster data classification and standard storage management with less sensitivity. However, general data reduction methods cannot manage very big data sets. To minimize these difficulties, a MapReduce-oriented framework is designed using various clusters of automated contents, comprising multiple algorithmic approaches. To test the proposed approach, a real DNA (deoxyribonucleic acid) dataset that consists of 90 million pairs has been used. The proposed model reduces the imbalance data sets from large-scale data sets without loss of its accuracy. The obtained results depict that MapReduce based K-NN classifier provided accurate results for big data of DNA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
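A single-machine illustration of the map/reduce split for K-NN classification is given below; it only mimics the two phases in plain Python and is not the Hadoop-based pipeline evaluated in the paper. Each 'mapper' returns the k locally nearest labelled points of its partition, and the 'reducer' merges them and takes a majority vote among the global k nearest.

from collections import Counter
import numpy as np

def mapper(part_X, part_y, query, k):
    # Map phase: k nearest neighbours within one data partition.
    d = np.linalg.norm(part_X - query, axis=1)
    idx = np.argsort(d)[:k]
    return list(zip(d[idx], part_y[idx]))

def reducer(partials, k):
    # Reduce phase: merge partition candidates, keep the global k nearest, vote.
    merged = sorted((c for part in partials for c in part), key=lambda t: t[0])[:k]
    return Counter(label for _, label in merged).most_common(1)[0][0]

rng = np.random.default_rng(1)
X, y = rng.random((400, 5)), rng.integers(0, 2, 400)
partitions = [(X[i::4], y[i::4]) for i in range(4)]            # 4 toy partitions
query = rng.random(5)
print(reducer([mapper(px, py, query, k=5) for px, py in partitions], k=5))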
Stellar evolution of high mass based on the Ledoux criterion for convection
NASA Technical Reports Server (NTRS)
Stothers, R.; Chin, C.
1972-01-01
Theoretical evolutionary sequences of models for stars of 15 and 30 solar masses were computed from the zero-age main sequence to the end of core helium burning. During the earliest stages of core helium depletion, the envelope rapidly expands into the red-supergiant configuration. At 15 solar masses, a blue loop on the H-R diagram ensues if the initial metals abundance, initial helium abundance, or C-12 + alpha particle reaction rate is sufficiently large, or if the 3-alpha reaction rate is sufficiently small. These quantities affect the opacity at the base of the outer convection zone, the mass of the core, and the thermal properties of the core. The blue loop occurs abruptly and fully developed when the critical value of any of these quantities is exceeded, and the effective temperature range and the fraction of the core helium burning lifetime spent in the slow phase of the blue loop vary surprisingly little. At 30 solar masses, no blue loop occurs for any reasonable set of input parameters.
Posterior consistency in conditional distribution estimation
Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.
2014-01-01
A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858
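For concreteness, the class of priors referred to above, predictor-dependent mixtures of Gaussian kernels with stick-breaking weights, has the following schematic form (notation ours, not necessarily the paper's):

f(y \mid x) = \sum_{h=1}^{\infty} \pi_h(x)\, \mathcal{N}\bigl(y;\, \mu_h(x),\, \sigma_h^2\bigr),
\qquad
\pi_h(x) = V_h(x) \prod_{l<h} \bigl(1 - V_l(x)\bigr), \qquad V_h(x) \in [0,1].

The predictor-independent case mentioned at the end corresponds to V_h(x) constant in x, for example V_h ~ Beta(1, alpha) as in a Dirichlet process prior.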
Gallos, Lazaros K; Makse, Hernán A; Sigman, Mariano
2012-02-21
The human brain is organized in functional modules. Such an organization presents a basic conundrum: Modules ought to be sufficiently independent to guarantee functional specialization and sufficiently connected to bind multiple processors for efficient information transfer. It is commonly accepted that small-world architecture of short paths and large local clustering may solve this problem. However, there is intrinsic tension between shortcuts generating small worlds and the persistence of modularity, a global property unrelated to local clustering. Here, we present a possible solution to this puzzle. We first show that a modified percolation theory can define a set of hierarchically organized modules made of strong links in functional brain networks. These modules are "large-world" self-similar structures and, therefore, are far from being small-world. However, incorporating weaker ties to the network converts it into a small world preserving an underlying backbone of well-defined modules. Remarkably, weak ties are precisely organized as predicted by theory maximizing information transfer with minimal wiring cost. This trade-off architecture is reminiscent of the "strength of weak ties" crucial concept of social networks. Such a design suggests a natural solution to the paradox of efficient information flow in the highly modular structure of the brain.
Gallos, Lazaros K.; Makse, Hernán A.; Sigman, Mariano
2012-01-01
The human brain is organized in functional modules. Such an organization presents a basic conundrum: Modules ought to be sufficiently independent to guarantee functional specialization and sufficiently connected to bind multiple processors for efficient information transfer. It is commonly accepted that small-world architecture of short paths and large local clustering may solve this problem. However, there is intrinsic tension between shortcuts generating small worlds and the persistence of modularity, a global property unrelated to local clustering. Here, we present a possible solution to this puzzle. We first show that a modified percolation theory can define a set of hierarchically organized modules made of strong links in functional brain networks. These modules are “large-world” self-similar structures and, therefore, are far from being small-world. However, incorporating weaker ties to the network converts it into a small world preserving an underlying backbone of well-defined modules. Remarkably, weak ties are precisely organized as predicted by theory maximizing information transfer with minimal wiring cost. This trade-off architecture is reminiscent of the “strength of weak ties” crucial concept of social networks. Such a design suggests a natural solution to the paradox of efficient information flow in the highly modular structure of the brain. PMID:22308319
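A much-simplified stand-in for the strong-link/weak-tie decomposition described above can be written with networkx: treat connected components of a thresholded 'strong' subgraph as modules, then add the weaker ties back to recover a single connected, small-world network. The threshold and toy edge weights below are illustrative; the actual analysis uses a modified percolation procedure rather than simple thresholding.

import networkx as nx

edges = [("A", "B", 0.9), ("B", "C", 0.8), ("D", "E", 0.85), ("C", "D", 0.2)]
threshold = 0.5

strong = nx.Graph()
strong.add_weighted_edges_from((u, v, w) for u, v, w in edges if w >= threshold)
modules = list(nx.connected_components(strong))    # two modules: {A, B, C} and {D, E}

full = strong.copy()
full.add_weighted_edges_from((u, v, w) for u, v, w in edges if w < threshold)
# The single weak tie C-D now links the modules into one connected network.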
Challenges of microtome‐based serial block‐face scanning electron microscopy in neuroscience
WANNER, A. A.; KIRSCHMANN, M. A.
2015-01-01
Summary Serial block-face scanning electron microscopy (SBEM) is becoming increasingly popular for a wide range of applications in many disciplines from biology to material sciences. This review focuses on applications for circuit reconstruction in neuroscience, which is one of the major driving forces advancing SBEM. Neuronal circuit reconstruction poses exceptional challenges to volume EM in terms of resolution, field of view, acquisition time and sample preparation. Mapping the connections between neurons in the brain is crucial for understanding information flow and information processing in the brain. However, information on the connectivity between hundreds or even thousands of neurons densely packed in neuronal microcircuits is still largely missing. Volume EM techniques such as serial section TEM, automated tape-collecting ultramicrotome, focused ion-beam scanning electron microscopy and SBEM (microtome serial block-face scanning electron microscopy) are the techniques that provide sufficient resolution to resolve ultrastructural details such as synapses and a sufficient field of view for dense reconstruction of neuronal circuits. While volume EM techniques are advancing, they are generating large data sets on the terabyte scale that require new image processing workflows and analysis tools. In this review, we present the recent advances in SBEM for circuit reconstruction in neuroscience and an overview of existing image processing and analysis pipelines. PMID:25907464
Reisner, A E
2005-11-01
The building and expansion of large-scale swine facilities have created considerable controversy in many neighboring communities, but to date, no systematic analysis has been done of the types of claims made during these conflicts. This study examined how local newspapers in one state covered the transition from the dominance of smaller, diversified swine operations to large, single-purpose pig production facilities. To look at publicly made statements concerning large-scale swine facilities (LSSF), the study collected all articles related to LSSF from 22 daily Illinois newspapers over a 3-yr period (a total of 1,737 articles). The most frequent sets of claims used by proponents of LSSF were that the environment was not harmed, that state regulations were sufficiently strict, and that the state economically needed this type of agriculture. The most frequent claims made by opponents were that LSSF harmed the environment and neighboring communities and that stricter regulations were needed. Proponents' claims were primarily defensive and, to some degree, underplayed the advantages of LSSF. Pro- and anti-LSSF groups were talking at cross-purposes, to some degree. Even across similar themes, those in favor of LSSF and those opposed were addressing different sets of concerns. The newspaper claims did not indicate any effective alliances forming between local anti-LSSF groups and national environmental or animal rights groups.
Bell, Steven E J; Barrett, Lindsay J; Burns, D Thorburn; Dennis, Andrew C; Speers, S James
2003-11-01
Here we report the results of the largest study yet carried out on composition profiling of seized "ecstasy" tablets by Raman spectroscopy. Approximately 1500 tablets from different seizures in N. Ireland were analysed and even though practically all the tablets contained MDMA as active constituent, there were very significant differences in their Raman spectra, which were due to variations in both the nature and concentration of the excipients used and/or the degree of hydration of the MDMA. The ratios of the peak heights of the prominent drug bands at 810 cm(-1) and 716 cm(-1) (which vary with hydration state of the drug), and the drug band at 810 cm(-1) against the largest clearly discernible excipient band in the spectrum were measured for all the samples. It was found that there was sufficient variation in composition in the general sample population to make any matches between batches of tablets taken from different seizures significant, rather than the result of random chance. Despite the large number of different batches of tablets examined in this study, only two examples of indistinguishable sets of tablets were found and in only one of these had the two batches of tablets been seized at different times. Finally, the fact that there are many examples of batches of tablets (particularly in different batches taken from single seizures) in which the differences between each set are sufficiently small that they appear to arise only from random variations within a standard manufacturing method implies that, with more extensive data, it may be possible to recognize the "signature" of tablets prepared by major manufacturers.
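The peak-height ratios used above can be computed directly from a background-corrected spectrum; the sketch below simply takes the maximum intensity within a small window around each nominal band position and assumes the wavenumber axis covers both bands (baseline correction and normalization from the actual protocol are omitted).

import numpy as np

def peak_height(wavenumbers, intensities, center, window=5.0):
    # Maximum intensity within +/- window cm^-1 of the nominal band position.
    mask = np.abs(wavenumbers - center) <= window
    return float(intensities[mask].max())

def band_ratio(wavenumbers, intensities, band_a=810.0, band_b=716.0):
    # e.g. the 810/716 cm^-1 drug-band ratio, sensitive to the MDMA hydration state.
    return (peak_height(wavenumbers, intensities, band_a) /
            peak_height(wavenumbers, intensities, band_b))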
Duda, Catherine; Rajaram, Kumar; Barz, Christiane; Rosenthal, J Thomas
2013-01-01
There has been an increasing emphasis on health care efficiency and costs and on improving quality in health care settings such as hospitals or clinics. However, there has not been sufficient work on methods of improving access and customer service times in health care settings. The study develops a framework for improving access and customer service time for health care settings. In the framework, the operational concept of the bottleneck is synthesized with queuing theory to improve access and reduce customer service times without reduction in clinical quality. The framework is applied at the Ronald Reagan UCLA Medical Center to determine the drivers for access and customer service times and then provides guidelines on how to improve these drivers. Validation using simulation techniques shows significant potential for reducing customer service times and increasing access at this institution. Finally, the study provides several practice implications that could be used to improve access and customer service times without reduction in clinical quality across a range of health care settings from large hospitals to small community clinics.
Komatsu, F; Ishida, Y
1997-04-01
For chronic myelocytic leukemia patients with very high numbers of platelets, we describe an efficient method for the collection of peripheral blood stem cells (PBSC) using the Fresenius AS104 cell separator. In these patients, it is difficult to collect a sufficient number of PBSC, due to the platelet band interfering with the machine's red cell interface sensor. We, therefore, tried a manual adjustment of the device. The collection phase was set automatically. When the whole blood began to separate into the red cell layer and plasma (plus mononuclear cell) layer, the red cell interface setting of "7:1" was changed to "OFF," and the plasma pump flow rate was controlled manually in order to locate the interface position 1 cm from the outside wall of the centrifuge chamber. After the collection phase, the procedure was returned to the automatic setting. By repeating this procedure, we were able to collect large numbers of PBSC.
Minimal sufficient positive-operator valued measure on a separable Hilbert space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp
We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have the equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling neglecting null sets. We apply these results to discrete POVMs and information conservation conditions proposed by the author.
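For readers outside quantum information, the object under discussion is the following (standard definition, notation ours): a POVM assigns a positive operator to each measurable set of outcomes, is countably additive, sums to the identity, and yields outcome probabilities through the trace rule.

M : \mathcal{F} \to \mathcal{B}(\mathcal{H}), \quad M(E) \ge 0, \quad M(\Omega) = I, \quad
M\Bigl(\bigcup_i E_i\Bigr) = \sum_i M(E_i) \ \text{for disjoint } E_i, \qquad
\Pr(E \mid \rho) = \operatorname{tr}\bigl[\rho\, M(E)\bigr].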
Drug2Gene: an exhaustive resource to explore effectively the drug-target relation network.
Roider, Helge G; Pavlova, Nadia; Kirov, Ivaylo; Slavov, Stoyan; Slavov, Todor; Uzunov, Zlatyo; Weiss, Bertram
2014-03-11
Information about drug-target relations is at the heart of drug discovery. There are now dozens of databases providing drug-target interaction data with varying scope, and focus. Therefore, and due to the large chemical space, the overlap of the different data sets is surprisingly small. As searching through these sources manually is cumbersome, time-consuming and error-prone, integrating all the data is highly desirable. Despite a few attempts, integration has been hampered by the diversity of descriptions of compounds, and by the fact that the reported activity values, coming from different data sets, are not always directly comparable due to usage of different metrics or data formats. We have built Drug2Gene, a knowledge base, which combines the compound/drug-gene/protein information from 19 publicly available databases. A key feature is our rigorous unification and standardization process which makes the data truly comparable on a large scale, allowing for the first time effective data mining in such a large knowledge corpus. As of version 3.2, Drug2Gene contains 4,372,290 unified relations between compounds and their targets most of which include reported bioactivity data. We extend this set with putative (i.e. homology-inferred) relations where sufficient sequence homology between proteins suggests they may bind to similar compounds. Drug2Gene provides powerful search functionalities, very flexible export procedures, and a user-friendly web interface. Drug2Gene v3.2 has become a mature and comprehensive knowledge base providing unified, standardized drug-target related information gathered from publicly available data sources. It can be used to integrate proprietary data sets with publicly available data sets. Its main goal is to be a 'one-stop shop' to identify tool compounds targeting a given gene product or for finding all known targets of a drug. Drug2Gene with its integrated data set of public compound-target relations is freely accessible without restrictions at http://www.drug2gene.com.
NASA Technical Reports Server (NTRS)
Sirlin, S. W.; Longman, R. W.; Juang, J. N.
1985-01-01
With a sufficiently great number of sensors and actuators, any finite dimensional dynamic system is identifiable on the basis of input-output data. It is presently indicated that, for conservative nongyroscopic linear mechanical systems, the number of sensors and actuators required for identifiability is very large, where 'identifiability' is understood as a unique determination of the mass and stiffness matrices. The required number of sensors and actuators drops by a factor of two, given a relaxation of the identifiability criterion so that identification can fail only if the system parameters being identified lie in a set of measure zero. When the mass matrix is known a priori, this additional information does not significantly affect the requirements for guaranteed identifiability, though the number of parameters to be determined is reduced by a factor of two.
Opposing Effects of Fasting Metabolism on Tissue Tolerance in Bacterial and Viral Inflammation
Wang, Andrew; Huen, Sarah C.; Luan, Harding H.; Yu, Shuang; Zhang, Cuiling; Gallezot, Jean-Dominique; Booth, Carmen J.; Medzhitov, Ruslan
2017-01-01
Summary Acute infections are associated with a set of stereotypic behavioral responses, including anorexia, lethargy, and social withdrawal. Although these so-called sickness behaviors are the most common and familiar symptoms of infections, their roles in host defense are largely unknown. Here we investigated the role of anorexia in models of bacterial and viral infections. We found that anorexia was protective while nutritional supplementation was detrimental in bacterial sepsis. Furthermore, glucose was necessary and sufficient for these effects. In contrast, nutritional supplementation protected against mortality from influenza infection and viral sepsis, while blocking glucose utilization was lethal. In both bacterial and viral models, these effects were largely independent of pathogen load and magnitude of inflammation. Instead, we identify opposing metabolic requirements tied to cellular stress adaptations critical for tolerance of differential inflammatory states. PMID:27610573
NASA Technical Reports Server (NTRS)
Billman, Kenneth W.; Gilbreath, William P.; Bowen, Stuart W.
1978-01-01
A system of orbiting, large-area, low mass density reflector satellites which provide nearly continuous solar energy to a world-distributed set of conversion sites is examined under the criteria for any potential new energy system: technical feasibility, significant and renewable energy impact, economic feasibility and social/political acceptability. Although many technical issues need further study, reasonable advances in space technology appear sufficient to implement the system. The enhanced insolation is shown to greatly improve the economic competitiveness of solar-electric generation to circa 1995 fossil/nuclear alternatives. The system is shown to have the potential for supplying a significant fraction of future domestic and world energy needs. Finally, the environmental and social issues, including a means for financing such a large shift to a world solar energy dependence, is addressed.
Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.
1977-01-01
Conditions are established under which a surjective bounded linear operator T from a Banach space X to a Banach space Y is a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population had the property that the sample mean was a sufficient statistic.
Ion accelerator systems for high power 30 cm thruster operation
NASA Technical Reports Server (NTRS)
Aston, G.
1982-01-01
Two- and three-grid accelerator systems for high power ion thruster operation were investigated. Two-grid translation tests show that, with over-compensation of the 30 cm thruster SHAG grid set spacing for the 30 cm thruster radial plasma density variation and with grid compensation only sufficient to maintain grid hole axial alignment, beam current gains as large as 50% can be realized. Three-grid translation tests performed with a simulated 30 cm thruster discharge chamber show that substantial beamlet steering can be reliably effected by decelerator grid translation only, at net-to-total voltage ratios as low as 0.05.
Merriman, Tony R; Choi, Hyon K; Dalbeth, Nicola
2014-05-01
Gout results from deposition of monosodium urate (MSU) crystals. Elevated serum urate concentrations (hyperuricemia) are not sufficient for the development of disease. Genome-wide association studies (GWAS) have identified 28 loci controlling serum urate levels. The largest genetic effects are seen in genes involved in the renal excretion of uric acid, with others being involved in glycolysis. Whereas much is understood about the genetic control of serum urate levels, little is known about the genetic control of inflammatory responses to MSU crystals. Extending knowledge in this area depends on recruitment of large, clinically ascertained gout sample sets suitable for GWAS. Copyright © 2014 Elsevier Inc. All rights reserved.
A laser technique for characterizing the geometry of plant canopies
NASA Technical Reports Server (NTRS)
Vanderbilt, V. C.; Silva, L. F.; Bauer, M. E.
1977-01-01
The interception of solar power by the canopy is investigated as a function of solar zenith angle (time), component of the canopy, and depth into the canopy. The projected foliage area, cumulative leaf area, and view factors within the canopy are examined as a function of the same parameters. Two systems are proposed that are capable of describing the geometrical aspects of a vegetative canopy and of operation in an automatic mode. Either system would provide sufficient data to yield a numerical map of the foliage area in the canopy. Both systems would involve the collection of large data sets in a short time period using minimal manpower.
Prospects and perspectives for development of a vaccine against herpes simplex virus infections.
McAllister, Shane C; Schleiss, Mark R
2014-11-01
Herpes simplex viruses 1 and 2 are human pathogens that lead to significant morbidity and mortality in certain clinical settings. The development of effective antiviral medications, however, has had little discernible impact on the epidemiology of these pathogens, largely because the majority of infections are clinically silent. Decades of work have gone into various candidate HSV vaccines, but to date none has demonstrated sufficient efficacy to warrant licensure. This review examines developments in HSV immunology and vaccine development published since 2010, and assesses the prospects for improved immunization strategies that may result in an effective, licensed vaccine in the near future.
Prospects and Perspectives for Development of a Vaccine Against Herpes Simplex Virus Infections
McAllister, Shane C.; Schleiss, Mark R.
2014-01-01
Herpes simplex viruses 1 and -2 are human pathogens that lead to significant morbidity and mortality in certain clinical settings. The development of effective antiviral medications, however, has had little discernible impact on the epidemiology of these pathogens, largely because the majority of infections are clinically silent. Decades of work have gone into various candidate HSV vaccines, but to date none has demonstrated sufficient efficacy to warrant licensure. This review examines developments in HSV immunology and vaccine development published since 2010, and assesses the prospects for improved immunization strategies that may result in an effective, licensed vaccine in the near future. PMID:25077372
Relation of pediatric blood lead levels to lead in gasoline.
Billick, I H; Curran, A S; Shier, D R
1980-01-01
Analysis of a large data set of pediatric blood lead levels collected in New York City (1970-1976) shows a highly significant association between geometric mean blood lead levels and the amount of lead present in gasoline sold during the same period. This association was observed for all age and ethnic groups studied, and it suggests that possible exposure pathways other than ambient air should be considered. Even without detailed knowledge of the exact exposure pathways, sufficient information now exists for policy analysis and decisions relevant to controls and standards related to lead in gasoline and its effect on subsets of the population. PMID:7389685
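The kind of association reported above, between geometric-mean blood lead and gasoline lead, is naturally examined on the log scale. The sketch below fits a log-linear relation to purely synthetic series (the values are generated on the spot and are not the New York City data).

import numpy as np

rng = np.random.default_rng(0)
gasoline_lead = np.linspace(350.0, 150.0, 28)                          # synthetic series
blood_lead_gm = np.exp(1.8 + 0.0035 * gasoline_lead + rng.normal(0, 0.02, 28))

slope, intercept = np.polyfit(gasoline_lead, np.log(blood_lead_gm), 1)
r = np.corrcoef(gasoline_lead, np.log(blood_lead_gm))[0, 1]
print(f"slope = {slope:.4f} per unit gasoline lead, r = {r:.3f}")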
A slewing control experiment for flexible structures
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Horta, L. G.; Robertshaw, H. H.
1985-01-01
A hardware set-up has been developed to study slewing control for flexible structures including a steel beam and a solar panel. The linear optimal terminal control law is used to design active controllers which are implemented in an analog computer. The objective of this experiment is to demonstrate and verify the dynamics and optimal terminal control laws as applied to flexible structures for large angle maneuver. Actuation is provided by an electric motor while sensing is given by strain gages and angle potentiometer. Experimental measurements are compared with analytical predictions in terms of modal parameters of the system stability matrix and sufficient agreement is achieved to validate the theory.
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.
Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets.
Cole, J B; Newman, S; Foertter, F; Aguilar, I; Coffey, M
2012-03-01
Modern animal breeding data sets are large and getting larger, due in part to recent availability of high-density SNP arrays and cheap sequencing technology. High-performance computing methods for efficient data warehousing and analysis are under development. Financial and security considerations are important when using shared clusters. Sound software engineering practices are needed, and it is better to use existing solutions when possible. Storage requirements for genotypes are modest, although full-sequence data will require greater storage capacity. Storage requirements for intermediate and results files for genetic evaluations are much greater, particularly when multiple runs must be stored for research and validation studies. The greatest gains in accuracy from genomic selection have been realized for traits of low heritability, and there is increasing interest in new health and management traits. The collection of sufficient phenotypes to produce accurate evaluations may take many years, and high-reliability proofs for older bulls are needed to estimate marker effects. Data mining algorithms applied to large data sets may help identify unexpected relationships in the data, and improved visualization tools will provide insights. Genomic selection using large data requires a lot of computing power, particularly when large fractions of the population are genotyped. Theoretical improvements have made possible the inversion of large numerator relationship matrices, permitted the solving of large systems of equations, and produced fast algorithms for variance component estimation. Recent work shows that single-step approaches combining BLUP with a genomic relationship (G) matrix have similar computational requirements to traditional BLUP, and the limiting factor is the construction and inversion of G for many genotypes. A naïve algorithm for creating G for 14,000 individuals required almost 24 h to run, but custom libraries and parallel computing reduced that to 15 m. Large data sets also create challenges for the delivery of genetic evaluations that must be overcome in a way that does not disrupt the transition from conventional to genomic evaluations. Processing time is important, especially as real-time systems for on-farm decisions are developed. The ultimate value of these systems is to decrease time-to-results in research, increase accuracy in genomic evaluations, and accelerate rates of genetic improvement.
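Constructing the genomic relationship matrix G mentioned above is itself straightforward; what the passage emphasizes is doing it fast for many genotypes. The sketch below uses the common VanRaden-style construction (centred allele counts scaled by heterozygosity) on toy data, with no claim that this matches the custom parallel libraries referred to in the text.

import numpy as np

def genomic_relationship_matrix(genotypes):
    # genotypes: (individuals x markers) allele counts coded 0/1/2.
    p = genotypes.mean(axis=0) / 2.0                  # allele frequency per marker
    Z = genotypes - 2.0 * p                           # centre by expected allele count
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))    # VanRaden's first method

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(500, 5_000)).astype(float)   # 500 animals, 5,000 SNPs (toy)
G = genomic_relationship_matrix(M)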
On the generation of magnetized collisionless shocks in the large plasma device
NASA Astrophysics Data System (ADS)
Schaeffer, D. B.; Winske, D.; Larson, D. J.; Cowee, M. M.; Constantin, C. G.; Bondarenko, A. S.; Clark, S. E.; Niemann, C.
2017-04-01
Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases, the shocks can be modeled as the result of the expansion of a magnetic piston through a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, background magnetic field, and ambient plasma properties that allow us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. The results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.
A new tool called DISSECT for analysing large genomic data sets using a Big Data approach
Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert
2015-01-01
Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010
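The mixed-linear-model analysis used for prediction in the passage above has the standard schematic form (our notation; DISSECT's exact parameterization may differ):

\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{g} + \boldsymbol{\varepsilon},
\qquad
\mathbf{g} \sim \mathcal{N}(\mathbf{0}, \mathbf{G}\sigma_g^2),
\qquad
\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}\sigma_e^2),

where G is a genomic relationship matrix built from the SNP genotypes; phenotypes of new individuals are predicted from the best linear unbiased prediction of g given the estimated variance components.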
On the generation of magnetized collisionless shocks in the large plasma device
Schaeffer, D. B.; Winske, D.; Larson, D. J.; ...
2017-03-22
Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases, the shocks can be modeled as the result of the expansion of a magnetic piston through a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, background magnetic field, and ambient plasma properties that allow us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. Here, the results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.
What shapes stellar metallicity gradients of massive galaxies at large radii?
NASA Astrophysics Data System (ADS)
Hirschmann, Michaela
2017-03-01
We investigate the differential impact of physical mechanisms, mergers and internal energetic phenomena, on the evolution of stellar metallicity gradients in massive, present-day galaxies, employing sets of high-resolution, cosmological zoom simulations. We demonstrate that negative metallicity gradients at large radii (>2Reff) originate from the accretion of metal-poor stellar systems. At larger radii, galaxies typically become more dominated by stars accreted from satellite galaxies in major and minor mergers. However, only strong galactic, stellar-driven winds can sufficiently reduce the metallicity content of the accreted stars to realistically steepen the outer metallicity gradients in agreement with observations. In contrast, the gradients of the models without winds are inconsistent with observations. Moreover, we discuss the impact of additional AGN feedback. This analysis greatly highlights the importance of both energetic processes and merger events for stellar population properties of massive galaxies at large radii. Our results are expected to significantly contribute to the interpretation of current and upcoming IFU surveys (e.g. MaNGA, CALIFA).
Einfeld, Stewart L; Tonge, Bruce J; Clarke, Kristina S
2013-05-01
To review the recent evidence regarding early intervention and prevention studies for children with developmental disabilities and behaviour problems from 2011 to 2013. Recent advances in the field are discussed and important areas for future research are highlighted. Recent reviews and studies highlight the utility of antecedent interventions and skills training interventions for reducing behaviour problems. There is preliminary evidence for the effectiveness of parent training interventions when delivered in minimally sufficient formats or in clinical settings. Two recent studies have demonstrated the utility of behavioural interventions for children with genetic causes of disability. Various forms of behavioural and parent training interventions are effective at reducing the behaviour problems in children with developmental disabilities. However, research on prevention and early intervention continues to be relatively scarce. Further large-scale dissemination studies and effectiveness studies in clinical or applied settings are needed.
Bayesian learning of visual chunks by human observers
Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté
2008-01-01
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
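A minimal sketch of the model-comparison idea summarised above: two candidate generative models for a pair of shapes observed across binary scenes, one treating the pair as a single chunk and one treating the shapes as independent, compared by their marginal likelihoods. The Beta-Bernoulli priors, the all-or-nothing chunk likelihood and the example scenes are assumptions for illustration only, not the authors' ideal-observer model.

```python
# Toy Bayesian model comparison between a "chunk" model and an "independent"
# model for two shapes across binary scenes. Priors and data are invented.
import numpy as np
from scipy.special import betaln

def log_marginal_bernoulli(k, n, a=1.0, b=1.0):
    """Log marginal likelihood of k successes in n trials, Beta(a, b) prior."""
    return betaln(k + a, n - k + b) - betaln(a, b)

def log_bayes_factor_chunk(x1, x2):
    """log BF of 'chunk' (shapes appear as one unit) vs 'independent'."""
    x1, x2 = np.asarray(x1), np.asarray(x2)
    n = len(x1)
    if np.any(x1 != x2):
        log_chunk = -np.inf          # chunk model cannot explain mismatched scenes
    else:
        log_chunk = log_marginal_bernoulli(x1.sum(), n)
    log_indep = (log_marginal_bernoulli(x1.sum(), n)
                 + log_marginal_bernoulli(x2.sum(), n))
    return log_chunk - log_indep

scenes_chunked = ([1, 1, 0, 1, 0, 1, 1, 0], [1, 1, 0, 1, 0, 1, 1, 0])
scenes_indep = ([1, 1, 0, 1, 0, 1, 1, 0], [0, 1, 1, 1, 0, 0, 1, 1])
print(log_bayes_factor_chunk(*scenes_chunked))   # positive: favours the chunk
print(log_bayes_factor_chunk(*scenes_indep))     # -inf here: rejects the chunk
```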
A Versatile Mounting Method for Long Term Imaging of Zebrafish Development.
Hirsinger, Estelle; Steventon, Ben
2017-01-26
Zebrafish embryos offer an ideal experimental system to study complex morphogenetic processes due to their ease of accessibility and optical transparency. In particular, posterior body elongation is an essential process in embryonic development by which multiple tissue deformations act together to direct the formation of a large part of the body axis. In order to observe this process by long-term time-lapse imaging it is necessary to utilize a mounting technique that allows sufficient support to maintain samples in the correct orientation during transfer to the microscope and acquisition. In addition, the mounting must also provide sufficient freedom of movement for the outgrowth of the posterior body region without affecting its normal development. Finally, there must be a certain degree of versatility in the mounting method to allow imaging on diverse imaging set-ups. Here, we present a mounting technique for imaging the development of posterior body elongation in the zebrafish D. rerio. This technique involves mounting embryos such that the head and yolk sac regions are almost entirely included in agarose, while leaving out the posterior body region to elongate and develop normally. We will show how this can be adapted for upright, inverted and vertical light-sheet microscopy set-ups. While this protocol focuses on mounting embryos for imaging of the posterior body, it could easily be adapted for the live imaging of multiple aspects of zebrafish development.
Ahene, Ago; Calonder, Claudio; Davis, Scott; Kowalchick, Joseph; Nakamura, Takahiro; Nouri, Parya; Vostiar, Igor; Wang, Yang; Wang, Jin
2014-01-01
In recent years, the use of automated sample handling instrumentation has come to the forefront of bioanalytical analysis in order to ensure greater assay consistency and throughput. Since robotic systems are becoming part of everyday analytical procedures, the need for consistent guidance across the pharmaceutical industry has become increasingly important. Pre-existing regulations do not go into sufficient detail regarding how to handle the use of robotic systems with analytical methods, especially large molecule bioanalysis. As a result, Global Bioanalytical Consortium (GBC) Group L5 has put forth specific recommendations for the validation, qualification, and use of robotic systems as part of large molecule bioanalytical analyses in the present white paper. The guidelines presented can be followed to ensure that there is a consistent, transparent methodology that will ensure that robotic systems can be effectively used and documented in a regulated bioanalytical laboratory setting. This will allow for consistent use of robotic sample handling instrumentation as part of large molecule bioanalysis across the globe.
Complex extreme learning machine applications in terahertz pulsed signals feature sets.
Yin, X-X; Hadjiloucas, S; Zhang, Y
2014-11-01
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. Firstly a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples that, although they have fairly indistinguishable features in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude as well as the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object as would be required within a tomographic setting and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
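The following sketch shows the general shape of a complex-valued extreme learning machine of the kind discussed above: random complex hidden-layer weights, a nonlinear activation, and output weights obtained by a least-squares solve. The activation choice, layer size and synthetic complex "spectra" are assumptions for illustration and do not reproduce the published algorithm or its kernel study.

```python
# Complex-valued extreme learning machine sketch (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

def train_complex_elm(X, Y, n_hidden=200):
    """X: (n_samples, n_features) complex features. Y: one-hot targets."""
    W = (rng.standard_normal((X.shape[1], n_hidden))
         + 1j * rng.standard_normal((X.shape[1], n_hidden)))
    b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                               # complex hidden layer
    beta, *_ = np.linalg.lstsq(H, Y.astype(complex), rcond=None)
    return W, b, beta

def predict_complex_elm(X, W, b, beta):
    scores = np.tanh(X @ W + b) @ beta
    return np.argmax(scores.real, axis=1)                # decide on real part

# toy two-class example with complex "spectra" (amplitude + i*phase features)
X = rng.standard_normal((100, 30)) + 1j * rng.standard_normal((100, 30))
y = (X[:, 0].real > 0).astype(int)
Y = np.eye(2)[y]
W, b, beta = train_complex_elm(X, Y)
print((predict_complex_elm(X, W, b, beta) == y).mean())
```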
In-flight results of adaptive attitude control law for a microsatellite
NASA Astrophysics Data System (ADS)
Pittet, C.; Luzi, A. R.; Peaucelle, D.; Biannic, J.-M.; Mignot, J.
2015-06-01
Because satellites usually do not experience large changes of mass, center of gravity or inertia in orbit, linear time invariant (LTI) controllers have been widely used to control their attitude. But, as the pointing requirements become more stringent and the satellite's structure more complex, with large steerable and/or deployable appendages and flexible modes occurring in the control bandwidth, one unique LTI controller is no longer sufficient. One solution consists in designing several LTI controllers, one for each set point, but the switching between them is difficult to tune and validate. Another interesting solution is to use adaptive controllers, which could present at least two advantages: first, as the controller automatically and continuously adapts to the set point without changing its structure, no switching logic is needed in the software; second, performance and stability of the closed-loop system can be assessed directly on the whole flight domain. To evaluate the real benefits of adaptive control for satellites, in terms of design, validation and performance, CNES selected it as an end-of-life experiment on the PICARD microsatellite. This paper describes the design, validation and in-flight results of the new adaptive attitude control law, compared to the nominal control law.
Harnessing the Bethe free energy†
Bapst, Victor
2016-01-01
A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k-SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzakala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ω^n for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178
Small parameters in infrared quantum chromodynamics
NASA Astrophysics Data System (ADS)
Peláez, Marcela; Reinosa, Urko; Serreau, Julien; Tissier, Matthieu; Wschebor, Nicolás
2017-12-01
We study the long-distance properties of quantum chromodynamics in the Landau gauge in an expansion in powers of the three-gluon, four-gluon, and ghost-gluon couplings, but without expanding in the quark-gluon coupling. This is motivated by two observations. First, the gauge sector is well described by perturbation theory in the context of a phenomenological model with a massive gluon. Second, the quark-gluon coupling is significantly larger than those in the gauge sector at large distances. In order to resum the contributions of the remaining infinite set of QED-like diagrams, we further expand the theory in 1/Nc, where Nc is the number of colors. At leading order, this double expansion leads to the well-known rainbow approximation for the quark propagator. We take advantage of the systematic expansion to get a renormalization-group improvement of the rainbow resummation. A simple numerical solution of the resulting coupled set of equations reproduces the phenomenology of the spontaneous chiral symmetry breaking: for sufficiently large quark-gluon coupling constant, the constituent quark mass saturates when its valence mass approaches zero. We find very good agreement with lattice data for the scalar part of the propagator and explain why the vectorial part is poorly reproduced.
Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters
NASA Technical Reports Server (NTRS)
Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.
2011-01-01
We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolutions is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 x 8 Goddard TES x-ray calorimeter array and a 2 x 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.
ηc Hadroproduction at Large Hadron Collider Challenges NRQCD Factorization
NASA Astrophysics Data System (ADS)
Butenschoen, Mathias; He, Zhi-Guo; Kniehl, Bernd A.
2017-03-01
We report on our analysis [1] of prompt ηc meson production, measured by the LHCb Collaboration at the Large Hadron Collider, within the framework of non-relativistic QCD (NRQCD) factorization up to the sub-leading order in both the QCD coupling constant αs and the relative velocity v of the bound heavy quarks. We thereby convert various sets of J/ψ and χc,J long-distance matrix elements (LDMEs), determined by different groups in J/ψ and χc,J yield and polarization fits, to ηc and hc production LDMEs making use of the NRQCD heavy quark spin symmetry. The resulting predictions for ηc hadroproduction in all cases greatly overshoot the LHCb data, while the color-singlet model contributions alone would indeed be sufficient. We investigate the consequences for the universality of the LDMEs, and show how the observed tensions remain in follow-up works by other groups.
Locating multiple diffusion sources in time varying networks from sparse observations.
Hu, Zhao-Long; Shen, Zhesi; Cao, Shinan; Podobnik, Boris; Yang, Huijie; Wang, Wen-Xu; Lai, Ying-Cheng
2018-02-08
Data based source localization in complex networks has a broad range of applications. Despite recent progress, locating multiple diffusion sources in time varying networks remains an outstanding problem. Bridging structural observability and sparse signal reconstruction theories, we develop a general framework to locate diffusion sources in time varying networks based solely on sparse data from a small set of messenger nodes. A general finding is that large degree nodes produce more valuable information than small degree nodes, a result that contrasts with that for static networks. Choosing large degree nodes as the messengers, we find that sparse observations from a few such nodes are often sufficient for any number of diffusion sources to be located for a variety of model and empirical networks. Counterintuitively, sources in more rapidly varying networks can be identified more readily with fewer required messenger nodes.
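As a rough illustration of the sparse-reconstruction ingredient mentioned above, the toy example below recovers a sparse source vector from readings at a small set of messenger nodes using an L1-regularised fit. The linear propagation operator, network size and regularisation strength are invented for the example; the authors' framework for time-varying networks is considerably more involved.

```python
# Toy sparse source recovery from a few messenger-node observations.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_nodes, n_messengers, n_sources = 200, 30, 3

# invented linear "propagation" operator mapping sources to messenger readings
A = rng.random((n_messengers, n_nodes))
x_true = np.zeros(n_nodes)
x_true[rng.choice(n_nodes, n_sources, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n_messengers)

x_hat = Lasso(alpha=0.01, positive=True, max_iter=10000).fit(A, y).coef_
print("true sources:     ", np.flatnonzero(x_true))
print("recovered (top 3):", np.argsort(x_hat)[-3:][::-1])
```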
Using a Cray Y-MP as an array processor for a RISC Workstation
NASA Technical Reports Server (NTRS)
Lamaster, Hugh; Rogallo, Sarah J.
1992-01-01
As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980s, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment which demonstrates that matrix multiplication can be executed remotely on a large system to speed the execution over that experienced on a workstation is described.
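A back-of-the-envelope model of the amortisation argument above: offloading a matrix product pays once the O(n^3) arithmetic saved outweighs the O(n^2) data that must cross the network plus the call latency. All machine and network rates below are invented for illustration, not measurements from the experiment described.

```python
# When does remote matrix multiplication beat local execution? (toy model)
def remote_wins(n, local_gflops=0.5, remote_gflops=20.0,
                bandwidth_mb_s=10.0, latency_s=0.01, bytes_per_word=8):
    flops = 2.0 * n**3                      # multiply-add count for C = A @ B
    words = 3.0 * n**2                      # transfer A, B and the result C
    t_local = flops / (local_gflops * 1e9)
    t_remote = (latency_s
                + words * bytes_per_word / (bandwidth_mb_s * 1e6)
                + flops / (remote_gflops * 1e9))
    return t_remote < t_local, t_local, t_remote

for n in (100, 500, 1000, 2000):
    wins, t_loc, t_rem = remote_wins(n)
    print(f"n={n:5d}  local={t_loc:8.3f}s  remote={t_rem:8.3f}s  offload={wins}")
```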
CKM pattern from localized generations in extra dimension
NASA Astrophysics Data System (ADS)
Matti, C.
2006-10-01
We revisit the issue of the quark masses and mixing angles in the framework of a large extra dimension. We consider three identical standard model families resulting from higher-dimensional fields localized on different branes embedded in a large extra dimension. Furthermore, we use a decaying profile in the bulk different from previous works. With the Higgs field also localized on a different brane, the hierarchy of masses between the families results from their different positions in the extra space. When the left-handed doublet and the right-handed singlets are localized with different couplings on the branes, we find a set of brane locations in one extra dimension which leads to the correct quark masses and mixing angles with a sufficient strength of CP violation. We see that the decaying profile of the Higgs field plays a crucial role in producing the hierarchies in a rather natural way.
Planning multi-arm screening studies within the context of a drug development program
Wason, James M S; Jaki, Thomas; Stallard, Nigel
2013-01-01
Screening trials are small trials used to decide whether an intervention is sufficiently promising to warrant a large confirmatory trial. Previous literature examined the situation where treatments are tested sequentially until one is considered sufficiently promising to take forward to a confirmatory trial. An important consideration for sponsors of clinical trials is how screening trials should be planned to maximize the efficiency of the drug development process. It has been found previously that small screening trials are generally the most efficient. In this paper we consider the design of screening trials in which multiple new treatments are tested simultaneously. We derive analytic formulae for the expected number of patients until a successful treatment is found, and propose methodology to search for the optimal number of treatments, and optimal sample size per treatment. We compare designs in which only the best treatment proceeds to a confirmatory trial and designs in which multiple treatments may proceed to a multi-arm confirmatory trial. We find that inclusion of a large number of treatments in the screening trial is optimal when only one treatment can proceed, and a smaller number of treatments is optimal when more than one can proceed. The designs we investigate are compared on a real-life set of screening designs. Copyright © 2013 John Wiley & Sons, Ltd. PMID:23529936
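The Monte Carlo sketch below illustrates the quantity the paper treats analytically: the expected number of patients used in screening before some treatment clears a simple go/no-go threshold, as a function of how many arms are screened at once. The effect-size distribution, threshold, per-arm sample size and decision rule are assumptions for illustration, not the paper's formulae.

```python
# Monte Carlo estimate of expected patients used before a "successful" arm.
import numpy as np

rng = np.random.default_rng(2)

def expected_patients(k_arms, n_per_arm, p_effective=0.2, effect=0.5,
                      threshold=0.3, sigma=1.0, n_sim=5000):
    """Screen k_arms new treatments at a time, n_per_arm patients each;
    repeat with fresh treatments until some arm's observed mean effect
    exceeds `threshold`. Returns the mean number of patients used."""
    totals = []
    for _ in range(n_sim):
        used = 0
        while True:
            true_eff = np.where(rng.random(k_arms) < p_effective, effect, 0.0)
            obs = true_eff + rng.standard_normal(k_arms) * sigma / np.sqrt(n_per_arm)
            used += k_arms * n_per_arm
            if np.any(obs > threshold):
                break
        totals.append(used)
    return np.mean(totals)

for k in (1, 2, 4, 8):
    print(k, "arms:", round(expected_patients(k, n_per_arm=30), 1), "patients")
```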
Allowable carbon emissions lowered by multiple climate targets.
Steinacher, Marco; Joos, Fortunat; Stocker, Thomas F
2013-07-11
Climate targets are designed to inform policies that would limit the magnitude and impacts of climate change caused by anthropogenic emissions of greenhouse gases and other substances. The target that is currently recognized by most world governments places a limit of two degrees Celsius on the global mean warming since preindustrial times. This would require large sustained reductions in carbon dioxide emissions during the twenty-first century and beyond. Such a global temperature target, however, is not sufficient to control many other quantities, such as transient sea level rise, ocean acidification and net primary production on land. Here, using an Earth system model of intermediate complexity (EMIC) in an observation-informed Bayesian approach, we show that allowable carbon emissions are substantially reduced when multiple climate targets are set. We take into account uncertainties in physical and carbon cycle model parameters, radiative efficiencies, climate sensitivity and carbon cycle feedbacks along with a large set of observational constraints. Within this framework, we explore a broad range of economically feasible greenhouse gas scenarios from the integrated assessment community to determine the likelihood of meeting a combination of specific global and regional targets under various assumptions. For any given likelihood of meeting a set of such targets, the allowable cumulative emissions are greatly reduced from those inferred from the temperature target alone. Therefore, temperature targets alone are unable to comprehensively limit the risks from anthropogenic emissions.
BAMSI: a multi-cloud service for scalable distributed filtering of massive genome data.
Ausmees, Kristiina; John, Aji; Toor, Salman Z; Hellander, Andreas; Nettelblad, Carl
2018-06-26
The advent of next-generation sequencing (NGS) has made whole-genome sequencing of cohorts of individuals a reality. Primary datasets of raw or aligned reads of this sort can get very large. For scientific questions where curated called variants are not sufficient, the sheer size of the datasets makes analysis prohibitively expensive. In order to make re-analysis of such data feasible without the need to have access to a large-scale computing facility, we have developed a highly scalable, storage-agnostic framework, an associated API and an easy-to-use web user interface to execute custom filters on large genomic datasets. We present BAMSI, a Software-as-a-Service (SaaS) solution for filtering of the 1000 Genomes phase 3 set of aligned reads, with the possibility of extension and customization to other sets of files. Unique to our solution is the capability of simultaneously utilizing many different mirrors of the data to increase the speed of the analysis. In particular, if the data is available in private or public clouds - an increasingly common scenario for both academic and commercial cloud providers - our framework allows for seamless deployment of filtering workers close to data. We show results indicating that such a setup improves the horizontal scalability of the system, and present a possible use case of the framework by performing an analysis of structural variation in the 1000 Genomes data set. BAMSI constitutes a framework for efficient filtering of large genomic data sets that is flexible in the use of compute as well as storage resources. The data resulting from the filter is assumed to be greatly reduced in size, and can easily be downloaded or routed into e.g. a Hadoop cluster for subsequent interactive analysis using Hive, Spark or similar tools. In this respect, our framework also suggests a general model for making very large datasets of high scientific value more accessible by offering the possibility for organizations to share the cost of hosting data on hot storage, without compromising the scalability of downstream analysis.
A Semisupervised Support Vector Machines Algorithm for BCI Systems
Qin, Jianzhao; Li, Yuanqing; Sun, Wei
2007-01-01
As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of the brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training the semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
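A hedged sketch of a batch-mode self-training loop in the general spirit of the semisupervised SVM described above: fit on the small labeled set, then repeatedly absorb the most confident pseudo-labeled batch from the unlabeled pool. The synthetic features, batch size and confidence rule are illustrative assumptions; this is not the authors' exact algorithm or their CSP-based feature extraction.

```python
# Batch-mode self-training with an SVM (illustrative, not the published method).
import numpy as np
from sklearn.svm import SVC

def self_training_svm(X_lab, y_lab, X_unlab, batch=20, rounds=5):
    clf = SVC(kernel="linear", probability=True, random_state=0)
    X_pool = X_unlab.copy()
    while rounds > 0 and len(X_pool) > 0:
        clf.fit(X_lab, y_lab)
        conf = clf.predict_proba(X_pool).max(axis=1)
        take = np.argsort(conf)[-batch:]                 # most confident batch
        pseudo = clf.predict(X_pool[take])               # pseudo-labels
        X_lab = np.vstack([X_lab, X_pool[take]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_pool = np.delete(X_pool, take, axis=0)
        rounds -= 1
    return clf.fit(X_lab, y_lab)

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 10)) + np.repeat([[0.8], [-0.8]], 150, axis=0)
y = np.repeat([0, 1], 150)
idx = rng.permutation(300)
clf = self_training_svm(X[idx[:20]], y[idx[:20]], X[idx[20:]])
print("training-set accuracy:", clf.score(X, y))
```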
A pilot cluster randomized controlled trial of structured goal-setting following stroke.
Taylor, William J; Brown, Melanie; Levack, William; McPherson, Kathryn M; Reed, Kirk; Dean, Sarah G; Weatherall, Mark
2012-04-01
To determine the feasibility, the cluster design effect and the variance and minimal clinically important difference in the primary outcome in a pilot study of a structured approach to goal-setting. A cluster randomized controlled trial. Inpatient rehabilitation facilities. People who were admitted to inpatient rehabilitation following stroke who had sufficient cognition to engage in structured goal-setting and complete the primary outcome measure. Structured goal elicitation using the Canadian Occupational Performance Measure. Quality of life at 12 weeks using the Schedule for Individualised Quality of Life (SEIQOL-DW), Functional Independence Measure, Short Form 36 and Patient Perception of Rehabilitation (measuring satisfaction with rehabilitation). Assessors were blinded to the intervention. Four rehabilitation services and 41 patients were randomized. We found high values of the intraclass correlation for the outcome measures (ranging from 0.03 to 0.40) and high variance of the SEIQOL-DW (SD 19.6) in relation to the minimally important difference of 2.1, leading to impractically large sample size requirements for a cluster randomized design. A cluster randomized design is not a practical means of avoiding contamination effects in studies of inpatient rehabilitation goal-setting. Other techniques for coping with contamination effects are necessary.
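A short worked example of why the reported figures imply impractically large cluster-randomised samples, combining the standard two-sample size formula with the usual design effect 1 + (m-1)*ICC. Only SD = 19.6, the minimally important difference of 2.1 and the ICC range come from the abstract; the cluster size, power and significance level are assumptions for illustration.

```python
# Rough sample-size arithmetic for a cluster-randomised design (assumed settings).
from math import ceil

def individually_randomised_n(sd, mid, z_alpha=1.96, z_beta=0.84):
    """Patients per arm for a two-sided 5% test with 80% power."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / mid) ** 2)

def cluster_randomised_n(sd, mid, icc, cluster_size):
    deff = 1 + (cluster_size - 1) * icc        # design effect
    return ceil(individually_randomised_n(sd, mid) * deff)

n_ind = individually_randomised_n(19.6, 2.1)
print("per arm, individually randomised:", n_ind)
for icc in (0.03, 0.40):
    print(f"per arm, clusters of 10, ICC={icc}:",
          cluster_randomised_n(19.6, 2.1, icc, 10))
```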
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total number of 309 advanced spring barley lines tested at two locations each with three replicates were phenotyped and each line was genotyped by Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full and half-sib-families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using less than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate a minimum marker set of 1,000 to decrease the risk of low prediction accuracy for some traits or some families.
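A minimal sketch of the kind of genomic-prediction cross-validation described above: a ridge-regression (GBLUP-like) model on SNP markers, scored by leave-one-out prediction accuracy, here the correlation between predicted and observed phenotypes. The simulated genotypes, marker count and penalty are assumptions for illustration, not the study's data or model.

```python
# Leave-one-out genomic prediction with ridge regression on simulated markers.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(4)
n_lines, n_markers = 300, 1000
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes
beta = rng.standard_normal(n_markers) * 0.05
y = X @ beta + rng.standard_normal(n_lines)      # phenotype = genetics + noise

preds = np.empty(n_lines)
for train, test in LeaveOneOut().split(X):
    model = Ridge(alpha=100.0).fit(X[train], y[train])
    preds[test] = model.predict(X[test])
print("LOO prediction accuracy (r):", round(np.corrcoef(preds, y)[0, 1], 2))
```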
Performance Measures for Adaptive Decisioning Systems
1991-09-11
set to hypothesis space mapping best approximates the known map. Two assumptions, a sufficiently representative training set and the ability of the...successful prediction of LINEXT performance. The LINEXT algorithm above performs the decision space mapping on the training-set elements exactly. For a
NASA Astrophysics Data System (ADS)
The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas
2016-11-01
Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method—grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein—in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
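A small sketch of the protein-inference rule summarised above: proteins mapping to identical sets of theoretical peptides are collapsed into one group, and each group is scored by its single best-scoring peptide. The toy digest and scores are invented; this shows only the grouping idea, not Percolator's implementation.

```python
# Group proteins by identical theoretical peptide sets; score by best peptide.
from collections import defaultdict

def infer_protein_groups(protein_to_peptides, peptide_scores):
    groups = defaultdict(list)
    for protein, peptides in protein_to_peptides.items():
        groups[frozenset(peptides)].append(protein)     # identical peptide sets
    scored = []
    for peptide_set, proteins in groups.items():
        best = max(peptide_scores.get(pep, float("-inf")) for pep in peptide_set)
        scored.append((sorted(proteins), best))
    return sorted(scored, key=lambda item: item[1], reverse=True)

protein_to_peptides = {
    "P1": {"AAK", "LLR", "GYK"},
    "P2": {"AAK", "LLR", "GYK"},     # indistinguishable from P1
    "P3": {"MMR", "TTK"},
}
peptide_scores = {"AAK": 3.2, "LLR": 1.1, "GYK": 2.5, "MMR": 4.0, "TTK": 0.2}
for proteins, score in infer_protein_groups(protein_to_peptides, peptide_scores):
    print(proteins, round(score, 2))
```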
Hardware Evaluation of the Horizontal Exercise Fixture with Weight Stack
NASA Technical Reports Server (NTRS)
Newby, Nate; Leach, Mark; Fincke, Renita; Sharp, Carwyn
2009-01-01
HEF with weight stack seems to be a very sturdy and reliable exercise device that should function well in a bed rest training setting. A few improvements should be made to both the hardware and software to improve usage efficiency, but largely, this evaluation has demonstrated HEF's robustness. The hardware offers loading to muscles, bones, and joints, potentially sufficient to mitigate the loss of muscle mass and bone mineral density during long-duration bed rest campaigns. With some minor modifications, the HEF with weight stack equipment provides the best currently available means of performing squat, heel raise, prone row, bench press, and hip flexion/extension exercise in a supine orientation.
Recommending a minimum English proficiency standard for entry-level nursing.
O'Neill, Thomas R; Marks, Casey; Wendt, Anne
2005-01-01
The purpose of this research was to provide sufficient information to the National Council of State Boards of Nursing (NCSBN) to make a defensible recommended passing standard for English proficiency. This standard was based upon the Test of English as a Foreign Language (TOEFL). A large panel of nurses and nurse regulators (N = 25) was convened to determine how much English proficiency is required to be minimally competent as an entry-level nurse. Two standard setting procedures were combined to produce recommendations for each panelist. In conjunction with collateral information, these recommendations were reviewed by the NCSBN Examination Committee, which decided upon an NCSBN recommended standard, a TOEFL score of 220.
Signal velocity in oscillator arrays
NASA Astrophysics Data System (ADS)
Cantos, C. E.; Veerman, J. J. P.; Hammond, D. K.
2016-09-01
We investigate a system of coupled oscillators on the circle, which arises from a simple model for behavior of large numbers of autonomous vehicles where the acceleration of each vehicle depends on the relative positions and velocities between itself and a set of local neighbors. After describing necessary and sufficient conditions for asymptotic stability, we derive expressions for the phase velocity of propagation of disturbances in velocity through this system. We show that the high frequencies exhibit damping, which implies existence of well-defined signal velocities c+ > 0 and c- < 0 such that low frequency disturbances travel through the flock as f+(x - c+t) in the direction of increasing agent numbers and f-(x - c-t) in the other.
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lacks sufficient follow-up information. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method is presented that combines a filtering strategy within the transductive framework with a progressive labeling strategy. The progressive labeling strategy does not need to consider the distribution of labeled samples in order to evaluate the distribution of unlabeled samples, and can effectively solve the problem of estimating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential for cancer prediction based on gene expression.
NASA Astrophysics Data System (ADS)
Győrffy, Werner; Knizia, Gerald; Werner, Hans-Joachim
2017-12-01
We present the theory and algorithms for computing analytical energy gradients for explicitly correlated second-order Møller-Plesset perturbation theory (MP2-F12). The main difficulty in F12 gradient theory arises from the large number of two-electron integrals for which effective two-body density matrices and integral derivatives need to be calculated. For efficiency, the density fitting approximation is used for evaluating all two-electron integrals and their derivatives. The accuracies of various previously proposed MP2-F12 approximations [3C, 3C(HY1), 3*C(HY1), and 3*A] are demonstrated by computing equilibrium geometries for a set of molecules containing first- and second-row elements, using double-ζ to quintuple-ζ basis sets. Generally, the convergence of the bond lengths and angles with respect to the basis set size is strongly improved by the F12 treatment, and augmented triple-ζ basis sets are sufficient to closely approach the basis set limit. The results obtained with the different approximations differ only very slightly. This paper is the first step towards analytical gradients for coupled-cluster singles and doubles with perturbative treatment of triple excitations, which will be presented in the second part of this series.
Running Out of Time: Why Elephants Don't Gallop
NASA Astrophysics Data System (ADS)
Noble, Julian V.
2001-11-01
The physics of high speed running implies that galloping becomes impossible for sufficiently large animals. Some authors have suggested that this is because the strength/weight ratio decreases with size, eventually rendering large animals excessively liable to injury when they attempt to gallop. This paper suggests that large animals cannot move their limbs sufficiently rapidly to take advantage of leaving the ground, and hence are restricted to walking gaits. From this point of view the relatively low strength/weight ratio of elephants follows from their inability to gallop, rather than causing it.
Cosmic shear as a probe of galaxy formation physics
Foreman, Simon; Becker, Matthew R.; Wechsler, Risa H.
2016-09-01
Here, we evaluate the potential for current and future cosmic shear measurements from large galaxy surveys to constrain the impact of baryonic physics on the matter power spectrum. We do so using a model-independent parametrization that describes deviations of the matter power spectrum from the dark-matter-only case as a set of principal components that are localized in wavenumber and redshift. We perform forecasts for a variety of current and future data sets, and find that at least ~90 per cent of the constraining power of these data sets is contained in no more than nine principal components. The constraining power of different surveys can be quantified using a figure of merit defined relative to currently available surveys. With this metric, we find that the final Dark Energy Survey data set (DES Y5) and the Hyper Suprime-Cam Survey will be roughly an order of magnitude more powerful than existing data in constraining baryonic effects. Upcoming Stage IV surveys (Large Synoptic Survey Telescope, Euclid, and Wide Field Infrared Survey Telescope) will improve upon this by a further factor of a few. We show that this conclusion is robust to marginalization over several key systematics. The ultimate power of cosmic shear to constrain galaxy formation is dependent on understanding systematics in the shear measurements at small (sub-arcminute) scales. Lastly, if these systematics can be sufficiently controlled, cosmic shear measurements from DES Y5 and other future surveys have the potential to provide a very clean probe of galaxy formation and to strongly constrain a wide range of predictions from modern hydrodynamical simulations.
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
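A hedged sketch of the neuro-genetic idea described above: a neural-network surrogate is fitted to (conditioning parameters, phase sensitivity) data, and a simple genetic search then looks for the parameter set that maximises the surrogate's predicted sensitivity. The synthetic data, parameter scaling and GA settings are assumptions for illustration, not the paper's configuration.

```python
# MLP surrogate of phase sensitivity + genetic search over conditioning parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# columns: sample length, bias field, DC level, excitation frequency (scaled 0..1)
X = rng.random((400, 4))
sens = np.sin(3 * X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] - (X[:, 3] - 0.6) ** 2
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, sens)

def genetic_maximise(model, n_pop=60, n_gen=40, mut=0.1):
    pop = rng.random((n_pop, 4))
    for _ in range(n_gen):
        fitness = model.predict(pop)
        parents = pop[np.argsort(fitness)[-n_pop // 2:]]       # keep best half
        n_child = n_pop - len(parents)
        idx_a = rng.integers(0, len(parents), n_child)
        idx_b = rng.integers(0, len(parents), n_child)
        w = rng.random((n_child, 1))
        children = w * parents[idx_a] + (1 - w) * parents[idx_b]   # crossover
        children = np.clip(children + mut * rng.standard_normal(children.shape), 0, 1)
        pop = np.vstack([parents, children])
    best = pop[np.argmax(model.predict(pop))]
    return best, model.predict(best[None, :])[0]

best_params, best_sens = genetic_maximise(surrogate)
print("best conditioning parameters:", np.round(best_params, 2))
print("predicted phase sensitivity:", round(best_sens, 3))
```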
Data Identifiers, Versioning, and Micro-citation
NASA Astrophysics Data System (ADS)
Parsons, M. A.; Duerr, R. E.
2012-12-01
Data citation, especially using Digital Object Identifiers (DOIs), is an increasingly accepted scientific practice. For example, the AGU Council asserts that data "publications" should "be credited and cited like the products of any other scientific activity," and Thomson Reuters has recently announced a data citation index built from DOIs assigned to data sets. Correspondingly, formal guidelines for how to cite a data set (using DOIs or similar identifiers/locators) have recently emerged, notably those from the international DataCite consortium, the UK Digital Curation Centre, and the US Federation of Earth Science Information Partners. These different data citation guidelines are largely congruent. They agree on the basic practice and elements of data citation, especially for relatively static, whole data collections. There is less agreement on some of the more subtle nuances of data citation. They define different methods for handling different data set versions, especially for the very dynamic, growing data sets that are common in Earth Sciences. They also differ in how people should cite specific, arbitrarily large elements, "passages," or subsets of a larger data collection, i.e., the precise data records actually used in a study. This detailed "micro-citation", and careful reference to exact versions of data are essential to ensure scientific reproducibility. Identifiers such as DOIs are necessary but not sufficient for the precise, detailed, references necessary. Careful practice must be coupled with the use of curated identifiers. In this paper we review the pros and cons of different approaches to versioning and micro-citation. We suggest a workable solution for most existing Earth science data and suggest a more rigorous path forward for the future.
Magmatic Volatiles as an Amplifier of Centrifugal Volcanism
NASA Astrophysics Data System (ADS)
Pratt, V. R.
2017-12-01
There is a striking correlation between negated Length of Day -LOD and the 60-70 year period in 20th century global climate, associated by some with the so-called Atlantic Multidecadal Oscillation or AMO. A number of authors have suggested mechanisms by which the former might cause the latter. One such that this author finds quite compelling is that gravity fluctuations at low latitudes increase essentially linearly with LOD fluctuations and therefore moves magma towards or away from the surface as LOD decreases or increases, i.e. angular velocity increases or decreases, respectively. At AGU FM2016 we proposed the term "centrifugal volcanism" for this mechanism and listed four possible objections to it, explaining three to our satisfaction. The remaining objection is the very obvious one that the 4 ms increase in LOD between 1880 and 1910 seems far too small to be able to account for the observed variation of about a quarter of a degree. A basic mechanism underlying many violent eruptions is the strong positive feedback between reduction of pressure in magma and evaporation of dissolved volatiles found in some magmas, driving the magma outwards and thereby further reducing the pressure. The normal state of magma is equilibrium. Any fluctuation in gravity, even a very small one, can be sufficient to shift this equilibrium sufficiently far to set this positive feedback in motion. The relevant electrical analogy would be an operational amplifier whose amplification is greatly increased by a positive feedback. We therefore propose that the same mechanism responsible for some violent eruptions also serves to amplify the tiny changes in gravity sufficiently to increase or decrease the vertical component of the movement of magma in general. This movement, felt throughout the planet albeit most strongly at low latitudes, influences the temperature at ocean bottoms wherever there is a significant level of magmatic volatiles. This in turn creates thermals that are large enough to reach the oceanic mixed layer before they have lost all their heat. That such large thermals have not been observed with modern tools is a consequence of such large changes in LOD not having been observed since midcentury, though some recent large (70 km) thermals have come close to the OML.
Skewness and kurtosis analysis for non-Gaussian distributions
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2018-06-01
In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
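A small numerical illustration of the size effect discussed above: the sample (standard) excess kurtosis of a finite-fourth-moment, non-Gaussian distribution only settles near its true value for sufficiently large N, so small-sample comparisons can mislead. The choice of a Student's t distribution with 5 degrees of freedom and the sample sizes are assumptions for illustration.

```python
# Sample excess kurtosis versus data-set size for a heavy-tailed distribution.
import numpy as np
from scipy.stats import kurtosis, t

rng = np.random.default_rng(6)
dof = 5                                  # finite fourth moment for dof > 4
true_excess = 6.0 / (dof - 4.0)          # exact excess kurtosis of Student's t
for n in (10**2, 10**3, 10**4, 10**5, 10**6):
    sample = t.rvs(dof, size=n, random_state=rng)
    print(f"N = {n:>7d}   sample excess kurtosis = {kurtosis(sample):6.2f}"
          f"   (true = {true_excess:.2f})")
```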
Do Social Conditions Affect Capuchin Monkeys' (Cebus apella) Choices in a Quantity Judgment Task?
Beran, Michael J; Perdue, Bonnie M; Parrish, Audrey E; Evans, Theodore A
2012-01-01
Beran et al. (2012) reported that capuchin monkeys closely matched the performance of humans in a quantity judgment test in which information was incomplete but a judgment still had to be made. In each test session, subjects first made quantity judgments between two known options. Then, they made choices where only one option was visible. Both humans and capuchin monkeys were guided by past outcomes, as they shifted from selecting a known option to selecting an unknown option at the point at which the known option went from being more than the average rate of return to less than the average rate of return from earlier choices in the test session. Here, we expanded this assessment of what guides quantity judgment choice behavior in the face of incomplete information to include manipulations to the unselected quantity. We manipulated the unchosen set in two ways: first, we showed the monkeys what they did not get (the unchosen set), anticipating that "losses" would weigh heavily on subsequent trials in which the same known quantity was presented. Second, we sometimes gave the unchosen set to another monkey, anticipating that this social manipulation might influence the risk-taking responses of the focal monkey when faced with incomplete information. However, neither manipulation caused difficulty for the monkeys who instead continued to use the rational strategy of choosing known sets when they were as large as or larger than the average rate of return in the session, and choosing the unknown (riskier) set when the known set was not sufficiently large. As in past experiments, this was true across a variety of daily ranges of quantities, indicating that monkeys were not using some absolute quantity as a threshold for selecting (or not) the known set, but instead continued to use the daily average rate of return to determine when to choose the known versus the unknown quantity.
Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D
2014-08-01
Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by which time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) found in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccurring in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to include such a single one-day gap, the larval forms of Necrophila americana, Fannia scalaris, Cochliomyia macellaria, Phormia regina, and Lucilia illustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobia ruficollis, and Necrodes surinamensis were common and showed only a few, single-day gaps in occurrence. C. maxillosus, P. regina, and L. illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sample effort per carcass and thereby enable increased experimental replication. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Antenatal training to improve breast feeding: a randomised trial.
Kronborg, Hanne; Maimburg, Rikke Damkjær; Væth, Michael
2012-12-01
to assess the effect of an antenatal training programme on knowledge, self-efficacy and problems related to breast feeding and on breast-feeding duration. a randomised controlled trial. the Aarhus Midwifery Clinic, a large clinic connected to a Danish university hospital in an urban area of Denmark. a total of 1193 nulliparous women were recruited before week 21+6 days of gestation, 603 were randomised to the intervention group, and 590 to the reference group. we compared a structured antenatal training programme attended in mid-pregnancy with usual practice. data were collected through self-reported questionnaires sent to the women's e-mail addresses and analysed according to the intention to treat principle. The primary outcomes were duration of full and any breast feeding collected 6 weeks post partum (any) and 1 year post partum (full and any). no differences were found between groups according to duration of breast feeding, self-efficacy score, or breast-feeding problems, but after participation in the course in week 36 of gestation women in the intervention group reported a higher level of confidence (p=0.05), and 6 weeks after birth they reported to have obtained sufficient knowledge about breast feeding (p=0.02). Supplemental analysis in the intervention group revealed that women with sufficient knowledge breast fed significantly longer than women without sufficient knowledge (HR=0.74 CI: 0.58-0.97). This association was not found in the reference group (HR=1.12 CI: 0.89-1.41). antenatal training can increase confidence of breast feeding in pregnancy and provide women with sufficient knowledge about breast feeding after birth. Antenatal training may therefore be an important low-technology health promotion tool that can be provided at low costs in most settings. The antenatal training programme needs to be followed by postnatal breast-feeding support as it is not sufficient in itself to increase the duration of breast feeding or reduce breast-feeding problems. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fappani, Denis; IDE, Monique
2017-05-01
Many high power laser facilities are in operation all around the world and include various tight-tolerance optical components such as large focussing lenses. Such lenses generally exhibit long focal lengths, which induces some issues for their optical testing during manufacturing and inspection. Indeed, their transmitted wave fronts need to be very accurate, and interferometric testing is the baseline to achieve that. But it is always a problem to manage simultaneously long testing distances and fine accuracies in such interferometric testing. Taking the example of the large focusing lenses produced for the Orion experiment at AWE (UK), the presentation will describe the testing method that has been developed to demonstrate good performance with sufficiently good repeatability and absolute accuracy. Special emphasis will be placed on the optical manufacturing issues and interferometric testing solutions. Some ZEMAX results presenting the test set-up and the calibration method will be presented as well. The presentation will conclude with a brief overview of the existing "state of the art" at Thales SESO for these technologies.
The Complete Redistribution Approximation in Optically Thick Line-Driven Winds
NASA Astrophysics Data System (ADS)
Gayley, K. G.; Onifer, A. J.
2001-05-01
Wolf-Rayet winds are thought to exhibit large momentum fluxes, which has in part been explained by ionization stratification in the wind. However, it is the cause of the high mass loss, not the high momentum flux, that remains largely a mystery, because standard models fail to achieve sufficient acceleration near the surface where the mass-loss rate is set. We consider a radiative transfer approximation, called the complete redistribution approximation, that allows the dynamics of optically thick Wolf-Rayet winds to be modeled without detailed treatment of the radiation field. In this approximation, it is assumed that thermalization processes cause the photon frequencies to be completely randomized over the course of propagation through the wind, which allows the radiation field to be treated statistically rather than in detail. Thus the approach is similar to the statistical treatment of the line list used in the celebrated CAK approach. The results differ from the effectively gray treatment in that the radiation field is influenced by the line distribution, and the role of gaps in the line distribution is enhanced. The ramifications for the driving of large mass-loss rates are explored.
Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective
NASA Astrophysics Data System (ADS)
Cheng, W.; Samtaney, R.
2014-01-01
The debate whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago, and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling-law assumptions (the log-law or the power-law by George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Reθ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
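As a concrete illustration of the two candidate scalings being compared, the short sketch below evaluates a log-law and a power-law mean-velocity profile together with the scaled gradient y+ du+/dy+ that discriminates between them; the constants (κ, B and the power-law pair) are generic textbook-style values assumed for illustration, not those calibrated in the study.

```python
import numpy as np

# Generic constants (assumed for illustration; not the values from the study).
kappa, B = 0.41, 5.0          # log-law constants
C_p, gamma = 8.3, 1.0 / 7.0   # power-law constants (classic 1/7th-type form)

y_plus = np.logspace(1.5, 4, 200)               # inner-scaled wall distance
u_log = (1.0 / kappa) * np.log(y_plus) + B      # log-law profile
u_pow = C_p * y_plus**gamma                     # power-law profile

# Scaled gradient used to discriminate the two laws:
#   log law   -> y+ du+/dy+ = 1/kappa (constant)
#   power law -> y+ du+/dy+ = gamma * u+ (grows with y+)
grad_log = y_plus * np.gradient(u_log, y_plus)
grad_pow = y_plus * np.gradient(u_pow, y_plus)
print(grad_log[::50])
print(grad_pow[::50])
```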
Collaborative visual analytics of radio surveys in the Big Data era
NASA Astrophysics Data System (ADS)
Vohl, Dany; Fluke, Christopher J.; Hassan, Amr H.; Barnes, David G.; Kilborn, Virginia A.
2017-06-01
Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered with a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform - allowing the research process to continue wherever you are.
An Application of Activity Theory
ERIC Educational Resources Information Center
Marken, James A.
2006-01-01
Activity Theory has often been used in workplace settings to gain new theoretical understandings about work and the humans who engage in work, but rarely has there been sufficient detail in the literature to allow HPT practitioners to do their own activity analysis. The detail presented in this case is sufficient for HPT practitioners to begin to…
15 CFR 2007.8 - Other reviews of article eligibilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... “sufficiently competitive” to warrant a reduced competitive need limit. Those articles determined to be “sufficiently competitive” will be subject to a new lower competitive need limit set at 25 percent of the value... articles will continue to be subject to the original competitive need limits of 50 percent or $25 million...
Computationally efficient simulation of unsteady aerodynamics using POD on the fly
NASA Astrophysics Data System (ADS)
Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando
2016-12-01
Modern industrial aircraft design requires a large number of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large number of simulations is necessary, the CFD cost is simply unaffordable in an industrial production environment and must be significantly reduced. Thus, a less expensive, yet sufficiently precise, solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines ‘on the fly’ a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited number of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented, considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver in a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
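A minimal sketch of the snapshot-POD/Galerkin ingredients mentioned above is given below, assuming a snapshot matrix is already available from a standard solver; `full_rhs` is a hypothetical stand-in for the full-order right-hand side (in the ‘POD on the fly’ idea it would only be sampled at a limited set of points) and is not part of any particular CFD code.

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """Snapshot POD via the thin SVD; columns of `snapshots` are states.
    `energy` sets the fraction of snapshot energy retained."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]                          # orthonormal POD basis

def galerkin_rhs(a, modes, full_rhs):
    """Reduced-order dynamics da/dt obtained by projecting the full-order
    right-hand side (hypothetical callable `full_rhs`) onto the POD basis."""
    state = modes @ a                        # reconstruct full state
    return modes.T @ full_rhs(state)         # Galerkin projection
```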
Impact of Market Behavior, Fleet Composition, and Ancillary Services on Revenue Sufficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany; Gallo, Giulia; Brinkman, Gregory
Revenue insufficiency, or the missing money problem, occurs when the revenues that generators earn from the market are not sufficient to cover both fixed and variable costs to remain in the market and/or justify investments in new capacity, which may be needed for reliability. The near-zero marginal cost of variable renewable generators further exacerbates these revenue challenges. Estimating the extent of the missing money problem in current electricity markets is an important, nontrivial task that requires representing both how the power system operates and how market participants behave. This paper explores the missing money problem using a production cost model that represented a simplified version of the Electric Reliability Council of Texas (ERCOT) energy-only market for the years 2012-2014. We evaluate how various market structures -- including market behavior, ancillary services, and changing fleet compositions -- affect net revenues in this ERCOT-like system. In most production cost modeling exercises, resources are assumed to offer their marginal capabilities at marginal costs. Although this assumption is reasonable for feasibility studies and long-term planning, it does not adequately consider the market behaviors that impact revenue sufficiency. In this work, we simulate a limited set of market participant strategic bidding behaviors by means of different sets of markups; these markups are applied to the true production costs of all gas generators, which are the most prominent generators in ERCOT. Results show that markups can help generators increase their net revenues overall, although net revenues may increase or decrease depending on the technology and the year under study. Results also confirm that conventional, variable-cost-based production cost simulations do not capture prices accurately, and this particular feature calls for proxies for strategic behaviors (e.g., markups) and more accurate representations of how electricity markets work. The analysis also shows that generators face revenue sufficiency challenges in this ERCOT-like energy-only market model; net revenues provided by the market in all base markup cases and sensitivity scenarios (except when a large fraction of the existing coal fleet is retired) are not sufficient to justify investments in new capacity for thermal and nuclear power units. Overall, the work described in this paper points to the need for improved behavioral models of electricity markets to more accurately study current and potential market design issues that could arise in systems with high penetrations of renewable generation.
Web tools for predictive toxicology model building.
Jeliazkova, Nina
2012-07-01
The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in Internet technologies, the current generation of web systems is starting to expand into areas traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases presents new challenges for the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.
Topographic Enhancement of Vertical Mixing in the Southern Ocean
NASA Astrophysics Data System (ADS)
Mashayek, A.; Ferrari, R. M.; Merrifield, S.; St Laurent, L.
2016-02-01
Diapycnal turbulent mixing in the Southern Ocean is believed to play a role in setting the rate of the ocean Meridional Overturning Circulation (MOC), an important element of the global climate system. Whether this role is important, however, depends on the strength of this mixing, which remains poorly quantified on a global scale. To address this question, a passive tracer was released upstream of the Drake Passage in 2009 as a part of the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES). The mixing was then inferred from the vertical/diapycnal spreading of the tracer. The mixing was also calculated from microstructure measurements of shear and stratification. The diapycnal turbulent mixing inferred from the tracer was found to be an order of magnitude larger than that estimated with the microstructure probes at various locations along the path of the tracer. While the values inferred from the tracer imply a key role played by mixing in setting the MOC, those based on localized measurements suggest otherwise. In this work we use a high-resolution numerical ocean model of the Drake Passage region sampled in the DIMES experiment to explain that the difference between the two estimates arises from the large values of mixing encountered by the tracer when it flows close to the bottom topography. We conclude that the large mixing close to the ocean bottom topography is sufficiently strong to play an important role in setting the Southern Ocean branch of the MOC below 2 km.
Monotone viable trajectories for functional differential inclusions
NASA Astrophysics Data System (ADS)
Haddad, Georges
This paper is a study of functional differential inclusions with memory, which represent the multivalued version of retarded functional differential equations. The main result gives a necessary and sufficient condition ensuring the existence of viable trajectories; that means trajectories remaining in a given nonempty closed convex set defined by the constraints the system must satisfy to be viable. Some motivations for this paper can be found in control theory, where F(t, φ) = {f(t, φ, u) : u ∈ U} is the set of possible velocities of the system at time t, depending on the past history represented by the function φ and on a control u ranging over a set U of controls. Other motivations can be found in planning procedures in microeconomics and in biological evolutions, where problems with memory do effectively appear in a multivalued version. All these models require viability constraints represented by a closed convex set.
Fine‐resolution conservation planning with limited climate‐change information
Shah, Payal; Mallory, Mindy L.; Ando, Amy W.; Guntenspergen, Glenn R.
2017-01-01
Climate‐change induced uncertainties in future spatial patterns of conservation‐related outcomes make it difficult to implement standard conservation‐planning paradigms. A recent study translates Markowitz's risk‐diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate‐change scenarios for carrying out fine‐resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk‐return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate‐change information and full climate‐change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate‐change forecasts such that the best possible risk‐return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate‐change information could be reduced by 17% relative to other iterative approaches.
Foxp3 Expression is Required for the Induction of Therapeutic Tissue Tolerance
Regateiro, Frederico S.; Chen, Ye; Kendal, Adrian R.; Hilbrands, Robert; Adams, Elizabeth; Cobbold, Stephen P.; Ma, Jianbo; Andersen, Kristian G.; Betz, Alexander G.; Zhang, Mindy; Madhiwalla, Shruti; Roberts, Bruce; Waldmann, Herman; Nolan, Kathleen F.; Howie, Duncan
2012-01-01
CD4+Foxp3+ Treg are essential for immune homeostasis and maintenance of self-tolerance. They are produced in the thymus and also generated de novo in the periphery in a TGFβ-dependent manner. Foxp3+ Treg are also required to achieve tolerance to transplanted tissues when induced by coreceptor or costimulation blockade. Using TCR transgenic mice to avoid issues of autoimmune pathology, we show that Foxp3 expression is both necessary and sufficient for tissue tolerance by coreceptor blockade. Moreover, the known need in tolerance induction for TGFβ signalling to T cells can wholly be explained by its role in induction of Foxp3, as such signalling proved dispensable for the suppressive process. We analysed the relative contribution of TGFβ and Foxp3 to the transcriptome of TGFβ-induced Treg and showed that TGFβ elicited a large set of down-regulated signature genes. The number of genes uniquely modulated due to the influence of Foxp3 alone was surprisingly limited. Thus, despite the large genetic influence of TGFβ exposure on iTreg, the crucial Foxp3-influenced signature independent of TGFβ is small. Retroviral-mediated conditional nuclear expression of Foxp3 proved sufficient to confer transplant-suppressive potency on CD4+ T cells, and this potency was lost once nuclear Foxp3 expression was extinguished. These data support a dual role for TGFβ and Foxp3 in induced tolerance, in which TGFβ stimulates Foxp3 expression, whose sustained expression is then associated with acquisition of tolerance. PMID:22988034
Diagnosing intramammary infections: evaluation of definitions based on a single milk sample.
Dohoo, I R; Smith, J; Andersen, S; Kelton, D F; Godden, S
2011-01-01
Criteria for diagnosing intramammary infections (IMI) have been debated for many years. Factors that may be considered in making a diagnosis include the organism of interest being found on culture, the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and whether or not concurrent evidence of inflammation existed (often measured by somatic cell count). However, research using these criteria has been hampered by the lack of a "gold standard" test (i.e., a perfect test against which the criteria can be evaluated) and the need for very large data sets of culture results to have sufficient numbers of quarters with infections with a variety of organisms. This manuscript used 2 large data sets of culture results to evaluate several definitions (sets of criteria) for classifying a quarter as having, or not having, an IMI by comparing the results from a single culture to a gold standard diagnosis based on a set of 3 milk samples. The first consisted of 38,376 milk samples from which 25,886 triplicate sets of milk samples taken 1 wk apart were extracted. The second consisted of 784 quarters that were classified as infected or not based on a set of 3 milk samples collected at 2-d intervals. From these quarters, a total of 3,136 additional samples were evaluated. A total of 12 definitions (named A to L) based on combinations of the number of colonies isolated, whether or not the organism was recovered in pure or mixed culture, and the somatic cell count were evaluated for each organism (or group of organisms) with sufficient data. The sensitivity (ability of a definition to detect IMI) and the specificity (Sp; ability of a definition to correctly classify noninfected quarters) were both computed. For all species, except Staphylococcus aureus, the sensitivity of all definitions was <90% (and in many cases <50%). Consequently, if identifying as many existing infections as possible is important, then the criterion for considering a quarter positive should be a single colony (from a 0.01-mL milk sample) isolated (definition A). With the exception of "any organism" and coagulase-negative staphylococci, all Sp estimates were over 94% in the daily data and over 97% in the weekly data, suggesting that for most species, definition A may be acceptable. For coagulase-negative staphylococci, definition B (2 colonies from a 0.01-mL milk sample) raised the Sp to 92 and 95% in the daily and weekly data, respectively. For "any organism," using definition B raised the Sp to 88 and 93% in the 2 data sets, respectively. The final choice of definition will depend on the objectives of the study or control program for which the sample was collected. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
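For readers unfamiliar with the two metrics, the snippet below shows how sensitivity and specificity are computed when a single-sample definition is compared against the triplicate-sample gold standard; the 2x2 counts are hypothetical, not taken from the study.

```python
# Hypothetical 2x2 counts: single-culture definition vs. gold standard
# based on a set of 3 milk samples (illustrative numbers only).
tp, fn = 120, 80    # gold-standard infected quarters
tn, fp = 900, 30    # gold-standard noninfected quarters

sensitivity = tp / (tp + fn)   # ability of the definition to detect IMI
specificity = tn / (tn + fp)   # ability to correctly classify noninfected quarters
print(f"Se = {sensitivity:.2f}, Sp = {specificity:.2f}")
```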
The Great Observatories Origins Deep Survey
NASA Astrophysics Data System (ADS)
Dickinson, Mark
2008-05-01
Observing the formation and evolution of ordinary galaxies at early cosmic times requires data at many wavelengths in order to recognize, separate and analyze the many physical processes which shape galaxies' history, including the growth of large scale structure, gravitational interactions, star formation, and active nuclei. Extremely deep data, covering an adequately large volume, are needed to detect ordinary galaxies in sufficient numbers at such great distances. The Great Observatories Origins Deep Survey (GOODS) was designed for this purpose as an anthology of deep field observing programs that span the electromagnetic spectrum. GOODS targets two fields, one in each hemisphere. Some of the deepest and most extensive imaging and spectroscopic surveys have been carried out in the GOODS fields, using nearly every major space- and ground-based observatory. Many of these data have been taken as part of large, public surveys (including several Hubble Treasury, Spitzer Legacy, and ESO Large Programs), which have produced large data sets that are widely used by the astronomical community. I will review the history of the GOODS program, highlighting results on the formation and early growth of galaxies and their active nuclei. I will also describe new and upcoming observations, such as the GOODS Herschel Key Program, which will continue to fill out our portrait of galaxies in the young universe.
Estimating Divergence Parameters With Small Samples From a Large Number of Loci
Wang, Yong; Hey, Jody
2010-01-01
Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765
Simulations of the Formation and Evolution of X-ray Clusters
NASA Astrophysics Data System (ADS)
Bryan, G. L.; Klypin, A.; Norman, M. L.
1994-05-01
We describe results from a set of Omega = 1 Cold plus Hot Dark Matter (CHDM) and Cold Dark Matter (CDM) simulations. We examine the formation and evolution of X-ray clusters in a cosmological setting with sufficient numbers to perform statistical analysis. We find that CDM, normalized to COBE, seems to produce too many large clusters, both in terms of the luminosity (dn/dL) and temperature (dn/dT) functions. The CHDM simulation produces fewer clusters, and the temperature distribution (our numerically most secure result) matches observations where they overlap. The computed cluster luminosity function drops below observations, but we are almost surely underestimating the X-ray luminosity. Because of the lower fluctuations in CHDM, there are only a small number of bright clusters in our simulation volume; however, we can use the simulated clusters to fix the relation between temperature and velocity dispersion, allowing us to use collisionless N-body codes to probe larger length scales with correspondingly brighter clusters. The hydrodynamic simulations have been performed with a hybrid particle-mesh scheme for the dark matter and a high-resolution grid-based piecewise parabolic method for the adiabatic gas dynamics. This combination has been implemented for massively parallel computers, allowing us to achieve grids as large as 512^3.
A model for plant lighting system selection.
Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W
2002-01-01
A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
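A minimal sketch of the weighted-additive utility calculation at the heart of a Multiple Attribute Utility Theory comparison is shown below; the attributes, weights and scores are hypothetical placeholders, not the expert inputs or simulation outputs used in the model.

```python
# Hypothetical attributes, weights and single-attribute utilities (0-1 scale).
weights = {"energy_cost": 0.40, "light_uniformity": 0.35, "capital_cost": 0.25}

systems = {
    "HPS":         {"energy_cost": 0.6, "light_uniformity": 0.7, "capital_cost": 0.8},
    "LED":         {"energy_cost": 0.9, "light_uniformity": 0.8, "capital_cost": 0.4},
    "fluorescent": {"energy_cost": 0.5, "light_uniformity": 0.6, "capital_cost": 0.9},
}

def utility(scores):
    # Weighted-additive utility: sum of weight * single-attribute utility.
    return sum(weights[a] * scores[a] for a in weights)

best = max(systems, key=lambda name: utility(systems[name]))
print(best, {name: round(utility(s), 3) for name, s in systems.items()})
```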
A Review of Criticality Accidents 2000 Revision
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas P. McLaughlin; Shean P. Monahan; Norman L. Pruvost
Criticality accidents and the characteristics of prompt power excursions are discussed. Sixty accidental power excursions are reviewed. Sufficient detail is provided to enable the reader to understand the physical situation, the chemistry and material flow, and when available the administrative setting leading up to the time of the accident. Information on the power history, energy release, consequences, and causes are also included when available. For those accidents that occurred in process plants, two new sections have been included in this revision. The first is an analysis and summary of the physical and neutronic features of the chain reacting systems. The second is a compilation of observations and lessons learned. Excursions associated with large power reactors are not included in this report.
Azadmanesh, Jahaun; Trickel, Scott R.; Weiss, Kevin L.; ...
2017-03-29
Superoxide dismutases (SODs) are enzymes that protect against oxidative stress by dismutation of superoxide into oxygen and hydrogen peroxide through cyclic reduction and oxidation of the active-site metal. The complete enzymatic mechanisms of SODs are unknown since data on the positions of hydrogen are limited. Here, we present methods for large crystal growth and neutron data collection of human manganese SOD (MnSOD) using perdeuteration and the MaNDi beamline at Oak Ridge National Laboratory. The crystal from which the human MnSOD data set was obtained is the crystal with the largest unit-cell edge (240 Å) from which data have been collected via neutron diffraction to sufficient resolution (2.30 Å) where hydrogen positions can be observed.
Prevalence and Seroprevalence of Trypanosoma cruzi Infection in a Military Population in Texas.
Webber, Bryant J; Pawlak, Mary T; Valtier, Sandra; Daniels, Candelaria C; Tully, Charla C; Wozniak, Edward J; Roachell, Walter D; Sanchez, Francisco X; Blasi, Audra A; Cropper, Thomas L
2017-11-01
Recent biosurveillance findings at Joint Base San Antonio (JBSA), a large military installation located in south-central Texas, indicate the potential for vector-borne human Chagas disease. A cross-sectional study was conducted to determine the prevalence and seroprevalence of Trypanosoma cruzi infection in highest risk subpopulations on the installation, including students and instructors who work and sleep in triatomine-endemic field settings. Real-time polymerase chain reaction, enzyme-linked immunosorbent assay, and indirect immunofluorescent antibody assay were performed on enrolled subjects (N = 1,033), none of whom tested positive for T. cruzi or anti-T. cruzi antibodies. Current countermeasures used during field training on JBSA appear to be sufficient for preventing autochthonous human Chagas disease.
Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices.
Xin Yang; Kwang-Ting Cheng
2014-06-01
The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile augmented reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still only have limited capabilities, which greatly restrict their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Learning-based Local Difference Binary (LLDB). LLDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. To select an optimized set of grid cell pairs, we densely sample grid cells from an image patch and then leverage a modified AdaBoost algorithm to automatically extract a small set of critical ones with the goal of maximizing the Hamming distance between mismatches while minimizing it between matches. Experimental results demonstrate that LLDB is extremely fast to compute and to match against a large database due to its high robustness and distinctiveness. Compared to the state-of-the-art binary descriptors, primarily designed for speed, LLDB has similar efficiency for descriptor construction, while achieving a greater accuracy and faster matching speed when matching over a large database with 2.3M descriptors on mobile devices.
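To make the descriptor idea concrete, here is a heavily simplified sketch of a local-difference binary descriptor with Hamming-distance matching: mean intensities of grid cells are compared pairwise and each comparison yields one bit. This is only an illustration of the general construction; it omits the gradient tests and the AdaBoost-based selection of an optimized subset of cell pairs that distinguish the published LLDB.

```python
import numpy as np

def local_difference_binary(patch, grid=4):
    """Simplified local-difference binary descriptor (illustrative sketch,
    not the published LLDB): compare the mean intensity of every pair of
    grid cells; each comparison contributes one bit."""
    h, w = patch.shape
    cells = [patch[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    bits = [int(cells[a] > cells[b])
            for a in range(len(cells)) for b in range(a + 1, len(cells))]
    return np.array(bits, dtype=np.uint8)

def hamming(d1, d2):
    # Hamming distance between two binary descriptors.
    return int(np.count_nonzero(d1 != d2))

patch_a = np.random.rand(32, 32)
patch_b = patch_a + 0.01 * np.random.rand(32, 32)   # slightly perturbed match
print(hamming(local_difference_binary(patch_a), local_difference_binary(patch_b)))
```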
The Evaluation of Vocational Programming in Secondary School Settings: A Suggested Protocol
ERIC Educational Resources Information Center
George, Jennifer C.; Seruya, Francine M.
2018-01-01
The primary purpose of this project was to determine if a therapist-created protocol to develop a prevocational program provided sufficient information for a practitioner to implement a vocational program within another high school setting. The developed protocol was evaluated on feasibility and efficacy for replication within another setting by…
Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-05-12
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
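The ML-SQC scheme described above tunes OM2 parameters molecule by molecule; as a loosely related but much simpler illustration of using ML to correct semiempirical energetics, the sketch below fits a kernel-ridge model to the error between a semiempirical and a reference atomization enthalpy (a Δ-learning-style correction, not the parameter-tuning machinery of the paper). Descriptors and target values are random placeholders.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Placeholder data: X = molecular descriptors, y = error of the semiempirical
# atomization enthalpy relative to an ab initio reference (kcal/mol).
rng = np.random.default_rng(0)
X = rng.normal(size=(6095, 50))                 # random stand-in descriptors
y = rng.normal(loc=6.3, scale=2.0, size=6095)   # random stand-in errors

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1e-2).fit(X_tr, y_tr)

# Remaining error after subtracting the ML-predicted correction.
mae = np.mean(np.abs(y_te - model.predict(X_te)))
print(f"MAE after ML correction: {mae:.2f} kcal/mol")
```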
ERIC Educational Resources Information Center
Andrich, David
2016-01-01
This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…
Hermite-Birkhoff interpolation in the nth roots of unity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.
1980-06-01
Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Polya conditions, which are necessary for unique interpolation, are in this setting also sufficient.
Requirements for Calibration in Noninvasive Glucose Monitoring by Raman Spectroscopy
Lipson, Jan; Bernhardt, Jeff; Block, Ueyn; Freeman, William R.; Hofmeister, Rudy; Hristakeva, Maya; Lenosky, Thomas; McNamara, Robert; Petrasek, Danny; Veltkamp, David; Waydo, Stephen
2009-01-01
Background In the development of noninvasive glucose monitoring technology, it is highly desirable to derive a calibration that relies on neither person-dependent calibration information nor supplementary calibration points furnished by an existing invasive measurement technique (universal calibration). Method By appropriate experimental design and associated analytical methods, we establish the sufficiency of multiple factors required to permit such a calibration. Factors considered are the discrimination of the measurement technique, stabilization of the experimental apparatus, physics–physiology-based measurement techniques for normalization, the sufficiency of the size of the data set, and appropriate exit criteria to establish the predictive value of the algorithm. Results For noninvasive glucose measurements, using Raman spectroscopy, the sufficiency of the scale of data was demonstrated by adding new data into an existing calibration algorithm and requiring that (a) the prediction error should be preserved or improved without significant re-optimization, (b) the complexity of the model for optimum estimation not rise with the addition of subjects, and (c) the estimation for persons whose data were removed entirely from the training set should be no worse than the estimates on the remainder of the population. Using these criteria, we established guidelines empirically for the number of subjects (30) and skin sites (387) for a preliminary universal calibration. We obtained a median absolute relative difference for our entire data set of 30 mg/dl, with 92% of the data in the Clarke A and B ranges. Conclusions Because Raman spectroscopy has high discrimination for glucose, a data set of practical dimensions appears to be sufficient for universal calibration. Improvements based on reducing the variance of blood perfusion are expected to reduce the prediction errors substantially, and the inclusion of supplementary calibration points for the wearable device under development will be permissible and beneficial. PMID:20144354
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gheorghiu, Vlad; Yu Li; Cohen, Scott M.
We investigate the conditions under which a set S of pure bipartite quantum states on a DxD system can be locally cloned deterministically by separable operations, when at least one of the states is full Schmidt rank. We allow for the possibility of cloning using a resource state that is less than maximally entangled. Our results include that: (i) all states in S must be full Schmidt rank and equally entangled under the G-concurrence measure, and (ii) the set S can be extended to a larger clonable set generated by a finite group G of order |G|=N, the number of states in the larger set. It is then shown that any local cloning apparatus is capable of cloning a number of states that divides D exactly. We provide a complete solution for two central problems in local cloning, giving necessary and sufficient conditions for (i) when a set of maximally entangled states can be locally cloned, valid for all D; and (ii) local cloning of entangled qubit states with nonvanishing entanglement. In both of these cases, we show that a maximally entangled resource is necessary and sufficient, and the states must be related to each other by local unitary 'shift' operations. These shifts are determined by the group structure, so need not be simple cyclic permutations. Assuming this shifted form and partially entangled states, then in D=3 we show that a maximally entangled resource is again necessary and sufficient, while for higher-dimensional systems, we find that the resource state must be strictly more entangled than the states in S. All of our necessary conditions for separable operations are also necessary conditions for local operations and classical communication (LOCC), since the latter is a proper subset of the former. In fact, all our results hold for LOCC, as our sufficient conditions are demonstrated for LOCC, directly.
Oblique nonlinear whistler wave
NASA Astrophysics Data System (ADS)
Yoon, Peter H.; Pandey, Vinay S.; Lee, Dong-Hun
2014-03-01
Motivated by satellite observation of large-amplitude whistler waves propagating in oblique directions with respect to the ambient magnetic field, a recent letter discusses the physics of large-amplitude whistler waves and relativistic electron acceleration. One of the conclusions of that letter is that oblique whistler waves will eventually undergo nonlinear steepening regardless of the amplitude. The present paper reexamines this claim and finds that the steepening associated with the density perturbation almost never occurs, unless whistler waves have sufficiently high amplitude and propagate sufficiently close to the resonance cone angle.
Effect of normalized plasma frequency on electron phase-space orbits in a free-electron laser
NASA Astrophysics Data System (ADS)
Ji, Yu-Pin; Wang, Shi-Jian; Xu, Jing-Yue; Xu, Yong-Gen; Liu, Xiao-Xu; Lu, Hong; Huang, Xiao-Li; Zhang, Shi-Chang
2014-02-01
Irregular phase-space orbits of the electrons are harmful to the electron-beam transport quality and hence deteriorate the performance of a free-electron laser (FEL). In previous literature, it was demonstrated that the irregularity of the electron phase-space orbits could be caused in several ways, such as varying the wiggler amplitude and inducing sidebands. Based on a Hamiltonian model with a set of self-consistent differential equations, it is shown in this paper that the electron-beam normalized plasma frequency not only couples the electron motion to the FEL wave, which results in the evolution of the FEL wave field and possible power saturation at a large beam current, but also causes irregularity of the electron phase-space orbits when the normalized plasma frequency has a sufficiently large value, even if the initial energy of the electron is equal to the synchronous energy or the FEL wave does not reach power saturation.
Dissociative recombination of the ground state of N2(+)
NASA Technical Reports Server (NTRS)
Guberman, Steven L.
1991-01-01
Large-scale calculations of the dissociative recombination cross sections and rates for the v = 0 level of the N2(+) ground state are reported, and the important role played by vibrationally excited Rydberg states lying both below and above the v = 0 level of the ion is demonstrated. The large-scale electronic wave function calculations were done using triple zeta plus polarization nuclear-centered-valence Gaussian basis sets. The electronic widths were obtained using smaller wave functions, and the cross sections were calculated on the basis of the multichannel quantum defect theory. The DR rate is calculated at 1.6 × 10^-7 (Te/300)^-0.37 cm^3/s for Te in the range of 100 to 1000 K, and is found to be in excellent agreement with prior microwave afterglow experiments but in disagreement with recent merged beam results. It is inferred that the dominant mechanism for DR imparts sufficient energy to the product atoms to allow for escape from the Martian atmosphere.
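The reported rate coefficient is a simple power law in electron temperature; the following lines just evaluate it over its stated 100-1000 K validity range.

```python
# alpha(Te) = 1.6e-7 * (Te/300)^(-0.37) cm^3 s^-1, valid for Te = 100-1000 K.
for Te in (100, 300, 1000):
    alpha = 1.6e-7 * (Te / 300.0) ** -0.37
    print(f"Te = {Te:4d} K  ->  alpha = {alpha:.2e} cm^3/s")
```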
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaeffer, D. B.; Winske, D.; Larson, D. J.
Collisionless shocks are common phenomena in space and astrophysical systems, and in many cases, the shocks can be modeled as the result of the expansion of a magnetic piston through a magnetized ambient plasma. Only recently, however, have laser facilities and diagnostic capabilities evolved sufficiently to allow the detailed study in the laboratory of the microphysics of piston-driven shocks. We review experiments on collisionless shocks driven by a laser-produced magnetic piston undertaken with the Phoenix laser laboratory and the Large Plasma Device at the University of California, Los Angeles. The experiments span a large parameter space in laser energy, background magnetic field, and ambient plasma properties that allow us to probe the physics of piston-ambient energy coupling, the launching of magnetosonic solitons, and the formation of subcritical shocks. Here, the results indicate that piston-driven magnetized collisionless shocks in the laboratory can be characterized with a small set of dimensionless formation parameters that place the formation process in an organized and predictive framework.
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
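A rough one-dimensional sketch of the idea of fitting a high-degree polynomial to scattered value-and-derivative data while controlling a Sobolev-type norm is given below; the weighted coefficient norm (1+k)^s is a stand-in assumption for the actual Sobolev functional of the paper, and the sample points are arbitrary.

```python
import numpy as np

def msn_fit(x_vals, f_vals, x_ders, fp_vals, degree, s=2.0):
    """Among all polynomials of the given (large) degree matching the
    prescribed values and first derivatives, return the one minimizing a
    weighted coefficient norm with weights (1+k)^s -- a simple stand-in
    for a Sobolev norm, not the exact functional of the paper."""
    V = np.vander(x_vals, degree + 1, increasing=True)           # value rows
    D = np.zeros((len(x_ders), degree + 1))                      # derivative rows
    D[:, 1:] = np.arange(1, degree + 1) * np.vander(x_ders, degree, increasing=True)
    A, b = np.vstack([V, D]), np.concatenate([f_vals, fp_vals])
    w = (1.0 + np.arange(degree + 1)) ** s
    z, *_ = np.linalg.lstsq(A / w, b, rcond=None)                # min-norm solution in z = w*c
    return z / w                                                 # coefficients of x^k

x = np.array([-0.8, -0.2, 0.5, 0.9])
c = msn_fit(x, np.cos(3 * x), x, -3 * np.sin(3 * x), degree=40)
print(np.polyval(c[::-1], 0.5), np.cos(1.5))   # matches at an interpolation node
```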
Biomedical information retrieval across languages.
Daumke, Philipp; Markü, Kornél; Poprat, Michael; Schulz, Stefan; Klar, Rüdiger
2007-06-01
This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
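A toy illustration of the subword idea follows: with a small lexicon of morphologically meaningful fragments (the entries below are made up for the example), a compound medical term can be decomposed by greedy longest-match, so that far fewer lexicon entries are needed to cover a domain vocabulary than with full word forms.

```python
# Made-up fragment lexicon for illustration only.
subword_lexicon = {"gastr", "o", "enter", "itis", "hepat", "cardi", "algia"}

def decompose(term, lexicon):
    """Greedy longest-match decomposition of a term into subwords."""
    parts, i = [], 0
    while i < len(term):
        for j in range(len(term), i, -1):      # try the longest fragment first
            if term[i:j] in lexicon:
                parts.append(term[i:j])
                i = j
                break
        else:                                  # no covering fragment found
            return None
    return parts

print(decompose("gastroenteritis", subword_lexicon))   # ['gastr', 'o', 'enter', 'itis']
```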
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, Neal R; Ruggiero, Christy E; Pawley, Norma H
2009-01-01
Detecting complex targets, such as facilities, in commercially available satellite imagery is a difficult problem that human analysts try to solve by applying world knowledge. Often there are known observables that can be extracted by pixel-level feature detectors that can assist in the facility detection process. Individually, each of these observables is not sufficient for an accurate and reliable detection, but in combination, these auxiliary observables may provide sufficient context for detection by a machine learning algorithm. We describe an approach for automatic detection of facilities that uses an automated feature extraction algorithm to extract auxiliary observables, and a semi-supervised assisted target recognition algorithm to then identify facilities of interest. We illustrate the approach using an example of finding schools in Quickbird image data of Albuquerque, New Mexico. We use Los Alamos National Laboratory's Genie Pro automated feature extraction algorithm to find a set of auxiliary features that should be useful in the search for schools, such as parking lots, large buildings, sports fields and residential areas and then combine these features using Genie Pro's assisted target recognition algorithm to learn a classifier that finds schools in the image data.
Quantum simulation of the spin-boson model with a microwave circuit
NASA Astrophysics Data System (ADS)
Leppäkangas, Juha; Braumüller, Jochen; Hauck, Melanie; Reiner, Jan-Michael; Schwenk, Iris; Zanker, Sebastian; Fritz, Lukas; Ustinov, Alexey V.; Weides, Martin; Marthaler, Michael
2018-05-01
We consider superconducting circuits for the purpose of simulating the spin-boson model. The spin-boson model consists of a single two-level system coupled to bosonic modes. In most cases, the model is considered in a limit where the bosonic modes are sufficiently dense to form a continuous spectral bath. A very well known case is the Ohmic bath, where the density of states grows linearly with the frequency. In the limit of weak coupling or large temperature, this problem can be solved numerically. If the coupling is strong, the bosonic modes can become sufficiently excited to make a classical simulation impossible. Here we discuss how a quantum simulation of this problem can be performed by coupling a superconducting qubit to a set of microwave resonators. We demonstrate a possible implementation of a continuous spectral bath with individual bath resonators coupling strongly to the qubit. Applying a microwave drive scheme potentially allows us to access the strong-coupling regime of the spin-boson model. We discuss how the resulting spin relaxation dynamics with different initialization conditions can be probed by standard qubit-readout techniques from circuit quantum electrodynamics.
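To make the bath-engineering step concrete, the sketch below discretizes an Ohmic spectral density into a finite set of resonator modes on a uniform frequency grid; the form of J(ω), the coupling convention and all numerical values are common textbook-style assumptions, not the circuit parameters of the proposed implementation.

```python
import numpy as np

def ohmic_bath_modes(alpha, omega_c, n_modes, omega_max):
    """Discretize an Ohmic spectral density J(w) = (pi/2)*alpha*w*exp(-w/omega_c)
    into n_modes oscillators. Couplings follow the convention
    J(w) ~ pi * sum_k g_k^2 * delta(w - w_k), i.e. g_k^2 = J(w_k)*dw/pi.
    (Prefactor conventions vary between papers; this is one common choice.)"""
    omega = np.linspace(omega_max / n_modes, omega_max, n_modes)
    d_omega = omega[1] - omega[0]
    J = 0.5 * np.pi * alpha * omega * np.exp(-omega / omega_c)
    g = np.sqrt(J * d_omega / np.pi)
    return omega, g

omega, g = ohmic_bath_modes(alpha=0.1, omega_c=10.0, n_modes=20, omega_max=15.0)
print(np.round(omega[:5], 2), np.round(g[:5], 3))
```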
Ito, Daisuke; Childress, Michael; Mason, Nicola; Winter, Amber; O’Brien, Timothy; Henson, Michael; Borgatti, Antonella; Lewellen, Mitzi; Krick, Erika; Stewart, Jane; Lahrman, Sarah; Rajwa, Bartek; Scott, Milcah C; Seelig, Davis; Koopmeiners, Joseph; Ruetz, Stephan; Modiano, Jaime
2017-01-01
We previously described a population of lymphoid progenitor cells (LPCs) in canine B-cell lymphoma defined by retention of the early progenitor markers CD34 and CD117 and “slow proliferation” molecular signatures that persist in the xenotransplantation setting. We examined whether valspodar, a selective inhibitor of the ATP binding cassette B1 transporter (ABCB1, a.k.a., p-glycoprotein/multidrug resistance protein-1) used in the neoadjuvant setting would sensitize LPCs to doxorubicin and extend the length of remission in dogs with therapy naïve large B-cell lymphoma. Twenty dogs were enrolled into a double-blinded, placebo controlled study where experimental and control groups received oral valspodar (7.5 mg/kg) or placebo, respectively, twice daily for five days followed by five treatments with doxorubicin 21 days apart with a reduction in the first dose to mitigate the potential side effects of ABCB1 inhibition. Lymph node and blood LPCs were quantified at diagnosis, on the fourth day of neoadjuvant period, and 1-week after the first chemotherapy dose. Valspodar therapy was well tolerated. There were no differences between groups in total LPCs in lymph nodes or peripheral blood, nor in event-free survival or overall survival. Overall, we conclude that valspodar can be administered safely in the neoadjuvant setting for canine B-cell lymphoma; however, its use to attenuate ABCB1 + cells does not alter the composition of lymph node or blood LPCs, and it does not appear to be sufficient to prolong doxorubicin-dependent remissions in this setting. PMID:28357033
NASA Astrophysics Data System (ADS)
Fisher, W. P., Jr.; Elbaum, B.; Coulter, A.
2010-07-01
Reliability coefficients indicate the proportion of total variance attributable to differences among measures separated along a quantitative continuum by a testing, survey, or assessment instrument. Reliability is usually considered to be influenced by both the internal consistency of a data set and the number of items, though textbooks and research papers rarely evaluate the extent to which these factors independently affect the data in question. Probabilistic formulations of the requirements for unidimensional measurement separate consistency from error by modelling individual response processes instead of group-level variation. The utility of this separation is illustrated via analyses of small sets of simulated data, and of subsets of data from a 78-item survey of over 2,500 parents of children with disabilities. Measurement reliability ultimately concerns the structural invariance specified in models requiring sufficient statistics, parameter separation, unidimensionality, and other qualities that historically have made quantification simple, practical, and convenient for end users. The paper concludes with suggestions for a research program aimed at focusing measurement research more on the calibration and wide dissemination of tools applicable to individuals, and less on the statistical study of inter-variable relations in large data sets.
The Impact of Heterogeneous Thresholds on Social Contagion with Multiple Initiators
Karampourniotis, Panagiotis D.; Sreenivasan, Sameet; Szymanski, Boleslaw K.; Korniss, Gyorgy
2015-01-01
The threshold model is a simple but classic model of contagion spreading in complex social systems. To capture the complex nature of social influencing we investigate numerically and analytically the transition in the behavior of threshold-limited cascades in the presence of multiple initiators as the distribution of thresholds is varied between the two extreme cases of identical thresholds and a uniform distribution. We accomplish this by employing a truncated normal distribution of the nodes’ thresholds and observe a non-monotonic change in the cascade size as we vary the standard deviation. Further, for a sufficiently large spread in the threshold distribution, the tipping-point behavior of the social influencing process disappears and is replaced by a smooth crossover governed by the size of initiator set. We demonstrate that for a given size of the initiator set, there is a specific variance of the threshold distribution for which an opinion spreads optimally. Furthermore, in the case of synthetic graphs we show that the spread asymptotically becomes independent of the system size, and that global cascades can arise just by the addition of a single node to the initiator set. PMID:26571486
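A minimal simulation of the threshold-limited cascade described above is sketched below: thresholds are drawn from a normal distribution clipped to [0, 1] (as a stand-in for the truncated normal of the paper), and a node activates once the fraction of its active neighbours reaches its threshold. Network size, the threshold spread and the initiator count are illustrative values, not those used in the study.

```python
import numpy as np
import networkx as nx

def cascade_size(G, initiators, thresholds):
    """Synchronous threshold-model cascade: an inactive node activates once
    the fraction of its active neighbours reaches its threshold."""
    active, changed = set(initiators), True
    while changed:
        changed = False
        for node in G.nodes():
            if node in active:
                continue
            neigh = list(G.neighbors(node))
            if neigh and sum(n in active for n in neigh) / len(neigh) >= thresholds[node]:
                active.add(node)
                changed = True
    return len(active) / G.number_of_nodes()

rng = np.random.default_rng(1)
G = nx.erdos_renyi_graph(n=2000, p=0.005, seed=1)
thresholds = np.clip(rng.normal(0.5, 0.15, G.number_of_nodes()), 0.0, 1.0)
initiators = rng.choice(G.number_of_nodes(), size=40, replace=False)
print(cascade_size(G, initiators, thresholds))
```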
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
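To illustrate the objective and a WSPT-style ordering, the sketch below evaluates the total weighted quadratic completion time of a permutation flow shop using the standard completion-time recursion, with jobs sorted by total processing time per unit weight; this simple reading of the WSPT rule and the random instance are assumptions made for illustration, and the consistency condition of the paper is a restriction on the instance rather than an extra computational step here.

```python
import numpy as np

def weighted_quadratic_completion(p, w, order):
    """Total weighted quadratic completion time sum_j w_j * C_j^2 for a
    permutation flow shop; p[j, m] is the processing time of job j on
    machine m and C[pos, m] = max(C[pos-1, m], C[pos, m-1]) + p[job, m]."""
    n, m = p.shape
    C = np.zeros((n, m))
    for pos, j in enumerate(order):
        for k in range(m):
            above = C[pos - 1, k] if pos > 0 else 0.0
            left = C[pos, k - 1] if k > 0 else 0.0
            C[pos, k] = max(above, left) + p[j, k]
    completion = C[:, -1]                       # completion times on the last machine
    return float(np.sum(w[list(order)] * completion**2))

rng = np.random.default_rng(0)
p = rng.integers(1, 10, size=(8, 3)).astype(float)   # 8 jobs, 3 machines
w = rng.integers(1, 5, size=8).astype(float)
wspt_order = sorted(range(8), key=lambda j: p[j].sum() / w[j])
print(wspt_order, weighted_quadratic_completion(p, w, wspt_order))
```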
A field-to-desktop toolchain for X-ray CT densitometry enables tree ring analysis
De Mil, Tom; Vannoppen, Astrid; Beeckman, Hans; Van Acker, Joris; Van den Bulcke, Jan
2016-01-01
Background and Aims Disentangling tree growth requires more than ring width data only. Densitometry is considered a valuable proxy, yet laborious wood sample preparation and lack of dedicated software limit the widespread use of density profiling for tree ring analysis. An X-ray computed tomography-based toolchain of tree increment cores is presented, which results in profile data sets suitable for visual exploration as well as density-based pattern matching. Methods Two temperate (Quercus petraea, Fagus sylvatica) and one tropical species (Terminalia superba) were used for density profiling using an X-ray computed tomography facility with custom-made sample holders and dedicated processing software. Key Results Density-based pattern matching is developed and able to detect anomalies in ring series that can be corrected via interactive software. Conclusions A digital workflow allows generation of structure-corrected profiles of large sets of cores in a short time span that provide sufficient intra-annual density information for tree ring analysis. Furthermore, visual exploration of such data sets is of high value. The dated profiles can be used for high-resolution chronologies and also offer opportunities for fast screening of lesser studied tropical tree species. PMID:27107414
Kurashige, Yuki; Yanai, Takeshi
2011-09-07
We present a second-order perturbation theory based on a density matrix renormalization group self-consistent field (DMRG-SCF) reference function. The method reproduces the solution of the complete active space with second-order perturbation theory (CASPT2) when the DMRG reference function is represented by a sufficiently large number of renormalized many-body basis states, and is therefore named the DMRG-CASPT2 method. The DMRG-SCF is able to describe non-dynamical correlation with active spaces that are too large for the conventional CASSCF method, while the second-order perturbation theory provides an efficient description of dynamical correlation effects. The capability of our implementation is demonstrated for an application to the potential energy curve of the chromium dimer, which is one of the most demanding multireference systems that require the best electronic structure treatment of non-dynamical and dynamical correlation as well as large basis sets. The DMRG-CASPT2/cc-pwCV5Z calculations were performed with a large (3d double-shell) active space consisting of 28 orbitals. Our approach, using a large DMRG reference, addressed the problems of why the dissociation energy is largely overestimated by CASPT2 with the small 12-orbital (3d4s) active space and why it is oversensitive to the choice of the zeroth-order Hamiltonian. © 2011 American Institute of Physics
AOP: An R Package For Sufficient Causal Analysis in Pathway ...
Summary: How can I quickly find the key events in a pathway that I need to monitor to predict that a/an beneficial/adverse event/outcome will occur? This is a key question when using signaling pathways for drug/chemical screening in pharmacology, toxicology and risk assessment. By identifying these sufficient causal key events, we have fewer events to monitor for a pathway, thereby decreasing assay costs and time, while maximizing the value of the information. I have developed the “aop” package which uses backdoor analysis of causal networks to identify these minimal sets of key events that are sufficient for making causal predictions. Availability and Implementation: The source and binary are available online through the Bioconductor project (http://www.bioconductor.org/) as an R package titled “aop”. The R/Bioconductor package runs within the R statistical environment. The package has functions that can take pathways (as directed graphs) formatted as a Cytoscape JSON file as input, or pathways can be represented as directed graphs using the R/Bioconductor “graph” package. The “aop” package has functions that can perform backdoor analysis to identify the minimal set of key events for making causal predictions. Contact: burgoon.lyle@epa.gov This paper describes an R/Bioconductor package that was developed to facilitate the identification of key events within an AOP that are the minimal set of sufficient key events that need to be tested/monit
Geometric derivations of minimal sets of sufficient multiview constraints
Thomas, Orrin H.; Oshel, Edward R.
2012-01-01
Geometric interpretations of four of the most common determinant formulations of multiview constraints are given, showing that they all enforce the same geometry and that all of the forms commonly in use in the machine vision community are a subset of a more general form. Generalising the work of Yi Ma yields a new general 2 x 2 determinant trilinear and 3 x 3 determinant quadlinear. Geometric descriptions of degenerate multiview constraints are given, showing that it is necessary, but insufficient, that the determinant equals zero. Understanding the degeneracies leads naturally into proofs for minimum sufficient sets of bilinear, trilinear and quadlinear constraints for arbitrary numbers of conjugate observations.
100 New Impact Crater Sites Found on Mars
NASA Astrophysics Data System (ADS)
Kennedy, M. R.; Malin, M. C.
2009-12-01
Recent observations constrain the formation of 100 new impact sites on Mars over the past decade; 19 of these were found using the Mars Global Surveyor Mars Orbiter Camera (MOC), and the other 81 have been identified since 2006 using the Mars Reconnaissance Orbiter Context Camera (CTX). Every 6 meter/pixel CTX image is examined upon receipt and, where they overlap images of 0.3-240 m/pixel scale acquired by the same or other Mars-orbiting spacecraft, we look for features that may have changed. New impact sites are initially identified by the presence of a new dark spot or cluster of dark spots in a CTX image. Such spots may be new impact craters, or result from the effect of impact blasts on the dusty surface. In some (generally rare) cases, the crater is sufficiently large to be resolved in the CTX image. In most cases, however, the crater(s) cannot be seen. These are tentatively designated as “candidate” new impact sites, and the CTX team then creates an opportunity for the MRO spacecraft to point its cameras off-nadir and requests that the High Resolution Imaging Science Experiment (HiRISE) team obtain an image of ~0.3 m/pixel to confirm whether a crater or crater cluster is present. It is clear even from cursory examination that the CTX observations are areographically biased to dusty, higher albedo areas on Mars. All but 3 of the 100 new impact sites occur on surfaces with Lambert albedo values in excess of 23.5%. Our initial study of MOC images greatly benefited from the initial global observations made in one month in 1999, creating a baseline date from which we could start counting new craters. The global coverage by MRO Mars Color Imager is more than a factor of 4 poorer in resolution than the MOC Wide Angle camera and does not offer the opportunity for global analysis. Instead, we must rely on partial global coverage and global coverage that has taken years to accumulate; thus we can only treat impact rates statistically. We subdivide the total data set of 100 sites into 3 sets of observations: the original 19 MOC observations found in a survey of 15% of the planet, craters found only in CTX repeat coverage of 7% of Mars, and the remaining 69 craters found in a data set covering 40% of the planet. Using the mean interval between the latest observation preceding the impact and the first observation showing the impact for these groups of craters, we determine that the cratering rate is roughly 8 ± 6 × 10⁻⁷ craters/km²/yr for craters greater than ~1 m diameter. The cratering rate on Mars is sufficiently high to warrant consideration both for scientific studies and as a hazard to future exploration. Impacts are sufficiently frequent to act as seismic sources for studies of shallow crustal structure, if a seismic network is sufficiently dispersed and long-lived. Impacts large enough to provide information about deep interior structure are rare but probably occur on a decadal timescale. As recently noted in Science, new craters can be used to probe the distribution of subsurface ice and to provide samples from shallow depths that otherwise require meter-scale drilling systems. There is a finite probability that visitors to Mars for more than a month or two will hear or feel the effects of a nearby impact.
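As a rough, back-of-the-envelope illustration of the rate arithmetic described (the survey fractions and crater counts follow the text, but the mean intervals and the Mars surface area are placeholder assumptions, not the authors' values), an estimate combines each observation group's crater count, the surface fraction covered, and the mean interval between the bounding images.

    # Back-of-the-envelope cratering-rate estimate; intervals and the area value are placeholders.
    MARS_AREA_KM2 = 1.44e8  # approximate total surface area of Mars (assumed round number)

    # (crater count, fraction of surface surveyed, assumed mean interval between bounding images in years)
    groups = [
        (19, 0.15, 2.0),
        (12, 0.07, 1.5),
        (69, 0.40, 1.5),
    ]

    total_craters = sum(n for n, _, _ in groups)
    # Effective exposure (area x time) summed over groups, treating impacts as a Poisson process.
    exposure = sum(frac * MARS_AREA_KM2 * interval for _, frac, interval in groups)
    rate = total_craters / exposure
    print(f"~{rate:.1e} craters / km^2 / yr")   # the abstract quotes roughly 8e-7 with large uncertainty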
Optimizing measurement geometry for seismic near-surface full waveform inversion
NASA Astrophysics Data System (ADS)
Nuber, André; Manukyan, Edgar; Maurer, Hansruedi
2017-09-01
Full waveform inversion (FWI) is an increasingly popular tool for analysing seismic data. Current practice is to record seismic data sets that are suitable for reflection processing, that is, a very dense spatial sampling and a high fold are required. Using tools from optimized experimental design (ED), we demonstrate that such a dense sampling is not necessary for FWI purposes. With a simple noise-free acoustic example, we show that only a few suitably selected source positions are required for computing high-quality images. A second, more extensive study includes elastic FWI with noise-contaminated data and free-surface boundary conditions on a typical near-surface setup, where surface waves play a crucial role. The study reveals that it is sufficient to employ a receiver spacing in the order of the minimum shear wavelength expected. Furthermore, we show that horizontally oriented sources and multicomponent receivers are the preferred option for 2-D elastic FWI, and we found that with a small number of carefully selected source positions, results similar to those obtained with as many sources as receivers can be achieved. For the sake of simplicity, we assume in our simulations that the full data information content is available, but data pre-processing and the presence of coloured noise may impose restrictions. Our ED procedure requires an a priori subsurface model as input, but tests indicate that a relatively crude approximation to the true model is adequate. A further prerequisite of our ED algorithm is that a suitable inversion strategy exists that accounts for the non-linearity of the FWI problem. Here, we assume that such a strategy is available. For the sake of simplicity, we consider only 2-D FWI experiments in this study, but our ED algorithm is sufficiently general and flexible, such that it can be adapted to other configurations, such as crosshole, vertical seismic profiling or 3-D surface setups, also including larger scale exploration experiments. It also offers interesting possibilities for analysing existing large-scale data sets that are too large to be inverted. With our methodology, it is possible to extract a small (and thus invertible) subset that offers similar information content as the full data set.
37 CFR 1.103 - Suspension of action by the Office.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Code. A request for deferral of examination under this paragraph must include the publication fee set... include: (1) A showing of good and sufficient cause for suspension of action; and (2) The fee set forth in... the period of suspension, and include the processing fee set forth in § 1.17(i). (c) Limited...
The Effects of Age and Set Size on the Fast Extraction of Egocentric Distance
Gajewski, Daniel A.; Wallin, Courtney P.; Philbeck, John W.
2016-01-01
Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance. Regardless of viewing duration, distance judgments were more accurate (less biased towards underestimation) when multiple potential targets were presented, suggesting that the relative angular declinations between the objects are an additional source of useful information. Distance judgments were more precise with additional viewing time, but the benefit did not depend on set size and accuracy did not improve with longer viewing durations. The overall pattern suggests that distance can be efficiently derived from direction for floor-level objects. Controlling for age-related differences in the viewing time needed to support detection was sufficient to support distal localization but only when brief and longer glimpse trials were interspersed. Information extracted from longer glimpse trials presumably supported performance on subsequent trials when viewing time was more limited. This outcome suggests a particularly important role for prior visual experience in distance judgments for older observers. PMID:27398065
Pretest Calculations of Temperature Changes for Field Thermal Conductivity Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
N.S. Brodsky
A large volume fraction of the potential monitored geologic repository at Yucca Mountain may reside in the Tptpll (Tertiary, Paintbrush Group, Topopah Spring Tuff, crystal poor, lower lithophysal) lithostratigraphic unit. This unit is characterized by voids, or lithophysae, which range in size from centimeters to meters. A series of thermal conductivity field tests are planned in the Enhanced Characterization of the Repository Block (ECRB) Cross Drift. The objective of the pretest calculation described in this document is to predict changes in temperatures in the surrounding rock for these tests for a given heater power and a set of thermal transport properties. The calculation can be extended, as described in this document, to obtain thermal conductivity, thermal capacitance (density × heat capacity, J·m⁻³·K⁻¹), and thermal diffusivity from the field data. The work has been conducted under the "Technical Work Plan For: Testing and Monitoring" (BSC 2001). One of the outcomes of this analysis is to determine the initial output of the heater. This heater output must be sufficiently high that it will provide results in a reasonably short period of time (within several weeks or a month) and be sufficiently high that the heat increase is detectable by the instruments employed in the test. The test will be conducted in stages and heater output will be step increased as the test progresses. If the initial temperature is set too high, the experiment will not have as many steps and thus fewer thermal conductivity data points will result.
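The report's own pretest calculation is not reproduced here, but the standard infinite line-source solution, ΔT(r, t) = (q / 4πk) · E1(r² / 4αt), is one simple textbook way to relate a heater's power per unit length q to the temperature rise in the surrounding rock and hence to back out conductivity k and diffusivity α from field data. The Python sketch below uses that generic formula with invented numbers; it is not the report's model.

    # Generic infinite line-source sketch (not the report's calculation); all numbers are illustrative.
    import numpy as np
    from scipy.special import exp1  # exponential integral E1

    def line_source_dT(q_per_m, k, alpha, r, t):
        """Temperature rise (K) at radius r (m) and time t (s) from a line heater of q_per_m W/m."""
        return q_per_m / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t))

    q = 200.0        # W per metre of heater (assumed)
    k = 1.8          # W/(m.K), trial thermal conductivity
    alpha = 8e-7     # m^2/s, trial thermal diffusivity
    t = np.array([1, 7, 30]) * 86400.0   # 1 day, 1 week, 1 month
    print(line_source_dT(q, k, alpha, r=1.0, t=t))
    # Fitting measured dT(r, t) to this expression yields k and alpha, and hence thermal capacitance k/alpha.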
Planet-driven Spiral Arms in Protoplanetary Disks. I. Formation Mechanism
NASA Astrophysics Data System (ADS)
Bae, Jaehan; Zhu, Zhaohuan
2018-06-01
Protoplanetary disk simulations show that a single planet can excite more than one spiral arm, possibly explaining the recent observations of multiple spiral arms in some systems. In this paper, we explain the mechanism by which a planet excites multiple spiral arms in a protoplanetary disk. Contrary to previous speculations, the formation of both primary and additional arms can be understood as a linear process when the planet mass is sufficiently small. A planet resonantly interacts with epicyclic oscillations in the disk, launching spiral wave modes around the Lindblad resonances. When a set of wave modes is in phase, they can constructively interfere with each other and create a spiral arm. More than one spiral arm can form because such constructive interference can occur for different sets of wave modes, with the exact number and launching position of the spiral arms being dependent on the planet mass as well as the disk temperature profile. Nonlinear effects become increasingly important as the planet mass increases, resulting in spiral arms with stronger shocks and thus larger pitch angles. This is found to be common for both primary and additional arms. When a planet has a sufficiently large mass (≳3 thermal masses for (h/r)_p = 0.1), only two spiral arms form interior to its orbit. The wave modes that would form a tertiary arm for smaller mass planets merge with the primary arm. Improvements in our understanding of the formation of spiral arms can provide crucial insights into the origin of observed spiral arms in protoplanetary disks.
Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by both the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Relay discovery and selection for large-scale P2P streaming
Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun
2017-01-01
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384
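A hedged sketch of the two-phase idea described above (coordinate-based shortlisting followed by direct probing): the coordinate distance, candidate list, and probe function below are hypothetical stand-ins, not the paper's DHT-based implementation.

    # Two-phase relay selection sketch: shortlist by coordinate distance, refine with direct RTT probes.
    # The euclidean ICS-style coordinates and probe_rtt() are hypothetical placeholders.
    import math
    import random

    def coord_distance(a, b):
        return math.dist(a, b)   # coarse, indirect estimate of network proximity

    def probe_rtt(peer):
        # Placeholder for a direct measurement (e.g. a ping); simulated here with noise.
        return peer["true_rtt_ms"] * random.uniform(0.9, 1.1)

    def select_relay(peers, my_coord, shortlist_size=5):
        # Phase 1: cheap, indirect -- keep only the few candidates closest in coordinate space.
        shortlist = sorted(peers, key=lambda p: coord_distance(p["coord"], my_coord))[:shortlist_size]
        # Phase 2: accurate, direct -- probe the shortlist and pick the lowest measured RTT.
        return min(shortlist, key=probe_rtt)

    random.seed(2)
    peers = [{"id": i,
              "coord": (random.uniform(0, 100), random.uniform(0, 100)),
              "true_rtt_ms": random.uniform(10, 200)} for i in range(1000)]
    print(select_relay(peers, my_coord=(50.0, 50.0))["id"])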
Electrohydrodynamically driven large-area liquid ion sources
Pregenzer, Arian L.
1988-01-01
A large-area liquid ion source comprises means for generating, over a large area of the surface of a liquid, an electric field of a strength sufficient to induce emission of ions from a large area of said liquid. Large areas in this context are those distinct from emitting areas in unidimensional emitters.
Samuel A. Cushman; Erin L. Landguth; Curtis H. Flather
2012-01-01
Aim: The goal of this study was to evaluate the sufficiency of the network of protected lands in the U.S. northern Rocky Mountains in providing protection for habitat connectivity for 105 hypothetical organisms. A large proportion of the landscape...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... supply that is sufficiently large, and a cash market sufficiently liquid, to render any contract traded... market. (2) The commodities that meet the criteria of paragraph (a)(1) of this section are: (i) The...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... supply that is sufficiently large, and a cash market sufficiently liquid, to render any contract traded... market. (2) The commodities that meet the criteria of paragraph (a)(1) of this section are: (i) The...
NASA Astrophysics Data System (ADS)
Schmid, David; Spekkens, Robert W.; Wolfe, Elie
2018-06-01
Within the framework of generalized noncontextuality, we introduce a general technique for systematically deriving noncontextuality inequalities for any experiment involving finitely many preparations and finitely many measurements, each of which has a finite number of outcomes. Given any fixed sets of operational equivalences among the preparations and among the measurements as input, the algorithm returns a set of noncontextuality inequalities whose satisfaction is necessary and sufficient for a set of operational data to admit of a noncontextual model. Additionally, we show that the space of noncontextual data tables always defines a polytope. Finally, we provide a computationally efficient means for testing whether any set of numerical data admits of a noncontextual model, with respect to any fixed operational equivalences. Together, these techniques provide complete methods for characterizing arbitrary noncontextuality scenarios, both in theory and in practice. Because a quantum prepare-and-measure experiment admits of a noncontextual model if and only if it admits of a positive quasiprobability representation, our techniques also determine the necessary and sufficient conditions for the existence of such a representation.
Translations from Kommunist, Number 13, September 1978
1978-10-30
programmed machine tool here is merely a component of a more complex reprogrammable technological system. This includes the robot machine tools with...sufficient possibilities for changing technological operations and processes and automated technological lines. The reprogrammable automated sets will...simulate the possibilities of such sets. A new technological level will be developed in industry related to reprogrammable automated sets, their design
Rapid manufacturing of metallic Molds for parts in Automobile
NASA Astrophysics Data System (ADS)
Zhang, Renji; Xu, Da; Liu, Yuan; Yan, Xudong; Yan, Yongnian
1998-03-01
The recent research of RPM (Rapid Prototyping Manufacturing) in our lab has been focused on the rapid creation of alloyed cast iron (ACI) molds. There are a lot of machinery parts in an automobile, so a lot of metallic molds are needed in the automobile industry. A new mold manufacturing technology has been proposed. A new large scale RP machine has been set up in our lab now. Then rapid prototypes could be manufactured by means of laminated object manufacturing (LOM) technology. The molds for automobile parts have been produced by ceramic shell precision casting. An example is a drawing mold for automobile cover parts. Sufficient precision and surface roughness have been obtained. It is proved that this is a new kind of technology. Work supported by the National Science Foundation of China.
IMNN: Information Maximizing Neural Networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets to a manageable number of summaries vastly simplifies both frequentist and Bayesian inference, but heuristically chosen summaries may inadvertently miss important information. The summaries derived automatically by IMNNs support likelihood-free inference and are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
Zhu, Huayang; Ricote, Sandrine; Coors, W Grover; Kee, Robert J
2015-01-01
A model-based interpretation of measured equilibrium conductivity and conductivity relaxation is developed to establish thermodynamic, transport, and kinetics parameters for multiple charged defect conducting (MCDC) ceramic materials. The present study focuses on 10% yttrium-doped barium zirconate (BZY10). In principle, using the Nernst-Einstein relationship, equilibrium conductivity measurements are sufficient to establish thermodynamic and transport properties. However, in practice it is difficult to establish unique sets of properties using equilibrium conductivity alone. Combining equilibrium and conductivity-relaxation measurements serves to significantly improve the quantitative fidelity of the derived material properties. The models are developed using a Nernst-Planck-Poisson (NPP) formulation, which enables the quantitative representation of conductivity relaxations caused by very large changes in oxygen partial pressure.
A Fractal Dimension Survey of Active Region Complexity
NASA Technical Reports Server (NTRS)
McAteer, R. T. James; Gallagher, Peter; Ireland, Jack
2005-01-01
A new approach to quantifying the magnetic complexity of active regions using a fractal dimension measure is presented. This fully-automated approach uses full disc MDI magnetograms of active regions from a large data set (2742 days of the SoHO mission; 9342 active regions) to compare the calculated fractal dimension to both Mount Wilson classification and flare rate. The main Mount Wilson classes exhibit no distinct fractal dimension distribution, suggesting a self-similar nature of all active regions. Solar flare productivity exhibits an increase in both the frequency and GOES X-ray magnitude of flares from regions with higher fractal dimensions. Specifically, lower threshold fractal dimensions of 1.2 and 1.25 exist as necessary, but not sufficient, requirements for an active region to produce M- and X-class flares, respectively.
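For illustration only (the survey's exact fractal-dimension estimator is not specified here), a standard box-counting estimate on a binary mask of strong-field pixels could look like the Python sketch below; the synthetic input and the thresholding step are assumptions.

    # Box-counting fractal dimension of a binary 2-D mask (illustrative; not the survey's exact estimator).
    import numpy as np

    def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
        counts = []
        n = mask.shape[0]
        for s in box_sizes:
            occupied = 0
            for i in range(0, n, s):
                for j in range(0, n, s):
                    if mask[i:i + s, j:j + s].any():
                        occupied += 1
            counts.append(occupied)
        # Slope of log N(s) versus log(1/s) gives the box-counting dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(3)
    magnetogram = rng.normal(size=(256, 256))     # stand-in for an active-region magnetogram cut-out
    mask = np.abs(magnetogram) > 2.5              # assumed threshold on |field strength|
    print(box_counting_dimension(mask))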
Mating motives are neither necessary nor sufficient to create the beauty premium.
Hafenbrädl, Sebastian; Dana, Jason
2017-01-01
Mating motives lead decision makers to favor attractive people, but this favoritism is not sufficient to create a beauty premium in competitive settings. Further, economic approaches to discrimination, when correctly characterized, could neatly accommodate the experimental and field evidence of a beauty premium. Connecting labor economics and evolutionary psychology is laudable, but mating motives do not explain the beauty premium.
ERIC Educational Resources Information Center
Papadopoulos, Timothy C.; Kendeou, Panayiota; Spanoudis, George
2012-01-01
Theory-driven conceptualizations of phonological abilities in a sufficiently transparent language (Greek) were examined in children ages 5 years 8 months to 7 years 7 months, by comparing a set of a priori models. Specifically, the fit of 9 different models was evaluated, as defined by the Number of Factors (1 to 3; represented by rhymes,…
NASA Technical Reports Server (NTRS)
Wasson, J. T.; Kallemeyn, G. W.
2002-01-01
We present new data for iron meteorites that are members of group IAB or are closely related to this large group, and we have also reevaluated some of our earlier data for these irons. In the past it was not possible to distinguish IAB and IIICD irons on the basis of their positions on element-Ni diagrams. We now find that plotting the new and revised data yields six sets of compact fields on element-Au diagrams, each set corresponding to a compositional group. The largest set includes the majority (approximately equal to 70) of irons previously designated IA; we christened this set the IAB main group. The remaining five sets we designate subgroups within the IAB complex. Three of these subgroups have Au contents similar to the main group, and form parallel trends in most element-Ni diagrams. The groups originally designated IIIC and IIID are two of these subgroups: they are now well resolved from each other and from the main group. The other low-Au subgroup has Ni contents just above the main group. Two other IAB subgroups have appreciably higher Au contents than the main group and show weaker compositional links to it. We have named these five subgroups on the basis of their Au and Ni contents. The three subgroups having Au contents similar to the main group are the low-Au (L) subgroups, and the other two are the high-Au (H) subgroups. The Ni contents are designated high (H), medium (M), or low (L). Thus the old group IIID is now the sLH subgroup, and the old group IIIC is the sLM subgroup. In addition, eight irons assigned to two grouplets plot between sLL and sLM on most element-Au diagrams. A large number (27) of related irons plot outside these compact fields but nonetheless appear to be sufficiently related to also be included in the IAB complex.
Fine-resolution conservation planning with limited climate-change information.
Shah, Payal; Mallory, Mindy L; Ando, Amy W; Guntenspergen, Glenn R
2017-04-01
Climate-change induced uncertainties in future spatial patterns of conservation-related outcomes make it difficult to implement standard conservation-planning paradigms. A recent study translates Markowitz's risk-diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate-change scenarios for carrying out fine-resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk-return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate-change information and full climate-change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate-change forecasts such that the best possible risk-return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate-change information could be reduced by 17% relative to other iterative approaches. © 2016 Society for Conservation Biology.
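A minimal Python sketch of the Markowitz-style allocation idea referenced above, with invented expected returns and covariances and a generic optimizer; this is not the authors' iterative spatial portfolio method. It chooses subregion budget shares that minimize outcome variance across climate scenarios subject to a target expected conservation return.

    # Minimal mean-variance sketch for allocating a conservation budget over subregions.
    # Expected returns and the covariance matrix are invented; this is not the paper's iterative method.
    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([1.0, 1.4, 0.8, 1.1])        # expected conservation return per subregion (assumed)
    cov = np.array([[0.10, 0.02, 0.01, 0.00],
                    [0.02, 0.20, 0.03, 0.01],
                    [0.01, 0.03, 0.08, 0.02],
                    [0.00, 0.01, 0.02, 0.15]])  # outcome covariance across climate scenarios (assumed)
    target = 1.1                                # required expected return

    def variance(w):
        return w @ cov @ w

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: w @ mu - target})
    res = minimize(variance, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4, constraints=cons)
    print(res.x.round(3))    # budget shares across the four subregions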
Power and money in cluster randomized trials: when is it worth measuring a covariate?
Moerbeek, Mirjam
2006-08-15
The power to detect a treatment effect in cluster randomized trials can be increased by increasing the number of clusters. An alternative is to include covariates into the regression model that relates treatment condition to outcome. In this paper, formulae are derived in order to evaluate both strategies on the basis of their costs. It is shown that the strategy that uses covariates is more cost-efficient in detecting a treatment effect when the costs to measure these covariates are small and the correlation between the covariates and outcome is sufficiently large. The minimum required correlation depends on the cluster size, and the costs to recruit a cluster and to measure the covariate, relative to the costs to recruit a person. Measuring a covariate that varies at the person level only is recommended when cluster sizes are small and the costs to recruit and measure a cluster are large. Measuring a cluster level covariate is recommended when cluster sizes are large and the costs to recruit and measure a cluster are small. An illustrative example shows the use of the formulae in a practical setting. Copyright 2006 John Wiley & Sons, Ltd.
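The paper's formulae are not reproduced here; the Python sketch below only illustrates the kind of cost comparison described, using the standard design-effect expression for a two-arm cluster trial and assuming a covariate that reduces residual outcome variance by a factor (1 − ρ²). All costs, the ICC, and ρ are placeholders, and the design effect is simplistically treated as unchanged by the covariate.

    # Illustrative comparison (not the paper's formulae): reach a target variance of the treatment
    # effect either by recruiting more clusters or by also measuring a person-level covariate.
    import math

    sigma2, icc, m = 1.0, 0.05, 20                   # outcome variance, intraclass correlation, cluster size
    c_cluster, c_person, c_cov = 500.0, 20.0, 5.0    # recruitment / measurement costs (placeholders)
    rho = 0.5                                        # assumed covariate-outcome correlation
    target_var = 0.02                                # required Var of the estimated treatment effect

    def clusters_needed(resid_var):
        # Var(effect) ~ 2 * resid_var * (1 + (m-1)*icc) / (K * m), solved for clusters per arm K.
        return math.ceil(2 * resid_var * (1 + (m - 1) * icc) / (m * target_var))

    def total_cost(K, measure_covariate):
        per_person = c_person + (c_cov if measure_covariate else 0.0)
        return 2 * K * (c_cluster + m * per_person)

    K_plain = clusters_needed(sigma2)
    K_cov = clusters_needed(sigma2 * (1 - rho ** 2))   # covariate shrinks residual variance (simplification)
    print("no covariate:  ", K_plain, "clusters/arm, cost", total_cost(K_plain, False))
    print("with covariate:", K_cov, "clusters/arm, cost", total_cost(K_cov, True))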
Hand coverage by alcohol-based handrub varies: Volume and hand size matter.
Zingg, Walter; Haidegger, Tamas; Pittet, Didier
2016-12-01
Visitors of an infection prevention and control conference performed hand hygiene with 1, 2, or 3 mL ultraviolet light-traced alcohol-based handrub. Coverage of palms, dorsums, and fingertips were measured by digital images. Palms of all hand sizes were sufficiently covered when 2 mL was applied, dorsums of medium and large hands were never sufficiently covered. Palmar fingertips were sufficiently covered when 2 or 3 mL was applied, and dorsal fingertips were never sufficiently covered. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kruse, Holger; Grimme, Stefan
2012-04-01
A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave-function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's targets are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30% that proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model chemistry yields MAD=0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD=2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 on a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the author's website.
The Stellar IMF from Isothermal MHD Turbulence
NASA Astrophysics Data System (ADS)
Haugbølle, Troels; Padoan, Paolo; Nordlund, Åke
2018-02-01
We address the turbulent fragmentation scenario for the origin of the stellar initial mass function (IMF), using a large set of numerical simulations of randomly driven supersonic MHD turbulence. The turbulent fragmentation model successfully predicts the main features of the observed stellar IMF assuming an isothermal equation of state without any stellar feedback. As a test of the model, we focus on the case of a magnetized isothermal gas, neglecting stellar feedback, while pursuing a large dynamic range in both space and timescales covering the full spectrum of stellar masses from brown dwarfs to massive stars. Our simulations represent a generic 4 pc region within a typical Galactic molecular cloud, with a mass of 3000 M⊙ and an rms velocity 10 times the isothermal sound speed and 5 times the average Alfvén velocity, in agreement with observations. We achieve a maximum resolution of 50 au and a maximum duration of star formation of 4.0 Myr, forming up to a thousand sink particles whose mass distribution closely matches the observed stellar IMF. A large set of medium-size simulations is used to test the sink particle algorithm, while larger simulations are used to test the numerical convergence of the IMF and the dependence of the IMF turnover on physical parameters predicted by the turbulent fragmentation model. We find a clear trend toward numerical convergence and strong support for the model predictions, including the initial time evolution of the IMF. We conclude that the physics of isothermal MHD turbulence is sufficient to explain the origin of the IMF.
The force on the flex: Global parallelism and portability
NASA Technical Reports Server (NTRS)
Jordan, H. F.
1986-01-01
A parallel programming methodology, called the force, supports the construction of programs to be executed in parallel by an unspecified, but potentially large, number of processes. The methodology was originally developed on a pipelined, shared memory multiprocessor, the Denelcor HEP, and embodies the primitive operations of the force in a set of macros which expand into multiprocessor Fortran code. A small set of primitives is sufficient to write large parallel programs, and the system has been used to produce 10,000-line programs in computational fluid dynamics. The level of complexity of the force primitives is intermediate. It is high enough to mask detailed architectural differences between multiprocessors but low enough to give the user control over performance. The system is being ported to a medium scale multiprocessor, the Flex/32, which is a 20 processor system with a mixture of shared and local memory. Memory organization and the type of processor synchronization supported by the hardware on the two machines lead to some differences in efficient implementations of the force primitives, but the user interface remains the same. An initial implementation was done by retargeting the macros to Flexible Computer Corporation's ConCurrent C language. Subsequently, the macros were modified to directly produce the system calls which form the basis for ConCurrent C. The implementation of the Fortran based system is in step with Flexible Computer Corporation's implementation of a Fortran system in the parallel environment.
Automatic physical inference with information maximizing neural networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
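A heavily simplified sketch of the Fisher-information criterion that such networks optimize (not the published IMNN implementation): for a compressed summary whose mean response to the parameter is estimated by finite differences over simulations, the Fisher information is F = (∂μ/∂θ)ᵀ C⁻¹ (∂μ/∂θ), and training maximizes ln det F. The toy "network" below is a fixed linear compression of an invented model in which the data mean scales with the parameter.

    # Toy sketch of the Fisher-information objective behind IMNNs (not the published implementation).
    # The "network" is a fixed linear map; in the real method its weights are trained to maximize ln det F.
    import numpy as np

    rng = np.random.default_rng(4)
    theta0, dtheta, n_sims, n_data = 1.0, 0.1, 2000, 50
    template = rng.normal(size=n_data)

    def simulate(theta, n):
        return theta * template + rng.normal(0.0, 1.0, size=(n, n_data))   # toy model: mean scales with theta

    def compress(d, w):
        return d @ w                                                       # summary t = w . d (one number)

    w = rng.normal(size=n_data)                        # stand-in for trainable network weights
    s_fid = compress(simulate(theta0, n_sims), w)
    s_plus = compress(simulate(theta0 + dtheta, n_sims), w)
    s_minus = compress(simulate(theta0 - dtheta, n_sims), w)

    dmu = (s_plus.mean() - s_minus.mean()) / (2 * dtheta)   # finite-difference derivative of the summary mean
    C = s_fid.var(ddof=1)                                   # summary covariance at the fiducial parameter
    F = dmu ** 2 / C                                        # one-parameter Fisher information
    print("Fisher information of this linear summary:", F)  # IMNN training would maximize ln det F over w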
NASA Astrophysics Data System (ADS)
Zhang, Shun-Rong; Holt, John M.; Erickson, Philip J.; Goncharenko, Larisa P.
2018-05-01
Perrone and Mikhailov (2017, https://doi.org/10.1002/2017JA024193) and Mikhailov et al. (2017, https://doi.org/10.1002/2017JA023909) have recently examined thermospheric and ionospheric long-term trends using a data set of four thermospheric parameters (Tex, [O], [N2], and [O2]) and solar EUV flux. These data were derived from a single ionospheric parameter, foF1, using a nonlinear fitting procedure involving a photochemical model for the F1 peak. The F1 peak is assumed at the transition height ht with the linear recombination for atomic oxygen ions being equal to the quadratic recombination for molecular ions. This procedure has a number of obvious problems that are not addressed or not sufficiently justified. The potentially large ambiguities and biases in derived parameters make them unsuitable for precise quantitative ionospheric and thermospheric long-term trend studies. Furthermore, we assert that the conclusions of Perrone and Mikhailov (2017, https://doi.org/10.1002/2017JA024193) regarding incoherent scatter radar (ISR) ion temperature analysis for long-term trend studies are incorrect and in particular are based on a misunderstanding of the nature of the incoherent scatter radar measurement process. Large ISR data sets remain a consistent and statistically robust method for determining long term secular plasma temperature trends.
A simple parametric model observer for quality assurance in computer tomography
NASA Astrophysics Data System (ADS)
Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.
2018-04-01
Model observers are mathematical classifiers that are used for the quality assessment of imaging systems such as computer tomography. The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, the recording of these large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer that is based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10–15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images obtained from a low-contrast phantom on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.
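As a simplified, non-Bayesian illustration of scoring a binary detection task with a model observer (here the nonprewhitening matched filter mentioned above), the Python sketch computes the observer statistic on simulated signal-present and signal-absent images and estimates the AUC with the Mann-Whitney statistic; the phantom, noise model, and signal template are invented, and this is not the paper's parametric Bayesian estimator.

    # Simplified model-observer AUC sketch (nonprewhitening matched filter + Mann-Whitney AUC).
    # Images, noise level, and signal template are invented.
    import numpy as np

    rng = np.random.default_rng(5)
    n_pix, n_img = 32, 200
    yy, xx = np.mgrid[:n_pix, :n_pix]
    signal = np.exp(-(((xx - 16) ** 2 + (yy - 16) ** 2) / 20.0))   # low-contrast disc-like signal

    def images(with_signal):
        noise = rng.normal(0.0, 1.0, size=(n_img, n_pix, n_pix))
        return noise + (0.3 * signal if with_signal else 0.0)

    def npwmf_scores(imgs):
        return (imgs * signal).sum(axis=(1, 2))                    # template-matched observer statistic

    s1 = npwmf_scores(images(True))
    s0 = npwmf_scores(images(False))
    auc = (s1[:, None] > s0[None, :]).mean()                       # Mann-Whitney estimate of the AUC
    print(round(auc, 3))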
The Die Is Cast: Precision Electrophilic Modifications Contribute to Cellular Decision Making.
Long, Marcus J C; Aye, Yimon
2016-10-02
This perspective sets out to critically evaluate the scope of reactive electrophilic small molecules as unique chemical signal carriers in biological information transfer cascades. We consider these electrophilic cues as a new volatile cellular currency and compare them to canonical signaling circulation such as phosphate in terms of chemical properties, biological specificity, sufficiency, and necessity. The fact that nonenzymatic redox sensing properties are found in proteins undertaking varied cellular tasks suggests that electrophile signaling is a moonlighting phenomenon manifested within a privileged set of sensor proteins. The latest interrogations into these on-target electrophilic responses set forth a new horizon in the molecular mechanism of redox signal propagation wherein direct low-occupancy electrophilic modifications on a single sensor target are biologically sufficient to drive functional redox responses with precision timing. We detail how the various mechanisms through which redox signals function could contribute to their interesting phenotypic responses, including hormesis.
Hellmuth, Marc; Wieseke, Nicolas; Lechner, Marcus; Lenhof, Hans-Peter; Middendorf, Martin; Stadler, Peter F.
2015-01-01
Phylogenomics heavily relies on well-curated sequence data sets that comprise, for each gene, exclusively 1:1 orthologs. Paralogs are treated as a dangerous nuisance that has to be detected and removed. We show here that this severe restriction of the data sets is not necessary. Building upon recent advances in mathematical phylogenetics, we demonstrate that gene duplications convey meaningful phylogenetic information and allow the inference of plausible phylogenetic trees, provided orthologs and paralogs can be distinguished with a degree of certainty. Starting from tree-free estimates of orthology, cograph editing can sufficiently reduce the noise to find correct event-annotated gene trees. The information in gene trees can then directly be translated into constraints on the species trees. Although the resolution is very poor for individual gene families, we show that genome-wide data sets are sufficient to generate fully resolved phylogenetic trees, even in the presence of horizontal gene transfer. PMID:25646426
NASA Technical Reports Server (NTRS)
Flynn, Clare; Pickering, Kenneth E.; Crawford, James H.; Lamsol, Lok; Krotkov, Nickolay; Herman, Jay; Weinheimer, Andrew; Chen, Gao; Liu, Xiong; Szykman, James;
2014-01-01
To investigate the ability of column (or partial column) information to represent surface air quality, results of linear regression analyses between surface mixing ratio data and column abundances for O3 and NO2 are presented for the July 2011 Maryland deployment of the DISCOVER-AQ mission. Data collected by the P-3B aircraft, ground-based Pandora spectrometers, Aura/OMI satellite instrument, and simulations for July 2011 from the CMAQ air quality model during this deployment provide a large and varied data set, allowing this problem to be approached from multiple perspectives. O3 columns typically exhibited a statistically significant and high degree of correlation with surface data (R² > 0.64) in the P-3B data set, a moderate degree of correlation (0.16 < R² < 0.64) in the CMAQ data set, and a low degree of correlation (R² < 0.16) in the Pandora and OMI data sets. NO2 columns typically exhibited a low to moderate degree of correlation with surface data in each data set. The results of linear regression analyses for O3 exhibited smaller errors relative to the observations than NO2 regressions. These results suggest that O3 partial column observations from future satellite instruments with sufficient sensitivity to the lower troposphere can be meaningful for surface air quality analysis.
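A minimal sketch of the regression bookkeeping described (synthetic column and surface values, not the study's data handling): fit a linear regression of surface mixing ratio on column abundance and bin the resulting R² into the low/moderate/high categories quoted above.

    # Sketch: regress surface values on column values and bin R^2 as in the text (synthetic data).
    import numpy as np
    from scipy.stats import linregress

    def r2_category(r2):
        if r2 < 0.16:
            return "low"
        if r2 < 0.64:
            return "moderate"
        return "high"

    rng = np.random.default_rng(6)
    column = rng.uniform(20, 60, size=200)                  # e.g. partial-column O3 (arbitrary units)
    surface = 0.8 * column + rng.normal(0, 5, size=200)     # synthetic surface mixing ratios

    fit = linregress(column, surface)
    r2 = fit.rvalue ** 2
    print(r2_category(r2), round(r2, 2))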
NASA Technical Reports Server (NTRS)
Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)
2000-01-01
A pulsating form of hydrodynamic instability has recently been shown to arise during liquid-propellant deflagration in those parameter regimes where the pressure-dependent burning rate is characterized by a negative pressure sensitivity. This type of instability can coexist with the classical cellular, or Landau, form of hydrodynamic instability, with the occurrence of either dependent on whether the pressure sensitivity is sufficiently large or small in magnitude. For the inviscid problem, it has been shown that, when the burning rate is realistically allowed to depend on temperature as well as pressure, sufficiently large values of the temperature sensitivity relative to the pressure sensitivity cause the pulsating form of hydrodynamic instability to become dominant. In that regime, steady, planar burning becomes intrinsically unstable to pulsating disturbances whose wave numbers are sufficiently small. This analysis is extended to the fully viscous case, where it is shown that although viscosity is stabilizing for intermediate and larger wave number perturbations, the intrinsic pulsating instability for small wave numbers remains. Under these conditions, liquid-propellant combustion is predicted to be characterized by large unsteady cells along the liquid/gas interface.
NASA Astrophysics Data System (ADS)
Sieck, Paul; Woodruff, Simon; Stuber, James; Romero-Talamas, Carlos; Rivera, William; You, Setthivoine; Card, Alexander
2015-11-01
Additive manufacturing (or 3D printing) is now becoming sufficiently accurate with a large range of materials for use in printing sensors needed universally in fusion energy research. Decreasing production cost and significantly lowering design time of energy subsystems would realize significant cost reduction for standard diagnostics commonly obtained through research grants. There is now a well-established set of plasma diagnostics, but these are expensive since they are often highly complex and require customization, and they sometimes pace the project. Additive manufacturing (3D printing) is developing rapidly, including open source designs. Basic components can be printed for (in some cases) less than 1/100th the cost of conventional manufacturing. We have examined the impact that AM can have on plasma diagnostic cost by taking 15 separate diagnostics through an engineering design using Conventional Manufacturing (CM) techniques to determine costs of components and labor costs associated with getting the diagnostic to work as intended. With that information in hand, we set about optimizing the design to exploit the benefits of AM. Work performed under DOE Contract DE-SC0011858.
Health Care Merged With Senior Housing: Description and Evaluation of a Successful Program.
Barry, Theresa Teta
2017-01-01
Objective: This article describes and evaluates a successful partnership between a large health care organization and housing for seniors. The program provides on-site, primary care visits by a physician and a nurse in addition to intensive social services to residents in an affordable senior housing apartment building located in Pennsylvania. Per Donabedian's "Structure-Process-Outcome" model, the program demonstrated positive health care outcomes for its participants via a prescribed structure. To provide guidance for replication in similar settings, we qualitatively evaluated the processes by which successful outcomes were obtained. Methods: With program structures in place and outcomes measured, this case study collected and analyzed qualitative information taken from key informant interviews on care processes involved in the program. Themes were extracted from semistructured interviews and used to describe the processes that helped and hindered the program. Results and Discussion: Common processes were identified across respondents; however, the nuanced processes that lead to successful outcomes suggest that defined structures and processes may not be sufficient to produce similar outcomes in other settings. Further research is needed to determine the program's replicability and policy implications.
Globalising Synthetic Nitrogen: The Interwar Inauguration of a New Industry.
Travis, Anthony S
2017-02-01
The most spectacular development in industrial chemistry during the early twentieth century concerned the capture of atmospheric nitrogen by the Haber-Bosch high-pressure ammonia process at the German chemical enterprise Badische Anilin- & Soda-Fabrik (BASF), of Ludwigshafen. This firm, confident that its complex process could not be readily imitated, set out to dominate the global nitrogen fertiliser market. The response was the emergence of rival high-pressure ammonia processes in Western Europe, the United States, and Japan during the 1920s. This article is an historical appreciation of the settings in which several countries, often driven by concerns over national security, were encouraged to develop and adopt non-BASF high-pressure nitrogen capture technologies. Moreover, synthetic ammonia was at the forefront of large-scale strategic self-sufficiency and state sponsored programmes in three countries - Italy, Russia, and Japan - at the very same time when the newer technologies became available. As a result, the chemical industries of these nations, under the influences of fascism, communism, and colonial modernisation projects, began moving into the top ranks.
Risking Your Life without a Second Thought: Intuitive Decision-Making and Extreme Altruism
Rand, David G.; Epstein, Ziv G.
2014-01-01
When faced with the chance to help someone in mortal danger, what is our first response? Do we leap into action, only later considering the risks to ourselves? Or must instinctive self-preservation be overcome by will-power in order to act? We investigate this question by examining the testimony of Carnegie Hero Medal Recipients (CHMRs), extreme altruists who risked their lives to save others. We collected published interviews with CHMRs where they described their decisions to help. We then had participants rate the intuitiveness versus deliberativeness of the decision-making process described in each CHMR statement. The statements were judged to be overwhelmingly dominated by intuition; to be significantly more intuitive than a set of control statements describing deliberative decision-making; and to not differ significantly from a set of intuitive control statements. This remained true when restricting to scenarios in which the CHMRs had sufficient time to reflect before acting if they had so chosen. Text-analysis software found similar results. These findings suggest that high-stakes extreme altruism may be largely motivated by automatic, intuitive processes. PMID:25333876
The association of neoplasms and HIV infection in the correctional setting.
Baillargeon, Jacques; Pollock, Brad H; Leach, Charles T; Gao, Shou-Jiang
2004-05-01
HIV-associated immunosuppression has been linked to an increased risk of a number of cancers, including Kaposi sarcoma (KS), non-Hodgkin's lymphoma (NHL), and invasive cervical cancer. Because prison inmates constitute one of the populations with the highest HIV/AIDS prevalence in the US, understanding the link between HIV infection and cancer in the correctional setting holds particular public health relevance. The study population consisted of 336,668 Texas Department of Criminal Justice inmates who were incarcerated, for any duration, between 1 January 1999 and 31 December 2001. Inmates diagnosed with HIV infection exhibited elevated rates of KS, NHL, anal cancer, and Hodgkin's disease, after adjusting for age and race. The elevated rates of cancer among HIV-infected individuals, particularly prison inmates, may be mediated, in part, by high-risk behaviours. HIV-associated risk behaviours, including unsafe sexual practices, injection drug use, and prostitution, may be associated with cancer-related risk behaviours, such as smoking, excessive alcohol consumption, and poor diet. It will be important for future investigators to examine the association between HIV infection and cancer risk with sufficiently large study cohorts and appropriate longitudinal designs.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes, with a spatial resolution of >20 km. The optics and filters are emphasized.
LiPISC: A Lightweight and Flexible Method for Privacy-Aware Intersection Set Computation
Huang, Shiyong; Ren, Yi; Choo, Kim-Kwang Raymond
2016-01-01
Privacy-aware intersection set computation (PISC) can be modeled as secure multi-party computation. The basic idea is to compute the intersection of input sets without leaking privacy. Furthermore, PISC should be sufficiently flexible to recommend approximate intersection items. In this paper, we reveal two previously unpublished attacks against PISC, which can be used to reveal and link one input set to another input set, resulting in privacy leakage. We coin these as Set Linkage Attack and Set Reveal Attack. We then present a lightweight and flexible PISC scheme (LiPISC) and prove its security (including against Set Linkage Attack and Set Reveal Attack). PMID:27326763
Yin, J Kevin; Heywood, Anita E; Georgousakis, Melina; King, Catherine; Chiu, Clayton; Isaacs, David; Macartney, Kristine K
2017-09-01
Universal childhood vaccination is a potential solution to reduce seasonal influenza burden. We reviewed systematically the literature on "herd"/indirect protection from vaccinating children aged 6 months to 17 years against influenza. Of 30 studies included, 14 (including 1 cluster randomized controlled trial [cRCT]) used live attenuated influenza vaccine, 11 (7 cRCTs) used inactivated influenza vaccine, and 5 (1 cRCT) compared both vaccine types. Twenty of 30 studies reported statistically significant indirect protection effectiveness (IPE), with point estimates ranging from 4% to 66%. Meta-regression suggests that studies with high quality and/or sufficiently large sample size are more likely to report significant IPE. In meta-analyses of 6 cRCTs with full randomization (rated as moderate quality overall), significant IPE was found in 1 cRCT in closely connected communities where school-aged children were vaccinated: 60% (95% confidence interval [CI], 41%-72%; I2 = 0%; N = 2326) against laboratory-confirmed influenza, and in 3 household cRCTs in which preschool-aged children were vaccinated: 22% (95% CI, 1%-38%; I2 = 0%; N = 1903) against acute respiratory infections or influenza-like illness. Significant IPE was also reported in a large-scale cRCT (N = 8510) that was not fully randomized, and in 3 ecological studies (N > 10,000) of moderate quality, including a 36% reduction in influenza-related mortality among the elderly in a Japanese school-based program. Data on IPE in other settings are heterogeneous and lack the power to support a firm conclusion. The available evidence suggests that influenza vaccination of children confers indirect protection in some but not all settings. Robust, large-scale studies are required to better quantify the indirect protection from vaccinating children for different settings/endpoints. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.
Using Sentiment Analysis to Observe How Science is Communicated
NASA Astrophysics Data System (ADS)
Topping, David; Illingworth, Sam
2016-04-01
'Citizen Science' and 'Big data' are terms that are currently ubiquitous in the field of science communication. Whilst opinions differ as to what exactly constitutes a 'citizen', and how much information is needed in order for a data set to be considered truly 'big', what is apparent is that both of these fields have the potential to help revolutionise not just the way that science is communicated, but also the way that it is conducted. However, both the generation of sufficient data and the efficiency of analysing those data once they have been collected need to be taken into account. Sentiment Analysis is the process of determining whether a piece of writing is positive, negative or neutral. The process of sentiment analysis can be automated, provided that an adequate training set has been used and that the nuances associated with a particular topic have been accounted for. Given the large amounts of data that are generated by social media posts, and the often-opinionated nature of these posts, they present an ideal source of data both to train with and then to scrutinize using sentiment analysis. In this work we will demonstrate how sentiment analysis can be used to examine a large number of Twitter posts, and how a training set can be established to ensure consistency and accuracy in the automation. Following an explanation of the process, we will demonstrate how automated sentiment analysis can be used to categorise opinions in relation to a large-scale science festival, and will discuss whether sentiment analysis can be used to tell us if there is a bias in these communications. We will also investigate whether sentiment analysis can be used to replace more traditional, and invasive, evaluation strategies, and how this approach can then be adopted to investigate other topics, both within scientific communication and in the wider scientific context.
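As a rough illustration of the kind of automation described above, the sketch below trains a tiny supervised sentiment classifier on hand-labelled posts and applies it to unseen ones. It is not the authors' pipeline; the example posts, the labels, and the scikit-learn model choice are assumptions made purely for illustration.

```python
# Minimal sketch: learn sentiment categories from a small labelled training set,
# then classify new, unlabelled posts. Training posts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "Loved the science festival, the talks were brilliant",   # positive
    "Queues everywhere and the demos were cancelled",         # negative
    "The festival is on again this weekend",                  # neutral
]
train_labels = ["positive", "negative", "neutral"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_posts, train_labels)

new_posts = ["Great atmosphere at the festival tonight"]
print(model.predict(new_posts))  # predicted sentiment category per post
```

In practice the training set would need to be far larger and curated for the topic-specific nuances the abstract mentions before the automated labels could be trusted.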
Liu, Feng; Tai, An; Lee, Percy; Biswas, Tithi; Ding, George X.; El Naqa, Isaam; Grimm, Jimm; Jackson, Andrew; Kong, Feng-Ming (Spring); LaCouture, Tamara; Loo, Billy; Miften, Moyed; Solberg, Timothy; Li, X Allen
2017-01-01
Purpose To analyze pooled clinical data using different radiobiological models and to understand the relationship between biologically effective dose (BED) and tumor control probability (TCP) for stereotactic body radiotherapy (SBRT) of early-stage non-small cell lung cancer (NSCLC). Methods and Materials The clinical data of 1-, 2-, 3-, and 5-year actuarial or Kaplan-Meier TCP from 46 selected studies in the literature were collected for SBRT of NSCLC. The TCP data were separated for Stage T1 and T2 tumors if possible, otherwise collected for combined stages. BED was calculated at isocenters using six radiobiological models. For each model, the independent model parameters were determined from a fit to the TCP data using the least chi-square (χ2) method, with either one set of parameters regardless of tumor stage or two sets for T1 and T2 tumors separately. Results The fits to the clinical data yield consistent results of large α/β ratios of about 20 Gy for all models investigated. The regrowth model, which accounts for tumor repopulation and heterogeneity, leads to a better fit to the data than the other five models, for which the fits were indistinguishable from one another. The models based on the fitted parameters predict that T2 tumors require approximately an additional 1 Gy of physical dose at the isocenter per fraction (≤5 fractions) to achieve the optimal TCP when compared to T1 tumors. Conclusion This systematic analysis of a large set of published clinical data using different radiobiological models shows that local TCP for SBRT of early-stage NSCLC has a strong dependence on BED, with large α/β ratios of about 20 Gy. The six models predict that a BED (calculated with α/β of 20) of 90 Gy is sufficient to achieve TCP ≥ 95%. Among the models considered, the regrowth model leads to a better fit to the clinical data. PMID:27871671
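For background, the linear-quadratic expression below is the standard starting point for BED calculations of this kind; the cited regrowth model augments it with repopulation terms, and the exact forms used by the six models are not reproduced here.

```latex
% Standard linear-quadratic BED for n fractions of dose d per fraction.
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
```

As an illustrative calculation with the fitted α/β ≈ 20 Gy, a hypothetical 5 × 12 Gy schedule gives BED = 60 × (1 + 12/20) = 96 Gy, which lies above the ~90 Gy level the abstract associates with TCP ≥ 95%.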
Clinical evaluation of a miniaturized desktop breath hydrogen analyzer.
Duan, L P; Braden, B; Clement, T; Caspary, W F; Lembcke, B
1994-10-01
A small desktop electrochemical H2 analyzer (EC-60-Hydrogen monitor) was compared with a stationary electrochemical H2 monitor (GMI-exhaled Hydrogen monitor). The EC-60-H2 monitor shows a high degree of precision for repetitive (n = 10) measurements of standard hydrogen mixtures (CV 1-8%). Its response time for completion of measurement is shorter than that of the GMI-exhaled H2 monitor (37 sec vs 53 sec; p < 0.0001), while reset times are almost identical (54 sec vs 51 sec; n.s.). In a clinical setting, breath H2 concentrations measured with the EC-60-H2 monitor and the GMI-exhaled H2 monitor were in excellent agreement, with a linear correlation (Y = 1.12X + 1.022, r2 = 0.9617, n = 115). With increasing H2 concentrations the EC-60-H2 monitor required larger sample volumes to maintain sufficient precision, and sample volumes greater than 200 ml were required at H2 concentrations > 30 ppm. For routine gastrointestinal function testing, the EC-60-H2 monitor is a satisfactory, reliable, easy-to-use and inexpensive desktop breath hydrogen analyzer, whereas in patients who have difficulty cooperating (children, people with severe pulmonary insufficiency), special care has to be taken to obtain sufficiently large breath samples.
Note on a modified return period scale for upper-truncated unbounded flood distributions
NASA Astrophysics Data System (ADS)
Bardsley, Earl
2017-01-01
Probability distributions unbounded to the right often give good fits to annual discharge maxima. However, all hydrological processes are in reality constrained by physical upper limits, though not necessarily well defined. A result of this contradiction is that for sufficiently small exceedance probabilities the unbounded distributions anticipate flood magnitudes which are impossibly large. This raises the question of whether displayed return period scales should, as is current practice, have some given number of years, such as 500 years, as the terminating rightmost tick-point. This carries the implication that the scale might be extended indefinitely to the right with a corresponding indefinite increase in flood magnitude. An alternative, suggested here, is to introduce a sufficiently high upper truncation point to the flood distribution and modify the return period scale accordingly. The rightmost tick-mark then becomes infinity, corresponding to the upper truncation point discharge. The truncation point is likely to be set as being above any physical upper bound and the return period scale will change only slightly over all practical return periods of operational interest. The rightmost infinity tick point is therefore proposed, not as an operational measure, but rather to signal in flood plots that the return period scale does not extend indefinitely to the right.
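One way to make the proposed scale concrete (an illustrative formulation, not necessarily the exact construction in the note) is to renormalise the fitted annual-maximum distribution F at an upper truncation discharge x_u and define the return period from the truncated distribution:

```latex
F_T(x) = \frac{F(x)}{F(x_u)}, \quad x \le x_u,
\qquad
T(x) = \frac{1}{1 - F_T(x)} \;\longrightarrow\; \infty \ \text{as } x \to x_u .
```

Because F(x_u) is very close to 1 when x_u is set above any plausible physical bound, T(x) differs only slightly from the conventional return period 1/(1 - F(x)) over practical return periods, consistent with the behaviour described in the abstract.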
Seifan, Merav; Seifan, Tal; Schiffers, Katja; Jeltsch, Florian; Tielbörger, Katja
2013-02-01
Disturbances' role in shaping communities is well documented but highly disputed. We suggest replacing the overused two-trait trade-off approach with a functional group scheme, constructed from combinations of four key traits that represent four classes of species' responses to disturbances. Using model results and field observations from sites affected by two highly different disturbances, we demonstrated that popular dichotomous trade-offs are not sufficient to explain community dynamics, even if some emerge under certain conditions. Without disturbances, competition was only sufficient to predict species survival but not relative success, which required some escape mechanism (e.g., long-term dormancy). With highly predictable and large-scale disturbances, successful species showed a combination of high individual tolerance to disturbance and, more surprisingly, high competitive ability. When disturbances were less predictable, high individual tolerance and long-term seed dormancy were favored, due to higher environmental uncertainty. Our study demonstrates that theories relying on a small number of predefined trade-offs among traits (e.g., competition-colonization trade-off) may lead to unrealistic results. We suggest that the understanding of disturbance-community relationships can be significantly improved by employing sets of relevant trait assemblies instead of the currently common approach in which trade-offs are assumed in advance.
Vitamin D Deficiency in India: Prevalence, Causalities and Interventions
G, Ritu; Gupta, Ajay
2014-01-01
Vitamin D deficiency prevails in epidemic proportions all over the Indian subcontinent, with a prevalence of 70%–100% in the general population. In India, widely consumed food items such as dairy products are rarely fortified with vitamin D. Indian socioreligious and cultural practices do not facilitate adequate sun exposure, thereby negating potential benefits of plentiful sunshine. Consequently, subclinical vitamin D deficiency is highly prevalent in both urban and rural settings, and across all socioeconomic and geographic strata. Vitamin D deficiency is likely to play an important role in the very high prevalence of rickets, osteoporosis, cardiovascular diseases, diabetes, cancer and infections such as tuberculosis in India. Fortification of staple foods with vitamin D is the most viable population based strategy to achieve vitamin D sufficiency. Unfortunately, even in advanced countries like USA and Canada, food fortification strategies with vitamin D have been only partially effective and have largely failed to attain vitamin D sufficiency. This article reviews the status of vitamin D nutrition in the Indian subcontinent and also the underlying causes for this epidemic. Implementation of population based educational and interventional strategies to combat this scourge require recognition of vitamin D deficiency as a public health problem by the governing bodies so that healthcare funds can be allocated appropriately. PMID:24566435
Drury, Suzanne; Salter, Janine; Baehner, Frederick L; Shak, Steven; Dowsett, Mitch
2010-06-01
To determine whether 0.6 mm cores of formalin-fixed paraffin-embedded (FFPE) tissue, as commonly used to construct immunohistochemical tissue microarrays, may be a valid alternative to tissue sections as source material for quantitative real-time PCR-based transcriptional profiling of breast cancer. Four matched 0.6 mm cores of invasive breast tumour and two 10 microm whole sections were taken from eight FFPE blocks. RNA was extracted and reverse transcribed, and TaqMan assays were performed on the 21 genes of the Oncotype DX Breast Cancer assay. Expression of the 16 recurrence-related genes was normalised to the set of five reference genes, and the recurrence score (RS) was calculated. RNA yield was lower from 0.6 mm cores than from 10 microm whole sections, but was still more than sufficient to perform the assay. RS and single gene data from cores were highly comparable with those from whole sections (RS p=0.005). Greater variability was seen between cores than between sections. FFPE sections are preferable to 0.6 mm cores for RNA profiling in order to maximise RNA yield and to allow for standard histopathological assessment. However, 0.6 mm cores are sufficient and would be appropriate to use for large cohort studies.
Chuang, Emmeline; Dill, Janette; Morgan, Jennifer Craft; Konrad, Thomas R
2012-01-01
Objective To identify high-performance work practices (HPWP) associated with high frontline health care worker (FLW) job satisfaction and perceived quality of care. Methods Cross-sectional survey data from 661 FLWs in 13 large health care employers were collected between 2007 and 2008 and analyzed using both regression and fuzzy-set qualitative comparative analysis. Principal Findings Supervisor support and team-based work practices were identified as necessary for high job satisfaction and high quality of care but not sufficient to achieve these outcomes unless implemented in tandem with other HPWP. Several configurations of HPWP were associated with either high job satisfaction or high quality of care. However, only one configuration of HPWP was sufficient for both: the combination of supervisor support, performance-based incentives, team-based work, and flexible work. These findings were consistent even after controlling for FLW demographics and employer type. Additional research is needed to clarify whether HPWP have differential effects on quality of care in direct care versus administrative workers. Conclusions High-performance work practices that integrate FLWs in health care teams and provide FLWs with opportunities for participative decision making can positively influence job satisfaction and perceived quality of care, but only when implemented as bundles of complementary policies and practices. PMID:22224858
Bonding between oxide ceramics and adhesive cement systems: a systematic review.
Papia, Evaggelia; Larsson, Christel; du Toit, Madeleine; Vult von Steyern, Per
2014-02-01
The following aims were set for this systematic literature review: (a) to make an inventory of existing methods to achieve bondable surfaces on oxide ceramics and (b) to evaluate which methods might provide sufficient bond strength. Current literature of in vitro studies regarding bond strength achieved using different surface treatments on oxide ceramics in combination with adhesive cement systems was selected from PubMed and systematically analyzed and completed with reference tracking. The total number of publications included for aim a was 127 studies, 23 of which were used for aim b. The surface treatments are divided into seven main groups: as-produced, grinding/polishing, airborne particle abrasion, surface coating, laser treatment, acid treatment, and primer treatment. There are large variations, making comparison of the studies difficult. An as-produced surface of oxide ceramic needs to be surface treated to achieve durable bond strength. Abrasive surface treatment and/or silica-coating treatment with the use of primer treatment can provide sufficient bond strength for bonding oxide ceramics. This conclusion, however, needs to be confirmed by clinical studies. There is no universal surface treatment. Consideration should be given to the specific materials to be cemented and to the adhesive cement system to be used. Copyright © 2013 Wiley Periodicals, Inc.
Femtosecond X-ray protein nanocrystallography.
Chapman, Henry N; Fromme, Petra; Barty, Anton; White, Thomas A; Kirian, Richard A; Aquila, Andrew; Hunter, Mark S; Schulz, Joachim; DePonte, Daniel P; Weierstall, Uwe; Doak, R Bruce; Maia, Filipe R N C; Martin, Andrew V; Schlichting, Ilme; Lomb, Lukas; Coppola, Nicola; Shoeman, Robert L; Epp, Sascha W; Hartmann, Robert; Rolles, Daniel; Rudenko, Artem; Foucar, Lutz; Kimmel, Nils; Weidenspointner, Georg; Holl, Peter; Liang, Mengning; Barthelmess, Miriam; Caleman, Carl; Boutet, Sébastien; Bogan, Michael J; Krzywinski, Jacek; Bostedt, Christoph; Bajt, Saša; Gumprecht, Lars; Rudek, Benedikt; Erk, Benjamin; Schmidt, Carlo; Hömke, André; Reich, Christian; Pietschner, Daniel; Strüder, Lothar; Hauser, Günter; Gorke, Hubert; Ullrich, Joachim; Herrmann, Sven; Schaller, Gerhard; Schopper, Florian; Soltau, Heike; Kühnel, Kai-Uwe; Messerschmidt, Marc; Bozek, John D; Hau-Riege, Stefan P; Frank, Matthias; Hampton, Christina Y; Sierra, Raymond G; Starodub, Dmitri; Williams, Garth J; Hajdu, Janos; Timneanu, Nicusor; Seibert, M Marvin; Andreasson, Jakob; Rocker, Andrea; Jönsson, Olof; Svenda, Martin; Stern, Stephan; Nass, Karol; Andritschke, Robert; Schröter, Claus-Dieter; Krasniqi, Faton; Bott, Mario; Schmidt, Kevin E; Wang, Xiaoyu; Grotjohann, Ingo; Holton, James M; Barends, Thomas R M; Neutze, Richard; Marchesini, Stefano; Fromme, Raimund; Schorb, Sebastian; Rupp, Daniela; Adolph, Marcus; Gorkhover, Tais; Andersson, Inger; Hirsemann, Helmut; Potdevin, Guillaume; Graafsma, Heinz; Nilsson, Björn; Spence, John C H
2011-02-03
X-ray crystallography provides the vast majority of macromolecular structures, but the success of the method relies on growing crystals of sufficient size. In conventional measurements, the necessary increase in X-ray dose to record data from crystals that are too small leads to extensive damage before a diffraction signal can be recorded. It is particularly challenging to obtain large, well-diffracting crystals of membrane proteins, for which fewer than 300 unique structures have been determined despite their importance in all living cells. Here we present a method for structure determination where single-crystal X-ray diffraction 'snapshots' are collected from a fully hydrated stream of nanocrystals using femtosecond pulses from a hard-X-ray free-electron laser, the Linac Coherent Light Source. We prove this concept with nanocrystals of photosystem I, one of the largest membrane protein complexes. More than 3,000,000 diffraction patterns were collected in this study, and a three-dimensional data set was assembled from individual photosystem I nanocrystals (∼200 nm to 2 μm in size). We mitigate the problem of radiation damage in crystallography by using pulses briefer than the timescale of most damage processes. This offers a new approach to structure determination of macromolecules that do not yield crystals of sufficient size for studies using conventional radiation sources or are particularly sensitive to radiation damage.
Land management: data availability and process understanding for global change studies.
Erb, Karl-Heinz; Luyssaert, Sebastiaan; Meyfroidt, Patrick; Pongratz, Julia; Don, Axel; Kloster, Silvia; Kuemmerle, Tobias; Fetzel, Tamara; Fuchs, Richard; Herold, Martin; Haberl, Helmut; Jones, Chris D; Marín-Spiotta, Erika; McCallum, Ian; Robertson, Eddy; Seufert, Verena; Fritz, Steffen; Valade, Aude; Wiltshire, Andrew; Dolman, Albertus J
2017-02-01
In the light of daunting global sustainability challenges such as climate change, biodiversity loss and food security, improving our understanding of the complex dynamics of the Earth system is crucial. However, large knowledge gaps related to the effects of land management persist, in particular those human-induced changes in terrestrial ecosystems that do not result in land-cover conversions. Here, we review the current state of knowledge of ten common land management activities for their biogeochemical and biophysical impacts, the level of process understanding and data availability. Our review shows that ca. one-tenth of the ice-free land surface is under intense human management, half under medium and one-fifth under extensive management. Based on our review, we cluster these ten management activities into three groups: (i) management activities for which data sets are available, and for which a good knowledge base exists (cropland harvest and irrigation); (ii) management activities for which sufficient knowledge on biogeochemical and biophysical effects exists but robust global data sets are lacking (forest harvest, tree species selection, grazing and mowing harvest, N fertilization); and (iii) land management practices with severe data gaps concomitant with an unsatisfactory level of process understanding (crop species selection, artificial wetland drainage, tillage and fire management and crop residue management, an element of crop harvest). Although we identify multiple impediments to progress, we conclude that the current status of process understanding and data availability is sufficient to advance with incorporating management in, for example, Earth system or dynamic vegetation models in order to provide a systematic assessment of their role in the Earth system. This review contributes to a strategic prioritization of research efforts across multiple disciplines, including land system research, ecological research and Earth system modelling. © 2016 John Wiley & Sons Ltd.
Adaptable Constrained Genetic Programming: Extensions and Applications
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.
2005-01-01
An evolutionary algorithm applies evolution-based principles to problem solving. To solve a problem, the user defines the space of potential solutions, the representation space. Sample solutions are encoded in a chromosome-like structure. The algorithm maintains a population of such samples, which undergo simulated evolution by means of mutation, crossover, and survival-of-the-fittest principles. Genetic Programming (GP) uses tree-like chromosomes, providing a very rich representation suitable for many problems of interest. GP has been successfully applied to a number of practical problems such as learning Boolean functions and designing hardware circuits. To apply GP to a problem, the user needs to define the actual representation space by defining the atomic functions and terminals labeling the trees. The sufficiency principle requires that the label set be sufficient to build the desired solution trees. The closure principle allows the labels to mix in any arity-consistent manner. To satisfy both principles, the user is often forced to provide a large label set, with ad hoc interpretations or penalties to deal with undesired local contexts. This unfortunately enlarges the actual representation space, and thus usually slows down the search. In the past few years, three different methodologies have been proposed to allow the user to relax the closure principle by providing means to define, and to process, constraints on mixing the labels in the trees. Last summer we proposed a new methodology to further alleviate the problem by discovering local heuristics for building quality solution trees. A pilot system was implemented last summer and tested throughout the year. This summer we have implemented a new revision and produced a User's Manual so that the pilot system can be made available to other practitioners and researchers. We have also designed, and partly implemented, a larger system capable of dealing with much more powerful heuristics.
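As a toy illustration of the representation described above (not the author's ACGP system), the sketch below grows random GP trees from a user-defined function set and terminal set; under the closure principle, any label may fill any arity-consistent slot. The label sets here are hypothetical, and constraint handling and heuristics are omitted.

```python
# Minimal sketch: random GP tree generation from a function set and a terminal set.
import random

FUNCTIONS = {"+": 2, "*": 2, "not": 1}   # label -> arity (hypothetical label set)
TERMINALS = ["x", "y", "1"]

def random_tree(depth=3):
    """Grow a tree; closure lets any label appear in any arity-consistent slot."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    f = random.choice(list(FUNCTIONS))
    return [f] + [random_tree(depth - 1) for _ in range(FUNCTIONS[f])]

print(random_tree())   # e.g. ['+', 'x', ['not', '1']]
```

Constrained GP variants restrict which labels may appear as children of which parents, shrinking the search space relative to this unconstrained generator.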
Flexible Meta-Regression to Assess the Shape of the Benzene–Leukemia Exposure–Response Curve
Vlaanderen, Jelle; Portengen, Lützen; Rothman, Nathaniel; Lan, Qing; Kromhout, Hans; Vermeulen, Roel
2010-01-01
Background Previous evaluations of the shape of the benzene–leukemia exposure–response curve (ERC) were based on a single set or on small sets of human occupational studies. Integrating evidence from all available studies that are of sufficient quality combined with flexible meta-regression models is likely to provide better insight into the functional relation between benzene exposure and risk of leukemia. Objectives We used natural splines in a flexible meta-regression method to assess the shape of the benzene–leukemia ERC. Methods We fitted meta-regression models to 30 aggregated risk estimates extracted from nine human observational studies and performed sensitivity analyses to assess the impact of a priori assessed study characteristics on the predicted ERC. Results The natural spline showed a supralinear shape at cumulative exposures less than 100 ppm-years, although this model fitted the data only marginally better than a linear model (p = 0.06). Stratification based on study design and jackknifing indicated that the cohort studies had a considerable impact on the shape of the ERC at high exposure levels (> 100 ppm-years) but that predicted risks for the low exposure range (< 50 ppm-years) were robust. Conclusions Although limited by the small number of studies and the large heterogeneity between studies, the inclusion of all studies of sufficient quality combined with a flexible meta-regression method provides the most comprehensive evaluation of the benzene–leukemia ERC to date. The natural spline based on all data indicates a significantly increased risk of leukemia [relative risk (RR) = 1.14; 95% confidence interval (CI), 1.04–1.26] at an exposure level as low as 10 ppm-years. PMID:20064779
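A generic way to write such a spline meta-regression (the basis, weighting, and error structure here are assumptions for illustration, not the authors' exact specification) is:

```latex
\ln \mathrm{RR}_{ij} \;=\; \beta_0 + \sum_{k=1}^{K} \beta_k\, B_k(x_{ij}) \;+\; \varepsilon_{ij},
\qquad
\varepsilon_{ij} \sim N\!\big(0,\ \mathrm{se}_{ij}^{2} + \tau^{2}\big),
```

where x_{ij} is the cumulative benzene exposure (ppm-years) for risk estimate j in study i, the B_k are natural-spline basis functions, se_{ij} is the reported standard error of the log relative risk, and τ² captures between-study heterogeneity.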
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white-noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and can thus be regarded as an accurate engineering approximation that can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
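The two-step recipe described above can be sketched in a few lines; the spectral shape and the exponential target marginal below are arbitrary choices made for illustration and are not taken from the paper.

```python
# Sketch of the two-step construction: colour white Gaussian noise to a target
# spectrum, then map the coloured Gaussian samples to a target marginal via the
# inverse-transform method. Step 2 slightly distorts the spectrum, as noted above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 4096

# Step 1: impose the desired spectral content on white Gaussian noise.
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n)
target_amplitude = np.exp(-(freqs / 0.05) ** 2)            # assumed spectral shape
coloured = np.fft.irfft(np.fft.rfft(white) * target_amplitude, n)
coloured /= coloured.std()                                  # back to unit variance

# Step 2: memoryless inverse transform to the desired marginal distribution.
u = stats.norm.cdf(coloured)              # Gaussian values -> uniforms
samples = stats.expon.ppf(u, scale=2.0)   # uniforms -> target (here exponential) marginal
```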
A novel alignment-free method for detection of lateral genetic transfer based on TF-IDF.
Cong, Yingnan; Chan, Yao-Ban; Ragan, Mark A
2016-07-25
Lateral genetic transfer (LGT) plays an important role in the evolution of microbes. Existing computational methods for detecting genomic regions of putative lateral origin scale poorly to large data. Here, we propose a novel method based on TF-IDF (Term Frequency-Inverse Document Frequency) statistics to detect not only regions of lateral origin, but also their origin and direction of transfer, in sets of hierarchically structured nucleotide or protein sequences. This approach is based on the frequency distributions of k-mers in the sequences. If a set of contiguous k-mers appears sufficiently more frequently in another phyletic group than in its own, we infer that they have been transferred from the first group to the second. We performed rigorous tests of TF-IDF using simulated and empirical datasets. With the simulated data, we tested our method under different parameter settings for sequence length, substitution rate between and within groups and post-LGT, deletion rate, length of transferred region and k size, and found that we can detect LGT events with high precision and recall. Our method performs better than an established method, ALFY, which has high recall but low precision. Our method is efficient, with runtime increasing approximately linearly with sequence length.
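A rough sketch of the underlying k-mer idea (not the authors' implementation) is to score how strongly a k-mer is associated with one phyletic group versus the others; contiguous runs of k-mers that score highly for a foreign group are candidate transferred regions. The toy sequences, group names, and k value below are invented for illustration.

```python
# Toy TF-IDF scoring of k-mers per phyletic group.
from collections import Counter
import math

def kmers(seq, k=8):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# group name -> sequences (toy data; real input is hierarchically structured sets)
groups = {
    "A": ["ACGTACGTACGTAAAT", "ACGTACGTACGTAAGT"],
    "B": ["TTTTGGGGCCCCAAAA", "TTTTGGGGCCCCAAGA"],
}

doc_freq = Counter()
term_freq = {}
for name, seqs in groups.items():
    counts = Counter(km for s in seqs for km in kmers(s))
    term_freq[name] = counts
    doc_freq.update(set(counts))           # number of groups containing each k-mer

def tfidf(kmer, group):
    tf = term_freq[group][kmer]
    idf = math.log(len(groups) / doc_freq[kmer])
    return tf * idf

# A k-mer scoring highly for group A but observed in a group-B sequence would be
# a candidate signal of transfer from A into B.
print(tfidf("ACGTACGT", "A"))
```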
Learning and liking an artificial musical system: Effects of set size and repeated exposure
Loui, Psyche; Wessel, David
2009-01-01
We report an investigation of humans' musical learning ability using a novel musical system. We designed an artificial musical system based on the Bohlen-Pierce scale, a scale very different from Western music. Melodies were composed from chord progressions in the new scale by applying the rules of a finite-state grammar. After exposing participants to sets of melodies, we conducted listening tests to assess learning, including recognition tests, generalization tests, and subjective preference ratings. In Experiment 1, participants were presented with 15 melodies 27 times each. Forced choice results showed that participants were able to recognize previously encountered melodies and generalize their knowledge to new melodies, suggesting internalization of the musical grammar. Preference ratings showed no differentiation among familiar, new, and ungrammatical melodies. In Experiment 2, participants were given 10 melodies 40 times each. Results showed superior recognition but unsuccessful generalization. Additionally, preference ratings were significantly higher for familiar melodies. Results from the two experiments suggest that humans can internalize the grammatical structure of a new musical system following exposure to a sufficiently large set size of melodies, but musical preference results from repeated exposure to a small number of items. This dissociation between grammar learning and preference will be further discussed. PMID:20151034
A field-to-desktop toolchain for X-ray CT densitometry enables tree ring analysis.
De Mil, Tom; Vannoppen, Astrid; Beeckman, Hans; Van Acker, Joris; Van den Bulcke, Jan
2016-06-01
Disentangling tree growth requires more than ring width data only. Densitometry is considered a valuable proxy, yet laborious wood sample preparation and lack of dedicated software limit the widespread use of density profiling for tree ring analysis. An X-ray computed tomography-based toolchain of tree increment cores is presented, which results in profile data sets suitable for visual exploration as well as density-based pattern matching. Two temperate (Quercus petraea, Fagus sylvatica) and one tropical species (Terminalia superba) were used for density profiling using an X-ray computed tomography facility with custom-made sample holders and dedicated processing software. Density-based pattern matching is developed and able to detect anomalies in ring series that can be corrected via interactive software. A digital workflow allows generation of structure-corrected profiles of large sets of cores in a short time span that provide sufficient intra-annual density information for tree ring analysis. Furthermore, visual exploration of such data sets is of high value. The dated profiles can be used for high-resolution chronologies and also offer opportunities for fast screening of lesser studied tropical tree species. © The Author 2016. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The Linear Interaction Energy Method for the Prediction of Protein Stability Changes Upon Mutation
Wickstrom, Lauren; Gallicchio, Emilio; Levy, Ronald M.
2011-01-01
The coupling of protein energetics and sequence changes is a critical aspect of computational protein design, as well as of the understanding of protein evolution, human disease, and drug resistance. In order to study the molecular basis for this coupling, computational tools must be sufficiently accurate and computationally inexpensive enough to handle large amounts of sequence data. We have developed a computational approach based on the linear interaction energy (LIE) approximation to predict the changes in the free energy of the native state induced by a single mutation. This approach was applied to a set of 822 mutations in 10 proteins, which resulted in an average unsigned error of 0.82 kcal/mol and a correlation coefficient of 0.72 between the calculated and experimental ΔΔG values. The method is able to accurately identify destabilizing hot-spot mutations; however, it has difficulty distinguishing between stabilizing and destabilizing mutations due to the distribution of stability changes for the set of mutations used to parameterize the model. In addition, the model also performs quite well in initial tests on a small set of double mutations. Based on these promising results, we can begin to examine the relationship between protein stability and fitness, correlated mutations, and drug resistance. PMID:22038697
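The generic LIE ansatz expresses a free-energy change as a weighted sum of ensemble-averaged interaction-energy differences between two end states; the terms and weights below are the textbook form, not necessarily the exact parameterization fitted in this study.

```latex
\Delta\Delta G \;\approx\; \alpha\,\Delta\langle U_{\mathrm{vdW}}\rangle
\;+\; \beta\,\Delta\langle U_{\mathrm{elec}}\rangle \;+\; \gamma ,
```

where the angle brackets denote averages over simulations of the wild-type and mutant native states, and α, β, and γ are empirical coefficients fitted against experimental ΔΔG values.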
Go Chemistry: A Card Game to Help Students Learn Chemical Formulas
ERIC Educational Resources Information Center
Morris, Todd A.
2011-01-01
For beginning chemistry students, the basic tasks of writing chemical formulas and naming covalent and ionic compounds often pose difficulties and are only sufficiently grasped after extensive practice with homework sets. An enjoyable card game that can replace or, at least, complement nomenclature homework sets is described. "Go Chemistry" is…
Connected Dominating Set Based Topology Control in Wireless Sensor Networks
ERIC Educational Resources Information Center
He, Jing
2012-01-01
Wireless Sensor Networks (WSNs) are now widely used for monitoring and controlling of systems where human intervention is not desirable or possible. Connected Dominating Sets (CDSs) based topology control in WSNs is one kind of hierarchical method to ensure sufficient coverage while reducing redundant connections in a relatively crowded network.…
Aggregation Bias and the Analysis of Necessary and Sufficient Conditions in fsQCA
ERIC Educational Resources Information Center
Braumoeller, Bear F.
2017-01-01
Fuzzy-set qualitative comparative analysis (fsQCA) has become one of the most prominent methods in the social sciences for capturing causal complexity, especially for scholars with small- and medium-"N" data sets. This research note explores two key assumptions in fsQCA's methodology for testing for necessary and sufficient…
Non-Technical Skills in Undergraduate Degrees in Business: Development and Transfer
ERIC Educational Resources Information Center
Jackson, Denise; Hancock, Phil
2010-01-01
The development of discipline-specific skills and knowledge is no longer considered sufficient in graduates of Bachelor level degrees in Business. Higher education providers are becoming increasingly responsible for the development of a generic skill set deemed essential in undergraduates. This required skill set comprises a broad range of…
Gottschling, Marc; Soehner, Sylvia; Zinssmeister, Carmen; John, Uwe; Plötner, Jörg; Schweikert, Michael; Aligizaki, Katerina; Elbrächter, Malte
2012-01-01
The phylogenetic relationships of the Dinophyceae (Alveolata) are not sufficiently resolved at present. The Thoracosphaeraceae (Peridiniales) are the only group of the Alveolata that include members with calcareous coccoid stages; this trait is considered apomorphic. Although the coccoid stage apparently is not calcareous, Bysmatrum has been assigned to the Thoracosphaeraceae based on thecal morphology. We tested the monophyly of the Thoracosphaeraceae using large sets of ribosomal RNA sequence data of the Alveolata including the Dinophyceae. Phylogenetic analyses were performed using Maximum Likelihood and Bayesian approaches. The Thoracosphaeraceae were monophyletic, but included also a number of non-calcareous dinophytes (such as Pentapharsodinium and Pfiesteria) and even parasites (such as Duboscquodinium and Tintinnophagus). Bysmatrum had an isolated and uncertain phylogenetic position outside the Thoracosphaeraceae. The phylogenetic relationships among calcareous dinophytes appear complex, and the assumption of the single origin of the potential to produce calcareous structures is challenged. The application of concatenated ribosomal RNA sequence data may prove promising for phylogenetic reconstructions of the Dinophyceae in future. Copyright © 2011 Elsevier GmbH. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisset, R. N.; Wang, Wenlong; Ticknor, C.
Here, we investigate how single- and multi-vortex-ring states can emerge from a planar dark soliton in three-dimensional (3D) Bose-Einstein condensates (confined in isotropic or anisotropic traps) through bifurcations. We characterize such bifurcations quantitatively using a Galerkin-type approach and find good qualitative and quantitative agreement with our Bogoliubov–de Gennes (BdG) analysis. We also systematically characterize the BdG spectrum of the dark solitons, using perturbation theory, and obtain a quantitative match with our 3D BdG numerical calculations. We then turn our attention to the emergence of single- and multi-vortex-ring states. We systematically capture these as stationary states of the system and quantify their BdG spectra numerically. We found that although the vortex ring may be unstable when bifurcating, its instabilities weaken and may even eventually disappear for sufficiently large chemical potentials and suitable trap settings. For instance, we demonstrate the stability of the vortex ring for an isotropic trap in the large-chemical-potential regime.
Vystavna, Yuliya; Diadin, Dmytro; Huneau, Frédéric
2018-05-01
Stable isotopes of hydrogen (²H) and oxygen (¹⁸O) of the water molecule were used to assess the relationship between precipitation, surface water and groundwater in a large Russia/Ukraine trans-boundary river basin. Precipitation was sampled from November 2013 to February 2015, and surface water and groundwater were sampled during high and low flow in 2014. A local meteoric water line was defined for the Ukrainian part of the basin. The isotopic seasonality in precipitation was evident, with depletion in heavy isotopes in November-March and enrichment in April-October, indicating continental and temperature effects. Surface water was enriched in stable water isotopes from upstream to downstream sites due to progressive evaporation. Stable water isotopes in groundwater indicated that recharge occurs mainly during winter and spring. A one-year data set is probably not sufficient to report the seasonality of groundwater recharge, but this survey can be used to identify the stable water isotope framework in a weakly gauged basin for further hydrological and geochemical studies.
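For context, a meteoric water line is a linear relation between the two isotope ratios; the global line of Craig (1961) is shown below for reference, while the local slope and intercept fitted for this basin are not reproduced here.

```latex
\delta^{2}\mathrm{H} \;=\; a\,\delta^{18}\mathrm{O} + b,
\qquad \text{global meteoric water line: } \delta^{2}\mathrm{H} \approx 8\,\delta^{18}\mathrm{O} + 10 .
```

Both δ values are conventionally reported in per mil relative to VSMOW, and departures of local samples from the local line (e.g., along lower-slope evaporation lines) are what allow the evaporative enrichment and recharge seasonality described above to be diagnosed.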
A particle finite element method for machining simulations
NASA Astrophysics Data System (ADS)
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.
A porous media theory for characterization of membrane blood oxygenation devices
NASA Astrophysics Data System (ADS)
Sano, Yoshihiko; Adachi, Jun; Nakayama, Akira
2013-07-01
A porous media theory has been proposed to characterize oxygen transport processes associated with membrane blood oxygenation devices. For the first time, a rigorous mathematical procedure based on a volume averaging procedure has been presented to derive a complete set of governing equations for the blood flow field and the oxygen concentration field. As a first step towards a complete three-dimensional numerical analysis, a one-dimensional steady case is considered to model typical membrane blood oxygenator scenarios and to validate the derived equations. The relative magnitudes of the oxygen transport terms are made clear by introducing a dimensionless parameter that measures the distance the oxygen gas travels to dissolve in the blood as compared with the blood dispersion length. This dimensionless number is found to be so large that the oxygen diffusion term can be neglected in most cases. A simple linear relationship between the blood flow rate and the total oxygen transfer rate is found for oxygenators with sufficiently large membrane surface areas. Comparison of the one-dimensional analytic results and available experimental data reveals the soundness of the present analysis.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: a) dimensional analysis, and b) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are proved to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.
The use of Argo for validation and tuning of mixed layer models
NASA Astrophysics Data System (ADS)
Acreman, D. M.; Jeffery, C. D.
We present results from validation and tuning of 1-D ocean mixed layer models using data from Argo floats and data from Ocean Weather Station Papa (145°W, 50°N). Model tests at Ocean Weather Station Papa showed that a bulk model could perform well provided it was tuned correctly. The Large et al. [Large, W.G., McWilliams, J.C., Doney, S.C., 1994. Oceanic vertical mixing: a review and a model with a nonlocal boundary layer parameterisation. Rev. Geophys. 32 (November), 363-403] K-profile parameterisation (KPP) model also gave a good representation of mixed layer depth provided the vertical resolution was sufficiently high. Model tests using data from a single Argo float indicated a tendency for the KPP model to deepen insufficiently over an annual cycle, whereas the tuned bulk model and the general ocean turbulence model (GOTM) gave a better representation of mixed layer depth. The bulk model was then tuned using data from a sample of Argo floats and a set of optimum parameters was found; these optimum parameters were consistent with the tuning at OWS Papa.
Survey of large protein complexes D. vulgaris reveals great structural diversity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, B.-G.; Dong, M.; Liu, H.
2009-08-15
An unbiased survey has been made of the stable, most abundant multi-protein complexes in Desulfovibrio vulgaris Hildenborough (DvH) that are larger than Mr ≈ 400 k. The quaternary structures for 8 of the 16 complexes purified during this work were determined by single-particle reconstruction of negatively stained specimens, a success rate ≈10 times greater than that of previous 'proteomic' screens. In addition, the subunit compositions and stoichiometries of the remaining complexes were determined by biochemical methods. Our data show that the structures of only two of these large complexes, out of the 13 in this set that have recognizable functions, can be modeled with confidence based on the structures of known homologs. These results indicate that there is significantly greater variability in the way that homologous prokaryotic macromolecular complexes are assembled than has generally been appreciated. As a consequence, we suggest that relying solely on previously determined quaternary structures for homologous proteins may not be sufficient to properly understand their role in another cell of interest.
Small-scale behavior in distorted turbulent boundary layers at low Reynolds number
NASA Technical Reports Server (NTRS)
Saddoughi, Seyed G.
1994-01-01
During the last three years we have conducted high- and low-Reynolds-number experiments, including hot-wire measurements of the velocity fluctuations, in the test-section-ceiling boundary layer of the 80- by 120-foot Full-Scale Aerodynamics Facility at NASA Ames Research Center, to test the local-isotropy predictions of Kolmogorov's universal equilibrium theory. This hypothesis, which states that at sufficiently high Reynolds numbers the small-scale structures of turbulent motions are independent of large-scale structures and mean deformations, has been used in theoretical studies of turbulence and computational methods such as large-eddy simulation; however, its range of validity in shear flows has been a subject of controversy. The present experiments were planned to enhance our understanding of the local-isotropy hypothesis. Our experiments were divided into two sets. First, measurements were taken at different Reynolds numbers in a plane boundary layer, which is a 'simple' shear flow. Second, experiments were designed to address this question: will our criteria for the existence of local isotropy hold for 'complex' nonequilibrium flows in which extra rates of mean strain are added to the basic mean shear?
Malcom, Jacob W; Webber, Whitney M; Li, Ya-Wei
2016-01-01
Managers of large, complex wildlife conservation programs need information on the conservation status of each of many species to help strategically allocate limited resources. Oversimplifying status data, however, runs the risk of missing information essential to strategic allocation. Conservation status consists of two components, the status of threats a species faces and the species' demographic status. Neither component alone is sufficient to characterize conservation status. Here we present a simple key for scoring threat and demographic changes for species using detailed information provided in free-form textual descriptions of conservation status. This key is easy to use (simple), captures the two components of conservation status without the cost of more detailed measures (sufficient), and can be applied by different personnel to any taxon (consistent). To evaluate the key's utility, we performed two analyses. First, we scored the threat and demographic status of 37 species recently recommended for reclassification under the Endangered Species Act (ESA) and 15 control species, then compared our scores to two metrics used for decision-making and reports to Congress. Second, we scored the threat and demographic status of all non-plant ESA-listed species from Florida (54 spp.), and evaluated scoring repeatability for a subset of those. While the metrics reported by the U.S. Fish and Wildlife Service (FWS) are often consistent with our scores in the first analysis, the results highlight two problems with the oversimplified metrics. First, we show that both metrics can mask underlying demographic declines or threat increases; for example, ∼40% of species not recommended for reclassification had changes in threats or demography. Second, we show that neither metric is consistent with either threats or demography alone, but conflates the two. The second analysis illustrates how the scoring key can be applied to a substantial set of species to understand overall patterns of ESA implementation. The scoring repeatability analysis shows promise, but indicates thorough training will be needed to ensure consistency. We propose that large conservation programs adopt our simple scoring system for threats and demography. By doing so, program administrators will have better information to monitor program effectiveness and guide their decisions.
High Resolution X-ray-Induced Acoustic Tomography
Xiang, Liangzhong; Tang, Shanshan; Ahmad, Moiz; Xing, Lei
2016-01-01
Absorption based CT imaging has been an invaluable tool in medical diagnosis, biology, and materials science. However, CT requires a large set of projection data and high radiation dose to achieve superior image quality. In this letter, we report a new imaging modality, X-ray Induced Acoustic Tomography (XACT), which takes advantage of high sensitivity to X-ray absorption and high ultrasonic resolution in a single modality. A single projection X-ray exposure is sufficient to generate acoustic signals in 3D space because the X-ray generated acoustic waves are of a spherical nature and propagate in all directions from their point of generation. We demonstrate the successful reconstruction of gold fiducial markers with a spatial resolution of about 350 μm. XACT reveals a new imaging mechanism and provides uncharted opportunities for structural determination with X-ray. PMID:27189746
NASA Astrophysics Data System (ADS)
Rajchl, Martin; Abhari, Kamyar; Stirrat, John; Ukwatta, Eranga; Cantor, Diego; Li, Feng P.; Peters, Terry M.; White, James A.
2014-03-01
Multi-center trials provide the unique ability to investigate novel techniques across a range of geographical sites with sufficient statistical power, while the inclusion of multiple operators establishes feasibility under a wider array of clinical environments and work-flows. For this purpose, we introduce a new means of distributing pre-procedural cardiac models for image-guided interventions across a large scale multi-center trial. In this method, a single core facility is responsible for image processing, employing a novel web-based interface for model visualization and distribution. The requirements for such an interface, being WebGL-based, are minimal and well within the realms of accessibility for participating centers. We then demonstrate the accuracy of our approach using a single-center pacemaker lead implantation trial with generic planning models.
Proposed roadmap for overcoming legal and financial obstacles to carbon capture and sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, Wendy; Chohen, Leah; Kostakidis-Lianos, Leah
Many existing proposals either lack sufficient concreteness to make carbon capture and geological sequestration (CCGS) operational or fail to focus on a comprehensive, long term framework for its regulation, thus failing to account adequately for the urgency of the issue, the need to develop immediate experience with large scale demonstration projects, or the financial and other incentives required to launch early demonstration projects. We aim to help fill this void by proposing a roadmap to commercial deployment of CCGS in the United States. This roadmap focuses on the legal and financial incentives necessary for rapid demonstration of geological sequestration in the absence of national restrictions on CO2 emissions. It weaves together existing federal programs and financing opportunities into a set of recommendations for achieving commercial viability of geological sequestration.
Optimum testing of multiple hypotheses in quantum detection theory
NASA Technical Reports Server (NTRS)
Yuen, H. P.; Kennedy, R. S.; Lax, M.
1975-01-01
The problem of specifying the optimum quantum detector in multiple hypotheses testing is considered for application to optical communications. The quantum digital detection problem is formulated as a linear programming problem on an infinite-dimensional space. A necessary and sufficient condition is derived by the application of a general duality theorem specifying the optimum detector in terms of a set of linear operator equations and inequalities. Existence of the optimum quantum detector is also established. The optimality of commuting detection operators is discussed in some examples. The structure and performance of the optimal receiver are derived for the quantum detection of narrow-band coherent orthogonal and simplex signals. It is shown that modal photon counting is asymptotically optimum in the limit of a large signaling alphabet and that the capacity goes to infinity in the absence of a bandwidth limitation.
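For context, the optimality conditions obtained in this line of work are commonly summarized as follows; the notation below (prior probabilities p_j, density operators ρ_j, and detection operators Π_j forming a resolution of the identity) is generic and is a paraphrase, not a quotation of the paper:

\Pi_j \ge 0, \qquad \sum_j \Pi_j = I, \qquad (\Upsilon - p_j \rho_j)\,\Pi_j = 0 \ \ \forall j, \qquad \Upsilon - p_j \rho_j \ge 0 \ \ \forall j, \qquad \Upsilon \equiv \sum_i p_i \rho_i \Pi_i ,

where the detection operators maximize the probability of a correct decision P_c = \sum_j p_j \,\mathrm{Tr}(\rho_j \Pi_j).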
Wains: a pattern-seeking artificial life species.
de Buitléir, Amy; Russell, Michael; Daly, Mark
2012-01-01
We describe the initial phase of a research project to develop an artificial life framework designed to extract knowledge from large data sets with minimal preparation or ramp-up time. In this phase, we evolved an artificial life population with a new brain architecture. The agents have sufficient intelligence to discover patterns in data and to make survival decisions based on those patterns. The species uses diploid reproduction, Hebbian learning, and Kohonen self-organizing maps, in combination with novel techniques such as using pattern-rich data as the environment and framing the data analysis as a survival problem for artificial life. The first generation of agents mastered the pattern discovery task well enough to thrive. Evolution further adapted the agents to their environment by making them a little more pessimistic, and also by making their brains more efficient.
Computer problem-solving coaches for introductory physics: Design and usability studies
NASA Astrophysics Data System (ADS)
Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew
2016-06-01
The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.
Sensitivity of influenza rapid diagnostic tests to H5N1 and 2009 pandemic H1N1 viruses.
Sakai-Tagawa, Yuko; Ozawa, Makoto; Tamura, Daisuke; Le, Mai thi Quynh; Nidom, Chairul A; Sugaya, Norio; Kawaoka, Yoshihiro
2010-08-01
Simple and rapid diagnosis of influenza is useful for making treatment decisions in the clinical setting. Although many influenza rapid diagnostic tests (IRDTs) are available for the detection of seasonal influenza virus infections, their sensitivity for other viruses, such as H5N1 viruses and the recently emerged swine origin pandemic (H1N1) 2009 virus, remains largely unknown. Here, we examined the sensitivity of 20 IRDTs to various influenza virus strains, including H5N1 and 2009 pandemic H1N1 viruses. Our results indicate that the detection sensitivity to swine origin H1N1 viruses varies widely among IRDTs, with some tests lacking sufficient sensitivity to detect the early stages of infection when the virus load is low.
Nutaro, James J.; Fugate, David L.; Kuruganti, Teja; ...
2015-05-27
We describe a cost-effective retrofit technology that uses collective control of multiple rooftop air conditioning units to reduce the peak power consumption of small and medium commercial buildings. The proposed control uses a model of the building and air conditioning units to select an operating schedule for the air conditioning units that maintains a temperature set point subject to a constraint on the number of units that may operate simultaneously. A prototype of this new control system was built and deployed in a large gymnasium to coordinate four rooftop air conditioning units. Based on data collected while operating this prototype, we estimate that the cost savings achieved by reducing peak power consumption is sufficient to repay the cost of the prototype within a year.
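A minimal sketch of the scheduling constraint described above (hypothetical names and a simple greedy rule; the prototype's actual model-based controller is not reproduced here):

def choose_units(zone_temps, setpoint, max_on, deadband=0.5):
    """Greedy sketch: run the units serving the warmest zones first,
    never allowing more than max_on units to operate simultaneously."""
    errors = {unit: temp - setpoint for unit, temp in zone_temps.items()}
    too_warm = [u for u, e in sorted(errors.items(), key=lambda kv: -kv[1]) if e > deadband]
    return set(too_warm[:max_on])

# usage: four rooftop units, at most two running at once
print(choose_units({"rtu1": 24.8, "rtu2": 23.1, "rtu3": 25.6, "rtu4": 22.0},
                   setpoint=23.0, max_on=2))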
Did warfare among ancestral hunter-gatherers affect the evolution of human social behaviors?
Bowles, Samuel
2009-06-05
Since Darwin, intergroup hostilities have figured prominently in explanations of the evolution of human social behavior. Yet whether ancestral humans were largely "peaceful" or "warlike" remains controversial. I ask a more precise question: If more cooperative groups were more likely to prevail in conflicts with other groups, was the level of intergroup violence sufficient to influence the evolution of human social behavior? Using a model of the evolutionary impact of between-group competition and a new data set that combines archaeological evidence on causes of death during the Late Pleistocene and early Holocene with ethnographic and historical reports on hunter-gatherer populations, I find that the estimated level of mortality in intergroup conflicts would have had substantial effects, allowing the proliferation of group-beneficial behaviors that were quite costly to the individual altruist.
Pinto, Nicolas; Doukhan, David; DiCarlo, James J; Cox, David D
2009-11-01
While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphic cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
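The screening loop itself can be illustrated with a hedged sketch (the parameter ranges and the evaluation function are placeholders, not the authors' actual model family or hardware-accelerated evaluation):

import random

def sample_params():
    # hypothetical architecture parameters, for illustration only
    return {"units_per_layer": random.choice([64, 128, 256, 512]),
            "pool_size": random.choice([3, 5, 7, 9]),
            "norm_exponent": random.uniform(0.5, 2.0)}

def evaluate(params):
    # placeholder: build the model instantiation and measure object recognition accuracy
    return random.random()

candidates = [sample_params() for _ in range(1000)]
screened = sorted(((evaluate(p), p) for p in candidates), key=lambda t: t[0])
promising = screened[-10:]   # keep the best-performing instantiations for further analysis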
Clark, Anthony J; Gindin, Tatyana; Zhang, Baoshan; Wang, Lingle; Abel, Robert; Murret, Colleen S; Xu, Fang; Bao, Amy; Lu, Nina J; Zhou, Tongqing; Kwong, Peter D; Shapiro, Lawrence; Honig, Barry; Friesner, Richard A
2017-04-07
Direct calculation of relative binding affinities between antibodies and antigens is a long-sought goal. However, despite substantial efforts, no generally applicable computational method has been described. Here, we describe a systematic free energy perturbation (FEP) protocol and calculate the binding affinities between the gp120 envelope glycoprotein of HIV-1 and three broadly neutralizing antibodies (bNAbs) of the VRC01 class. The protocol has been adapted from successful studies of small molecules to address the challenges associated with modeling protein-protein interactions. Specifically, we built homology models of the three antibody-gp120 complexes, extended the sampling times for large bulky residues, incorporated the modeling of glycans on the surface of gp120, and utilized continuum solvent-based loop prediction protocols to improve sampling. We present three experimental surface plasmon resonance data sets, in which antibody residues in the antibody/gp120 interface were systematically mutated to alanine. The RMS error in the large set (55 total cases) of FEP tests as compared to these experiments, 0.68 kcal/mol, is near experimental accuracy, and it compares favorably with the results obtained from a simpler, empirical methodology. The correlation coefficient for the combined data set including residues with glycan contacts, R^2 = 0.49, should be sufficient to guide the choice of residues for antibody optimization projects, assuming that this level of accuracy can be realized in prospective prediction. More generally, these results are encouraging with regard to the possibility of using an FEP approach to calculate the magnitude of protein-protein binding affinities. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard
2011-01-01
Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided in two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
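A hedged sketch of the two-file access pattern described above (the element and attribute names used here are simplified placeholders, not the actual imzML controlled-vocabulary terms):

import struct
import xml.etree.ElementTree as ET

def read_mz_array(xml_path, binary_path, spectrum_index):
    """Sketch: look up offset/length metadata in the XML part,
    then read the corresponding array from the binary part."""
    root = ET.parse(xml_path).getroot()
    spectrum = root.findall("spectrum")[spectrum_index]     # placeholder element name
    offset = int(spectrum.get("externalOffset"))            # placeholder attribute names
    length = int(spectrum.get("externalArrayLength"))
    with open(binary_path, "rb") as handle:
        handle.seek(offset)
        return struct.unpack("<%dd" % length, handle.read(8 * length))  # little-endian float64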
Global Hopf bifurcation analysis on a BAM neural network with delays
NASA Astrophysics Data System (ADS)
Sun, Chengjun; Han, Maoan; Pang, Xiaoming
2007-01-01
A delayed differential equation that models a bidirectional associative memory (BAM) neural network with four neurons is considered. By using a global Hopf bifurcation theorem for FDE and Bendixson's criterion for high-dimensional ODE, a group of sufficient conditions for the system to have multiple periodic solutions is obtained when the sum of delays is sufficiently large.
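As a hedged sketch only (the specific four-neuron model studied is not reproduced in the abstract), BAM networks with delays are typically written in the form

\dot{x}_i(t) = -\mu_i x_i(t) + \sum_j c_{ji}\, f_j\big(y_j(t-\tau_2)\big), \qquad \dot{y}_j(t) = -\nu_j y_j(t) + \sum_i d_{ij}\, g_i\big(x_i(t-\tau_1)\big),

with the sum of delays \tau = \tau_1 + \tau_2 acting as the bifurcation parameter.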
NASA Astrophysics Data System (ADS)
Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele
2013-12-01
Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed shell unsaturated hydro-carbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting to explore in this paper the performances of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. At DFT, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviation from the computational reference of less than 1 kcal mol^-1. The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 measures, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms).
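The additivity assumption in step (3) corresponds to the usual composite estimate, written here in generic notation consistent with the abstract:

\Delta E_{AB}^{\mathrm{CCSD(T)/CBS}} \approx \Delta E_{AB}^{\mathrm{MP2/CBS}} + \Delta E_{\mathrm{CC\text{-}MP}}, \qquad \Delta E_{\mathrm{CC\text{-}MP}} = \Delta E_{AB}^{\mathrm{CCSD(T)}}(\text{aug-cc-pVTZ}) - \Delta E_{AB}^{\mathrm{MP2}}(\text{aug-cc-pVTZ}).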
Space Geodesy and the New Madrid Seismic Zone
NASA Astrophysics Data System (ADS)
Smalley, Robert; Ellis, Michael A.
2008-07-01
One of the most contentious issues related to earthquake hazards in the United States centers on the midcontinent and the origin, magnitudes, and likely recurrence intervals of the 1811-1812 New Madrid earthquakes that occurred there. The stakeholder groups in the debate (local and state governments, reinsurance companies, American businesses, and the scientific community) are similar to the stakeholder groups in regions more famous for large earthquakes. However, debate about New Madrid seismic hazard has been fiercer because of the lack of two fundamental components of seismic hazard estimation: an explanatory model for large, midplate earthquakes; and sufficient or sufficiently precise data about the causes, effects, and histories of such earthquakes.
Stability and stabilisation of a class of networked dynamic systems
NASA Astrophysics Data System (ADS)
Liu, H. B.; Wang, D. Q.
2018-04-01
We investigate the stability and stabilisation of a linear time-invariant networked heterogeneous system with arbitrarily connected subsystems. A new necessary and sufficient condition for stability, expressed as a linear matrix inequality (LMI), is derived, and a stabilisation procedure is developed on this basis. The obtained conditions efficiently exploit the block-diagonal structure of the system parameter matrices and the sparseness of the subsystem connection matrix. Moreover, a sufficient condition that depends only on each individual subsystem is also presented for the stabilisation of large-scale networked systems. Numerical simulations show that these conditions are computationally effective in the analysis and synthesis of a large-scale networked system.
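As a hedged numerical illustration of checking an LMI-type stability condition (this is a generic Lyapunov LMI for a known state matrix, not the paper's specific block-structured condition; the cvxpy package is assumed to be available):

import numpy as np
import cvxpy as cp

def lyapunov_lmi_feasible(A, eps=1e-6):
    """Return True if there exists P > 0 with A'P + PA < 0 (continuous-time stability)."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    return problem.status == cp.OPTIMAL

A = np.array([[-1.0, 0.5], [0.2, -2.0]])   # toy interconnected-subsystem matrix
print(lyapunov_lmi_feasible(A))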
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark, Chris; Arenberg, Jon
2016-01-01
Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.
Viscous and Thermal Effects on Hydrodynamic Instability in Liquid-Propellant Combustion
NASA Technical Reports Server (NTRS)
Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)
2000-01-01
A pulsating form of hydrodynamic instability has recently been shown to arise during the deflagration of liquid propellants in those parameter regimes where the pressure-dependent burning rate is characterized by a negative pressure sensitivity. This type of instability can coexist with the classical cellular, or Landau, form of hydrodynamic instability, with the occurrence of either dependent on whether the pressure sensitivity is sufficiently large or small in magnitude. For the inviscid problem, it has been shown that when the burning rate is realistically allowed to depend on temperature as well as pressure, sufficiently large values of the temperature sensitivity relative to the pressure sensitivity cause the pulsating form of hydrodynamic instability to become dominant. In that regime, steady, planar burning becomes intrinsically unstable to pulsating disturbances whose wavenumbers are sufficiently small. In the present work, this analysis is extended to the fully viscous case, where it is shown that although viscosity is stabilizing for intermediate and larger wavenumber perturbations, the intrinsic pulsating instability for small wavenumbers remains. Under these conditions, liquid-propellant combustion is predicted to be characterized by large unsteady cells along the liquid/gas interface.
NASA Astrophysics Data System (ADS)
Jiménez del Toro, Oscar; Atzori, Manfredo; Otálora, Sebastian; Andersson, Mats; Eurén, Kristian; Hedlund, Martin; Rönnquist, Peter; Müller, Henning
2017-03-01
The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated to the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the daily generated large amounts of pathology data. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures training them with patches extracted from automatically generated regions-of-interest rather than from manually segmented ones. Relevant parameters for training the deep learning model such as size and number of patches as well as the inclusion or not of data augmentation are compared between the tested deep learning architectures. 235 prostate tissue WSIs with their pathology report from the publicly available TCGA data set were used. An accuracy of 78% was obtained in a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks for big data sets and to guide the visual inspection of these images.
Thermophysical characteristics of the large main-belt asteroid (349) Dembowska
NASA Astrophysics Data System (ADS)
Yu, Liang Liang; Yang, Bin; Ji, Jianghui; Ip, Wing-Huen
2017-12-01
(349) Dembowska is a large, bright main-belt asteroid that has a fast rotation and an oblique spin axis. It might have experienced partial melting and differentiation. We constrain Dembowska's thermophysical properties, such as thermal inertia, roughness fraction, geometric albedo and effective diameter within 3σ uncertainty of Γ = 20^{+12}_{-7} J m^{-2} s^{-0.5} K^{-1}, f_r=0.25^{+0.60}_{-0.25}, p_v=0.309^{+0.026}_{-0.038} and D_eff=155.8^{+7.5}_{-6.2} km, by utilizing the advanced thermophysical model to analyse four sets of thermal infrared data obtained by the Infrared Astronomy Satellite (IRAS), AKARI, the Wide-field Infrared Survey Explorer (WISE) and the Subaru/Cooled Mid-Infrared Camera and Spectrometer (COMICS) at different epochs. In addition, by modelling the thermal light curve observed by WISE, we obtain the rotational phases of each data set. These rotationally resolved data do not reveal significant variations of thermal inertia and roughness across the surface, indicating that the surface of Dembowska should be covered by a dusty regolith layer with few rocks or boulders. Besides, the low thermal inertia of Dembowska shows no significant difference with other asteroids larger than 100 km, which indicates that the dynamical lives of these large asteroids are long enough to make their surfaces have sufficiently low thermal inertia. Furthermore, based on the derived surface thermophysical properties, as well as the known orbital and rotational parameters, we can simulate Dembowska's surface and subsurface temperatures throughout its orbital period. The surface temperature varies from ∼40 to ∼220 K, showing significant seasonal variation, whereas the subsurface temperature achieves equilibrium temperature about 120-160 K below a depth of 30-50 cm.
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is either 1) piecewise constant or 2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficient condition on the average dwell time is presented using a complete-type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well-characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete-type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying time-delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented, together with a comparison with the corresponding constant and piecewise-constant delay cases. Numerical simulations are performed to illustrate the theoretical results of this paper.
Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.
2017-03-27
A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematic representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with large numbers of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.
NASA Astrophysics Data System (ADS)
Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.
2017-06-01
A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematic representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with large numbers of grid elements, 3D systems, or systems with complex geometric configurations. In this work, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. The mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.
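A hedged sketch of the fitting step (the functional form and the data below are placeholders; the actual parametric relationships and the outputs of the hydrate property packages are not reproduced):

import numpy as np
from scipy.optimize import curve_fit

def log_p_eq(X, a0, a1, a2, a3):
    """Placeholder form: log equilibrium pressure as a function of temperature and CO2 fraction."""
    T, x_co2 = X
    return a0 + a1 / T + a2 * x_co2 + a3 * x_co2 / T

# synthetic stand-in for a dataset generated from an off-the-shelf property package
T = np.linspace(273.0, 290.0, 50)
x = np.linspace(0.0, 1.0, 50)
y = log_p_eq((T, x), 12.0, -2500.0, 0.8, -150.0) + np.random.normal(0, 0.01, 50)

params, _ = curve_fit(log_p_eq, (T, x), y, p0=[10.0, -2000.0, 1.0, -100.0])
print(params)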
Van Landeghem, Sofie; De Bodt, Stefanie; Drebert, Zuzanna J; Inzé, Dirk; Van de Peer, Yves
2013-03-01
Despite the availability of various data repositories for plant research, a wealth of information currently remains hidden within the biomolecular literature. Text mining provides the necessary means to retrieve these data through automated processing of texts. However, only recently has advanced text mining methodology been implemented with sufficient computational power to process texts at a large scale. In this study, we assess the potential of large-scale text mining for plant biology research in general and for network biology in particular using a state-of-the-art text mining system applied to all PubMed abstracts and PubMed Central full texts. We present extensive evaluation of the textual data for Arabidopsis thaliana, assessing the overall accuracy of this new resource for usage in plant network analyses. Furthermore, we combine text mining information with both protein-protein and regulatory interactions from experimental databases. Clusters of tightly connected genes are delineated from the resulting network, illustrating how such an integrative approach is essential to grasp the current knowledge available for Arabidopsis and to uncover gene information through guilt by association. All large-scale data sets, as well as the manually curated textual data, are made publicly available, hereby stimulating the application of text mining data in future plant biology studies.
A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data
Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.
2017-01-01
The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data. PMID:28638896
Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions
NASA Astrophysics Data System (ADS)
Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard
2017-12-01
Real time monitoring of engineering structures in case of an emergency of disaster requires collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a probable rescue action. One of the more significant evaluation methods of large sets of data, either collected during a specified interval of time or permanently, is the time series analysis. In this paper presented is a search algorithm for those time series elements which deviate from their values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows to take a variety of preventive actions. In the algorithm, the mathematical formulae used provide maximal sensitivity to detect even minimal changes in the object's behavior. The sensitivity analyses were conducted for the algorithm of moving average as well as for the Douglas-Peucker algorithm used in generalization of linear objects in GIS. In addition to determining the size of deviations from the average it was used the so-called Hausdorff distance. The carried out simulation and verification of laboratory survey data showed that the approach provides sufficient sensitivity for automatic real time analysis of large amount of data obtained from different and various sensors (total stations, leveling, camera, radar).
NASA Astrophysics Data System (ADS)
Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.
2011-11-01
We make use of wavelet transform to study the multi-scale, self-similar behavior and deviations thereof, in the stock prices of large companies, belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that the wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations starting with high frequency fluctuations. We make use of Db4 and Db6 basis sets to respectively isolate local linear and quadratic trends at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat tail non-Gaussian behavior, unstable periodic modulations, at finer scales, from which the characteristic k^-3 power law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
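A hedged sketch of isolating fluctuations at different scales with a Daubechies wavelet (PyWavelets assumed; only the decomposition step is shown, and the mapping of the Db4/Db6 labels above onto PyWavelets' "db" naming is an assumption):

import numpy as np
import pywt

prices = 100.0 * np.exp(np.cumsum(np.random.normal(0, 0.01, 4096)))   # stand-in price series
returns = np.diff(np.log(prices))

coeffs = pywt.wavedec(returns, "db4", level=6)   # [approximation, detail_6, ..., detail_1]
for level, detail in enumerate(reversed(coeffs[1:]), start=1):
    print("scale level", level, "fluctuation std", float(np.std(detail)))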
What Four Million Mappings Can Tell You about Two Hundred Ontologies
NASA Astrophysics Data System (ADS)
Ghazvinian, Amir; Noy, Natalya F.; Jonquet, Clement; Shah, Nigam; Musen, Mark A.
The field of biomedicine has embraced the Semantic Web probably more than any other field. As a result, there is a large number of biomedical ontologies covering overlapping areas of the field. We have developed BioPortal—an open community-based repository of biomedical ontologies. We analyzed ontologies and terminologies in BioPortal and the Unified Medical Language System (UMLS), creating more than 4 million mappings between concepts in these ontologies and terminologies based on the lexical similarity of concept names and synonyms. We then analyzed the mappings and what they tell us about the ontologies themselves, the structure of the ontology repository, and the ways in which the mappings can help in the process of ontology design and evaluation. For example, we can use the mappings to guide users who are new to a field to the most pertinent ontologies in that field, to identify areas of the domain that are not covered sufficiently by the ontologies in the repository, and to identify which ontologies will serve well as background knowledge in domain-specific tools. While we used a specific (but large) ontology repository for the study, we believe that the lessons we learned about the value of a large-scale set of mappings to ontology users and developers are general and apply in many other domains.
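A hedged sketch of the kind of lexical matching described above (the normalization rule and data structures are illustrative only, not BioPortal's actual implementation):

def normalize(term):
    return "".join(ch for ch in term.lower() if ch.isalnum() or ch.isspace()).strip()

def lexical_mappings(onto_a, onto_b):
    """onto_a, onto_b: dicts mapping concept id -> list of preferred names and synonyms.
    Returns (id_a, id_b) pairs whose normalized labels coincide."""
    index_b = {}
    for concept, labels in onto_b.items():
        for label in labels:
            index_b.setdefault(normalize(label), set()).add(concept)
    mappings = set()
    for concept, labels in onto_a.items():
        for label in labels:
            for match in index_b.get(normalize(label), ()):
                mappings.add((concept, match))
    return mappings

print(lexical_mappings({"A:1": ["Myocardial infarction", "Heart attack"]},
                       {"B:7": ["heart attack"], "B:9": ["stroke"]}))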
Hou, Tingjun; Xu, Xiaojie
2002-12-01
In this study, the relationships between the brain-blood concentration ratio of 96 structurally diverse compounds with a large number of structurally derived descriptors were investigated. The linear models were based on molecular descriptors that can be calculated for any compound simply from a knowledge of its molecular structure. The linear correlation coefficients of the models were optimized by genetic algorithms (GAs), and the descriptors used in the linear models were automatically selected from 27 structurally derived descriptors. The GA optimizations resulted in a group of linear models with three or four molecular descriptors with good statistical significance. The change of descriptor use as the evolution proceeds demonstrates that the octane/water partition coefficient and the partial negative solvent-accessible surface area multiplied by the negative charge are crucial to brain-blood barrier permeability. Moreover, we found that the predictions using multiple QSPR models from GA optimization gave quite good results in spite of the diversity of structures, which was better than the predictions using the best single model. The predictions for the two external sets with 37 diverse compounds using multiple QSPR models indicate that the best linear models with four descriptors are sufficiently effective for predictive use. Considering the ease of computation of the descriptors, the linear models may be used as general utilities to screen the blood-brain barrier partitioning of drugs in a high-throughput fashion.
Identifying personal microbiomes using metagenomic codes
Franzosa, Eric A.; Huang, Katherine; Meadow, James F.; Gevers, Dirk; Lemon, Katherine P.; Bohannan, Brendan J. M.; Huttenhower, Curtis
2015-01-01
Community composition within the human microbiome varies across individuals, but it remains unknown if this variation is sufficient to uniquely identify individuals within large populations or stable enough to identify them over time. We investigated this by developing a hitting set-based coding algorithm and applying it to the Human Microbiome Project population. Our approach defined body site-specific metagenomic codes: sets of microbial taxa or genes prioritized to uniquely and stably identify individuals. Codes capturing strain variation in clade-specific marker genes were able to distinguish among 100s of individuals at an initial sampling time point. In comparisons with follow-up samples collected 30–300 d later, ∼30% of individuals could still be uniquely pinpointed using metagenomic codes from a typical body site; coincidental (false positive) matches were rare. Codes based on the gut microbiome were exceptionally stable and pinpointed >80% of individuals. The failure of a code to match its owner at a later time point was largely explained by the loss of specific microbial strains (at current limits of detection) and was only weakly associated with the length of the sampling interval. In addition to highlighting patterns of temporal variation in the ecology of the human microbiome, this work demonstrates the feasibility of microbiome-based identifiability—a result with important ethical implications for microbiome study design. The datasets and code used in this work are available for download from huttenhower.sph.harvard.edu/idability. PMID:25964341
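A hedged, greedy sketch of the identifying-code idea: pick features carried by the target individual that are rare among others, until no other individual carries them all (an illustration only; the authors' published algorithm is available at the URL above):

def build_code(target_features, others, max_size=7):
    """target_features: set of taxa/markers present in the target individual.
    others: dict mapping person -> set of features. Returns a small feature set
    carried by the target but, jointly, by no other person (or None if not found)."""
    code, confounders = set(), dict(others)
    while confounders and len(code) < max_size:
        # pick the target feature shared by the fewest remaining confounders
        best = min(target_features - code,
                   key=lambda feat: sum(feat in feats for feats in confounders.values()),
                   default=None)
        if best is None:
            return None
        code.add(best)
        confounders = {p: f for p, f in confounders.items() if best in f}
    return code if not confounders else None

print(build_code({"s1", "s2", "s3"}, {"bob": {"s1", "s4"}, "eve": {"s2", "s3"}}))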
MR-MOOSE: an advanced SED-fitting tool for heterogeneous multi-wavelength data sets
NASA Astrophysics Data System (ADS)
Drouart, G.; Falkendal, T.
2018-07-01
We present the public release of MR-MOOSE, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous data set (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, MR-MOOSE handles upper limits during the fitting process in a continuous way allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength data sets with fully customisable filter/model databases. The complete control of the user is one advantage, which avoids the traditional problems related to the 'black box' effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of Python and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially generated data sets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA, and VLA data) in the context of extragalactic SED fitting makes MR-MOOSE a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.
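One common way to let upper limits enter a likelihood continuously, so that models become gradually less probable as they approach a limit, is an error-function term (a hedged sketch; MR-MOOSE's exact treatment may differ in detail):

import numpy as np
from scipy.special import erf

def ln_likelihood(model_flux, obs_flux, obs_err, is_upper_limit):
    """Gaussian terms for detections; an erf-based term for upper limits."""
    model_flux, obs_flux, obs_err = map(np.asarray, (model_flux, obs_flux, obs_err))
    upper = np.asarray(is_upper_limit, dtype=bool)
    det = ~upper
    lnl = -0.5 * np.sum(((obs_flux[det] - model_flux[det]) / obs_err[det]) ** 2)
    # probability that the true flux lies below the quoted limit given the model
    frac = 0.5 * (1.0 + erf((obs_flux[upper] - model_flux[upper]) / (np.sqrt(2.0) * obs_err[upper])))
    return lnl + np.sum(np.log(np.clip(frac, 1e-300, 1.0)))

print(ln_likelihood([1.0, 2.0], [1.1, 0.5], [0.1, 0.2], [False, True]))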
Setchell, Joanna M; Abbott, Kristin M; Gonzalez, Jean-Paul; Knapp, Leslie A
2013-10-01
A large body of evidence suggests that major histocompatibility complex (MHC) genotype influences mate choice. However, few studies have investigated MHC-mediated post-copulatory mate choice under natural, or even semi-natural, conditions. We set out to explore this question in a large semi-free-ranging population of mandrills (Mandrillus sphinx) using MHC-DRB genotypes for 127 parent-offspring triads. First, we showed that offspring MHC heterozygosity correlates positively with parental MHC dissimilarity, suggesting that mating among MHC-dissimilar mates is effective in increasing offspring MHC diversity. Second, we compared the haplotypes of the parental dyad with those of the offspring to test whether post-copulatory sexual selection favored offspring with two different MHC haplotypes, more diverse gamete combinations, or greater within-haplotype diversity. Limited statistical power meant that we could only detect medium or large effect sizes. Nevertheless, we found no evidence for selection for heterozygous offspring when parents share a haplotype (large effect size), genetic dissimilarity between parental haplotypes (we could detect an odds ratio of ≥1.86), or within-haplotype diversity (medium-large effect). These findings suggest that comparing parental and offspring haplotypes may be a useful approach to test for post-copulatory selection when matings cannot be observed, as is the case in many study systems. However, it will be extremely difficult to determine conclusively whether post-copulatory selection mechanisms for MHC genotype exist, particularly if the effect sizes are small, due to the difficulty in obtaining a sufficiently large sample. © 2013 Wiley Periodicals, Inc.
DEVELOPMENT OF STANDARDIZED LARGE RIVER BIOASSESSMENT PROTOCOLS (LR-BP) FOR FISH ASSEMBLAGES
We conducted research comparing several methods currently in use for the bioassessment and monitoring of fish and benthic macroinvertebrate assemblages for large rivers. Fish data demonstrate that electrofishing 1000 m of shoreline is sufficient for bioassessments on boatable ri...
Subalgebras of BCK/BCI-Algebras Based on Cubic Soft Sets
Muhiuddin, G.; Jun, Young Bae
2014-01-01
Operations of cubic soft sets, including the “AND” operation and the “OR” operation based on P-orders and R-orders, are introduced, and some related properties are investigated. An example is presented to show that the R-union of two internal cubic soft sets might not be internal. A sufficient condition is provided which ensures that the R-union of two internal cubic soft sets is also internal. Moreover, some properties of cubic soft subalgebras of BCK/BCI-algebras based on a given parameter are discussed. PMID:24895652
Ballert, C; Oberhauser, C; Biering-Sørensen, F; Stucki, G; Cieza, A
2012-10-01
Psychometric study analyzing the data of a cross-sectional, multicentric study with 1048 persons with spinal cord injury (SCI). To shed light on how to apply the Brief Core Sets for SCI of the International Classification of Functioning, Disability and Health (ICF) by determining whether the ICF categories contained in the Core Sets capture differences in overall health. Lasso regression was applied using overall health, rated by the patients and health professionals, as dependent variables and the ICF categories of the Comprehensive ICF Core Sets for SCI as independent variables. The ICF categories that best capture differences in overall health refer to areas of life such as self-care, relationships, economic self-sufficiency and community life. Only about 25% of the ICF categories of the Brief ICF Core Sets for the early post-acute and for long-term contexts were selected in the Lasso regression and differentiate, therefore, among levels of overall health. ICF categories such as d570 Looking after one's health, d870 Economic self-sufficiency, d620 Acquisition of goods and services and d910 Community life, which capture changes in overall health in patients with SCI, should be considered in addition to those of the Brief ICF Core Sets in clinical and epidemiological studies in persons with SCI.
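As a hedged illustration of the analysis described (the array shapes, qualifier coding, and variable names below are hypothetical placeholders, not the study's data), a Lasso selection of ICF categories against an overall-health rating could be set up as follows.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# X: one column per ICF category of the Comprehensive Core Set (d570, d620, d870, d910, ...),
#    coded as qualifier levels; y: overall-health rating. Both are random placeholders here.
rng = np.random.default_rng(1)
X = rng.integers(0, 5, size=(1048, 120)).astype(float)
y = rng.uniform(0, 10, size=1048)

model = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
selected = np.flatnonzero(model.coef_ != 0)   # categories that capture differences in overall health
print(f"{selected.size} of {X.shape[1]} categories retained")
```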
Evaluation of a seismic quiescence pattern in southeastern sicily
NASA Astrophysics Data System (ADS)
Mulargia, F.; Broccio, F.; Achilli, V.; Baldi, P.
1985-07-01
Southeastern Sicily experienced very peculiar seismic activity in historic times, with a long series of ruinous earthquakes. The last large event, with magnitude probably in excess of 7.5, occurred on Jan. 11, 1693, totally destroying the city of Catania and killing 60,000 people. Only a few moderate events have been reported since then, and a seismic gap hypothesis has been proposed on this basis. A close scrutiny of the available data further shows that all significant seismic activity ceased after the year 1850, suggesting one of the largest quiescence patterns ever encountered. This is examined together with the complex tectonic setting of the region, characterized by a wrenching mechanism with the most significant seismicity located in its northern graben structure. An attempt to ascertain the imminence and the size of a future earthquake through commonly accepted empirical relations based on the size and duration of the quiescence pattern did not provide any feasible result. A precision levelling survey which we recently completed yielded a relative subsidence of ~3 mm/yr, consistent with aseismic slip on the northern graben structure at a rate of ~15 mm/yr. Comparing these results with sedimentological and tidal data suggests that the area is undergoing an accelerated deformation process; this interpretation is further supported by Rikitake's ultimate strain statistics. While the imminence of a damaging (M = 5.4) event is strongly favoured by Weibull statistics applied to the time series of occurrence of large events, the accumulated strain does not appear sufficient for a large earthquake (M ≳ 7.0). Within the limits of reliability of present semi-empirical approaches, we conclude that the available evidence is consistent with the occurrence of a moderate-to-large (M ≈ 6.0) event in the near future. Several questions regarding the application of simple models to real (and complex) tectonic settings nevertheless remain unanswered.
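The Weibull argument invoked above reduces to a conditional-probability calculation on inter-event times: given how long the region has been quiescent, how likely is an event within the next few decades? A generic sketch follows; the shape and scale parameters and the elapsed time are placeholders, not values fitted in the paper.

```python
import numpy as np

def conditional_event_probability(t_elapsed, dt, shape, scale):
    """P(event within the next dt years | no event for t_elapsed years), Weibull inter-event times."""
    survival = lambda t: np.exp(-(t / scale) ** shape)
    return 1.0 - survival(t_elapsed + dt) / survival(t_elapsed)

# Illustrative numbers only (placeholder parameters):
print(conditional_event_probability(t_elapsed=290.0, dt=30.0, shape=2.0, scale=250.0))
```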
Dimensionality of Data Matrices with Applications to Gene Expression Profiles
ERIC Educational Resources Information Center
Feng, Xingdong
2009-01-01
Probe-level microarray data are usually stored in matrices. For a given probe set (gene), each row of the matrix corresponds to an array, and each column corresponds to a probe. Often, people summarize each array by the gene expression level. Is one number sufficient to summarize a whole probe set for a specific gene in an array?…
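The question of whether one number per array is sufficient is, in effect, a question about the effective rank of the probe-by-array matrix. A minimal way to look at it (illustrative only, not the dissertation's method) is to check how much of the variation a rank-1 summary captures.

```python
import numpy as np

def rank_one_fraction(probe_matrix):
    """Fraction of total variation captured by a rank-1 (one-number-per-array) summary.

    probe_matrix: arrays x probes intensity matrix for a single probe set (gene).
    """
    centered = probe_matrix - probe_matrix.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

# Values close to 1 suggest a single expression summary per array loses little information.
```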
Case Studies Nested in Fuzzy-Set QCA on Sufficiency: Formalizing Case Selection and Causal Inference
ERIC Educational Resources Information Center
Schneider, Carsten Q.; Rohlfing, Ingo
2016-01-01
Qualitative Comparative Analysis (QCA) is a method for cross-case analyses that works best when complemented with follow-up case studies focusing on the causal quality of the solution and its constitutive terms, the underlying causal mechanisms, and potentially omitted conditions. The anchorage of QCA in set theory demands criteria for follow-up…
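For readers unfamiliar with the set-theoretic machinery, the standard fuzzy-set measure on which such sufficiency claims (and the follow-up case selection) rest can be computed as below; this is the generic formula from the QCA literature, not code from the article itself.

```python
import numpy as np

def sufficiency_consistency(x, y):
    """Consistency of 'X is sufficient for Y': sum(min(x, y)) / sum(x), memberships in [0, 1]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / x.sum()

# Cases with x_i > y_i (membership in the condition exceeding membership in the outcome)
# contradict sufficiency and are natural candidates for follow-up case studies.
```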
ERIC Educational Resources Information Center
Schmitt, Ara J.; Hale, Andrea D.; McCallum, Elizabeth; Mauck, Brittany
2011-01-01
Word reading accommodations are commonly applied in the general education setting in an attempt to improve student comprehension and learning of curriculum content. This study examined the effects of listening-while-reading (LWR) and silent reading (SR) using text-to-speech assistive technology on the comprehension of 25 middle-school remedial…
A Generalization of the Euler-Fermat Theorem
ERIC Educational Resources Information Center
Harger, Robert T.; Harvey, Melinda E.
2003-01-01
This note considers the problem of determining, for fixed k and m, all values of r, 0 < r < φ(m), such that k^(φ(m)+1) ≡ k^r (mod m). More generally, if k, m and c are given, necessary and sufficient conditions are given for k^c ≡ k^…
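A quick numerical illustration of the relation in question (a brute-force sketch over r, not the note's necessary-and-sufficient conditions):

```python
from math import gcd

def phi(m):
    """Euler's totient function by direct count (adequate for small m)."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def matching_exponents(k, m):
    """All r with 0 < r < phi(m) such that k**(phi(m) + 1) is congruent to k**r (mod m)."""
    target = pow(k, phi(m) + 1, m)
    return [r for r in range(1, phi(m)) if pow(k, r, m) == target]

# Example with k and m not coprime, where the classical Euler-Fermat theorem does not apply directly:
print(matching_exponents(6, 20))
```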
On 2- and 3-person games on polyhedral sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belenky, A.S.
1994-12-31
Special classes of 3-person games are considered where the sets of players' allowable strategies are polyhedral and the payoff functions are defined as maxima, on a polyhedral set, of certain sums of linear and bilinear functions. Necessary and sufficient conditions, which are easy to verify, for a Nash point in these games are established, and a finite method, based on these conditions, for calculating Nash points is proposed. It is shown that the game serves as a generalization of a model for a problem of waste products evacuation from a territory. The method makes it possible to reduce calculation of a Nash point to solving some linear and quadratic programming problems formulated on the basis of the original 3-person game. A class of 2-person games on connected polyhedral sets is considered, with the payoff function being a sum of two linear functions and one bilinear function. Necessary and sufficient conditions are established for the min-max, the max-min, and for a certain equilibrium. It is shown that the corresponding points can be calculated from auxiliary linear programming problems formulated on the basis of the master game.
A research agenda for a people-centred approach to energy access in the urbanizing global south
NASA Astrophysics Data System (ADS)
Broto, Vanesa Castán; Stevens, Lucy; Ackom, Emmanuel; Tomei, Julia; Parikh, Priti; Bisaga, Iwona; To, Long Seng; Kirshner, Joshua; Mulugetta, Yacob
2017-10-01
Energy access is typically viewed as a problem for rural areas, but people living in urban settings also face energy challenges that have not received sufficient attention. A revised agenda in research and practice that puts the user and local planning complexities centre stage is needed to change the way we look at energy access in urban areas, to understand the implications of the concentration of vulnerable people in slums and to identify opportunities for planned management and innovation that can deliver urban energy transitions while leaving no one behind. Here, we propose a research agenda focused on three key issues: understanding the needs of urban energy users; enabling the use of context-specific, disaggregated data; and engaging with effective modes of energy and urban governance. This agenda requires interdisciplinary scholarship across the social and physical sciences to support local action and deliver large-scale, inclusive transformations.
Relaxing the cosmological constant: a proof of concept
NASA Astrophysics Data System (ADS)
Alberte, Lasma; Creminelli, Paolo; Khmelnitsky, Andrei; Pirtskhalava, David; Trincherini, Enrico
2016-12-01
We propose a technically natural scenario whereby an initially large cosmological constant (c.c.) is relaxed down to the observed value due to the dynamics of a scalar evolving on a very shallow potential. The model crucially relies on a sector that violates the null energy condition (NEC) and gets activated only when the Hubble rate becomes sufficiently small — of the order of the present one. As a result of NEC violation, this low-energy universe evolves into inflation, followed by reheating and the standard Big Bang cosmology. The symmetries of the theory force the c.c. to be the same before and after the NEC-violating phase, so that a late-time observer sees an effective c.c. of the correct magnitude. Importantly, our model allows neither for eternal inflation nor for a set of possible values of dark energy, the latter fixed by the parameters of the theory.
The Black Hole Safari: Big Game Hunting in 30+ Massive Galaxies
NASA Astrophysics Data System (ADS)
McConnell, Nicholas J.; Ma, Chung-Pei; Janish, Ryan; Gebhardt, Karl; Lauer, Tod R.; Graham, James R.
2015-01-01
The current census of the most massive black holes in the local universe turns up an odd variety of galaxy hosts: central galaxies in rich clusters, second- or lower-ranked cluster members, and compact relics from the early universe. More extensive campaigns are required to explore the number density and environmental distribution of these monsters. Over the past three years we have collected a large set of stellar kinematic data with sufficient resolution to detect the gravitational signatures of supermassive black holes with M_BH > 10^9 M_Sun. This Black Hole Safari targets enormous galaxies at the centers of nearby galaxy clusters, as well as their similarly luminous counterparts in weaker galaxy groups. To date we have observed more than 30 early-type galaxies with integral-field spectrographs on the Keck, Gemini North, and Gemini South telescopes. Here I present preliminary stellar kinematics from 10 objects.
Forward design of a complex enzyme cascade reaction
Hold, Christoph; Billerbeck, Sonja; Panke, Sven
2016-01-01
Enzymatic reaction networks are unique in that one can operate a large number of reactions under the same set of conditions concomitantly in one pot, but the nonlinear kinetics of the enzymes and the resulting system complexity have so far defeated rational design processes for the construction of such complex cascade reactions. Here we demonstrate the forward design of an in vitro 10-membered system using enzymes from highly regulated biological processes such as glycolysis. For this, we adapt the characterization of the biochemical system to the needs of classical engineering systems theory: we combine online mass spectrometry and continuous system operation to apply standard system theory input functions and to use the detailed dynamic system responses to parameterize a model of sufficient quality for forward design. This allows the facile optimization of a 10-enzyme cascade reaction for fine chemical production purposes. PMID:27677244
Destabilizing turbulence in pipe flow
NASA Astrophysics Data System (ADS)
Kühnen, Jakob; Song, Baofang; Scarselli, Davide; Budanur, Nazmi Burak; Riedl, Michael; Willis, Ashley P.; Avila, Marc; Hof, Björn
2018-04-01
Turbulence is the major cause of friction losses in transport processes and it is responsible for a drastic drag increase in flows over bounding surfaces. While much effort is invested into developing ways to control and reduce turbulence intensities [1-3], so far no methods exist to altogether eliminate turbulence if velocities are sufficiently large. We demonstrate for pipe flow that appropriate distortions to the velocity profile lead to a complete collapse of turbulence and subsequently friction losses are reduced by as much as 90%. Counterintuitively, the return to laminar motion is accomplished by initially increasing turbulence intensities or by transiently amplifying wall shear. Since neither the Reynolds number nor the shear stresses decrease (the latter often increase), these measures are not indicative of turbulence collapse. Instead, an amplification mechanism [4,5] measuring the interaction between eddies and the mean shear is found to set a threshold below which turbulence is suppressed beyond recovery.
Random sex determination: When developmental noise tips the sex balance.
Perrin, Nicolas
2016-12-01
Sex-determining factors are usually assumed to be either genetic or environmental. The present paper aims at drawing attention to the potential contribution of developmental noise, an important but often-neglected component of phenotypic variance. Mutual inhibitions between male and female pathways make sex a bistable equilibrium, such that random fluctuations in the expression of genes at the top of the cascade are sufficient to drive individual development toward one or the other stable state. Evolutionary modeling shows that stochastic sex determinants should resist elimination by genetic or environmental sex determinants under ecologically meaningful settings. On the empirical side, many sex-determination systems traditionally considered as environmental or polygenic actually provide evidence for large components of stochasticity. In reviewing the field, I argue that sex-determination systems should be considered within a three-ends continuum, rather than the classical two-ends continuum. © 2016 WILEY Periodicals, Inc.
RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT.
Carlis, John; Bruso, Kelsey
2012-03-01
Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the K predicted by RSQRT and the K predicted by the Bayesian information criterion (BIC) are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing.
Complex disease and phenotype mapping in the domestic dog
Hayward, Jessica J.; Castelhano, Marta G.; Oliveira, Kyle C.; Corey, Elizabeth; Balkman, Cheryl; Baxter, Tara L.; Casal, Margret L.; Center, Sharon A.; Fang, Meiying; Garrison, Susan J.; Kalla, Sara E.; Korniliev, Pavel; Kotlikoff, Michael I.; Moise, N. S.; Shannon, Laura M.; Simpson, Kenneth W.; Sutter, Nathan B.; Todhunter, Rory J.; Boyko, Adam R.
2016-01-01
The domestic dog is becoming an increasingly valuable model species in medical genetics, showing particular promise to advance our understanding of cancer and orthopaedic disease. Here we undertake the largest canine genome-wide association study to date, with a panel of over 4,200 dogs genotyped at 180,000 markers, to accelerate mapping efforts. For complex diseases, we identify loci significantly associated with hip dysplasia, elbow dysplasia, idiopathic epilepsy, lymphoma, mast cell tumour and granulomatous colitis; for morphological traits, we report three novel quantitative trait loci that influence body size and one that influences fur length and shedding. Using simulation studies, we show that modestly larger sample sizes and denser marker sets will be sufficient to identify most moderate- to large-effect complex disease loci. This proposed design will enable efficient mapping of canine complex diseases, most of which have human homologues, using far fewer samples than required in human studies. PMID:26795439
Vibration characteristics of a steadily rotating slender ring
NASA Technical Reports Server (NTRS)
Lallman, F. J.
1980-01-01
Partial differential equations are derived to describe the structural vibrations of a uniform homogeneous ring which is very flexible because the radius is very large compared with the cross-sectional dimensions. Elementary beam theory is used and small deflections are assumed in the derivation. Four sets of structural modes are examined: bending and compression modes in the plane of the ring; bending modes perpendicular to the plane of the ring; and twisting modes about the centroid of the ring cross section. Spatial and temporal characteristics of these modes, presented in terms of vibration frequencies and ratios between vibration amplitudes, are demonstrated in several figures. Given a sufficiently high rotational rate, the dynamics of the ring approach those of a vibrating string. In this case, the velocity of a traveling wave in the material of the ring approaches the velocity of the material relative to inertial space, resulting in structural modes which are almost stationary in space.
RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT
Bruso, Kelsey
2012-01-01
Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the K predicted by RSQRT and the K predicted by the Bayesian information criterion (BIC) are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing. PMID:22773923
Gender and the transmission of civic engagement: assessing the influences on youth civic activity.
Matthews, Todd L; Hempel, Lynn M; Howell, Frank M
2010-01-01
The study of civic activity has become a central focus for many social scientists over the past decade, generating considerable research and debate. Previous studies have largely overlooked the role of youth socialization into civic life, most notably in the settings of home and school. Further, differences along gender lines in civic capacity have not been given sufficient attention in past studies. This study adds to the literature by examining the potential pathways in the development of youth civic activity and potential, utilizing both gender-neutral and gender-specific structural equation modeling of data from the 1996 National Household Education Survey. Results indicate that involvement by parents in their child's schooling plays a crucial, mediating role in the relationship between adult and youth civic activity. Gender differences are minimal; thus adult school involvement is crucial for transmitting civic culture from parents to both female and male youth.
An automated data management/analysis system for space shuttle orbiter tiles. [stress analysis
NASA Technical Reports Server (NTRS)
Giles, G. L.; Ballas, M.
1982-01-01
An engineering data management system was combined with a nonlinear stress analysis program to provide a capability for analyzing a large number of tiles on the space shuttle orbiter. Tile geometry data and all data necessary to define the tile loads environment are accessed automatically as needed for the analysis of a particular tile or a set of tiles. User documentation provided includes: (1) description of computer programs and data files contained in the system; (2) definitions of all engineering data stored in the data base; (3) characteristics of the tile analytical model; (4) instructions for preparation of user input; and (5) a sample problem to illustrate use of the system. Descriptions of the data, computer programs, and analytical models of the tile are sufficiently detailed to guide extension of the system to include additional zones of tiles and/or additional types of analyses.
Inservice Training of Primary Teachers Through Interactive Video Technology: An Indian Experience
NASA Astrophysics Data System (ADS)
Maheshwari, A. N.; Raina, V. K.
1998-01-01
India has yet to achieve elementary education for all children. Among the centrally sponsored initiatives to improve education are Operation Blackboard, to provide sufficient teachers and buildings, Minimum Levels of Learning, which set achievement targets, and the Special Orientation Programme for Primary School Teachers (SOPT). This article focuses on the last of these and describes the new technology used to train teachers so that the losses in transmission inherent in the cascade model are avoided. Interactive Video Technology involving the Indira Gandhi Open University and the Indian Space Research Organisation was used experimentally in seven-day training courses for primary school teachers in 20 centres in Karnataka State, providing one-way video transmissions and telephone feedback to experts from the centres. The responses from teachers and their trainers indicate considerable potential for the exploitation of new technology where large numbers of teachers require training.
NASA Astrophysics Data System (ADS)
Baxter, J. Erik; Winstanley, Elizabeth
2016-02-01
We investigate the stability of spherically symmetric, purely magnetic, soliton and black hole solutions of four-dimensional 𝔰𝔲(N) Einstein-Yang-Mills theory with a negative cosmological constant Λ. These solutions are described by N - 1 magnetic gauge field functions ωj. We consider linear, spherically symmetric, perturbations of these solutions. The perturbations decouple into two sectors, known as the sphaleronic and gravitational sectors. For any N, there are no instabilities in the sphaleronic sector if all the magnetic gauge field functions ωj have no zeros and satisfy a set of N - 1 inequalities. In the gravitational sector, we prove that there are solutions which have no instabilities in a neighbourhood of stable embedded 𝔰𝔲(2) solutions, provided the magnitude of the cosmological constant |Λ| is sufficiently large.
A vibration-based health monitoring program for a large and seismically vulnerable masonry dome
NASA Astrophysics Data System (ADS)
Pecorelli, M. L.; Ceravolo, R.; De Lucia, G.; Epicoco, R.
2017-05-01
Vibration-based health monitoring of monumental structures must rely on efficient and, as far as possible, automatic modal analysis procedures. Relatively low excitation energy provided by traffic, wind and other sources is usually sufficient to detect structural changes, as those produced by earthquakes and extreme events. Above all, in-operation modal analysis is a non-invasive diagnostic technique that can support optimal strategies for the preservation of architectural heritage, especially if complemented by model-driven procedures. In this paper, the preliminary steps towards a fully automated vibration-based monitoring of the world’s largest masonry oval dome (internal axes of 37.23 by 24.89 m) are presented. More specifically, the paper reports on signal treatment operations conducted to set up the permanent dynamic monitoring system of the dome and to realise a robust automatic identification procedure. Preliminary considerations on the effects of temperature on dynamic parameters are finally reported.
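A minimal sketch of the kind of step such a system automates is shown below: picking candidate natural frequencies from the power spectral density of an ambient-vibration record. This is illustrative only; the monitoring system described relies on a more robust in-operation modal analysis chain, and the sampling rate and thresholds here are placeholders.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def candidate_frequencies(accel, fs, n_modes=5):
    """Return the most prominent spectral peaks of an acceleration record as frequency candidates."""
    freqs, psd = welch(accel, fs=fs, nperseg=4096)
    peaks, props = find_peaks(psd, prominence=0.05 * psd.max())
    strongest = np.argsort(props["prominences"])[::-1][:n_modes]
    return np.sort(freqs[peaks[strongest]])
```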
Medicine, material science and security: the versatility of the coded-aperture approach.
Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A
2014-03-06
The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.
Reflections on governance models for the clinical translation of stem cells.
Sugarman, Jeremy
2010-01-01
Governance models for the oversight of human embryonic stem cell research have been proposed which mirror in large part familiar oversight mechanisms for research with human subjects and non-human animals. While such models are in principle readily endorsable, there is a set of concerns related to their implementation--such as ensuring that an elaborated informed consent process and long-term monitoring of research subjects are tenable--which suggest areas where gathering data may facilitate more appropriate oversight. In addition, it is unclear whether new governance models based at individual institutions are sufficient to address the ethical issues inherent to this research. Regardless, some of the concerns that have arisen in considering the appropriate governance of stem cell research, particularly the important translational pathway of innovation in contrast to staged research, transparency and publication, and social justice, may be useful in science and translational research more broadly.
Generalized Teleportation and Entanglement Recycling
NASA Astrophysics Data System (ADS)
Strelchuk, Sergii; Horodecki, Michał; Oppenheim, Jonathan
2013-01-01
We introduce new teleportation protocols which are generalizations of the original teleportation protocols that use the Pauli group and the port-based teleportation protocols, introduced by Hiroshima and Ishizaka, that use the symmetric permutation group. We derive sufficient conditions for a set of operations, which in general need not form a group, to give rise to a teleportation protocol and provide examples of such schemes. This generalization leads to protocols with novel properties and is needed to push forward new schemes of computation based on them. Port-based teleportation protocols and our generalizations use a large resource state consisting of N singlets to teleport only a single qubit state reliably. We provide two distinct protocols which recycle the resource state to teleport multiple states with error linearly increasing with their number. The first protocol consists of sequentially teleporting qubit states, and the second teleports them in bulk.
Generalized teleportation and entanglement recycling.
Strelchuk, Sergii; Horodecki, Michał; Oppenheim, Jonathan
2013-01-04
We introduce new teleportation protocols which are generalizations of the original teleportation protocols that use the Pauli group and the port-based teleportation protocols, introduced by Hiroshima and Ishizaka, that use the symmetric permutation group. We derive sufficient conditions for a set of operations, which in general need not form a group, to give rise to a teleportation protocol and provide examples of such schemes. This generalization leads to protocols with novel properties and is needed to push forward new schemes of computation based on them. Port-based teleportation protocols and our generalizations use a large resource state consisting of N singlets to teleport only a single qubit state reliably. We provide two distinct protocols which recycle the resource state to teleport multiple states with error linearly increasing with their number. The first protocol consists of sequentially teleporting qubit states, and the second teleports them in bulk.
NASA Astrophysics Data System (ADS)
Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.
We have assembled a cluster of Intel-Pentium based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz to dual Pentium-III 450 MHz CPU machines. Although this will be sufficient for our "embarrassingly parallelizable problem", it may present some challenges for as yet unplanned future use. In addition, the cluster was used to construct a MIRIAD benchmark, and compared to equivalent Ultra-Sparc based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50 GB of disk space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.
Kuorikoski, Jaakko; Marchionni, Caterina
2014-12-01
We examine the diversity of strategies of modelling networks in (micro) economics and (analytical) sociology. Field-specific conceptions of what explaining (with) networks amounts to or systematic preference for certain kinds of explanatory factors are not sufficient to account for differences in modelling methodologies. We argue that network models in both sociology and economics are abstract models of network mechanisms and that differences in their modelling strategies derive to a large extent from field-specific conceptions of the way in which a good model should be a general one. Whereas the economics models aim at unification, the sociological models aim at a set of mechanism schemas that are extrapolatable to the extent that the underlying psychological mechanisms are general. These conceptions of generality induce specific biases in mechanistic explanation and are related to different views of when knowledge from different fields should be seen as relevant.
NASA Technical Reports Server (NTRS)
1991-01-01
James Antaki and a group of researchers from the University of Pittsburgh School of Medicine used many elements of the Technology Utilization Program while looking for a way to visualize and track material points within the heart muscle. What they needed were tiny artificial "eggs" containing copper sulfate solution, small enough (about 2 mm in diameter) that they would not injure the heart, and large enough to be seen in Magnetic Resonance Imaging (MRI) images; they also had to be biocompatible and tough enough to withstand the beating of the muscle. The group could neither make nor buy sufficient containers. After Antaki read an article on microspheres in NASA Tech Briefs and a complete set of reports on microencapsulation from the Jet Propulsion Laboratory (JPL), JPL put him in touch with Dr. Taylor Wang of Vanderbilt University, who helped construct the myocardial markers. The research is expected to lead to improved understanding of how the heart works and what takes place when it fails.
Chouard, C H
2001-07-01
Noise is responsible for cochlear and general damage. Hearing loss and tinnitus greatly depend on sound intensity and duration. Short-duration sounds of sufficient intensity (gunshots or explosions) are not described here because they are not commonly encountered in our normal urban environment. Sound levels of less than 75 dB(A) are unlikely to cause permanent hearing loss, while sound levels of about 85 dB(A) with exposures of 8 h per day will produce permanent hearing loss after many years. Popular and largely amplified music is today one of the most dangerous causes of noise-induced hearing loss. The intensity of noises (airport, highway) responsible for stress and general consequences (cardiovascular) is generally lower. Individual noise sensitivity depends on several factors. Strategies to prevent damage from sound exposure should include the use of individual hearing protection devices, education programs beginning with school-age children, consumer guidance, increased product noise labelling, and hearing conservation programs for occupational settings.
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from a filter scarcity problem: not all of the available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results show that the 2-FFC operation resolves the filter scarcity problem. The 2-FFC descriptors are also shown to be superior to those of PCANet, BSIF, and other face descriptors, in terms of rank-1 identification rate (%).
The Set of Diagnostics for the First Operation Campaign of the Wendelstein 7-X Stellarator
NASA Astrophysics Data System (ADS)
König, Ralf; Baldzuhn, J.; Biel, W.; Biedermann, C.; Bosch, H. S.; Bozhenkov, S.; Bräuer, T.; Brotas de Carvalho, B.; Burhenn, R.; Buttenschön, B.; Cseh, G.; Czarnecka, A.; Endler, M.; Erckmann, V.; Estrada, T.; Geiger, J.; Grulke, O.; Hartmann, D.; Hathiramani, D.; Hirsch, M.; Jabłonski, S.; Jakubowski, M.; Kaczmarczyk, J.; Klinger, T.; Klose, S.; Kocsis, G.; Kornejew, P.; Krämer-Flecken, A.; Kremeyer, T.; Krychowiak, M.; Kubkowska, M.; Langenberg, A.; Laqua, H. P.; Laux, M.; Liang, Y.; Lorenz, A.; Marchuk, A. O.; Moncada, V.; Neubauer, O.; Neuner, U.; Oosterbeek, J. W.; Otte, M.; Pablant, N.; Pasch, E.; Pedersen, T. S.; Rahbarnia, K.; Ryc, L.; Schmitz, O.; Schneider, W.; Schuhmacher, H.; Schweer, B.; Stange, T.; Thomsen, H.; Travere, J.-M.; Szepesi, T.; Wenzel, U.; Werner, A.; Wiegel, B.; Windisch, T.; Wolf, R.; Wurden, G. A.; Zhang, D.; Zimbal, A.; Zoletnik, S.; the W7-X Team
2015-10-01
Wendelstein 7-X (W7-X) is a large optimized stellarator (B = 2.5 T, V = 30 m³) aiming to demonstrate the reactor relevance of optimized stellarators. In 2015 W7-X will begin its first operation phase (OP1.1) with five inertially cooled inboard limiters made of graphite. Assuming the heat loads can be spread out evenly between the limiters, 1-second discharges at 2 MW of ECRH heating power could be run in OP1.1. The expected plasma parameters will be sufficient to demonstrate the readiness of the installed diagnostics and even to run a first physics program. The diagnostics available for this first operation phase, including some special limiter diagnostics, and their capabilities are presented. A shorter version of this contribution is due to be published in PoS at: 1st EPS conference on Plasma Diagnostics.
NASA Technical Reports Server (NTRS)
Jones, C. B.; Smetana, F. O.
1979-01-01
It was found that if the upper and lower ends of a collector were opened, large free convection currents may be set up between the collector surface and the cover glass(es), which can result in appreciable heat rejection. If the collector is so designed that both plate surfaces are exposed to convection currents when the upper and lower ends of the collector enclosure are opened, the heat rejection rate is 300 W/m² when the plate is 13 °C above ambient. This is sufficient to permit a collector array designed to provide 100 percent of the heating needs of a home to reject the accumulated daily air conditioning load during the course of a summer night. This also permits the overall energy requirements for cooling to be reduced by at least 15 percent and shifts the load on the utility entirely to the nighttime hours.
Responses of large mammals to climate change
Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan
2014-01-01
Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change. PMID:27583293
Crowdsourcing quality control for Dark Energy Survey images
Melchior, P.
2016-07-01
We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net.
Crowdsourcing quality control for Dark Energy Survey images
NASA Astrophysics Data System (ADS)
Melchior, P.; Sheldon, E.; Drlica-Wagner, A.; Rykoff, E. S.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Doel, P.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Jarvis, M.; Kuehn, K.; Li, T. S.; Maia, M. A. G.; March, M.; Marshall, J. L.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Wester, W.; Zhang, Y.
2016-07-01
We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net.
Miniaturized inertial impactor for personal airborne particulate monitoring: Prototyping
NASA Astrophysics Data System (ADS)
Pasini, Silvia; Bianchi, Elena; Dubini, Gabriele; Cortelezzi, Luca
2017-11-01
Computational fluid dynamics (CFD) simulations allowed us to conceive and design a miniaturized inertial impactor able to collect fine airborne particulate matter (PM10, PM2.5 and PM1). We created, by 3D printing, a prototype of the impactor. We first performed a set of experiments by applying a suction pump to the outlets and sampling the airborne particulate matter in our laboratory. The analysis of the slide showed a collection of a large number of particles, spanning a wide range of sizes, organized in a narrow band located below the exit of the nozzle. In order to show that our miniaturized inertial impactor can be truly used as a personal air-quality monitor, we performed a second set of experiments where the suction needed to produce the airflow through the impactor is generated by a human being inhaling through the outlets of the prototype. To guarantee a number of particles sufficient to perform a quantitative characterization, we collected particles over ten consecutive deep inhalations. Finally, the potential for realistic applications of our miniaturized inertial impactor used in combination with a miniaturized single-particle detector will be discussed. CARIPLO Foundation - project MINUTE (Grant No. 2011-2118).
Primers-4-Yeast: a comprehensive web tool for planning primers for Saccharomyces cerevisiae.
Yofe, Ido; Schuldiner, Maya
2014-02-01
The budding yeast Saccharomyces cerevisiae is a key model organism of functional genomics, due to its ease and speed of genetic manipulations. In fact, in this yeast, the requirement for homologous sequences for recombination purposes is so small that 40 base pairs (bp) are sufficient. Hence, an enormous variety of genetic manipulations can be performed by simply planning primers with the correct homology, using a defined set of transformation plasmids. Although designing primers for yeast transformations and for the verification of their correct insertion is a common task in all yeast laboratories, primer planning is usually done manually and a tool that would enable easy, automated primer planning for the yeast research community is still lacking. Here we introduce Primers-4-Yeast, a web tool that allows primers to be designed in batches for S. cerevisiae gene-targeting transformations, and for the validation of correct insertions. This novel tool enables fast, automated, accurate primer planning for large sets of genes, introduces consistency in primer planning and is therefore suggested to serve as a standard in yeast research. Primers-4-Yeast is available at: http://www.weizmann.ac.il/Primers-4-Yeast Copyright © 2013 John Wiley & Sons, Ltd.
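The arithmetic behind such primer planning is simple enough to sketch (this is not the Primers-4-Yeast code; the annealing sequences and the 40-bp homology length are treated as inputs): each targeting primer is 40 bp of genomic sequence flanking the locus fused to the sequence that anneals to the transformation plasmid.

```python
def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

def targeting_primers(upstream, downstream, fwd_anneal, rev_anneal, homology=40):
    """Forward/reverse gene-targeting primers: genomic homology arm + plasmid-annealing tail.

    upstream/downstream: genomic sequence immediately before/after the region to be replaced.
    fwd_anneal/rev_anneal: plasmid-specific annealing sequences (placeholders in this sketch).
    """
    fwd = upstream[-homology:] + fwd_anneal
    rev = revcomp(downstream[:homology]) + rev_anneal
    return fwd, rev
```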
NASA Technical Reports Server (NTRS)
Vonroos, O. H.
1982-01-01
A theory of deep point defects imbedded in otherwise perfect semiconductor crystals is developed with the aid of pseudopotentials. The dominant short-range forces engendered by the impurity are sufficiently weakened in all cases where the cancellation theorem of the pseudopotential formalism is operative. Thus, effective-mass-like equations exhibiting local effective potentials derived from nonlocal pseudopotentials are shown to be valid for a large class of defects. A two-band secular determinant for the energy eigenvalues of deep defects is also derived from the set of integral equations which corresponds to the set of differential equations of the effective-mass type. Subsequently, the theory in its simplest form, is applied to the system Al(x)Ga(1-x)As:Se. It is shown that the one-electron donor level of Se within the forbidden gap of Al(x)Ga(1-x)As as a function of the AlAs mole fraction x reaches its maximum of about 300 meV (as measured from the conduction band edge) at the cross-over from the direct to the indirect band-gap at x = 0.44 in agreement with experiments.
Prediction and validation of the energy dissipation of a friction damper
NASA Astrophysics Data System (ADS)
Lopez, I.; Nijmeijer, H.
2009-12-01
Friction dampers can be a cheap and efficient way to reduce the vibration levels of a wide range of mechanical systems. In the present work it is shown that the maximum energy dissipation and corresponding optimum friction force of friction dampers with stiff localized contacts and large relative displacements within the contact, can be determined with sufficient accuracy using a dry (Coulomb) friction model. Both the numerical calculations with more complex friction models and the experimental results in a laboratory test set-up show that these two quantities are relatively robust properties of a system with friction. The numerical calculations are performed with several friction models currently used in the literature. For the stick phase smooth approximations like viscous damping or the arctan function are considered but also the non-smooth switch friction model is used. For the slip phase several models of the Stribeck effect are used. The test set-up for the laboratory experiments consists of a mass sliding on parallel ball-bearings, where additional friction is created by a sledge attached to the mass, which is pre-stressed against a friction plate. The measured energy dissipation is in good agreement with the theoretical results for Coulomb friction.
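For the Coulomb model referred to above, the dissipation per vibration cycle has a simple closed form: a friction force F_f working against a harmonic relative displacement of amplitude X dissipates E = 4·F_f·X per cycle. The sketch below finds the optimum friction force by scanning candidates against a user-supplied amplitude curve; the toy amplitude model in the example is a placeholder, not the paper's test set-up.

```python
import numpy as np

def energy_per_cycle(f_friction, amplitude):
    """Energy dissipated per cycle by a Coulomb (dry) friction contact: E = 4 * F_f * X."""
    return 4.0 * f_friction * amplitude

def optimum_friction_force(amplitude_of_force, f_candidates):
    """Return the friction force (and dissipation) maximizing E over the candidate forces.

    amplitude_of_force: callable X(F_f) giving the relative displacement amplitude across
    the contact for a given friction force, supplied by a model or by measurements.
    """
    energies = np.array([energy_per_cycle(f, amplitude_of_force(f)) for f in f_candidates])
    i = int(np.argmax(energies))
    return f_candidates[i], energies[i]

# Toy amplitude curve: displacement shrinks linearly until the contact sticks (placeholder values).
f_grid = np.linspace(0.0, 100.0, 1001)
print(optimum_friction_force(lambda f: max(0.0, 1e-3 * (1.0 - f / 100.0)), f_grid))
```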
Learning Sequential Composition Control.
Najafi, Esmaeil; Babuska, Robert; Lopes, Gabriel A D
2016-11-01
Sequential composition is an effective supervisory control method for addressing control problems in nonlinear dynamical systems. It executes a set of controllers sequentially to achieve a control specification that cannot be realized by a single controller. As these controllers are designed offline, sequential composition cannot address unmodeled situations that might occur during runtime. This paper proposes a learning approach to augment the standard sequential composition framework by using online learning to handle unforeseen situations. New controllers are acquired via learning and added to the existing supervisory control structure. In the proposed setting, learning experiments are restricted to take place within the domain of attraction (DOA) of the existing controllers. This guarantees that the learning process is safe (i.e., the closed loop system is always stable). In addition, the DOA of the new learned controller is approximated after each learning trial. This keeps the learning process short as learning is terminated as soon as the DOA of the learned controller is sufficiently large. The proposed approach has been implemented on two nonlinear systems: 1) a nonlinear mass-damper system and 2) an inverted pendulum. The results show that in both cases a new controller can be rapidly learned and added to the supervisory control structure.
Del Savio, Lorenzo; Prainsack, Barbara; Buyx, Alena
2017-08-01
The establishment of databases for research in human microbiomics is dependent on the recruitment of sufficient numbers and diversity of participants. Factors that support or impede participant recruitment in studies of this type have not yet been studied. We report the results of a survey aimed at establishing the motivations of participants in the British Gut Project, a research project that relies on volunteers to provide samples and to help fund the project. The two most frequently reported motivations for participation were altruism and solidarity. Low education levels appeared to be a recruitment obstacle. More than half of our 151 respondents said they would participate in further citizen-science projects; 38% said they would not participate in a similar project if it was for-profit or in a project that did not release data sets in repositories accessible to scientists (30%). The desire to take part in research was reported as a key motivation for participation in the British Gut Project (BGP). Such prosocial motivations can be mobilized for the establishment of large data sets for research.Genet Med advance online publication 26 January 2017.
Gottschalk, Caroline; Fischer, Rico
2017-03-01
Different contexts with high versus low conflict frequencies require a specific attentional control involvement, i.e., strong attentional control for high conflict contexts and less attentional control for low conflict contexts. While it is assumed that the corresponding control set can be activated upon stimulus presentation at the respective context (e.g., upper versus lower location), the actual features that trigger control set activation are to date not described. Here, we ask whether the perceptual priming of the location context by an abrupt onset of irrelevant stimuli is sufficient in activating the context-specific attentional control set. For example, the mere onset of a stimulus might disambiguate the relevant location context and thus, serve as a low-level perceptual trigger mechanism that activates the context-specific attentional control set. In Experiment 1 and 2, the onsets of task-relevant and task-irrelevant (distracter) stimuli were manipulated at each context location to compete for triggering the activation of the appropriate control set. In Experiment 3, a prior training session enabled distracter stimuli to establish contextual control associations of their own before entering the test session. Results consistently showed that the mere onset of a task-irrelevant stimulus (with or without a context-control association) is not sufficient to activate the context-associated attentional control set by disambiguating the relevant context location. Instead, we argue that the identification of the relevant stimulus at the respective context is a precondition to trigger the activation of the context-associated attentional control set.
Lear, Bridget C; Zhang, Luoying; Allada, Ravi
2009-07-01
Discrete clusters of circadian clock neurons temporally organize daily behaviors such as sleep and wake. In Drosophila, a network of just 150 neurons drives two peaks of timed activity in the morning and evening. A subset of these neurons expresses the neuropeptide pigment dispersing factor (PDF), which is important for promoting morning behavior as well as maintaining robust free-running rhythmicity in constant conditions. Yet, how PDF acts on downstream circuits to mediate rhythmic behavior is unknown. Using circuit-directed rescue of PDF receptor mutants, we show that PDF targeting of just approximately 30 non-PDF evening circadian neurons is sufficient to drive morning behavior. This function is not accompanied by large changes in core molecular oscillators in light-dark, indicating that PDF RECEPTOR likely regulates the output of these cells under these conditions. We find that PDF also acts on this focused set of non-PDF neurons to regulate both evening activity phase and period length, consistent with modest resetting effects on core oscillators. PDF likely acts on more distributed pacemaker neuron targets, including the PDF neurons themselves, to regulate rhythmic strength. Here we reveal defining features of the circuit-diagram for PDF peptide function in circadian behavior, revealing the direct neuronal targets of PDF as well as its behavioral functions at those sites. These studies define a key direct output circuit sufficient for multiple PDF dependent behaviors.
Lavis, John N
2006-01-01
Public policymakers must contend with a particular set of institutional arrangements that govern what can be done to address any given issue, pressure from a variety of interest groups about what they would like to see done to address any given issue, and a range of ideas (including research evidence) about how best to address any given issue. Rarely do processes exist that can get optimally packaged high-quality and high-relevance research evidence into the hands of public policymakers when they most need it, which is often in hours and days, not months and years. In Canada, a variety of efforts have been undertaken to address the factors that have been found to increase the prospects for research use, including the production of systematic reviews that meet the shorter term (but not urgent) needs of public policymakers and encouraging partnerships between researchers and policymakers that allow for their interaction around the tasks of asking and answering relevant questions. Much less progress has been made in making available research evidence to inform the urgent needs of public policymakers and in addressing attitudinal barriers and capacity limitations. In the future, knowledge-translation processes, particularly push efforts and efforts to facilitate user pull, should be undertaken on a sufficiently large scale and with a sufficiently rigorous evaluation so that robust conclusions can be drawn about their effectiveness.
Genomic resources for identification of the minimal N2 -fixing symbiotic genome.
diCenzo, George C; Zamani, Maryam; Milunovic, Branislava; Finan, Turlough M
2016-09-01
The lack of an appropriate genomic platform has precluded the use of gain-of-function approaches to study the rhizobium-legume symbiosis, preventing the establishment of the genes necessary and sufficient for symbiotic nitrogen fixation (SNF) and potentially hindering synthetic biology approaches aimed at engineering this process. Here, we describe the development of an appropriate system by reverse engineering Sinorhizobium meliloti. Using a novel in vivo cloning procedure, the engA-tRNA-rmlC (ETR) region, essential for cell viability and symbiosis, was transferred from Sinorhizobium fredii to the ancestral location on the S. meliloti chromosome, rendering the ETR region on pSymB redundant. A derivative of this strain lacking both the large symbiotic replicons (pSymA and pSymB) was constructed. Transfer of pSymA and pSymB back into this strain restored symbiotic capabilities with alfalfa. To delineate the location of the single-copy genes essential for SNF on these replicons, we screened a S. meliloti deletion library, representing > 95% of the 2900 genes of the symbiotic replicons, for their phenotypes with alfalfa. Only four loci, accounting for < 12% of pSymA and pSymB, were essential for SNF. These regions will serve as our preliminary target of the minimal set of horizontally acquired genes necessary and sufficient for SNF. © 2016 Society for Applied Microbiology and John Wiley & Sons Ltd.
Global health goals: lessons from the worldwide effort to eradicate poliomyelitis.
Aylward, R Bruce; Acharya, Arnab; England, Sarah; Agocs, Mary; Linkins, Jennifer
2003-09-13
The Global Polio Eradication Initiative was launched in 1988. Assessment of the politics, production, financing, and economics of this international effort has suggested six lessons that might be pertinent to the pursuit of other global health goals. First, such goals should be based on technically sound strategies with proven operational feasibility in a large geographical area. Second, before launching an initiative, an informed collective decision must be negotiated and agreed in an appropriate international forum to keep long-term risks in financing and implementation to a minimum. Third, if substantial community engagement is envisaged, efficient deployment of sufficient resources at that level necessitates a defined, time-limited input by the community within a properly managed partnership. Fourth, although the so-called fair-share concept is arguably the best way to finance such goals, its limitations must be recognised early and alternative strategies developed for settings where it does not work. Fifth, international health goals must be designed and pursued within existing health systems if they are to secure and sustain broad support. Finally, countries, regions, or populations most likely to delay the achievement of a global health goal should be identified at the outset to ensure provision of sufficient resources and attention. The greatest threats to poliomyelitis eradication are a financing gap of US$210 million and difficulties in strategy implementation in at most five countries.
Femtosecond X-ray protein nanocrystallography
Chapman, Henry N.; Fromme, Petra; Barty, Anton; White, Thomas A.; Kirian, Richard A.; Aquila, Andrew; Hunter, Mark S.; Schulz, Joachim; DePonte, Daniel P.; Weierstall, Uwe; Doak, R. Bruce; Maia, Filipe R. N. C.; Martin, Andrew V.; Schlichting, Ilme; Lomb, Lukas; Coppola, Nicola; Shoeman, Robert L.; Epp, Sascha W.; Hartmann, Robert; Rolles, Daniel; Rudenko, Artem; Foucar, Lutz; Kimmel, Nils; Weidenspointner, Georg; Holl, Peter; Liang, Mengning; Barthelmess, Miriam; Caleman, Carl; Boutet, Sébastien; Bogan, Michael J.; Krzywinski, Jacek; Bostedt, Christoph; Bajt, Saša; Gumprecht, Lars; Rudek, Benedikt; Erk, Benjamin; Schmidt, Carlo; Hömke, André; Reich, Christian; Pietschner, Daniel; Strüder, Lothar; Hauser, Günter; Gorke, Hubert; Ullrich, Joachim; Herrmann, Sven; Schaller, Gerhard; Schopper, Florian; Soltau, Heike; Kühnel, Kai-Uwe; Messerschmidt, Marc; Bozek, John D.; Hau-Riege, Stefan P.; Frank, Matthias; Hampton, Christina Y.; Sierra, Raymond G.; Starodub, Dmitri; Williams, Garth J.; Hajdu, Janos; Timneanu, Nicusor; Seibert, M. Marvin; Andreasson, Jakob; Rocker, Andrea; Jönsson, Olof; Svenda, Martin; Stern, Stephan; Nass, Karol; Andritschke, Robert; Schröter, Claus-Dieter; Krasniqi, Faton; Bott, Mario; Schmidt, Kevin E.; Wang, Xiaoyu; Grotjohann, Ingo; Holton, James M.; Barends, Thomas R. M.; Neutze, Richard; Marchesini, Stefano; Fromme, Raimund; Schorb, Sebastian; Rupp, Daniela; Adolph, Marcus; Gorkhover, Tais; Andersson, Inger; Hirsemann, Helmut; Potdevin, Guillaume; Graafsma, Heinz; Nilsson, Björn; Spence, John C. H.
2012-01-01
X-ray crystallography provides the vast majority of macromolecular structures, but the success of the method relies on growing crystals of sufficient size. In conventional measurements, the necessary increase in X-ray dose to record data from crystals that are too small leads to extensive damage before a diffraction signal can be recorded [1-3]. It is particularly challenging to obtain large, well-diffracting crystals of membrane proteins, for which fewer than 300 unique structures have been determined despite their importance in all living cells. Here we present a method for structure determination where single-crystal X-ray diffraction ‘snapshots’ are collected from a fully hydrated stream of nanocrystals using femtosecond pulses from a hard-X-ray free-electron laser, the Linac Coherent Light Source [4]. We prove this concept with nanocrystals of photosystem I, one of the largest membrane protein complexes [5]. More than 3,000,000 diffraction patterns were collected in this study, and a three-dimensional data set was assembled from individual photosystem I nanocrystals (~200 nm to 2 μm in size). We mitigate the problem of radiation damage in crystallography by using pulses briefer than the timescale of most damage processes [6]. This offers a new approach to structure determination of macromolecules that do not yield crystals of sufficient size for studies using conventional radiation sources or are particularly sensitive to radiation damage. PMID:21293373
Tan, Mingsheng; Stone, Douglas R; Triana, Joseph C; Almagri, Abdulgader F; Fiksel, Gennady; Ding, Weixing; Sarff, John S; McCollam, Karsten J; Li, Hong; Liu, Wandong
2017-02-01
A 40-channel capacitive probe has been developed to measure the electrostatic fluctuations associated with the tearing modes deep into the Madison Symmetric Torus (MST) reversed-field pinch plasma. The capacitive probe measures the ac component of the plasma potential via the voltage induced on stainless steel electrodes capacitively coupled with the plasma through a thin annular layer of boron nitride (BN) dielectric (which also serves as the particle shield). When bombarded by the plasma electrons, BN provides a sufficiently large secondary electron emission for the induced voltage to be very close to the plasma potential. The probe consists of four stalks, each with ten cylindrical capacitors that are radially separated by 1.5 cm. The four stalks are arranged on a 1.3 cm square grid so that at each radial position, there are four electrodes forming a square grid. Every two adjacent radial sets of four electrodes form a cube. The fluctuating electric field can be calculated from the gradient of the plasma potential fluctuations at the eight corners of the cube. The probe can be inserted up to 15 cm (r/a = 0.7) into the plasma. The capacitive probe has a frequency bandwidth from 13 Hz to 100 kHz (an amplifier-circuit limit), sufficient for studying the tearing modes (5-30 kHz) in the MST reversed-field pinch.
On Evaluating Brain Tissue Classifiers without a Ground Truth
Martin-Fernandez, Marcos; Ungar, Lida; Nakamura, Motoaki; Koo, Min-Seong; McCarley, Robert W.; Shenton, Martha E.
2009-01-01
In this paper, we present a set of techniques for the evaluation of brain tissue classifiers on a large data set of MR images of the head. Due to the difficulty of establishing a gold standard for this type of data, we focus our attention on methods which do not require a ground truth, but instead rely on a common agreement principle. Three different techniques are presented: the Williams’ index, a measure of common agreement; STAPLE, an Expectation Maximization algorithm which simultaneously estimates performance parameters and constructs an estimated reference standard; and Multidimensional Scaling, a visualization technique to explore similarity data. We apply these different evaluation methodologies to a set of eleven different segmentation algorithms on forty MR images. We then validate our evaluation pipeline by building a ground truth based on human expert tracings. The evaluations with and without a ground truth are compared. Our findings show that comparing classifiers without a gold standard can provide a great deal of useful information. In particular, outliers can be easily detected, strongly consistent or highly variable techniques can be readily discriminated, and the overall similarity between different techniques can be assessed. On the other hand, we also find that some information present in the expert segmentations is not captured by the automatic classifiers, suggesting that common agreement alone may not be sufficient for a precise performance evaluation of brain tissue classifiers. PMID:17532646
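To illustrate the common-agreement principle, the following minimal Python sketch (not the authors' code) computes one common formulation of the Williams' index: the ratio of a classifier's mean agreement with all other classifiers to the mean pairwise agreement among those others, using Dice overlap as the agreement measure. The random "segmentations" are placeholders for real classifier outputs; a value near or above 1 suggests a classifier agrees with the group at least as well as the group agrees with itself.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary label maps (the agreement measure)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def williams_index(segmentations, j):
    """Ratio of classifier j's mean agreement with the other classifiers
    to the mean pairwise agreement among those other classifiers."""
    others = [k for k in range(len(segmentations)) if k != j]
    agree_with_j = np.mean([dice(segmentations[j], segmentations[k]) for k in others])
    agree_among_others = np.mean([dice(segmentations[k], segmentations[l])
                                  for i, k in enumerate(others) for l in others[i + 1:]])
    return agree_with_j / agree_among_others

# Toy usage: three random "segmentations" of the same image
rng = np.random.default_rng(0)
segs = [rng.random((64, 64)) > 0.5 for _ in range(3)]
print([round(williams_index(segs, j), 3) for j in range(3)])
```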
Homeyer, Nadine; Stoll, Friederike; Hillisch, Alexander; Gohlke, Holger
2014-08-12
Correctly ranking compounds according to their computed relative binding affinities will be of great value for decision making in the lead optimization phase of industrial drug discovery. However, the performance of existing computationally demanding binding free energy calculation methods in this context is largely unknown. We analyzed the performance of the molecular mechanics continuum solvent, the linear interaction energy (LIE), and the thermodynamic integration (TI) approach for three sets of compounds from industrial lead optimization projects. The data sets pose challenges typical for this early stage of drug discovery. None of the methods was sufficiently predictive when applied out of the box without considering these challenges. Detailed investigations of failures revealed critical points that are essential for good binding free energy predictions. When data set-specific features were considered accordingly, predictions valuable for lead optimization could be obtained for all approaches but LIE. Our findings lead to clear recommendations for when to use which of the above approaches. Our findings also stress the important role of expert knowledge in this process, not least for estimating the accuracy of prediction results by TI, using indicators such as the size and chemical structure of exchanged groups and the statistical error in the predictions. Such knowledge will be invaluable when it comes to the question which of the TI results can be trusted for decision making.
Alchemical prediction of hydration free energies for SAMPL
Mobley, David L.; Liu, Shaui; Cerutti, David S.; Swope, William C.; Rice, Julia E.
2013-01-01
Hydration free energy calculations have become important tests of force fields. Alchemical free energy calculations based on molecular dynamics simulations provide a rigorous way to calculate these free energies for a particular force field, given sufficient sampling. Here, we report results of alchemical hydration free energy calculations for the set of small molecules comprising the 2011 Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) challenge. Our calculations are largely based on the Generalized Amber Force Field (GAFF) with several different charge models, and we achieved RMS errors in the 1.4-2.2 kcal/mol range depending on charge model, marginally higher than what we typically observed in previous studies [1-5]. The test set consists of ethane, biphenyl, and a dibenzyl dioxin, as well as a series of chlorinated derivatives of each. We found that, for this set, using high-quality partial charges from MP2/cc-PVTZ SCRF RESP fits provided marginally improved agreement with experiment over using AM1-BCC partial charges as we have more typically done, in keeping with our recent findings [5]. Switching to OPLS Lennard-Jones parameters with AM1-BCC charges also improves agreement with experiment. We also find a number of chemical trends within each molecular series which we can explain, but there are also some surprises, including some that are captured by the calculations and some that are not. PMID:22198475
Ontology modularization to improve semantic medical image annotation.
Wennerberg, Pinar; Schulz, Klaus; Buitelaar, Paul
2011-02-01
Searching for medical images and patient reports is a significant challenge in a clinical setting. The contents of such documents are often not described in sufficient detail, making it difficult to utilize the inherent wealth of information contained within them. Semantic image annotation addresses this problem by describing the contents of images and reports using medical ontologies. Medical images and patient reports are then linked to each other through common annotations. Subsequently, search algorithms can more effectively find related sets of documents on the basis of these semantic descriptions. A prerequisite to realizing such a semantic search engine is that the data contained within should have been previously annotated with concepts from medical ontologies. One major challenge in this regard is the size and complexity of medical ontologies as annotation sources. Manual annotation is particularly time-consuming and labor-intensive in a clinical environment. In this article we propose an approach to reducing the size of clinical ontologies for more efficient manual image and text annotation. More precisely, our goal is to identify smaller fragments of a large anatomy ontology that are relevant for annotating medical images from patients suffering from lymphoma. Our work is in the area of ontology modularization, which is a recent and active field of research. We describe our approach, methods and data set in detail and we discuss our results. Copyright © 2010 Elsevier Inc. All rights reserved.
Lee, Hwa-Young; Kang, Minah
2015-01-01
This paper aims to investigate whether good governance of a recipient country is a necessary condition, and what combinations of factors, including governance, are sufficient, for low prevalence of HIV/AIDS in HIV/AIDS aid recipient countries during the period 2002-2010. For this, Fuzzy-set Qualitative Comparative Analysis (QCA) was used. Nine potential attributes for a causal configuration for low HIV/AIDS prevalence were identified through a review of previous studies. For each factor, full membership, full non-membership, and the crossover point were specified using both the authors' knowledge and statistical information on the variables. Calibration and conversion to a fuzzy-set score were conducted using Fs/QCA 2.0, and probabilistic tests for necessity and sufficiency were performed with STATA 11. The result suggested that governance is a necessary condition for low prevalence of HIV/AIDS in a recipient country. The sufficiency test yielded two pathways. A low level of governance can lead to a low level of HIV/AIDS prevalence when it is combined with other favorable factors, especially low economic inequality, high economic development and high health expenditure. However, strengthening governance is a more practical measure for keeping HIV/AIDS prevalence low because it is hard to achieve both economic development and economic equality. This study highlights that a comprehensive policy measure is the key for achieving low prevalence of HIV/AIDS in a recipient country. PMID:26617451
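For readers unfamiliar with fuzzy-set QCA, the necessity and sufficiency tests used in this kind of study rest on simple set-theoretic consistency scores (Ragin's standard formulas). The Python sketch below is illustrative only, not the authors' STATA/fs/QCA workflow, and the membership scores are invented toy values.

```python
import numpy as np

def consistency_necessity(X, Y):
    """Consistency of 'X is necessary for Y': sum(min(X, Y)) / sum(Y)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    return np.minimum(X, Y).sum() / Y.sum()

def consistency_sufficiency(X, Y):
    """Consistency of 'X is sufficient for Y': sum(min(X, Y)) / sum(X)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    return np.minimum(X, Y).sum() / X.sum()

# Toy calibrated membership scores for a handful of recipient countries
governance = [0.8, 0.6, 0.9, 0.2, 0.7]   # membership in "good governance"
low_prev   = [0.7, 0.6, 0.8, 0.3, 0.9]   # membership in "low HIV/AIDS prevalence"

print("necessity consistency:  ", round(consistency_necessity(governance, low_prev), 3))
print("sufficiency consistency:", round(consistency_sufficiency(governance, low_prev), 3))
```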
Scharpf, Joseph; Haffey, Timothy; Rajasekaran, Karthik; Lorenz, Robert; McBride, Jennifer
2015-01-01
In certain cases, the recurrent laryngeal nerve (RLN) has to be sacrificed. This often results in an inadequate length of residual RLN to be used in a reinnervation procedure. We investigated the length of the distal stump of the RLN from the inferior border of the inferior pharyngeal constrictor muscle (IPCM), where it is frequently compromised, to its entrance into the larynx. Our objective was to determine whether this residual nerve stock was sufficient for margin clearance and neurorrhaphy. In this cadaveric study, recurrent laryngeal nerves were identified in fresh-frozen cadavers. The IPCM was divided, revealing the distal stump of the RLN, which was measured. Dissection was performed in 20 cadavers (40 nerves). The average length of the right RLN and the left RLN from the IPCM until it entered the larynx was 15 mm and 14 mm, respectively. All residual RLN remnants were of sufficient length for neurorrhaphy. Concomitant RLN reinnervation procedures in the setting of nerve sacrifice are not well described. A barrier to reinnervation in this setting may be insufficient residual nerve length for a neurorrhaphy. Often, when the RLN is sacrificed intraoperatively either iatrogenically or due to tumor invasion, it is close to the cricoarytenoid joint, at the inferior border of the IPCM. This study demonstrates that by splitting the IPCM, sufficient length can be obtained for neurorrhaphy. Copyright © 2015 Elsevier Inc. All rights reserved.
An Investigation Into Low Fuel Pressure Warnings on a Macchi-Viper Aircraft
1988-05-01
was sufficient to activate the low-pressure warning light. The pressure switch is normally set to a differential of between 2.5 - 3 psi. Partial...only a 2.1 psig margin for light illumination, if the pressure switch is set at 3 psig, and gives little scope for extra pipe or filter losses when... pressure switch is set between 2.5 - 3 psig. Any untoward pressure resistance in the fuel delivery line and filtering system would soon erode this
Pan-European stochastic flood event set
NASA Astrophysics Data System (ADS)
Kadlec, Martin; Pinto, Joaquim G.; He, Yi; Punčochář, Petr; Kelemen, Fanni D.; Manful, Desmond; Palán, Ladislav
2017-04-01
Impact Forecasting (IF), the model development center of Aon Benfield, has been developing a large suite of catastrophe flood models on probabilistic bases for individual countries in Europe. Such natural catastrophes do not follow national boundaries: for example, the major flood in 2016 was responsible for Europe's largest insured loss of USD 3.4bn and affected Germany, France, Belgium, Austria and parts of several other countries. Reflecting such needs, IF initiated a pan-European flood event set development which combines cross-country exposures with country-based loss distributions to provide more insightful data to re/insurers. Because observed discharge data are not available across the whole of Europe in sufficient quantity and quality for detailed loss evaluation, a top-down approach was chosen. This approach is based on simulating precipitation from a GCM/RCM model chain followed by a calculation of discharges using rainfall-runoff modelling. IF set up this project in close collaboration with the Karlsruhe Institute of Technology (KIT) for the precipitation estimates and with the University of East Anglia (UEA) for the rainfall-runoff modelling. KIT's main objective is to provide high-resolution daily historical and stochastic time series of key meteorological variables. A purely dynamical downscaling approach with the regional climate model COSMO-CLM (CCLM) is used to generate the historical time series, using re-analysis data as boundary conditions. The resulting time series are validated against the gridded observational dataset E-OBS, and different bias-correction methods are employed. The generation of the stochastic time series requires transfer functions between large-scale atmospheric variables and regional temperature and precipitation fields. These transfer functions are developed for the historical time series using reanalysis data as predictors and bias-corrected CCLM simulated precipitation and temperature as predictands. Finally, the transfer functions are applied to a large ensemble of GCM simulations with forcing corresponding to present-day climate conditions to generate highly resolved stochastic time series of precipitation and temperature for several thousand years. These time series form the input for the rainfall-runoff model developed by the UEA team. It is a spatially distributed model adapted from the HBV model and will be calibrated for individual basins using historical discharge data. The calibrated model will be driven by the precipitation time series generated by the KIT team to simulate discharges at a daily time step. The uncertainties in the simulated discharges will be analysed using multiple model parameter sets. A number of statistical methods will be used to assess return periods, changes in the magnitudes, changes in the characteristics of floods such as time base and time to peak, and spatial correlations of large flood events. The Pan-European flood stochastic event set will permit a better view of flood risk for market applications.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations, which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.
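To make the role of the adaptive gain concrete, here is a minimal Python sketch of a standard scalar model-reference adaptive controller, i.e. the baseline the abstract contrasts against, not the paper's optimal control modification. The plant parameters, reference model, and gain value are illustrative assumptions. Raising gamma speeds up adaptation but tends to produce the high-frequency parameter oscillations that motivate the modification.

```python
# Plant: x_dot = a*x + b*u, with a unknown to the controller and b > 0.
a_true, b = -1.0, 2.0
# Reference model: xm_dot = am*xm + bm*r, with am < 0.
am, bm = -4.0, 4.0
gamma = 50.0                     # adaptive gain; larger -> faster adaptation

dt, T = 1e-3, 10.0
x, xm = 0.0, 0.0
kx, kr = 0.0, 0.0                # adaptive feedback and feedforward gains
for step in range(int(T / dt)):
    t = step * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0       # square-wave reference command
    u = kx * x + kr * r
    e = x - xm                                  # tracking error
    # Lyapunov-based adaptive laws (sign(b) = +1 assumed)
    kx += dt * (-gamma * x * e)
    kr += dt * (-gamma * r * e)
    # Forward-Euler integration of plant and reference model
    x += dt * (a_true * x + b * u)
    xm += dt * (am * xm + bm * r)

print(f"final gains kx={kx:.3f}, kr={kr:.3f}, tracking error={x - xm:+.5f}")
```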
Mechanism of explosive eruptions of Kilauea Volcano, Hawaii
Dvorak, J.J.
1992-01-01
A small explosive eruption of Kilauea Volcano, Hawaii, occurred in May 1924. The eruption was preceded by rapid draining of a lava lake and transfer of a large volume of magma from the summit reservoir to the east rift zone. This lowered the magma column, which reduced hydrostatic pressure beneath Halemaumau and allowed groundwater to flow rapidly into areas of hot rock, producing a phreatic eruption. A comparison with other events at Kilauea shows that the transfer of a large volume of magma out of the summit reservoir is not sufficient to produce a phreatic eruption. For example, the volume transferred at the beginning of explosive activity in May 1924 was less than the volumes transferred in March 1955 and January-February 1960, when no explosive activity occurred. Likewise, draining of a lava lake and deepening of the floor of Halemaumau, which occurred in May 1922 and August 1923, were not sufficient to produce explosive activity. A phreatic eruption of Kilauea requires both the transfer of a large volume of magma from the summit reservoir and the rapid removal of magma from near the surface, where the surrounding rocks have been heated to a sufficient temperature to produce steam explosions when suddenly contacted by groundwater. © 1992 Springer-Verlag.
Metabolic rates of giant pandas inform conservation strategies.
Fei, Yuxiang; Hou, Rong; Spotila, James R; Paladino, Frank V; Qi, Dunwu; Zhang, Zhihe
2016-06-06
The giant panda is an icon of conservation and survived a large-scale bamboo die off in the 1980s in China. Captive breeding programs have produced a large population in zoos and efforts continue to reintroduce those animals into the wild. However, we lack sufficient knowledge of their physiological ecology to determine requirements for survival now and in the face of climate change. We measured resting and active metabolic rates of giant pandas in order to determine if current bamboo resources were sufficient for adding additional animals to populations in natural reserves. Resting metabolic rates were somewhat below average for a panda sized mammal and active metabolic rates were in the normal range. Pandas do not have exceptionally low metabolic rates. Nevertheless, there is enough bamboo in natural reserves to support both natural populations and large numbers of reintroduced pandas. Bamboo will not be the limiting factor in successful reintroduction.
Pippel, Kristina; Meinck, M; Lübke, N
2017-06-01
Mobile geriatric rehabilitation can be provided in the setting of nursing homes, short-term care (STC) facilities and exclusively in private homes. This study analyzed the common features and differences of mobile rehabilitation interventions in various settings. Stratified by setting, 1,879 anonymized mobile geriatric rehabilitation treatments between 2011 and 2014 from 11 participating institutions were analyzed with respect to patient, process and outcome-related features. Significant differences between the nursing home (n = 514, 27 %), STC (n = 167, 9 %) and private home (n = 1198, 64 %) settings were evident for mean age (83 years, 83 years and 80 years, respectively), percentage of women (72 %, 64 % and 55 %), degree of dependency on pre-existing care (92 %, 76 % and 64 %), total treatment sessions (TS, 38 TS, 42 TS and 41 TS), treatment duration (54 days, 61 days and 58 days) as well as the Barthel index at the start of rehabilitation (34 points, 39 points and 46 points) and the gain in the Barthel index (15 points, 21 points and 18 points), whereby the gain in the capacity for self-sufficiency was significant in all settings. The setting-specific evaluation of mobile geriatric rehabilitation showed differences for relevant patient, process and outcome-related features. Compared to inpatient rehabilitation, mobile rehabilitation in all settings made an above-average contribution to the rehabilitation of patients with pre-existing dependency on care. The gains in the capacity for self-sufficiency achieved in all settings support the efficacy of mobile geriatric rehabilitation under the current prerequisites for applicability.
NASA Technical Reports Server (NTRS)
Leitmann, G.; Liu, H. S.
1977-01-01
Dynamic systems subject to control by two agents were considered, where one agent desires that no trajectory of the system emanating from outside a given set intersects that set, no matter what admissible actions the other agent takes. Constructive conditions sufficient to yield a feedback control for the agent seeking avoidance were employed to deduce an evader control for the planar pursuit-evasion problem with bounded normal accelerations.
Information Processing Research.
1988-05-01
concentrated mainly on the Hitech chess machine, which achieves its success from parallelism in the right places. Hitech has now reached a National rating...includes local user workstations, a set of central server workstations each acting as a host for a Warp machine, and a few Warp multiprocessors. The... successful completion. A quorum for an operation is any such set of sites. Necessary and sufficient constraints on quorum intersections are derived
Determination of nonlinear genetic architecture using compressed sensing.
Ho, Chiu Man; Hsu, Stephen D H
2015-01-01
One of the fundamental problems of modern genomics is to extract the genetic architecture of a complex trait from a data set of individual genotypes and trait values. Establishing this important connection between genotype and phenotype is complicated by the large number of candidate genes, the potentially large number of causal loci, and the likely presence of some nonlinear interactions between different genes. Compressed Sensing methods obtain solutions to under-constrained systems of linear equations. These methods can be applied to the problem of determining the best model relating genotype to phenotype, and generally deliver better performance than simply regressing the phenotype against each genetic variant, one at a time. We introduce a Compressed Sensing method that can reconstruct nonlinear genetic models (i.e., including epistasis, or gene-gene interactions) from phenotype-genotype (GWAS) data. Our method uses L1-penalized regression applied to nonlinear functions of the sensing matrix. The computational and data resource requirements for our method are similar to those necessary for reconstruction of linear genetic models (or identification of gene-trait associations), assuming a condition of generalized sparsity, which limits the total number of gene-gene interactions. An example of a sparse nonlinear model is one in which a typical locus interacts with several or even many others, but only a small subset of all possible interactions exists. It seems plausible that most genetic architectures fall in this category. We give theoretical arguments suggesting that the method is nearly optimal in performance, and demonstrate its effectiveness on broad classes of nonlinear genetic models using simulated human genomes and the small amount of currently available real data. A phase transition (i.e., dramatic and qualitative change) in the behavior of the algorithm indicates when sufficient data is available for its successful application. Our results indicate that predictive models for many complex traits, including a variety of human disease susceptibilities (e.g., with additive heritability h² ∼ 0.5), can be extracted from data sets comprised of n⋆ ∼ 100s individuals, where s is the number of distinct causal variants influencing the trait. For example, given a trait controlled by ∼10k loci, roughly a million individuals would be sufficient for application of the method.
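A minimal Python sketch of the core idea, L1-penalized regression on nonlinear functions of the genotype matrix (here, pairwise interaction terms), follows. The simulated data, sizes, and penalty value are toy assumptions for illustration, not the authors' compressed sensing implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n, p = 600, 100                                    # individuals, candidate loci (toy sizes)
G = rng.integers(0, 3, size=(n, p)).astype(float)  # genotypes coded as 0/1/2 minor-allele counts

# Sparse nonlinear (epistatic) architecture: two main effects plus one interaction
y = 1.0 * G[:, 3] - 0.8 * G[:, 17] + 1.5 * G[:, 3] * G[:, 17] + rng.normal(0.0, 0.5, n)

# L1-penalized regression on nonlinear functions of the genotype matrix:
# augment the design with all pairwise interaction terms, then fit a Lasso.
expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X = expand.fit_transform(G)
fit = Lasso(alpha=0.05, max_iter=10000).fit(X, y)

print("number of selected features:", np.count_nonzero(fit.coef_))
```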
Teaching Students about Plagiarism: An Internet Solution to an Internet Problem
ERIC Educational Resources Information Center
Snow, Eleanour
2006-01-01
The Internet has changed the ways that students think, learn, and write. Students have large amounts of information, largely anonymous and without clear copyright information, literally at their fingertips. Without sufficient guidance, the inappropriate use of this information seems inevitable. Plagiarism among college students is rising, due to…
A COMPARISON OF SIX BENTHIC MACROINVERTEBRATE SAMPLING METHODS IN FOUR LARGE RIVERS
In 1999, a study was conducted to compare six macroinvertebrate sampling methods in four large (boatable) rivers that drain into the Ohio River. Two methods each were adapted from existing methods used by the USEPA, USGS and Ohio EPA. Drift nets were unable to collect a suffici...
NASA Astrophysics Data System (ADS)
Wördenweber, Roger; Hollmann, Eugen; Poltiasev, Michael; Neumüller, Heinz-Werner
2003-05-01
This paper addresses the development of a technically relevant sputter-deposition process for YBa2Cu3O7-δ films. First, the simulation of the particle transport from target to substrate indicates that only at a reduced pressure of p ≈ 1-10 Pa can a sufficiently large deposition rate and a homogeneous stoichiometric distribution of the particles during large-area deposition be expected. The results of the simulations are generally confirmed by deposition experiments on CeO2-buffered sapphire and LaAlO3 substrates using a magnetron sputtering system suitable for large-area deposition. However, it is shown that in addition to the effect of scattering during particle transport, the conditions at the substrate lead to selective growth of Y-Ba-Cu-O phases which, among other effects, strongly affects the growth rate. For example, the growth rate is more than three times larger for optimized parameters compared to the same set of parameters but at a 100 K lower substrate temperature. Stoichiometrically and structurally perfect films can be grown at low pressure (p < 10 Pa). However, the superconducting transition temperature of these films is reduced. The Tc reduction seems to be correlated with the c-axis length of YBa2Cu3O7-δ. Two possible explanations for the increased c-axis length and the correlated reduced transition temperature are discussed, i.e. reduced oxygen content and strong cation site disorder due to the heavy particle bombardment.
Pathways from marine protected area design and management to ecological success
2015-01-01
Using an international dataset compiled from 121 sites in 87 marine protected areas (MPAs) globally (Edgar et al., 2014), I assessed how various configurations of design and management conditions affected MPA ecological performance, measured in terms of fish species richness and biomass. The set-theoretic approach used Boolean algebra to identify pathways that combined up to five ‘NEOLI’ (No-take, Enforced, Old, Large, Isolated) conditions and that were sufficient for achieving positive, and negative, ecological outcomes. Ecological isolation was overwhelmingly the most important condition affecting ecological outcomes, but Old and Large were also important conditions for achieving high levels of biomass among large fishes (jacks, groupers, sharks). Solution coverage was uniformly low (<0.35) for all models of positive ecological performance, suggesting the presence of numerous other conditions and pathways to ecological success that did not involve the NEOLI conditions. Solution coverage was higher (>0.50) for negative results (i.e., the absence of high biomass) among the large commercially-exploited fishes, implying asymmetries in how MPAs may rebuild populations on the one hand and, on the other, protect against further decline. The results revealed complex interactions involving MPA design, implementation, and management conditions that affect MPA ecological performance. In general terms, the presence of no-take regulations and effective enforcement were insufficient to ensure MPA effectiveness on their own. Given the central role of ecological isolation in securing ecological benefits from MPAs, site selection in the design phase appears critical for success. PMID:26644975
Acoustic Enrichment of Extracellular Vesicles from Biological Fluids.
Ku, Anson; Lim, Hooi Ching; Evander, Mikael; Lilja, Hans; Laurell, Thomas; Scheding, Stefan; Ceder, Yvonne
2018-06-11
Extracellular vesicles (EVs) have emerged as a rich source of biomarkers providing diagnostic and prognostic information in diseases such as cancer. Large-scale investigations into the contents of EVs in clinical cohorts are warranted, but a major obstacle is the lack of a rapid, reproducible, efficient, and low-cost methodology to enrich EVs. Here, we demonstrate the applicability of an automated acoustic-based technique to enrich EVs, termed acoustic trapping. Using this technology, we have successfully enriched EVs from cell culture conditioned media, urine, and blood plasma from healthy volunteers. The acoustically trapped samples contained EVs ranging from exosomes to microvesicles in size and contained detectable levels of intravesicular microRNAs. Importantly, this method showed high reproducibility and yielded sufficient quantities of vesicles for downstream analysis. The enrichment could be obtained from a sample volume of 300 μL or less, equivalent to 30 min of enrichment time, depending on the sensitivity of downstream analysis. Taken together, acoustic trapping provides a rapid, automated, low-volume-compatible, and robust method to enrich EVs from biofluids. Thus, it may serve as a novel tool for EV enrichment from a large number of samples in a clinical setting with minimum sample preparation.
Mohammed, Ali I; Gritton, Howard J; Tseng, Hua-an; Bucklin, Mark E; Yao, Zhaojie; Han, Xue
2016-02-08
Advances in neurotechnology have been integral to the investigation of neural circuit function in systems neuroscience. Recent improvements in high-performance fluorescent sensors and scientific CMOS cameras enable optical imaging of neural networks at a much larger scale. While exciting technical advances demonstrate the potential of this technique, further improvements in data acquisition and analysis, especially those that allow effective processing of increasingly larger datasets, would greatly promote the application of optical imaging in systems neuroscience. Here we demonstrate the ability of wide-field imaging to capture the concurrent dynamic activity from hundreds to thousands of neurons over millimeters of brain tissue in behaving mice. This system allows the visualization of morphological details at a higher spatial resolution than has been previously achieved using similar functional imaging modalities. To analyze the expansive data sets, we developed software to facilitate rapid downstream data processing. Using this system, we show that a large fraction of anatomically distinct hippocampal neurons respond to discrete environmental stimuli associated with classical conditioning, and that the observed temporal dynamics of transient calcium signals are sufficient for exploring certain spatiotemporal features of large neural networks.
High-frequency signal and noise estimates of CSR GRACE RL04
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.
2012-12-01
A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins looked at here are near 1.0. Comparisons with the GLDAS hydrological model and high frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, due to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.
Radiative PQ breaking and the Higgs boson mass
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio
2015-06-01
The small and negative value of the Standard Model Higgs quartic coupling at high scales can be understood in terms of anthropic selection on a landscape where large and negative values are favored: most universes have a very short-lived electroweak vacuum and typical observers are in universes close to the corresponding metastability boundary. We provide a simple example of such a landscape with a Peccei-Quinn symmetry breaking scale generated through dimensional transmutation and supersymmetry softly broken at an intermediate scale. Large and negative contributions to the Higgs quartic are typically generated on integrating out the saxion field. Cancellations among these contributions are forced by the anthropic requirement of a sufficiently long-lived electroweak vacuum, determining the multiverse distribution for the Higgs quartic in a similar way to that of the cosmological constant. This leads to a statistical prediction of the Higgs boson mass that, for a wide range of parameters, yields the observed value within the 1σ statistical uncertainty of ˜ 5 GeV originating from the multiverse distribution. The strong CP problem is solved and single-component axion dark matter is predicted, with an abundance that can be understood from environmental selection. A more general setting for the Higgs mass prediction is discussed.
Scanning wave photopolymerization enables dye-free alignment patterning of liquid crystals
Hisano, Kyohei; Aizawa, Miho; Ishizu, Masaki; Kurata, Yosuke; Nakano, Wataru; Akamatsu, Norihisa; Barrett, Christopher J.; Shishido, Atsushi
2017-01-01
Hierarchical control of two-dimensional (2D) molecular alignment patterns over large areas is essential for designing high-functional organic materials and devices. However, even by the most powerful current methods, dye molecules that discolor and destabilize the materials need to be doped in, complicating the process. We present a dye-free alignment patterning technique, based on a scanning wave photopolymerization (SWaP) concept, that achieves a spatial light–triggered mass flow to direct molecular order using scanning light to propagate the wavefront. This enables one to generate macroscopic, arbitrary 2D alignment patterns in a wide variety of optically transparent polymer films from various polymerizable mesogens with sufficiently high birefringence (>0.1) merely by single-step photopolymerization, without alignment layers or polarized light sources. A set of 150,000 arrays of a radial alignment pattern with a size of 27.4 μm × 27.4 μm were successfully inscribed by SWaP, in which each individual pattern is smaller by a factor of 10^4 than that achievable by conventional photoalignment methods. This dye-free inscription of microscopic, complex alignment patterns over large areas provides a new pathway for designing higher-performance optical and mechanical devices. PMID:29152567
Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter
2014-01-13
Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow genome-wide screening. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this particular property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
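The final scoring step described here reduces to evaluating a candidate's MFE against the interpolated normal distribution of randomized-sequence MFEs. A minimal Python sketch of that step is shown below; the numeric values are invented placeholders, and in practice mean_mfe and sd_mfe would come from the pre-computed grid for the candidate's length and nucleotide composition.

```python
from scipy.stats import norm

def mfe_pvalue(mfe, mean_mfe, sd_mfe):
    """P-value of an MFE at least this low under the normal distribution fitted
    to MFEs of randomized sequences with the same nucleotide composition
    (lower MFE = more stable secondary structure)."""
    return norm.cdf(mfe, loc=mean_mfe, scale=sd_mfe)

# mean_mfe and sd_mfe would be interpolated from the pre-computed grid of
# random-sequence MFE distributions for the candidate's length and composition.
p = mfe_pvalue(mfe=-35.2, mean_mfe=-25.0, sd_mfe=4.0)
print(f"MFE-based P-value: {p:.3g}")
```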
Cooling system for continuous metal casting machines
Draper, Robert; Sumpman, Wayne C.; Baker, Robert J.; Williams, Robert S.
1988-01-01
A continuous metal caster cooling system is provided in which water is supplied in jets from a large number of small nozzles 19 against the inner surface of rim 13 at a temperature and with sufficient pressure that the velocity of the jets is sufficiently high that the mode of heat transfer is substantially by forced convection, the liquid being returned from the cooling chambers 30 through return pipes 25 distributed interstitially among the nozzles.
Cooling system for continuous metal casting machines
Draper, R.; Sumpman, W.C.; Baker, R.J.; Williams, R.S.
1988-06-07
A continuous metal caster cooling system is provided in which water is supplied in jets from a large number of small nozzles against the inner surface of rim at a temperature and with sufficient pressure that the velocity of the jets is sufficiently high that the mode of heat transfer is substantially by forced convection, the liquid being returned from the cooling chambers through return pipes distributed interstitially among the nozzles. 9 figs.
An Eulerian time filtering technique to study large-scale transient flow phenomena
NASA Astrophysics Data System (ADS)
Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric
2009-10-01
Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations, which are obscured by turbulence, making it impossible to identify them. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity record at each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, we examine not only the influence of the temporal filter but also parameters such as the cut-off frequency and the sampling frequency of the data. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and requires less measurement time compared to other extraction methods. Therefore, the technique is suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed, such as parametric studies.
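The essence of the ETF technique, non-causal (zero-phase) low-pass filtering of the velocity record at every spatial point, can be sketched in a few lines of Python. This is an illustrative implementation built on a Butterworth filter and scipy's filtfilt, not the authors' code; the filter order, cut-off frequency, and toy data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def eulerian_time_filter(u, fs, f_cut, order=4):
    """Zero-phase (non-causal) low-pass filtering of the velocity record at every
    spatial point, extracting the large-scale, low-frequency motion.

    u     : array of shape (nt, ny, nx), velocity samples in time at each point
    fs    : sampling frequency of the measurement [Hz]
    f_cut : cut-off frequency [Hz], chosen well below the turbulent frequencies
    """
    b, a = butter(order, f_cut / (fs / 2.0), btype="low")
    # filtfilt runs the filter forward and backward along the time axis,
    # so the extracted large-scale motion has no phase lag.
    return filtfilt(b, a, u, axis=0)

# Toy usage: a 10 Hz large-scale oscillation buried in noise, sampled at 1 kHz
t = np.arange(0.0, 1.0, 1e-3)
noise = 0.5 * np.random.default_rng(0).normal(size=(t.size, 4, 4))
u = np.sin(2 * np.pi * 10 * t)[:, None, None] + noise
u_large_scale = eulerian_time_filter(u, fs=1000.0, f_cut=30.0)
```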
Acoustic equations of state for simple lattice Boltzmann velocity sets.
Viggen, Erlend Magnus
2014-07-01
The lattice Boltzmann (LB) method typically uses an isothermal equation of state. This is not sufficient to simulate a number of acoustic phenomena where the equation of state cannot be approximated as linear and constant. However, it is possible to implement variable equations of state by altering the LB equilibrium distribution. For simple velocity sets with velocity components ξ_iα ∈ {-1, 0, 1} for all i, these equilibria necessarily cause error terms in the momentum equation. These error terms are shown to be either correctable or negligible at the cost of further weakening the compressibility. For the D1Q3 velocity set, such an equilibrium distribution is found and shown to be unique. Its sound propagation properties are found for both forced and free waves, with some generality beyond D1Q3. Finally, this equilibrium distribution is applied to a nonlinear acoustics simulation where both mechanisms of nonlinearity are simulated with good results. This represents an improvement on previous such simulations and proves that the compressibility of the method is still sufficiently strong even for nonlinear acoustics.
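For context, the conventional isothermal D1Q3 equilibrium that such modified equilibria generalize can be written as follows (this is the standard baseline form, not the paper's modified distribution):

\[
f_i^{\mathrm{eq}} = w_i \, \rho \left( 1 + \frac{\xi_i u}{c_s^2} + \frac{(\xi_i u)^2}{2 c_s^4} - \frac{u^2}{2 c_s^2} \right),
\qquad \xi_i \in \{-1, 0, 1\}, \quad w_0 = \tfrac{2}{3}, \; w_{\pm 1} = \tfrac{1}{6}, \quad c_s^2 = \tfrac{1}{3},
\]

which reproduces the isothermal pressure p = ρ c_s². Implementing a different equation of state p(ρ) amounts to altering this equilibrium while keeping its mass and momentum moments (ρ and ρu) fixed, which is where the error terms discussed in the abstract arise for such simple velocity sets.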
CMS results in the Combined Computing Readiness Challenge CCRC'08
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Bauerdick, L.; CMS Collaboration
2009-12-01
During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the Computing infrastructure for LHC data taking. Another set of major CMS tests called the Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - were also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at a sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, and with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed workflows - are presented and discussed.
Methods and apparatus of analyzing electrical power grid data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.
Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
Commutative semigroups of real and complex matrices [with use of the Jordan form]
NASA Technical Reports Server (NTRS)
Brown, D. R.
1974-01-01
The computation of divergence is studied. Covariance matrices to be analyzed admit a common diagonalization, or even triangulation. Sufficient conditions are given for such phenomena to take place; the arguments cover both real and complex matrices and are not restricted to Hermitian or other special forms. Specifically, it is shown to be sufficient that the matrices in question commute in order to admit a common triangulation. Several results hold in the case that the matrices in question form a closed and bounded set, rather than only in the finite case.
Utilization Bound of Non-preemptive Fixed Priority Schedulers
NASA Astrophysics Data System (ADS)
Park, Moonju; Chae, Jinseok
It is known that the schedulability of a non-preemptive task set with fixed priority can be determined in pseudo-polynomial time. However, since Rate Monotonic scheduling is not optimal for non-preemptive scheduling, the applicability of existing polynomial time tests that provide sufficient schedulability conditions, such as Liu and Layland's bound, is limited. This letter proposes a new sufficient condition for non-preemptive fixed priority scheduling that can be used for any fixed priority assignment scheme. It is also shown that the proposed schedulability test has a tighter utilization bound than existing test methods.
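To illustrate the style of polynomial-time sufficient tests referred to above, the sketch below checks the classic Liu and Layland utilization bound for preemptive rate-monotonic scheduling; the paper's new non-preemptive bound is not reproduced here, and the task set is an assumed example.

```python
# Hedged sketch of a utilization-bound schedulability test (classic preemptive
# Liu-Layland bound, shown only for context; not the letter's new condition).
def liu_layland_test(tasks):
    """tasks: list of (execution_time, period); returns True if the
    sufficient utilization bound for preemptive RM scheduling is met."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1.0)
    return utilization <= bound

# Illustrative task set: U = 0.6 <= 0.7798 for n = 3, so the test passes.
print(liu_layland_test([(1, 4), (2, 8), (1, 10)]))
```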
The prevalence of resonances among large-a transneptunian objects
NASA Astrophysics Data System (ADS)
Gladman, Brett; Volk, Kathryn; Van Laerhoven, Christa
2018-04-01
The detached population consists of transneptunian objects (TNOs) with large semimajor axes and sufficiently high perihelia (roughly q>38 au, but there is no simple cut). However, what constitutes 'large semimajor axis' has been, and continues to be, unclear. Once beyond the aphelia of the classical Kuiper Belt (which extends out to about 60 au), objects with semimajor axes from a=60-150 au can be detached, but there are a reasonable number of objects in this range known to be in mean-motion resonances with Neptune. Beyond a=150 au, however, it is a widely-held belief that resonances become `unimportant', and that a q>38 au cut (or sometimes q>50 au) with a>150 au isolates a set of large semimajor axis detached objects. However, once semimajor axes become this large, the orbit determination of an object discovered near perihelion becomes a much harder task than for low-a TNOs. Because small velocity differences near the perihelion of large-a orbits cause large changes in the fitted semimajor axis, extremely good and long-baseline astrometry is required to reduce the semimajor axis uncertainty to below the few tenths of an astronomical unit widths of mean-motion resonances. By carefully analyzing the astrometric data of all known large semimajor axis objects, we show that a very large fraction of the objects are in fact likely in high-order mean-motion resonances with Neptune. This prevalence of actual resonance with Neptune would imply that hypothesized planets are problematic, as they would remove the detached objects from these resonances. Instead, we favor a view in which the large-a population is the surviving remnant of a massive early scattering disk, whose surviving members are sculpted mostly by diffusive gravitational interactions with the four giant planets over the last four gigayears, but whose initial emplacement mechanism (in particular, the perihelion-lifting mechanism) is still unclear but of critical importance to the early Solar System's evolution.
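A worked illustration of the sensitivity discussed above: by Kepler's third law the orbital period scales as a^(3/2), so the period ratio to Neptune, which determines resonance membership, shifts noticeably for semimajor-axis changes of only a few tenths of an au. The values in the sketch below are assumed for illustration.

```python
# Hedged sketch: how a small semimajor-axis uncertainty shifts the period
# ratio to Neptune and hence the apparent resonance membership.
import numpy as np

a_neptune = 30.07                      # Neptune's semimajor axis [au]

def period_ratio(a_tno):
    """Orbital period of the TNO in units of Neptune's period (Kepler III)."""
    return (a_tno / a_neptune) ** 1.5

for a in (150.0, 150.3):               # an assumed 0.3 au uncertainty in the fit
    print(f"a = {a:6.1f} au -> P/P_Neptune = {period_ratio(a):.3f}")
# A few tenths of an au are enough to move an object into or out of a
# narrow high-order p:q resonance.
```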
Effect of H-wave polarization on laser radar detection of partially convex targets in random media.
El-Ocla, Hosam
2010-07-01
The performance of the laser radar cross section (LRCS) of conducting targets with large sizes is investigated numerically in free space and in random media. The LRCS is calculated using a boundary value method with beam wave incidence and H-wave polarization. The elements that contribute to the LRCS problem, including random medium strength, target configuration, and beam width, are considered. The effect of the creeping waves, stimulated by H-polarization, on the LRCS behavior is demonstrated. Target sizes of up to five wavelengths are considered; these are sufficiently larger than the beam width and large enough to represent fairly complex targets. Scatterers are assumed to have analytical partially convex contours with inflection points.
Twisted versus braided magnetic flux ropes in coronal geometry. II. Comparative behaviour
NASA Astrophysics Data System (ADS)
Prior, C.; Yeates, A. R.
2016-06-01
Aims: Sigmoidal structures in the solar corona are commonly associated with magnetic flux ropes whose magnetic field lines are twisted about a mutual axis. Their dynamical evolution is well studied, with sufficient twisting leading to large-scale rotation (writhing) and vertical expansion, possibly leading to ejection. Here, we investigate the behaviour of flux ropes whose field lines have more complex entangled/braided configurations. Our hypothesis is that this internal structure will inhibit the large-scale morphological changes. Additionally, we investigate the influence of the background field within which the rope is embedded. Methods: A technique for generating tubular magnetic fields with arbitrary axial geometry and internal structure, introduced in part I of this study, provides the initial conditions for resistive-MHD simulations. The tubular fields are embedded in a linear force-free background, and we consider various internal structures for the tubular field, including both twisted and braided topologies. These embedded flux ropes are then evolved using a 3D MHD code. Results: Firstly, in a background where twisted flux ropes evolve through the expected non-linear writhing and vertical expansion, we find that flux ropes with sufficiently braided/entangled interiors show no such large-scale changes. Secondly, embedding a twisted flux rope in a background field with a sigmoidal inversion line leads to eventual reversal of the large-scale rotation. Thirdly, in some cases a braided flux rope splits due to reconnection into two twisted flux ropes of opposing chirality - a phenomenon previously observed in cylindrical configurations. Conclusions: Sufficiently complex entanglement of the magnetic field lines within a flux rope can suppress large-scale morphological changes of its axis, with magnetic energy reduced instead through reconnection and expansion. The structure of the background magnetic field can significantly affect the changing morphology of a flux rope.
NASA Astrophysics Data System (ADS)
Siebenmorgen, R.; Voshchinnikov, N. V.; Bagnulo, S.; Cox, N. L. J.; Cami, J.; Peest, C.
2018-03-01
It is well known that the dust properties of the diffuse interstellar medium exhibit variations towards different sight-lines on a large scale. We have investigated the variability of the dust characteristics on a small scale, and from cloud-to-cloud. We use low-resolution spectro-polarimetric data obtained in the context of the Large Interstellar Polarisation Survey (LIPS) towards 59 sight-lines in the Southern Hemisphere, and we fit these data using a dust model composed of silicate and carbon particles with sizes from the molecular to the sub-micrometre domain. Large (≥6 nm) silicates of prolate shape account for the observed polarisation. For 32 sight-lines we complement our data set with UVES archive high-resolution spectra, which enable us to establish the presence of single-cloud or multiple-clouds towards individual sight-lines. We find that the majority of these 35 sight-lines intersect two or more clouds, while eight of them are dominated by a single absorbing cloud. We confirm several correlations between extinction and parameters of the Serkowski law with dust parameters, but we also find previously undetected correlations between these parameters that are valid only in single-cloud sight-lines. We find that interstellar polarisation from multiple-clouds is smaller than from single-cloud sight-lines, showing that the presence of a second or more clouds depolarises the incoming radiation. We find large variations of the dust characteristics from cloud-to-cloud. However, when we average a sufficiently large number of clouds in single-cloud or multiple-cloud sight-lines, we always retrieve similar mean dust parameters. The typical dust abundances of the single-cloud cases are [C]/[H] = 92 ppm and [Si]/[H] = 20 ppm.
Volume-change indicator for molding plastic
NASA Technical Reports Server (NTRS)
Heler, W. C.
1979-01-01
Monitor consisting of two concentric disks measures change in volume of charge during compression/displacement molding. Device enables operator to decide whether process pressure and temperature are set properly or whether sufficient material has been placed in mold.
29 CFR 1990.146 - Issues to be considered in the rulemaking.
Code of Federal Regulations, 2011 CFR
2011-07-01
... set forth in § 1990.103, including whether the scientific studies are reliable; (c) Whether the... and arguments that are submitted in accordance with § 1990.145 are sufficient to warrant amendment of...
29 CFR 1990.146 - Issues to be considered in the rulemaking.
Code of Federal Regulations, 2012 CFR
2012-07-01
... set forth in § 1990.103, including whether the scientific studies are reliable; (c) Whether the... and arguments that are submitted in accordance with § 1990.145 are sufficient to warrant amendment of...
29 CFR 1990.146 - Issues to be considered in the rulemaking.
Code of Federal Regulations, 2014 CFR
2014-07-01
... set forth in § 1990.103, including whether the scientific studies are reliable; (c) Whether the... and arguments that are submitted in accordance with § 1990.145 are sufficient to warrant amendment of...
29 CFR 1990.146 - Issues to be considered in the rulemaking.
Code of Federal Regulations, 2013 CFR
2013-07-01
... set forth in § 1990.103, including whether the scientific studies are reliable; (c) Whether the... and arguments that are submitted in accordance with § 1990.145 are sufficient to warrant amendment of...
A strategy to improve priority setting in developing countries.
Kapiriri, Lydia; Martin, Douglas K
2007-09-01
Because the demand for health services outstrips the available resources, priority setting is one of the most difficult issues faced by health policy makers, particularly those in developing countries. Priority setting in developing countries is fraught with uncertainty due to lack of credible information, weak priority setting institutions, and unclear priority setting processes. Efforts to improve priority setting in these contexts have focused on providing information and tools. In this paper we argue that priority setting is a value laden and political process, and although important, the available information and tools are not sufficient to address the priority setting challenges in developing countries. Additional complementary efforts are required. Hence, a strategy to improve priority setting in developing countries should also include: (i) capturing current priority setting practices, (ii) improving the legitimacy and capacity of institutions that set priorities, and (iii) developing fair priority setting processes.
Antiretroviral Therapy for HIV-2 Infection: Recommendations for Management in Low-Resource Settings
Peterson, Kevin; Jallow, Sabelle; Rowland-Jones, Sarah L.; de Silva, Thushan I.
2011-01-01
HIV-2 contributes approximately a third to the prevalence of HIV in West Africa and is present in significant amounts in several low-income countries outside of West Africa with historical ties to Portugal. It complicates HIV diagnosis, requiring more expensive and technically demanding testing algorithms. Natural polymorphisms and patterns in the development of resistance to antiretrovirals are reviewed, along with their implications for antiretroviral therapy. Nonnucleoside reverse transcriptase inhibitors, crucial in standard first-line regimens for HIV-1 in many low-income settings, have no effect on HIV-2. Nucleoside analogues alone are not sufficiently potent to achieve durable virologic control. Some protease inhibitors, in particular those without ritonavir boosting, are not sufficiently effective against HIV-2. Following a review of the available evidence and taking the structure and challenges of antiretroviral care in West Africa into consideration, the authors make recommendations and highlight the needs of special populations. PMID:21490779
Diversity and Community Can Coexist.
Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael
2016-03-01
We examine the (in)compatibility of diversity and sense of community by means of agent-based models based on the well-known Schelling model of residential segregation and Axelrod model of cultural dissemination. We find that diversity and highly clustered social networks, on the assumptions of social tie formation based on spatial proximity and homophily, are incompatible when agent features are immutable, and this holds even for multiple independent features. We then include both mutable and immutable features in a model that integrates the Schelling and Axelrod models and find that the incompatibility persists even for multiple independent features. However, this incompatibility breaks down when cultural diversity can be sufficiently large, at which point diversity and clustering need not be negatively correlated. This implies that segregation based on immutable characteristics such as race can possibly be overcome by sufficient similarity on mutable characteristics based on culture, which are subject to a process of social influence, provided a sufficiently large "scope of cultural possibilities" exists. © Society for Community Research and Action 2016.
TCR-engineered, customized, antitumor T cells for cancer immunotherapy: advantages and limitations.
Chhabra, Arvind
2011-01-05
The clinical outcome of the traditional adoptive cancer immunotherapy approaches involving the administration of donor-derived immune effectors, expanded ex vivo, has not met expectations. This could be attributed, in part, to the lack of sufficient high-avidity antitumor T-cell precursors in most cancer patients, poor immunogenicity of cancer cells, and the technological limitations to generate a sufficiently large number of tumor antigen-specific T cells. In addition, the host immune regulatory mechanisms and immune homeostasis mechanisms, such as activation-induced cell death (AICD), could further limit the clinical efficacy of the adoptively administered antitumor T cells. Since generation of a sufficiently large number of potent antitumor immune effectors for adoptive administration is critical for the clinical success of this approach, recent advances towards generating customized donor-specific antitumor-effector T cells by engrafting human peripheral blood-derived T cells with a tumor-associated antigen-specific transgenic T-cell receptor (TCR) are quite interesting. This manuscript provides a brief overview of the TCR engineering-based cancer immunotherapy approach, its advantages, and the current limitations.
Generic pure quantum states as steady states of quasi-local dissipative dynamics
NASA Astrophysics Data System (ADS)
Karuvade, Salini; Johnson, Peter D.; Ticozzi, Francesco; Viola, Lorenza
2018-04-01
We investigate whether a generic pure state on a multipartite quantum system can be the unique asymptotic steady state of locality-constrained purely dissipative Markovian dynamics. In the tripartite setting, we show that the problem is equivalent to characterizing the solution space of a set of linear equations and establish that the set of pure states obeying the above property has either measure zero or measure one, solely depending on the subsystems’ dimension. A complete analytical characterization is given when the central subsystem is a qubit. In the N-partite case, we provide conditions on the subsystems’ size and the nature of the locality constraint, under which random pure states cannot be quasi-locally stabilized generically. Also, allowing for the possibility to approximately stabilize entangled pure states that cannot be exact steady states in settings where stabilizability is generic, our results offer insights into the extent to which random pure states may arise as unique ground states of frustration-free parent Hamiltonians. We further argue that, to a high probability, pure quantum states sampled from a t-design enjoy the same stabilizability properties of Haar-random ones as long as suitable dimension constraints are obeyed and t is sufficiently large. Lastly, we demonstrate a connection between the tasks of quasi-local state stabilization and unique state reconstruction from local tomographic information, and provide a constructive procedure for determining a generic N-partite pure state based only on knowledge of the support of any two of the reduced density matrices of about half the parties, improving over existing results.
Evaluation of Existing Methods for Human Blood mRNA Isolation and Analysis for Large Studies
Meyer, Anke; Paroni, Federico; Günther, Kathrin; Dharmadhikari, Gitanjali; Ahrens, Wolfgang; Kelm, Sørge; Maedler, Kathrin
2016-01-01
Aims: Prior to implementing gene expression analyses from blood in a larger cohort study, an evaluation to set up a reliable and reproducible method is mandatory but challenging due to the specific characteristics of the samples as well as their collection methods. In this pilot study we optimized a combination of blood sampling and RNA isolation methods and present reproducible gene expression results from human blood samples. Methods: The established PAXgene™ blood collection method (Qiagen) was compared with the more recent Tempus™ collection and storing system. RNA from blood samples collected by both systems was extracted on columns with the corresponding Norgen and PAX RNA extraction kits. RNA quantity and quality were compared photometrically, with Ribogreen, and by real-time PCR analyses of various reference genes (PPIA, β-ACTIN and TUBULIN) and, as an example, of SIGLEC-7. Results: Combining different sampling methods and extraction kits caused strong variations in gene expression. The use of the PAXgene™ and Tempus™ collection systems resulted in RNA of good quality and quantity for the respective RNA isolation system. No large inter-donor variations could be detected for either system. However, it was not possible to extract sufficient RNA of good quality with the PAXgene™ RNA extraction system from samples collected in Tempus™ collection tubes. Comparing only the Norgen RNA extraction methods, RNA from blood collected either by the Tempus™ or the PAXgene™ collection system delivered sufficient amount and quality of RNA, but the Tempus™ collection delivered a higher RNA concentration than the PAXgene™ collection system. The established PreAnalytiX PAXgene™ RNA extraction system together with the PAXgene™ blood collection system showed the lowest CT-values, i.e. the highest RNA concentration of good quality. Expression levels of all tested genes were stable and reproducible. Conclusions: This study confirms that it is not possible to mix or change sampling or extraction strategies during the same study because of large variations in RNA yield and expression levels. PMID:27575051
Local short-duration precipitation extremes in Sweden: observations, forecasts and projections
NASA Astrophysics Data System (ADS)
Olsson, Jonas; Berg, Peter; Simonsson, Lennart
2015-04-01
Local short-duration precipitation extremes (LSPEs) are a key driver of hydrological hazards, notably in steep catchments with thin soils and in urban environments. The triggered floodings, landslides, etc., have large consequences for society in terms of both economy and health. Accurate estimation of LSPEs on climatological time-scales (past, present, future) as well as in real time is thus of great importance for improved hydrological predictions and for the design of constructions and infrastructure affected by hydrological fluxes. Analysis of LSPEs is, however, associated with various limitations and uncertainties. These are to a large degree associated with the small-scale nature of the meteorological processes behind LSPEs and the associated requirements on observation sensors as well as model descriptions. Some examples of causes for the limitations involved are given in the following.
- Observations: High-resolution data sets available for LSPE analyses are often limited to either relatively long series from one or a few stations or relatively short series from larger station networks. Radar data have excellent resolution in both time and space, but the estimated local precipitation intensity is still highly uncertain. New and promising techniques (e.g. microwave links) are still in their infancy.
- Weather forecasts (short-range): Although forecasts with the required spatial resolution for potential generation of LSPEs (around 2-4 km) are becoming operationally available, the actual forecast precision of LSPEs is largely unknown. Forecasted LSPEs may be displaced in time or, more critically, in space, which strongly affects the possibility to assess hydrological risk.
- Climate projections: The spatial resolution of the current RCM generation (around 25 km) is not sufficient for a proper description of LSPEs. Statistical post-processing (i.e. downscaling) is required, which adds substantial uncertainty to the final result. Ensemble generation of sufficiently high-resolution RCM projections is not yet computationally feasible.
In this presentation, examples of recent research in Sweden related to these aspects will be given, with some main findings shown and discussed. Finally, some ongoing and future research directions will be outlined (the former hopefully accompanied by some brand-new results).
Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?
Tajbakhsh, Nima; Shin, Jae Y; Gurudu, Suryakanth R; Hurst, R Todd; Kendall, Christopher B; Gotway, Michael B; Jianming Liang
2016-05-01
Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.
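A minimal sketch of layer-wise fine-tuning in the spirit described above, assuming a PyTorch/torchvision ResNet-18 backbone, a two-class task, and a choice to unfreeze only the deepest convolutional block; none of these are the specific architectures or settings used in the paper.

```python
# Hedged sketch of layer-wise fine-tuning of a pre-trained CNN.
import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 2                                    # e.g., polyp vs. no polyp (assumed)
model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained on natural images

# Freeze everything, then unfreeze only the deepest convolutional block:
for p in model.parameters():
    p.requires_grad = False
for p in model.layer4.parameters():
    p.requires_grad = True

# Replace the classification head for the new task (always trained):
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize only the unfrozen parameters:
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```

Deeper or shallower tuning is obtained simply by unfreezing more or fewer blocks before training.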
Van Landeghem, Sofie; De Bodt, Stefanie; Drebert, Zuzanna J.; Inzé, Dirk; Van de Peer, Yves
2013-01-01
Despite the availability of various data repositories for plant research, a wealth of information currently remains hidden within the biomolecular literature. Text mining provides the necessary means to retrieve these data through automated processing of texts. However, only recently has advanced text mining methodology been implemented with sufficient computational power to process texts at a large scale. In this study, we assess the potential of large-scale text mining for plant biology research in general and for network biology in particular using a state-of-the-art text mining system applied to all PubMed abstracts and PubMed Central full texts. We present extensive evaluation of the textual data for Arabidopsis thaliana, assessing the overall accuracy of this new resource for usage in plant network analyses. Furthermore, we combine text mining information with both protein–protein and regulatory interactions from experimental databases. Clusters of tightly connected genes are delineated from the resulting network, illustrating how such an integrative approach is essential to grasp the current knowledge available for Arabidopsis and to uncover gene information through guilt by association. All large-scale data sets, as well as the manually curated textual data, are made publicly available, hereby stimulating the application of text mining data in future plant biology studies. PMID:23532071
Brader, J M; Siebenbürger, M; Ballauff, M; Reinheimer, K; Wilhelm, M; Frey, S J; Weysser, F; Fuchs, M
2010-12-01
Using a combination of theory, experiment, and simulation we investigate the nonlinear response of dense colloidal suspensions to large amplitude oscillatory shear flow. The time-dependent stress response is calculated using a recently developed schematic mode-coupling-type theory describing colloidal suspensions under externally applied flow. For finite strain amplitudes the theory generates a nonlinear response, characterized by significant higher harmonic contributions. An important feature of the theory is the prediction of an ideal glass transition at sufficiently strong coupling, which is accompanied by the discontinuous appearance of a dynamic yield stress. For the oscillatory shear flow under consideration we find that the yield stress plays an important role in determining the nonlinearity of the time-dependent stress response. Our theoretical findings are strongly supported by both large amplitude oscillatory experiments (with Fourier transform rheology analysis) on suspensions of thermosensitive core-shell particles dispersed in water and Brownian dynamics simulations performed on a two-dimensional binary hard-disk mixture. In particular, theory predicts nontrivial values of the exponents governing the final decay of the storage and loss moduli as a function of strain amplitude which are in good agreement with both simulation and experiment. A consistent set of parameters in the presented schematic model jointly describes the linear moduli, the nonlinear flow curves, and the large amplitude oscillatory spectroscopy.
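To illustrate the Fourier transform rheology analysis mentioned above, the sketch below extracts the relative third-harmonic intensity I3/I1 from a stress time series; the synthetic signal, excitation frequency, and sampling rate are assumptions.

```python
# Hedged sketch of Fourier-transform rheology: the ratio I3/I1 of the third
# to the fundamental harmonic quantifies the nonlinearity of the response.
import numpy as np

f1 = 1.0                                   # excitation frequency [Hz] (assumed)
fs = 200.0                                 # sampling rate [Hz] (assumed)
t = np.arange(0.0, 20.0, 1.0 / fs)         # integer number of cycles

# Synthetic nonlinear stress response: fundamental plus a weak third harmonic
stress = np.sin(2 * np.pi * f1 * t) + 0.05 * np.sin(2 * np.pi * 3 * f1 * t)

spectrum = np.abs(np.fft.rfft(stress))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

I1 = spectrum[np.argmin(np.abs(freqs - f1))]
I3 = spectrum[np.argmin(np.abs(freqs - 3 * f1))]
print(f"I3/I1 = {I3 / I1:.3f}")            # ~0.05 for this synthetic signal
```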
Accurate prediction of personalized olfactory perception from large-scale chemoinformatic features.
Li, Hongyang; Panwar, Bharat; Omenn, Gilbert S; Guan, Yuanfang
2018-02-01
The olfactory stimulus-percept problem has been studied for more than a century, yet it is still hard to precisely predict the odor given the large-scale chemoinformatic features of an odorant molecule. A major challenge is that the perceived qualities vary greatly among individuals due to different genetic and cultural backgrounds. Moreover, the combinatorial interactions between multiple odorant receptors and diverse molecules significantly complicate the olfaction prediction. Many attempts have been made to establish structure-odor relationships for intensity and pleasantness, but no models are available to predict the personalized multi-odor attributes of molecules. In this study, we describe our winning algorithm for predicting individual and population perceptual responses to various odorants in the DREAM Olfaction Prediction Challenge. We find that a random forest model consisting of multiple decision trees is well suited to this prediction problem, given the large feature spaces and high variability of perceptual ratings among individuals. Integrating both population and individual perceptions into our model effectively reduces the influence of noise and outliers. By analyzing the importance of each chemical feature, we find that a small set of low- and nondegenerative features is sufficient for accurate prediction. Our random forest model successfully predicts personalized odor attributes of structurally diverse molecules. This model together with the top discriminative features has the potential to extend our understanding of olfactory perception mechanisms and provide an alternative for rational odorant design.
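A minimal sketch of the modelling approach described above, assuming synthetic descriptors and ratings in place of the DREAM challenge data: fit a random forest regressor and rank chemical features by importance.

```python
# Hedged sketch: random-forest regression of a perceptual rating from
# chemoinformatic descriptors, with feature-importance ranking.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))             # 200 molecules x 50 descriptors (assumed)
y = X[:, 0] * 2.0 + X[:, 3] + rng.normal(scale=0.5, size=200)   # synthetic "pleasantness"

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

# A small subset of descriptors typically carries most of the predictive signal:
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most informative descriptors:", top)
```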
NASA Astrophysics Data System (ADS)
Ng, C. S.; Bhattacharjee, A.
1996-08-01
A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1995)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (pxx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (pxxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluations over a large ensemble of initial conditions demonstrate that for fixed total energy, pxxxx is predominantly positive with the average value growing with the number of modes.
Governance of the International Linear Collider Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, B.; /Oxford U.; Barish, B.
Governance models for the International Linear Collider Project are examined in the light of experience from similar international projects around the world. Recommendations for one path which could be followed to realize the ILC successfully are outlined. The International Linear Collider (ILC) is a unique endeavour in particle physics; fully international from the outset, it has no 'host laboratory' to provide infrastructure and support. The realization of this project therefore presents unique challenges, in scientific, technical and political arenas. This document outlines the main questions that need to be answered if the ILC is to become a reality. It describes the methodology used to harness the wisdom displayed and lessons learned from current and previous large international projects. From this basis, it suggests both general principles and outlines a specific model to realize the ILC. It recognizes that there is no unique model for such a laboratory and that there are often several solutions to a particular problem. Nevertheless it proposes concrete solutions that the authors believe are currently the best choices in order to stimulate discussion and catalyze proposals as to how to bring the ILC project to fruition. The ILC Laboratory would be set up by international treaty and be governed by a strong Council to whom a Director General and an associated Directorate would report. Council would empower the Director General to give strong management to the project. It would take its decisions in a timely manner, giving appropriate weight to the financial contributions of the member states. The ILC Laboratory would be set up for a fixed term, capable of extension by agreement of all the partners. The construction of the machine would be based on a Work Breakdown Structure and value engineering and would have a common cash fund sufficiently large to allow the management flexibility to optimize the project's construction. Appropriate contingency, clearly apportioned at both a national and global level, is essential if the project is to be realised. Finally, models for running costs and decommissioning at the conclusion of the ILC project are proposed. This document represents an interim report of the bodies and individuals studying these questions inside the structure set up and supervised by the International Committee for Future Accelerators (ICFA). It represents a request for comment to the international community in all relevant disciplines, scientific, technical and most importantly, political. Many areas require further study and some, in particular the site selection process, have not yet progressed sufficiently to be addressed in detail in this document. Discussion raised by this document will be vital in framing the final proposals due to be published in 2012 in the Technical Design Report being prepared by the Global Design Effort of the ILC.
Assays for the activities of polyamine biosynthetic enzymes using intact tissues
Rakesh Minocha; Stephanie Long; Hisae Maki; Subhash C. Minocha
1999-01-01
Traditionally, most enzyme assays utilize homogenized cell extracts with or without dialysis. Homogenization and centrifugation of large numbers of samples for screening of mutants and transgenic cell lines is quite cumbersome and generally requires sufficiently large amounts (hundreds of milligrams) of tissue. However, in situations where the tissue is available in...
Monitoring conservation success in a large oak woodland landscape
Rich Reiner; Emma Underwood; John-O Niles
2002-01-01
Monitoring is essential in understanding the success or failure of a conservation project and provides the information needed to conduct adaptive management. Although there is a large body of literature on monitoring design, it fails to provide sufficient information to practitioners on how to organize and apply monitoring when implementing landscape-scale conservation...
Three sets of crystallographic sub-planar structures in quartz formed by tectonic deformation
NASA Astrophysics Data System (ADS)
Derez, Tine; Pennock, Gill; Drury, Martyn; Sintubin, Manuel
2016-05-01
In quartz, multiple sets of fine planar deformation microstructures that have specific crystallographic orientations parallel to planes with low Miller-Bravais indices are commonly considered as shock-induced planar deformation features (PDFs) diagnostic of shock metamorphism. Using polarized light microscopy, we demonstrate that up to three sets of tectonically induced sub-planar fine extinction bands (FEBs), sub-parallel to the basal, γ, ω, and π crystallographic planes, are common in vein quartz in low-grade tectonometamorphic settings. We conclude that the observation of multiple (2-3) sets of fine scale, closely spaced, crystallographically controlled, sub-planar microstructures is not sufficient to unambiguously distinguish PDFs from tectonic FEBs.
Solving the critical thermal bowing in 3C-SiC/Si(111) by a tilting Si pillar architecture
NASA Astrophysics Data System (ADS)
Albani, Marco; Marzegalli, Anna; Bergamaschini, Roberto; Mauceri, Marco; Crippa, Danilo; La Via, Francesco; von Känel, Hans; Miglio, Leo
2018-05-01
The exceptionally large thermal strain in few-micrometers-thick 3C-SiC films on Si(111), which causes severe wafer bending and cracking, is demonstrated, both by simulations and by preliminary experiments, to be elastically quenched by patterning the substrate into finite arrays of Si micro-pillars with aspect ratios sufficiently large to allow lateral pillar tilting. In suspended SiC patches, the mechanical problem is addressed by the finite element method: both the strain relaxation and the wafer curvature are calculated for different pillar heights, array sizes, and film thicknesses. Patches as large as required by power electronic devices (500-1000 μm in size) show a remarkable residual strain in the central area, unless the pillar aspect ratio is made sufficiently large to allow the peripheral pillars to accommodate the full film retraction. A sublinear relationship between the pillar aspect ratio and the patch size, guaranteeing a minimal curvature radius as required for wafer processing and micro-crack prevention, is shown to be valid for any heteroepitaxial system.
Large Angle Transient Dynamics (LATDYN) user's manual
NASA Technical Reports Server (NTRS)
Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.
1991-01-01
A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.
System for producing a uniform rubble bed for in situ processes
Galloway, T.R.
1983-07-05
A method and a cutter are disclosed for producing a large cavity filled with a uniform bed of rubblized oil shale or other material, for in situ processing. A raise drill head has a hollow body with a generally circular base and sloping upper surface. A hollow shaft extends from the hollow body. Cutter teeth are mounted on the upper surface of the body and relatively small holes are formed in the body between the cutter teeth. Relatively large peripheral flutes around the body allow material to drop below the drill head. A pilot hole is drilled into the oil shale deposit. The pilot hole is reamed into a large diameter hole by means of a large diameter raise drill head or cutter to produce a cavity filled with rubble. A flushing fluid, such as air, is circulated through the pilot hole during the reaming operation to remove fines through the raise drill, thereby removing sufficient material to create sufficient void space, and allowing the larger particles to fill the cavity and provide a uniform bed of rubblized oil shale. 4 figs.
Targeted stock identification using multilocus genotype 'familyprinting'
Letcher, B.H.; King, T.L.
1999-01-01
We present an approach to stock identification of small, targeted populations that uses multilocus microsatellite genotypes of individual mating adults to uniquely identify first- and second-generation offspring in a mixture. We call the approach 'familyprinting'; unlike DNA fingerprinting, where tissue samples of individuals are matched, offspring from various families are assigned to pairs of parents or sets of four grandparents with known genotypes. The basic unit of identification is the family, but families can be nested within a variety of stock units ranging from naturally reproducing groups of fish in a small tributary or pond from which mating adults can be sampled to large or small collections of families produced in hatcheries and stocked in specific locations. We show that, with as few as seven alleles per locus using four loci without error, first-generation offspring can be uniquely assigned to the correct family. For second-generation applications in a hatchery, more alleles per locus (10) and more loci (10) are required for correct assignment of all offspring to the correct set of grandparents. Using microsatellite DNA variation from an Atlantic salmon (Salmo salar) restoration river (Connecticut River, USA), we also show that this population contains sufficient genetic diversity in sea-run returns for 100% correct first-generation assignment and 97% correct second-generation assignment using 14 loci. We are currently using first- and second-generation familyprinting in this population with the ultimate goal of identifying the stocking tributary. In addition to within-river familyprinting, there also appears to be sufficient genetic diversity within and between Atlantic salmon populations for identification of 'familyprinted' fish in a mixture of multiple populations. We also suggest that second-generation familyprinting with multiple populations may provide a tool for examining stock structure. Familyprinting with microsatellite DNA markers is a viable method for identification of offspring of randomly mating adults from small, targeted stocks and should provide a useful addition to current mixed stock analyses with genetic markers.
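The first-generation assignment step can be illustrated with a simple Mendelian compatibility check: an offspring is consistent with a candidate parent pair if, at every locus, one allele can be inherited from each parent. The genotypes below are invented for illustration and omit genotyping-error handling.

```python
# Hedged sketch of the core first-generation 'familyprinting' check.
def compatible(offspring, mother, father):
    """Each genotype is a list of (allele1, allele2) tuples, one per locus."""
    for (o1, o2), mom, dad in zip(offspring, mother, father):
        if not ((o1 in mom and o2 in dad) or (o2 in mom and o1 in dad)):
            return False
    return True

mother    = [(120, 124), (88, 90)]
father    = [(118, 122), (90, 94)]
offspring = [(124, 118), (88, 94)]
print(compatible(offspring, mother, father))   # True: one allele per parent at each locus
```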
Acoustic Scattering Models of Zooplankton and Microstructure
1997-09-30
shelled (gastropods), and gas-bearing (siphonophores)). 5) LABORATORY EXPERIMENTATION: ZOOPLANKTON. An extensive set of laboratory measurements on the...zooplankton (siphonophores and pteropods) that have high enough target strengths and occur in sufficiently high numbers that they could interfere with
47 CFR 1.1308 - Consideration of environmental assessments (EAs); findings of no significant impact.
Code of Federal Regulations, 2011 CFR
2011-10-01
... which shall explain the environmental consequences of the proposal and set forth sufficient analysis for the Bureau or the Commission to reach a determination that the proposal will or will not have a...
Children’s Environmental Health 2005 - A Summary of EPA Activities
Children may not be sufficiently protected by regulatory standards set based on risks to adults. EPA has forged partnerships and taken steps to protect children's health from contaminants and pollutants in air, drinking water, and food.
Kiraz, Nuri; Oz, Yasemin; Aslan, Huseyin; Erturan, Zayre; Ener, Beyza; Akdagli, Sevtap Arikan; Muslumanoglu, Hamza; Cetinkaya, Zafer
2015-10-01
Although conventional identification of pathogenic fungi is based on a combination of tests evaluating their morphological and biochemical characteristics, these tests can fail to identify less common species or to differentiate closely related species. In addition, they are time consuming, labour-intensive and require experienced personnel. We evaluated the feasibility and sufficiency of DNA extraction by Whatman FTA filter matrix technology and DNA sequencing of the D1-D2 region of the large ribosomal subunit gene for identification of 21 yeast and 160 mould clinical isolates in our clinical mycology laboratory. While the yeast isolates were identified at species level with 100% homology, 102 (63.75%) clinically important mould isolates were identified at species level and 56 (35%) isolates at genus level against fungal sequences in DNA databases, and two (1.25%) isolates could not be identified. Consequently, Whatman FTA filter matrix technology was a useful method for extraction of fungal DNA: extremely rapid, practical and successful. The sequence analysis strategy for the D1-D2 region of the large ribosomal subunit gene was found to be sufficient for identification to genus level for most clinical fungi. However, identification to species level, and especially the discrimination of closely related species, may require additional analysis. © 2015 Blackwell Verlag GmbH.
Proton velocity ring-driven instabilities and their dependence on the ring speed: Linear theory
NASA Astrophysics Data System (ADS)
Min, Kyungguk; Liu, Kaijun; Gary, S. Peter
2017-08-01
Linear dispersion theory is used to study the Alfvén-cyclotron, mirror and ion Bernstein instabilities driven by a tenuous (1%) warm proton ring velocity distribution with a ring speed, vr, varying between 2vA and 10vA, where vA is the Alfvén speed. Relatively cool background protons and electrons are assumed. The modeled ring velocity distributions are unstable to both the Alfvén-cyclotron and ion Bernstein instabilities whose maximum growth rates are roughly a linear function of the ring speed. The mirror mode, which has real frequency ωr=0, becomes the fastest growing mode for sufficiently large vr/vA. The mirror and Bernstein instabilities have maximum growth at propagation oblique to the background magnetic field and become more field-aligned with an increasing ring speed. Considering its largest growth rate, the mirror mode, in addition to the Alfvén-cyclotron mode, can cause pitch angle diffusion of the ring protons when the ring speed becomes sufficiently large. Moreover, because the parallel phase speed, v∥ph, becomes sufficiently small relative to vr, the low-frequency Bernstein waves can also aid the pitch angle scattering of the ring protons for large vr. Potential implications of including these two instabilities at oblique propagation on heliospheric pickup ion dynamics are discussed.
Second-order optimality conditions for problems with C1 data
NASA Astrophysics Data System (ADS)
Ginchev, Ivan; Ivanov, Vsevolod I.
2008-04-01
In this paper we obtain second-order optimality conditions of Karush-Kuhn-Tucker and Fritz John type for a problem with inequality constraints and a set constraint in nonsmooth settings, using second-order directional derivatives. In the necessary conditions we suppose that the objective function and the active constraints are continuously differentiable, but their gradients are not necessarily locally Lipschitz. In the sufficient conditions for a global minimum we assume that the objective function is differentiable and second-order pseudoconvex at the candidate point, a notion introduced by the authors [I. Ginchev, V.I. Ivanov, Higher-order pseudoconvex functions, in: I.V. Konnov, D.T. Luc, A.M. Rubinov (Eds.), Generalized Convexity and Related Topics, in: Lecture Notes in Econom. and Math. Systems, vol. 583, Springer, 2007, pp. 247-264], and that the constraints are differentiable and quasiconvex at that point. In the sufficient conditions for an isolated local minimum of order two we suppose that the problem belongs to the class C1,1. We show that they do not hold for C1 problems which are not C1,1. Finally, a new notion, the parabolic local minimum, is defined and applied to extend the sufficient conditions for an isolated local minimum from problems with C1,1 data to problems with C1 data.
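For comparison, the classical smooth (C^2) second-order sufficient condition that the paper relaxes to C^{1,1} and C^1 data can be stated as follows; this is standard textbook background, not the paper's result.

```latex
% Classical C^2 second-order sufficient condition (background only):
\text{If } (\bar{x},\bar{\lambda}) \text{ is a KKT pair and }
d^{\top}\,\nabla^2_{xx}L(\bar{x},\bar{\lambda})\,d > 0
\text{ for all } d \neq 0 \text{ in the critical cone at } \bar{x},
\text{ then } \bar{x} \text{ is a strict local minimizer.}
```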
Is health workforce sustainability in Australia and New Zealand a realistic policy goal?
Buchan, James M; Naccarella, Lucio; Brooks, Peter M
2011-05-01
This paper assesses what health workforce 'sustainability' might mean for Australia and New Zealand, given the policy direction set out in the World Health Organization draft code on international recruitment of health workers. The governments in both countries have in the past made policy statements about the desirability of health workforce 'self-sufficiency', but OECD data show that both have a high level of dependence on internationally recruited health professionals relative to most other OECD countries. The paper argues that if a target of 'self-sufficiency' or sustainability were to be based on meeting health workforce requirements from home based training, both Australia and New Zealand fall far short of this measure, and continue to be active recruiters. The paper stresses that there is no common agreed definition of what health workforce 'self-sufficiency', or 'sustainability' is in practice, and that without an agreed definition it will be difficult for policy-makers to move the debate on to reaching agreement and possibly setting measurable targets or timelines for achievement. The paper concludes that any policy decisions related to health workforce sustainability will also have to taken in the context of a wider community debate on what is required of a health system and how is it to be funded.
Barnes, S.-J.; Zientek, M.L.; Severson, M.J.
1997-01-01
The tectonic setting of intraplate magmas, typically a plume intersecting a rift, is ideal for the development of Ni - Cu - platinum-group element-bearing sulphides. The plume transports metal-rich magmas close to the mantle - crust boundary. The interaction of the rift and plume permits rapid transport of the magma into the crust, thus ensuring that no sulphides are lost from the magma en route to the crust. The rift may contain sediments which could provide the sulphur necessary to bring about sulphide saturation in the magmas. The plume provides large volumes of mafic magma; thus any sulphides that form can collect metals from a large volume of magma and consequently the sulphides will be metal rich. The large volume of magma provides sufficient heat to release large quantities of S from the crust, thus providing sufficient S to form a large sulphide deposit. The composition of the sulphides varies on a number of scales: (i) there is a variation between geographic areas, in which sulphides from the Noril'sk - Talnakh area are the richest in metals and those from the Muskox intrusion are poorest in metals; (ii) there is a variation between textural types of sulphides, in which disseminated sulphides are generally richer in metals than the associated massive and matrix sulphides; and (iii) the massive and matrix sulphides show a much wider range of compositions than the disseminated sulphides, and on the basis of their Ni/Cu ratio the massive and matrix sulphides can be divided into Cu rich and Fe rich. The Cu-rich sulphides are also enriched in Pt, Pd, and Au; in contrast, the Fe-rich sulphides are enriched in Fe, Os, Ir, Ru, and Rh. Nickel concentrations are similar in both. Differences in the composition between the sulphides from different areas may be attributed to a combination of differences in composition of the silicate magma from which the sulphides segregated and differences in the ratio of silicate to sulphide liquid (R factors). The higher metal content of the disseminated sulphides relative to the massive and matrix sulphides may be due to the fact that the disseminated sulphides equilibrated with a larger volume of magma than massive and matrix sulphides. The difference in composition between the Cu- and Fe-rich sulphides may be the result of the fractional crystallization of monosulphide solid solution from a sulphide liquid, with the Cu-rich sulphides representing the liquid and the Fe-rich sulphides representing the cumulate.
OrthoSelect: a protocol for selecting orthologous groups in phylogenomics.
Schreiber, Fabian; Pick, Kerstin; Erpenbeck, Dirk; Wörheide, Gert; Morgenstern, Burkhard
2009-07-16
Phylogenetic studies using expressed sequence tags (EST) are becoming a standard approach to answer evolutionary questions. Such studies are usually based on large sets of newly generated, unannotated, and error-prone EST sequences from different species. A first crucial step in EST-based phylogeny reconstruction is to identify groups of orthologous sequences. From these data sets, appropriate target genes are selected, and redundant sequences are eliminated to obtain suitable sequence sets as input data for tree-reconstruction software. Generating such data sets manually can be very time consuming. Thus, software tools are needed that carry out these steps automatically. We developed a flexible and user-friendly software pipeline, running on desktop machines or computer clusters, that constructs data sets for phylogenomic analyses. It automatically searches assembled EST sequences against databases of orthologous groups (OG), assigns ESTs to these predefined OGs, translates the sequences into proteins, eliminates redundant sequences assigned to the same OG, creates multiple sequence alignments of identified orthologous sequences and offers the possibility to further process this alignment in a last step by excluding potentially homoplastic sites and selecting sufficiently conserved parts. Our software pipeline can be used as it is, but it can also be adapted by integrating additional external programs. This makes the pipeline useful for non-bioinformaticians as well as for bioinformatics experts. The software pipeline is especially designed for ESTs, but it can also handle protein sequences. OrthoSelect is a tool that produces orthologous gene alignments from assembled ESTs. Our tests show that OrthoSelect detects orthologs in EST libraries with high accuracy. In the absence of a gold standard for orthology prediction, we compared predictions by OrthoSelect to a manually created and published phylogenomic data set. Our tool was not only able to rebuild the data set with a specificity of 98%, but it detected four percent more orthologous sequences. Furthermore, the results OrthoSelect produces are in absolute agreement with the results of other programs, but our tool offers a significant speedup and additional functionality, e.g. handling of ESTs, computing sequence alignments, and refining them. To our knowledge, there is currently no fully automated and freely available tool for this purpose. Thus, OrthoSelect is a valuable tool for researchers in the field of phylogenomics who deal with large quantities of EST sequences. OrthoSelect is written in Perl and runs on Linux/Mac OS X. The tool can be downloaded at (http://gobics.de/fabian/orthoselect.php).
7 CFR 1925.3 - Servicing taxes.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, RURAL BUSINESS-COOPERATIVE... the Government's security interests. Any unusual situations that may arise with respect to tax... through routine servicing of loans by emphasizing the advantages of setting aside sufficient income to...
Comparison of Aerodynamic Resistance Parameterizations and Implications for Dry Deposition Modeling
Nitrogen deposition data used to support the secondary National Ambient Air Quality Standards and critical loads research derives from both measurements and modeling. Data sets with spatial coverage sufficient for regional scale deposition assessments are currently generated fro...
Competing Activation during Fantasy Text Comprehension
ERIC Educational Resources Information Center
Creer, Sarah D.; Cook, Anne E.; O'Brien, Edward J.
2018-01-01
During comprehension, readers' general world knowledge and contextual information compete for influence during integration and validation. Fantasy narratives, in which general world knowledge often conflicts with fantastical events, provide a setting to examine this competition. Experiment 1 showed that with sufficient elaboration, contextual…
Enhancing the Modeling of PFOA Pharmacokinetics with Bayesian Analysis
The detail sufficient to describe the pharmacokinetics (PK) for perfluorooctanoic acid (PFOA) and the methods necessary to combine information from multiple data sets are both subjects of ongoing investigation. Bayesian analysis provides tools to accommodate these goals. We exa...
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
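To make the network-topology ingredients of the sufficient condition concrete, the sketch below computes the Laplacian eigenvalue range of nearest-neighbour ring networks for several neighbourhood sizes; the network size and the specific topologies are assumptions, and the synchronization margin itself (which also depends on the nonlinearity and the uncertainty statistics) is not evaluated.

```python
# Hedged sketch: Laplacian eigenvalues of nearest-neighbour ring networks,
# the topological quantities entering size-independent synchronization conditions.
import numpy as np

def ring_laplacian(n, k):
    """Laplacian of a ring of n nodes, each linked to k neighbours per side."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

for k in (1, 2, 4, 8):
    eig = np.sort(np.linalg.eigvalsh(ring_laplacian(50, k)))
    # lambda_2 (algebraic connectivity) and lambda_max bracket the eigenvalue
    # range that, together with the dynamics and uncertainty, sets the margin.
    print(f"k = {k}: lambda_2 = {eig[1]:.3f}, lambda_max = {eig[-1]:.3f}")
```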
Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes
NASA Astrophysics Data System (ADS)
Tsoy, A. S.; Snegirev, A. Yu.
2015-09-01
The model and the computer code FDS, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of the numerical resolution of the large-scale turbulent pulsations on the accuracy of predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, and thereby excessively focuses the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase has been shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of the flow velocity.
Cohen, Ted; Jenkins, Helen E.; Lu, Chunling; McLaughlin, Megan; Floyd, Katherine; Zignol, Matteo
2015-01-01
Background: Multidrug resistant tuberculosis (MDR-TB) poses serious challenges for tuberculosis control in many settings, but trends of MDR-TB have been difficult to measure. Methods: We analyzed surveillance and population-representative survey data collected worldwide by the World Health Organization between 1993 and 2012. We examined setting-specific patterns associated with linear trends in the estimated per capita rate of MDR-TB among new notified TB cases to generate hypotheses about factors associated with trends in the transmission of highly drug resistant tuberculosis. Results: 59 countries and 39 sub-national settings had at least three years of data, but less than 10% of the population in the 27 WHO-designated high-MDR-TB-burden settings were in areas with sufficient data to track trends. Among settings in which the majority of MDR-TB was autochthonous, we found 10 settings with statistically significant linear trends in per capita rates of MDR-TB among new notified TB cases. Five of these settings had declining trends (Estonia, Latvia, Macao, Hong Kong, and Portugal) ranging from decreases of 3-14% annually, while five had increasing trends (four individual oblasts of the Russian Federation and Botswana) ranging from 14-20% annually. In unadjusted analysis, better surveillance indicators and higher GDP per capita were associated with declining MDR-TB, while a higher existing absolute burden of MDR-TB was associated with an increasing trend. Conclusions: Only a small fraction of countries in which the burden of MDR-TB is concentrated currently have sufficient surveillance data to estimate trends in drug-resistant TB. Where trend analysis was possible, smaller absolute burdens of MDR-TB and more robust surveillance systems were associated with declining per capita rates of MDR-TB among new notified cases. PMID:25458783
A conceptual framework of computations in mid-level vision
Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P.
2014-01-01
If a picture is worth a thousand words, as an English idiom goes, what should those words—or, rather, descriptors—capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation, and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model's behavior and its limitations. PMID:25566044
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hongyi; Sivapalan, Murugesu; Tian, Fuqiang
Inspired by the Dunne diagram, the climatic and landscape controls on the partitioning of annual runoff into its various components (Hortonian and Dunne overland flow and subsurface stormflow) are assessed quantitatively, from a purely theoretical perspective. A simple distributed hydrologic model has been built sufficient to simulate the effects of different combinations of climate, soil, and topography on the runoff generation processes. The model is driven by a sequence of simple hypothetical precipitation events, for a large combination of climate and landscape properties, and hydrologic responses at the catchment scale are obtained through aggregation of grid-scale responses. It is found, first, that the water balance responses, including relative contributions of different runoff generation mechanisms, could be related to a small set of dimensionless similarity parameters. These capture the competition between the wetting, drying, storage, and drainage functions underlying the catchment responses, and in this way, provide a quantitative approximation of the conceptual Dunne diagram. Second, only a subset of all hypothetical catchment/climate combinations is found to be "behavioral," in terms of falling sufficiently close to the Budyko curve, describing mean annual runoff as a function of climate aridity. Furthermore, these behavioral combinations are mostly consistent with the qualitative picture presented in the Dunne diagram, indicating clearly the commonality between the Budyko curve and the Dunne diagram. These analyses also suggest clear interrelationships amongst the "behavioral" climate, soil, and topography parameter combinations, implying these catchment properties may be constrained to be codependent in order to satisfy the Budyko curve.
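As an illustration of the "behavioral" screening step, the sketch below checks whether a simulated annual runoff ratio lies within a tolerance of a Budyko-type curve. Fu's single-parameter form and the tolerance value are assumptions made for illustration; the abstract does not specify which formulation of the curve was used.

```python
import numpy as np

def fu_budyko_evaporation_ratio(aridity, omega=2.6):
    """Fu's equation: E/P as a function of the aridity index PET/P."""
    return 1.0 + aridity - (1.0 + aridity**omega) ** (1.0 / omega)

def is_behavioral(annual_runoff, annual_precip, annual_pet, tol=0.1):
    """Flag a catchment/climate combination as 'behavioral' if its runoff
    ratio lies within `tol` of the Budyko prediction (illustrative only)."""
    aridity = annual_pet / annual_precip
    budyko_runoff_ratio = 1.0 - fu_budyko_evaporation_ratio(aridity)
    return abs(annual_runoff / annual_precip - budyko_runoff_ratio) <= tol

# example: a humid catchment (PET/P = 0.7) with 45% of precipitation as runoff
print(is_behavioral(annual_runoff=450.0, annual_precip=1000.0, annual_pet=700.0))
```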
A conceptual framework of computations in mid-level vision.
Kubilius, Jonas; Wagemans, Johan; Op de Beeck, Hans P
2014-01-01
If a picture is worth a thousand words, as an English idiom goes, what should those words-or, rather, descriptors-capture? What format of image representation would be sufficiently rich if we were to reconstruct the essence of images from their descriptors? In this paper, we set out to develop a conceptual framework that would be: (i) biologically plausible in order to provide a better mechanistic understanding of our visual system; (ii) sufficiently robust to apply in practice on realistic images; and (iii) able to tap into underlying structure of our visual world. We bring forward three key ideas. First, we argue that surface-based representations are constructed based on feature inference from the input in the intermediate processing layers of the visual system. Such representations are computed in a largely pre-semantic (prior to categorization) and pre-attentive manner using multiple cues (orientation, color, polarity, variation in orientation, and so on), and explicitly retain configural relations between features. The constructed surfaces may be partially overlapping to compensate for occlusions and are ordered in depth (figure-ground organization). Second, we propose that such intermediate representations could be formed by a hierarchical computation of similarity between features in local image patches and pooling of highly-similar units, and reestimated via recurrent loops according to the task demands. Finally, we suggest to use datasets composed of realistically rendered artificial objects and surfaces in order to better understand a model's behavior and its limitations.
Long-term care for people with dementia: environmental design guidelines.
Fleming, Richard; Purandare, Nitin
2010-11-01
A large and growing number of people with dementia are being cared for in long-term care. The empirical literature on the design of environments for people with dementia contains findings that can be helpful in the design of these environments. A schema developed by Marshall in 2001 provides a means of reviewing the literature against a set of recommendations. The aims of this paper are to assess the strength of the evidence for these recommendations and to identify those recommendations that could be used as the basis for guidelines to assist in the design of long term care facilities for people with dementia. The literature was searched for articles published after 1980, evaluating an intervention utilizing the physical environment, focused on the care of people with dementia and incorporating a control group, pre-test-post-test, cross sectional or survey design. A total of 156 articles were identified as relevant and subjected to an evaluation of their methodological strength. Of these, 57 articles were identified as being sufficiently strong to be reviewed. Designers may confidently use unobtrusive safety measures; vary ambience, size and shape of spaces; provide single rooms; maximize visual access; and control levels of stimulation. There is less agreement on the usefulness of signage, homelikeness, provision for engagement in ordinary activities, small size and the provision of outside space. There is sufficient evidence available to come to a consensus on guiding principles for the design of long term environments for people with dementia.
Scaling in two-fluid pinch-off
NASA Astrophysics Data System (ADS)
Pommer, Chris; Harris, Michael; Basaran, Osman
2010-11-01
The physics of two-fluid pinch-off, which arises whenever drops, bubbles, or jets of one fluid are ejected from a nozzle into another fluid, is scientifically important and technologically relevant. While the breakup of a drop in a passive environment is well understood, the physics of pinch-off when both the inner and outer fluids are dynamically active remains inadequately understood. Here, the breakup of a compound jet whose core and shell are incompressible Newtonian fluids is analyzed computationally when the interior is a "bubble" and the exterior is a liquid. The numerical method employed is an implicit method of lines ALE algorithm which uses finite elements with elliptic mesh generation and adaptive finite differences for time integration. Thus, the new approach neither starts with a priori idealizations, as has been the case with previous computations, nor is limited to length scales above that set by the wavelength of visible light as in any experimental study. In particular, three distinct responses are identified as the ratio m of the outer fluid's viscosity to the inner fluid's viscosity is varied. For small m, simulations show that the minimum neck radius r initially scales with time τ before breakup as r ∼ τ^0.58 (in accord with previous experiments and inviscid fluid models) but that r ∼ τ once r becomes sufficiently small. For intermediate and large values of m, r ∼ τ^α, where the exponent α may not equal one, once again as r becomes sufficiently small.
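Scaling exponents of this kind are commonly extracted by a least-squares fit in log-log space of the neck radius against the time remaining to breakup; the generic sketch below is not tied to the ALE code used in the study.

```python
import numpy as np

def fit_scaling_exponent(tau, r_min):
    """Fit r_min ~ A * tau**alpha by linear regression in log-log space.
    tau: time remaining until breakup; r_min: minimum neck radius."""
    alpha, log_A = np.polyfit(np.log(tau), np.log(r_min), 1)
    return alpha, np.exp(log_A)

# synthetic check: data generated with alpha = 0.58 should be recovered
tau = np.logspace(-6, -2, 50)
alpha, A = fit_scaling_exponent(tau, 2.0 * tau**0.58)
print(round(alpha, 2))  # -> 0.58
```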
Lohße, Anna; Ullrich, Susanne; Katzmann, Emanuel; Borg, Sarah; Wanner, Gerd; Richter, Michael; Voigt, Birgit; Schweder, Thomas; Schüler, Dirk
2011-01-01
Bacterial magnetosomes are membrane-enveloped, nanometer-sized crystals of magnetite, which serve for magnetotactic navigation. All genes implicated in the synthesis of these organelles are located in a conserved genomic magnetosome island (MAI). We performed a comprehensive bioinformatic, proteomic and genetic analysis of the MAI in Magnetospirillum gryphiswaldense. By the construction of large deletion mutants we demonstrate that the entire region is dispensable for growth, and the majority of MAI genes have no detectable function in magnetosome formation and could be eliminated without any effect. Only <25% of the region comprising four major operons could be associated with magnetite biomineralization, which correlated with high expression of these genes and their conservation among magnetotactic bacteria. Whereas only deletion of the mamAB operon resulted in the complete loss of magnetic particles, deletion of the conserved mms6, mamGFDC, and mamXY operons led to severe defects in morphology, size and organization of magnetite crystals. However, strains in which these operons were eliminated together retained the ability to synthesize small irregular crystallites, and weakly aligned in magnetic fields. This demonstrates that whereas the mamGFDC, mms6 and mamXY operons have crucial and partially overlapping functions for the formation of functional magnetosomes, the mamAB operon is the only region of the MAI, which is necessary and sufficient for magnetite biomineralization. Our data further reduce the known minimal gene set required for magnetosome formation and will be useful for future genome engineering approaches. PMID:22043287
Structure of weakly 2-dependent siphons
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh; Chen, Jiun-Ting
2013-09-01
Deadlocks arising from insufficiently marked siphons in flexible manufacturing systems can be controlled by adding monitors to each siphon, but this yields too many monitors for large systems. Li and Zhou add monitors to elementary siphons only, while controlling the remaining (so-called dependent) siphons by adjusting the control depth variables of elementary siphons. Only a linear number of monitors is required. The control of weakly dependent siphons (WDSs) is rather conservative since only positive terms were considered. The structure of strongly dependent siphons (SDSs) has been studied earlier. Based on this structure, the optimal sequence of adding monitors was discovered, better controllability was achieved for faster and more permissive control, and the results were extended to S3PGR2 (systems of simple sequential processes with general resource requirements). This paper explores the structures of WDSs, which, as found in this paper, involve elementary resource circuits interconnecting at more than one resource place (for SDSs, exactly one). This saves the time needed to compute compound siphons, their complementary sets and T-characteristic vectors. It also allows us (1) to improve the controllability of WDSs and control siphons and (2) to avoid the time needed to find independent vectors for elementary siphons. We propose a necessary and sufficient test for adjusting control depth variables in S3PR (systems of simple sequential processes with resources) to avoid the sufficient-only, time-consuming linear integer programming (LIP) test (an NP-complete problem) required previously for some cases.
Green, Carolyn J; Fortin, Patricia; Maclure, Malcolm; Macgregor, Art; Robinson, Sylvia
2006-12-01
Improvement of chronic disease management in primary care entails monitoring indicators of quality over time and across patients and practices. Informatics tools are needed, yet implementing them remains challenging. To identify critical success factors enabling the translation of clinical and operational knowledge about effective and efficient chronic care management into primary care practice. A prospective case study of positive deviants using key informant interviews, process observation, and document review. A chronic disease management (CDM) collaborative of primary care physicians with documented improvement in adherence to clinical practice guidelines using a web-based patient registry system with a CDM guideline-based flow sheet. Thirty community-based physician participants using predominantly paper records, plus a project management team including the physician lead, project manager, evaluator and support team. A critical success factor (CSF) analysis of necessary and sufficient pathways to the translation of knowledge into clinical practice. A web-based CDM 'toolkit' was found to be a direct CSF that allowed this group of physicians to improve their practice by tracking patient care processes using evidence-based clinical practice guideline-based flow sheets. Moreover, the information and communication technology 'factor' was sufficient for success only as part of a set of seven direct CSF components including: health delivery system enhancements, organizational partnerships, funding mechanisms, project management, practice models, and formal knowledge translation practices. Indirect factors that orchestrated success through the direct factor components were also identified. A central insight of this analysis is that a comprehensive quality improvement model was the CSF that drew this set of factors into a functional framework for successful knowledge translation. In complex primary care settings where physicians have low adoption rates of electronic tools to support the care of patients with chronic conditions, successful implementation may require a set of interrelated system and technology factors.
Characterizing uncertain sea-level rise projections to support investment decisions.
Sriver, Ryan L; Lempert, Robert J; Wikman-Svahn, Per; Keller, Klaus
2018-01-01
Many institutions worldwide are considering how to include uncertainty about future changes in sea-levels and storm surges into their investment decisions regarding large capital infrastructures. Here we examine how to characterize deeply uncertain climate change projections to support such decisions using Robust Decision Making analysis. We address questions regarding how to confront the potential for future changes in low probability but large impact flooding events due to changes in sea-levels and storm surges. Such extreme events can affect investments in infrastructure but have proved difficult to consider in such decisions because of the deep uncertainty surrounding them. This study utilizes Robust Decision Making methods to address two questions applied to investment decisions at the Port of Los Angeles: (1) Under what future conditions would a Port of Los Angeles decision to harden its facilities against extreme flood scenarios at the next upgrade pass a cost-benefit test, and (2) Do sea-level rise projections and other information suggest such conditions are sufficiently likely to justify such an investment? We also compare and contrast the Robust Decision Making methods with a full probabilistic analysis. These two analysis frameworks result in similar investment recommendations for different idealized future sea-level projections, but provide different information to decision makers and envision different types of engagement with stakeholders. In particular, the full probabilistic analysis begins by aggregating the best scientific information into a single set of joint probability distributions, while the Robust Decision Making analysis identifies scenarios where a decision to invest in near-term response to extreme sea-level rise passes a cost-benefit test, and then assembles scientific information of differing levels of confidence to help decision makers judge whether or not these scenarios are sufficiently likely to justify making such investments. Results highlight the highly-localized and context dependent nature of applying Robust Decision Making methods to inform investment decisions.
Characterizing uncertain sea-level rise projections to support investment decisions
Lempert, Robert J.; Wikman-Svahn, Per; Keller, Klaus
2018-01-01
Many institutions worldwide are considering how to include uncertainty about future changes in sea-levels and storm surges into their investment decisions regarding large capital infrastructures. Here we examine how to characterize deeply uncertain climate change projections to support such decisions using Robust Decision Making analysis. We address questions regarding how to confront the potential for future changes in low probability but large impact flooding events due to changes in sea-levels and storm surges. Such extreme events can affect investments in infrastructure but have proved difficult to consider in such decisions because of the deep uncertainty surrounding them. This study utilizes Robust Decision Making methods to address two questions applied to investment decisions at the Port of Los Angeles: (1) Under what future conditions would a Port of Los Angeles decision to harden its facilities against extreme flood scenarios at the next upgrade pass a cost-benefit test, and (2) Do sea-level rise projections and other information suggest such conditions are sufficiently likely to justify such an investment? We also compare and contrast the Robust Decision Making methods with a full probabilistic analysis. These two analysis frameworks result in similar investment recommendations for different idealized future sea-level projections, but provide different information to decision makers and envision different types of engagement with stakeholders. In particular, the full probabilistic analysis begins by aggregating the best scientific information into a single set of joint probability distributions, while the Robust Decision Making analysis identifies scenarios where a decision to invest in near-term response to extreme sea-level rise passes a cost-benefit test, and then assembles scientific information of differing levels of confidence to help decision makers judge whether or not these scenarios are sufficiently likely to justify making such investments. Results highlight the highly-localized and context dependent nature of applying Robust Decision Making methods to inform investment decisions. PMID:29414978
The microwave radiometer spacecraft: A design study
NASA Technical Reports Server (NTRS)
Wright, R. L. (Editor)
1981-01-01
A large passive microwave radiometer spacecraft with near all weather capability of monitoring soil moisture for global crop forecasting was designed. The design, emphasizing large space structures technology, characterized the mission hardware at the conceptual level in sufficient detail to identify enabling and pacing technologies. Mission and spacecraft requirements, design and structural concepts, electromagnetic concepts, and control concepts are addressed.
ERIC Educational Resources Information Center
Bowman, Thomas G.
2012-01-01
The athletic training profession is in the midst of a large increase in demand for health care professionals for the physically active. In order to meet demand, directors of athletic training education programs (ATEPs) are challenged with providing sufficient graduates. There has been a large increase in ATEPs nationwide since educational reform…
Entropy production during an isothermal phase transition in the early universe
NASA Astrophysics Data System (ADS)
Kaempfer, B.
The analytical model of Lodenquai and Dixit (1983) and of Bonometto and Matarrese (1983) of an isothermal era in the early universe is extended here to arbitrary temperatures. It is found that a sufficiently large supercooling gives rise to a large entropy production which may significantly dilute the primordial monopole-to-entropy or baryon-to-entropy ratio. Whether such large supercooling can be achieved depends on the characteristics of the nucleation process.
Dooley, Christopher J; Tenore, Francesco V; Gayzik, F Scott; Merkle, Andrew C
2018-04-27
Biological tissue testing is inherently subject to a wide range of specimen-to-specimen variability. A primary resource for encapsulating this range of variability is the biofidelity response corridor or BRC. In the field of injury biomechanics, BRCs are often used for development and validation of both physical, such as anthropomorphic test devices, and computational models. For the purpose of generating corridors, post-mortem human surrogates were tested across a range of loading conditions relevant to under-body blast events. To sufficiently cover the wide range of input conditions, a relatively small number of tests were performed across a large spread of conditions. The high volume of required testing called for leveraging the capabilities of multiple impact test facilities, all with slight variations in test devices. A method for assessing similitude of responses between test devices was created as a metric for inclusion of a response in the resulting BRC. The goal of this method was to supply a statistically sound, objective way to assess the similitude of an individual response against a set of responses to ensure that the BRC created from the set was affected primarily by biological variability, not anomalies or differences stemming from test devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
Constraining the equation of state of neutron stars from binary mergers.
Takami, Kentaro; Rezzolla, Luciano; Baiotti, Luca
2014-08-29
Determining the equation of state of matter at nuclear density and hence the structure of neutron stars has been a riddle for decades. We show how the imminent detection of gravitational waves from merging neutron star binaries can be used to solve this riddle. Using a large number of accurate numerical-relativity simulations of binaries with nuclear equations of state, we find that the postmerger emission is characterized by two distinct and robust spectral features. While the high-frequency peak has already been associated with the oscillations of the hypermassive neutron star produced by the merger and depends on the equation of state, a new correlation emerges between the low-frequency peak, related to the merger process, and the total compactness of the stars in the binary. More importantly, such a correlation is essentially universal, thus providing a powerful tool to set tight constraints on the equation of state. If the mass of the binary is known from the inspiral signal, the combined use of the two frequency peaks sets four simultaneous constraints to be satisfied. Ideally, even a single detection would be sufficient to select one equation of state over the others. We test our approach with simulated data and verify it works well for all the equations of state considered.
Control of potassium excretion: a Paleolithic perspective.
Halperin, Mitchell L; Cheema-Dhadli, Surinder; Lin, Shih-Hua; Kamel, Kamel S
2006-07-01
Regulation of potassium (K) excretion was examined in an experimental setting that reflects the dietary conditions for humans in Paleolithic times (high, episodic intake of K with organic anions; low intake of NaCl), because this is when major control mechanisms were likely to have developed. The major control of K secretion in this setting is to regulate the number of luminal K channels in the cortical collecting duct. Following a KCl load, the K concentration in the medullary interstitial compartment rose; the likely source of this medullary K was its absorption by the H/K-ATPase in the inner medullary collecting duct. As a result of the higher medullary K concentration, the absorption of Na and Cl was inhibited in the loop of Henle, and this led to an increased distal delivery of a sufficient quantity of Na to raise K excretion markedly, while avoiding a large natriuresis. In addition, because K in the diet was accompanied by 'future' bicarbonate, a role for bicarbonate in the control of K secretion via 'selecting' whether aldosterone would be a NaCl-conserving or a kaliuretic hormone is discussed. This way of examining the control of K excretion provides new insights into clinical disorders with an abnormal plasma K concentration secondary to altered K excretion, and also into the pathophysiology of calcium-containing kidney stones.
Fahimi, Fatemeh; Guan, Cuntai; Wooi Boon Goh; Kai Keng Ang; Choon Guan Lim; Tih Shih Lee
2017-07-01
Measuring attention from electroencephalogram (EEG) has found applications in the treatment of Attention Deficit Hyperactivity Disorder (ADHD). It is of great interest to understand what features in EEG are most representative of attention. Intensive research has been done in the past and it has been proven that frequency band powers and their ratios are effective features in detecting attention. However, there are still unanswered questions, like, what features in EEG are most discriminative between attentive and non-attentive states? Are these features common among all subjects or are they subject-specific and must be optimized for each subject? Using Mutual Information (MI) to perform subject-specific feature selection on a large data set including 120 ADHD children, we found that besides theta beta ratio (TBR) which is commonly used in attention detection and neurofeedback, the relative beta power and theta/(alpha+beta) (TBAR) are also equally significant and informative for attention detection. Interestingly, we found that the relative theta power (which is also commonly used) may not have sufficient discriminative information itself (it is informative only for 3.26% of ADHD children). We have also demonstrated that although these features (relative beta power, TBR and TBAR) are the most important measures to detect attention on average, different subjects have different set of most discriminative features.
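A minimal sketch of the kind of feature pipeline described: per-channel band powers from Welch periodograms, the relative beta, TBR and TBAR features, and a mutual-information ranking against attention labels. The band limits, sampling rate and use of scikit-learn are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import mutual_info_classif

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed limits

def band_powers(eeg_epoch, fs=256):
    """eeg_epoch: 1-D array for one channel; returns absolute band powers."""
    freqs, psd = welch(eeg_epoch, fs=fs, nperseg=fs * 2)
    return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for name, (lo, hi) in BANDS.items()}

def attention_features(eeg_epoch, fs=256):
    p = band_powers(eeg_epoch, fs)
    total = sum(p.values())
    return [p["beta"] / total,                       # relative beta power
            p["theta"] / p["beta"],                  # TBR
            p["theta"] / (p["alpha"] + p["beta"])]   # TBAR

def rank_features(X, y):
    """X: (n_epochs, 3) feature matrix; y: attentive / non-attentive labels.
    Returns the mutual information of each feature with the labels."""
    return mutual_info_classif(X, y, random_state=0)
```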
wACSF—Weighted atom-centered symmetry functions as descriptors in machine learning potentials
NASA Astrophysics Data System (ADS)
Gastegger, M.; Schwiedrzik, L.; Bittermann, M.; Berzsenyi, F.; Marquetand, P.
2018-06-01
We introduce weighted atom-centered symmetry functions (wACSFs) as descriptors of a chemical system's geometry for use in the prediction of chemical properties such as enthalpies or potential energies via machine learning. The wACSFs are based on conventional atom-centered symmetry functions (ACSFs) but overcome the undesirable scaling of the latter with an increasing number of different elements in a chemical system. The performance of these two descriptors is compared using them as inputs in high-dimensional neural network potentials (HDNNPs), employing the molecular structures and associated enthalpies of the 133 855 molecules containing up to five different elements reported in the QM9 database as reference data. A substantially smaller number of wACSFs than ACSFs is needed to obtain a comparable spatial resolution of the molecular structures. At the same time, this smaller set of wACSFs leads to a significantly better generalization performance in the machine learning potential than the large set of conventional ACSFs. Furthermore, we show that the intrinsic parameters of the descriptors can in principle be optimized with a genetic algorithm in a highly automated manner. For the wACSFs employed here, we find however that using a simple empirical parametrization scheme is sufficient in order to obtain HDNNPs with high accuracy.
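The sketch below illustrates a radial weighted symmetry function of the kind described: a conventional radial ACSF in which each neighbor's contribution is additionally weighted by an element-dependent factor, here taken to be the atomic number purely for illustration.

```python
import numpy as np

def cutoff(r, r_c):
    """Smooth cosine cutoff used in Behler-style symmetry functions."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_wacsf(r_ij, z_j, eta=0.5, r_s=0.0, r_c=6.0):
    """Weighted radial symmetry function for one central atom.
    r_ij: distances to neighbors; z_j: neighbor atomic numbers used as weights
    (an assumed choice of weighting function)."""
    return np.sum(z_j * np.exp(-eta * (r_ij - r_s) ** 2) * cutoff(r_ij, r_c))

# one descriptor value for an atom with an O neighbor at 1.0 A and an H at 1.6 A
print(radial_wacsf(np.array([1.0, 1.6]), np.array([8, 1])))
```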
Implementation of genetic conservation practices in a muskellunge propagation and stocking program
Jennings, Martin J.; Sloss, Brian L.; Hatzenbeler, Gene R.; Kampa, Jeffrey M.; Simonson, Timothy D.; Avelallemant, Steven P.; Lindenberger, Gary A.; Underwood, Bruce D.
2010-01-01
Conservation of genetic resources is a challenging issue for agencies managing popular sport fishes. To address the ongoing potential for genetic risks, we developed a comprehensive set of recommendations to conserve genetic diversity of muskellunge (Esox masquinongy) in Wisconsin, and evaluated the extent to which the recommendations can be implemented. Although some details are specific to Wisconsin's muskellunge propagation program, many of the practical issues affecting implementation are applicable to other species and production systems. We developed guidelines to restrict future broodstock collection operations to lakes with natural reproduction and to develop a set of brood lakes to use on a rotational basis within regional stock boundaries, but implementation will require considering lakes with variable stocking histories. Maintaining an effective population size sufficient to minimize the risk of losing alleles requires limiting broodstock collection to large lakes. Recommendations to better approximate the temporal distribution of spawning in hatchery operations and randomize selection of brood fish are feasible. Guidelines to modify rearing and distribution procedures face some logistic constraints. An evaluation of genetic diversity of hatchery-produced fish during 2008 demonstrated variable success representing genetic variation of the source population. Continued evaluation of hatchery operations will optimize operational efficiency while moving toward genetic conservation goals.
Implementation of genetic conservation practices in a muskellunge propagation and stocking program
Jennings, Martin J.; Sloss, Brian L.; Hatzenbeler, Gene R.; Kampa, Jeffrey M.; Simonson, Timothy D.; Avelallemant, Steven P.; Lindenberger, Gary A.; Underwood, Bruce D.
2010-01-01
Conservation of genetic resources is a challenging issue for agencies managing popular sport fishes. To address the ongoing potential for genetic risks, we developed a comprehensive set of recommendations to conserve genetic diversity of muskellunge (Esox masquinongy) in Wisconsin, and evaluated the extent to which the recommendations can be implemented. Although some details are specific to Wisconsin's muskellunge propagation program, many of the practical issues affecting implementation are applicable to other species and production systems. We developed guidelines to restrict future brood stock collection operations to lakes with natural reproduction and to develop a set of brood lakes to use on a rotational basis within regional stock boundaries, but implementation will require considering lakes with variable stocking histories. Maintaining an effective population size sufficient to minimize the risk of losing alleles requires limiting brood stock collection to large lakes. Recommendations to better approximate the temporal distribution of spawning in hatchery operations and randomize selection of brood fish are feasible. Guidelines to modify rearing and distribution procedures face some logistic constraints. An evaluation of genetic diversity of hatchery-produced fish during 2008 demonstrated variable success representing genetic variation of the source population. Continued evaluation of hatchery operations will optimize operational efficiency while moving toward genetic conservation goals.
Unregistered health care staff's perceptions of 12 hour shifts: an interview study.
Thomson, Louise; Schneider, Justine; Hare Duke, Laurie
2017-10-01
The purpose of the study was to explore unregistered health care staff's perceptions of 12 hour shifts on work performance and patient care. Many unregistered health care staff work 12 hour shifts, but it is unclear whether these are compatible with good quality care or work performance. Twenty five health care assistants from a range of care settings with experience of working 12 hour shifts took part in interviews or focus groups. A wide range of views emerged on the perceived impact of 12 hour shifts in different settings. Negative outcomes were perceived to occur when 12 hour shifts were combined with short-staffing, consecutive long shifts, high work demands, insufficient breaks and working with unfamiliar colleagues. Positive outcomes were perceived to be more likely in a context of control over shift patterns, sufficient staffing levels, and a supportive team climate. The perceived relationship between 12 hour shifts and patient care and work performance varies by patient context and wider workplace factors, but largely focuses on the ability to deliver relational aspects of care. Nursing managers need to consider the role of other workplace factors, such as shift patterns and breaks, when implementing 12 hour shifts with unregistered health care staff. © 2017 John Wiley & Sons Ltd.
Nakazato, Takeru; Bono, Hidemasa
2017-01-01
It is important for public data repositories to promote the reuse of archived data. In the growing field of omics science, however, the increasing number of submissions of high-throughput sequencing (HTSeq) data to public repositories prevents users from choosing a suitable data set from among the large number of search results. Repository users need to be able to set a threshold to reduce the number of results to obtain a suitable subset of high-quality data for reanalysis. We calculated the quality of sequencing data archived in a public data repository, the Sequence Read Archive (SRA), by using the quality control software FastQC. We obtained quality values for 1 171 313 experiments, which can be used to evaluate the suitability of data for reuse. We also visualized the data distribution in SRA by integrating the quality information and metadata of experiments and samples. We provide quality information for all of the archived sequencing data, which enables users to obtain sequencing data of sufficient quality for reanalyses. The calculated quality data are available to the public in various formats. Our data also provide an example of enhancing the reuse of public data by adding metadata to published research data by a third party. PMID:28449062
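A sketch of the intended reuse pattern, filtering archived experiments by a quality threshold before reanalysis; the file layout and column names are hypothetical and would need to be mapped onto the published quality tables.

```python
import csv

def select_high_quality_runs(path, min_quality=30.0):
    """Return accession IDs whose aggregated FastQC-style quality score meets
    the threshold. Assumes a TSV with 'accession' and 'mean_quality' columns,
    which is only an illustrative layout, not the published format."""
    keep = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            if float(row["mean_quality"]) >= min_quality:
                keep.append(row["accession"])
    return keep
```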
Verheijen, Lieneke M; Aerts, Rien; Bönisch, Gerhard; Kattge, Jens; Van Bodegom, Peter M
2016-01-01
Plant functional types (PFTs) aggregate the variety of plant species into a small number of functionally different classes. We examined to what extent plant traits, which reflect species' functional adaptations, can capture functional differences between predefined PFTs and which traits optimally describe these differences. We applied Gaussian kernel density estimation to determine probability density functions for individual PFTs in an n-dimensional trait space and compared predicted PFTs with observed PFTs. All possible combinations of 1-6 traits from a database with 18 different traits (total of 18 287 species) were tested. A variety of trait sets had approximately similar performance, and 4-5 traits were sufficient to classify up to 85% of the species into PFTs correctly, whereas this was 80% for a bioclimatically defined tree PFT classification. Well-performing trait sets included combinations of correlated traits that are considered functionally redundant within a single plant strategy. This analysis quantitatively demonstrates how structural differences between PFTs are reflected in functional differences described by particular traits. Differentiation between PFTs is possible despite large overlap in plant strategies and traits, showing that PFTs are differently positioned in multidimensional trait space. This study therefore provides the foundation for important applications for predictive ecology. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
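A compact sketch of the classification scheme described: one Gaussian kernel density estimate per predefined PFT in trait space, with each species assigned to the PFT under which its trait vector has the highest estimated density. Bandwidth selection and trait preprocessing are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_pft_densities(trait_table, pft_labels):
    """trait_table: (n_species, n_traits) array; returns one KDE per PFT."""
    return {pft: gaussian_kde(trait_table[pft_labels == pft].T)
            for pft in np.unique(pft_labels)}

def predict_pft(densities, trait_vector):
    """Assign the PFT whose estimated density is highest at this trait vector."""
    scores = {pft: kde(trait_vector.reshape(-1, 1))[0]
              for pft, kde in densities.items()}
    return max(scores, key=scores.get)
```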
NASA Technical Reports Server (NTRS)
Welker, Jean E.; Au, Andrew Y.
2003-01-01
As part of a larger analysis of country systems described elsewhere, named a Crop Country Inventory, CCI, large variations in annual crop yield for selected climate sensitive agricultural regions or sub-regions within a country have been studied over extended periods in decades. These climate sensitive regions, principally responsible for large annual variations in an entire country's crop production, generally are characterized by distinctive patterns of atmospheric circulation and synoptic processes that result in large seasonal fluctuations in temperature, precipitation and soil moisture as well as other climate properties. The immediate region of interest is drought-prone Kazakhstan in Central Asia, part of the Former Soviet Union, FSU. As a partial validation test in a dry southern region of Kazakhstan, the Almati Oblast was chosen. The Almati Oblast, a sub-region of Kazakhstan located in its southeast corner, is one of 14 oblasts within the Republic of Kazakhstan. The climate data set used to characterize this region was taken from the results of the current maturely developed Global Climate Model, GCM. In this paper, the GCM results have been compared to the meteorological station data at the station locations, over various periods. If the empirical correlation of the data sets from both the GCM and station data is sufficiently significant, this would validate the use of the superior GCM profile mapping and integration for the climatic characterization of a sub-region. Precipitation values interpolated from NCEP Reanalysis II data, a global climate database spanning over 5 decades since 1949, have been statistically correlated with monthly-averaged station data from 1949 through 1993, and with daily station data from April through August, 1990 for the Almati Oblast in Kazakhstan. The resultant correlation is significant, which implies that the methodology may be extended to different regions globally for Crop Country Inventory studies.
Community-based efforts to prevent obesity: Australia-wide survey of projects.
Nichols, Melanie S; Reynolds, Rebecca C; Waters, Elizabeth; Gill, Timothy; King, Lesley; Swinburn, Boyd A; Allender, Steven
2013-08-01
Community-based programs that affect healthy environments and policies have emerged as an effective response to high obesity levels in populations. Apart from limited individual reports, little is currently known about these programs, limiting the potential to provide effective support, to promote effective practice, prevent adverse outcomes and disseminate intervention results and experience. The aim of the present study was to identify the size and reach of current community-based obesity prevention projects in Australia and to examine their characteristics, program features (e.g. intervention setting), capacity and approach to obesity prevention. Detailed survey completed by representatives from community-based obesity prevention initiatives in Australia. There was wide variation in funding, capacity and approach to obesity prevention among the 78 participating projects. Median annual funding was Au$94900 (range Au$2500-$4.46 million). The most common intervention settings were schools (39%). Forty per cent of programs focused on a population group of ≥50000 people. A large proportion of respondents felt that they did not have sufficient resources or staff training to achieve project objectives. Community-based projects currently represent a very large investment by both government and non-government sectors for the prevention of obesity. Existing projects are diverse in size and scope, and reach large segments of the population. Further work is needed to identify the full extent of existing community actions and to monitor their reach and future 'scale up' to ensure that future activities aim for effective integration into systems, policies and environments. SO WHAT? Community-based programs make a substantial contribution to the prevention of obesity and promotion of healthy lifestyles in Australia. A risk of the current intervention landscape is that effective approaches may go unrecognised due to lack of effective evaluations or limitations in program design, duration or size. Policy makers and researchers must recognise the potential contribution of these initiatives, to both public health and knowledge generation, and provide support for strong evaluation and sustainable intervention designs.
Low-derivative operators of the Standard Model effective field theory via Hilbert series methods
NASA Astrophysics Data System (ADS)
Lehman, Landon; Martin, Adam
2016-02-01
In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we conjecture an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the conjectured technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation of motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration by parts redundancies. While restricted, this technique is sufficient to automatically recreate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For these remaining classes, there is a well defined procedure to manually determine the number of invariants. Assuming our method is correct, we derive a set of 535 dimension-8 N_f = 1 operators.
NASA Astrophysics Data System (ADS)
Kumpová, I.; Vavřík, D.; Fíla, T.; Koudelka, P.; Jandejsek, I.; Jakůbek, J.; Kytýř, D.; Zlámal, P.; Vopálenský, M.; Gantar, A.
2016-02-01
To overcome certain limitations of contemporary materials used for bone tissue engineering, such as inflammatory response after implantation, a whole new class of materials based on polysaccharide compounds is being developed. Here, nanoparticulate bioactive glass reinforced gelan-gum (GG-BAG) has recently been proposed for the production of bone scaffolds. This material offers promising biocompatibility properties, including bioactivity and biodegradability, with the possibility of producing scaffolds with directly controlled microgeometry. However, to utilize such a scaffold with application-optimized properties, large sets of complex numerical simulations using the real microgeometry of the material have to be carried out during the development process. Because the GG-BAG is a material with intrinsically very low attenuation to X-rays, its radiographical imaging, including tomographical scanning and reconstructions, with resolution required by numerical simulations might be a very challenging task. In this paper, we present a study on X-ray imaging of GG-BAG samples. High-resolution volumetric images of investigated specimens were generated on the basis of micro-CT measurements using a large area flat-panel detector and a large area photon-counting detector. The photon-counting detector was composed of a 10 × 1 matrix of Timepix edgeless silicon pixelated detectors with tiling based on overlaying rows (i.e. assembled so that no gap is present between individual rows of detectors). We compare the results from both detectors with the scanning electron microscopy on selected slices in transversal plane. It has been shown that the photon counting detector can provide approx. 3× better resolution of the details in low-attenuating materials than the integrating flat panel detectors. We demonstrate that employment of a large area photon counting detector is a good choice for imaging of low attenuating materials with the resolution sufficient for numerical simulations.
On the linearity of tracer bias around voids
NASA Astrophysics Data System (ADS)
Pollina, Giorgia; Hamaus, Nico; Dolag, Klaus; Weller, Jochen; Baldi, Marco; Moscardini, Lauro
2017-07-01
The large-scale structure of the Universe can be observed only via luminous tracers of the dark matter. However, the clustering statistics of tracers are biased and depend on various properties, such as their host-halo mass and assembly history. On very large scales, this tracer bias results in a constant offset in the clustering amplitude, known as linear bias. Towards smaller non-linear scales, this is no longer the case and tracer bias becomes a complicated function of scale and time. We focus on tracer bias centred on cosmic voids, i.e. depressions of the density field that spatially dominate the Universe. We consider three types of tracers: galaxies, galaxy clusters and active galactic nuclei, extracted from the hydrodynamical simulation Magneticum Pathfinder. In contrast to common clustering statistics that focus on auto-correlations of tracers, we find that void-tracer cross-correlations are successfully described by a linear bias relation. The tracer-density profile of voids can thus be related to their matter-density profile by a single number. We show that it coincides with the linear tracer bias extracted from the large-scale auto-correlation function and expectations from theory, if sufficiently large voids are considered. For smaller voids we observe a shift towards higher values. This has important consequences for cosmological parameter inference, as the problem of unknown tracer bias is alleviated up to a constant number. The smallest scales in existing data sets become accessible to simpler models, providing numerous modes of the density field that have been disregarded so far, but may help to further reduce statistical errors in constraining cosmology.
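The linear relation in question can be written as δ_t(r) = b δ_m(r) around void centres, so a single bias number b can be fitted by least squares from stacked void profiles; the sketch below is a generic illustration, not the pipeline applied to the Magneticum simulation.

```python
import numpy as np

def fit_void_tracer_bias(delta_matter, delta_tracer):
    """Least-squares estimate of a scale-independent bias b in
    delta_tracer(r) = b * delta_matter(r), using stacked void profiles."""
    dm = np.asarray(delta_matter)
    dt = np.asarray(delta_tracer)
    return float(np.sum(dm * dt) / np.sum(dm * dm))

# synthetic profiles: an underdense interior rising to a compensation wall
dm = np.array([-0.8, -0.6, -0.3, 0.1, 0.2, 0.05])
print(fit_void_tracer_bias(dm, 1.4 * dm))  # recovers b = 1.4
```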
Updating during reading comprehension: why causality matters.
Kendeou, Panayiota; Smith, Emily R; O'Brien, Edward J
2013-05-01
The present set of 7 experiments systematically examined the effectiveness of adding causal explanations to simple refutations in reducing or eliminating the impact of outdated information on subsequent comprehension. The addition of a single causal-explanation sentence to a refutation was sufficient to eliminate any measurable disruption in comprehension caused by the outdated information (Experiment 1) but was not sufficient to eliminate its reactivation (Experiment 2). However, a 3-sentence causal-explanation addition to a refutation eliminated both any measurable disruption in comprehension (Experiment 3) and the reactivation of the outdated information (Experiment 4). A direct comparison between the 1- and 3-sentence causal-explanation conditions provided converging evidence for these findings (Experiment 5). Furthermore, a comparison of the 3-sentence causal-explanation condition with a 3-sentence qualified-elaboration condition demonstrated that even though both conditions were sufficient to eliminate any measurable disruption in comprehension (Experiment 6), only the causal-explanation condition was sufficient to eliminate the reactivation of the outdated information (Experiment 7). These results establish a boundary condition under which outdated information will influence comprehension; they also have broader implications for both the updating process and knowledge revision in general.
Cost Accounting, Business Education: 7709.41.
ERIC Educational Resources Information Center
Carino, Mariano G.
Cost accounting principles and procedures provide students with sufficient background to apply cost accounting factors to service and manufacturing businesses. Overhead, materials, goods in process, and finished goods are emphasized. Students complete a practice set in the course, which has guidelines, performance objectives, learning activities…
Spontaneous Discovery and Use of Categorical Structures
1992-02-15
…must be defined by a set of necessary and sufficient features, an assumption that has been strongly criticized in recent years (e.g., Wittgenstein …).
On the consistency among different approaches for nuclear track scanning and data processing
NASA Astrophysics Data System (ADS)
Inozemtsev, K. O.; Kushin, V. V.; Kodaira, S.; Shurshakov, V. A.
2018-04-01
The article describes various approaches for space radiation track measurement using the CR-39™ detector (Tastrak). The results of comparing different methods for track scanning and data processing are presented, and basic algorithms for determining track parameters are described. Every approach involves an individual set of measured track parameters. For two sets, track scanning in the plane of the detector surface is sufficient (2-D measurement); the third set requires scanning in an additional projection (3-D measurement). An experimental comparison of the considered techniques was made with the use of accelerated heavy Ar, Fe and Kr ions.
Allar, Ayse D; Beler Baykal, Bilsen
2016-01-01
ECOSAN is a recent domestic wastewater management concept which suggests segregation at the source. One of these streams, yellow water (human urine) has the potential to be used as fertilizer, directly or indirectly, because of its rich content of plant nutrients. One physicochemical method for indirect use is adsorption/ion exchange using clinoptilolite. This paper aims to present the results of a scenario focusing on possible diversion of urine and self-sufficiency of nutrients recovered on site through the use of this process, using actual demographic and territorial information from an existing summer housing site. Specifically, this paper aims to answer the questions: (i) how much nitrogen can be recovered to be used as fertilizer by diverting urine? and (ii) is this sufficient or in surplus within the model housing site? This sets an example of resource-oriented sanitation using stream segregation as a wastewater management strategy in a small community. Nitrogen was taken as the basis of calculations/predictions and the focus was placed on whether nitrogen is self-sufficient or in excess as fertilizer for use within the premises. The results reveal that the proposed application makes sense and that urine coming from the housing site is self-sufficient as fertilizer within the housing site itself.
Rasch analysis on OSCE data : An illustrative example.
Tor, E; Steketee, C
2011-01-01
The Objective Structured Clinical Examination (OSCE) is a widely used tool for the assessment of clinical competence in health professional education. The goal of the OSCE is to make reproducible decisions on pass/fail status as well as students' levels of clinical competence according to their demonstrated abilities based on the scores. This paper explores the use of the polytomous Rasch model in evaluating the psychometric properties of OSCE scores through a case study. The authors analysed an OSCE data set (comprising 11 stations) for 80 fourth year medical students based on the polytomous Rasch model in an effort to answer two research questions: (1) Do the clinical tasks assessed in the 11 OSCE stations map on to a common underlying construct, namely clinical competence? (2) What other insights can Rasch analysis offer in terms of scaling, item analysis and instrument validation over and above the conventional analysis based on classical test theory? The OSCE data set has demonstrated a sufficient degree of fit to the Rasch model (χ² = 17.060, df = 22, p = 0.76) indicating that the 11 OSCE station scores have sufficient psychometric properties to form a measure for a common underlying construct, i.e. clinical competence. Individual OSCE station scores with good fit to the Rasch model (p > 0.1 for all χ² statistics) further corroborated the characteristic of unidimensionality of the OSCE scale for clinical competence. A Person Separation Index (PSI) of 0.704 indicates a sufficient level of reliability for the OSCE scores. Other useful findings from the Rasch analysis that provide insights, over and above the analysis based on classical test theory, are also exemplified using the data set. The polytomous Rasch model provides a useful and supplementary approach to the calibration and analysis of OSCE examination data.
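For reference, one standard polytomous Rasch formulation (the partial credit parameterization) gives the probability that student n obtains score x on station i in terms of the person ability θ_n and station thresholds δ_ik; this is a generic statement of the model class, as the paper does not reproduce the exact parameterization used by the analysis software.

```latex
P(X_{ni} = x) =
\frac{\exp\!\Big(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\Big)}
     {\sum_{j=0}^{m_i} \exp\!\Big(\sum_{k=0}^{j} (\theta_n - \delta_{ik})\Big)},
\qquad x = 0, 1, \dots, m_i,
```

with the usual convention that the k = 0 term in each sum is identically zero.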
A new modified conjugate gradient coefficient for solving system of linear equations
NASA Astrophysics Data System (ADS)
Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.
2017-09-01
Conjugate gradient (CG) method is an evolution of computational method in solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven to be effective in solving real-life applications. Although this field has received copious amount of attention in recent years, some of the new approaches of CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. This new CG is tested on a set of test functions under exact line search. Its performance is then compared to that of some of the well-known previous CG methods based on number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency amongst all the methods tested. This paper also includes an application of the new CG algorithm for solving large systems of linear equations.
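The new coefficient itself is not stated in the abstract, so the sketch below shows only the classic conjugate gradient iteration for a symmetric positive-definite linear system, the setting targeted by the application mentioned above; a modified method would differ mainly in how the coefficient beta is computed.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A with the classic CG
    iteration (standard beta = rs_new / rs_old update shown; a modified CG
    coefficient would replace the beta line below)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # beta = rs_new / rs_old
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(conjugate_gradient(A, np.array([1.0, 2.0])))  # ~ [0.0909, 0.6364]
```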
Ebbinghaus, Simon; Meister, Konrad; Prigozhin, Maxim B; Devries, Arthur L; Havenith, Martina; Dzubiella, Joachim; Gruebele, Martin
2012-07-18
Short-range ice binding and long-range solvent perturbation both have been implicated in the activity of antifreeze proteins and antifreeze glycoproteins. We study these two mechanisms for activity of winter flounder antifreeze peptide. Four mutants are characterized by freezing point hysteresis (activity), circular dichroism (secondary structure), Förster resonance energy transfer (end-to-end rigidity), molecular dynamics simulation (structure), and terahertz spectroscopy (long-range solvent perturbation). Our results show that the short-range model is sufficient to explain the activity of our mutants, but the long-range model provides a necessary condition for activity: the most active peptides in our data set all have an extended dynamical hydration shell. It appears that antifreeze proteins and antifreeze glycoproteins have reached different evolutionary solutions to the antifreeze problem, utilizing either a few precisely positioned OH groups or a large quantity of OH groups for ice binding, assisted by long-range solvent perturbation. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
De Biase, Pablo M; Markosyan, Suren; Noskov, Sergei
2015-02-05
The transport of ions and solutes by biological pores is central to cellular processes and has a variety of applications in modern biotechnology. The time scale involved in polymer transport across a nanopore is beyond the reach of conventional MD simulations. Moreover, experimental studies lack sufficient resolution to provide details on the molecular underpinning of the transport mechanisms. BROMOC, the code presented herein, performs Brownian dynamics simulations, both serial and parallel, up to several milliseconds long. BROMOC can be used to model large biological systems. IMC-MACRO software allows for the development of effective potentials for solute-ion interactions based on radial distribution functions from all-atom MD. BROMOC Suite also provides a versatile set of tools to perform a wide variety of preprocessing and post-simulation analyses. We illustrate a potential application with ion and ssDNA transport in the MspA nanopore. © 2014 Wiley Periodicals, Inc.
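The code below is not the BROMOC API; it is a minimal sketch of the overdamped-Langevin (Ermak-McCammon) propagation step that Brownian dynamics engines of this kind implement, with illustrative units and a toy free-diffusion run.

```python
import numpy as np

kB_T = 2.494   # kJ/mol, roughly thermal energy at 300 K

def brownian_step(r, force, D, dt, rng):
    """One overdamped-Langevin (Ermak-McCammon) update.

    r     : (N, 3) positions in nm
    force : (N, 3) systematic forces in kJ/mol/nm
    D     : (N,)   diffusion coefficients in nm^2/ns
    dt    : time step in ns
    """
    drift = (D[:, None] / kB_T) * force * dt
    noise = rng.normal(size=r.shape) * np.sqrt(2.0 * D[:, None] * dt)
    return r + drift + noise

# Toy usage: 100 non-interacting ions diffusing freely for 10 ns.
rng = np.random.default_rng(0)
r0 = rng.uniform(-5.0, 5.0, size=(100, 3))
r = r0.copy()
D = np.full(100, 2.0)                       # illustrative ion diffusivity
for _ in range(10_000):
    r = brownian_step(r, np.zeros_like(r), D, dt=0.001, rng=rng)
# RMS displacement should be close to sqrt(6*D*t) ~ 11 nm for these values.
print("RMS displacement (nm):", np.sqrt(((r - r0) ** 2).sum(axis=1).mean()))
```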
Virtual reality to simulate large lighting with high efficiency LEDs
NASA Astrophysics Data System (ADS)
Blandet, Thierry; Coutelier, Gilles; Meyrueis, Patrick
2011-05-01
When a city or local authority wishes to emphasize its historical heritage through street lighting, or to set up lights during the festive season, it calls upon the skills of a lighting designer. The lighting designer proposes concepts, ideas, and lighting schemes, and uses simulation to present them. Lighting technologies are also evolving rapidly, and new lighting systems offer features that lighting designers are now integrating into their projects. Street lights consume a great deal of energy, so lighting projects now take energy saving into account. Lighting systems based on high-efficiency LEDs meet today's lighting needs while addressing sustainable development issues and enabling a new creative dimension. Lighting simulation can handle these parameters. Static images or video simulations are no longer sufficient: stereoscopy and virtual reality techniques allow better communication and a better understanding of projects. Virtual reality offers new possibilities for interaction, freedom of movement within a scene, and the presentation of variants or interactive simulations.
Solar coronal loop heating by cross-field wave transport
NASA Technical Reports Server (NTRS)
Amendt, Peter; Benford, Gregory
1989-01-01
Solar coronal arches heated by turbulent ion-cyclotron waves may suffer significant cross-field transport by these waves. Nonlinear processes fix the wave-propagation speed at about a tenth of the ion thermal velocity, which seems sufficient to spread heat from a central core into a large cool surrounding cocoon. Waves heat cocoon ions both through classical ion-electron collisions and by turbulent stochastic ion motions. Plausible cocoon sizes set by wave damping are on the order of kilometers, although the wave-emitting core may be only 100 m wide. Detailed study of nonlinear stabilization and energy-deposition rates predicts that nearby regions can heat to values intermediate between the foot-point temperatures of roughly an electron volt and the core temperature of about 100 eV; the core itself is heated by anomalous Ohmic losses. A volume of 100 times the core volume may be affected. This qualitative result may solve a persistent problem with current-driven coronal heating: that it affects only small volumes and provides no way to produce the extended warm structures perceptible to existing instruments.
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
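As a rough illustration of the idea (not the authors' implementation), the sketch below uses linear programming to find non-negative quadrature weights on a fixed abscissa grid that exactly match a handful of raw moments. A plain linear cost stands in for the entropy-inspired cost mentioned above, and the grid and target distribution are invented.

```python
import numpy as np
from scipy.optimize import linprog

def quadrature_from_moments(abscissas, target_moments, cost=None):
    """Non-negative quadrature weights on a fixed grid that reproduce a set of
    raw moment constraints, found by linear programming.  A zero (or any
    linear) cost typically yields a basic feasible solution with no more
    non-zero weights than there are constraints."""
    x = np.asarray(abscissas, dtype=float)
    m = np.asarray(target_moments, dtype=float)
    K = len(m)
    A_eq = np.vander(x, N=K, increasing=True).T   # row k holds x_i**k
    c = np.zeros_like(x) if cost is None else np.asarray(cost, float)
    res = linprog(c, A_eq=A_eq, b_eq=m, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.x

# Toy example: sparse representation of a lognormal-like size distribution
# from its first four raw moments (illustrative numbers, not CCN data).
grid = np.geomspace(0.01, 1.0, 50)            # particle diameters, micrometres
true_w = np.exp(-0.5 * ((np.log(grid) - np.log(0.1)) / 0.5) ** 2)
true_w /= true_w.sum()
moments = [np.sum(true_w * grid**k) for k in range(4)]
w_sparse = quadrature_from_moments(grid, moments)
print(np.count_nonzero(w_sparse > 1e-12), "non-zero weights out of", grid.size)
```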
EU Laws on Privacy in Genomic Databases and Biobanking.
Townend, David
2016-03-01
Both the European Union and the Council of Europe have a bearing on privacy in genomic databases and biobanking. In terms of legislation, the processing of personal data as it relates to the right to privacy is currently largely regulated in Europe by Directive 95/46/EC, which requires that processing be "fair and lawful" and follow a set of principles, meaning that the data be processed only for stated purposes, be sufficient for the purposes of the processing, be kept only for so long as is necessary to achieve those purposes, and be kept securely and only in an identifiable state for such time as is necessary for the processing. The European privacy regime does not require the de-identification (anonymization) of personal data used in genomic databases or biobanks, and alongside this practice informed consent as well as governance and oversight mechanisms provide for the protection of genomic data. © 2016 American Society of Law, Medicine & Ethics.
In vivo imaging of Dauer-specific neuronal remodeling in C. elegans.
Schroeder, Nathan E; Flatt, Kristen M
2014-09-04
The mechanisms controlling stress-induced phenotypic plasticity in animals are frequently complex and difficult to study in vivo. A classic example of stress-induced plasticity is the dauer stage of C. elegans. Dauers are an alternative developmental larval stage formed under conditions of low concentrations of bacterial food and high concentrations of a dauer pheromone. Dauers display extensive developmental and behavioral plasticity. For example, a set of four inner-labial quadrant (IL2Q) neurons undergo extensive reversible remodeling during dauer formation. Utilizing the well-known environmental pathways regulating dauer entry, a previously established method for the production of crude dauer pheromone from large-scale liquid nematode cultures is demonstrated. With this method, a concentration of 50,000 - 75,000 nematodes/ml of liquid culture is sufficient to produce a highly potent crude dauer pheromone. The crude pheromone potency is determined by a dose-response bioassay. Finally, the methods used for in vivo time-lapse imaging of the IL2Qs during dauer formation are described.
Mukhopadhyay, Tushita; Musser, Andrew J; Puttaraju, Boregowda; Dhar, Joydeep; Friend, Richard H; Patil, Satish
2017-03-02
In this work, we have rationally designed and synthesized a novel thiophene-diketopyrrolopyrrole (TDPP)-vinyl-based dimer. We have investigated the optical and electronic properties and have probed the photophysical dynamics using transient absorption to investigate the possibility of singlet exciton fission. These measurements revealed extremely rapid decay to the ground state (<50 ps), which we confirm is due to intramolecular excitonic processes rather than large-scale conformational change enabled by the vinyl linker. In all cases, the main excited state appears to be "dark", suggesting rapid internal conversion into a dark 2Ag-type singlet state. We found no evidence of triplet formation in TDPP-V-TDPP under direct photoexcitation. This may be a consequence of significant singlet stabilization in the dimer, bringing it below the energy needed to form two triplets. Our studies of this model compound provide valuable lessons for the design of novel triplet-forming materials and highlight the need for more broadly applicable design principles.
Machine learning based job status prediction in scientific clusters
Yoo, Wucherl; Sim, Alex; Wu, Kesheng
2016-09-01
Large high-performance computing systems are built with an increasing number of components, with more CPU cores, more memory, and more storage space. At the same time, scientific applications have been growing in complexity. Together, these trends are leading to more frequent unsuccessful job statuses on HPC systems. From measured job statuses, 23.4% of CPU time was spent on unsuccessful jobs. Here, we set out to study whether these unsuccessful job statuses could be anticipated from known job characteristics. To explore this possibility, we have developed a job status prediction method for the execution of jobs on scientific clusters. The Random Forests algorithm was applied to extract and characterize the patterns of unsuccessful job statuses. Experimental results show that our method can predict the unsuccessful job statuses from the monitored ongoing job executions in 99.8% of cases, with 83.6% recall and 94.8% precision. Lastly, this prediction accuracy is sufficiently high that it can be used to trigger mitigation procedures for predicted failures.
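A minimal sketch of the kind of Random Forests classifier described above, using scikit-learn. The monitored metrics and labels from the study are not listed in the abstract, so the feature names and the synthetic data here are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

# Hypothetical per-job features (the study's actual monitored metrics differ).
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.exponential(2.0, n),      # requested node-hours
    rng.uniform(0, 1, n),         # mean CPU utilisation so far
    rng.uniform(0, 1, n),         # memory high-water mark (fraction)
    rng.integers(1, 64, n),       # node count
])
# Synthetic label: low utilisation plus high memory pressure fails more often.
p_fail = 1.0 / (1.0 + np.exp(-(3 * X[:, 2] - 4 * X[:, 1])))
y = rng.random(n) < p_fail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("recall:", recall_score(y_te, pred), "precision:", precision_score(y_te, pred))
```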
Manifesting enhanced cancellations in supergravity: integrands versus integrals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bern, Zvi; Enciso, Michael; Parra-Martinez, Julio
2017-05-25
We have found examples of "enhanced ultraviolet cancellations" with no known standard-symmetry explanation in a variety of supergravity theories. Furthermore, by examining one- and two-loop examples in four- and five-dimensional half-maximal supergravity, we argue that enhanced cancellations in general cannot be exhibited prior to integration. In light of this, we explore reorganizations of integrands into parts that are manifestly finite and parts that have poor power counting but integrate to zero due to integral identities. At two loops we find that in the large loop-momentum limit the required integral identities follow from Lorentz and SL(2) relabeling symmetry. We carry out a nontrivial check at four loops showing that the identities generated in this way are a complete set. We propose that at L loops the combination of Lorentz and SL(L) symmetry is sufficient for displaying enhanced cancellations when they happen, whenever the theory is known to be ultraviolet finite up to (L - 1) loops.
NASA Astrophysics Data System (ADS)
Boschi, Lapo
2006-10-01
I invert a large set of teleseismic phase-anomaly observations to derive tomographic maps of fundamental-mode surface wave phase velocity, first via ray theory, then accounting for finite-frequency effects through scattering theory, in the far-field approximation and neglecting mode coupling. I make use of a multiple-resolution pixel parametrization which, under the assumption of sufficient data coverage, should be adequate to represent strongly oscillatory Fréchet kernels. The parametrization is finer over North America, a region particularly well covered by the data. For each surface-wave mode where phase-anomaly observations are available, I derive a wide spectrum of plausible, differently damped solutions; I then conduct a trade-off analysis and select as the optimal solution model the one associated with the point of maximum curvature on the trade-off curve. I repeat this exercise in both theoretical frameworks, to find that the selected scattering- and ray-theoretical phase-velocity maps are coincident in pattern, and differ only slightly in amplitude.
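A generic way to implement the "point of maximum curvature on the trade-off curve" selection rule is sketched below. The damping levels and misfit/norm values are invented, and the use of logarithmic axes is a common convention rather than something stated in the abstract.

```python
import numpy as np

def corner_of_tradeoff_curve(residual_norms, model_norms):
    """Index of the maximum-curvature point of the (L-shaped) trade-off curve,
    computed on log axes with finite differences.  Inputs must be ordered by
    increasing damping."""
    x = np.log(np.asarray(residual_norms, float))   # data-misfit axis
    y = np.log(np.asarray(model_norms, float))      # model-norm axis
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return int(np.argmax(curvature))

# Hypothetical trade-off values for ten damping levels (not the paper's data).
residuals = [0.31, 0.32, 0.34, 0.38, 0.45, 0.60, 0.85, 1.3, 2.1, 3.5]
norms     = [95.0, 60.0, 38.0, 24.0, 15.0, 9.5, 6.3, 4.5, 3.4, 2.8]
print("selected damping index:", corner_of_tradeoff_curve(residuals, norms))
```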
BOREAS AFM-07 SRC Surface Meteorological Data
NASA Technical Reports Server (NTRS)
Osborne, Heather; Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Young, Kim; Wittrock, Virginia; Shewchuck, Stan; Smith, David E. (Technical Monitor)
2000-01-01
The Saskatchewan Research Council (SRC) collected surface meteorological and radiation data from December 1993 until December 1996. The data set comprises Suite A (meteorological and energy balance measurements) and Suite B (diffuse solar and longwave measurements) components. Suite A measurements were taken at each of ten sites, and Suite B measurements were made at five of the Suite A sites. The data cover an approximate area of 500 km (North-South) by 1000 km (East-West) (a large portion of northern Manitoba and northern Saskatchewan). The measurement network was designed to provide researchers with a sufficient record of near-surface meteorological and radiation measurements. The data are provided in tabular ASCII files, and were collected by Aircraft Flux and Meteorology (AFM)-7. The surface meteorological and radiation data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).
Anatomy-driven design of a prototype video laryngoscope for extremely low birth weight infants
NASA Astrophysics Data System (ADS)
Baker, Katherine; Tremblay, Eric; Karp, Jason; Ford, Joseph; Finer, Neil; Rich, Wade
2010-11-01
Extremely low birth weight (ELBW) infants frequently require endotracheal intubation for assisted ventilation or as a route for administration of drugs or exogenous surfactant. In adults and less premature infants, the risks of this intubation can be greatly reduced using video laryngoscopy, but current products are too large and incorrectly shaped to visualize an ELBW infant's airway anatomy. We design and prototype a video laryngoscope using a miniature camera set in a curved acrylic blade with a 3×6-mm cross section at the tip. The blade provides a mechanical structure for stabilizing the tongue and acts as a light guide for an LED light source, located remotely to avoid excessive local heating at the tip. The prototype is tested on an infant manikin and found to provide sufficient image quality and mechanical properties to facilitate intubation. Finally, we show a design for a neonate laryngoscope incorporating a wafer-level microcamera that further reduces the tip cross section and offers the potential for low cost manufacture.
Estimating the Magnetic Field Strength in Hot Jupiters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadav, Rakesh K.; Thorngren, Daniel P., E-mail: rakesh_yadav@fas.harvard.edu
A large fraction of known Jupiter-like exoplanets are inflated as compared to Jupiter. These “hot” Jupiters orbit close to their parent star and are bombarded with intense starlight. Many theories have been proposed to explain their radius inflation, and several suggest that a small fraction of the incident starlight is injected into the planetary interior, which helps to puff up the planet. How will such energy injection affect the planetary dynamo? In this Letter, we estimate the surface magnetic field strength of hot Jupiters using scaling arguments that relate the energy available in planetary interiors to the dynamo-generated magnetic fields. We find that if we take into account the energy injected into the planetary interior that is sufficient to inflate hot Jupiters to the observed radii, then the resulting dynamo should be able to generate magnetic fields that are more than an order of magnitude stronger than the Jovian values. Our analysis highlights the potential fundamental role of the stellar light in setting the field strength in hot Jupiters.
Advising caution in studying seasonal oscillations in crime rates.
Dong, Kun; Cao, Yunbai; Siercke, Beatrice; Wilber, Matthew; McCalla, Scott G
2017-01-01
Most types of crime are known to exhibit seasonal oscillations, yet the annual variations in the amplitude of this seasonality and their causes are still uncertain. Using a large collection of data from the Houston and Los Angeles Metropolitan areas, we extract and study the seasonal variations in aggravated assault, break-in and theft from vehicles, burglary, grand theft auto, rape, robbery, theft, and vandalism for many years from the raw daily data. Our approach allows us to see various long-term and seasonal trends and aberrations in crime rates that have not been reported before. We then apply an ecologically motivated stochastic differential equation to reproduce the data. Our model relies only on social interaction terms, and not on any exigent factors, to reproduce both the seasonality and the seasonal aberrations observed in our data set. Furthermore, the stochasticity in the system is sufficient to reproduce the variations seen in the seasonal oscillations from year to year. Researchers should be very careful about trying to correlate these oscillations with external factors.
A systematic review of interventions for anxiety, depression, and PTSD in adult offenders.
Leigh-Hunt, Nicholas; Perry, Amanda
2015-06-01
There is a high prevalence of anxiety and depression in offender populations, but there has been no recent systematic review of interventions to identify what is effective. This systematic review was undertaken to identify randomised controlled trials of pharmacological and non-pharmacological interventions in adult offenders in prison or community settings. A search of five databases identified 14 studies meeting inclusion criteria, which considered the impact of psychological interventions, pharmacological agents, or exercise on levels of depression and anxiety. A narrative synthesis was undertaken and Hedges g effect sizes calculated to allow comparison between studies. Effect sizes for depression interventions ranged from 0.17 to 1.41, for anxiety from 0.61 to 0.71, and for posttraumatic stress disorder from 0 to 1.41. Cognitive behavioural therapy interventions for the reduction of depression and anxiety in adult offenders appear effective in the short term, though a large-scale trial of sufficient duration is needed to confirm this finding. © The Author(s) 2014.
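For reference, Hedges g is Cohen's d multiplied by a small-sample correction factor. The sketch below computes it from summary statistics; the numbers in the example are made up and are not taken from any of the reviewed trials.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Hedges' g: the standardized mean difference (Cohen's d with a pooled SD)
    multiplied by the small-sample correction J = 1 - 3 / (4*(n_t + n_c) - 9)."""
    s_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                         / (n_t + n_c - 2))
    d = (mean_t - mean_c) / s_pooled
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)
    return d * correction

# Hypothetical depression-score example (lower is better, so g is negative):
# intervention mean 12.0 (SD 5.0, n 30) vs control mean 15.5 (SD 6.0, n 28).
print(round(hedges_g(12.0, 5.0, 30, 15.5, 6.0, 28), 2))
```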
Observations of a field-aligned ion/ion-beam instability in a magnetized laboratory plasma
NASA Astrophysics Data System (ADS)
Heuer, P. V.; Weidl, M. S.; Dorst, R. S.; Schaeffer, D. B.; Bondarenko, A. S.; Tripathi, S. K. P.; Van Compernolle, B.; Vincena, S.; Constantin, C. G.; Niemann, C.; Winske, D.
2018-03-01
Collisionless coupling between super Alfvénic ions and an ambient plasma parallel to a background magnetic field is mediated by a set of electromagnetic ion/ion-beam instabilities including the resonant right hand instability (RHI). To study this coupling and its role in parallel shock formation, a new experimental configuration at the University of California, Los Angeles utilizes high-energy and high-repetition-rate lasers to create a super-Alfvénic field-aligned debris plasma within an ambient plasma in the Large Plasma Device. We used a time-resolved fluorescence monochromator and an array of Langmuir probes to characterize the laser plasma velocity distribution and density. The debris ions were observed to be sufficiently super-Alfvénic and dense to excite the RHI. Measurements with magnetic flux probes exhibited a right-hand circularly polarized frequency chirp consistent with the excitation of the RHI near the laser target. We compared measurements to 2D hybrid simulations of the experiment.
Calzone, Kathleen A; Jenkins, Jean; Culp, Stacey; Badzek, Laurie
2017-11-13
The Precision Medicine Initiative will accelerate genomic discoveries that improve health care, necessitating a genomically competent workforce. This study assessed leadership team (administrator/educator) year-long interventions to improve registered nurses' (RNs) capacity to integrate genomics into practice. We examined genomic competency outcomes in 8,150 RNs. Awareness and intention to learn more increased compared with controls. Findings suggest achieving genomic competency requires a longer intervention and support strategies such as infrastructure and policies. Leadership played a role in mobilizing staff, resources, and supporting infrastructure to sustain a large-scale competency effort on an institutional basis. Results demonstrate genomic workforce competency can be attained with leadership support and sufficient time. Our study provides evidence of the critical role health-care leaders play in facilitating genomic integration into health care to improve patient outcomes. Genomics' impact on quality, safety, and cost indicates a leader-initiated national competency effort is achievable and warranted. Published by Elsevier Inc.
Fulford, Janice M.
2003-01-01
A numerical computer model, Transient Inundation Model for Rivers -- 2 Dimensional (TrimR2D), that solves the two-dimensional depth-averaged flow equations is documented and discussed. The model uses a semi-implicit, semi-Lagrangian finite-difference method. It is a variant of the Trim model and has been used successfully in estuarine environments such as San Francisco Bay. The abilities of the model are documented for three scenarios: uniform depth flows, laboratory dam-break flows, and large-scale riverine flows. The model can start computations from a "dry" bed and converge to accurate solutions. Inflows are expressed as source terms, which limits the use of the model to sufficiently long reaches where the flow reaches equilibrium with the channel. The data sets used by the investigation demonstrate that the model accurately propagates flood waves through long river reaches and simulates dam breaks with abrupt water-surface changes.
Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.
Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter
2015-01-01
Obtaining large-scale sequence alignments in a fast and flexible way is an important step in the analysis of next-generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks, or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) to retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
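The following is a plain CPU reference implementation of the Smith-Waterman recurrence with linear gap costs, included only to make the underlying algorithm concrete; PaSWAS itself parallelizes this recurrence on NVIDIA GPGPUs and reports additional per-alignment statistics not shown here.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment with linear gap costs: fill the score
    matrix, then trace back the best-scoring local alignment."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i-1, j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i, j] = max(0, diag, H[i-1, j] + gap, H[i, j-1] + gap)
    # Trace back from the global maximum of H until a zero cell is reached.
    i, j = np.unravel_index(np.argmax(H), H.shape)
    best = int(H[i, j])
    al_a, al_b = [], []
    while H[i, j] > 0:
        if H[i, j] == H[i-1, j-1] + (match if a[i-1] == b[j-1] else mismatch):
            al_a.append(a[i-1]); al_b.append(b[j-1]); i, j = i - 1, j - 1
        elif H[i, j] == H[i-1, j] + gap:
            al_a.append(a[i-1]); al_b.append('-'); i = i - 1
        else:
            al_a.append('-'); al_b.append(b[j-1]); j = j - 1
    return best, ''.join(reversed(al_a)), ''.join(reversed(al_b))

print(smith_waterman("ACACACTA", "AGCACACA"))
```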
Generalized fluctuation-dissipation theorem as a test of the Markovianity of a system
NASA Astrophysics Data System (ADS)
Willareth, Lucian; Sokolov, Igor M.; Roichman, Yael; Lindner, Benjamin
2017-04-01
We study how well a generalized fluctuation-dissipation theorem (GFDT) is suited to test whether a stochastic system is not Markovian. To this end, we simulate a stochastic non-equilibrium model of the mechanosensory hair bundle from the inner ear organ and analyze its spontaneous activity and response to external stimulation. We demonstrate that this two-dimensional Markovian system indeed obeys the GFDT, as long as i) the averaging ensemble is sufficiently large and ii) finite-size effects in estimating the conjugated variable and its susceptibility can be neglected. Furthermore, we test the GFDT also by looking only at a one-dimensional projection of the system, the experimentally accessible position variable. This reduced system is certainly non-Markovian and the GFDT is somewhat violated but not as drastically as for the equilibrium fluctuation-dissipation theorem. We explore suitable measures to quantify the violation of the theorem and demonstrate that for a set of limited experimental data it might be difficult to decide whether the system is Markovian or not.
From species ethics to social concerns: Habermas's critique of "liberal eugenics" evaluated.
Árnason, Vilhjálmur
2014-10-01
Three arguments of Habermas against "liberal eugenics" -- the arguments from consent, responsibility, and instrumentalization -- are critically evaluated and explicated in the light of his discourse ethics and social theory. It is argued that these arguments partly operate at too deep a level and are in part too individualistic and psychological to sufficiently counter the liberal position that he sets out to criticize. This is also due to limitations that prevent discourse ethics from connecting effectively to the moral and political domains, e.g., through a discussion of justice. In spite of these weaknesses, Habermas's thesis is of major relevance and brings up neglected issues in the discussion about eugenic reproductive practices. This relevance has not been duly recognized in bioethics, largely because of the depth of his speculations in philosophical anthropology. It is argued that Habermas's notion of the colonization of the lifeworld could provide the analytical tool needed to build that bridge to the moral and political domain.
Mahendra, V S; Gilborn, L; Bharat, S; Mudoi, R; Gupta, I; George, B; Samson, L; Daly, C; Pulerwitz, J
2007-08-01
AIDS-related stigma and discrimination remain pervasive problems in health care institutions worldwide. This paper reports on stigma-related baseline findings from a study in New Delhi, India to evaluate the impact of a stigma-reduction intervention in three large hospitals. Data were collected via in-depth interviews with hospital staff and HIV-infected patients, surveys with hospital workers (884 doctors, nurses and ward staff) and observations of hospital practices. Interview findings highlighted drivers and manifestations of stigma that are important to address, and that are likely to have wider relevance for other developing country health care settings. These clustered around attitudes towards hospital practices, such as informing family members of a patient's HIV status without his/her consent, burning the linen of HIV-infected patients, charging HIV-infected patients for the cost of infection control supplies, and the use of gloves only with HIV-infected patients. These findings informed the development and evaluation of a culturally appropriate index to measure stigma in this setting. Baseline findings indicate that the stigma index is sufficiently reliable (alpha = 0.74). Higher scores on the stigma index--which focuses on attitudes towards HIV-infected persons--were associated with incorrect knowledge about HIV transmission and discriminatory practices. Stigma scores also varied by type of health care providers--physicians reported the least stigmatising attitudes as compared to nursing and ward staff in the hospitals. The study findings highlight issues particular to the health care sector in limited-resource settings. To be successful, stigma-reduction interventions, and the measures used to assess changes, need to take into account the sociocultural and economic context within which stigma occurs.
Chen, Meng-Yun; Liang, Dan; Zhang, Peng
2015-11-01
Incongruence between different phylogenomic analyses is the main challenge faced by phylogeneticists in the genomic era. To reduce incongruence, phylogenomic studies normally adopt some data filtering approaches, such as reducing missing data or using slowly evolving genes, to improve the signal quality of data. Here, we assembled a phylogenomic data set of 58 jawed vertebrate taxa and 4682 genes to investigate the backbone phylogeny of jawed vertebrates under both concatenation and coalescent-based frameworks. To evaluate the efficiency of extracting phylogenetic signals among different data filtering methods, we chose six highly intractable internodes within the backbone phylogeny of jawed vertebrates as our test questions. We found that our phylogenomic data set exhibits substantial conflicting signal among genes for these questions. Our analyses showed that non-specific data sets that are generated without bias toward specific questions are not sufficient to produce consistent results when there are several difficult nodes within a phylogeny. Moreover, phylogenetic accuracy based on non-specific data is considerably influenced by the size of data and the choice of tree inference methods. To address such incongruences, we selected genes that resolve a given internode but not the entire phylogeny. Notably, not only can this strategy yield correct relationships for the question, but it also reduces inconsistency associated with data sizes and inference methods. Our study highlights the importance of gene selection in phylogenomic analyses, suggesting that simply using a large amount of data cannot guarantee correct results. Constructing question-specific data sets may be more powerful for resolving problematic nodes. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Ohta, Y; Chiba, S; Imai, Y; Kamiya, Y; Arisawa, T; Kitagawa, A
2006-12-01
We examined whether ascorbic acid (AA) deficiency aggravates water immersion restraint stress (WIRS)-induced gastric mucosal lesions in genetically scorbutic ODS rats. ODS rats received a scorbutic diet with either distilled water containing AA (1 g/l) or distilled water alone for 2 weeks. AA-deficient rats had 12% of the gastric mucosal AA content of AA-sufficient rats. AA-deficient rats showed more severe gastric mucosal lesions than AA-sufficient rats at 1, 3 or 6 h after the onset of WIRS, although AA-deficient rats had only a slight decrease in gastric mucosal AA content, while AA-sufficient rats had a large decrease in that content. AA-deficient rats showed greater decreases in gastric mucosal nonprotein SH and vitamin E contents and a greater increase in gastric mucosal lipid peroxide content than AA-sufficient rats at 1, 3 or 6 h of WIRS. These results indicate that AA deficiency aggravates WIRS-induced gastric mucosal lesions in ODS rats by enhancing oxidative damage in the gastric mucosa.
14 CFR 23.773 - Pilot compartment view.
Code of Federal Regulations, 2010 CFR
2010-01-01
... side windows sufficiently large to provide the view specified in paragraph (a)(1) of this section... be shown that the windshield and side windows can be easily cleared by the pilot without interruption...
Energy-Dependent Ionization States of Shock-Accelerated Particles in the Solar Corona
NASA Technical Reports Server (NTRS)
Reames, Donald V.; Ng, C. K.; Tylka, A. J.
2000-01-01
We examine the range of possible energy dependence of the ionization states of ions that are shock-accelerated from the ambient plasma of the solar corona. If acceleration begins in a region of moderate density, sufficiently low in the corona, ions above about 0.1 MeV/amu approach an equilibrium charge state that depends primarily upon their speed and only weakly on the plasma temperature. We suggest that the large variations of the charge states with energy for ions such as Si and Fe observed in the 1997 November 6 event are consistent with stripping in moderately dense coronal plasma during shock acceleration. In the large solar-particle events studied previously, acceleration occurs sufficiently high in the corona that even Fe ions up to 600 MeV/amu are not stripped of electrons.
Front propagation and clustering in the stochastic nonlocal Fisher equation
NASA Astrophysics Data System (ADS)
Ganan, Yehuda A.; Kessler, David A.
2018-04-01
In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
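A minimal sketch of the deterministic cutoff approximation described above (not the authors' code): a 1D nonlocal Fisher equation with a Gaussian interaction kernel, explicit Euler time stepping, and the linear growth term switched off below a cutoff density. All parameter values are illustrative.

```python
import numpy as np

# du/dt = D u_xx + u * (1 - K*u), growth switched off wherever u < u_c,
# with a Gaussian kernel K of range sigma (illustrative parameters).
L, N = 90.0, 300
dx = L / N
x = np.arange(N) * dx
D, sigma, u_c = 1.0, 4.0, 1e-3
dt = 0.2 * dx**2 / D                         # explicit-Euler stability margin

half = int(5 * sigma / dx)                   # truncate the kernel at 5 sigma
xk = np.arange(-half, half + 1) * dx
kern = np.exp(-0.5 * (xk / sigma) ** 2)
kern /= kern.sum() * dx                      # so K*u approximates a local average

u = np.where(x < 5.0, 1.0, 0.0)              # front starts at the left edge
for _ in range(int(30.0 / dt)):
    up = np.pad(u, 1, mode="edge")           # zero-flux boundaries
    lap = (up[2:] - 2.0 * u + up[:-2]) / dx**2
    u_bar = np.convolve(u, kern, mode="same") * dx
    growth = np.where(u > u_c, u * (1.0 - u_bar), 0.0)
    u = u + dt * (D * lap + growth)

# The front advances steadily; depending on the kernel range and cutoff, the
# profile behind it can break up into the clusters described above.
print("front position:", x[np.flatnonzero(u > 0.5).max()])
```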
Initial data sets for the Schwarzschild spacetime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Lobo, Alfonso Garcia-Parrado; Kroon, Juan A. Valiente; School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS
2007-01-15
A characterization of initial data sets for the Schwarzschild spacetime is provided. This characterization is obtained by performing a 3+1 decomposition of a certain invariant characterization of the Schwarzschild spacetime given in terms of concomitants of the Weyl tensor. This procedure renders a set of necessary conditions--which can be written in terms of the electric and magnetic parts of the Weyl tensor and their concomitants--for an initial data set to be a Schwarzschild initial data set. Our approach also provides a formula for a static Killing initial data set candidate--a KID candidate. Sufficient conditions for an initial data set to be a Schwarzschild initial data set are obtained by supplementing the necessary conditions with the requirement that the initial data set possesses a stationary Killing initial data set of the form given by our KID candidate. Thus, we obtain an algorithmic procedure for checking whether a given initial data set is Schwarzschildean or not.
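For context, any vacuum initial data set obtained from such a 3+1 decomposition must in addition satisfy the standard Hamiltonian and momentum constraints; they are reproduced below as background, and are not the Weyl-concomitant conditions derived in the paper.

```latex
% Standard vacuum constraints on a 3+1 initial data set (\gamma_{ij}, K_{ij});
% the Schwarzschild characterization adds conditions on the electric and
% magnetic parts of the Weyl tensor on top of these.
\begin{align}
  {}^{(3)}R + K^2 - K_{ij}K^{ij} &= 0, && \text{(Hamiltonian constraint)}\\
  D_j\!\left(K^{ij} - \gamma^{ij}K\right) &= 0, && \text{(momentum constraint)}
\end{align}
```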
Hansoti, Bhakti; Jenson, Alexander; Kironji, Antony G; Katz, Joanne; Levin, Scott; Rothman, Richard; Kelen, Gabor D; Wallis, Lee A
2017-01-01
In low resource settings, an inadequate number of trained healthcare workers and high volumes of children presenting to Primary Healthcare Centers (PHC) result in prolonged waiting times and significant delays in identifying and evaluating critically ill children. The Sick Children Require Emergency Evaluation Now (SCREEN) program, a simple six-question screening algorithm administered by lay healthcare workers, was developed in 2014 to rapidly identify critically ill children and to expedite their care at the point of entry into a clinic. We sought to determine the impact of SCREEN on waiting times for critically ill children post real world implementation in Cape Town, South Africa. This is a prospective, observational implementation-effectiveness hybrid study that sought to determine: (1) the impact of SCREEN implementation on waiting times as a primary outcome measure, and (2) the effectiveness of the SCREEN tool in accurately identifying critically ill children when utilised by the QM and adherence by the QM to the SCREEN algorithm as secondary outcome measures. The study was conducted in two phases, Phase I control (pre-SCREEN implementation- three months in 2014) and Phase II (post-SCREEN implementation-two distinct three month periods in 2016). In Phase I, 1600 (92.38%) of 1732 children presenting to 4 clinics, had sufficient data for analysis and comprised the control sample. In Phase II, all 3383 of the children presenting to the 26 clinics during the sampling time frame had sufficient data for analysis. The proportion of critically ill children who saw a professional nurse within 10 minutes increased tenfold from 6.4% to 64% (Phase I to Phase II) with the median time to seeing a professional nurse reduced from 100.3 minutes to 4.9 minutes, (p < .001, respectively). Overall layperson screening compared to Integrated Management of Childhood Illnesses (IMCI) designation by a nurse had a sensitivity of 94.2% and a specificity of 88.1%, despite large variance in adherence to the SCREEN algorithm across clinics. The SCREEN program when implemented in a real-world setting can significantly reduce waiting times for critically ill children in PHCs, however further work is required to improve the implementation of this innovative program.
System for producing a uniform rubble bed for in situ processes
Galloway, Terry R.
1983-01-01
A method and a cutter for producing a large cavity filled with a uniform bed of rubblized oil shale or other material, for in situ processing. A raise drill head (72) has a hollow body (76) with a generally circular base and sloping upper surface. A hollow shaft (74) extends from the hollow body (76). Cutter teeth (78) are mounted on the upper surface of the body (76) and relatively small holes (77) are formed in the body (76) between the cutter teeth (78). Relatively large peripheral flutes (80) around the body (76) allow material to drop below the drill head (72). A pilot hole is drilled into the oil shale deposit. The pilot hole is reamed into a large diameter hole by means of a large diameter raise drill head or cutter to produce a cavity filled with rubble. A flushing fluid, such as air, is circulated through the pilot hole during the reaming operation to remove fines through the raise drill, thereby removing sufficient material to create sufficient void space, and allowing the larger particles to fill the cavity and provide a uniform bed of rubblized oil shale.
Smith, Katherine Elizabeth; Fooks, Gary; Gilmore, Anna B; Collin, Jeff; Weishaar, Heide
2015-04-01
Over the past fifteen years, an interconnected set of regulatory reforms, known as Better Regulation, has been adopted across Europe, marking a significant shift in the way that European Union policies are developed. There has been little exploration of the origins of these reforms, which include mandatory ex ante impact assessment. Drawing on documentary and interview data, this article discusses how and why large corporations, notably British American Tobacco (BAT), worked to influence and promote these reforms. Our analysis highlights (1) how policy entrepreneurs with sufficient resources (such as large corporations) can shape the membership and direction of advocacy coalitions; (2) the extent to which "think tanks" may be prepared to lobby on behalf of commercial clients; and (3) why regulated industries (including tobacco) may favor the use of "evidence tools," such as impact assessments, in policy making. We argue that a key aspect of BAT's ability to shape regulatory reform involved the deliberate construction of a vaguely defined idea that could be strategically adapted to appeal to diverse constituencies. We discuss the theoretical implications of this finding for the Advocacy Coalition Framework, as well as the practical implications of the findings for efforts to promote transparency and public health in the European Union. Copyright © 2015 by Duke University Press.
Smith, Katherine E.; Fooks, Gary; Gilmore, Anna B.; Collin, Jeff; Weishaar, Heide
2015-01-01
Over the past fifteen years, an inter-connected set of regulatory reforms, known as Better Regulation, has been adopted across Europe, marking a significant shift in the way European Union (EU) policies are developed. There has been little exploration of the origins of these reforms, which include mandatory ex-ante impact assessment. Drawing on documentary and interview data, this paper discusses how and why large corporations, notably British American Tobacco (BAT), worked to influence and promote these reforms. Our analysis highlights: (i) how policy entrepreneurs with sufficient resources (such as large corporations) can shape the membership and direction of advocacy coalitions; (ii) the extent to which ‘think tanks’ may be prepared to lobby on behalf of commercial clients; and (iii) why regulated industries (including tobacco) may favour the use of ‘evidence-tools’, such as impact assessments, in policymaking. We argue a key aspect of BAT’s ability to shape regulatory reform involved the deliberate construction of a vaguely defined idea that could be strategically adapted to appeal to diverse constituencies. We discuss the theoretical implications of this finding for the ‘Advocacy Coalition Framework’, as well as the practical implications of the findings for efforts to promote ‘transparency’ and public health in the EU. PMID:25646389
NASA Astrophysics Data System (ADS)
Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.
2018-07-01
Large-scale free-piston driven expansion tubes have uniquely high total pressure capabilities which make them an important resource for development of access-to-space scramjet engine technology. However, many aspects of their operation are complex, and their test flows are fundamentally unsteady and difficult to measure. While computational fluid dynamics methods provide an important tool for quantifying these flows, these calculations become very expensive with increasing facility size and therefore have to be carefully constructed to ensure sufficient accuracy is achieved within feasible computational times. This study examines modelling strategies for a Mach 10 scramjet test condition developed for The University of Queensland's X3 facility. The present paper outlines the challenges associated with test flow reconstruction, describes the experimental set-up for the X3 experiments, and then details the development of an experimentally tuned quasi-one-dimensional CFD model of the full facility. The 1-D model, which accurately captures longitudinal wave processes, is used to calculate the transient flow history in the shock tube. This becomes the inflow to a higher-fidelity 2-D axisymmetric simulation of the downstream facility, detailed in the Part 2 companion paper, leading to a validated, fully defined nozzle exit test flow.