Certified randomness in quantum physics.
Acín, Antonio; Masanes, Lluis
2016-12-07
The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.
A Voice-Radio Method for Collecting Human Factors Data.
ERIC Educational Resources Information Center
Askren, William B.; And Others
Available methods for collecting human factors data rely heavily on observations, interviews, and questionnaires. A need exists for other methods. The feasibility of using two-way voice-radio for this purpose was studied. The data collection methodology consisted of a human factors analyst talking from a radio base station with technicians wearing…
An Analytical Method for Measuring Competence in Project Management
ERIC Educational Resources Information Center
González-Marcos, Ana; Alba-Elías, Fernando; Ordieres-Meré, Joaquín
2016-01-01
The goal of this paper is to present a competence assessment method in project management that is based on participants' performance and value creation. It seeks to close an existing gap in competence assessment in higher education. The proposed method relies on information and communication technology (ICT) tools and combines Project Management…
Accelerated Colorimetric Micro-assay for Screening Mold Inhibitors
Carol A. Clausen; Vina W. Yang
2014-01-01
Rapid quantitative laboratory test methods are needed to screen potential antifungal agents. Existing laboratory test methods are relatively time consuming, may require specialized test equipment and rely on subjective visual ratings. A quantitative, colorimetric micro-assay has been developed that uses XTT tetrazolium salt to metabolically assess mold spore...
Future Issues and Perspectives in the Evaluation of Social Development.
ERIC Educational Resources Information Center
Marsden, David; Oakley, Peter
1991-01-01
An instrumental/technocratic approach to evaluation of social development relies on primarily quantitative methods. An interpretive approach resists claims to legitimacy and authority of "experts" and questions existing interpretations. The latter approach is characterized by cultural relativism and subjectivity. (SK)
DOT National Transportation Integrated Search
2007-09-01
Two competing approaches to travel demand modeling exist today. The more traditional 4-step travel demand models rely on aggregate demographic data at a traffic analysis zone (TAZ) level. Activity-based microsimulation methods employ more robus...
A graph-based semantic similarity measure for the gene ontology.
Alvarez, Marco A; Yan, Changhui
2011-12-01
Existing methods for calculating semantic similarities between pairs of Gene Ontology (GO) terms and gene products often rely on external databases like Gene Ontology Annotation (GOA) that annotate gene products using the GO terms. This dependency leads to some limitations in real applications. Here, we present a semantic similarity algorithm (SSA) that relies exclusively on the GO. When calculating the semantic similarity between a pair of input GO terms, SSA takes into account the shortest path between them, the depth of their nearest common ancestor, and a novel similarity score calculated between the definitions of the involved GO terms. In our work, we use SSA to calculate semantic similarities between pairs of proteins by combining pairwise semantic similarities between the GO terms that annotate the involved proteins. The reliability of SSA was evaluated by comparing the resulting semantic similarities between proteins with the functional similarities between proteins derived from expert annotations or sequence similarity. Comparisons with existing state-of-the-art methods showed that SSA is highly competitive with the other methods. SSA provides a reliable measure for semantic similarity independent of external databases of functional-annotation observations.
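As an illustration of the two graph quantities SSA combines, the following minimal Python sketch computes the shortest path between two terms and the depth of their nearest common ancestor on a toy GO-like DAG; the weighting used to combine them is an assumption for demonstration, and the definition-based score of the published algorithm is omitted.

```python
# Minimal sketch (not the published SSA formula): compute the two graph
# quantities the abstract mentions -- shortest-path length between two GO
# terms and the depth of their nearest common ancestor -- on a toy DAG.
import networkx as nx

# Toy GO-like DAG: edges point from child term to parent term.
G = nx.DiGraph([
    ("GO:B", "GO:ROOT"), ("GO:C", "GO:ROOT"),
    ("GO:D", "GO:B"), ("GO:E", "GO:B"), ("GO:F", "GO:C"),
])
ROOT = "GO:ROOT"

def depth(term):
    """Depth = length of the child-to-parent path from the term to the root."""
    return nx.shortest_path_length(G, term, ROOT)

def nearest_common_ancestor(t1, t2):
    """Common ancestor with the greatest depth (simple tie-breaking)."""
    anc1 = nx.descendants(G, t1) | {t1}   # ancestors w.r.t. child->parent edges
    anc2 = nx.descendants(G, t2) | {t2}
    return max(anc1 & anc2, key=depth)

def ssa_like_similarity(t1, t2):
    """Illustrative combination only: deeper NCA and shorter path => higher score."""
    d_path = nx.shortest_path_length(G.to_undirected(), t1, t2)
    d_nca = depth(nearest_common_ancestor(t1, t2))
    return d_nca / (d_nca + d_path)          # assumed weighting, not SSA's

print(ssa_like_similarity("GO:D", "GO:E"))    # siblings under GO:B
print(ssa_like_similarity("GO:D", "GO:F"))    # only related through the root
```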
Water Mapping Using Multispectral Airborne LIDAR Data
NASA Astrophysics Data System (ADS)
Yan, W. Y.; Shaker, A.; LaRocque, P. E.
2018-04-01
This study investigates the use of the world's first multispectral airborne LiDAR sensor, Optech Titan, manufactured by Teledyne Optech, for automatic land-water classification, with a particular focus on near-shore regions and river environments. Although there exist recent studies utilizing airborne LiDAR data for shoreline detection and water surface mapping, the majority of them only perform experimental testing on clipped data subsets or rely on data fusion with aerial/satellite imagery. In addition, most of the existing approaches require manual intervention or existing tidal/datum data for sample collection of training data. To tackle the drawbacks of previous approaches, we propose and develop an automatic data processing workflow for land-water classification using multispectral airborne LiDAR data. Depending on the nature of the study scene, two methods are proposed for automatic training data selection. The first method utilizes the elevation/intensity histogram fitted with a Gaussian mixture model (GMM) to preliminarily split the land and water bodies. The second method mainly relies on the use of a newly developed scan line elevation intensity ratio (SLIER) to estimate the water surface data points. Regardless of the training method being used, feature spaces can be constructed using the multispectral LiDAR intensity, elevation and other features derived from these parameters. The comprehensive workflow was tested with two datasets collected over different near-shore and river environments, yielding overall accuracies better than 96 %.
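The first training-data selection idea lends itself to a short sketch: fit a two-component Gaussian mixture to a one-dimensional attribute and label returns by the more likely component. The feature choice (elevation only), the synthetic data, and the absence of any post-processing are assumptions; the paper's full workflow also uses intensity and derived features.

```python
# Minimal sketch of the GMM-based training-data selection: fit a two-component
# Gaussian mixture to a 1-D attribute (here, elevation) and label each LiDAR
# return by the more likely component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic returns: a flat water surface near 0 m and land spread around 6 m.
elevation = np.concatenate([
    rng.normal(0.0, 0.15, 5000),   # water surface returns
    rng.normal(6.0, 2.00, 5000),   # land returns
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(elevation)
labels = gmm.predict(elevation)

# Call the component with the lower mean elevation "water".
water_component = int(np.argmin(gmm.means_.ravel()))
is_water = labels == water_component
print(f"water fraction: {is_water.mean():.2f}")
```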
Historically, risk assessment has relied upon toxicological data to obtain hazard-based reference levels, which are subsequently compared to exposure estimates to determine whether an unacceptable risk to public health may exist. Recent advances in analytical methods, biomarker ...
The Beck Depression Inventory, Second Edition (BDI-II): A Cross-Sample Structural Analysis
ERIC Educational Resources Information Center
Strunk, Kamden K.; Lane, Forrest C.
2017-01-01
A common concern about the Beck Depression Inventory, Second edition (BDI-II) among researchers in the area of depression has long been the single-factor scoring scheme. Methods exist for making cross-sample comparisons of latent structure but tend to rely on estimation methods that can be imprecise and unnecessarily complex. This study presents a…
ERIC Educational Resources Information Center
Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D.
2016-01-01
Using computer-based monitoring systems that rely on tests could be the most effective way of knowledge evaluation. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of the existing test methods enabled us to conclude that tests with selected…
Sensors and signal processing for high accuracy passenger counting : final report.
DOT National Transportation Integrated Search
2009-03-05
It is imperative for a transit system to track statistics about their ridership in order to plan bus routes. There exists a wide variety of methods for obtaining these statistics that range from relying on the driver to count people to utilizing came...
Analyzing the security of an existing computer system
NASA Technical Reports Server (NTRS)
Bishop, M.
1986-01-01
Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.
In situ cell-by-cell imaging and analysis of small cell populations by mass spectrometry
USDA-ARS?s Scientific Manuscript database
Molecular imaging by mass spectrometry (MS) is emerging as a tool to determine the distribution of proteins, lipids and metabolites in tissues. The existing imaging methods, however, rely on predefined typically rectangular grids for sampling that ignore the natural cellular organization of the tiss...
Validation of the Quantitative Diagnostic Thinking Inventory for Athletic Training: A Pilot Study
ERIC Educational Resources Information Center
Kicklighter, Taz; Barnum, Mary; Geisler, Paul R.; Martin, Malissa
2016-01-01
Context: The cognitive process of making a clinical decision lies somewhere on a continuum between novices using hypothetico-deductive reasoning and experts relying more on case pattern recognition. Although several methods exist for measuring facets of clinical reasoning in specific situations, none have been experimentally applied, as of yet, to…
How to Build a Course to Teach Accountants to Teach
ERIC Educational Resources Information Center
Noel, Christine Z. J.; Crosser, Rick L.; Kuglin, Christine L.; Lupomech, Lynn A.
2014-01-01
Faculty preparation in schools of business continues to offer little or no instruction on how to teach. University instructors, generally teaching the way they were taught, continue to rely on teaching methods with which they are familiar. To exacerbate the issue, a shortage exists in terminally qualified accounting instructors. More and more…
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
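The predict-then-correct structure can be illustrated on a toy unconstrained scalar problem; the sketch below tracks the minimizer of a drifting quadratic and is only a schematic stand-in for the constrained, first-order algorithms developed in the paper.

```python
# Toy prediction-correction tracker for an unconstrained, time-varying scalar
# problem f(x; t) = 0.5 * (x - a(t))**2, whose optimizer is x*(t) = a(t).
# This only illustrates the generic predict-then-correct structure; it uses the
# trivial Hessian H = 1 of this toy cost and handles no constraints.
import numpy as np

a = np.sin                      # assumed drifting target trajectory
grad = lambda x, t: x - a(t)    # gradient of f with respect to x
H = 1.0                         # Hessian of f (constant for this toy cost)

h, alpha = 0.1, 0.5             # sampling interval, correction step size
x, errors = 0.0, []
for k in range(200):
    t = k * h
    # Prediction: follow the drift of the optimizer, dx*/dt = -H^{-1} d(grad)/dt,
    # with the time derivative of the gradient replaced by a finite difference.
    dgrad_dt = (grad(x, t + h) - grad(x, t)) / h
    x = x - (h / H) * dgrad_dt
    # Correction: one plain gradient step on the new cost f(.; t + h).
    x = x - (alpha / H) * grad(x, t + h)
    errors.append(abs(x - a(t + h)))

print(f"mean tracking error over the last 50 steps: {np.mean(errors[-50:]):.4f}")
```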
Inventing and improving ribozyme function: rational design versus iterative selection methods
NASA Technical Reports Server (NTRS)
Breaker, R. R.; Joyce, G. F.
1994-01-01
Two major strategies for generating novel biological catalysts exist. One relies on our knowledge of biopolymer structure and function to aid in the 'rational design' of new enzymes. The other, often called 'irrational design', aims to generate new catalysts, in the absence of detailed physicochemical knowledge, by using selection methods to search a library of molecules for functional variants. Both strategies have been applied, with considerable success, to the remodeling of existing ribozymes and the development of ribozymes with novel catalytic function. The two strategies are by no means mutually exclusive, and are best applied in a complementary fashion to obtain ribozymes with the desired catalytic properties.
A Data Augmentation Approach to Short Text Classification
ERIC Educational Resources Information Center
Rosario, Ryan Robert
2017-01-01
Text classification typically performs best with large training sets, but short texts are very common on the World Wide Web. Can we use resampling and data augmentation to construct larger texts using similar terms? Several current methods exist for working with short text that rely on using external data and contexts, or workarounds. Our focus is…
ERIC Educational Resources Information Center
Brewer, Michael S.; Gardner, Grant E.
2013-01-01
Teaching population genetics provides a bridge between genetics and evolution by using examples of the mechanisms that underlie changes in allele frequencies over time. Existing methods of teaching these concepts often rely on computer simulations or hand calculations, which distract students from the material and are problematic for those with…
An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis
ERIC Educational Resources Information Center
Diwakar, Rekha
2017-01-01
Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data that do not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…
James M. Lazorchak; Michael B. Griffith; Marc Mills; Joseph Schubauer-Berigan; Frank McCormick; Richard Brenner; Craig Zeller
2015-01-01
The US Environmental Protection Agency (USEPA) develops methods and tools for evaluating risk management strategies for sediments contaminated with polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and other legacy pollutants. Monitored natural recovery is a risk management alternative that relies on existing physical, chemical, and biological...
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
ERIC Educational Resources Information Center
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately…
ERIC Educational Resources Information Center
Lim, Cheolil; Lee, Jihyun; Lee, Sunhee
2014-01-01
Existing approaches to developing creativity rely on the sporadic teaching of creative thinking techniques or the engagement of learners in a creativity-promoting environment. Such methods cannot develop students' creativity as fully as a multilateral approach that integrates creativity throughout a curriculum. The purpose of this study was to…
Identifying metabolic enzymes with multiple types of association evidence
Kharchenko, Peter; Chen, Lifeng; Freund, Yoav; Vitkup, Dennis; Church, George M
2006-01-01
Background Existing large-scale metabolic models of sequenced organisms commonly include enzymatic functions which cannot be attributed to any gene in that organism. Existing computational strategies for identifying such missing genes rely primarily on sequence homology to known enzyme-encoding genes. Results We present a novel method for identifying genes encoding a specific metabolic function based on the local structure of the metabolic network and multiple types of functional association evidence, including clustering of genes on the chromosome, similarity of phylogenetic profiles, gene expression, protein fusion events and others. Using E. coli and S. cerevisiae metabolic networks, we illustrate the predictive ability of each individual type of association evidence and show that significantly better predictions can be obtained based on the combination of all data. In this way our method is able to predict 60% of enzyme-encoding genes of E. coli metabolism within the top 10 (out of 3551) candidates for their enzymatic function, and as a top candidate within 43% of the cases. Conclusion We illustrate that a combination of genome context and other functional association evidence is effective in predicting genes encoding metabolic enzymes. Our approach does not rely on direct sequence homology to known enzyme-encoding genes, and can be used in conjunction with traditional homology-based metabolic reconstruction methods. The method can also be used to target orphan metabolic activities. PMID:16571130
Analysis of Bonded Joints Between the Facesheet and Flange of Corrugated Composite Panels
NASA Technical Reports Server (NTRS)
Yarrington, Phillip W.; Collier, Craig S.; Bednarcyk, Brett A.
2008-01-01
This paper outlines a method for the stress analysis of bonded composite corrugated panel facesheet to flange joints. The method relies on the existing HyperSizer Joints software, which analyzes the bonded joint, along with a beam analogy model that provides the necessary boundary loading conditions to the joint analysis. The method is capable of predicting the full multiaxial stress and strain fields within the flange to facesheet joint and thus can determine ply-level margins and evaluate delamination. Results comparing the method to NASTRAN finite element model stress fields are provided illustrating the accuracy of the method.
Finite-difference computations of rotor loads
NASA Technical Reports Server (NTRS)
Caradonna, F. X.; Tung, C.
1985-01-01
This paper demonstrates the current and future potential of finite-difference methods for solving real rotor problems which now rely largely on empiricism. The demonstration consists of a simple means of combining existing finite-difference, integral, and comprehensive loads codes to predict real transonic rotor flows. These computations are performed for hover and high-advance-ratio flight. Comparisons are made with experimental pressure data.
Finite-difference computations of rotor loads
NASA Technical Reports Server (NTRS)
Caradonna, F. X.; Tung, C.
1985-01-01
The current and future potential of finite-difference methods for solving real rotor problems which now rely largely on empiricism is demonstrated. The demonstration consists of a simple means of combining existing finite-difference, integral, and comprehensive loads codes to predict real transonic rotor flows. These computations are performed for hover and high-advance-ratio flight. Comparisons are made with experimental pressure data.
Sculpting bespoke mountains: Determining free energies with basis expansions
NASA Astrophysics Data System (ADS)
Whitmer, Jonathan K.; Fluitt, Aaron M.; Antony, Lucas; Qin, Jian; McGovern, Michael; de Pablo, Juan J.
2015-07-01
The intriguing behavior of a wide variety of physical systems, ranging from amorphous solids or glasses to proteins, is a direct manifestation of underlying free energy landscapes riddled with local minima separated by large barriers. Exploring such landscapes has arguably become one of the great challenges of statistical physics. A new method is proposed here for uniform sampling of rugged free energy surfaces. The method, which relies on special Green's functions to approximate the Dirac delta function, improves significantly on existing simulation techniques by providing a boundary-agnostic approach that is capable of mapping complex features in multidimensional free energy surfaces. The usefulness of the proposed approach is established in the context of a simple model glass former and model proteins, demonstrating improved convergence and accuracy over existing methods.
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensics analysis aims to find out how multimedia data is acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results evaluated on over 1300 natural images show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
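A simplified sketch of the first and last stages is given below: a 2×2 cross-difference accentuates blocking boundaries, and a naive periodicity score over candidate block sizes stands in for the paper's morphological cleanup and maximum-likelihood stage.

```python
# Simplified sketch of block-size estimation: a 2x2 cross-difference map
# accentuates blocking boundaries, and a naive periodicity score over candidate
# block sizes replaces the paper's morphological cleanup and MLE stage.
import numpy as np

def cross_difference(img):
    """|I(i,j) + I(i+1,j+1) - I(i+1,j) - I(i,j+1)| on a float image."""
    img = img.astype(float)
    return np.abs(img[:-1, :-1] + img[1:, 1:] - img[1:, :-1] - img[:-1, 1:])

def estimate_block_size(img, candidates=range(4, 33)):
    d = cross_difference(img)
    col_profile = d.sum(axis=0)              # strong peaks at block-grid columns
    scores = {}
    for b in candidates:
        grid = np.arange(b - 1, len(col_profile), b)
        on_grid = col_profile[grid].mean()               # columns on a b-pixel grid
        off_grid = np.delete(col_profile, grid).mean()   # all remaining columns
        scores[b] = on_grid / (off_grid + 1e-9)
    return max(scores, key=scores.get)

# Synthetic "block-compressed" image: constant 8x8 blocks plus mild noise.
rng = np.random.default_rng(1)
blocks = rng.uniform(0, 255, size=(16, 16))
img = np.kron(blocks, np.ones((8, 8))) + rng.normal(0, 2, (128, 128))
print(estimate_block_size(img))              # expected: 8
```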
Electric Power Consumption Coefficients for U.S. Industries: Regional Estimation and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo
Economic activity relies on electric power provided by electrical generation, transmission, and distribution systems. This paper presents a method developed at Los Alamos National Laboratory to estimate electric power consumption by different industries in the United States. Results are validated through comparisons with existing literature and benchmarking data sources. We also discuss the limitations and applications of the presented method, such as estimating indirect electric power consumption and assessing the economic impact of power outages based on input-output economic models.
NASA Astrophysics Data System (ADS)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2010-04-01
For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
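Since the abstract reduces each polynomial-coefficient estimate to single-tone frequency determination, a minimal illustration of one such frequency estimator (FFT peak location refined by parabolic interpolation) is sketched below; the HIM operator itself is not implemented, and this is only one of the many techniques such a comparison covers.

```python
# Single-tone frequency estimation: coarse FFT peak plus parabolic
# interpolation for a sub-bin estimate. (The HIM operator is not implemented.)
import numpy as np

fs, n = 1000.0, 1024
t = np.arange(n) / fs
true_freq = 123.4
signal = np.exp(1j * 2 * np.pi * true_freq * t) + 0.1 * (
    np.random.default_rng(0).standard_normal(n)
)

spectrum = np.abs(np.fft.fft(signal))
k = int(np.argmax(spectrum[: n // 2]))           # coarse peak bin
# Parabolic interpolation around the peak for a sub-bin correction.
y0, y1, y2 = spectrum[k - 1], spectrum[k], spectrum[k + 1]
delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
freq_estimate = (k + delta) * fs / n
print(f"true {true_freq} Hz, estimated {freq_estimate:.2f} Hz")
```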
Altitude Effects on Thermal Ice Protection System Performance; a Study of an Alternative Approach
NASA Technical Reports Server (NTRS)
Addy, Harold E., Jr.; Orchard, David; Wright, William B.; Oleskiw, Myron
2016-01-01
Research has been conducted to better understand the phenomena involved during operation of an aircraft's thermal ice protection system under running wet icing conditions. In such situations, supercooled water striking a thermally ice-protected surface does not fully evaporate but runs aft to a location where it freezes. The effects of altitude, in terms of air pressure and density, on the processes involved were of particular interest. Initial study results showed that the altitude effects on heat energy transfer were accurately modeled using existing methods, but water mass transport was not. Based upon those results, a new method to account for altitude effects on thermal ice protection system operation was proposed. The method employs a two-step process where heat energy and mass transport are sequentially matched, linked by matched surface temperatures. While not providing exact matching of heat and mass transport to reference conditions, the method produces a better simulation than other methods. Moreover, it does not rely on the application of empirical correction factors, but instead relies on the straightforward application of the primary physics involved. This report describes the method, shows results of testing the method, and discusses its limitations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase performance and benefits of the algorithms.
Multi-Frame Convolutional Neural Networks for Object Detection in Temporal Data
2017-03-01
Given the problem of detecting objects in video, existing neural-network solutions rely on a post-processing step to combine...information across frames and strengthen conclusions. This technique has been successful for videos with simple, dominant objects but it cannot detect objects...
2015-06-01
and tools, called model-integrated computing (MIC) [3] relies on the use of domain-specific modeling languages for creating models of the system to be...hence giving reflective capabilities to it. We have followed the MIC method here: we designed a domain-specific modeling language for modeling...are produced one-off and not for the mass market, the scope for price reduction based on the market demands is non-existent. Processes to create
A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays
Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.
2013-01-01
Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without need for a-priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767
Endophytic Phytoaugmentation: Treating Wastewater and Runoff Through Augmented Phytoremediation
Redfern, Lauren K.
2016-01-01
Abstract Limited options exist for efficiently and effectively treating water runoff from agricultural fields and landfills. Traditional treatments include excavation, transport to landfills, incineration, stabilization, and vitrification. In general, treatment options relying on biological methods such as bioremediation have the ability to be applied in situ and offer a sustainable remedial option with a lower environmental impact and reduced long-term operating expenses. These methods are generally considered ecologically friendly, particularly when compared to traditional physicochemical cleanup options. Phytoremediation, which relies on plants to take up and/or transform the contaminant of interest, is another alternative treatment method which has been developed. However, phytoremediation is not widely used, largely due to its low treatment efficiency. Endophytic phytoaugmentation is a variation on phytoremediation that relies on augmenting the phytoremediating plants with exogenous strains to stimulate associated plant-microbe interactions to facilitate and improve remediation efficiency. In this review, we offer a summary of the current knowledge as well as developments in endophytic phytoaugmentation and present some potential future applications for this technology. There has been a limited number of published endophytic phytoaugmentation case studies and much remains to be done to transition lab-scale results to field applications. Future research needs include large-scale endophytic phytoaugmentation experiments as well as the development of more exhaustive tools for monitoring plant-microbe-pollutant interactions. PMID:27158249
NASA Astrophysics Data System (ADS)
Gladwin, D.; Stewart, P.; Stewart, J.
2011-02-01
This article addresses the problem of maintaining a stable rectified DC output from the three-phase AC generator in a series-hybrid vehicle powertrain. The series-hybrid prime power source generally comprises an internal combustion (IC) engine driving a three-phase permanent magnet generator whose output is rectified to DC. A recent development has been to control the engine/generator combination by an electronically actuated throttle. This system can be represented as a nonlinear system with significant time delay. Previously, voltage control of the generator output has been achieved by model predictive methods such as the Smith Predictor. These methods rely on the incorporation of an accurate system model and time delay into the control algorithm, with a consequent increase in computational complexity in the real-time controller, and as a necessity rely to some extent on the accuracy of the models. Two complementary performance objectives exist for the control system: firstly, to maintain the IC engine at its optimal operating point, and secondly, to supply a stable DC supply to the traction drive inverters. Achievement of these goals minimises the transient energy storage requirements at the DC link, with a consequent reduction in both weight and cost. These objectives imply constant velocity operation of the IC engine under external load disturbances and changes in both operating conditions and vehicle speed set-points. In order to achieve these objectives, and reduce the complexity of implementation, in this article a controller is designed by the use of Genetic Programming methods in the Simulink modelling environment, with the aim of obtaining a relatively simple controller for the time-delay system which does not rely on the implementation of real-time system models or time-delay approximations in the controller. A methodology is presented to utilise the myriad of existing control blocks in the Simulink libraries to automatically evolve optimal control structures.
Intensity non-uniformity correction in MRI: existing methods and their validation.
Belaroussi, Boubakeur; Milles, Julien; Carme, Sabin; Zhu, Yue Min; Benoit-Cattin, Hugues
2006-04-01
Magnetic resonance imaging is a popular and powerful non-invasive imaging technique. Automated analysis has become mandatory to efficiently cope with the large amount of data generated using this modality. However, several artifacts, such as intensity non-uniformity, can degrade the quality of acquired data. Intensity non-uniformity consists in anatomically irrelevant intensity variation throughout data. It can be induced by the choice of the radio-frequency coil, the acquisition pulse sequence and by the nature and geometry of the sample itself. Numerous methods have been proposed to correct this artifact. In this paper, we propose an overview of existing methods. We first sort them according to their location in the acquisition/processing pipeline. Sorting is then refined based on the assumptions those methods rely on. Next, we present the validation protocols used to evaluate these different correction schemes both from a qualitative and a quantitative point of view. Finally, availability and usability of the presented methods is discussed.
Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.
Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C
2018-06-01
Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
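The MNN detection step can be sketched compactly: a pair of cells across two batches is kept when each lies in the other's k-nearest-neighbor set. The sketch below uses synthetic data and omits the correction-vector computation and smoothing of the published method.

```python
# Minimal sketch of mutual-nearest-neighbors (MNN) detection: a pair (a, b)
# across two batches is kept when b is among a's k nearest neighbors in batch B
# and a is among b's k nearest neighbors in batch A.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mutual_nearest_neighbors(batch_a, batch_b, k=20):
    nn_b = NearestNeighbors(n_neighbors=k).fit(batch_b)
    nn_a = NearestNeighbors(n_neighbors=k).fit(batch_a)
    a_to_b = nn_b.kneighbors(batch_a, return_distance=False)   # (n_a, k)
    b_to_a = nn_a.kneighbors(batch_b, return_distance=False)   # (n_b, k)
    pairs = []
    for i, neighbors in enumerate(a_to_b):
        for j in neighbors:
            if i in b_to_a[j]:
                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 50))                            # latent cell states
batch_a = shared[:120] + rng.normal(0, 0.1, (120, 50))
batch_b = shared[80:] + 0.5 + rng.normal(0, 0.1, (120, 50))    # shifted batch
print(len(mutual_nearest_neighbors(batch_a, batch_b, k=10)), "MNN pairs")
```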
Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint
NASA Astrophysics Data System (ADS)
Rodrigues, José Francisco; Santos, Lisa
2012-08-01
We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.
Spatially-explicit models of global tree density.
Glick, Henry B; Bettigole, Charlie; Maynard, Daniel S; Covey, Kristofer R; Smith, Jeffrey R; Crowther, Thomas W
2016-08-16
Remote sensing and geographic analysis of woody vegetation provide means of evaluating the distribution of natural resources, patterns of biodiversity and ecosystem structure, and socio-economic drivers of resource utilization. While these methods bring geographic datasets with global coverage into our day-to-day analytic spheres, many of the studies that rely on these strategies do not capitalize on the extensive collection of existing field data. We present the methods and maps associated with the first spatially-explicit models of global tree density, which relied on over 420,000 forest inventory field plots from around the world. This research is the result of a collaborative effort engaging over 20 scientists and institutions, and capitalizes on an array of analytical strategies. Our spatial data products offer precise estimates of the number of trees at global and biome scales, but should not be used for local-level estimation. At larger scales, these datasets can contribute valuable insight into resource management, ecological modelling efforts, and the quantification of ecosystem services.
NASA Astrophysics Data System (ADS)
Eriçok, Ozan Burak; Ertürk, Hakan
2018-07-01
Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a different lower size limit of reliable characterization corresponding to the wavelength of the light source used. In this study, these characterization limits are determined for light-source wavelengths ranging from the ultraviolet to the near infrared (266-1064 nm), relying on numerical light scattering experiments. Two different measurement ensembles are considered: a collection of well-separated aggregates made up of same-sized particles, and a collection with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates and the light scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation approach, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that when the wavelength range of 266-1064 nm is used, the successful characterization limit changes from 21 to 62 nm effective radius for monodisperse and polydisperse soot aggregates.
A Simple Method to Simultaneously Detect and Identify Spikes from Raw Extracellular Recordings.
Petrantonakis, Panagiotis C; Poirazi, Panayiota
2015-01-01
The ability to track when and which neurons fire in the vicinity of an electrode, in an efficient and reliable manner can revolutionize the neuroscience field. The current bottleneck lies in spike sorting algorithms; existing methods for detecting and discriminating the activity of multiple neurons rely on inefficient, multi-step processing of extracellular recordings. In this work, we show that a single-step processing of raw (unfiltered) extracellular signals is sufficient for both the detection and identification of active neurons, thus greatly simplifying and optimizing the spike sorting approach. The efficiency and reliability of our method is demonstrated in both real and simulated data.
Lee, Wen-Chung
2003-09-01
The future of genetic studies of complex human diseases will rely more and more on the epidemiologic association paradigm. The author proposes to scan the genome for disease-susceptibility gene(s) by testing for deviation from Hardy-Weinberg equilibrium in a gene bank of affected individuals. A power formula is presented, which is very accurate as revealed by Monte Carlo simulations. If the disease-susceptibility gene is recessive with an allele frequency of < or = 0.5 or dominant with an allele frequency of > or = 0.5, the number of subjects needed by the present method is smaller than that needed by using a case-parents design (using either the transmission/disequilibrium test or the 2-df likelihood ratio test). However, the method cannot detect genes with a multiplicative mode of inheritance, and the validity of the method relies on the assumption that the source population from which the cases arise is in Hardy-Weinberg equilibrium. Thus, it is prone to produce false positive and false negative results. Nevertheless, the method enables rapid gene hunting in an existing gene bank of affected individuals with no extra effort beyond simple calculations.
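A minimal sketch of the core test, deviation from Hardy-Weinberg equilibrium at one biallelic locus among affected individuals, is given below using a 1-df chi-square goodness-of-fit statistic; the paper's power formula is not reproduced, and the genotype counts are invented.

```python
# Test for deviation from Hardy-Weinberg equilibrium at one biallelic locus:
# compare observed genotype counts with counts expected under HWE using a
# chi-square goodness-of-fit statistic with 1 degree of freedom.
import numpy as np
from scipy.stats import chi2

def hwe_test(n_aa, n_ab, n_bb):
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)              # allele frequency of A
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    observed = np.array([n_aa, n_ab, n_bb])
    stat = ((observed - expected) ** 2 / expected).sum()
    p_value = chi2.sf(stat, df=1)                # 3 classes - 1 - 1 estimated parameter
    return stat, p_value

# Invented example: an excess of homozygotes relative to HWE expectations.
print(hwe_test(n_aa=180, n_ab=240, n_bb=180))
```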
Covariance analysis for evaluating head trackers
NASA Astrophysics Data System (ADS)
Kang, Donghoon
2017-10-01
Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.
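The proposed uncertainty measure can be sketched directly: form error rotations between estimated and reference orientations, map them to rotation vectors, and take the Schatten 2-norm (Frobenius norm) of a square root of their covariance. The rotation-vector parameterization, the simulated noise level, and the zero-mean treatment below are assumptions.

```python
# Sketch of the evaluation quantity described in the abstract: covariance of
# error rotations between estimated and reference head orientations, summarized
# by the Schatten 2-norm (Frobenius norm) of a matrix square root.
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
true_rots = R.random(500, random_state=1)
# Simulated tracker output: truth perturbed by small random rotations (~3 deg).
noise = R.from_rotvec(np.deg2rad(3) * rng.standard_normal((500, 3)))
estimated = noise * true_rots

error_rotvecs = (estimated * true_rots.inv()).as_rotvec()    # error rotations
cov = np.cov(error_rotvecs, rowvar=False)                    # 3x3 covariance
uncertainty = np.linalg.norm(sqrtm(cov), "fro")              # Schatten 2-norm
print(f"uncertainty measure: {np.rad2deg(uncertainty):.2f} deg")
```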
Polarization holograms allow highly efficient generation of complex light beams.
Ruiz, U; Pagliusi, P; Provenzano, C; Volke-Sepúlveda, K; Cipparrone, Gabriella
2013-03-25
We report a viable method to generate complex beams, such as the non-diffracting Bessel and Weber beams, which relies on the encoding of amplitude information, in addition to phase and polarization, using polarization holography. The holograms are recorded in polarization sensitive films by the interference of a reference plane wave with a tailored complex beam, having orthogonal circular polarizations. The high efficiency, the intrinsic achromaticity and the simplicity of use of the polarization holograms make them competitive with respect to existing methods and attractive for several applications. Theoretical analysis, based on the Jones formalism, and experimental results are shown.
2015-09-30
ranging individuals support the existence of these same stress response pathways in marine mammals. While the HPA axis and physiological processes...relying upon methods which include capture-release health assessments. Stress and reproductive hormones (cortisol, aldosterone, thyroid, testosterone...Analyses Hormone concentrations (cortisol, aldosterone, reproductive and thyroid hormones) in serum samples were analyzed by Cornell's Animal Health
ERIC Educational Resources Information Center
Mikkelsen, Kim Sass
2017-01-01
Contemporary case studies rely on verbal arguments and set theory to build or evaluate theoretical claims. While existing procedures excel in the use of qualitative information (information about kind), they ignore quantitative information (information about degree) at central points of the analysis. Effectively, contemporary case studies rely on…
In-Situ Transfer Standard and Coincident-View Intercomparisons for Sensor Cross-Calibration
NASA Technical Reports Server (NTRS)
Thome, Kurt; McCorkel, Joel; Czapla-Myers, Jeff
2013-01-01
There exist numerous methods for accomplishing on-orbit calibration. Methods include the reflectance-based approach relying on measurements of surface and atmospheric properties at the time of a sensor overpass as well as invariant scene approaches relying on knowledge of the temporal characteristics of the site. The current work examines typical cross-calibration methods and discusses the expected uncertainties of the methods. Data from the Advanced Land Imager (ALI), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+), Moderate Resolution Imaging Spectroradiometer (MODIS), and Thematic Mapper (TM) are used to demonstrate the limits of relative sensor-to-sensor calibration as applied to current sensors while Landsat-5 TM and Landsat-7 ETM+ are used to evaluate the limits of in situ site characterizations for SI-traceable cross calibration. The current work examines the difficulties in trending of results from cross-calibration approaches taking into account sampling issues, site-to-site variability, and accuracy of the method. Special attention is given to the differences caused in the cross-comparison of sensors in radiance space as opposed to reflectance space. The results show that cross calibrations with absolute uncertainties less than 1.5 percent (1 sigma) are currently achievable even for sensors without coincident views.
NASA Astrophysics Data System (ADS)
Rohling, E. J.
2014-12-01
Ice volume (and hence sea level) and deep-sea temperature are key measures of global climate change. Sea level has been documented using several independent methods over the past 0.5 million years (Myr). Older periods, however, lack such independent validation; all existing records are related to deep-sea oxygen isotope (d18O) data that are influenced by processes unrelated to sea level. For deep-sea temperature, only one continuous high-resolution (Mg/Ca-based) record exists, with related sea-level estimates, spanning the past 1.5 Myr. We have recently presented a novel sea-level reconstruction, with associated estimates of deep-sea temperature, which independently validates the previous 0-1.5 Myr reconstruction and extends it back to 5.3 Myr ago. A series of caveats applies to this new method, especially in older times of its application, as is always the case with new methods. Independent validation exercises are needed to elucidate where consistency exists, and where solutions drift away from each other. A key observation from our new method is that a large temporal offset existed during the onset of Plio-Pleistocene ice ages, between a marked cooling step at 2.73 Myr ago and the first major glaciation at 2.15 Myr ago. This observation relies on relative changes within the dataset, which are more robust than absolute values. I will discuss our method and its main caveats and avenues for improvement.
Statistical Methods Applied to Gamma-ray Spectroscopy Algorithms in Nuclear Security Missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagan, Deborah K.; Robinson, Sean M.; Runkle, Robert C.
2012-10-01
In a wide range of nuclear security missions, gamma-ray spectroscopy is a critical research and development priority. One particularly relevant challenge is the interdiction of special nuclear material for which gamma-ray spectroscopy supports the goals of detecting and identifying gamma-ray sources. This manuscript examines the existing set of spectroscopy methods, attempts to categorize them by the statistical methods on which they rely, and identifies methods that have yet to be considered. Our examination shows that current methods effectively estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty—ones that are significantly more complex. We thus explore the premise that significantly improving algorithm performance requires greater coupling between the problem physics that drives data acquisition and statistical methods that analyze such data. Untapped statistical methods, such as Bayesian Model Averaging and hierarchical and empirical Bayes methods, have the potential to reduce decision uncertainty by more rigorously and comprehensively incorporating all sources of uncertainty. We expect that application of such methods will demonstrate progress in meeting the needs of nuclear security missions by improving on the existing numerical infrastructure for which these analyses have not been conducted.
Weak Galerkin finite element methods for Darcy flow: Anisotropy and heterogeneity
NASA Astrophysics Data System (ADS)
Lin, Guang; Liu, Jiangguo; Mu, Lin; Ye, Xiu
2014-11-01
This paper presents a family of weak Galerkin finite element methods (WGFEMs) for Darcy flow computation. The WGFEMs are new numerical methods that rely on the novel concept of discrete weak gradients. The WGFEMs solve for pressure unknowns both in element interiors and on the mesh skeleton. The numerical velocity is then obtained from the discrete weak gradient of the numerical pressure. The new methods are quite different than many existing numerical methods in that they are locally conservative by design, the resulting discrete linear systems are symmetric and positive-definite, and there is no need for tuning problem-dependent penalty factors. We test the WGFEMs on benchmark problems to demonstrate the strong potential of these new methods in handling strong anisotropy and heterogeneity in Darcy flow.
A Penalized Robust Method for Identifying Gene-Environment Interactions
Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Xie, Yang; Ma, Shuangge
2015-01-01
In high-throughput studies, an important objective is to identify gene-environment interactions associated with disease outcomes and phenotypes. Many commonly adopted methods assume specific parametric or semiparametric models, which may be subject to model mis-specification. In addition, they usually use significance level as the criterion for selecting important interactions. In this study, we adopt the rank-based estimation, which is much less sensitive to model specification than some of the existing methods and includes several commonly encountered data and models as special cases. Penalization is adopted for the identification of gene-environment interactions. It achieves simultaneous estimation and identification and does not rely on significance level. For computation feasibility, a smoothed rank estimation is further proposed. Simulation shows that under certain scenarios, for example with contaminated or heavy-tailed data, the proposed method can significantly outperform the existing alternatives with more accurate identification. We analyze a lung cancer prognosis study with gene expression measurements under the AFT (accelerated failure time) model. The proposed method identifies interactions different from those using the alternatives. Some of the identified genes have important implications. PMID:24616063
Toward Understanding the Heterogeneity in OCD: Evidence from narratives in adult patients
Van Schalkwyk, Gerrit I; Bhalla, Ish P; Griepp, Matthew; Kelmendi, Benjamin; Davidson, Larry; Pittenger, Christopher
2015-01-01
Background Current attempts at understanding the heterogeneity in OCD have relied on quantitative methods. The results of such work point towards a dimensional structure for OCD. Existing qualitative work in OCD has focused on understanding specific aspects of the OCD experience in greater depth. However, qualitative methods are also of potential value in furthering our understanding of OCD heterogeneity by allowing for open-ended exploration of the OCD experience and correlating identified subtypes with patient narratives. Aims We explored variations in patients’ experience prior to, during, and immediately after performing their compulsions. Method Semi-structured interviews were conducted with 20 adults with OCD, followed by inductive thematic analysis. Participant responses were not analyzed within the context of an existing theoretical framework, and themes were labeled descriptively. Results The previously described dichotomy of ‘anxiety’ vs ‘incompleteness’ emerged organically during narrative analysis. In addition, we found that some individuals with OCD utilize their behaviors as a way to cope with stress and anxiety more generally. Other participants did not share this experience and denied finding any comfort in their OC behaviors. The consequences of attention difficulties were highlighted, with some participants describing how difficulty focusing on a task could influence the need for it to be repeated multiple times. Conclusions The extent to which patients use OCD as a coping mechanism is a relevant distinction with potential implications for treatment engagement. Patients may experience ambivalence about suppressing behaviors that they have come to rely upon for management of stress and anxiety, even if these behaviors represent symptoms of a psychiatric illness. PMID:25855685
Report #2007-P-00009, February 28, 2007. EPA’s Chesapeake Bay Program Office is relying on anticipated nitrogen deposition reductions from Clean Air Act (CAA) regulations already issued by EPA, combined with other non-air sources' anticipated reductions.
A Bayesian Scoring Technique for Mining Predictive and Non-Spurious Rules
Batal, Iyad; Cooper, Gregory; Hauskrecht, Milos
2015-01-01
Rule mining is an important class of data mining methods for discovering interesting patterns in data. The success of a rule mining method heavily depends on the evaluation function that is used to assess the quality of the rules. In this work, we propose a new rule evaluation score - the Predictive and Non-Spurious Rules (PNSR) score. This score relies on Bayesian inference to evaluate the quality of the rules and considers the structure of the rules to filter out spurious rules. We present an efficient algorithm for finding rules with high PNSR scores. The experiments demonstrate that our method is able to cover and explain the data with a much smaller rule set than existing methods. PMID:25938136
Toxicokinetic and Dosimetry Modeling Tools for Exposure ...
New technologies and in vitro testing approaches have been valuable additions to risk assessments that have historically relied solely on in vivo test results. Compared to in vivo methods, in vitro high throughput screening (HTS) assays are less expensive, faster and can provide mechanistic insights on chemical action. However, extrapolating from in vitro chemical concentrations to target tissue or blood concentrations in vivo is fraught with uncertainties, and modeling is dependent upon pharmacokinetic variables not measured in in vitro assays. To address this need, new tools have been created for characterizing, simulating, and evaluating chemical toxicokinetics. Physiologically-based pharmacokinetic (PBPK) models provide estimates of chemical exposures that produce potentially hazardous tissue concentrations, while tissue microdosimetry PK models relate whole-body chemical exposures to cell-scale concentrations. These tools rely on high-throughput in vitro measurements, and successful methods exist for pharmaceutical compounds that determine PK from limited in vitro measurements and chemical structure-derived property predictions. These high throughput (HT) methods provide a more rapid and less resource-intensive alternative to traditional PK model development. We have augmented these in vitro data with chemical structure-based descriptors and mechanistic tissue partitioning models to construct HTPBPK models for over three hundred environmental and pharmace
Benzene construction via organocatalytic formal [3+3] cycloaddition reaction.
Zhu, Tingshun; Zheng, Pengcheng; Mou, Chengli; Yang, Song; Song, Bao-An; Chi, Yonggui Robin
2014-09-25
The benzene unit, in its substituted forms, is a most common scaffold in natural products, bioactive molecules and polymer materials. Nearly 80% of the 200 best selling small molecule drugs contain at least one benzene moiety. Not surprisingly, the synthesis of substituted benzenes receives constant attention. At present, the dominant methods use a pre-existing benzene framework to install substituents by using conventional functional group manipulations or transition metal-catalyzed carbon-hydrogen bond activations. These otherwise impressive approaches require multiple synthetic steps and are ineffective from both economic and environmental perspectives. Here we report an efficient method for the synthesis of substituted benzene molecules. Instead of relying on pre-existing aromatic rings, we construct the benzene core through a carbene-catalyzed formal [3+3] reaction. Given the simplicity and high efficiency, we expect this strategy to be of wide use especially for large scale preparation of biomedicals and functional materials.
Stability basin estimates fall risk from observed kinematics, demonstrated on the Sit-to-Stand task.
Shia, Victor; Moore, Talia Yuki; Holmes, Patrick; Bajcsy, Ruzena; Vasudevan, Ram
2018-04-27
The ability to quantitatively measure stability is essential to ensuring the safety of locomoting systems. While the response to perturbation directly reflects the stability of a motion, this experimental method puts human subjects at risk. Unfortunately, existing indirect methods for estimating stability from unperturbed motion have been shown to have limited predictive power. This paper leverages recent advances in dynamical systems theory to accurately estimate the stability of human motion without requiring perturbation. This approach relies on kinematic observations of a nominal Sit-to-Stand motion to construct an individual-specific dynamic model, input bounds, and feedback control that are then used to compute the set of perturbations from which the model can recover. This set, referred to as the stability basin, was computed for 14 individuals, and was able to successfully differentiate between less and more stable Sit-to-Stand strategies for each individual with greater accuracy than existing methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
Measuring carbon in forests: current status and future challenges.
Brown, Sandra
2002-01-01
Accurate and precise measurement of carbon in forests is gaining global attention as countries seek to comply with agreements under the UN Framework Convention on Climate Change. Established methods for measuring carbon in forests exist and are best based on permanent sample plots laid out in a statistically sound design. Measurements on trees in these plots can be readily converted to aboveground biomass using either biomass expansion factors or allometric regression equations. A compilation of existing root biomass data for upland forests of the world generated a significant regression equation that can be used to predict root biomass from aboveground biomass alone. Methods for measuring coarse dead wood have been tested in many forest types, but they could be improved if a non-destructive tool for measuring the density of dead wood were developed. Future measurements of carbon storage in forests may rely more on remote sensing data, and new remote data collection technologies are in development.
Review of near-infrared methods for wound assessment
NASA Astrophysics Data System (ADS)
Sowa, Michael G.; Kuo, Wen-Chuan; Ko, Alex C.-T.; Armstrong, David G.
2016-09-01
Wound management is a challenging and costly problem that is growing in importance as people live longer. Instrumental methods are increasingly being relied upon to provide objective measures of wound assessment to help guide management. Technologies that employ near-infrared (NIR) light form a prominent contingent among these existing and emerging tools, and we review several of them here. Some are already established, such as indocyanine green fluorescence angiography, while others, we speculate, have the potential to become clinically relevant to wound monitoring and assessment. These various NIR-based technologies address clinical wound management needs along the entire healing trajectory of a wound.
Emergency treatment of exertional heatstroke and comparison of whole body cooling techniques.
Costrini, A
1990-02-01
This manuscript compares whole-body cooling techniques used in the emergency treatment of heatstroke. Historically, the use of cold water immersion with skin massage has been quite successful in rapidly lowering body temperature and in avoiding severe complications or death. Recent studies have suggested alternative therapies, including the use of warm air spray, helicopter downdraft, and pharmacological agents. While evidence exists to support these methods, they have not been shown to reduce fatalities as effectively as ice water immersion. Although several cooling methods may have clinical use, all techniques rely on the prompt recognition of symptoms and immediate action in the field.
Accurately estimating PSF with straight lines detected by Hough transform
NASA Astrophysics Data System (ADS)
Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong
2018-04-01
This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in a LR image, which leads to a poor estimate of the PSF of the lens that took the LR image. For precisely estimating the PSF, this paper proposes first estimating a 1-D PSF kernel with straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
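A minimal sketch of the line-extraction stage only, using standard OpenCV calls on a synthetic blurred edge; the paper's 1-D kernel estimation, least-squares fit, and RANSAC steps are not reproduced here.

```python
import cv2
import numpy as np

# Synthetic low-resolution-like input: a slanted line blurred by an unknown kernel.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (20, 180), (180, 20), color=255, thickness=3)
img = cv2.GaussianBlur(img, (7, 7), 2.0)          # emulates an unknown lens blur

# Edge detection followed by the standard Hough transform: each detected
# straight line is returned as a (rho, theta) pair.
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)
if lines is not None:
    for rho, theta in lines[:3, 0]:
        print(f"line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```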
An efficient repeating signal detector to investigate earthquake swarms
NASA Astrophysics Data System (ADS)
Skoumal, Robert J.; Brudzinski, Michael R.; Currie, Brian S.
2016-08-01
Repetitive earthquake swarms have been recognized as key signatures in fluid injection induced seismicity, precursors to volcanic eruptions, and slow slip events preceding megathrust earthquakes. We investigate earthquake swarms by developing a Repeating Signal Detector (RSD), a computationally efficient algorithm utilizing agglomerative clustering to identify similar waveforms buried in years of seismic recordings using a single seismometer. Instead of relying on existing earthquake catalogs of larger earthquakes, RSD identifies characteristic repetitive waveforms by rapidly identifying signals of interest above a low signal-to-noise ratio and then grouping them based on spectral and time domain characteristics, resulting in dramatically shorter processing time than more exhaustive autocorrelation approaches. We investigate seismicity in four regions using RSD: (1) volcanic seismicity at Mammoth Mountain, California, (2) subduction-related seismicity in Oaxaca, Mexico, (3) induced seismicity in Central Alberta, Canada, and (4) induced seismicity in Harrison County, Ohio. In each case, RSD detects a similar or larger number of earthquakes than existing catalogs created using more time-intensive methods. In Harrison County, RSD identifies 18 seismic sequences that correlate temporally and spatially to separate hydraulic fracturing operations, 15 of which were previously unreported. RSD utilizes a single seismometer for earthquake detection, which enables seismicity to be quickly identified in poorly instrumented regions at the expense of relying on another method to locate the new detections. Due to its smaller computational overhead and success at distances up to ~50 km, RSD is well suited for real-time detection of low-magnitude earthquake swarms with permanent regional networks.
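A rough sketch of the grouping stage only, assuming candidate windows have already been triggered from the continuous record; spectra are compared by cosine similarity and grouped by agglomerative clustering. The features, thresholds, and names below are illustrative, not those used in RSD.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def group_candidates(windows, correlation_cutoff=0.7):
    """Cluster candidate waveform windows by the similarity of their spectra."""
    spectra = np.abs(np.fft.rfft(windows, axis=1))
    spectra /= np.linalg.norm(spectra, axis=1, keepdims=True)
    similarity = spectra @ spectra.T                       # cosine similarity
    distance = 1.0 - similarity[np.triu_indices(len(windows), k=1)]
    tree = linkage(distance, method="average")             # agglomerative clustering
    return fcluster(tree, t=1.0 - correlation_cutoff, criterion="distance")

rng = np.random.default_rng(1)
repeated = np.tile(np.sin(np.linspace(0, 40, 500)), (5, 1))
windows = np.vstack([repeated + 0.1 * rng.standard_normal((5, 500)),
                     rng.standard_normal((3, 500))])
print(group_candidates(windows))   # the five repeats should share one cluster label
```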
DeepLoc: prediction of protein subcellular localization using deep learning.
Almagro Armenteros, José Juan; Sønderby, Casper Kaae; Sønderby, Søren Kaae; Nielsen, Henrik; Winther, Ole
2017-11-01
The prediction of eukaryotic protein subcellular localization is a well-studied topic in bioinformatics due to its relevance in proteomics research. Many machine learning methods have been successfully applied in this task, but in most of them, predictions rely on annotation of homologues from knowledge databases. For novel proteins where no annotated homologues exist, and for predicting the effects of sequence variants, it is desirable to have methods for predicting protein properties from sequence information only. Here, we present a prediction algorithm using deep neural networks to predict protein subcellular localization relying only on sequence information. At its core, the prediction model uses a recurrent neural network that processes the entire protein sequence and an attention mechanism identifying protein regions important for the subcellular localization. The model was trained and tested on a protein dataset extracted from one of the latest UniProt releases, in which experimentally annotated proteins follow more stringent criteria than previously. We demonstrate that our model achieves a good accuracy (78% for 10 categories; 92% for membrane-bound or soluble), outperforming current state-of-the-art algorithms, including those relying on homology information. The method is available as a web server at http://www.cbs.dtu.dk/services/DeepLoc. Example code is available at https://github.com/JJAlmagro/subcellular_localization. The dataset is available at http://www.cbs.dtu.dk/services/DeepLoc/data.php. jjalma@dtu.dk. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
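Below is a minimal recurrent network with attention for sequence-level classification, in the spirit of the approach described above; the layer sizes and architecture are assumptions, and this is not the published DeepLoc model or its trained weights.

```python
import torch
import torch.nn as nn

class LocalizationNet(nn.Module):
    """Toy BiLSTM + attention classifier over integer-encoded protein sequences."""
    def __init__(self, n_amino_acids=21, embed_dim=32, hidden=64, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(n_amino_acids, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, seq):                           # seq: (batch, length) int codes
        h, _ = self.rnn(self.embed(seq))              # (batch, length, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over positions
        context = (weights * h).sum(dim=1)            # weighted sum of positions
        return self.out(context)                      # class logits

model = LocalizationNet()
dummy = torch.randint(0, 21, (4, 120))                # four random "protein" sequences
print(model(dummy).shape)                             # torch.Size([4, 10])
```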
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil
2016-04-29
We develop a new approach for solving the nonlinear Richards’ equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. Efficiency and robustness of the existing solvers rely on many factors, including an empiric quality control of intermediate iterates, complexity of the employed discretization method and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We will show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves convergence of the existing Jacobian-free solvers 3-20 times. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than a few widely used Jacobian-free solvers.
A shortest-path graph kernel for estimating gene product semantic similarity.
Alvarez, Marco A; Qi, Xiaojun; Yan, Changhui
2011-07-29
Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO) often rely on external resources, which are not part of the ontology. Consequently, changes in these external resources like biased term distribution caused by shifting of hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e. independent of external knowledge. We present a shortest-path graph kernel (spgk) method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, which consists of all the GO terms annotating it. Then a shortest-path graph kernel is used to compute the similarity between two graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and EC similarity correlation coefficient are used to measure the performance, but is insignificant when the Pfam similarity correlation coefficient is used. Spgk uses a graph kernel method in polynomial time to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods with comparable performance.
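A generic shortest-path graph kernel sketch on toy graphs, assuming each gene product has already been reduced to its induced GO subgraph; the normalization and exact kernel formulation used by spgk may differ.

```python
import networkx as nx
import numpy as np

def path_length_histogram(graph, max_len=20):
    """Histogram of all-pairs shortest-path lengths in a graph."""
    hist = np.zeros(max_len + 1)
    for _, lengths in nx.all_pairs_shortest_path_length(graph):
        for d in lengths.values():
            if 0 < d <= max_len:
                hist[d] += 1
    return hist

def shortest_path_kernel(graph_a, graph_b):
    """Cosine similarity between the path-length histograms of two graphs."""
    ha, hb = path_length_histogram(graph_a), path_length_histogram(graph_b)
    return float(ha @ hb) / (np.linalg.norm(ha) * np.linalg.norm(hb) + 1e-12)

# Two toy "annotation subgraphs" standing in for induced GO subgraphs:
g1 = nx.path_graph(5)
g2 = nx.star_graph(4)
print(shortest_path_kernel(g1, g2), shortest_path_kernel(g1, g1))  # self-similarity ~1
```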
Weak Galerkin finite element methods for Darcy flow: Anisotropy and heterogeneity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Liu, Jiangguo; Mu, Lin
2014-11-01
This paper presents a family of weak Galerkin finite element methods (WGFEMs) for Darcy flow computation. The WGFEMs are new numerical methods that rely on the novel concept of discrete weak gradients. The WGFEMs solve for pressure unknowns both in element interiors and on the mesh skeleton. The numerical velocity is then obtained from the discrete weak gradient of the numerical pressure. The new methods are quite different than many existing numerical methods in that they are locally conservative by design, the resulting discrete linear systems are symmetric and positive-definite, and there is no need for tuning problem-dependent penalty factors. We test the WGFEMs on benchmark problems to demonstrate the strong potential of these new methods in handling strong anisotropy and heterogeneity in Darcy flow.
Effective evaluation of privacy protection techniques in visible and thermal imagery
NASA Astrophysics Data System (ADS)
Nawaz, Tahir; Berg, Amanda; Ferryman, James; Ahlberg, Jörgen; Felsberg, Michael
2017-09-01
Privacy protection may be defined as replacing the original content of an image region with (less intrusive) content in which the target's appearance has been modified, by applying a privacy protection technique, so that the target is less recognizable. The development of privacy protection techniques therefore needs to be complemented by an established objective evaluation method to facilitate their assessment and comparison. Generally, existing evaluation methods rely on the use of subjective judgments, or assume a specific target type in image data and use target detection and recognition accuracies to assess privacy protection. An annotation-free evaluation method that is neither subjective nor assumes a specific target type is proposed. It assesses two key aspects of privacy protection: "protection" and "utility." Protection is quantified as an appearance similarity, and utility is measured as a structural similarity between original and privacy-protected image regions. We performed extensive experiments using six challenging datasets (having 12 video sequences), including a new dataset (having six sequences) that contains visible and thermal imagery. The new dataset is made available online for the community. We demonstrate the effectiveness of the proposed method by evaluating six image-based privacy protection techniques and also compare the proposed method with existing methods.
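A rough stand-in for the two scores, assuming "utility" can be approximated by SSIM and "protection" by one minus a grey-level histogram intersection; the paper's exact appearance measure is not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def histogram_intersection(a, b, bins=64):
    """Intersection of normalised grey-level histograms (1 = identical appearance)."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 255))
    hb, _ = np.histogram(b, bins=bins, range=(0, 255))
    return float(np.minimum(ha / ha.sum(), hb / hb.sum()).sum())

def evaluate(original, protected):
    utility = structural_similarity(original, protected, data_range=255)
    protection = 1.0 - histogram_intersection(original, protected)
    return protection, utility

rng = np.random.default_rng(0)
region = rng.integers(0, 256, (64, 64)).astype(np.uint8)        # toy image region
perturbed = np.clip(region + rng.normal(0, 40, region.shape), 0, 255).astype(np.uint8)
print(evaluate(region, perturbed))
```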
Time warp operating system version 2.7 internals manual
NASA Technical Reports Server (NTRS)
1992-01-01
The Time Warp Operating System (TWOS) is an implementation of the Time Warp synchronization method proposed by David Jefferson. In addition, it serves as an actual platform for running discrete event simulations. The code comprising TWOS can be divided into several different sections. TWOS typically relies on an existing operating system to furnish some very basic services. This existing operating system is referred to as the Base OS. The existing operating system varies depending on the hardware TWOS is running on. It is Unix on the Sun workstations, Chrysalis or Mach on the Butterfly, and Mercury on the Mark 3 Hypercube. The base OS could be an entirely new operating system, written to meet the special needs of TWOS, but, to this point, existing systems have been used instead. The base OS's used for TWOS on various platforms are not discussed in detail in this manual, as they are well covered in their own manuals. Appendix G discusses the interface between one such OS, Mach, and TWOS.
SLIC superpixels compared to state-of-the-art superpixel methods.
Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine
2012-11-01
Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
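SLIC is implemented in several libraries; the scikit-image call below is one convenient way to reproduce the basic behaviour described above (the parameter values are illustrative).

```python
from skimage import data, segmentation

img = data.astronaut()                                # sample RGB image
labels = segmentation.slic(img, n_segments=250, compactness=10, start_label=1)
overlay = segmentation.mark_boundaries(img, labels)   # superpixel boundaries for inspection
print(labels.max(), "superpixels")                    # roughly the requested n_segments
```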
Real time algorithms for sharp wave ripple detection.
Sethi, Ankit; Kemere, Caleb
2014-01-01
Neural activity during sharp wave ripples (SWR), short bursts of co-ordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study is an investigation into testing and improving the current methods for detection of SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. The proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
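A sketch of the kind of conventional detector the paper takes as its baseline: band-pass the LFP in the ripple band, take the envelope, and threshold at a multiple of its standard deviation. The sampling rate, band, and threshold are typical assumed values, not those from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs=1500.0, band=(150.0, 250.0), n_sd=3.0):
    """Return sample indices whose ripple-band envelope exceeds mean + n_sd * SD."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, lfp)
    envelope = np.abs(hilbert(filtered))
    threshold = envelope.mean() + n_sd * envelope.std()
    return np.flatnonzero(envelope > threshold)

fs = 1500.0
t = np.arange(0, 2.0, 1 / fs)
lfp = np.random.default_rng(0).standard_normal(t.size)
lfp[1500:1650] += 3.0 * np.sin(2 * np.pi * 200 * t[1500:1650])   # injected ripple
print(detect_ripples(lfp, fs)[:5])    # indices near the injected event
```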
Taylor, Robert T.; Jackson, Kenneth J.; Duba, Alfred G.; Chen, Ching-I
1998-01-01
An in situ thermally enhanced microbial remediation strategy and a method for the biodegradation of toxic petroleum fuel hydrocarbon and halogenated organic solvent contaminants. The method utilizes nonpathogenic, thermophilic bacteria for the thermal biodegradation of toxic and carcinogenic contaminants, such as benzene, toluene, ethylbenzene and xylenes, from fuel leaks and the chlorinated ethenes, such as trichloroethylene, chlorinated ethanes, such as 1,1,1-trichloroethane, and chlorinated methanes, such as chloroform, from past solvent cleaning practices. The method relies on and takes advantage of the pre-existing heated conditions and the array of delivery/recovery wells that are created and in place following primary subsurface contaminant volatilization efforts via thermal approaches, such as dynamic underground steam-electrical heating.
Taylor, R.T.; Jackson, K.J.; Duba, A.G.; Chen, C.I.
1998-05-19
An in situ thermally enhanced microbial remediation strategy and a method for the biodegradation of toxic petroleum fuel hydrocarbon and halogenated organic solvent contaminants are described. The method utilizes nonpathogenic, thermophilic bacteria for the thermal biodegradation of toxic and carcinogenic contaminants, such as benzene, toluene, ethylbenzene and xylenes, from fuel leaks and the chlorinated ethenes, such as trichloroethylene, chlorinated ethanes, such as 1,1,1-trichloroethane, and chlorinated methanes, such as chloroform, from past solvent cleaning practices. The method relies on and takes advantage of the pre-existing heated conditions and the array of delivery/recovery wells that are created and in place following primary subsurface contaminant volatilization efforts via thermal approaches, such as dynamic underground steam-electrical heating. 21 figs.
Berry, Christopher M; Zhao, Peng
2015-01-01
Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans. (c) 2015 APA, all rights reserved.
10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... behavior of items relied on for safety. (b) Facility and system design and facility layout must be based on... existing facilities. (a) Baseline design criteria. Each prospective applicant or licensee shall address the following baseline design criteria in the design of new facilities. Each existing licensee shall address the...
10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... behavior of items relied on for safety. (b) Facility and system design and facility layout must be based on... existing facilities. (a) Baseline design criteria. Each prospective applicant or licensee shall address the following baseline design criteria in the design of new facilities. Each existing licensee shall address the...
10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... behavior of items relied on for safety. (b) Facility and system design and facility layout must be based on... existing facilities. (a) Baseline design criteria. Each prospective applicant or licensee shall address the following baseline design criteria in the design of new facilities. Each existing licensee shall address the...
10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... behavior of items relied on for safety. (b) Facility and system design and facility layout must be based on... existing facilities. (a) Baseline design criteria. Each prospective applicant or licensee shall address the following baseline design criteria in the design of new facilities. Each existing licensee shall address the...
Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
A real-time PCR diagnostic method for detection of Naegleria fowleri.
Madarová, Lucia; Trnková, Katarína; Feiková, Sona; Klement, Cyril; Obernauerová, Margita
2010-09-01
Naegleria fowleri is a free-living amoeba that can cause primary amoebic meningoencephalitis (PAM). While traditional methods for diagnosing PAM still rely on culture, more current laboratory diagnoses based on conventional PCR methods exist; however, only a few real-time PCR assays have been described so far. Here, we describe a real-time PCR-based diagnostic method using fluorescently labelled hybridization probes, with a LightCycler instrument and accompanying software (Roche), targeting the Naegleria fowleri Mp2Cl5 gene sequence. Using this method, no cross-reactivity with other tested epidemiologically relevant prokaryotic and eukaryotic organisms was found. The detection limit of the reaction was 1 copy of the Mp2Cl5 DNA sequence. This assay could become useful for rapid laboratory assessment of the presence or absence of Naegleria fowleri. Copyright 2009 Elsevier Inc. All rights reserved.
The least-squares finite element method for low-mach-number compressible viscous flows
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao
1994-01-01
The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods are based on the use of the staggered grid or the preconditioning technique, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that such difficulty does not exist for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.
A mobile phone user interface for image-based dietary assessment
NASA Astrophysics Data System (ADS)
Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A.; Boushey, Carol J.; Delp, Edward J.
2014-02-01
Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use.
A Mobile Phone User Interface for Image-Based Dietary Assessment
Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A.; Boushey, Carol J.; Delp, Edward J.
2016-01-01
Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use. PMID:28572696
A Mobile Phone User Interface for Image-Based Dietary Assessment.
Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A; Boushey, Carol J; Delp, Edward J
2014-02-02
Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use.
Automated object-based classification of topography from SRTM data
Drăguţ, Lucian; Eisank, Clemens
2012-01-01
We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as online download. The results are embedded in a web application with functionalities of visualization and download. PMID:22485060
Automated object-based classification of topography from SRTM data
NASA Astrophysics Data System (ADS)
Drăguţ, Lucian; Eisank, Clemens
2012-03-01
We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as online download. The results are embedded in a web application with functionalities of visualization and download.
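A much-simplified sketch of the thresholding step described above, partitioning cells by mean elevation and by the mean of a local standard deviation of elevation; the multiresolution segmentation itself (done in eCognition) is not reproduced.

```python
import numpy as np
from scipy import ndimage

def classify_terrain(dem, window=5):
    """Split a DEM into 4 classes by mean elevation and mean local roughness."""
    mean = ndimage.uniform_filter(dem, size=window)
    sq_mean = ndimage.uniform_filter(dem * dem, size=window)
    local_sd = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    high = dem > dem.mean()
    rough = local_sd > local_sd.mean()
    return high.astype(int) * 2 + rough.astype(int)   # class codes 0..3

rng = np.random.default_rng(0)
dem = np.cumsum(rng.standard_normal((200, 200)), axis=0)   # toy elevation surface
print(np.bincount(classify_terrain(dem).ravel()))          # cells per class
```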
An Investigation of Automatic Change Detection for Topographic Map Updating
NASA Astrophysics Data System (ADS)
Duncan, P.; Smit, J.
2012-08-01
Changes to the landscape are constantly occurring, and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting and capturing these changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. The method investigated for detecting changes is based on image classification as well as spatial analysis and is focused on urban landscapes. The major data inputs into this study are high resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large scale land-use mapping and that object-oriented approaches hold more promise. Even with object-oriented image classification, however, generalization of techniques on a broad scale has provided inconsistent results. A solution may lie in a hybrid approach of pixel- and object-oriented techniques.
Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando
2015-07-21
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force-field are consistent with previous studies of these bilayers.
NASA Astrophysics Data System (ADS)
Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando
2015-07-01
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force-field are consistent with previous studies of these bilayers.
Robustly detecting differential expression in RNA sequencing data using observation weights
Zhou, Xiaobei; Lindsay, Helen; Robinson, Mark D.
2014-01-01
A popular approach for comparing gene expression levels between (replicated) conditions of RNA sequencing data relies on counting reads that map to features of interest. Within such count-based methods, many flexible and advanced statistical approaches now exist and offer the ability to adjust for covariates (e.g. batch effects). Often, these methods include some sort of ‘sharing of information’ across features to improve inferences in small samples. It is important to achieve an appropriate tradeoff between statistical power and protection against outliers. Here, we study the robustness of existing approaches for count-based differential expression analysis and propose a new strategy based on observation weights that can be used within existing frameworks. The results suggest that outliers can have a global effect on differential analyses. We demonstrate the effectiveness of our new approach with real data and simulated data that reflects properties of real datasets (e.g. dispersion-mean trend) and develop an extensible framework for comprehensive testing of current and future methods. In addition, we explore the origin of such outliers, in some cases highlighting additional biological or technical factors within the experiment. Further details can be downloaded from the project website: http://imlspenticton.uzh.ch/robinson_lab/edgeR_robust/. PMID:24753412
Dall'Olmo, Giorgio; Brewin, Robert J W; Nencioli, Francesco; Organelli, Emanuele; Lefering, Ina; McKee, David; Röttgers, Rüdiger; Mitchell, Catherine; Boss, Emmanuel; Bricaud, Annick; Tilstone, Gavin
2017-11-27
Measurements of the absorption coefficient of chromophoric dissolved organic matter (ay) are needed to validate existing ocean-color algorithms. In the surface open ocean, these measurements are challenging because of low ay values. Yet, existing global datasets demonstrate that ay could contribute between 30% and 50% of the total absorption budget in the 400-450 nm spectral range, making accurate measurement of ay essential to constrain these uncertainties. In this study, we present a simple way of determining ay using a commercially-available in-situ spectrophotometer operated in underway mode. The obtained ay values were validated using independent collocated measurements. The method is simple to implement, can provide measurements with very high spatio-temporal resolution, and has an accuracy of about 0.0004 m⁻¹ and a precision of about 0.0025 m⁻¹ when compared to independent data (at 440 nm). The only limitation for using this method at sea is that it relies on the availability of relatively large volumes of ultrapure water. Despite this limitation, the method can deliver the ay data needed for validating and assessing uncertainties in ocean-colour algorithms.
Integrative analysis with ChIP-seq advances the limits of transcript quantification from RNA-seq
Liu, Peng; Sanalkumar, Rajendran; Bresnick, Emery H.; Keleş, Sündüz; Dewey, Colin N.
2016-01-01
RNA-seq is currently the technology of choice for global measurement of transcript abundances in cells. Despite its successes, isoform-level quantification remains difficult because short RNA-seq reads are often compatible with multiple alternatively spliced isoforms. Existing methods rely heavily on uniquely mapping reads, which are not available for numerous isoforms that lack regions of unique sequence. To improve quantification accuracy in such difficult cases, we developed a novel computational method, prior-enhanced RSEM (pRSEM), which uses a complementary data type in addition to RNA-seq data. We found that ChIP-seq data of RNA polymerase II and histone modifications were particularly informative in this approach. In qRT-PCR validations, pRSEM was shown to be superior than competing methods in estimating relative isoform abundances within or across conditions. Data-driven simulations suggested that pRSEM has a greatly decreased false-positive rate at the expense of a small increase in false-negative rate. In aggregate, our study demonstrates that pRSEM transforms existing capacity to precisely estimate transcript abundances, especially at the isoform level. PMID:27405803
NASA Astrophysics Data System (ADS)
Yu, H.; Wang, Z.; Zhang, C.; Chen, N.; Zhao, Y.; Sawchuk, A. P.; Dalsing, M. C.; Teague, S. D.; Cheng, Y.
2014-11-01
Existing research on patient-specific computational hemodynamics (PSCH) relies heavily on software for anatomical extraction of blood arteries. Data reconstruction and mesh generation have to be done using existing commercial software due to the gap between medical image processing and CFD, which increases the computational burden and introduces inaccuracy during data transformation, thus limiting the medical applications of PSCH. We use the lattice Boltzmann method (LBM) to solve the level-set equation over an Eulerian distance field and implicitly and dynamically segment the artery surfaces from radiological CT/MRI imaging data. The segments feed seamlessly into the LBM-based CFD computation of PSCH, so explicit mesh construction and extra data management are avoided. The LBM is ideally suited for GPU (graphics processing unit)-based parallel computing. The parallel acceleration over GPU achieves excellent performance in PSCH computation. An application study will be presented which segments an aortic artery from a chest CT dataset and models the PSCH of the segmented artery.
Online boosting for vehicle detection.
Chang, Wen-Chung; Cho, Chih-Wei
2010-06-01
This paper presents a real-time vision-based vehicle detection system employing an online boosting algorithm. It is an online AdaBoost approach for a cascade of strong classifiers instead of a single strong classifier. Most existing cascades of classifiers must be trained offline and cannot effectively be updated when online tuning is required. The idea is to develop a cascade of strong classifiers for vehicle detection that is capable of being online trained in response to changing traffic environments. To make the online algorithm tractable, the proposed system must efficiently tune parameters based on incoming images and up-to-date performance of each weak classifier. The proposed online boosting method can improve system adaptability and accuracy to deal with novel types of vehicles and unfamiliar environments, whereas existing offline methods rely much more on extensive training processes to reach comparable results and cannot further be updated online. Our approach has been successfully validated in real traffic environments by performing experiments with an onboard charge-coupled-device camera in a roadway vehicle.
Numerical evaluation of mobile robot navigation in static indoor environment via EGAOR Iteration
NASA Astrophysics Data System (ADS)
Dahalan, A. A.; Saudi, A.; Sulaiman, J.; Din, W. R. W.
2017-09-01
One of the key issues in mobile robot navigation is the ability of the robot to move from an arbitrary start location to a specified goal location without colliding with any obstacles while traveling, also known as the mobile robot path planning problem. In this paper, we examine the performance of a robust searching algorithm that relies on the use of harmonic potentials of the environment to generate a smooth and safe path for mobile robot navigation in a static, known indoor environment. The harmonic potentials are discretized using the Laplacian operator to form a system of algebraic approximation equations. This linear algebraic system is then solved via the 4-Point Explicit Group Accelerated Over-Relaxation (4-EGAOR) iterative method for rapid computation. The performance of the proposed algorithm is then compared and analyzed against existing algorithms in terms of number of iterations and execution time. The results show that the proposed algorithm performs better than the existing methods.
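A sketch of harmonic-potential path planning on a toy grid, solved here with ordinary point SOR rather than the block 4-EGAOR scheme analysed in the paper; the grid, obstacle layout, and relaxation factor are assumptions.

```python
import numpy as np

def harmonic_potential(occupancy, goal, omega=1.8, sweeps=3000):
    """Relax the Laplace equation with SOR; obstacles stay at 1, the goal at 0."""
    u = np.ones_like(occupancy, dtype=float)
    u[goal] = 0.0
    free = (occupancy == 0)
    free[goal] = False
    for _ in range(sweeps):
        for i, j in zip(*np.nonzero(free)):          # Gauss-Seidel sweep with over-relaxation
            new = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1])
            u[i, j] += omega * (new - u[i, j])
    return u

def descend(u, start):
    """Follow the steepest-descent neighbour until the goal (potential 0) is reached."""
    path, cell = [start], start
    for _ in range(500):
        i, j = cell
        cell = min([(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)], key=lambda c: u[c])
        path.append(cell)
        if u[cell] == 0.0:
            break
    return path

grid = np.zeros((20, 20), dtype=int)
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1   # boundary walls
grid[5:15, 10] = 1                                        # an obstacle wall
u = harmonic_potential(grid, goal=(17, 17))
print(descend(u, start=(2, 2))[-1])                       # should end at the goal cell
```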
NASA Astrophysics Data System (ADS)
Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.
2016-11-01
This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
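A generic convex surrogate for the decomposition described above, written with cvxpy: a nuclear norm for the low-rank block plus a column-wise l2/l1 penalty for the column-sparse block. The regularization weight and synthetic data are assumptions, and this is not the exact program or guarantee analysed in the paper.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 20, 60
low_rank = rng.standard_normal((n_channels, 3)) @ rng.standard_normal((3, n_samples))
attack = np.zeros((n_channels, n_samples))
attack[:, 45:] = 2.0 * rng.standard_normal((n_channels, 15))   # attacked time columns
measurements = low_rank + attack

L = cp.Variable((n_channels, n_samples))
C = cp.Variable((n_channels, n_samples))
lam = 0.6                                                      # assumed weight
objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.norm(C, axis=0)))
problem = cp.Problem(objective, [L + C == measurements])
problem.solve()

# Inspect the recovered low-rank component and the columns flagged as attacked.
print(np.linalg.matrix_rank(L.value, tol=1e-3),
      np.count_nonzero(np.linalg.norm(C.value, axis=0) > 1e-3))
```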
Torque-mixing magnetic resonance spectroscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Losby, Joseph; Fani Sani, Fatemeh; Grandmont, Dylan T.; Diao, Zhu; Belov, Miro; Burgess, Jacob A.; Compton, Shawn R.; Hiebert, Wayne K.; Vick, Doug; Mohammad, Kaveh; Salimi, Elham; Bridges, Gregory E.; Thomson, Douglas J.; Freeman, Mark R.
2016-10-01
An optomechanical platform for magnetic resonance spectroscopy will be presented. The method relies on frequency mixing of orthogonal RF fields to yield a torque amplitude (arising from the transverse component of a precessing dipole moment, in analogy to magnetic resonance detection by electromagnetic induction) on a miniaturized resonant mechanical torsion sensor. In contrast to induction, the method is fully broadband and allows for simultaneous observation of the equilibrium net magnetic moment alongside the associated magnetization dynamics. To illustrate the method, comprehensive electron spin resonance spectra of a mesoscopic, single-crystal YIG disk at room temperature will be presented, along with situations where torque spectroscopy can offer complementary information to existing magnetic resonance detection techniques. The authors are very grateful for support from NSERC, CRC, AITF, and NINT. Reference: Science 350, 798 (2015).
An indirect approach to the extensive calculation of relationship coefficients
Colleau, Jean-Jacques
2002-01-01
A method is described for calculating population statistics on relationship coefficients without using the corresponding individual data. It relies on the structure of the inverse of the numerator relationship matrix between the individuals under investigation and their ancestors. Computation times were observed on simulated populations and compared with those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of the inbreeding coefficients themselves. Relative performance of the indirect method was good except when the population contained many full-sibs over many generations. PMID:12270102
Explosion safety in industrial electrostatics
NASA Astrophysics Data System (ADS)
Szabó, S. V.; Kiss, I.; Berta, I.
2011-01-01
Complicated industrial systems are often endangered by electrostatic hazards, both atmospheric (the lightning phenomenon, primary and secondary lightning protection) and industrial (technological problems caused by static charging, and fire and explosion hazards). According to the classical approach, protective methods have to be used in order to remove electrostatic charging and to avoid damage; however, no attempt is made to compute the risk before and after applying the protective method, relying instead on well-educated and practiced expertise. The Budapest School of Electrostatics - in close cooperation with industrial partners - develops new suitable solutions for probability-based decision support (Static Control Up-to-date Technology, SCOUT) using soft computing methods. This new approach can be used to assess and audit existing systems and - using the predictive power of the models - to design and plan activities in industrial electrostatics.
Noise-Assisted Concurrent Multipath Traffic Distribution in Ad Hoc Networks
Murata, Masayuki
2013-01-01
The concept of biologically inspired networking has been introduced to tackle unpredictable and unstable situations in computer networks, especially in wireless ad hoc networks where network conditions are continuously changing, resulting in the need for robustness and adaptability of control methods. Unfortunately, existing methods often rely heavily on detailed knowledge of each network component and on preconfigured, that is, fine-tuned, parameters. In this paper, we utilize a new concept, called attractor perturbation (AP), which enables controlling the network performance using only end-to-end information. Based on AP, we propose a concurrent multipath traffic distribution method, which aims at lowering the average end-to-end delay by adjusting only the transmission rate on each path. We demonstrate through simulations that, by utilizing the attractor perturbation relationship, the proposed method achieves a lower average end-to-end delay compared to other methods which do not take fluctuations into account. PMID:24319375
Unstructured viscous grid generation by advancing-front method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
1993-01-01
A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been primarily designed with three-dimensional problems in mind, making it extendible to tetrahedral, viscous grid generation.
Martinez, Carlos A.; Barr, Kenneth; Kim, Ah-Ram; Reinitz, John
2013-01-01
Synthetic biology offers novel opportunities for elucidating transcriptional regulatory mechanisms and enhancer logic. Complex cis-regulatory sequences, like the ones driving expression of the Drosophila even-skipped gene, have proven difficult to design from existing knowledge, presumably due to the large number of protein-protein interactions needed to drive the correct expression patterns of genes in multicellular organisms. This work discusses two novel computational methods for the custom design of enhancers that employ a sophisticated, empirically validated transcriptional model, optimization algorithms, and synthetic biology. These synthetic elements have both utilitarian and academic value, including improving existing regulatory models and addressing evolutionary questions. The first method involves the use of simulated annealing to explore the sequence space for synthetic enhancers whose expression output fits a given search criterion. The second method uses a novel optimization algorithm to find functionally accessible pathways between two enhancer sequences. These paths describe a set of mutations wherein the predicted expression pattern does not significantly vary at any point along the path. Both methods rely on a predictive mathematical framework that maps the enhancer sequence space to functional output. PMID:23732772
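A generic simulated-annealing loop over DNA sequences, to make the first search scheme concrete. The paper's fitness is the output of a fitted transcriptional model; here a toy objective (matching a target GC content) stands in for it purely to show the mechanics.

```python
import math
import random

BASES = "ACGT"

def fitness(seq, target_gc=0.6):
    """Toy objective: negative distance of GC content from a target value."""
    gc = sum(b in "GC" for b in seq) / len(seq)
    return -abs(gc - target_gc)

def anneal(length=200, steps=20000, t_start=1.0, t_end=1e-3, seed=0):
    rng = random.Random(seed)
    seq = [rng.choice(BASES) for _ in range(length)]
    current = fitness(seq)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling schedule
        i = rng.randrange(length)
        old = seq[i]
        seq[i] = rng.choice(BASES)                          # propose a point mutation
        new = fitness(seq)
        if new >= current or rng.random() < math.exp((new - current) / t):
            current = new                                   # accept
        else:
            seq[i] = old                                    # reject and revert
    return "".join(seq), current

print(anneal()[1])    # close to 0, i.e. the toy objective is met
```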
Bilinguals' Existing Languages Benefit Vocabulary Learning in a Third Language
ERIC Educational Resources Information Center
Bartolotti, James; Marian, Viorica
2017-01-01
Learning a new language involves substantial vocabulary acquisition. Learners can accelerate this process by relying on words with native-language overlap, such as cognates. For bilingual third language learners, it is necessary to determine how their two existing languages interact during novel language learning. A scaffolding account predicts…
SPATIALLY-BALANCED SURVEY DESIGN FOR GROUNDWATER USING EXISTING WELLS
Many states have a monitoring program to evaluate the water quality of groundwater across the state. These programs rely on existing wells for access to the groundwater, due to the high cost of drilling new wells. Typically, a state maintains a database of all well locations, in...
Imaging the inside of thick structures using cosmic rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guardincerri, E., E-mail: elenaguardincerri@lanl.gov; Durham, J. M.; Morris, C.
2016-01-15
The authors present here a new method to image reinforcement elements inside thick structures and the results of a demonstration measurement performed on a mock-up wall built at Los Alamos National Laboratory. The method, referred to as “multiple scattering muon radiography”, relies on the use of cosmic-ray muons as probes. The work described in this article was performed to prove the viability of the technique as a means to image the interior of the dome of Florence Cathedral Santa Maria del Fiore, one of the UNESCO World Heritage sites and among the highest profile buildings in existence. Its result shows the effectiveness of the technique as a tool to radiograph thick structures and image denser objects inside them.
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2001-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields, more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back and Dasgupta and Michalewicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
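A compact (mu + lambda) evolution strategy with Gaussian ("bell curve") mutations on real-valued design variables, to make the class of methods concrete; the two-normal-distribution sampling of BCB itself is not reproduced here.

```python
import numpy as np

def evolve(objective, dim=10, mu=5, lam=20, sigma=0.3, generations=200, seed=0):
    """Minimize `objective` with a simple (mu + lambda) evolution strategy."""
    rng = np.random.default_rng(seed)
    parents = rng.uniform(-5, 5, size=(mu, dim))
    for _ in range(generations):
        idx = rng.integers(0, mu, size=lam)                      # pick parents
        children = parents[idx] + sigma * rng.standard_normal((lam, dim))
        pool = np.vstack([parents, children])
        scores = np.apply_along_axis(objective, 1, pool)
        parents = pool[np.argsort(scores)[:mu]]                  # keep the best mu
    return parents[0], objective(parents[0])

sphere = lambda x: float(np.sum(x * x))                          # toy design objective
best, value = evolve(sphere)
print(round(value, 4))                                           # approaches 0
```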
Scalable Track Detection in SAR CCD Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, James G; Quach, Tu-Thach
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
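A small fully convolutional network made only of 3-by-3 convolutions, in the spirit of the architecture summarised above; the depth, width, and per-pixel output head are assumptions, not the published model.

```python
import torch
import torch.nn as nn

class TrackNet(nn.Module):
    """Stack of 3x3 convolutions producing a per-pixel track probability map."""
    def __init__(self, channels=16, depth=5):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(depth):
            layers += [nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = channels
        layers.append(nn.Conv2d(in_ch, 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):                      # x: (batch, 1, H, W) CCD image
        return torch.sigmoid(self.net(x))      # per-pixel track probability

model = TrackNet()
ccd = torch.randn(2, 1, 128, 128)              # two synthetic CCD tiles
print(model(ccd).shape)                        # torch.Size([2, 1, 128, 128])
```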
Imaging the inside of thick structures using cosmic rays
Guardincerri, E.; Durham, J. M.; Morris, C.; ...
2016-01-01
Here, we present a new method to image reinforcement elements inside thick structures and the results of a demonstration measurement performed on a mock-up wall built at Los Alamos National Laboratory. The method, referred to as “multiple scattering muon radiography”, relies on the use of cosmic-ray muons as probes. Our work was performed to prove the viability of the technique as a means to image the interior of the dome of Florence Cathedral Santa Maria del Fiore, one of the UNESCO World Heritage sites and among the highest profile buildings in existence. This result shows the effectiveness of the technique as a tool to radiograph thick structures and image denser objects inside them.
Topological photonic crystals with zero Berry curvature
NASA Astrophysics Data System (ADS)
Liu, Feng; Deng, Hai-Yao; Wakabayashi, Katsunori
2018-02-01
Topological photonic crystals are designed based on the concept of Zak's phase rather than the topological invariants such as the Chern number and spin Chern number, which rely on the existence of a nonvanishing Berry curvature. Our photonic crystals (PCs) are made of pure dielectrics and sit on a square lattice obeying the C4v point-group symmetry. Two varieties of PCs are considered: one closely resembles the electronic two-dimensional Su-Schrieffer-Heeger model, and the other extends this analogy. In both cases, the topological transitions are induced by adjusting the lattice constants. Topological edge modes (TEMs) are shown to exist within the nontrivial photonic band gaps on the termination of those PCs. The high efficiency of these TEMs in transferring electromagnetic energy against several types of disorder has been demonstrated using the finite-element method.
Lombaert, Herve; Grady, Leo; Polimeni, Jonathan R.; Cheriet, Farida
2013-01-01
Existing methods for surface matching are limited by the trade-off between precision and computational efficiency. Here we present an improved algorithm for dense vertex-to-vertex correspondence that uses direct matching of features defined on a surface and improves it by using spectral correspondence as a regularization. This algorithm has the speed of both feature matching and spectral matching while exhibiting greatly improved precision (distance errors of 1.4%). The method, FOCUSR, incorporates implicitly such additional features to calculate the correspondence and relies on the smoothness of the lowest-frequency harmonics of a graph Laplacian to spatially regularize the features. In its simplest form, FOCUSR is an improved spectral correspondence method that nonrigidly deforms spectral embeddings. We provide here a full realization of spectral correspondence where virtually any feature can be used as additional information using weights on graph edges, but also on graph nodes and as extra embedded coordinates. As an example, the full power of FOCUSR is demonstrated in a real case scenario with the challenging task of brain surface matching across several individuals. Our results show that combining features and regularizing them in a spectral embedding greatly improves the matching precision (to a sub-millimeter level) while performing at much greater speed than existing methods. PMID:23868776
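The ingredient named in the abstract, the lowest-frequency harmonics of a graph Laplacian used as a smooth spectral embedding, can be sketched as follows. This is not FOCUSR itself; the weighting scheme and the dense eigensolver are simplifications for illustration.

    import numpy as np

    def laplacian_harmonics(edges, weights, n_vertices, k=5):
        """Return the k lowest-frequency non-constant harmonics of a graph Laplacian.

        edges:   (m, 2) array of vertex-index pairs from a surface mesh graph
        weights: (m,) positive edge weights (e.g. combining geometry and features)
        """
        W = np.zeros((n_vertices, n_vertices))
        for (i, j), w in zip(edges, weights):
            W[i, j] = W[j, i] = w
        L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
        vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
        return vecs[:, 1:k + 1]                 # skip the constant zero-frequency mode

    # Tiny example: a 5-vertex path graph; real meshes would use sparse eigensolvers.
    edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
    emb = laplacian_harmonics(edges, np.ones(4), n_vertices=5, k=2)
    print(emb.shape)   # (5, 2): smooth spectral coordinates used to regularize matching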
Accuracy of Time Integration Approaches for Stiff Magnetohydrodynamics Problems
NASA Astrophysics Data System (ADS)
Knoll, D. A.; Chacon, L.
2003-10-01
The simulation of complex physical processes with multiple time scales presents a continuing challenge to the computational plasma physicist due to the co-existence of fast and slow time scales. Within computational plasma physics, practitioners have developed and used linearized methods, semi-implicit methods, and time splitting in an attempt to tackle such problems. All of these methods are understood to generate numerical error. We are currently developing algorithms which remove such error for MHD problems [1,2]. These methods do not rely on linearization or time splitting. We are also attempting to analyze the errors introduced by existing "implicit" methods using modified equation analysis (MEA) [3]. In this presentation we will briefly cover the major findings in [3]. We will then extend this work further into MHD. This analysis will be augmented with numerical experiments with the hope of gaining insight, particularly into how these errors accumulate over many time steps. [1] L. Chacon, D.A. Knoll, J.M. Finn, J. Comput. Phys., vol. 178, pp. 15-36 (2002) [2] L. Chacon and D.A. Knoll, J. Comput. Phys., vol. 188, pp. 573-592 (2003) [3] D.A. Knoll, L. Chacon, L.G. Margolin, V.A. Mousseau, J. Comput. Phys., vol. 185, pp. 583-611 (2003)
De Geuser, F; Lefebvre, W
2011-03-01
In this study, we propose a fast automatic method providing the matrix concentration in an atom probe tomography (APT) data set containing two phases or more. The principle of this method relies on the calculation of the relative amount of isolated solute atoms (i.e., not surrounded by a similar solute atom) as a function of a distance d in the APT reconstruction. Simulated data sets have been generated to test the robustness of this new tool and demonstrate that rapid and reproducible results can be obtained without the need of any user input parameter. The method has then been successfully applied to a ternary Al-Zn-Mg alloy containing a fine dispersion of hardening precipitates. The relevance of this method for direct estimation of matrix concentration is discussed and compared with the existing methodologies. Copyright © 2010 Wiley-Liss, Inc.
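A hedged sketch of the central quantity described, the fraction of solute atoms that have no like-solute neighbour within a distance d, using a k-d tree over the reconstruction. The distance grid and the synthetic positions are illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    def isolated_solute_fraction(solute_xyz, distances):
        """Fraction of solute atoms with no other like-solute atom within distance d,
        evaluated for each d in `distances` (same length unit as the positions)."""
        tree = cKDTree(solute_xyz)
        fractions = []
        for d in distances:
            neighbours = tree.query_ball_point(solute_xyz, r=d)
            counts = np.array([len(nb) for nb in neighbours])   # each atom counts itself
            fractions.append(float(np.mean(counts == 1)))       # count == 1 -> isolated
        return np.array(fractions)

    # Toy data: random "matrix" solute positions in a 100 x 100 x 100 nm box.
    rng = np.random.default_rng(1)
    pts = rng.uniform(0.0, 100.0, size=(5000, 3))
    print(isolated_solute_fraction(pts, distances=[0.5, 1.0, 2.0]))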
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2014-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle. The acceptable cone angle variation would rely on the aerodynamic requirements.
I-SonReb: an improved NDT method to evaluate the in situ strength of carbonated concrete
NASA Astrophysics Data System (ADS)
Breccolotti, Marco; Bonfigli, Massimo F.
2015-10-01
Concrete strength evaluated in situ by means of the conventional SonReb method can be highly overestimated in presence of carbonation. This latter, in fact, is responsible for the physical and chemical alteration of the outer layer of concrete. As most of the existing concrete structures are subjected to carbonation, it is of high importance to overcome this problem. In this paper, an Improved SonReb method (I-SonReb) for carbonated concretes is proposed. It relies on the definition of a correction coefficient of the measured Rebound index as a function of the carbonated concrete cover thickness, an additional parameter to be measured during in situ testing campaigns. The usefulness of the method has been validated showing the improvement in the accuracy of concrete strength estimation from two sets of NDT experimental data collected from investigations on real structures.
Intrinsic Frequency and the Single Wave Biopsy
Petrasek, Danny; Pahlevan, Niema M.; Tavallali, Peyman; Rinderknecht, Derek G.; Gharib, Morteza
2015-01-01
Insulin resistance is the hallmark of classical type II diabetes. In addition, insulin resistance plays a central role in metabolic syndrome, which astonishingly affects 1 out of 3 adults in North America. The insulin resistance state can precede the manifestation of diabetes and hypertension by years. Insulin resistance is correlated with a low-grade inflammatory condition, thought to be induced by obesity as well as other conditions. Currently, the methods to measure and monitor insulin resistance, such as the homeostatic model assessment and the euglycemic insulin clamp, can be impractical, expensive, and invasive. Abundant evidence exists that relates increased pulse pressure, pulse wave velocity (PWV), and vascular dysfunction with insulin resistance. We introduce a potential method of assessing insulin resistance that relies on a novel signal-processing algorithm, the intrinsic frequency method (IFM). The method requires a single pulse pressure wave, thus the term “wave biopsy.” PMID:26183600
Unsupervised daily routine and activity discovery in smart homes.
Jie Yin; Qing Zhang; Karunanithi, Mohan
2015-08-01
The ability to accurately recognize daily activities of residents is a core premise of smart homes to assist with remote health monitoring. Most of the existing methods rely on a supervised model trained from a preselected and manually labeled set of activities, which are often time-consuming and costly to obtain in practice. In contrast, this paper presents an unsupervised method for discovering daily routines and activities for smart home residents. Our proposed method first uses a Markov chain to model a resident's locomotion patterns at different times of day and discover clusters of daily routines at the macro level. For each routine cluster, it then drills down to further discover room-level activities at the micro level. The automatic identification of daily routines and activities is useful for understanding indicators of functional decline of elderly people and suggesting timely interventions.
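The first stage described, a Markov chain over room-level locations for a given time of day, reduces to estimating a transition matrix from a label sequence; a minimal sketch follows (room names and the time-of-day split are placeholders).

    import numpy as np

    def transition_matrix(location_sequence, rooms):
        """Estimate a first-order Markov transition matrix from a room-label sequence."""
        index = {room: k for k, room in enumerate(rooms)}
        counts = np.zeros((len(rooms), len(rooms)))
        for a, b in zip(location_sequence[:-1], location_sequence[1:]):
            counts[index[a], index[b]] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    rooms = ["bedroom", "bathroom", "kitchen", "living_room"]
    morning = ["bedroom", "bathroom", "kitchen", "kitchen", "living_room"]
    P_morning = transition_matrix(morning, rooms)
    print(P_morning)   # rows sum to 1 for visited rooms; such matrices feed the clustering stage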
Developing an Automated Method for Detection of Operationally Relevant Ocean Fronts and Eddies
NASA Astrophysics Data System (ADS)
Rogers-Cotrone, J. D.; Cadden, D. D. H.; Rivera, P.; Wynn, L. L.
2016-02-01
Since the early 1990s, the U.S. Navy has utilized an observation-based process for identification of frontal systems and eddies. These Ocean Feature Assessments (OFA) rely on trained analysts to identify and position ocean features using satellite-observed sea surface temperatures. Meanwhile, as enhancements and expansion of the Navy's Hybrid Coordinate Ocean Model (HYCOM) and Regional Navy Coastal Ocean Model (RNCOM) domains have proceeded, the Naval Oceanographic Office (NAVO) has provided Tactical Oceanographic Feature Assessments (TOFA) that are based on data-validated model output but also rely on analyst identification of significant features. A recently completed project has migrated OFA production to the ArcGIS-based Acoustic Reach-back Cell Ocean Analysis Suite (ARCOAS), enabling use of additional observational datasets and significantly decreasing production time; however, it has highlighted inconsistencies inherent to this analyst-based identification process. Current efforts are focused on development of an automated method for detecting operationally significant fronts and eddies that integrates model output and observational data on a global scale. Previous attempts to employ techniques from the scientific community have been unable to meet the production tempo at NAVO. Thus, a system that incorporates existing techniques (Marr-Hildreth, Okubo-Weiss, etc.) with internally-developed feature identification methods (from model-derived physical and acoustic properties) is required. Ongoing expansions to the ARCOAS toolset have shown promising early results.
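Of the techniques named, the Okubo-Weiss criterion is simple to sketch: the parameter W = s_n^2 + s_s^2 - omega^2 is strongly negative where rotation dominates strain, flagging eddy cores. The grid spacing, the toy velocity field, and any threshold below are illustrative assumptions.

    import numpy as np

    def okubo_weiss(u, v, dx, dy):
        """Okubo-Weiss parameter W = s_n**2 + s_s**2 - omega**2 on a regular grid.

        u, v: 2-D surface velocity components (m/s), shape (ny, nx)
        """
        du_dy, du_dx = np.gradient(u, dy, dx)
        dv_dy, dv_dx = np.gradient(v, dy, dx)
        s_n = du_dx - dv_dy          # normal strain
        s_s = dv_dx + du_dy          # shear strain
        omega = dv_dx - du_dy        # relative vorticity
        return s_n**2 + s_s**2 - omega**2

    # Toy solid-body vortex: strongly negative W marks a rotation-dominated eddy core.
    y, x = np.mgrid[-50:50, -50:50] * 1.0e3          # 1 km grid spacing
    u, v = -y * 1e-5, x * 1e-5
    W = okubo_weiss(u, v, dx=1.0e3, dy=1.0e3)
    print(W.min() < 0)   # True for the vortex interior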
Discovering Deeply Divergent RNA Viruses in Existing Metatranscriptome Data with Machine Learning
NASA Astrophysics Data System (ADS)
Rivers, A. R.
2016-02-01
Most sampling of RNA viruses and phages has been directed toward a narrow range of hosts and environments. Several marine metagenomic studies have examined the RNA viral fraction in aquatic samples and found a number of picornaviruses and uncharacterized sequences. The lack of homology to known protein families has limited the discovery of new RNA viruses. We developed a computational method for identifying RNA viruses that relies on information in the codon transition probabilities of viral sequences to train a classifier. This approach does not rely on homology, but it has higher information content than other reference-free methods such as tetranucleotide frequency. Training and validation with RefSeq data gave true positive and true negative rates of 99.6% and 99.5% on the highly imbalanced validation sets (0.2% viruses) that, like the metatranscriptomes themselves, contain mostly non-viral sequences. To further test the method, a validation dataset of putative RNA virus genomes was identified in metatranscriptomes by the presence of RNA-dependent RNA polymerase, an essential gene for RNA viruses. The classifier successfully identified 99.4% of those contigs as viral. This approach is currently being extended to screen all metatranscriptome data sequenced at the DOE Joint Genome Institute, presently 4.5 Gb of assembled data from 504 public projects representing a wide range of marine, aquatic and terrestrial environments.
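The feature named in the abstract, codon transition probabilities, can be computed as below; the downstream classifier and the authors' exact normalisation are not specified, so treat this purely as an illustrative feature extractor.

    from itertools import product
    import numpy as np

    CODONS = ["".join(c) for c in product("ACGT", repeat=3)]
    INDEX = {c: i for i, c in enumerate(CODONS)}

    def codon_transition_features(seq, frame=0):
        """64x64 codon-to-codon transition probabilities of a nucleotide sequence,
        flattened into a feature vector for a downstream classifier."""
        seq = seq.upper()
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        codons = [c for c in codons if c in INDEX]          # drop ambiguous codons
        counts = np.zeros((64, 64))
        for a, b in zip(codons[:-1], codons[1:]):
            counts[INDEX[a], INDEX[b]] += 1
        rows = counts.sum(axis=1, keepdims=True)
        probs = np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
        return probs.ravel()

    features = codon_transition_features("ATGGCGTTTAAACCCGGGTTTATGTAA")
    print(features.shape)   # (4096,) per contig; stack these to train a classifier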
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacón, Enrique, E-mail: echacon@icmm.csic.es; Tarazona, Pedro, E-mail: pedro.tarazona@uam.es; Bresme, Fernando, E-mail: f.bresme@imperial.ac.uk
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke’s law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force-field are consistent with previous studies of these bilayers.
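The abstract does not spell out the estimator, but a standard fluctuation route from a time series of membrane area (here, the CU area) to an area compressibility modulus is K_A = k_B T <A> / var(A); the sketch below applies it to an arbitrary area trajectory. The units, temperature, and synthetic numbers are assumptions.

    import numpy as np

    KB = 1.380649e-23  # J/K

    def area_compressibility(area_series_nm2, temperature=300.0):
        """Area compressibility modulus K_A = kB*T*<A> / var(A) from an area time series.

        area_series_nm2: per-frame membrane area in nm^2 (e.g. the CU area of the bilayer)
        Returns K_A in N/m (J/m^2).
        """
        a = np.asarray(area_series_nm2) * 1e-18          # nm^2 -> m^2
        return KB * temperature * a.mean() / a.var(ddof=1)

    # Synthetic example: ~1% area fluctuations around 40 nm^2.
    rng = np.random.default_rng(0)
    areas = 40.0 + rng.normal(0.0, 0.4, size=20000)
    print(area_compressibility(areas))   # of order 0.1-1 N/m for these toy numbers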
Detection of food intake from swallowing sequences by supervised and unsupervised methods.
Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L; Neuman, Michael R; Sazonov, Edward
2010-08-01
Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake with free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows and thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for previously used time slices and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means) with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible by an unsupervised system that relies on the information provided by the swallowing alone.
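As a hedged illustration of the unsupervised arm (per-swallow features clustered with K-means for each individual), a minimal sketch follows. The two features and the rule used to name the "food intake" cluster are placeholders, not the authors' definitions.

    import numpy as np
    from sklearn.cluster import KMeans

    def label_swallows(features):
        """Cluster per-swallow feature vectors into two groups and flag the
        cluster with the higher mean swallowing rate as 'food intake'.

        features: (n_swallows, n_features) array; column 0 is assumed here to be
                  an instantaneous swallowing-rate feature.
        """
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
        rates = [features[km.labels_ == k, 0].mean() for k in range(2)]
        intake_cluster = int(np.argmax(rates))      # eating raises swallowing rate
        return (km.labels_ == intake_cluster).astype(int)

    # Synthetic subject: 200 swallows, ~30% during intake (higher rate and volume proxy).
    rng = np.random.default_rng(3)
    rest = rng.normal([0.5, 1.0], 0.1, size=(140, 2))
    meal = rng.normal([1.5, 2.0], 0.2, size=(60, 2))
    X = np.vstack([rest, meal])
    print(label_swallows(X).sum())   # roughly 60 swallows flagged as intake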
Detection of Food Intake from Swallowing Sequences by Supervised and Unsupervised Methods
Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L.; Neuman, Michael R.; Sazonov, Edward
2010-01-01
Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake with free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows and thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for previously used time slices and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means) with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible by an unsupervised system that relies on the information provided by the swallowing alone. PMID:20352335
ERIC Educational Resources Information Center
Pittman, Jason
2013-01-01
The use of laboratories as part of cybersecurity education is well evidenced in the existing literature. We are informed about the benefits, different types of laboratories and, in addition, underlying learning theories therein. Existing research also demonstrates that the success of employing cybersecurity laboratory exercises relies upon…
Integrative analysis with ChIP-seq advances the limits of transcript quantification from RNA-seq.
Liu, Peng; Sanalkumar, Rajendran; Bresnick, Emery H; Keleş, Sündüz; Dewey, Colin N
2016-08-01
RNA-seq is currently the technology of choice for global measurement of transcript abundances in cells. Despite its successes, isoform-level quantification remains difficult because short RNA-seq reads are often compatible with multiple alternatively spliced isoforms. Existing methods rely heavily on uniquely mapping reads, which are not available for numerous isoforms that lack regions of unique sequence. To improve quantification accuracy in such difficult cases, we developed a novel computational method, prior-enhanced RSEM (pRSEM), which uses a complementary data type in addition to RNA-seq data. We found that ChIP-seq data of RNA polymerase II and histone modifications were particularly informative in this approach. In qRT-PCR validations, pRSEM was shown to be superior to competing methods in estimating relative isoform abundances within or across conditions. Data-driven simulations suggested that pRSEM has a greatly decreased false-positive rate at the expense of a small increase in false-negative rate. In aggregate, our study demonstrates that pRSEM transforms existing capacity to precisely estimate transcript abundances, especially at the isoform level. © 2016 Liu et al.; Published by Cold Spring Harbor Laboratory Press.
NASA Astrophysics Data System (ADS)
Beaumont, Benjamin; Grippa, Tais; Lennert, Moritz; Vanhuysse, Sabine; Stephenne, Nathalie; Wolff, Eléonore
2017-07-01
Encouraged by the EU INSPIRE directive requirements and recommendations, the Walloon authorities, similar to other EU regional or national authorities, want to develop operational land-cover (LC) and land-use (LU) mapping methods using existing geodata. Urban planners and environmental monitoring stakeholders of Wallonia have to rely on outdated, mixed, and incomplete LC and LU information. The current reference map is 10 years old. Two object-based classification methods for detailed regional urban LC mapping, a rule-based and a classifier-based method, are compared. The added value of using the different existing geospatial datasets in the process is assessed. This includes the comparison between satellite and aerial optical data in terms of mapping accuracies, visual quality of the map, costs, processing, data availability, and property rights. The combination of spectral, tridimensional, and vector data provides accuracy values close to 0.90 for mapping the LC into nine categories with a minimum mapping unit of 15 m2. Such a detailed LC map offers opportunities for fine-scale environmental and spatial planning activities. Still, the regional application poses challenges regarding automation, big data handling, and processing time, which are discussed.
Ma, Junshui; Bayram, Sevinç; Tao, Peining; Svetnik, Vladimir
2011-03-15
After a review of the ocular artifact reduction literature, a high-throughput method designed to reduce the ocular artifacts in multichannel continuous EEG recordings acquired at clinical EEG laboratories worldwide is proposed. The proposed method belongs to the category of component-based methods, and does not rely on any electrooculography (EOG) signals. Based on a concept that all ocular artifact components exist in a signal component subspace, the method can uniformly handle all types of ocular artifacts, including eye-blinks, saccades, and other eye movements, by automatically identifying ocular components from decomposed signal components. This study also proposes an improved strategy to objectively and quantitatively evaluate artifact reduction methods. The evaluation strategy uses real EEG signals to synthesize realistic simulated datasets with different amounts of ocular artifacts. The simulated datasets enable us to objectively demonstrate that the proposed method outperforms some existing methods when no high-quality EOG signals are available. Moreover, the results of the simulated datasets improve our understanding of the involved signal decomposition algorithms, and provide us with insights into the inconsistency regarding the performance of different methods in the literature. The proposed method was also applied to two independent clinical EEG datasets involving 28 volunteers and over 1000 EEG recordings. This effort further confirms that the proposed method can effectively reduce ocular artifacts in large clinical EEG datasets in a high-throughput fashion. Copyright © 2011 Elsevier B.V. All rights reserved.
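The paper's own decomposition and automatic identification rules are not given in the abstract; the sketch below only shows the generic shape of a component-based, EOG-free reduction, here using FastICA and flagging components by their loading on frontal channels. The channel names, threshold, and the choice of FastICA are assumptions, not the authors' algorithm.

    import numpy as np
    from sklearn.decomposition import FastICA

    def remove_ocular_components(eeg, channel_names, frontal=("Fp1", "Fp2"), thresh=0.6):
        """Generic component-based ocular artifact reduction without any EOG channel.

        eeg: (n_samples, n_channels) continuous EEG. A component is flagged as ocular
        when most of its mixing weight sits on frontal channels, where eye activity
        projects most strongly.
        """
        ica = FastICA(random_state=0)
        sources = ica.fit_transform(eeg)                  # (n_samples, n_components)
        mixing = ica.mixing_                              # (n_channels, n_components)
        frontal_idx = [channel_names.index(ch) for ch in frontal]
        loads = np.abs(mixing)
        frontal_share = loads[frontal_idx].sum(axis=0) / loads.sum(axis=0)
        keep = frontal_share < thresh                     # drop eye-dominated components
        return sources[:, keep] @ mixing[:, keep].T + ica.mean_

    # Synthetic demo: blink-like bursts loading mainly on the two frontal channels.
    rng = np.random.default_rng(0)
    t = np.arange(2000) / 200.0
    blink = (np.sin(2 * np.pi * 0.3 * t) > 0.98).astype(float)
    raw = 0.5 * rng.normal(size=(2000, 4)) + np.outer(blink, [3.0, 3.0, 0.3, 0.1])
    clean = remove_ocular_components(raw, ["Fp1", "Fp2", "Cz", "O1"])
    print(clean.shape)   # (2000, 4) with the blink component attenuated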
An Accurate Projector Calibration Method Based on Polynomial Distortion Representation
Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua
2015-01-01
In structure light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages in traditional methods and achieves a higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247
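A hedged sketch of the underlying idea of a polynomial distortion representation: fit polynomial terms mapping ideal to observed projector pixel coordinates by least squares. The degree and the synthetic distortion below are illustrative, not the paper's model or calibration procedure.

    import numpy as np

    def poly_terms(x, y, degree=3):
        """All monomials x**i * y**j with i + j <= degree, as a design matrix."""
        cols = [x**i * y**j for i in range(degree + 1)
                             for j in range(degree + 1 - i)]
        return np.column_stack(cols)

    def fit_distortion(ideal_xy, observed_xy, degree=3):
        """Least-squares polynomial map from ideal pixel coords to observed (distorted) ones."""
        A = poly_terms(ideal_xy[:, 0], ideal_xy[:, 1], degree)
        coeffs, *_ = np.linalg.lstsq(A, observed_xy, rcond=None)
        return coeffs                                     # (n_terms, 2): one column per axis

    def apply_distortion(coeffs, xy, degree=3):
        return poly_terms(xy[:, 0], xy[:, 1], degree) @ coeffs

    # Synthetic check: a radial-like distortion is recovered to sub-pixel residuals.
    rng = np.random.default_rng(0)
    ideal = rng.uniform(-1, 1, size=(500, 2))
    r2 = (ideal**2).sum(axis=1, keepdims=True)
    observed = ideal * (1 + 0.05 * r2) + rng.normal(0, 1e-3, size=ideal.shape)
    c = fit_distortion(ideal, observed)
    print(np.abs(apply_distortion(c, ideal) - observed).max())   # small residual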
Diviani, Nicola; van den Putte, Bas; Meppelink, Corine S; van Weert, Julia C M
2016-06-01
To gain new insights into the relationship between health literacy and evaluation of online health information. Using a mixed-methods approach, forty-four semi-structured interviews were conducted followed by a short questionnaire on health literacy and eHealth literacy. Qualitative and quantitative data were merged to explore differences and similarities among respondents with different health literacy levels. Thematic analysis showed that most respondents did not question the quality of online health information and relied on evaluation criteria not recognized by existing web quality guidelines. Individuals with low health literacy, despite presenting higher eHealth literacy scores, appeared to use less established criteria and to rely more heavily on non-established ones compared to those with high health literacy. Disparities in evaluation ability among people with different health literacy might be related to differences in awareness of the issue and to the use of different evaluation criteria. Future research should quantitatively investigate the interplay between health literacy, use of established and non-established criteria, and ability to evaluate online health information. Communication and patient education efforts should aim to raise awareness on online health information quality and to promote use of established evaluation criteria, especially among low health literate citizens. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Sun, Jiaqi; Xie, Yuchen; Ye, Wenxing; Ho, Jeffrey; Entezari, Alireza; Blackband, Stephen J.
2013-01-01
In this paper, we present a novel dictionary learning framework for data lying on the manifold of square root densities and apply it to the reconstruction of diffusion propagator (DP) fields given a multi-shell diffusion MRI data set. Unlike most of the existing dictionary learning algorithms which rely on the assumption that the data points are vectors in some Euclidean space, our dictionary learning algorithm is designed to incorporate the intrinsic geometric structure of manifolds and performs better than traditional dictionary learning approaches when applied to data lying on the manifold of square root densities. Non-negativity as well as smoothness across the whole field of the reconstructed DPs is guaranteed in our approach. We demonstrate the advantage of our approach by comparing it with an existing dictionary based reconstruction method on synthetic and real multi-shell MRI data. PMID:24684004
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick
We analyze component evolution in general random intersection graphs (RIGs) and give conditions on existence and uniqueness of the giant component. Our techniques generalize the existing methods for analysis on component evolution in RIGs. That is, we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study on component evolution in Erdos-Renyi graphs. The main challenge comes from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step during the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis on component evolution, which we provide in this work, we perceive RIGs as an important random structure which has already found applications in social networks, epidemic networks, blog readership, or wireless sensor networks.
NASA Astrophysics Data System (ADS)
Rosnitskiy, P. B.; Gavrilov, L. R.; Yuldashev, P. V.; Sapozhnikov, O. A.; Khokhlova, V. A.
2017-09-01
A noninvasive ultrasound surgery method that relies on using multi-element focused phased arrays is being successfully used to destroy tumors and perform neurosurgical operations in deep structures of the human brain. However, several drawbacks that limit the possibilities of the existing systems in their clinical use have been revealed: the large size of the hemispherical array, the impossibility of its mechanical movement relative to the patient's head, the limited volume of dynamic focusing around the center of curvature of the array, and the side effect of skull overheating. Here we evaluate the possibility of using arrays of smaller size and aperture angles to achieve shock-wave formation at the focus for thermal and mechanical ablation (histotripsy) of brain tissue, taking into account current intensity limitations at the array elements. The proposed approach has potential advantages to mitigate the existing limitations and expand the possibilities of transcranial ultrasound surgery.
Wiener Chaos and Nonlinear Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lototsky, S.V.
2006-11-15
The paper discusses two algorithms for solving the Zakai equation in the time-homogeneous diffusion filtering model with possible correlation between the state process and the observation noise. Both algorithms rely on the Cameron-Martin version of the Wiener chaos expansion, so that the approximate filter is a finite linear combination of the chaos elements generated by the observation process. The coefficients in the expansion depend only on the deterministic dynamics of the state and observation processes. For real-time applications, computing the coefficients in advance improves the performance of the algorithms in comparison with most other existing methods of nonlinear filtering. The paper summarizes the main existing results about these Wiener chaos algorithms and resolves some open questions concerning the convergence of the algorithms in the noise-correlated setting. The presentation includes the necessary background on the Wiener chaos and optimal nonlinear filtering.
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik
1996-01-01
For a space mission to be successful it is vitally important to have a good control strategy. For example, with the Space Shuttle it is necessary to guarantee the success and smoothness of docking, the smoothness and fuel efficiency of trajectory control, etc. For an automated planetary mission it is important to control the spacecraft's trajectory, and after that, to control the planetary rover so that it would be operable for the longest possible period of time. In many complicated control situations, traditional methods of control theory are difficult or even impossible to apply. In general, in uncertain situations, where no routine methods are directly applicable, we must rely on the creativity and skill of the human operators. In order to simulate these experts, an intelligent control methodology must be developed. The research objectives of this project were: to analyze existing control techniques; to find out which of these techniques is the best with respect to the basic optimality criteria (stability, smoothness, robustness); and, if for some problems, none of the existing techniques is satisfactory, to design new, better intelligent control techniques.
Kreeft, Davey; Arkenbout, Ewout Aart; Henselmans, Paulus Wilhelmus Johannes; van Furth, Wouter R.; Breedveld, Paul
2017-01-01
A clear visualization of the operative field is of critical importance in endoscopic surgery. During surgery the endoscope lens can get fouled by body fluids (eg, blood), ground substance, rinsing fluid, bone dust, or smoke plumes, resulting in visual impairment. As a result, surgeons spend part of the procedure on intermittent cleaning of the endoscope lens. Current cleaning methods that rely on manual wiping or a lens irrigation system are still far from ideal, leading to longer procedure times, dirtying of the surgical site, and reduced visual acuity, potentially reducing patient safety. With the goal of finding a solution to these issues, a literature review was conducted to identify and categorize existing techniques capable of achieving optically clean surfaces, and to show which techniques can potentially be implemented in surgical practice. The review found that the most promising method for achieving surface cleanliness consists of a hybrid solution, namely, that of a hydrophilic or hydrophobic coating on the endoscope lens and the use of the existing lens irrigation system. PMID:28511635
[The role of biotechnology in pharmaceutical drug design].
Gaisser, Sibylle; Nusser, Michael
2010-01-01
Biotechnological methods have become an important tool in pharmaceutical drug research and development. Today approximately 15 % of drug revenues are derived from biopharmaceuticals. The most relevant indications are oncology, metabolic disorders and disorders of the musculoskeletal system. For the future it can be expected that the relevance of biopharmaceuticals will further increase. Currently, the share of substances in preclinical testing that rely on biotechnology is more than 25 % of all substances in preclinical testing. Products for the treatment of cancer, metabolic disorders and infectious diseases are most important. New therapeutic approaches such as RNA interference only play a minor role in current commercial drug research and development with 1.5 % of all biological preclinical substances. Investments in sustainable high technology such as biotechnology are of vital importance for a highly developed country like Germany because of its lack of raw materials. Biotechnology helps the pharmaceutical industry to develop new products, new processes, methods and services and to improve existing ones. Thus, international competitiveness can be strengthened, new jobs can be created and existing jobs preserved.
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2000-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back (1996) and Dasgupta and Michalesicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
VIIRS Product Evaluation at the Ocean PEATE
NASA Technical Reports Server (NTRS)
Patt, Frederick S.; Feldman, Gene C.
2010-01-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) mission will support the continuation of climate records generated from NASA missions. The NASA Science Data Segment (SDS) relies upon discipline-specific centers of expertise to evaluate the NPP data products for suitability as climate data records. The Ocean Product Evaluation and Analysis Tool Element (PEATE) will build upon well-established NASA capabilities within the Ocean Color program in order to evaluate the NPP Visible and Infrared Imager/Radiometer Suite (VIIRS) Ocean Color and Chlorophyll data products. The specific evaluation methods will support not only the evaluation of product quality but also the identification of sources of differences with existing data records.
Using Cardiac Biomarkers in Veterinary Practice.
Oyama, Mark A
2015-09-01
Blood-based assays for various cardiac biomarkers can assist in the diagnosis of heart disease in dogs and cats. The two most common markers are cardiac troponin-I and N-terminal pro-B-type natriuretic peptide. Biomarker assays can assist in differentiating cardiac from noncardiac causes of respiratory signs and detection of preclinical cardiomyopathy. Increasingly, studies indicate that cardiac biomarker testing can help assess the risk of morbidity and mortality in animals with heart disease. Usage of cardiac biomarker testing in clinical practice relies on proper patient selection, correct interpretation of test results, and incorporation of biomarker testing into existing diagnostic methods. Copyright © 2015 Elsevier Inc. All rights reserved.
Using cardiac biomarkers in veterinary practice.
Oyama, Mark A
2013-11-01
Blood-based assays for various cardiac biomarkers can assist in the diagnosis of heart disease in dogs and cats. The two most common markers are cardiac troponin-I and N-terminal pro-B-type natriuretic peptide. Biomarker assays can assist in differentiating cardiac from noncardiac causes of respiratory signs and detection of preclinical cardiomyopathy. Increasingly, studies indicate that cardiac biomarker testing can help assess the risk of morbidity and mortality in animals with heart disease. Usage of cardiac biomarker testing in clinical practice relies on proper patient selection, correct interpretation of test results, and incorporation of biomarker testing into existing diagnostic methods. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dunlap, Lucas
2016-11-01
I argue that Deutsch's model for the behavior of systems traveling around closed timelike curves (CTCs) relies implicitly on a substantive metaphysical assumption. Deutsch is employing a version of quantum theory with a significantly supplemented ontology of parallel existent worlds, which differ in kind from the many worlds of the Everett interpretation. Standard Everett does not support the existence of multiple identical copies of the world, which the D-CTC model requires. This has been obscured because he often refers to the branching structure of Everett as a "multiverse", and describes quantum interference by reference to parallel interacting definite worlds. But he admits that this is only an approximation to Everett. The D-CTC model, however, relies crucially on the existence of a multiverse of parallel interacting worlds. Since his model is supplemented by structures that go significantly beyond quantum theory, and play an ineliminable role in its predictions and explanations, it does not represent a quantum solution to the paradoxes of time travel.
Supporting Children in Mastering Temporal Relations of Stories: The TERENCE Learning Approach
ERIC Educational Resources Information Center
Di Mascio, Tania; Gennari, Rosella; Melonio, Alessandra; Tarantino, Laura
2016-01-01
Though temporal reasoning is a key factor for text comprehension, existing proposals for visualizing temporal information and temporal connectives proves to be inadequate for children, not only for their levels of abstraction and detail, but also because they rely on pre-existing mental models of time and temporal connectives, while in the case of…
Comparison of Spatiotemporal Mapping Techniques for Enormous Etl and Exploitation Patterns
NASA Astrophysics Data System (ADS)
Deiotte, R.; La Valley, R.
2017-10-01
The need to extract, transform, and exploit enormous volumes of spatiotemporal data has exploded with the rise of social media, advanced military sensors, wearables, automotive tracking, etc. However, current methods of spatiotemporal encoding and exploitation simultaneously limit the use of that information and increase computing complexity. Current spatiotemporal encoding methods from Niemeyer and Usher rely on a Z-order space-filling curve, a relative of Peano's 1890 space-filling curve, for spatial hashing, and interleave temporal hashes to generate a spatiotemporal encoding. However, other space-filling curves exist that provide different manifold coverings, which could promote better hashing techniques for spatial data and have the potential to map spatiotemporal data without interleaving. The concatenation of Niemeyer's and Usher's techniques provides a highly efficient space-time index. However, other methods have advantages and disadvantages regarding computational cost, efficiency, and utility. This paper explores several such methods using data sets ranging in size from 1K to 10M observations and provides a comparison of the methods.
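The baseline described, Z-order spatial hashing with interleaved temporal hashes, can be sketched as plain bit interleaving (a Morton code). Bit depths, the latitude/longitude scaling, and the time range are illustrative choices.

    def quantize(value, lo, hi, bits):
        """Map a float in [lo, hi] to an integer on a 2**bits grid."""
        frac = min(max((value - lo) / (hi - lo), 0.0), 1.0)
        return int(frac * ((1 << bits) - 1))

    def interleave(values, bits):
        """Interleave the bits of several integers (Morton / Z-order encoding)."""
        code = 0
        for b in range(bits):
            for k, v in enumerate(values):
                code |= ((v >> b) & 1) << (b * len(values) + k)
        return code

    def spatiotemporal_key(lat, lon, t_unix, bits=21):
        """Single Z-order key over latitude, longitude, and time."""
        q = [quantize(lat, -90.0, 90.0, bits),
             quantize(lon, -180.0, 180.0, bits),
             quantize(t_unix, 0.0, 2**31, bits)]
        return interleave(q, bits)

    print(hex(spatiotemporal_key(38.89, -77.04, 1_500_000_000)))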
Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin
2012-01-01
Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of nonlinear least squares in which penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online. PMID:23155351
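The authors' penalized-spline scheme is considerably more elaborate; the sketch below shows only the generic trajectory-matching nonlinear least squares variant for a constant-parameter ODE, to make the estimation problem concrete. The logistic model, noise level, and starting values are synthetic assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def logistic(t, x, r, K):
        return r * x * (1 - x / K)

    def residuals(params, t_obs, y_obs, x0):
        r, K = params
        sol = solve_ivp(logistic, (t_obs[0], t_obs[-1]), [x0],
                        t_eval=t_obs, args=(r, K), rtol=1e-8)
        return sol.y[0] - y_obs

    # Synthetic noisy observations of a logistic trajectory with true (r, K) = (0.8, 10).
    rng = np.random.default_rng(0)
    t_obs = np.linspace(0, 10, 40)
    truth = solve_ivp(logistic, (0, 10), [0.5], t_eval=t_obs, args=(0.8, 10.0), rtol=1e-8)
    y_obs = truth.y[0] + rng.normal(0, 0.2, size=t_obs.size)

    fit = least_squares(residuals, x0=[0.3, 5.0], args=(t_obs, y_obs, 0.5))
    print(fit.x)   # estimates close to the true (0.8, 10.0)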
Error correction and diversity analysis of population mixtures determined by NGS
Burroughs, Nigel J.; Evans, David J.; Ryabov, Eugene V.
2014-01-01
The impetus for this work was the need to analyse nucleotide diversity in a viral mix taken from honeybees. The paper has two findings. First, a method for correction of next generation sequencing error in the distribution of nucleotides at a site is developed. Second, a package of methods for assessment of nucleotide diversity is assembled. The error correction method is statistically based and works at the level of the nucleotide distribution rather than the level of individual nucleotides. The method relies on an error model and a sample of known viral genotypes that is used for model calibration. A compendium of existing and new diversity analysis tools is also presented, allowing hypotheses about diversity and mean diversity to be tested and associated confidence intervals to be calculated. The methods are illustrated using honeybee viral samples. Software in both Excel and Matlab and a guide are available at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/, the Warwick University Systems Biology Centre software download site. PMID:25405074
Semiblind channel estimation for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
Chen, Yi-Sheng; Song, Jyu-Han
2012-12-01
This article proposes a semiblind channel estimation method for multiple-input multiple-output orthogonal frequency-division multiplexing systems based on circular precoding. Relying on the precoding scheme at the transmitters, the autocorrelation matrix of the received data induces a structure relating the outer product of the channel frequency response matrix and precoding coefficients. This structure makes it possible to extract information about channel product matrices, which can be used to form a Hermitian matrix whose positive eigenvalues and corresponding eigenvectors yield the channel impulse response matrix. This article also tests the resistance of the precoding design to finite-sample estimation errors, and explores the effects of the precoding scheme on channel equalization by performing pairwise error probability analysis. The proposed method is immune to channel zero locations, and is reasonably robust to channel order overestimation. The proposed method is applicable to the scenarios in which the number of transmitters exceeds that of the receivers. Simulation results demonstrate the performance of the proposed method and compare it with some existing methods.
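The final algebraic step described, recovering a channel matrix from a Hermitian channel-product matrix through its positive eigenvalues and eigenvectors, amounts to a truncated eigendecomposition; the sketch below shows that step alone on synthetic data. The factor is recovered only up to a right unitary rotation, and the construction of the Hermitian matrix from the precoded statistics is not reproduced here.

    import numpy as np

    def factor_from_hermitian(Q, rank):
        """Recover F with Q ~ F @ F.conj().T from the positive eigenpairs of Q.

        The factor is only defined up to a right unitary rotation; additional
        structure (as exploited in the paper) is needed to pin down the channel.
        """
        vals, vecs = np.linalg.eigh(Q)                 # ascending eigenvalues
        top = np.argsort(vals)[::-1][:rank]
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

    # Synthetic check: rebuild a random 4x2 complex factor from its outer product.
    rng = np.random.default_rng(0)
    H = (rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))) / np.sqrt(2)
    Q = H @ H.conj().T
    F = factor_from_hermitian(Q, rank=2)
    print(np.allclose(F @ F.conj().T, Q))   # True: same outer product as H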
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
info-gibbs: a motif discovery algorithm that directly optimizes information content during sampling.
Defrance, Matthieu; van Helden, Jacques
2009-10-15
Discovering cis-regulatory elements in genome sequence remains a challenging issue. Several methods rely on the optimization of some target scoring function. The information content (IC) or relative entropy of the motif has proven to be a good estimator of transcription factor DNA binding affinity. However, these information-based metrics are usually used as a posteriori statistics rather than during the motif search process itself. We introduce here info-gibbs, a Gibbs sampling algorithm that efficiently optimizes the IC or the log-likelihood ratio (LLR) of the motif while keeping computation time low. The method compares well with existing methods like MEME, BioProspector, Gibbs or GAME on both synthetic and biological datasets. Our study shows that motif discovery techniques can be enhanced by directly focusing the search on the motif IC or the motif LLR. http://rsat.ulb.ac.be/rsat/info-gibbs
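The target score being optimized, the motif information content (relative entropy against a background model), is easy to state; the sketch below computes it for a set of aligned sites. The pseudocount and uniform background are illustrative defaults, not info-gibbs internals.

    import numpy as np

    ALPHABET = "ACGT"

    def motif_information_content(sites, background=None, pseudocount=0.25):
        """Relative entropy (bits) of the position-weight matrix built from aligned sites."""
        if background is None:
            background = {b: 0.25 for b in ALPHABET}
        width = len(sites[0])
        counts = np.full((width, 4), pseudocount)
        for site in sites:
            for pos, base in enumerate(site.upper()):
                counts[pos, ALPHABET.index(base)] += 1
        freqs = counts / counts.sum(axis=1, keepdims=True)
        bg = np.array([background[b] for b in ALPHABET])
        return float(np.sum(freqs * np.log2(freqs / bg)))

    sites = ["TGACTCA", "TGAGTCA", "TGACTCA", "TTACTCA"]
    print(motif_information_content(sites))   # higher = more constrained motif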
Ex vitro composite plants: an inexpensive, rapid method for root biology.
Collier, Ray; Fuchs, Beth; Walter, Nathalie; Kevin Lutke, William; Taylor, Christopher G
2005-08-01
Plant transformation technology is frequently the rate-limiting step in gene function analysis in non-model plants. An important tool for root biologists is the Agrobacterium rhizogenes-derived composite plant, which has made possible genetic analyses in a wide variety of transformation recalcitrant dicotyledonous plants. The novel, rapid and inexpensive ex vitro method for producing composite plants described in this report represents a significant advance over existing composite plant induction protocols, which rely on expensive and time-consuming in vitro conditions. The utility of the new system is validated by expression and RNAi silencing of GFP in transgenic roots of composite plants, and is bolstered further by experimental disruption, via RNAi silencing, of endogenous plant resistance to the plant parasitic nematode Meloidogyne incognita in transgenic roots of Lycopersicon esculentum cv. Motelle composite plants. Critical parameters of the method are described and discussed herein.
NASA Astrophysics Data System (ADS)
Wong, S. K.; Chan, V. S.; Hinton, F. L.
2001-10-01
The classic solution of the linearized drift kinetic equations in neoclassical transport theory for large-aspect-ratio tokamak flux-surfaces relies on the variational principle and the choice of "localized" distribution functions as trial functions [M.N. Rosenbluth et al., Phys. Fluids 15 (1972) 116]. Somewhat unclear in this approach are the nature and the origin of the "localization" and whether the results obtained represent the exact leading terms in an asymptotic expansion in the inverse aspect ratio. Using the method of matched asymptotic expansions, we were able to derive the leading approximations to the distribution functions and demonstrated the asymptotic exactness of the existing results. The method is also applied to the calculation of angular momentum transport [M.N. Rosenbluth et al., Plasma Phys. and Contr. Nucl. Fusion Research, 1970, Vol. 1 (IAEA, Vienna, 1971) p. 495] and the current driven by electron cyclotron waves.
Coexistence of superconductivity and magnetism by chemical design
NASA Astrophysics Data System (ADS)
Coronado, Eugenio; Martí-Gastaldo, Carlos; Navarro-Moratalla, Efrén; Ribera, Antonio; Blundell, Stephen J.; Baker, Peter J.
2010-12-01
Although the coexistence of superconductivity and ferromagnetism in one compound is rare, some examples of such materials are known to exist. Methods to physically prepare hybrid structures with both competing phases are also known, which rely on the nanofabrication of alternating conducting layers. Chemical methods of building up hybrid materials with organic molecules (superconducting layers) and metal complexes (magnetic layers) have provided examples of superconductivity with some magnetic properties, but not fully ordered. Now, we report a chemical design strategy that uses the self-assembly in solution of macromolecular nanosheet building blocks to engineer the coexistence of superconductivity and magnetism in [Ni0.66Al0.33(OH)2][TaS2] at ~4 K. The method is further demonstrated in the isostructural [Ni0.66Fe0.33(OH)2][TaS2], in which the magnetic ordering is shifted from 4 K to 16 K.
Local Descriptors of Dynamic and Nondynamic Correlation.
Ramos-Cordoba, Eloy; Matito, Eduard
2017-06-13
Quantitatively accurate electronic structure calculations rely on the proper description of electron correlation. A judicious choice of the approximate quantum chemistry method depends upon the importance of dynamic and nondynamic correlation, which is usually assessed by scalar measures. Existing measures of electron correlation do not consider separately the regions of the Cartesian space where dynamic or nondynamic correlation are most important. We introduce real-space descriptors of dynamic and nondynamic electron correlation that admit orbital decomposition. Integration of the local descriptors yields global numbers that can be used to quantify dynamic and nondynamic correlation. Illustrative examples over different chemical systems with varying electron correlation regimes are used to demonstrate the capabilities of the local descriptors. Since the expressions only require orbitals and occupation numbers, they can be readily applied in the context of local correlation methods, hybrid methods, density matrix functional theory, and fractional-occupancy density functional theory.
Visual attention capacity: a review of TVA-based patient studies.
Habekost, Thomas; Starrfelt, Randi
2009-02-01
Psychophysical studies have identified two distinct limitations of visual attention capacity: processing speed and apprehension span. Using a simple test, these cognitive factors can be analyzed by Bundesen's Theory of Visual Attention (TVA). The method has strong specificity and sensitivity, and measurements are highly reliable. As the method is theoretically founded, it also has high validity. TVA-based assessment has recently been used to investigate a broad range of neuropsychological and neurological conditions. We present the method, including the experimental paradigm and practical guidelines to patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: The parietal lobes, frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical networks that include several regions outside the visual system. The two visual capacity parameters are functionally separable, but seem to rely on largely overlapping brain areas.
Discovering relevance knowledge in data: a growing cell structures approach.
Azuaje, F; Dubitzky, W; Black, N; Adamson, K
2000-01-01
Both information retrieval and case-based reasoning systems rely on effective and efficient selection of relevant data. Typically, relevance in such systems is approximated by similarity or indexing models. However, the definition of what makes data items similar or how they should be indexed is often nontrivial and time-consuming. Based on growing cell structure artificial neural networks, this paper presents a method that automatically constructs a case retrieval model from existing data. Within the case-based reasoning (CBR) framework, the method is evaluated for two medical prognosis tasks, namely, colorectal cancer survival and coronary heart disease risk prognosis. The results of the experiments suggest that the proposed method is effective and robust. To gain a deeper insight and understanding of the underlying mechanisms of the proposed model, a detailed empirical analysis of the model's structural and behavioral properties is also provided.
MToS: A Tree of Shapes for Multivariate Images.
Carlinet, Edwin; Géraud, Thierry
2015-12-01
The topographic map of a gray-level image, also called tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has been proved very useful to achieve many image processing and pattern recognition tasks. Its definition relies on the total ordering of pixel values, so this representation does not exist for color images, or more generally, multivariate images. Common workarounds, such as marginal processing, or imposing a total order on data, are not satisfactory and yield many problems. This paper presents a method to build a tree-based representation of multivariate images, which features marginally the same properties of the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values, but we only rely on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast invariant and self-dual representation of multivariate image is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.
Help From Above: Air Force Close Air Support of the Army. 1946-1973
2003-01-01
... his leader, his command center, or the glue that held his alliance together, went unheeded in the nineteenth-century rush to realize the Prussian ... directives and concepts based on the last war," Army ground troops were trained to rely on aerial photographs rather than existing maps. ... With the ... standing quest for separation from the Army. Lacking a road map for what lay ahead, the services relied heavily upon the doctrines that had proved ...
NASA Astrophysics Data System (ADS)
Topping, David; Alibay, Irfan; Bane, Michael
2017-04-01
To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and subsequent mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this can often be used as justification for neglecting computationally expensive process descriptions. Indeed, it has so far been impossible to embed fully coupled representations of process-level knowledge for all possible compounds, even at the single aerosol particle level, so models typically rely on heavily parameterised descriptions and the true sensitivity to uncertainties in molecular properties remains unquantified. Relying on emerging numerical frameworks designed for the changing landscape of high-performance computing (HPC), in this study we focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method. Activity coefficients are often neglected with the largely untested hypothesis that they are simply too computationally expensive to include in dynamic frameworks. We present results demonstrating increased computational efficiency for a range of typical scenarios, including a profiling of the energy use resulting from reliance on such computations. As the landscape of HPC changes, the latter aspect is important to consider in future applications.
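As a point of reference for the computational cost discussed above (a schematic of the standard UNIFAC formulation, not a description of the authors' implementation): each activity coefficient splits into a combinatorial part and a residual, group-contribution part, and it is the repeated sums over functional groups in the residual term, evaluated for every particle and time step, that make the method expensive inside a dynamic framework.

```latex
% Schematic UNIFAC decomposition (standard textbook form, assumed here)
\ln \gamma_i = \ln \gamma_i^{\mathrm{C}} + \ln \gamma_i^{\mathrm{R}},
\qquad
\ln \gamma_i^{\mathrm{R}} = \sum_k \nu_k^{(i)} \left[ \ln \Gamma_k - \ln \Gamma_k^{(i)} \right]
```

Here \nu_k^{(i)} is the number of functional groups of type k in molecule i, \Gamma_k is the residual activity coefficient of group k evaluated in the mixture, and \Gamma_k^{(i)} is its value in pure component i.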
Rapid Assessment of Ecosystem Service Co-Benefits of Biodiversity Priority Areas in Madagascar
Andriamaro, Luciano; Cano, Carlos Andres; Grantham, Hedley S.; Hole, David; Juhn, Daniel; McKinnon, Madeleine; Rasolohery, Andriambolantsoa; Steininger, Marc; Wright, Timothy Max
2016-01-01
The importance of ecosystems for supporting human well-being is increasingly recognized by both the conservation and development sectors. Our ability to conserve ecosystems that people rely on is often limited by a lack of spatially explicit data on the location and distribution of ecosystem services (ES), the benefits provided by nature to people. Thus there is a need to map ES to guide conservation investments, to ensure these co-benefits are maintained. To target conservation investments most effectively, ES assessments must be rigorous enough to support conservation planning, rapid enough to respond to decision-making timelines, and often must rely on existing data. We developed a framework for rapid spatial assessment of ES that relies on expert and stakeholder consultation, available data, and spatial analyses in order to rapidly identify sites providing multiple benefits. We applied the framework in Madagascar, a country with globally significant biodiversity and a high level of human dependence on ecosystems. Our objective was to identify the ES co-benefits of biodiversity priority areas in order to guide the investment strategy of a global conservation fund. We assessed key provisioning (fisheries, hunting and non-timber forest products, and water for domestic use, agriculture, and hydropower), regulating (climate mitigation, flood risk reduction and coastal protection), and cultural (nature tourism) ES. We also conducted multi-criteria analyses to identify sites providing multiple benefits. While our approach has limitations, including the reliance on proximity-based indicators for several ES, the results were useful for targeting conservation investments by the Critical Ecosystem Partnership Fund (CEPF). Because our approach relies on available data, standardized methods for linking ES provision to ES use, and expert validation, it has the potential to quickly guide conservation planning and investment decisions in other data-poor regions. PMID:28006005
Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S
2015-01-01
Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908
Integrating asthma hazard characterization methods for consumer products.
Maier, A; Vincent, M J; Gadagbui, B; Patterson, J; Beckett, W; Dalton, P; Kimber, I; Selgrade, M J K
2014-10-01
Despite extensive study, definitive conclusions regarding the relationship between asthma and consumer products remain elusive. Uncertainties reflect the multi-faceted nature of asthma (i.e., contributions of immunologic and non-immunologic mechanisms). Many substances used in consumer products are associated with occupational asthma or asthma-like syndromes. However, risk assessment methods do not adequately predict the potential for consumer product exposures to trigger asthma and related syndromes under lower-level end-user conditions. A decision tree system is required to characterize asthma and respiratory-related hazards associated with consumer products. A system can be built to incorporate the best features of existing guidance, frameworks, and models using a weight-of-evidence (WoE) approach. With this goal in mind, we have evaluated chemical hazard characterization methods for asthma and asthma-like responses. Despite the wealth of information available, current hazard characterization methods do not definitively identify whether a particular ingredient will cause or exacerbate asthma, asthma-like responses, or sensitization of the respiratory tract at lower levels associated with consumer product use. Effective use of hierarchical lines of evidence relies on consideration of the relevance and potency of assays, organization of assays by mode of action, and better assay validation. It is anticipated that the analysis of existing methods will support the development of a refined WoE approach. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
A Discrete Fracture Network Model with Stress-Driven Nucleation and Growth
NASA Astrophysics Data System (ADS)
Lavoine, E.; Darcel, C.; Munier, R.; Davy, P.
2017-12-01
The realism of Discrete Fracture Network (DFN) models, beyond the bulk statistical properties, relies on the spatial organization of fractures, which purely stochastic DFN models do not capture. The realism can be improved by injecting prior information into DFN models from a better knowledge of the geological fracturing processes. We first develop a model using simple kinematic rules for mimicking the growth of fractures from nucleation to arrest, in order to evaluate the consequences of the DFN structure on the network connectivity and flow properties. The model generates fracture networks with power-law scaling distributions and a percentage of T-intersections that are consistent with field observations. Nevertheless, a larger complexity relying on the spatial variability of natural fracture positions cannot be explained by the random nucleation process. We propose to introduce a stress-driven nucleation in the timewise process of this kinematic model to study the correlations between nucleation, growth and existing fracture patterns. The method uses the stress field generated by existing fractures and the remote stress as an input for a Monte-Carlo sampling of nuclei centers at each time step. Networks so generated are found to have correlations over a large range of scales, with a correlation dimension that varies with time and with the function that relates the nucleation probability to stress. A sensitivity analysis of the input parameters has been performed in 3D to quantify the influence of fracture and remote stress field orientations.
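The stress-weighted Monte-Carlo nucleation step described above lends itself to a compact illustration. The sketch below is a toy version only: the "stress" field is a fabricated placeholder (a remote value plus Gaussian perturbations around existing fracture centres) rather than the mechanical computation used in the study, and the point is solely the sampling of nuclei with probability proportional to local stress.

```python
# Toy sketch of stress-weighted Monte-Carlo nucleation for one DFN time step.
# The stress field is fabricated for illustration; only the sampling logic
# (nucleation probability proportional to local stress) is the point.
import numpy as np

rng = np.random.default_rng(3)

def toy_stress(points, fracture_centres, remote=1.0, amp=4.0, scale=5.0):
    """Remote stress plus Gaussian perturbations centred on existing fractures."""
    s = np.full(len(points), remote)
    for c in fracture_centres:
        d2 = np.sum((points - c) ** 2, axis=1)
        s += amp * np.exp(-d2 / (2.0 * scale ** 2))
    return s

def sample_nuclei(fracture_centres, n_new, domain=100.0, n_candidates=5000):
    """Draw candidate centres uniformly, then keep n_new of them with
    probability proportional to the local stress value."""
    candidates = rng.uniform(0.0, domain, size=(n_candidates, 2))
    weights = toy_stress(candidates, fracture_centres)
    picked = rng.choice(n_candidates, size=n_new, replace=False,
                        p=weights / weights.sum())
    return candidates[picked]

if __name__ == "__main__":
    existing = rng.uniform(0.0, 100.0, size=(10, 2))  # existing fracture centres
    print(sample_nuclei(existing, n_new=20)[:5])      # five of the new nuclei
```

Grown fractures would then be added to the set of centres before the next time step, which is how correlations between new nucleation and the existing pattern build up.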
Viger, Mathieu L; Sheng, Wangzhong; McFearin, Cathryn L; Berezin, Mikhail Y; Almutairi, Adah
2013-11-10
Though accurately evaluating the kinetics of release is critical for validating newly designed therapeutic carriers for in vivo applications, few methods yet exist for release measurement in real time and without the need for any sample preparation. Many of the current approaches (e.g. chromatographic methods, absorption spectroscopy, or NMR spectroscopy) rely on isolation of the released material from the loaded vehicles, which require additional sample purification and can lead to loss of accuracy when probing fast kinetics of release. In this study we describe the use of time-resolved fluorescence for in situ monitoring of small molecule release kinetics from biodegradable polymeric drug delivery systems. This method relies on the observation that fluorescent reporters being released from polymeric drug delivery systems possess distinct excited-state lifetime components, reflecting their different environments in the particle suspensions, i.e., confined in the polymer matrices or free in the aqueous environment. These distinct lifetimes enable real-time quantitative mapping of the relative concentrations of dye in each population to obtain precise and accurate temporal information on the release profile of particular carrier/payload combinations. We found that fluorescence lifetime better distinguishes subtle differences in release profiles (e.g. differences associated with dye loading) than conventional steady-state fluorescence measurements, which represent the averaged dye behavior over the entire scan. Given the method's applicability to both hydrophobic and hydrophilic cargo, it could be employed to model the release of any drug-carrier combination. Copyright © 2013 Elsevier B.V. All rights reserved.
Approximate Model Checking of PCTL Involving Unbounded Path Properties
NASA Astrophysics Data System (ADS)
Basu, Samik; Ghosh, Arka P.; He, Ru
We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as
Unraveling navigational strategies in migratory insects
Merlin, Christine; Heinze, Stanley; Reppert, Steven M.
2011-01-01
Long-distance migration is a strategy some animals use to survive a seasonally changing environment. To reach favorable grounds, migratory animals have evolved sophisticated navigational mechanisms that rely on a map and compasses. In migratory insects, the existence of a map sense (sense of position) remains poorly understood, but recent work has provided new insights into the mechanisms some compasses use for maintaining a constant bearing during long-distance navigation. The best-studied directional strategy relies on a time-compensated sun compass, used by diurnal insects, for which neural circuits have begun to be delineated. Yet, a growing body of evidence suggests that migratory insects may also rely on other compasses that use night sky cues or the Earth's magnetic field. Those mechanisms are ripe for exploration. PMID:22154565
10 CFR 70.62 - Safety program and integrated safety analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... conclusion of each failure investigation of an item relied on for safety or management measure. (b) Process... methodology being used. (3) Requirements for existing licensees. Individuals holding an NRC license on...
Czajkowski, R; Pérombelon, MCM; Jafra, S; Lojkowska, E; Potrykus, M; van der Wolf, JM; Sledz, W
2015-01-01
The soft rot Enterobacteriaceae (SRE) Pectobacterium and Dickeya species (formerly classified as pectinolytic Erwinia spp.) cause important diseases on potato and other arable and horticultural crops. They may affect the growing potato plant causing blackleg and are responsible for tuber soft rot in storage thereby reducing yield and quality. Efficient and cost-effective detection and identification methods are essential to investigate the ecology and pathogenesis of the SRE as well as in seed certification programmes. The aim of this review was to collect all existing information on methods available for SRE detection. The review reports on the sampling and preparation of plant material for testing and on over thirty methods to detect, identify and differentiate the soft rot and blackleg causing bacteria to species and subspecies level. These include methods based on biochemical characters, serology, molecular techniques which rely on DNA sequence amplification as well as several less-investigated ones. PMID:25684775
NASA Technical Reports Server (NTRS)
1971-01-01
Methods for presterilization cleaning or decontamination of spacecraft hardware to reduce microbial load, without harming materials or spacecraft components, are investigated. Three methods were considered: (1) chemicals in liquid form, relying on physical removal as well as bacterial or bacteriostatic action; (2) chemicals used in the gaseous phase, relying on bacterial activity; and (3) mechanical cleaning relying on physical removal of organisms. These methods were evaluated in terms of their effectiveness in microbial burden reduction and compatibility with spacecraft hardware. Results show chemical methods were effective against spore microorganisms but were harmful to spacecraft materials. Mechanical methods were also effective with the degree depending upon the type of instrument employed. Mechanical methods caused problems in handling the equipment, due to vacuum pressure damaging the very thin layered materials used for shielding, and the bristles used in the process caused streaks or abrasions on some spacecraft components.
NASA Astrophysics Data System (ADS)
Doran-Peterson, Joy; Jangid, Amruta; Brandon, Sarah K.; Decrescenzo-Henriksen, Emily; Dien, Bruce; Ingram, Lonnie O.
Ethanol production by fermentation of lignocellulosic biomass-derived sugars involves a fairly ancient art and an ever-evolving science. Production of ethanol from lignocellulosic biomass is not avant-garde, and wood ethanol plants have been in existence since at least 1915. Most current ethanol production relies on starch- and sugar-based crops as the substrate; however, limitations of these materials and competing value for human and animal feeds is renewing interest in lignocellulose conversion. Herein, we describe methods for both simultaneous saccharification and fermentation (SSF) and a similar but separate process for partial saccharification and cofermentation (PSCF) of lignocellulosic biomass for ethanol production using yeasts or pentose-fermenting engineered bacteria. These methods are applicable for small-scale preliminary evaluations of ethanol production from a variety of biomass sources.
Inner Ear Drug Delivery for Auditory Applications
Swan, Erin E. Leary; Mescher, Mark J.; Sewell, William F.; Tao, Sarah L.; Borenstein, Jeffrey T.
2008-01-01
Many inner ear disorders cannot be adequately treated by systemic drug delivery. A blood-cochlear barrier exists, similar physiologically to the blood-brain barrier, which limits the concentration and size of molecules able to leave the circulation and gain access to the cells of the inner ear. However, research in novel therapeutics and delivery systems has led to significant progress in the development of local methods of drug delivery to the inner ear. Intratympanic approaches, which deliver therapeutics to the middle ear, rely on permeation through tissue for access to the structures of the inner ear, whereas intracochlear methods are able to directly insert drugs into the inner ear. Innovative drug delivery systems to treat various inner ear ailments such as ototoxicity, sudden sensorineural hearing loss, autoimmune inner ear disease, and for preserving neurons and regenerating sensory cells are being explored. PMID:18848590
A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2016-12-06
This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments-some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, thus maintenance-free, and based on Wi-Fi only. We have employed two well-known propagation models-free space path loss and ITU models-which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.
A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method
Tuta, Jure; Juric, Matjaz B.
2016-01-01
This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, thus maintenance-free, and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453
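Both records above describe inferring position from Wi-Fi signal strength through propagation models. As a rough, generic illustration of that idea (not the authors' self-calibrating procedure), the sketch below assumes a simple log-distance path-loss model with hypothetical parameters and estimates a terminal position from RSSI readings taken from access points at known positions.

```python
# Illustrative sketch: terminal position from RSSI via a log-distance path-loss
# model and nonlinear least squares. P0, the exponent and the AP layout are
# hypothetical values, not parameters from the paper.
import numpy as np
from scipy.optimize import least_squares

P0 = -40.0   # assumed RSSI (dBm) at 1 m from an access point
N_EXP = 2.5  # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Invert the model RSSI = P0 - 10 * n * log10(d)."""
    return 10.0 ** ((P0 - np.asarray(rssi_dbm, dtype=float)) / (10.0 * N_EXP))

def locate(ap_positions, rssi_readings):
    """Least-squares (x, y) position matching the distances implied by RSSI."""
    aps = np.asarray(ap_positions, dtype=float)
    dists = rssi_to_distance(rssi_readings)

    def residuals(p):
        return np.linalg.norm(aps - p, axis=1) - dists

    return least_squares(residuals, aps.mean(axis=0)).x

if __name__ == "__main__":
    aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]  # known AP positions (m)
    rssi = [-55.0, -62.0, -60.0, -68.0]                       # one reading per AP
    print("estimated position:", locate(aps, rssi))
```

A self-calibrating system of the kind described would additionally infer P0 and the path-loss exponent from the environment instead of assuming them.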
Husch, Andreas; V Petersen, Mikkel; Gemmar, Peter; Goncalves, Jorge; Hertel, Frank
2018-01-01
Deep brain stimulation (DBS) is a neurosurgical intervention where electrodes are permanently implanted into the brain in order to modulate pathologic neural activity. The post-operative reconstruction of the DBS electrodes is important for an efficient stimulation parameter tuning. A major limitation of existing approaches for electrode reconstruction from post-operative imaging that prevents the clinical routine use is that they are manual or semi-automatic, and thus both time-consuming and subjective. Moreover, the existing methods rely on a simplified model of a straight line electrode trajectory, rather than the more realistic curved trajectory. The main contribution of this paper is that for the first time we present a highly accurate and fully automated method for electrode reconstruction that considers curved trajectories. The robustness of our proposed method is demonstrated using a multi-center clinical dataset consisting of N = 44 electrodes. In all cases the electrode trajectories were successfully identified and reconstructed. In addition, the accuracy is demonstrated quantitatively using a high-accuracy phantom with known ground truth. In the phantom experiment, the method could detect individual electrode contacts with high accuracy and the trajectory reconstruction reached an error level below 100 μm (0.046 ± 0.025 mm). An implementation of the method is made publicly available such that it can directly be used by researchers or clinicians. This constitutes an important step towards future integration of lead reconstruction into standard clinical care.
Dissecting Reactor Antineutrino Flux Calculations
NASA Astrophysics Data System (ADS)
Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.
2017-09-01
Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In the present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. We observe that including a shape correction of about +6 % MeV-1 in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.
Dissecting Reactor Antineutrino Flux Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.
2017-09-15
Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In our present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. Here, we observe that including a shape correction of about +6% MeV^-1 in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.
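Both entries refer to converting aggregate electron spectra into antineutrino spectra. Schematically, and as a generic statement of the usual virtual-branch procedure rather than the authors' exact implementation, the measured electron spectrum is fit with a set of virtual allowed beta branches and each branch is converted through energy conservation:

```latex
% Schematic virtual-branch conversion (generic form, assumed here)
S_{\beta}(E_e) \simeq \sum_i a_i \, S_{\mathrm{allowed}}(E_e;\, E_{0,i}, \bar{Z}),
\qquad
S_{\bar{\nu}}(E_\nu) \simeq \sum_i a_i \, S_{\mathrm{allowed}}(E_{0,i} - E_\nu;\, E_{0,i}, \bar{Z})
```

where the E_{0,i} are the branch endpoints, the a_i are the fitted branch weights and \bar{Z} is the effective charge; the shape correction of about +6% MeV^-1 discussed above modifies the assumed allowed shape S_allowed.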
NASA Astrophysics Data System (ADS)
Gilmore, T. E.; Zlotnik, V. A.; Johnson, M.
2017-12-01
Groundwater table elevations are one of the most fundamental measurements used to characterize unconfined aquifers, groundwater flow patterns, and aquifer sustainability over time. In this study, we developed an analytical model that relies on analysis of groundwater elevation contour (equipotential) shape, aquifer transmissivity, and streambed gradient between two parallel, perennial streams. Using two existing regional water table maps, created at different times using different methods, our analysis of groundwater elevation contours, transmissivity and streambed gradient produced groundwater recharge rates (42-218 mm yr-1) that were consistent with previous independent recharge estimates from different methods. The three regions we investigated overly the High Plains Aquifer in Nebraska and included some areas where groundwater is used for irrigation. The three regions ranged from 1,500 to 3,300 km2, with either Sand Hills surficial geology, or Sand Hills transitioning to loess. Based on our results, the approach may be used to increase the value of existing water table maps, and may be useful as a diagnostic tool to evaluate the quality of groundwater table maps, identify areas in need of detailed aquifer characterization and expansion of groundwater monitoring networks, and/or as a first approximation before investing in more complex approaches to groundwater recharge estimation.
How to Exist Politically and Learn from It: Hannah Arendt and the Problem of Democratic Education
ERIC Educational Resources Information Center
Biesta, Gert
2010-01-01
Background/Context: In discussions about democratic education, there is a strong tendency to see the role of education as that of the preparation of children and young people for their future participation in democratic life. A major problem with this view is that it relies on the idea that the guarantee for democracy lies in the existence of a…
For patients with difficult-to-treat cancers, doctors increasingly rely on genomic testing of tumors to identify errors in the DNA that indicate a tumor can be targeted by existing therapies. But this approach overlooks another potential marker — rogue proteins — that may be driving cancer cells and also could be targeted with existing treatments.
Uncovering text mining: A survey of current work on web-based epidemic intelligence
Collier, Nigel
2012-01-01
Real-world pandemics such as SARS in 2002, as well as popular fiction like the movie Contagion, graphically depict the health threat of a global pandemic and the key role of epidemic intelligence (EI). While EI relies heavily on established indicator sources, a new class of methods based on event alerting from unstructured digital Internet media is rapidly becoming acknowledged within the public health community. At the heart of automated information gathering systems is a technology called text mining. My contribution here is to provide an overview of the role that text mining technology plays in detecting epidemics and to synthesise my existing research on the BioCaster project. PMID:22783909
A game-theoretical pricing mechanism for multiuser rate allocation for video over WiMAX
NASA Astrophysics Data System (ADS)
Chen, Chao-An; Lo, Chi-Wen; Lin, Chia-Wen; Chen, Yung-Chang
2010-07-01
In multiuser rate allocation in a wireless network, strategic users can bias the rate allocation by misrepresenting their bandwidth demands to a base station, leading to an unfair allocation. Game-theoretical approaches have been proposed to address the unfair allocation problems caused by the strategic users. However, existing approaches rely on a time-consuming iterative negotiation process. Besides, they cannot completely prevent unfair allocations caused by inconsistent strategic behaviors. To address these problems, we propose a Search Based Pricing Mechanism to reduce the communication time and to capture a user's strategic behavior. Our simulation results show that the proposed method significantly reduces the communication time and converges stably to an optimal allocation.
NASA Astrophysics Data System (ADS)
Feehan, Paul M. N.
2017-09-01
We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate boundaries" touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].
NASA Astrophysics Data System (ADS)
Ikegawa, Shinichi; Horinouchi, Takeshi
2016-06-01
Accurate wind observation is key to studying atmospheric dynamics. A new automated cloud tracking method for the dayside of Venus is proposed and evaluated by using the ultraviolet images obtained by the Venus Monitoring Camera onboard the Venus Express orbiter. It uses multiple images obtained successively over a few hours. Cross-correlations are computed from the pair combinations of the images and are superposed to identify cloud advection. It is shown that the superposition improves the accuracy of velocity estimation and significantly reduces false pattern matches that cause large errors. Two methods to evaluate the accuracy of each of the obtained cloud motion vectors are proposed. One relies on the confidence bounds of cross-correlation with consideration of anisotropic cloud morphology. The other relies on the comparison of two independent estimations obtained by separating the successive images into two groups. The two evaluations can be combined to screen the results. It is shown that the accuracy of the screened vectors is very high equatorward of 30 degrees, while it is relatively low at higher latitudes. Analysis of these vectors supports the previously reported existence of day-to-day large-scale variability at the cloud deck of Venus, and further suggests smaller-scale features. The product of this study is expected to advance the study of the dynamics of the venusian atmosphere.
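The superposition of cross-correlations described above can be illustrated with a small numerical sketch. The code below is a generic demonstration on synthetic images, not the Venus Monitoring Camera processing chain: correlation surfaces from several image pairs that share a common displacement are summed, and the displacement is read off the peak of the summed surface.

```python
# Illustrative sketch: superpose 2-D cross-correlations from several image
# pairs and recover the common displacement from the peak. Synthetic data.
import numpy as np
from scipy.signal import fftconvolve

def cross_correlation(a, b):
    """Full 2-D cross-correlation of two equally sized, mean-removed patches."""
    a = a - a.mean()
    b = b - b.mean()
    return fftconvolve(a, b[::-1, ::-1], mode="full")

def displacement_from_pairs(pairs):
    """Sum the correlation surfaces of all (earlier, later) pairs and return
    the displacement (dy, dx) at the common peak."""
    acc = sum(cross_correlation(later, earlier) for earlier, later in pairs)
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    centre = (np.array(acc.shape) - 1) // 2
    return np.array(peak) - centre

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((64, 64))
    true_shift = (3, -2)  # displacement in pixels between each pair
    pairs = [(base + 0.2 * rng.random(base.shape),
              np.roll(base, true_shift, axis=(0, 1)) + 0.2 * rng.random(base.shape))
             for _ in range(4)]
    print("recovered displacement (dy, dx):", displacement_from_pairs(pairs))
```

Summing the surfaces before locating the peak is what suppresses the spurious maxima that any single pair can produce.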
Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing.
Thyreau, Benjamin; Sato, Kazunori; Fukuda, Hiroshi; Taki, Yasuyuki
2018-01-01
The hippocampus is a particularly interesting target for neuroscience research studies due to its essential role within the human brain. In large human cohort studies, bilateral hippocampal structures are frequently identified and measured to gain insight into human behaviour or genomic variability in neuropsychiatric disorders of interest. Automatic segmentation is performed using various algorithms, with FreeSurfer being a popular option. In this manuscript, we present a method to segment the bilateral hippocampus using a deep-learned appearance model. Deep convolutional neural networks (ConvNets) have shown great success in recent years, due to their ability to learn meaningful features from a mass of training data. Our method relies on the following key novelties: (i) we use a wide and variable training set coming from multiple cohorts, (ii) our training labels come in part from the output of the FreeSurfer algorithm, and (iii) we include synthetic data and use a powerful data augmentation scheme. Our method proves to be robust, and it has fast inference (<30s total per subject), with the trained model available online (https://github.com/bthyreau/hippodeep). We depict illustrative results and show extensive qualitative and quantitative cohort-wide comparisons with FreeSurfer. Our work demonstrates that deep neural-network methods can easily encode, and even improve, existing anatomical knowledge, even when this knowledge exists in algorithmic form. Copyright © 2017 Elsevier B.V. All rights reserved.
Fedina, Lisa
2015-04-01
Recent articles have raised important questions about the validity of prevalence data on human trafficking, exposing flawed methodologies behind frequently cited statistics. While considerable evidence points to the fact that human trafficking does exist in the United States and abroad, many sources of literature continue to cite flawed data and some misuse research in ways that seemingly inflate the problem, which can have serious implications for anti-trafficking efforts, including victim services and anti-trafficking legislation and policy. This systematic review reports on the prevalence data used in 42 recently published books on sex trafficking to determine the extent to which published books rely on data estimates and just how they use or misuse existing data. The findings from this review reveal that the vast majority of published books do rely on existing data that were not rigorously produced and therefore may be misleading or at minimum, inaccurate. Implications for practice, research, and policy are discussed, as well as recommendations for future prevalence studies on human trafficking. © The Author(s) 2014.
Unraveling navigational strategies in migratory insects.
Merlin, Christine; Heinze, Stanley; Reppert, Steven M
2012-04-01
Long-distance migration is a strategy some animals use to survive a seasonally changing environment. To reach favorable grounds, migratory animals have evolved sophisticated navigational mechanisms that rely on a map and compasses. In migratory insects, the existence of a map sense (sense of position) remains poorly understood, but recent work has provided new insights into the mechanisms some compasses use for maintaining a constant bearing during long-distance navigation. The best-studied directional strategy relies on a time-compensated sun compass, used by diurnal insects, for which neural circuits have begun to be delineated. Yet, a growing body of evidence suggests that migratory insects may also rely on other compasses that use night sky cues or the Earth's magnetic field. Those mechanisms are ripe for exploration. Copyright © 2011 Elsevier Ltd. All rights reserved.
Staley, Dennis M.; Negri, Jacquelyn; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2017-01-01
Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity–duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity–duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity–duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity–duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity–duration thresholds do not currently exist.
NASA Astrophysics Data System (ADS)
Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2017-02-01
Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity-duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity-duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity-duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity-duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity-duration thresholds do not currently exist.
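The two records above describe replacing purely empirical thresholds with thresholds derived from a likelihood model. The sketch below is a deliberately simplified, hedged illustration using synthetic data and a single predictor (peak 15-minute rainfall intensity); the published approach also incorporates terrain, burn-severity and soil properties. It fits a logistic regression and solves for the intensity at which the modeled likelihood of debris-flow occurrence reaches 0.5.

```python
# Illustrative sketch: back out a rainfall-intensity threshold from a fitted
# logistic likelihood model. Data are synthetic; the operational model uses
# additional terrain, burn-severity and soil predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic storm records: peak 15-min intensity (mm/h) and observed response.
intensity = rng.uniform(2.0, 60.0, size=300)
prob_true = 1.0 / (1.0 + np.exp(-0.25 * (intensity - 24.0)))
occurred = (rng.random(300) < prob_true).astype(int)

model = LogisticRegression().fit(intensity.reshape(-1, 1), occurred)

# Likelihood 0.5 corresponds to a zero logit: beta0 + beta1 * I = 0.
beta0, beta1 = model.intercept_[0], model.coef_[0, 0]
print(f"estimated 15-min intensity threshold: {-beta0 / beta1:.1f} mm/h")
```

The same algebra applied at several storm durations yields an intensity-duration threshold curve for a given burned area.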
Minimizing species extinctions through strategic planning for conservation fencing.
Ringma, Jeremy L; Wintle, Brendan; Fuller, Richard A; Fisher, Diana; Bode, Michael
2017-10-01
Conservation fences are an increasingly common management action, particularly for species threatened by invasive predators. However, unlike many conservation actions, fence networks are expanding in an unsystematic manner, generally as a reaction to local funding opportunities or threats. We conducted a gap analysis of Australia's large predator-exclusion fence network by examining translocation of Australian mammals relative to their extinction risk. To address gaps identified in species representation, we devised a systematic prioritization method for expanding the conservation fence network that explicitly incorporated population viability analysis and minimized expected species' extinctions. The approach was applied to New South Wales, Australia, where the state government intends to expand the existing conservation fence network. Existing protection of species in fenced areas was highly uneven; 67% of predator-sensitive species were unrepresented in the fence network. Our systematic prioritization yielded substantial efficiencies in that it reduced expected number of species extinctions up to 17 times more effectively than ad hoc approaches. The outcome illustrates the importance of governance in coordinating management action when multiple projects have similar objectives and rely on systematic methods rather than expanding networks opportunistically. © 2017 Society for Conservation Biology.
Precision enhancement of pavement roughness localization with connected vehicles
NASA Astrophysics Data System (ADS)
Bridgelall, R.; Huang, Y.; Zhang, Z.; Deng, F.
2016-02-01
Transportation agencies rely on the accurate localization and reporting of roadway anomalies that could pose serious hazards to the traveling public. However, the cost and technical limitations of present methods prevent their scaling to all roadways. Connected vehicles with on-board accelerometers and conventional geospatial position receivers offer an attractive alternative because of their potential to monitor all roadways in real-time. The conventional global positioning system is ubiquitous and essentially free to use but it produces impractically large position errors. This study evaluated the improvement in precision achievable by augmenting the conventional geo-fence system with a standard speed bump or an existing anomaly at a pre-determined position to establish a reference inertial marker. The speed sensor subsequently generates position tags for the remaining inertial samples by computing their path distances relative to the reference position. The error model and a case study using smartphones to emulate connected vehicles revealed that the precision in localization improves from tens of metres to sub-centimetre levels, and the accuracy of measuring localized roughness more than doubles. The research results demonstrate that transportation agencies will benefit from using the connected vehicle method to achieve precision and accuracy levels that are comparable to existing laser-based inertial profilers.
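The reference-marker bookkeeping described above is straightforward to sketch. The example below uses hypothetical sample data: a vertical-acceleration spike marks the known speed bump, and every inertial sample is then tagged with its path distance from that reference by integrating the speed signal.

```python
# Illustrative sketch: tag inertial samples with path distance from a known
# reference marker (a speed bump) by integrating vehicle speed. Hypothetical data.
import numpy as np

def detect_reference_index(accel_z, threshold=3.0):
    """Index of the first vertical-acceleration spike above `threshold` (m/s^2),
    taken to be the reference speed bump."""
    spikes = np.flatnonzero(np.abs(accel_z) > threshold)
    if spikes.size == 0:
        raise ValueError("reference marker not found in the trace")
    return spikes[0]

def path_distances(time_s, speed_mps, ref_index):
    """Cumulative travelled distance of each sample relative to the reference,
    from trapezoidal integration of speed over time."""
    steps = 0.5 * (speed_mps[1:] + speed_mps[:-1]) * np.diff(time_s)
    cumdist = np.concatenate(([0.0], np.cumsum(steps)))
    return cumdist - cumdist[ref_index]

if __name__ == "__main__":
    t = np.arange(0.0, 10.0, 0.1)         # 10 s of samples at 10 Hz
    speed = np.full_like(t, 15.0)         # steady 15 m/s (about 54 km/h)
    accel_z = np.zeros_like(t)
    accel_z[30] = 6.0                     # bump signature at t = 3 s
    d = path_distances(t, speed, detect_reference_index(accel_z))
    print("distance of last sample from the reference: %.1f m" % d[-1])
```

Because the positions are measured along the path from a surveyed reference rather than taken from the satellite fix, the residual error is governed by the speed sensor, which is the basis of the precision improvement reported above.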
Worker experiences of accessibility in post-Katrina New Orleans.
DOT National Transportation Integrated Search
2013-06-01
Existing research has identified transportation challenges that low-income workers face, including a : spatial mismatch between suburban entry level-jobs and urban low-income workers. These studies rely on travel : models and secondary data and thus ...
Video prompting versus other instruction strategies for persons with Alzheimer's disease.
Perilli, Viviana; Lancioni, Giulio E; Hoogeveen, Frans; Caffó, Alessandro; Singh, Nirbhay; O'Reilly, Mark; Sigafoos, Jeff; Cassano, Germana; Oliva, Doretta
2013-06-01
Two studies assessed the effectiveness of video prompting as a strategy to support persons with mild and moderate Alzheimer's disease in performing daily activities. In study I, video prompting was compared to an existing strategy relying on verbal instructions. In study II, video prompting was compared to another existing strategy relying on static pictorial cues. Video prompting and the other strategies were counterbalanced across tasks and participants and compared within alternating treatments designs. Video prompting was effective in all participants. Similarly effective were the other 2 strategies, and only occasional differences between the strategies were reported. Two social validation assessments showed that university psychology students and graduates rated the patients' performance with video prompting more favorably than their performance with the other strategies. Video prompting may be considered a valuable alternative to the other strategies to support daily activities in persons with Alzheimer's disease.
Nonexistence of extremal de Sitter black rings
NASA Astrophysics Data System (ADS)
Khuri, Marcus; Woolgar, Eric
2017-11-01
We show that near-horizon geometries in the presence of a positive cosmological constant cannot exist with ring topology. In particular, de Sitter black rings with vanishing surface gravity do not exist. Our result relies on a known mathematical theorem which is a straightforward consequence of a type of energy condition for a modified Ricci tensor, similar to the curvature-dimension conditions for the m-Bakry-Émery-Ricci tensor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Brien, C. J.; Barr, C. M.; Price, P. M.
There has recently been a great deal of interest in employing immiscible solutes to stabilize nanocrystalline microstructures. Existing modeling efforts largely rely on mesoscale Monte Carlo approaches that employ a simplified model of the microstructure and result in highly homogeneous segregation to grain boundaries. However, there is ample evidence from experimental and modeling studies that demonstrates segregation to grain boundaries is highly non-uniform and sensitive to boundary character. This work employs a realistic nanocrystalline microstructure with experimentally relevant global solute concentrations to illustrate inhomogeneous boundary segregation. Furthermore, experiments quantifying segregation in thin films are reported that corroborate the prediction that grain boundary segregation is highly inhomogeneous. In addition to grain boundary structure modifying the degree of segregation, the existence of a phase transformation between low and high solute content grain boundaries is predicted. In order to conduct this study, new embedded atom method interatomic potentials are developed for Pt, Au, and the PtAu binary alloy.
O’Brien, C. J.; Barr, C. M.; Price, P. M.; ...
2017-10-31
There has recently been a great deal of interest in employing immiscible solutes to stabilize nanocrystalline microstructures. Existing modeling efforts largely rely on mesoscale Monte Carlo approaches that employ a simplified model of the microstructure and result in highly homogeneous segregation to grain boundaries. However, there is ample evidence from experimental and modeling studies that demonstrates segregation to grain boundaries is highly non-uniform and sensitive to boundary character. This work employs a realistic nanocrystalline microstructure with experimentally relevant global solute concentrations to illustrate inhomogeneous boundary segregation. Furthermore, experiments quantifying segregation in thin films are reported that corroborate the prediction that grain boundary segregation is highly inhomogeneous. In addition to grain boundary structure modifying the degree of segregation, the existence of a phase transformation between low and high solute content grain boundaries is predicted. In order to conduct this study, new embedded atom method interatomic potentials are developed for Pt, Au, and the PtAu binary alloy.
Semantic distance-based creation of clusters of pharmacovigilance terms and their evaluation.
Dupuch, Marie; Grabar, Natalia
2015-04-01
Pharmacovigilance is the activity related to the collection, analysis and prevention of adverse drug reactions (ADRs) induced by drugs or biologics. The detection of adverse drug reactions is performed using statistical algorithms and groupings of ADR terms from the MedDRA (Medical Dictionary for Drug Regulatory Activities) terminology. Standardized MedDRA Queries (SMQs) are the groupings which have become a standard for assisting the retrieval and evaluation of MedDRA-coded ADR reports worldwide. Currently 84 SMQs have been created, while several important safety topics are not yet covered. Creation of SMQs is a long and tedious process performed by experts. It relies on manual analysis of MedDRA in order to find all the relevant terms to be included in an SMQ. Our objective is to propose an automatic method for assisting the creation of SMQs using the clustering of terms which are semantically similar. The experimental method relies on a specific semantic resource, and also on semantic distance algorithms and clustering approaches. We perform several experiments in order to define the optimal parameters. Our results show that the proposed method can assist the creation of SMQs and make this process faster and more systematic. The average performance of the method is precision 59% and recall 26%. The correlation of the results obtained is 0.72 against the medical doctors' judgments and 0.78 against the medical coders' judgments. These results and additional evaluation indicate that the generated clusters can be efficiently used for the detection of pharmacovigilance signals, as they provide better signal detection than the existing SMQs. Copyright © 2014. Published by Elsevier Inc.
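The clustering step can be sketched generically. The example below stands in for the pipeline only: the terms and the pairwise semantic distances are toy placeholders (the actual method derives distances from a specific semantic resource), and average-linkage hierarchical clustering with a distance cut produces candidate groupings of the kind that could seed an SMQ.

```python
# Illustrative sketch: cluster ADR terms from a pairwise semantic-distance
# matrix with average-linkage hierarchical clustering. Terms and distances
# are toy placeholders, not MedDRA content or the paper's distance algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

terms = ["hepatic failure", "hepatitis", "liver injury",
         "rash", "urticaria", "pruritus"]

# Symmetric pairwise semantic distances in [0, 1]; smaller means more similar.
D = np.array([
    [0.0, 0.2, 0.1, 0.9, 0.9, 0.8],
    [0.2, 0.0, 0.2, 0.9, 0.8, 0.9],
    [0.1, 0.2, 0.0, 0.8, 0.9, 0.9],
    [0.9, 0.9, 0.8, 0.0, 0.2, 0.3],
    [0.9, 0.8, 0.9, 0.2, 0.0, 0.2],
    [0.8, 0.9, 0.9, 0.3, 0.2, 0.0],
])

tree = linkage(squareform(D), method="average")       # condensed distances in
labels = fcluster(tree, t=0.5, criterion="distance")  # cut the tree at 0.5
for term, label in sorted(zip(terms, labels), key=lambda x: x[1]):
    print(label, term)
```

Each resulting cluster is then a candidate grouping to be reviewed by experts rather than a finished SMQ.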
Gehrmann, Sebastian; Dernoncourt, Franck; Li, Yeran; Carlson, Eric T; Wu, Joy T; Welt, Jonathan; Foote, John; Moseley, Edward T; Grant, David W; Tyler, Patrick D; Celi, Leo A
2018-01-01
In secondary analysis of electronic health records, a crucial task consists in correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exists only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. Convolutional neural networks (CNN) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept extraction based methods with CNNs and other commonly used models in NLP in ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept extraction based methods in almost all of the tasks, with an improvement of up to 26 percentage points in F1-score and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions.
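A minimal version of the convolutional text classifier compared in this record can be sketched as follows. The vocabulary size, sequence length, filter settings and tokenization are placeholder assumptions, and the MIMIC-III data are not included; the sketch only shows the embedding / 1-D convolution / global max-pooling / sigmoid structure commonly used for per-condition phenotyping.

```python
# Illustrative sketch: a small 1-D convolutional text classifier for a single
# phenotyping label. Vocabulary size, sequence length and layer sizes are
# placeholder assumptions, not the paper's configuration.
import tensorflow as tf

VOCAB_SIZE = 20000   # assumed vocabulary after tokenization
MAX_LEN = 2000       # assumed maximum tokens per discharge summary

def build_phenotype_cnn():
    inputs = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
    x = tf.keras.layers.Embedding(VOCAB_SIZE, 100)(inputs)
    x = tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu")(x)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)   # strongest phrase response
    x = tf.keras.layers.Dropout(0.5)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # has condition?
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

if __name__ == "__main__":
    build_phenotype_cnn().summary()
```

The global max-pooling step is also what makes salient-phrase extraction of the kind assessed above possible: the filters that fire at the pooled maximum can be traced back to the n-grams that triggered them.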
Application of Persistent Scatterer Radar Interferometry to the New Orleans delta region
NASA Astrophysics Data System (ADS)
Lohman, R.; Fielding, E.; Blom, R.
2007-12-01
Subsidence in New Orleans and along the Gulf Coast is currently monitored using a variety of ground- and satellite-based methods, and extensive geophysical modeling of the area seeks to understand the inputs to subsidence rates from sediment compaction, salt evacuation, oxidation and anthropogenic forcings such as the withdrawal or injection of subsurface fluids. Better understanding of the temporal and spatial variability of these subsidence rates can help us improve civic planning and disaster mitigation efforts with the goal of protecting lives and property over the long term. Existing ground-based surveys indicate that subsidence gradients of up to 1 cm/yr or more over length scales of several 10's of km exist in the region, especially in the vicinity of the city of New Orleans. Modeling results based on sediment inputs and post-glacial sea level change tend to predict lower gradients, presumably because there is a large input from unmodeled crustal faults and anthropogenic activity. The broad spatial coverage of InSAR can both add to the existing network of ground-based geodetic surveys, and can help to identify areas that are deforming anomalously with respect to surrounding areas. Here we present the use of a modified point scatterer method applied to radar data from the Radarsat satellite for New Orleans and the Gulf Coast. Point target analysis of InSAR data has already been successfully applied to the New Orleans area by Dixon et al (2006). Our method is similar to the Stanford Method for PS (StaMPS) developed by Andy Hooper, adapted to rely on combinations of small orbital baselines and the inclusion of coherent regions from the time span of each interferogram during phase unwrapping rather than only using points that are stable within all interferograms.
Compressible cavitation with stochastic field method
NASA Astrophysics Data System (ADS)
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method, solving pdf transport based on Euler fields, has been proposed which eliminates the necessity to mix Euler and Lagrange techniques or prescribed pdf assumptions. In the present work, part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation", a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high velocity flow so that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can be easily extended to the stochastic field formulation.
Inverse scattering transform analysis of rogue waves using local periodization procedure
NASA Astrophysics Data System (ADS)
Randoux, Stéphane; Suret, Pierre; El, Gennady
2016-07-01
The nonlinear Schrödinger equation (NLSE) stands out as the dispersive nonlinear partial differential equation that plays a prominent role in the modeling and understanding of the wave phenomena relevant to many fields of nonlinear physics. The question of random input problems in the one-dimensional and integrable NLSE enters within the framework of integrable turbulence, and the specific question of the formation of rogue waves (RWs) has been recently extensively studied in this context. The determination of exact analytic solutions of the focusing 1D-NLSE prototyping RW events of statistical relevance is now considered as the problem of central importance. Here we address this question from the perspective of the inverse scattering transform (IST) method that relies on the integrable nature of the wave equation. We develop a conceptually new approach to the RW classification in which appropriate, locally coherent structures are specifically isolated from a globally incoherent wave train to be subsequently analyzed by implementing a numerical IST procedure relying on a spatial periodization of the object under consideration. Using this approach we extend the existing classifications of the prototypes of RWs from standard breathers and their collisions to more general nonlinear modes characterized by their nonlinear spectra.
Dafforn, Timothy R; Rajendra, Jacindra; Halsall, David J; Serpell, Louise C; Rodger, Alison
2004-01-01
High-resolution structure determination of soluble globular proteins relies heavily on x-ray crystallography techniques. Such an approach is often ineffective for investigations into the structure of fibrous proteins as these proteins generally do not crystallize. Thus investigations into fibrous protein structure have relied on less direct methods such as x-ray fiber diffraction and circular dichroism. Ultraviolet linear dichroism has the potential to provide additional information on the structure of such biomolecular systems. However, existing systems are not optimized for the requirements of fibrous proteins. We have designed and built a low-volume (200 μL), low-wavelength (down to 180 nm), low-pathlength (100 μm), high-alignment flow-alignment system (couette) to perform ultraviolet linear dichroism studies on the fibers formed by a range of biomolecules. The apparatus has been tested using a number of proteins for which longer wavelength linear dichroism spectra had already been measured. The new couette cell has also been used to obtain data on two medically important protein fibers, the all-β-sheet amyloid fibers of the Alzheimer's derived protein Aβ and the long-chain assemblies of α1-antitrypsin polymers.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
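As a rough illustration of the general idea described above (received signal strength samples at known positions converted into distances and then into a position estimate), the following is a minimal sketch, not the authors' collaborative swarm algorithm. The path-loss exponent, reference power and noise level are assumptions chosen for the example.

import numpy as np
from scipy.optimize import least_squares

P0 = -40.0   # assumed RSS (dBm) at a 1 m reference distance
N_EXP = 2.5  # assumed path-loss exponent for the indoor environment

def rss_to_distance(rss_dbm):
    """Invert the log-distance model: RSS = P0 - 10*n*log10(d)."""
    return 10.0 ** ((P0 - rss_dbm) / (10.0 * N_EXP))

def locate_ap(sample_positions, sample_rss):
    """Least-squares fit of the access point position to the implied distances."""
    d = rss_to_distance(np.asarray(sample_rss))
    pos = np.asarray(sample_positions, dtype=float)

    def residuals(ap_xy):
        return np.linalg.norm(pos - ap_xy, axis=1) - d

    x0 = pos.mean(axis=0)                      # start at the sample centroid
    return least_squares(residuals, x0).x

# Example with synthetic samples around a hypothetical AP at (5, 3)
true_ap = np.array([5.0, 3.0])
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(15, 2))
rss = (P0 - 10 * N_EXP * np.log10(np.linalg.norm(pts - true_ap, axis=1))
       + rng.normal(0, 1.0, 15))              # 1 dB measurement noise
print(locate_ap(pts, rss))                    # close to (5, 3)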
Cristea, Ioana Alina; Ioannidis, John P A
2018-01-01
P values represent a widely used, but pervasively misunderstood and fiercely contested method of scientific inference. Display items, such as figures and tables, often containing the main results, are an important source of P values. We conducted a survey comparing the overall use of P values and the occurrence of significant P values in display items of a sample of articles in the three top multidisciplinary journals (Nature, Science, PNAS) in 2017 and, respectively, in 1997. We also examined the reporting of multiplicity corrections and its potential influence on the proportion of statistically significant P values. Our findings demonstrated substantial and growing reliance on P values in display items, with increases of 2.5 to 14.5 times in 2017 compared to 1997. The overwhelming majority of P values (94%, 95% confidence interval [CI] 92% to 96%) were statistically significant. Methods to adjust for multiplicity were almost non-existent in 1997, but reported in many articles relying on P values in 2017 (Nature 68%, Science 48%, PNAS 38%). In their absence, almost all reported P values were statistically significant (98%, 95% CI 96% to 99%). Conversely, when any multiplicity corrections were described, 88% (95% CI 82% to 93%) of reported P values were statistically significant. Use of Bayesian methods was scant (2.5%) and rarely (0.7%) articles relied exclusively on Bayesian statistics. Overall, wider appreciation of the need for multiplicity corrections is a welcome evolution, but the rapid growth of reliance on P values and implausibly high rates of reported statistical significance are worrisome.
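To make the role of multiplicity corrections concrete, here is a minimal sketch with synthetic P values showing how a Bonferroni family-wise correction changes the fraction declared significant; the data, the 0.05 threshold and the mix of "true" and null tests are purely illustrative.

import numpy as np

rng = np.random.default_rng(1)
# 200 tests: 50 true effects (very small P values) mixed with 150 nulls (uniform)
p_values = np.concatenate([rng.uniform(0, 0.001, 50), rng.uniform(0, 1, 150)])

alpha = 0.05
m = len(p_values)

naive_significant = p_values < alpha               # no correction
bonferroni_significant = p_values < alpha / m      # family-wise correction

print(f"uncorrected: {naive_significant.mean():.1%} significant")
print(f"Bonferroni : {bonferroni_significant.mean():.1%} significant")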
Investigation of rainfall and regional factors for maintenance cost allocation.
DOT National Transportation Integrated Search
2010-08-01
The existing formulas used by the Texas Department of Transportation (TxDOT) to allocate the statewide : maintenance budget rely heavily on inventory and pavement evaluation data. These formulas include : regional factors and rainfall indices that va...
New Tools for Investigating Chemical and Product Use
- The timely characterization of the human and ecological risk posed by thousands of existing and emerging commercial chemicals is a critical challenge - High throughput (HT) risk prioritization relies on hazard and exposure characterization - While advances have been made ...
Prediction technologies for assessment of climate change impacts
USDA-ARS?s Scientific Manuscript database
Temperatures, precipitation, and weather patterns are changing, in response to increasing carbon dioxide in the atmosphere. With these relatively rapid changes, existing soil erosion prediction technologies that rely upon climate stationarity are potentially becoming less reliable. This is especiall...
Consensus Prediction of Charged Single Alpha-Helices with CSAHserver.
Dudola, Dániel; Tóth, Gábor; Nyitray, László; Gáspári, Zoltán
2017-01-01
Charged single alpha-helices (CSAHs) constitute a rare structural motif. CSAH is characterized by a high density of regularly alternating residues with positively and negatively charged side chains. Such segments exhibit unique structural properties; however, there are only a handful of proteins where its existence is experimentally verified. Therefore, establishing a pipeline that is capable of predicting the presence of CSAH segments with a low false positive rate is of considerable importance. Here we describe a consensus-based approach that relies on two conceptually different CSAH detection methods and a final filter based on the estimated helix-forming capabilities of the segments. This pipeline was shown to be capable of identifying previously uncharacterized CSAH segments that could be verified experimentally. The method is available as a web server at http://csahserver.itk.ppke.hu and also a downloadable standalone program suitable to scan larger sequence collections.
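The pipeline above combines two dedicated detection methods plus a helicity filter; as a much simpler, hedged illustration of what a CSAH-like screen looks for, the sketch below flags sliding windows rich in charged residues with frequent sign alternation. The window length and thresholds are arbitrary and are not those used by CSAHserver.

# Simplified sketch: flag sequence windows with a high density of charged
# residues and frequent charge-sign alternation, as a rough CSAH-like screen.
CHARGE = {"K": +1, "R": +1, "E": -1, "D": -1}

def charged_windows(seq, window=30, min_charged=0.6, min_alternations=10):
    hits = []
    for i in range(len(seq) - window + 1):
        charges = [CHARGE.get(aa, 0) for aa in seq[i:i + window]]
        signed = [c for c in charges if c != 0]
        density = len(signed) / window
        alternations = sum(1 for a, b in zip(signed, signed[1:]) if a != b)
        if density >= min_charged and alternations >= min_alternations:
            hits.append((i, i + window))
    return hits

demo = "MGS" + "EKEKREKREEKRKEKE" * 3 + "GASTPLLQ"   # hypothetical sequence
print(charged_windows(demo))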
Image smoothing and enhancement via min/max curvature flow
NASA Astrophysics Data System (ADS)
Malladi, Ravikanth; Sethian, James A.
1996-03-01
We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full- image continuous noise present in black and white images, gray-scale images, texture images and color images. At the core, the techniques rely on a level set formulation of evolving curves and surfaces and the viscosity in profile evolution. Essentially, the method consists of moving the isointensity contours in an image under curvature dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.
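As a hedged sketch of the curvature-controlled idea only (it omits the min/max switching and the level-set machinery of the full scheme), the code below iterates an explicit curvature-motion step I_t = κ|∇I| on a grayscale array; step size and iteration count are illustrative.

import numpy as np

def curvature_step(img, dt=0.1, eps=1e-8):
    Ix = (np.roll(img, -1, 1) - np.roll(img, 1, 1)) / 2.0
    Iy = (np.roll(img, -1, 0) - np.roll(img, 1, 0)) / 2.0
    Ixx = np.roll(img, -1, 1) - 2 * img + np.roll(img, 1, 1)
    Iyy = np.roll(img, -1, 0) - 2 * img + np.roll(img, 1, 0)
    Ixy = (np.roll(np.roll(img, -1, 1), -1, 0) - np.roll(np.roll(img, -1, 1), 1, 0)
           - np.roll(np.roll(img, 1, 1), -1, 0) + np.roll(np.roll(img, 1, 1), 1, 0)) / 4.0
    # kappa * |grad I| expressed with image derivatives
    num = Ixx * Iy**2 - 2 * Ix * Iy * Ixy + Iyy * Ix**2
    den = Ix**2 + Iy**2 + eps
    return img + dt * num / den

rng = np.random.default_rng(0)
noisy = np.clip(0.5 + rng.normal(0, 0.2, (64, 64)), 0, 1)
smoothed = noisy
for _ in range(20):
    smoothed = curvature_step(smoothed)
print(noisy.std(), smoothed.std())   # the standard deviation drops as the flow smooths noise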
Creating nanoscale emulsions using condensation.
Guha, Ingrid F; Anand, Sushant; Varanasi, Kripa K
2017-11-08
Nanoscale emulsions are essential components in numerous products, ranging from processed foods to novel drug delivery systems. Existing emulsification methods rely either on the breakup of larger droplets or solvent exchange/inversion. Here we report a simple, scalable method of creating nanoscale water-in-oil emulsions by condensing water vapor onto a subcooled oil-surfactant solution. Our technique enables a bottom-up approach to forming small-scale emulsions. Nanoscale water droplets nucleate at the oil/air interface and spontaneously disperse within the oil, due to the spreading dynamics of oil on water. Oil-soluble surfactants stabilize the resulting emulsions. We find that the oil-surfactant concentration controls the spreading behavior of oil on water, as well as the peak size, polydispersity, and stability of the resulting emulsions. Using condensation, we form emulsions with peak radii around 100 nm and polydispersities around 10%. This emulsion formation technique may open different routes to creating emulsions, colloidal systems, and emulsion-based materials.
Demand for male contraception.
Dorman, Emily; Bishai, David
2012-10-01
The biological basis for male contraception was established decades ago, but despite promising breakthroughs and the financial burden men increasingly bear due to better enforcement of child support policies, no viable alternative to the condom has been brought to market. Men who wish to control their fertility must rely on female compliance with contraceptives, barrier methods, vasectomy or abstinence. Over the last 10 years, the pharmaceutical industry has abandoned most of its investment in the field, leaving only nonprofit organisations and public entities pursuing male contraception. Leading explanations are uncertain forecasts of market demand pitted against the need for critical investments to demonstrate the safety of existing candidate products. This paper explores the developments and challenges in male contraception research. We produce preliminary estimates of potential market size for a safe and effective male contraceptive based on available data to estimate the potential market for a novel male method.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
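The core mechanism, recognizing "value +/- tolerance" tokens inside an otherwise untouched input file and drawing a Monte Carlo sample for each, can be sketched as follows. The regex and the example input are illustrative; they are not the LAURA, HARA or FIAT input formats, and uniform sampling is an assumption.

import random
import re

TOL = re.compile(r"(-?\d+\.?\d*(?:[eE][-+]?\d+)?)\s*\+/-\s*(\d+\.?\d*(?:[eE][-+]?\d+)?)")

def sample_line(line, rng):
    def draw(match):
        nominal, tol = float(match.group(1)), float(match.group(2))
        return f"{rng.uniform(nominal - tol, nominal + tol):.6g}"
    return TOL.sub(draw, line)

example_input = """\
wall_temperature = 5.25 +/- 0.01
emissivity       = 0.89 +/- 0.05
n_iterations     = 2000
"""

rng = random.Random(42)
for realization in range(3):                      # three Monte Carlo realizations
    perturbed = "".join(sample_line(l, rng) for l in example_input.splitlines(True))
    print(f"--- realization {realization} ---\n{perturbed}")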
Rural sewage treatment processing in Yongjia County, Zhejiang Province
NASA Astrophysics Data System (ADS)
Wang, W. H.; Kuan, T. H.
2016-08-01
Issues regarding water pollution in rural areas of China have garnered increased attention over the years. Further discussion on the circumstances and results of existing domestic sewage treatment methods may serve as an appropriate reference in solving these important issues. This article explored the current conditions of water contamination in rural areas of China, introduced the characteristics and effects of applicable sewage treatment technology, and summarized the results of the planning, installation, and operation of rural sewage treatment facilities in Yongjia County in Zhejiang Province. However, relying on a single technical design rule is not adequate for solving the practical problems that these villages face. Instead, methods of planning rural sewage treatment should be adapted to better suit local conditions and different residential forms. It is crucial, ultimately, for any domestic sewage treatment system in a rural area to be commissioned, engineered, and maintained by a market-oriented professional company.
Enhancement method for rendered images of home decoration based on SLIC superpixels
NASA Astrophysics Data System (ADS)
Dai, Yutong; Jiang, Xiaotong
2018-04-01
Rendering technology has been widely used in the home decoration industry in recent years for images of home decoration design. However, due to the fact that rendered images of home decoration design rely heavily on the parameters of renderer and the lights of scenes, most rendered images in this industry require further optimization afterwards. To reduce workload and enhance rendered images automatically, an algorithm utilizing neural networks is proposed in this manuscript. In addition, considering few extreme conditions such as strong sunlight and lights, SLIC superpixels based segmentation is used to choose out these bright areas of an image and enhance them independently. Finally, these chosen areas are merged with the entire image. Experimental results show that the proposed method effectively enhances the rendered images when compared with some existing algorithms. Besides, the proposed strategy is proven to be adaptable especially to those images with obvious bright parts.
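A hedged sketch of the segmentation-then-local-enhancement step is given below, using scikit-image's SLIC implementation to pick unusually bright superpixels of an RGB rendering and brighten only those regions before merging back. The thresholds, gain and synthetic image are illustrative, and the neural-network enhancement stage described above is not reproduced.

import numpy as np
from skimage.segmentation import slic

def enhance_bright_regions(img, n_segments=200, z_thresh=1.5, gain=0.8):
    """img: float RGB array in [0, 1]."""
    labels = slic(img, n_segments=n_segments, compactness=10)
    luminance = img.mean(axis=2)
    seg_ids = np.unique(labels)
    seg_means = np.array([luminance[labels == s].mean() for s in seg_ids])
    bright = seg_ids[seg_means > seg_means.mean() + z_thresh * seg_means.std()]

    out = img.copy()
    mask = np.isin(labels, bright)
    # simple brightening curve applied to the selected bright superpixels only
    out[mask] = 1.0 - (1.0 - out[mask]) ** (1.0 + gain)
    return out

rng = np.random.default_rng(0)
demo = rng.uniform(0.2, 0.5, size=(120, 160, 3))
demo[30:60, 40:80] = 0.95                  # a hypothetical over-bright window area
print(enhance_bright_regions(demo).max())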
Developing Analogy Cost Estimates for Space Missions
NASA Technical Reports Server (NTRS)
Shishko, Robert
2004-01-01
The analogy approach in cost estimation combines actual cost data from similar existing systems, activities, or items with adjustments for a new project's technical, physical or programmatic differences to derive a cost estimate for the new system. This method is normally used early in a project cycle when there is insufficient design/cost data to use as a basis for (or insufficient time to perform) a detailed engineering cost estimate. The major limitation of this method is that it relies on the judgment and experience of the analyst/estimator. The analyst must ensure that the best analogy or analogies have been selected, and that appropriate adjustments have been made. While analogy costing is common, there is a dearth of advice in the literature on the 'adjustment methodology', especially for hardware projects. This paper discusses some potential approaches that can improve rigor and repeatability in the analogy costing process.
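The arithmetic behind an analogy estimate is simple enough to show directly: start from the analogue's actual cost and apply multiplicative adjustments for the known differences. The analogue cost and the factors below are illustrative only, not an endorsed adjustment methodology.

def analogy_estimate(analogue_cost, adjustments):
    """adjustments: dict of factor name -> multiplier (1.0 means 'no change')."""
    estimate = analogue_cost
    for name, factor in adjustments.items():
        estimate *= factor
        print(f"  {name:<28s} x{factor:4.2f} -> {estimate:8.1f}")
    return estimate

adjustments = {
    "mass growth (+20%)":       1.20,   # physical difference
    "new detector technology":  1.35,   # technical difference
    "shorter schedule":         1.10,   # programmatic difference
    "inflation to estimate FY": 1.08,
}
print("estimate ($M):", round(analogy_estimate(100.0, adjustments), 1))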
One library to make them all: streamlining the creation of yeast libraries via a SWAp-Tag strategy.
Yofe, Ido; Weill, Uri; Meurer, Matthias; Chuartzman, Silvia; Zalckvar, Einat; Goldman, Omer; Ben-Dor, Shifra; Schütze, Conny; Wiedemann, Nils; Knop, Michael; Khmelinskii, Anton; Schuldiner, Maya
2016-04-01
The yeast Saccharomyces cerevisiae is ideal for systematic studies relying on collections of modified strains (libraries). Despite the significance of yeast libraries and the immense variety of available tags and regulatory elements, only a few such libraries exist, as their construction is extremely expensive and laborious. To overcome these limitations, we developed a SWAp-Tag (SWAT) method that enables one parental library to be modified easily and efficiently to give rise to an endless variety of libraries of choice. To showcase the versatility of the SWAT approach, we constructed and investigated a library of ∼1,800 strains carrying SWAT-GFP modules at the amino termini of endomembrane proteins and then used it to create two new libraries (mCherry and seamless GFP). Our work demonstrates how the SWAT method allows fast and effortless creation of yeast libraries, opening the door to new ways of systematically studying cell biology.
Predicting Baseline for Analysis of Electricity Pricing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.; Lee, D.; Choi, J.
2016-05-03
To understand the impact of new pricing structure on residential electricity demands, we need a baseline model that captures every factor other than the new price. The standard baseline is a randomized control group; however, a good control group is hard to design. This motivates us to develop data-driven approaches. We explored many techniques and designed a strategy, named LTAP, that could predict the hourly usage years ahead. The key challenge in this process is that the daily cycle of electricity demand peaks a few hours after the temperature reaches its peak. Existing methods rely on the lagged variables of recent past usages to enforce this daily cycle. These methods have trouble making predictions years ahead. LTAP avoids this trouble by assuming the daily usage profile is determined by temperature and other factors. In a comparison against a well-designed control group, LTAP is found to produce accurate predictions.
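In the spirit of such a lag-free baseline (though not the actual LTAP model), the sketch below predicts hourly usage from hour-of-day and temperature only, with no lagged usage terms, so it can in principle be applied years ahead. The synthetic data and the model form are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_hours = 24 * 365
hours = np.arange(n_hours) % 24
temp = 15 + 10 * np.sin(2 * np.pi * (np.arange(n_hours) - 9) / 24) + rng.normal(0, 2, n_hours)
usage = (1.0 + 0.05 * np.maximum(temp - 20, 0)
         + 0.3 * np.sin(2 * np.pi * (hours - 18) / 24) + rng.normal(0, 0.1, n_hours))

# Design matrix: one dummy per hour of day plus a cooling-degree term (no lagged usage).
X = np.zeros((n_hours, 25))
X[np.arange(n_hours), hours] = 1.0
X[:, 24] = np.maximum(temp - 20, 0)

coef, *_ = np.linalg.lstsq(X, usage, rcond=None)
predicted = X @ coef
print("RMSE:", np.sqrt(np.mean((predicted - usage) ** 2)))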
Verification of low-Mach number combustion codes using the method of manufactured solutions
NASA Astrophysics Data System (ADS)
Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz
2007-11-01
Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
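A compact one-dimensional illustration of MMS (not the CDP or FUEGO setup) is sketched below: a solution is manufactured for steady advection-diffusion, the matching source term is derived symbolically, and the observed order of accuracy of a second-order central-difference solver is checked on two grids. The equation, parameters and manufactured solution are illustrative choices.

import numpy as np
import sympy as sp

x = sp.symbols("x")
u, G = 1.0, 0.1
phi_m = sp.sin(sp.pi * x)                       # manufactured solution
S_expr = u * sp.diff(phi_m, x) - G * sp.diff(phi_m, x, 2)
S = sp.lambdify(x, S_expr, "numpy")
phi_exact = sp.lambdify(x, phi_m, "numpy")

def solve(n):
    h = 1.0 / (n - 1)
    xs = np.linspace(0.0, 1.0, n)
    A = np.zeros((n, n))
    b = S(xs).astype(float)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = phi_exact(0.0), phi_exact(1.0)   # Dirichlet BCs taken from phi_m
    for i in range(1, n - 1):
        A[i, i - 1] = -u / (2 * h) - G / h**2
        A[i, i]     = 2 * G / h**2
        A[i, i + 1] =  u / (2 * h) - G / h**2
    phi = np.linalg.solve(A, b)
    return np.sqrt(np.mean((phi - phi_exact(xs)) ** 2))

e1, e2 = solve(41), solve(81)
print("observed order ~", np.log(e1 / e2) / np.log(2.0))   # expect about 2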
NASA Astrophysics Data System (ADS)
Blake, T.; Egede, U.; Owen, P.; Petridis, K. A.; Pomery, G.
2018-06-01
A method for analysing the hadronic resonance contributions in \overline{B}{}^0 → \overline{K}{}^{*0} μ^+μ^- decays is presented. This method uses an empirical model that relies on measurements of the branching fractions and polarisation amplitudes of final states involving J^{PC}=1^{--} resonances, relative to the short-distance component, across the full dimuon mass spectrum of \overline{B}{}^0 → \overline{K}{}^{*0} μ^+μ^- transitions. The model is in good agreement with existing calculations of hadronic non-local effects. The effect of this contribution on the angular observables is presented and it is demonstrated how the narrow resonances in the q^2 spectrum provide a dramatic enhancement to CP-violating effects in the short-distance amplitude. Finally, a study of the hadronic resonance effects on lepton universality ratios, R_{K^{(*)}}, in the presence of new physics is presented.
Su, Zhangli
2016-01-01
Combinatorial patterns of histone modifications are key indicators of different chromatin states. Most of the current approaches rely on the usage of antibodies to analyze combinatorial histone modifications. Here we detail an antibody-free method named MARCC (Matrix-Assisted Reader Chromatin Capture) to enrich combinatorial histone modifications. The combinatorial patterns are enriched on native nucleosomes extracted from cultured mammalian cells and prepared by micrococcal nuclease digestion. Such enrichment is achieved by recombinant chromatin-interacting protein modules, or so-called reader domains, which can bind in a combinatorial modification-dependent manner. The enriched chromatin can be quantified by western blotting or mass spectrometry for the co-existence of histone modifications, while the associated DNA content can be analyzed by qPCR or next-generation sequencing. Altogether, MARCC provides a reproducible, efficient and customizable solution to enrich and analyze combinatorial histone modifications. PMID:26131849
Vyzantiadis, Timoleon-Achilleas A; Johnson, Elizabeth M; Kibbler, Christopher C
2012-06-01
The identification of fungi relies mainly on morphological criteria. However, there is a need for robust and definitive phenotypic identification procedures in order to evaluate continuously evolving molecular methods. For the future, there is an emerging consensus that a combined (phenotypic and molecular) approach is more powerful for fungal identification, especially for moulds. Most of the procedures used for phenotypic identification are based on experience rather than comparative studies of effectiveness or performance and there is a need for standardisation among mycology laboratories. This review summarises and evaluates the evidence for the major existing phenotypic identification procedures for the predominant causes of opportunistic mould infection. We have concentrated mainly on Aspergillus, Fusarium and mucoraceous mould species, as these are the most important clinically and the ones for which there are the most molecular taxonomic data.
Craig, Peter; Katikireddi, Srinivasa Vittal; Leyland, Alastair; Popham, Frank
2017-03-20
Population health interventions are essential to reduce health inequalities and tackle other public health priorities, but they are not always amenable to experimental manipulation. Natural experiment (NE) approaches are attracting growing interest as a way of providing evidence in such circumstances. One key challenge in evaluating NEs is selective exposure to the intervention. Studies should be based on a clear theoretical understanding of the processes that determine exposure. Even if the observed effects are large and rapidly follow implementation, confidence in attributing these effects to the intervention can be improved by carefully considering alternative explanations. Causal inference can be strengthened by including additional design features alongside the principal method of effect estimation. NE studies often rely on existing (including routinely collected) data. Investment in such data sources and the infrastructure for linking exposure and outcome data is essential if the potential for such studies to inform decision making is to be realized.
Dehghan, Azad; Kovacevic, Aleksandar; Karystianis, George; Keane, John A; Nenadic, Goran
2017-11-01
De-identification of clinical narratives is one of the main obstacles to making healthcare free text available for research. In this paper we describe our experience in expanding and tailoring two existing tools as part of the 2016 CEGS N-GRID Shared Tasks Track 1, which evaluated de-identification methods on a set of psychiatric evaluation notes for up to 25 different types of Protected Health Information (PHI). The methods we used rely on machine learning on either a large or small feature space, with additional strategies, including two-pass tagging and multi-class models, which both proved to be beneficial. The results show that the integration of the proposed methods can identify Health Information Portability and Accountability Act (HIPAA) defined PHIs with overall F 1 -scores of ∼90% and above. Yet, some classes (Profession, Organization) proved again to be challenging given the variability of expressions used to reference given information. Copyright © 2017. Published by Elsevier Inc.
Electric Power Distribution System Model Simplification Using Segment Substitution
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...
2017-09-20
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Inyang, Imo; Benke, Geza; McKenzie, Ray; Abramson, Michael
2008-03-01
The debate on mobile telephone safety continues. Most epidemiological studies investigating health effects of radiofrequency (RF) radiation emitted by mobile phone handsets have been criticised for poor exposure assessment. Most of these studies relied on the historical reconstruction of participants' phone use by questionnaires. Such exposure assessment methods are prone to recall bias resulting in misclassification that may lead to conflicting conclusions. Although there have been some studies using software-modified phones (SMP) for exposure assessment in the literature, until now there is no published work on the use of hardware modified phones (HMPs) or RF dosimeters for studies of mobile phones and health outcomes. We reviewed existing literature on mobile phone epidemiology with particular attention to exposure assessment methods used. Owing to the inherent limitations of these assessment methods, we suggest that the use of HMPs may show promise for more accurate exposure assessment of RF radiation from mobile phones.
Spatial Copula Model for Imputing Traffic Flow Data from Remote Microwave Sensors.
Ma, Xiaolei; Luan, Sen; Du, Bowen; Yu, Bin
2017-09-21
Issues of missing data have become increasingly serious with the rapid increase in usage of traffic sensors. Analyses of the Beijing ring expressway have shown that up to 50% of microwave sensors have missing values. The imputation of missing traffic data is an urgent problem, although a precise solution cannot be easily achieved due to the significant number of missing portions. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods rely only on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies because of their Gaussian assumptions. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential to impute missing data in large-scale transportation networks.
Efficient Strategies for Estimating the Spatial Coherence of Backscatter
Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.
2017-01-01
The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
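A hedged numpy sketch of the estimator comparison on synthetic channel data is given below: the conventional average of per-pair correlation coefficients versus an ensemble estimator that pools numerator and denominator sums before dividing, together with a receive-aperture downsample factor. The data model and parameters are illustrative, not the authors' beamformer code.

import numpy as np

def coherence_curves(rf, max_lag, downsample=1):
    ch = rf[::downsample]                       # downsampled receive aperture
    n = ch.shape[0]
    avg, ens = [], []
    for m in range(1, max_lag + 1):
        num = den1 = den2 = 0.0
        pair_cc = []
        for i in range(n - m):
            a, b = ch[i], ch[i + m]
            pair_cc.append(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
            num += np.sum(a * b)
            den1 += np.sum(a * a)
            den2 += np.sum(b * b)
        avg.append(np.mean(pair_cc))            # average correlation
        ens.append(num / np.sqrt(den1 * den2))  # ensemble correlation
    return np.array(avg), np.array(ens)

rng = np.random.default_rng(0)
common = rng.normal(size=256)                   # coherent (signal) component
rf = 0.8 * common + 0.6 * rng.normal(size=(64, 256))   # 64 channels plus noise
avg, ens = coherence_curves(rf, max_lag=20, downsample=2)
print(np.round(avg[:5], 3), np.round(ens[:5], 3))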
Evaluation of actinide biosorption by microorganisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Happel, A.M.
1996-06-01
Conventional methods for removing metals from aqueous solutions include chemical precipitation, chemical oxidation or reduction, ion exchange, reverse osmosis, electrochemical treatment and evaporation. The removal of radionuclides from aqueous waste streams has largely relied on ion exchange methods which can be prohibitively costly given increasingly stringent regulatory effluent limits. The use of microbial cells as biosorbents for heavy metals offers a potential alternative to existing methods for decontamination or recovery of heavy metals from a variety of industrial waste streams and contaminated ground waters. The toxicity and the extreme and variable conditions present in many radionuclide-containing waste streams may preclude the use of living microorganisms and favor the use of non-living biomass for the removal of actinides from these waste streams. In the work presented here, we have examined the biosorption of uranium by non-living, non-metabolizing microbial biomass, thus avoiding the problems associated with living systems. We are investigating biosorption with the long term goal of developing microbial technologies for the remediation of actinides.
Mergias, I; Moustakas, K; Papadopoulos, A; Loizidou, M
2007-08-25
Each alternative scheme for treating a vehicle at its end of life has its own consequences from a social, environmental, economic and technical point of view. Furthermore, the criteria used to determine these consequences are often contradictory and not equally important. In the presence of multiple conflicting criteria, an optimal alternative scheme never exists. A multiple-criteria decision aid (MCDA) method to aid the Decision Maker (DM) in selecting the best compromise scheme for the management of End-of-Life Vehicles (ELVs) is presented in this paper. The constitution of a set of alternatives schemes, the selection of a list of relevant criteria to evaluate these alternative schemes and the choice of an appropriate management system are also analyzed in this framework. The proposed procedure relies on the PROMETHEE method which belongs to the well-known family of multiple criteria outranking methods. For this purpose, level, linear and Gaussian functions are used as preference functions.
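To make the outranking idea concrete, here is a hedged sketch of PROMETHEE II-style net flows with a linear preference function; the alternative scores, criteria weights and preference thresholds are illustrative, not the data of the ELV study.

import numpy as np

def linear_pref(d, p):
    """Linear preference function: 0 below 0, d/p up to threshold p, then 1."""
    return np.clip(d / p, 0.0, 1.0)

def promethee_net_flows(scores, weights, thresholds):
    """scores[i, c]: performance of alternative i on criterion c (higher is better)."""
    n, _ = scores.shape
    pi = np.zeros((n, n))                       # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            pi[a, b] = np.sum(weights * linear_pref(d, thresholds))
    phi_plus = pi.sum(axis=1) / (n - 1)         # leaving flow
    phi_minus = pi.sum(axis=0) / (n - 1)        # entering flow
    return phi_plus - phi_minus                 # net flow (rank by descending value)

scores = np.array([[0.7, 0.4, 0.9],             # scheme A: environmental, economic, technical
                   [0.5, 0.8, 0.6],             # scheme B
                   [0.9, 0.3, 0.5]])            # scheme C
weights = np.array([0.5, 0.3, 0.2])
thresholds = np.array([0.3, 0.3, 0.3])
print(promethee_net_flows(scores, weights, thresholds))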
A regional, market oriented governance for disaster management: A new planning approach.
Blackstone, Erwin A; Hakim, Simon; Meehan, Brian
2017-10-01
This paper proposes a regional competitive governance and management of response and recovery from disasters. It presents problems experienced in major disasters, analyzes the failures, and suggests how a competitive system that relies on private and volunteer regional leaders, personnel, and capital can improve preparation, response and recovery efforts over the existing government system. A Public Choice approach is adopted to explain why government often fails, and how regional governance may be socially more efficient than the existing federal- state-local funded and managed disaster system. The paper suggests that the federal role might change from both funding and supplying aid in disasters to merely funding disaster recovery efforts. When a disaster occurs, available businesses and government resources in the region can be utilized under a competitive system. These resources could replace existing federal and state inventories and emergency personnel. An independent regionally controlled and managed council, which also develops its own financial resources, and local volunteer leaders are key for success. The paper suggests a new planning method that utilizes the statistical Factor Analysis methodology to derive an efficient organizational and functional model to confront disasters. Copyright © 2017 Elsevier Ltd. All rights reserved.
Anguita, Alberto; García-Remesal, Miguel; Graf, Norbert; Maojo, Victor
2016-04-01
Modern biomedical research relies on the semantic integration of heterogeneous data sources to find data correlations. Researchers access multiple datasets of disparate origin and identify elements (e.g. genes, compounds, pathways) that lead to interesting correlations. Normally, they must refer to additional public databases in order to enrich the information about the identified entities (e.g. scientific literature, published clinical trial results). While semantic integration techniques have traditionally focused on providing homogeneous access to private datasets (thus helping automate the first part of the research), and different solutions exist for browsing public data, there is still a need for tools that facilitate merging public repositories with private datasets. This paper presents a framework that automatically locates public data of interest to the researcher and semantically integrates it with existing private datasets. The framework has been designed as an extension of traditional data integration systems, and has been validated with an existing data integration platform from a European research project by integrating a private biological dataset with data from the National Center for Biotechnology Information (NCBI). Copyright © 2016 Elsevier Inc. All rights reserved.
Existence of Corotating and Counter-Rotating Vortex Pairs for Active Scalar Equations
NASA Astrophysics Data System (ADS)
Hmidi, Taoufik; Mateu, Joan
2017-03-01
In this paper, we study the existence of corotating and counter-rotating pairs of simply connected patches for the Euler equations and the (SQG)_α equations with α ∈ (0,1). From the numerical experiments implemented for the Euler equations in Deem and Zabusky (Phys Rev Lett 40(13):859-862, 1978), Pierrehumbert (J Fluid Mech 99:129-144, 1980), Saffman and Szeto (Phys Fluids 23(12):2339-2342, 1980), it is conjectured that a curve of steady vortex pairs passes through the point vortex pairs. There are some analytical proofs based on variational principles (Keady in J Aust Math Soc Ser B 26:487-502, 1985; Turkington in Nonlinear Anal Theory Methods Appl 9(4):351-369, 1985); however, they do not give enough information about the pairs, such as the uniqueness or the topological structure of each single vortex. We intend in this paper to give direct proofs confirming the numerical experiments and to extend these results to the (SQG)_α equation with α ∈ (0,1). The proofs rely on the contour dynamics equations combined with a desingularization of the point vortex pairs and the application of the implicit function theorem.
NASA Technical Reports Server (NTRS)
Kim, Jong Dae (Inventor); Nagarajaiah, Satish (Inventor); Barrera, Enrique V. (Inventor); Dharap, Prasad (Inventor); Zhiling, Li (Inventor)
2010-01-01
The present invention is directed toward devices comprising carbon nanotubes that are capable of detecting displacement, impact, stress, and/or strain in materials, methods of making such devices, methods for sensing/detecting/monitoring displacement, impact, stress, and/or strain via carbon nanotubes, and various applications for such methods and devices. The devices and methods of the present invention all rely on mechanically-induced electronic perturbations within the carbon nanotubes to detect and quantify such stress/strain. Such detection and quantification can rely on techniques which include, but are not limited to, electrical conductivity/conductance and/or resistivity/resistance detection/measurements, thermal conductivity detection/measurements, electroluminescence detection/measurements, photoluminescence detection/measurements, and combinations thereof. All such techniques rely on an understanding of how such properties change in response to mechanical stress and/or strain.
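For the resistance-based sensing route in particular, the conversion from a measured relative resistance change to strain can be sketched with a simple linear piezoresistive relation; the gauge factor below is an illustrative assumption, not a value taken from the patent.

def strain_from_resistance(r_unstrained_ohm, r_measured_ohm, gauge_factor=5.0):
    """strain = (dR/R0) / GF for a linear piezoresistive response."""
    delta_ratio = (r_measured_ohm - r_unstrained_ohm) / r_unstrained_ohm
    return delta_ratio / gauge_factor

r0 = 1200.0                        # baseline film resistance, ohms
for r in (1200.0, 1212.0, 1230.0):
    print(f"R = {r:7.1f} ohm -> strain = {strain_from_resistance(r0, r):.4%}")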
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between "voltage drives current" and "current drives voltage" as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets, but little research exists on how these tools compare to each other in practice. This work introduces and defines exploratory causal analysis (ECA) to address this issue, along with the concept of data causality in the taxonomy of causal studies introduced in this work. The motivation is to provide a framework for exploring potential causal structures in time series data sets. ECA is used on several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and intuitive notions of causality) for many simple systems but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.
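Of the tools named above, a Granger-style statistic is the most compact to illustrate; the sketch below (not an implementation of PAI or of the leaning) compares the residual variance of predicting a series from its own past with and without the other series' past, in both directions. The driving model and lag order are illustrative assumptions.

import numpy as np

def lagged_matrix(series, lag):
    """Columns: the series shifted by 1..lag, aligned with targets at index >= lag."""
    return np.column_stack([series[lag - k:-k] for k in range(1, lag + 1)])

def residual_var(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ coef)

def granger_ratio(x, y, lag=2):
    """var(y | y past) / var(y | y past + x past); values well above 1 suggest x -> y."""
    target = y[lag:]
    own = np.hstack([np.ones((len(target), 1)), lagged_matrix(y, lag)])
    full = np.hstack([own, lagged_matrix(x, lag)])
    return residual_var(own, target) / residual_var(full, target)

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):                    # y is driven by past x, not vice versa
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print("x -> y ratio:", round(granger_ratio(x, y), 2))    # noticeably greater than 1
print("y -> x ratio:", round(granger_ratio(y, x), 2))    # close to 1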
Locating Portable Stations to Support the Operation of Bike Sharing Systems
DOT National Transportation Integrated Search
2017-12-26
Redistributing bikes has been a major challenge for the daily operation of bike sharing system around the world. Existing literature explore solution strategies that rely on pick-up-and-delivery routing as well as user incentivization approaches. The...
Ground robotic measurement of aeolian processes
USDA-ARS?s Scientific Manuscript database
Models of aeolian processes rely on accurate measurements of the rates of sediment transport by wind, and careful evaluation of the environmental controls of these processes. Existing field approaches typically require intensive, event-based experiments involving dense arrays of instruments. These d...
Fara, Patricia
2008-12-01
Few original portraits exist of René Descartes, yet his theories of vision were central to Enlightenment thought. French philosophers combined his emphasis on sight with the English approach of insisting that ideas are not innate, but must be built up from experience. In particular, Denis Diderot criticised Descartes's views by describing how Nicholas Saunderson--a blind physics professor at Cambridge--relied on touch. Diderot also made Saunderson the mouthpiece for some heretical arguments against the existence of God.
NASA Astrophysics Data System (ADS)
Webster, Matthew Julian
The ultimate goal of any treatment of cancer is to maximize the likelihood of killing the tumor while minimizing the chance of damaging healthy tissues. One of the most effective ways to accomplish this is through radiation therapy, which must be able to target the tumor volume with a high accuracy while minimizing the dose delivered to healthy tissues. A successful method of accomplishing this is brachytherapy which works by placing the radiation source in very close proximity to the tumor. However, most current applications of brachytherapy rely mostly on the geometric manipulation of isotropic sources, which limits the ability to specifically target the tumor. The purpose of this work is to introduce several types of shielded brachytherapy applicators which are capable of targeting tumors with much greater accuracy than existing technologies. These applicators rely on the modulation of the dose profile through a high-density tungsten alloy shields to create anisotropic dose distributions. Two classes of applicators have been developed in this work. The first relies on the active motion of the shield, to aim a highly directional radiation profile. This allows for very precise control of the dose distribution for treatment, achieving unparalleled dose coverage to the tumor while sparing healthy tissues. This technique has been given the moniker of Dynamic Modulated Brachytherapy (DMBT). The second class of applicators, designed to reduce treatment complexity uses static applicators. These applicators retain the use of the tungsten shield, but the shield is motionless during treatment. By intelligently designing the shield, significant improvements over current methods have been demonstrated. Although these static applicators fail to match the dosimetric quality of DMBT applicators the simplified setup and treatment procedure gives them significant appeal. The focus of this work has been to optimize these shield designs, specifically for the treatment of rectal and breast carcinomas. The use of Monte Carlo methods and development of optimization algorithms have played a prominent role in accomplishing this. The use of shielded applicators, such as the ones described here, is the next logical step in the rapidly evolving field of brachytherapy.
Charpentier, R.R.; Gautier, D.L.
2011-01-01
The USGS has assessed undiscovered petroleum resources in the Arctic through geological mapping, basin analysis and quantitative assessment. The new map compilation provided the base from which geologists subdivided the Arctic for burial history modelling and quantitative assessment. The Circum-Arctic Resource Appraisal (CARA) was a probabilistic, geologically based study that used existing USGS methodology, modified somewhat for the circumstances of the Arctic. The assessment relied heavily on analogue modelling, with numerical input as lognormal distributions of sizes and numbers of undiscovered accumulations. Probabilistic results for individual assessment units were statistically aggregated taking geological dependencies into account. Fourteen papers in this Geological Society volume present summaries of various aspects of the CARA. © 2011 The Geological Society of London.
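A hedged sketch of the probabilistic aggregation idea is shown below: numbers and lognormal sizes of undiscovered accumulations are drawn for two assessment units and summed per trial, and fractiles are read off. All parameters are illustrative (not CARA inputs), and the units are aggregated independently here, whereas the CARA accounted for geological dependencies.

import numpy as np

rng = np.random.default_rng(0)
TRIALS = 20_000

def simulate_unit(mean_count, size_mu, size_sigma):
    """Total undiscovered volume per trial: a Poisson count of lognormal accumulation sizes."""
    counts = rng.poisson(mean_count, TRIALS)
    totals = np.zeros(TRIALS)
    for i, c in enumerate(counts):
        if c:
            totals[i] = rng.lognormal(size_mu, size_sigma, c).sum()
    return totals

unit_a = simulate_unit(mean_count=8, size_mu=np.log(50), size_sigma=1.0)    # illustrative sizes
unit_b = simulate_unit(mean_count=3, size_mu=np.log(200), size_sigma=1.2)
aggregate = unit_a + unit_b            # independent aggregation for simplicity

for f in (95, 50, 5):                  # F95/F50/F5 fractiles of the aggregate
    print(f"F{f:02d}: {np.percentile(aggregate, 100 - f):,.0f}")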
Establishing equivalence: methodological progress in group-matching design and analysis.
Kover, Sara T; Atwood, Amy K
2013-01-01
This methodological review draws attention to the challenges faced by intellectual and developmental disabilities researchers in the appropriate design and analysis of group comparison studies. We provide a brief overview of matching methodologies in the field, emphasizing group-matching designs used in behavioral research on cognition and language in neurodevelopmental disorders, including autism spectrum disorder, Fragile X syndrome, Down syndrome, and Williams syndrome. The limitations of relying on p values to establish group equivalence are discussed in the context of other existing methods: equivalence tests, propensity scores, and regression-based analyses. Our primary recommendation for advancing research on intellectual and developmental disabilities is the use of descriptive indices of adequate group matching: effect sizes (i.e., standardized mean differences) and variance ratios.
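The two recommended descriptive indices are straightforward to compute; a minimal sketch with synthetic group scores follows (the data are illustrative, not drawn from any of the studies reviewed).

import numpy as np

def matching_indices(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    var_a, var_b = a.var(ddof=1), b.var(ddof=1)
    pooled_sd = np.sqrt(((len(a) - 1) * var_a + (len(b) - 1) * var_b)
                        / (len(a) + len(b) - 2))
    d = (a.mean() - b.mean()) / pooled_sd        # standardized mean difference
    return d, var_a / var_b                      # effect size and variance ratio

rng = np.random.default_rng(0)
group_1 = rng.normal(100, 15, 40)                # e.g. matching-variable scores, group 1
group_2 = rng.normal(102, 12, 40)                # e.g. matching-variable scores, group 2
d, vr = matching_indices(group_1, group_2)
print(f"effect size d = {d:.2f}, variance ratio = {vr:.2f}")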
A color-coded vision scheme for robotics
NASA Technical Reports Server (NTRS)
Johnson, Kelley Tina
1991-01-01
Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
Detecting Visually Observable Disease Symptoms from Faces.
Wang, Kuan; Luo, Jiebo
2016-12-01
Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution to detect visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories to narrow down the possible medical causes of what is detected. Our method is in contrast with most existing approaches, which are limited by the availability of labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.
Identification of Chinese plague foci from long-term epidemiological data
Ben-Ari, Tamara; Neerinckx, Simon; Agier, Lydiane; Cazelles, Bernard; Xu, Lei; Zhang, Zhibin; Fang, Xiye; Wang, Shuchun; Liu, Qiyong; Stenseth, Nils C.
2012-01-01
Carrying out statistical analysis over an extensive dataset of human plague reports in Chinese villages from 1772 to 1964, we identified plague endemic territories in China (i.e., plague foci). Analyses rely on (i) a clustering method that groups time series based on their time-frequency resemblances and (ii) an ecological niche model that helps identify plague suitable territories characterized by value ranges for a set of predefined environmental variables. Results from both statistical tools indicate the existence of two disconnected plague territories corresponding to Northern and Southern China. Altogether, at least four well defined independent foci are identified. Their contours compare favorably with field observations. Potential and limitations of inferring plague foci and dynamics using epidemiological data is discussed. PMID:22570501
Aspects of Mathematical Modelling of Pressure Retarded Osmosis
Anissimov, Yuri G.
2016-01-01
In power generating terms, a pressure retarded osmosis (PRO) energy generating plant, on a river entering a sea or ocean, is equivalent to a hydroelectric dam with a height of about 60 meters. Therefore, PRO can add significantly to existing renewable power generation capacity if the economic constraints of the method are resolved. PRO energy generation relies on a semipermeable membrane that is permeable to water and impermeable to salt. Mathematical modelling plays an important part in understanding flows of water and salt near and across semipermeable membranes and helps to optimize PRO energy generation. Therefore, the modelling can help realize the potential of PRO energy generation. In this work, a few aspects of mathematical modelling of the PRO process are reviewed and discussed. PMID:26848696
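A worked relation commonly used in idealized PRO modelling (not necessarily the exact formulation of this review, and neglecting concentration polarization) is the membrane power density and its maximizing hydraulic pressure:

W = A_w\,(\Delta\pi - \Delta P)\,\Delta P, \qquad \Delta P_{\mathrm{opt}} = \frac{\Delta\pi}{2}, \qquad W_{\max} = \frac{A_w\,(\Delta\pi)^2}{4}

where A_w is the membrane water permeability, Δπ the osmotic pressure difference and ΔP the applied hydraulic pressure difference. With illustrative values Δπ ≈ 26 bar (seawater against fresh water) and A_w ≈ 1 L m⁻² h⁻¹ bar⁻¹, W_max ≈ 169 L·bar m⁻² h⁻¹ ≈ 4.7 W m⁻², which is the order of magnitude usually quoted for PRO membranes.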
Medical microbiology: laboratory diagnosis of invasive pneumococcal disease.
Werno, Anja M; Murdoch, David R
2008-03-15
The laboratory diagnosis of invasive pneumococcal disease (IPD) continues to rely on culture-based methods that have been used for many decades. The most significant recent developments have occurred with antigen detection assays, whereas the role of nucleic acid amplification tests has yet to be fully clarified. Despite developments in laboratory diagnostics, a microbiological diagnosis is still not made in most cases of IPD, particularly for pneumococcal pneumonia. The limitations of existing diagnostic tests impact the ability to obtain accurate IPD burden data and to assess the effectiveness of control measures, such as vaccination, in addition to the ability to diagnose IPD in individual patients. There is an urgent need for improved diagnostic tests for pneumococcal disease--especially tests that are suitable for use in underresourced countries.
Tracking of electrochemical impedance of batteries
NASA Astrophysics Data System (ADS)
Piret, H.; Granjon, P.; Guillet, N.; Cattin, V.
2016-04-01
This paper presents an evolutionary battery impedance estimation method, which can be easily embedded in vehicles or nomad devices. The proposed method not only allows an accurate frequency-domain impedance estimation, but also tracks its temporal evolution, contrary to classical electrochemical impedance spectroscopy methods. Taking into account constraints of cost and complexity, we propose to use the existing current-control electronics to perform an evolutionary estimation of the electrochemical impedance in the frequency domain. The developed method uses a simple wideband input signal and relies on a recursive local average of Fourier transforms. The averaging is controlled by a single parameter that manages a trade-off between tracking and estimation performance. This normalized parameter allows the behavior of the proposed estimator to be correctly adapted to variations of the impedance. The advantage of the proposed method is twofold: the method is easy to embed in a simple electronic circuit, and the battery impedance estimator is evolutionary. The ability of the method to monitor the impedance over time is demonstrated on a simulator and on a real lithium-ion battery, on which a repeatability study is carried out. The experiments reveal good tracking results and estimation performance as accurate as the usual laboratory approaches.
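A minimal sketch of the kind of recursive spectral averaging described above is given below: voltage and current records are Fourier transformed segment by segment and the cross- and auto-spectra are averaged with a single forgetting factor, whose value trades tracking ability against estimation variance. The excitation, segment length and forgetting factor are assumptions for illustration, not the paper's settings.

```python
# Sketch of an evolutionary impedance estimator: Fourier transforms of
# successive current/voltage segments are averaged recursively, with a single
# forgetting factor (alpha) trading tracking ability against estimation noise.
import numpy as np

def track_impedance(v, i, fs, seg_len=1024, alpha=0.1):
    """Return frequencies and a list of impedance spectra, one per segment."""
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    S_vi = np.zeros(len(freqs), dtype=complex)   # averaged cross-spectrum
    S_ii = np.zeros(len(freqs))                  # averaged current auto-spectrum
    history = []
    for start in range(0, len(v) - seg_len + 1, seg_len):
        V = np.fft.rfft(v[start:start + seg_len])
        I = np.fft.rfft(i[start:start + seg_len])
        S_vi = (1 - alpha) * S_vi + alpha * V * np.conj(I)
        S_ii = (1 - alpha) * S_ii + alpha * np.abs(I) ** 2
        history.append(S_vi / np.maximum(S_ii, 1e-12))   # Z(f) estimate
    return freqs, history

# Example with a synthetic battery-like response (resistive term + RC-like term).
fs, n = 1000.0, 20 * 1024
rng = np.random.default_rng(0)
i_sig = rng.standard_normal(n)                    # simple wideband excitation
t = np.arange(n) / fs
v_sig = 0.05 * i_sig + 0.02 * np.convolve(i_sig, np.exp(-t[:200] * 50), mode="same") / fs
freqs, Z = track_impedance(v_sig, i_sig, fs)
print(f"{len(Z)} impedance snapshots, |Z| at {freqs[10]:.1f} Hz = {abs(Z[-1][10]):.4f} ohm")
```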
Infrared fixed pattern noise reduction method based on Shearlet Transform
NASA Astrophysics Data System (ADS)
Rong, Shenghui; Zhou, Huixin; Zhao, Dong; Cheng, Kuanhong; Qian, Kun; Qin, Hanlin
2018-06-01
Non-uniformity correction (NUC) is an effective way to reduce fixed pattern noise (FPN) and improve infrared image quality. The temporal high-pass NUC method is a practical NUC method because of its simple implementation. However, traditional temporal high-pass NUC methods depend heavily on scene motion and suffer from image ghosting and blurring. Thus, this paper proposes an improved NUC method based on the Shearlet Transform (ST). First, the raw infrared image is decomposed into multiscale and multi-orientation subbands by ST, and the FPN component mainly exists in certain high-frequency subbands. Then, the high-frequency subbands are processed by a temporal filter to extract the FPN, exploiting its temporally low-frequency characteristics. In addition, each subband has a confidence parameter that determines the degree of FPN and is estimated adaptively from the variance of the subband. Finally, NUC is achieved by subtracting the estimated FPN component from the original subbands, and the corrected infrared image is obtained by the inverse ST. The performance of the proposed method is thoroughly evaluated with real and synthetic infrared image sequences. Experimental results indicate that the proposed method can substantially reduce FPN, yielding lower roughness and RMSE.
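For context, the classical temporal high-pass NUC baseline that the proposed Shearlet-domain method improves upon can be sketched as below; the recursive time constant is an illustrative assumption and the Shearlet decomposition itself is not reproduced.

```python
# Baseline temporal high-pass NUC (the classical method this paper improves on):
# each pixel's slowly varying component is estimated with a recursive low-pass
# filter and subtracted, leaving the high-pass (scene) part.
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05):
    """frames: iterable of 2-D arrays. Yields corrected frames."""
    lowpass = None
    for frame in frames:
        frame = frame.astype(float)
        if lowpass is None:
            lowpass = frame.copy()
        else:
            lowpass = (1 - alpha) * lowpass + alpha * frame   # per-pixel IIR low-pass
        # The low-pass output tracks fixed pattern noise (and static scene content,
        # which is what causes ghosting when the scene does not move).
        yield frame - lowpass + lowpass.mean()

# toy sequence: moving gradient scene plus a fixed per-pixel offset pattern
rng = np.random.default_rng(1)
fpn = rng.normal(0, 5, size=(64, 64))
seq = (np.roll(np.tile(np.linspace(0, 100, 64), (64, 1)), k, axis=1) + fpn for k in range(100))
corrected = list(temporal_highpass_nuc(seq))
print(corrected[-1].shape)
```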
Simplified model for determining local heat flux boundary conditions for slagging wall
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingzhi Li; Anders Brink; Mikko Hupa
2009-07-15
In this work, two models for calculating heat transfer through a cooled vertical wall covered with a running slag layer are investigated. The first relies on a discretization of the velocity equation, and the second relies on an analytical solution. The aim is to find a model that can be used for calculating local heat flux boundary conditions in computational fluid dynamics (CFD) analysis of such processes. Two different cases where molten deposits exist are investigated: the black liquor recovery boiler and the coal gasifier. The results show that the model relying on discretization of the velocity equation is more flexible in handling different temperature-viscosity relations. Nevertheless, the model relying on an analytical solution is fast enough for potential use as a CFD submodel. Furthermore, the influence of simplifications to the heat balance in the model is investigated. It is found that simplification of the heat balance can be applied when the radiation heat flux is dominant in the balance.
Delamination detection using methods of computational intelligence
NASA Astrophysics Data System (ADS)
Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata
2012-11-01
A reliable delamination prediction scheme is indispensable for preventing potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e., changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying analysis tool turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. The ANN is also used via inverse modeling to determine the position, size and location of delaminations from changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
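A hedged sketch of the inverse-modeling step is given below: a small ANN maps changes in the first few natural frequencies to delamination parameters. The forward model standing in for the finite element analyses, and the network settings, are invented for illustration.

```python
# Hedged sketch of the inverse-modeling idea: an ANN maps changes in the first
# few natural frequencies to delamination parameters. The forward model below is
# a made-up analytical surrogate standing in for the finite element analyses.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fake_forward_model(location, size):
    """Hypothetical stand-in for FE analysis: frequency drops grow with size
    and depend smoothly on delamination location along the laminate."""
    modes = np.arange(1, 5)
    return -size * (0.5 + 0.5 * np.sin(np.pi * location * modes))   # % change in 4 modes

# Build a training set of (frequency change -> location, size) pairs.
params = rng.uniform([0.0, 0.01], [1.0, 0.2], size=(2000, 2))
X = np.array([fake_forward_model(loc, sz) for loc, sz in params])
y = params

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)

# Inverse prediction for a "measured" frequency change vector.
measured = fake_forward_model(0.3, 0.1) + rng.normal(0, 0.01, 4)
print("estimated (location, size):", net.predict(measured.reshape(1, -1))[0])
```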
Hadagali, Prasannaah; Peters, James R; Balasubramanian, Sriram
2018-03-01
Personalized finite element (FE) models and hexahedral elements are preferred for biomechanical investigations. Feature-based multi-block methods are used to develop anatomically accurate personalized FE models with hexahedral meshes. It is tedious to manually construct multi-blocks for a large number of geometries on an individual basis to develop personalized FE models. The mesh-morphing method mitigates the tedium of meshing personalized geometries each time, but leads to element warping and loss of geometrical data. Such issues increase in magnitude when a normative spine FE model is morphed to a scoliosis-affected spinal geometry. The only way to avoid hex-mesh distortion or loss of geometry as a result of morphing is to manually construct the multi-blocks for the scoliosis-affected spine geometry of each individual, which is time intensive. A method to semi-automate the construction of multi-blocks on the geometry of scoliosis vertebrae from the existing multi-blocks of normative vertebrae is demonstrated in this paper. High-quality hexahedral elements were generated on the scoliosis vertebrae from the morphed multi-blocks of normative vertebrae. Constructing the multi-blocks took three months for the normative spine and less than a day for the scoliosis spine. The effort required to construct multi-blocks on personalized scoliosis spinal geometries is significantly reduced by morphing existing multi-blocks.
PREDICTION OF MULTICOMPONENT INORGANIC ATMOSPHERIC AEROSOL BEHAVIOR. (R824793)
Many existing models calculate the composition of the atmospheric aerosol system by solving a set of algebraic equations based on reversible reactions derived from thermodynamic equilibrium. Some models rely on an a priori knowledge of the presence of components in certain relati...
Implications of the New EEOC Guidelines.
ERIC Educational Resources Information Center
Dhanens, Thomas P.
1979-01-01
In the next few years employers will frequently be confronted with the fact that they cannot rely on undocumented, subjective selection procedures. As long as disparate impact exists in employee selection, employers will be required to validate whatever selection procedures they use. (Author/IRT)
A novel methodological approach for the analysis of host-ligand interactions.
Strat, Daniela; Missailidis, Sotiris; Drake, Alex F
2007-02-02
Traditional analysis of drug-binding data relies upon the Scatchard formalism. These methods rely upon the fitting of a linear equation providing intercept and gradient data that relate to physical properties, such as the binding constant, cooperativity coefficients and number of binding sites. However, the existence of different binding modes with different binding constants makes the implementation of these models difficult. This article describes a novel approach to the binding model of host-ligand interactions by using a derived analytical function describing the observed signal. The benefit of this method is that physically significant parameters, that is, binding constants and number of binding sites, are automatically derived by the use of a minimisation routine. This methodology was utilised to analyse the interactions between a novel antitumour agent and DNA. An optical spectroscopy study confirms that the pentacyclic acridine derivative (DH208) binds to nucleic acids. Two binding modes can be identified: a stronger one that involves intercalation and a weaker one that involves oriented outer-sphere binding. In both cases the plane of the bound acridine ring is parallel to the nucleic acid bases, orthogonal to the phosphate backbone. Ultraviolet (UV) and circular dichroism (CD) data were fitted using the proposed model. The binding constants and the number of binding sites derived from the model remained consistent across the different techniques used. The different wavelengths at which the measurements were made maintained the coherence of the results.
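As a simplified illustration of fitting an analytical signal model directly by minimisation (rather than via a Scatchard linearisation), the sketch below fits a single 1:1 binding mode with scipy; the paper's derived function covers two binding modes, and all concentrations and parameter values here are invented.

```python
# Simplified illustration: extract a binding constant by direct minimisation of
# an analytical signal model. Only a single 1:1 binding mode is modelled here.
import numpy as np
from scipy.optimize import curve_fit

def signal_model(dna_conc, K, s_free, s_bound):
    # fraction of ligand bound, assuming DNA sites in excess of the ligand
    f_bound = K * dna_conc / (1.0 + K * dna_conc)
    return s_free + (s_bound - s_free) * f_bound

# synthetic titration data (e.g., absorbance vs DNA concentration)
dna = np.linspace(0, 5e-5, 20)                       # mol/L
true = signal_model(dna, K=2e5, s_free=1.0, s_bound=0.6)
obs = true + np.random.default_rng(0).normal(0, 0.005, dna.size)

popt, pcov = curve_fit(signal_model, dna, obs, p0=[1e5, 1.0, 0.5])
K_fit = popt[0]
print(f"fitted binding constant K = {K_fit:.3g} L/mol "
      f"(+/- {np.sqrt(pcov[0, 0]):.2g})")
```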
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narashimha S.
2013-01-01
Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to the need for time-consuming data ordering and coding in applications requiring both visual representation and data handling or modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles, with subsequent client-side data manipulation and map color rendering. The approach relies on storing data using the lossless Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modifications, as well as spatial subsetting with physical parameter range filtering. This method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. An additional benefit is that multiple levels of averaging are provided, since map tiles must be generated at varying resolutions for the various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
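A possible rendering of the data-encoded tile idea is sketched below: each numerical value is packed into the 24 bits of an RGB pixel of a lossless PNG, which a browser (or here, Pillow) can decode back into physical units. The linear scaling and RGB packing scheme are assumptions for illustration; the paper does not state its exact encoding.

```python
# Sketch of storing numerical data inside a PNG map tile by packing each value
# into the 24 bits of an RGB pixel, so a client can decode it after rendering.
import numpy as np
from PIL import Image

def encode_tile(data, vmin, vmax, path):
    scaled = np.clip((data - vmin) / (vmax - vmin), 0, 1)
    code = (scaled * (2**24 - 1)).astype(np.uint32)          # 24-bit integer code
    rgb = np.stack([(code >> 16) & 255, (code >> 8) & 255, code & 255], axis=-1)
    Image.fromarray(rgb.astype(np.uint8), mode="RGB").save(path)   # lossless PNG

def decode_tile(path, vmin, vmax):
    rgb = np.asarray(Image.open(path), dtype=np.uint32)
    code = (rgb[..., 0] << 16) | (rgb[..., 1] << 8) | rgb[..., 2]
    return vmin + (vmax - vmin) * code / (2**24 - 1)

elevation = np.random.default_rng(0).uniform(-400, 8000, size=(256, 256))  # dummy DEM tile
encode_tile(elevation, -500.0, 9000.0, "tile.png")
restored = decode_tile("tile.png", -500.0, 9000.0)
print("max round-trip error (m):", float(np.abs(restored - elevation).max()))
```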
User-guided segmentation for volumetric retinal optical coherence tomography images
Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.
2014-01-01
Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962
Thermodynamically consistent data-driven computational mechanics
NASA Astrophysics Data System (ADS)
González, David; Chinesta, Francisco; Cueto, Elías
2018-05-01
In the paradigm of data-intensive science, automated, unsupervised discovering of governing equations for a given physical phenomenon has attracted a lot of attention in several branches of applied sciences. In this work, we propose a method able to avoid the identification of the constitutive equations of complex systems and rather work in a purely numerical manner by employing experimental data. In sharp contrast to most existing techniques, this method does not rely on the assumption on any particular form for the model (other than some fundamental restrictions placed by classical physics such as the second law of thermodynamics, for instance) nor forces the algorithm to find among a predefined set of operators those whose predictions fit best to the available data. Instead, the method is able to identify both the Hamiltonian (conservative) and dissipative parts of the dynamics while satisfying fundamental laws such as energy conservation or positive production of entropy, for instance. The proposed method is tested against some examples of discrete as well as continuum mechanics, whose accurate results demonstrate the validity of the proposed approach.
GUIDE-Seq enables genome-wide profiling of off-target cleavage by CRISPR-Cas nucleases
Nguyen, Nhu T.; Liebers, Matthew; Topkar, Ved V.; Thapar, Vishal; Wyvekens, Nicolas; Khayter, Cyd; Iafrate, A. John; Le, Long P.; Aryee, Martin J.; Joung, J. Keith
2014-01-01
CRISPR RNA-guided nucleases (RGNs) are widely used genome-editing reagents, but methods to delineate their genome-wide off-target cleavage activities have been lacking. Here we describe an approach for global detection of DNA double-stranded breaks (DSBs) introduced by RGNs and potentially other nucleases. This method, called Genome-wide Unbiased Identification of DSBs Enabled by Sequencing (GUIDE-Seq), relies on capture of double-stranded oligodeoxynucleotides into these breaks. Application of GUIDE-Seq to thirteen RGNs in two human cell lines revealed wide variability in RGN off-target activities and unappreciated characteristics of off-target sequences. The majority of identified sites were not detected by existing computational methods or ChIP-Seq. GUIDE-Seq also identified RGN-independent genomic breakpoint ‘hotspots’. Finally, GUIDE-Seq revealed that truncated guide RNAs exhibit substantially reduced RGN-induced off-target DSBs. Our experiments define the most rigorous framework for genome-wide identification of RGN off-target effects to date and provide a method for evaluating the safety of these nucleases prior to clinical use. PMID:25513782
User-guided segmentation for volumetric retinal optical coherence tomography images.
Yin, Xin; Chao, Jennifer R; Wang, Ruikang K
2014-08-01
Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method.
A Buoyancy-based Method of Determining Fat Levels in Drosophila.
Hazegh, Kelsey E; Reis, Tânia
2016-11-01
Drosophila melanogaster is a key experimental system in the study of fat regulation. Numerous techniques currently exist to measure levels of stored fat in Drosophila, but most are expensive and/or laborious and have clear limitations. Here, we present a method to quickly and cheaply determine organismal fat levels in L3 Drosophila larvae. The technique relies on the differences in density between fat and lean tissues and allows for rapid detection of fat and lean phenotypes. We have verified the accuracy of this method by comparison to body fat percentage as determined by neutral lipid extraction and gas chromatography coupled with mass spectrometry (GCMS). We furthermore outline detailed protocols for the collection and synchronization of larvae as well as relevant experimental recipes. The technique presented below overcomes the major shortcomings in the most widely used lipid quantitation methods and provides a powerful way to quickly and sensitively screen L3 larvae for fat regulation phenotypes while maintaining the integrity of the larvae. This assay has wide applications for the study of metabolism and fat regulation using Drosophila.
A risk analysis for production processes with disposable bioreactors.
Merseburger, Tobias; Pahl, Ina; Müller, Daniel; Tanner, Markus
2014-01-01
Quality management systems are, as a rule, tightly defined systems that conserve existing processes and therefore guarantee compliance with quality standards. But maintaining quality also includes introducing new enhanced production methods and making use of the latest findings of bioscience. The advances in biotechnology and single-use manufacturing methods for producing new drugs especially impose new challenges on quality management, as quality standards have not yet been set. New methods to ensure patient safety have to be established, as it is insufficient to rely only on current rules. A concept of qualification, validation, and manufacturing procedures based on risk management needs to be established and realized in pharmaceutical production. The chapter starts with an introduction to the regulatory background of the manufacture of medicinal products. It then continues with key methods of risk management. Hazards associated with the production of medicinal products with single-use equipment are described with a focus on bioreactors, storage containers, and connecting devices. The hazards are subsequently evaluated and criteria for risk evaluation are presented. This chapter concludes with aspects of industrial application of quality risk management.
NASA Astrophysics Data System (ADS)
Jara, Daniel; de Dreuzy, Jean-Raynald; Cochepin, Benoit
2017-12-01
Reactive transport modeling contributes to understanding geophysical and geochemical processes in subsurface environments. Operator splitting methods have been proposed as non-intrusive coupling techniques that optimize the use of existing chemistry and transport codes. In this spirit, we propose a coupler relying on external geochemical and transport codes, with an operator segmentation that enables the development of additional splitting methods. We provide an object-oriented implementation in TReacLab, developed in the MATLAB environment as free open source software with an accessible repository. TReacLab contains classical coupling methods, template interfaces and calling functions for two classical transport and reactive codes (PHREEQC and COMSOL). It is tested on four classical benchmarks with homogeneous and heterogeneous reactions, at equilibrium or kinetically controlled. We show that full decoupling down to the implementation level has a cost in terms of accuracy compared to more integrated and optimized codes. The use of non-intrusive implementations like TReacLab is still justified for coupling independent transport and chemical software with minimal development effort, but should be systematically and carefully assessed.
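The sequential operator-splitting idea can be sketched as below, with a 1-D upwind advection step followed by a simple first-order decay standing in for a call to an external geochemical code; all numerical values are illustrative and TReacLab's actual interfaces are not reproduced.

```python
# Minimal sequential (non-iterative) operator-splitting sketch: within each time
# step the transport operator (1-D upwind advection) and the chemistry operator
# (here a first-order decay, standing in for an external geochemical code such
# as PHREEQC) are applied one after the other.
import numpy as np

nx, dx, dt = 100, 1.0, 0.5
velocity, k_decay = 1.0, 0.05
c = np.zeros(nx)
c[0] = 1.0                                      # constant-concentration inlet

def transport_step(c):
    c_new = c.copy()
    c_new[1:] -= velocity * dt / dx * (c[1:] - c[:-1])   # explicit upwind advection
    c_new[0] = 1.0
    return c_new

def chemistry_step(c):
    return c * np.exp(-k_decay * dt)            # exact solution of dc/dt = -k c

for _ in range(200):
    c = chemistry_step(transport_step(c))       # sequential non-iterative splitting

print("concentration at mid-column:", round(float(c[nx // 2]), 4))
```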
Improved detection of soma location and morphology in fluorescence microscopy images of neurons.
Kayasandik, Cihan Bilge; Labate, Demetrio
2016-12-01
Automated detection and segmentation of somas in fluorescent images of neurons is a major goal in quantitative studies of neuronal networks, including applications of high-content-screenings where it is required to quantify multiple morphological properties of neurons. Despite recent advances in image processing targeted to neurobiological applications, existing algorithms of soma detection are often unreliable, especially when processing fluorescence image stacks of neuronal cultures. In this paper, we introduce an innovative algorithm for the detection and extraction of somas in fluorescent images of networks of cultured neurons where somas and other structures exist in the same fluorescent channel. Our method relies on a new geometrical descriptor called Directional Ratio and a collection of multiscale orientable filters to quantify the level of local isotropy in an image. To optimize the application of this approach, we introduce a new construction of multiscale anisotropic filters that is implemented by separable convolution. Extensive numerical experiments using 2D and 3D confocal images show that our automated algorithm reliably detects somas, accurately segments them, and separates contiguous ones. We include a detailed comparison with state-of-the-art existing methods to demonstrate that our algorithm is extremely competitive in terms of accuracy, reliability and computational efficiency. Our algorithm will facilitate the development of automated platforms for high content neuron image processing. A Matlab code is released open-source and freely available to the scientific community. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It should be emphasized that estimating the efficiency of an operating column has to be distinguished from that of a column being designed.
Leveraging existing information for use in a National Nuclear Forensics Library (NNFL)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davydov, Jerry; Dion, Heather; LaMont, Stephen
A National Nuclear Forensics Library (NNFL) assists a State in assessing whether nuclear material encountered out of regulatory control is of domestic or international origin. By leveraging nuclear material registries, nuclear enterprise records, and safeguards accountancy information, as well as existing domestic technical capability and subject-matter domain expertise, states can better assess the effort required for setting up an NNFL. States that are largely recipients of nuclear and radiological materials and have no internal production capabilities may create an NNFL that relies on existing information rather than carrying out advanced analyses on domestic materials.
Leveraging existing information for use in a National Nuclear Forensics Library (NNFL)
Davydov, Jerry; Dion, Heather; LaMont, Stephen; ...
2015-12-16
A National Nuclear Forensics Library (NNFL) assists a State in assessing whether nuclear material encountered out of regulatory control is of domestic or international origin. By leveraging nuclear material registries, nuclear enterprise records, and safeguards accountancy information, as well as existing domestic technical capability and subject-matter domain expertise, states can better assess the effort required for setting up an NNFL. States that are largely recipients of nuclear and radiological materials and have no internal production capabilities may create an NNFL that relies on existing information rather than carrying out advanced analyses on domestic materials.
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
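A simplified reading of the sub-cluster approach is sketched below: large clusters are split into sub-clusters of five, the one-way ANOVA ICC estimator is computed, and a Wald-type interval is formed from Smith's large-sample variance approximation in its common equal-cluster-size form. The simulated data and the exact interval construction are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative sketch of the sub-cluster idea for a binary outcome.
import numpy as np
from scipy.stats import norm

def anova_icc(groups):
    k, m = len(groups), len(groups[0])           # assumes equal sub-cluster sizes
    grand = np.mean(np.concatenate(groups))
    msb = m * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw), k, m

def smith_ci(rho, k, m, level=0.95):
    # Smith's large-sample variance of the ANOVA estimator (equal cluster sizes)
    var = 2 * (1 - rho) ** 2 * (1 + (m - 1) * rho) ** 2 / (m * (m - 1) * (k - 1))
    half = norm.ppf(0.5 + level / 2) * np.sqrt(var)
    return rho - half, rho + half

rng = np.random.default_rng(0)
# 10 large clusters of 200 binary responses with a small underlying ICC (~0.02)
big_clusters = [rng.binomial(1, rng.beta(4, 36), size=200) for _ in range(10)]
subclusters = []
for c in big_clusters:
    rng.shuffle(c)                               # random division into sub-clusters of 5
    subclusters += [c[i:i + 5].astype(float) for i in range(0, 200, 5)]

rho_hat, k, m = anova_icc(subclusters)
low, high = smith_ci(rho_hat, k, m)
print(f"ICC estimate {rho_hat:.3f}, 95% CI ({low:.3f}, {high:.3f})")
```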
The ethical tightrope: politics of intimacy and consensual method in sexuality research.
Zago, Luiz F; Holmes, Dave
2015-06-01
This paper seeks to analyze the construction of ethics in sexuality research in which qualitative methods are employed in the field of social sciences. Analyses are based on a bibliographic review of current discussions on research methods of queer theory and on the authors' own experiences of past research on sexuality. The article offers a theoretical perspective on the ways ethnography and in-depth interviews become methods that can rely on a consensual method and create a politics of intimacy between the researchers and research participants. The politics of intimacy may contribute to the production of a politically engaged knowledge while escaping from the moral matrix that usually governs the relationship between researchers and research participants. It is argued here that the researcher's sexed and gendered body matters for fieldwork; that the consensual method among participants may be employed in sexuality research as a fruitful tool; and that the relationships created among researchers and participants can pose a challenge to predetermined ethical guidelines in research. As a result, discussions problematize the existence of a politics of intimacy in sexuality research that is characterized by ethical relations among research participants. © 2014 John Wiley & Sons Ltd.
Gourmelon, Anne; Delrue, Nathalie
Ten years have elapsed since the OECD published the Guidance Document on the validation and international regulatory acceptance of test methods for hazard assessment. Much experience has been gained since then in validation centres, in countries and at the OECD on a variety of test methods that were subjected to validation studies. This chapter reviews validation principles and highlights common features across studies that appear to be important for regulatory acceptance. Existing OECD-agreed validation principles will most likely remain generally relevant and applicable to the challenges associated with the validation of future test methods. Some adaptations may be needed to take into account the level of technique introduced in test systems, but demonstration of relevance and reliability will continue to play a central role as a prerequisite for regulatory acceptance. Demonstration of relevance will become more challenging for test methods that form part of a set of predictive tools and methods and do not stand alone. The OECD is keen on ensuring that, while these concepts evolve, countries can continue to rely on valid methods and harmonised approaches for efficient testing and assessment of chemicals.
Effect of Resin-modified Glass Ionomer Cement Dispensing/Mixing Methods on Mechanical Properties.
Sulaiman, T A; Abdulmajeed, A A; Altitinchi, A; Ahmed, S N; Donovan, T E
2018-03-23
Resin-modified glass ionomer cements (RMGIs) are often used for luting indirect restorations. Hand-mixing traditional cements demands significant time and may be technique sensitive. Efforts have been made by manufacturers to introduce the same cement with different dispensing/mixing methods. It is not known what effects these changes may have on the mechanical properties of the dental cement. The purpose of this study was to evaluate the mechanical properties (diametral tensile strength [DTS], compressive strength [CS], and fracture toughness [FT]) of RMGIs with different dispensing/mixing systems. RMGI specimens (n=14) of RelyX Luting (hand mix), RelyX Luting Plus (clicker-hand mix), RelyX Luting Plus (automix) (3M ESPE), GC Fuji PLUS (capsule-automix), and GC FujiCEM 2 (automix) (GC) were prepared for each mechanical test and examined, after thermocycling (n=7/subgroup) for 20,000 cycles, with the following tests: DTS, CS (ISO 9917-1), and FT (ISO 6872; single-edge V-notched beam method). Specimens were mounted and loaded with a universal testing machine until failure occurred. Two-/one-way analysis of variance followed by the Tukey honestly significant difference post hoc test was used to analyze data for statistical significance (p<0.05). The interaction effect of dispensing/mixing method and thermocycling was significant only for the CS test of the GC group (p<0.05). The different dispensing/mixing methods had no effect on the DTS of the tested cements. The CS of GC Fuji PLUS was significantly higher than that of the automix version (p<0.05). The FT decreased significantly when switching from RelyX Luting (hand mix) to RelyX Luting Plus (clicker-hand mix) and to RelyX Luting Plus (automix) (p<0.05). Except for the DTS of the GC group and the CS of GC Fuji PLUS, thermocycling significantly reduced the mechanical properties of the RMGI cements (p<0.05). Introducing alternative dispensing/mixing methods for RMGIs to reduce time and technique sensitivity may affect mechanical properties and is brand dependent.
Facilitating Subject Matter Expert (SME)-Built Knowledge Bases (KBS)
2004-12-01
exists in the field of economics. Most economics textbooks articulate the desirability of maintaining low inflation, ceteris paribus. However, policy...might say that functional knowledge is what the economic policymakers have and rely on to realize the principles agreed upon in economics textbooks . Note
One commonly used approach to CSO pollution abatement is to rely on a storm-event based design of storage-tank volume to capture CSO for pump-back and/or bleed-back (gravity flow) to the existing WWTP for treatment. However, this approach may not be by itself the most economical...
Often, human health risk assessments have relied on qualitative approaches for hazard identification to integrate evidence across multiple studies to conclude whether particular hazards exist. However, quantitative approaches for evidence integration, including the application o...
Assessing the acute toxicity of physically and chemically dispersed oil following an oil spill has generally relied on existing toxicological data for a relatively limited number of aquatic species. Recognition of differences in species sensitivities to contaminants has facilitat...
Orhan, U.; Erdogmus, D.; Roark, B.; Oken, B.; Purwar, S.; Hild, K. E.; Fowler, A.; Fried-Oken, M.
2013-01-01
RSVP Keyboard™ is an electroencephalography (EEG) based brain computer interface (BCI) typing system designed as an assistive technology for the communication needs of people with locked-in syndrome (LIS). It relies on rapid serial visual presentation (RSVP) and does not require precise eye gaze control. Existing BCI typing systems that use event-related potentials (ERPs) in EEG suffer from low accuracy due to low signal-to-noise ratio. Hence, RSVP Keyboard™ utilizes context-based decision making by incorporating a language model to improve the accuracy of letter decisions. To further improve the contribution of the language model, we propose recursive Bayesian estimation, which relies on non-committing string decisions, and conduct an offline analysis that compares it with the existing naïve Bayesian fusion approach. The results indicate the superiority of the recursive Bayesian fusion, and in the next generation of RSVP Keyboard™ we plan to incorporate this new approach. PMID:23366432
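A toy sketch of the recursive Bayesian fusion idea is given below: the posterior over candidate letters is updated after every RSVP sequence by multiplying the current belief with per-letter EEG likelihoods, and no letter is committed until the belief is confident enough. The alphabet, likelihood values, prior and threshold are invented for illustration.

```python
# Toy sketch of recursive Bayesian fusion of EEG evidence with a language-model
# prior, committing to a letter only once the posterior is confident enough.
import numpy as np

letters = list("ABCDE_")                         # tiny alphabet for the example
lm_prior = np.array([0.30, 0.10, 0.20, 0.15, 0.05, 0.20])   # assumed language-model prior

def recursive_fusion(eeg_likelihood_sequences, prior, threshold=0.9):
    belief = prior.copy()
    for likelihood in eeg_likelihood_sequences:  # one likelihood vector per RSVP sequence
        belief *= likelihood                     # Bayes update (unnormalised)
        belief /= belief.sum()
        if belief.max() >= threshold:            # commit only when confident
            return letters[int(belief.argmax())], belief
    return None, belief                          # no decision yet

# likelihoods from three noisy RSVP sequences, each mildly favouring letter 'C'
seqs = [np.array([0.15, 0.10, 0.35, 0.15, 0.10, 0.15]),
        np.array([0.20, 0.10, 0.30, 0.20, 0.05, 0.15]),
        np.array([0.10, 0.15, 0.40, 0.15, 0.05, 0.15])]
decision, belief = recursive_fusion(seqs, lm_prior, threshold=0.75)
print("decision:", decision, "posterior:", np.round(belief, 3))
```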
Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.
Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria
2010-08-06
Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution-of-the-product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; in contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
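For concreteness, the bootstrapped percentile confidence interval for an indirect effect a*b, one of the better-performing approaches in this comparison, can be sketched as below; the simulated data and sample size are illustrative only.

```python
# Sketch of the bootstrapped percentile confidence interval for an indirect
# effect a*b in a simple X -> M -> Y mediation model.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)             # mediator model, true a = 0.5
y = 0.4 * m + 0.1 * x + rng.standard_normal(n)   # outcome model,  true b = 0.4

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # slope of M on X
    # slope of Y on M controlling for X, via ordinary least squares
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")
```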
Kerogen extraction from subterranean oil shale resources
Looney, Mark Dean; Lestz, Robert Steven; Hollis, Kirk; Taylor, Craig; Kinkead, Scott; Wigand, Marcus
2010-09-07
The present invention is directed to methods for extracting a kerogen-based product from subsurface (oil) shale formations, wherein such methods rely on fracturing and/or rubblizing portions of said formations so as to enhance their fluid permeability, and wherein such methods further rely on chemically modifying the shale-bound kerogen so as to render it mobile. The present invention is also directed at systems for implementing at least some of the foregoing methods. Additionally, the present invention is also directed to methods of fracturing and/or rubblizing subsurface shale formations and to methods of chemically modifying kerogen in situ so as to render it mobile.
Kerogen extraction from subterranean oil shale resources
Looney, Mark Dean [Houston, TX; Lestz, Robert Steven [Missouri City, TX; Hollis, Kirk [Los Alamos, NM; Taylor, Craig [Los Alamos, NM; Kinkead, Scott [Los Alamos, NM; Wigand, Marcus [Los Alamos, NM
2009-03-10
The present invention is directed to methods for extracting a kerogen-based product from subsurface (oil) shale formations, wherein such methods rely on fracturing and/or rubblizing portions of said formations so as to enhance their fluid permeability, and wherein such methods further rely on chemically modifying the shale-bound kerogen so as to render it mobile. The present invention is also directed at systems for implementing at least some of the foregoing methods. Additionally, the present invention is also directed to methods of fracturing and/or rubblizing subsurface shale formations and to methods of chemically modifying kerogen in situ so as to render it mobile.
Sharma, Nripen S.; Jindal, Rohit; Mitra, Bhaskar; Lee, Serom; Li, Lulu; Maguire, Tim J.; Schloss, Rene; Yarmush, Martin L.
2014-01-01
Skin sensitization remains a major environmental and occupational health hazard. Animal models have been used as the gold standard method of choice for estimating chemical sensitization potential. However, a growing international drive and consensus for minimizing animal usage have prompted the development of in vitro methods to assess chemical sensitivity. In this paper, we examine existing approaches including in silico models, cell and tissue based assays for distinguishing between sensitizers and irritants. The in silico approaches that have been discussed include Quantitative Structure Activity Relationships (QSAR) and QSAR based expert models that correlate chemical molecular structure with biological activity and mechanism based read-across models that incorporate compound electrophilicity. The cell and tissue based assays rely on an assortment of mono and co-culture cell systems in conjunction with 3D skin models. Given the complexity of allergen induced immune responses, and the limited ability of existing systems to capture the entire gamut of cellular and molecular events associated with these responses, we also introduce a microfabricated platform that can capture all the key steps involved in allergic contact sensitivity. Finally, we describe the development of an integrated testing strategy comprised of two or three tier systems for evaluating sensitization potential of chemicals. PMID:24741377
Spectral unmixing of urban land cover using a generic library approach
NASA Astrophysics Data System (ADS)
Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben
2016-10-01
Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e., collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-)automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method that is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
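The unmixing step that such a pruned library feeds into can be sketched as below using nonnegative least squares per pixel; the pruning algorithms themselves (IES, MUSIC or the proposed hybrid) are not reproduced, and all spectra are synthetic.

```python
# Minimal illustration of library-based unmixing: a pixel spectrum is decomposed
# as a nonnegative combination of endmember spectra from an (already pruned)
# library using nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_endmembers = 100, 4
library = np.abs(rng.standard_normal((n_bands, n_endmembers)))   # pruned EM library

# synthetic mixed pixel: 60% EM0, 30% EM2, 10% EM3 plus noise
true_frac = np.array([0.6, 0.0, 0.3, 0.1])
pixel = library @ true_frac + rng.normal(0, 0.01, n_bands)

abundances, residual = nnls(library, pixel)
abundances /= abundances.sum()                   # renormalise to sum to one
print("estimated fractions:", np.round(abundances, 3))
```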
Can Simple Transmission Chains Foster Collective Intelligence in Binary-Choice Tasks?
Moussaïd, Mehdi; Seyed Yahosseini, Kyanoush
2016-01-01
In many social systems, groups of individuals can find remarkably efficient solutions to complex cognitive problems, sometimes even outperforming a single expert. The success of the group, however, crucially depends on how the judgments of the group members are aggregated to produce the collective answer. A large variety of such aggregation methods have been described in the literature, such as averaging the independent judgments, relying on the majority or setting up a group discussion. In the present work, we introduce a novel approach for aggregating judgments, the transmission chain, which has not yet been consistently evaluated in the context of collective intelligence. In a transmission chain, all group members have access to a unique collective solution and can improve it sequentially. Over repeated improvements, the collective solution that emerges reflects the judgments of every group member. We address the question of whether such a transmission chain can foster collective intelligence for binary-choice problems. In a series of numerical simulations, we explore the impact of various factors on the performance of the transmission chain, such as the group size, the model parameters, and the structure of the population. The performance of this method is compared to those of the majority rule and the confidence-weighted majority. Finally, we rely on two existing datasets of individuals performing a series of binary decisions to evaluate the expected performances of the three methods empirically. We find that the parameter space where the transmission chain has the best performance rarely appears in real datasets. We conclude that the transmission chain is best suited for other types of problems, such as those that have cumulative properties.
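The aggregation rules named above can be compared in a toy simulation like the one below; note that the transmission-chain rule used here (a member overwrites the current collective answer only when more confident than whoever last set it) is an assumed simplification for illustration, not the paper's exact model.

```python
# Toy comparison of judgment-aggregation rules for a binary choice.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n=25, p_correct=0.6):
    answers = rng.random(n) < p_correct              # True = member judges correctly
    # mildly informative confidence: correct members tend to be more confident
    confidence = np.where(answers, rng.uniform(0.55, 1.0, n), rng.uniform(0.5, 0.9, n))
    return answers, confidence

def majority(ans, conf):
    return ans.mean() > 0.5

def weighted_majority(ans, conf):
    return conf[ans].sum() > conf[~ans].sum()

def transmission_chain(ans, conf):
    current, current_conf = ans[0], conf[0]
    for a, c in zip(ans[1:], conf[1:]):
        if c > current_conf:                         # assumed improvement rule
            current, current_conf = a, c
    return current

rules = {"majority": majority, "weighted majority": weighted_majority,
         "transmission chain": transmission_chain}
trials = 2000
for name, rule in rules.items():
    correct = sum(rule(*simulate_group()) for _ in range(trials))
    print(f"{name:20s} accuracy = {correct / trials:.3f}")
```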
Can Simple Transmission Chains Foster Collective Intelligence in Binary-Choice Tasks?
Moussaïd, Mehdi; Seyed Yahosseini, Kyanoush
2016-01-01
In many social systems, groups of individuals can find remarkably efficient solutions to complex cognitive problems, sometimes even outperforming a single expert. The success of the group, however, crucially depends on how the judgments of the group members are aggregated to produce the collective answer. A large variety of such aggregation methods have been described in the literature, such as averaging the independent judgments, relying on the majority or setting up a group discussion. In the present work, we introduce a novel approach for aggregating judgments—the transmission chain—which has not yet been consistently evaluated in the context of collective intelligence. In a transmission chain, all group members have access to a unique collective solution and can improve it sequentially. Over repeated improvements, the collective solution that emerges reflects the judgments of every group member. We address the question of whether such a transmission chain can foster collective intelligence for binary-choice problems. In a series of numerical simulations, we explore the impact of various factors on the performance of the transmission chain, such as the group size, the model parameters, and the structure of the population. The performance of this method is compared to those of the majority rule and the confidence-weighted majority. Finally, we rely on two existing datasets of individuals performing a series of binary decisions to evaluate the expected performances of the three methods empirically. We find that the parameter space where the transmission chain has the best performance rarely appears in real datasets. We conclude that the transmission chain is best suited for other types of problems, such as those that have cumulative properties. PMID:27880825
NASA Astrophysics Data System (ADS)
Riah, Zoheir; Sommet, Raphael; Nallatamby, Jean C.; Prigent, Michel; Obregon, Juan
2004-05-01
We present in this paper a set of coherent tools for noise characterization and physics-based analysis of noise in semiconductor devices. This noise toolbox relies on a low-frequency noise measurement setup with special high-current capabilities, thanks to an accurate and original calibration. It also relies on a simulation tool based on the drift-diffusion equations and linear perturbation theory, associated with the Green's function technique. This physics-based noise simulator has been implemented successfully in the Scilab environment and is specifically dedicated to HBTs. Some results are given and compared to those existing in the literature.
Yi, Ming; Stephens, Robert M.
2008-01-01
Analysis of microarray and other high throughput data often involves identification of genes consistently up or down-regulated across samples as the first step in extraction of biological meaning. This gene-level paradigm can be limited as a result of valid sample fluctuations and biological complexities. In this report, we describe a novel method, SLEPR, which eliminates this limitation by relying on pathway-level consistencies. Our method first selects the sample-level differentiated genes from each individual sample, capturing genes missed by other analysis methods, ascertains the enrichment levels of associated pathways from each of those lists, and then ranks annotated pathways based on the consistency of enrichment levels of individual samples from both sample classes. As a proof of concept, we have used this method to analyze three public microarray datasets with a direct comparison with the GSEA method, one of the most popular pathway-level analysis methods in the field. We found that our method was able to reproduce the earlier observations with significant improvements in depth of coverage for validated or expected biological themes, but also produced additional insights that make biological sense. This new method extends existing analyses approaches and facilitates integration of different types of HTP data. PMID:18818771
Methyl-esterified 3-hydroxybutyrate oligomers protect bacteria from hydroxyl radicals
USDA-ARS?s Scientific Manuscript database
Bacteria rely mainly on enzymes, glutathione and other low-molecular weight thiols to overcome oxidative stress. However, hydroxyl radicals are the most cytotoxic reactive oxygen species, and no known enzymatic system exists for their detoxification. We now show that methyl-esterified dimers and tri...
THE FEASIBILITY OF EPIDEMIOLOGIC STUDIES OF ARSENIC-RELATED HEALTH EFFECTS IN THE U.S.
The planning of the feasibility studies will rely on existing data on drinking water arsenic-exposed populations. Exposure concentrations of drinking water arsenic will be collected at the state and local levels, and other descriptive information about the populations exposed inc...
When BVD doesn't look like BVD
USDA-ARS?s Scientific Manuscript database
Due to the sheer number of different clinical presentations existing under the BVD umbrella, diagnosing BVD based on clinical signs is not advisable. Thus diagnosis relies upon testing of samples in diagnostic laboratories. In the US, most of the diagnostic effort has been focused on identifying a...
Existence and amplitude bounds for irrotational water waves in finite depth
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian
2017-12-01
We prove the existence of solutions to the irrotational water-wave problem in finite depth and derive an explicit upper bound on the amplitude of the nonlinear solutions in terms of the wavenumber, the total hydraulic head, the wave speed and the relative mass flux. Our approach relies upon a reformulation of the water-wave problem as a one-dimensional pseudo-differential equation and the Newton-Kantorovich iteration for Banach spaces. This article is part of the theme issue 'Nonlinear water waves'.
The Vlasov-Navier-Stokes System in a 2D Pipe: Existence and Stability of Regular Equilibria
NASA Astrophysics Data System (ADS)
Glass, Olivier; Han-Kwan, Daniel; Moussa, Ayman
2018-05-01
In this paper, we study the Vlasov-Navier-Stokes system in a 2D pipe with partially absorbing boundary conditions. We show the existence of stationary states for this system near small Poiseuille flows for the fluid phase, for which the kinetic phase is not trivial. We prove the asymptotic stability of these states with respect to appropriately compactly supported perturbations. The analysis relies on geometric control conditions which help to avoid any concentration phenomenon for the kinetic phase.
State Clean Energy Policies Analysis. State, Utility, and Municipal Loan Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lantz, Eric
2010-05-01
This report relies on six in-depth interviews with loan program administrators to provide descriptions of existing programs. Findings from the interviews are combined with a review of relevant literature to elicit best practices and lessons learned from existing loan programs. Data collected from each of the loan programs profiled are used to quantify the impacts of these specific loan programs on the commonly cited, overarching state clean energy goals of energy security, economic development, and environmental protection.
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: (a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, (b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and (c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State.
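One member of class (b) above, inspecting the stability of the estimated GPD shape parameter across candidate thresholds, can be sketched as below with synthetic rainfall-like data; this is not the specific selection procedure evaluated in the study.

```python
# Sketch of a "graphical" threshold-selection strategy: fit a GPD to the
# excesses above a range of candidate thresholds and inspect how the estimated
# shape parameter varies, looking for a region where it stabilises.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# synthetic "daily rainfall": gamma-distributed wet-day amounts
rain = rng.gamma(shape=0.8, scale=8.0, size=20000)

for u in np.percentile(rain[rain > 0], [80, 85, 90, 95, 97.5, 99]):
    excesses = rain[rain > u] - u
    xi, loc, sigma = genpareto.fit(excesses, floc=0)   # fix the location at zero
    print(f"threshold u = {u:6.2f} mm  n_exc = {len(excesses):5d}  "
          f"shape = {xi:+.3f}  scale = {sigma:.2f}")
```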
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann's mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
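A bare-bones version of the von Neumann lag estimator described above is sketched below: for each trial delay the two light curves are merged into one time-ordered series and the mean-square successive difference is computed, with the minimizing delay taken as the lag. The optimized scheme's handling of sparse sampling and flux normalisation is not reproduced, and the synthetic light curves are invented.

```python
# Sketch of a von Neumann-type lag estimator for two irregularly sampled series.
import numpy as np

def von_neumann(values):
    return np.mean(np.diff(values) ** 2)          # mean-square successive difference

def estimate_lag(t1, f1, t2, f2, trial_lags):
    scores = []
    for tau in trial_lags:
        t = np.concatenate([t1, t2 - tau])        # shift the echo light curve
        f = np.concatenate([f1, f2])
        order = np.argsort(t)
        scores.append(von_neumann(f[order]))      # randomness of the merged curve
    scores = np.asarray(scores)
    return trial_lags[int(scores.argmin())], scores

# synthetic continuum + delayed, noisy echo sampled on irregular epochs
rng = np.random.default_rng(0)
def signal(t):
    return np.sin(2 * np.pi * t / 60.0) + 0.3 * np.sin(2 * np.pi * t / 23.0)

t1 = np.sort(rng.uniform(0, 200, 120))
f1 = signal(t1) + rng.normal(0, 0.05, t1.size)
t2 = np.sort(rng.uniform(0, 200, 100))
f2 = signal(t2 - 12.0) + rng.normal(0, 0.05, t2.size)   # true lag of 12 days

lags = np.arange(0.0, 30.0, 0.5)
best, _ = estimate_lag(t1, f1, t2, f2, lags)
print("estimated lag:", best, "days")
```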
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
A weakly-compressible Cartesian grid approach for hydrodynamic flows
NASA Astrophysics Data System (ADS)
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article proposes an original strategy for solving hydrodynamic flows. The motivations for this strategy are first developed: the aim is to model viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, which are usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements while allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
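For background, weakly-compressible formulations typically close the system with a stiff barotropic equation of state such as Tait's law (a common choice, assumed here only for illustration; the abstract does not state which closure the WCCH solver uses):

```latex
p(\rho) \;=\; \frac{\rho_0\, c_0^{2}}{\gamma}\left[\left(\frac{\rho}{\rho_0}\right)^{\gamma}-1\right],
\qquad \gamma \approx 7 \ \text{(water)},
```

where the artificial sound speed \(c_0\) is chosen much larger than the maximum flow speed so that relative density variations remain at the percent level, which is what makes a fully explicit time integration affordable.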
An Efficient Statistical Method to Compute Molecular Collisional Rate Coefficients
NASA Astrophysics Data System (ADS)
Loreau, Jérôme; Lique, François; Faure, Alexandre
2018-01-01
Our knowledge about the “cold” universe often relies on molecular spectra. A general property of such spectra is that the energy level populations are rarely at local thermodynamic equilibrium. Solving the radiative transfer thus requires the availability of collisional rate coefficients with the main colliding partners over the temperature range ∼10–1000 K. These rate coefficients are notoriously difficult to measure and expensive to compute. In particular, very few reliable collisional data exist for inelastic collisions involving reactive radicals or ions. In this Letter, we explore the use of a fast quantum statistical method to determine molecular collisional excitation rate coefficients. The method is benchmarked against accurate (but costly) rigid-rotor close-coupling calculations. For collisions proceeding through the formation of a strongly bound complex, the method is found to be highly satisfactory up to room temperature. Its accuracy decreases with decreasing potential well depth and with increasing temperature, as expected. This new method opens the way to the determination of accurate inelastic collisional data involving key reactive species such as H3+, H2O+, and H3O+ for which exact quantum calculations are currently not feasible.
NASA Astrophysics Data System (ADS)
Shimada, M.; Shimada, J.; Tsunashima, K.; Aoyama, C.
2017-12-01
Methane hydrate is anticipated to become an unconventional natural gas energy resource. Two types of methane hydrate deposits are known, distinguished by their setting: a "shallow" type and a "sand layer" type. The shallow type is considered advantageous because of its high purity and simpler exploration. However, few extraction techniques have been developed for it. Heating and depressurization are currently used to collect methane hydrate from sand layers, but these methods are still under examination and have not yet been implemented, largely because the extraction process consumes fossil fuel rather than natural energy. It is necessary to utilize natural energy instead of relying on fossil fuel, and sunlight is believed to be the most significant alternative. Solar power generation is the common way to harness sunlight, but converting solar energy to electricity and then back to heat causes substantial energy loss. A new method is therefore contrived to accelerate the decomposition of methane hydrate with direct sunlight delivered through optical fibers. The authors present the details of this new method for collecting methane hydrate with direct sunlight exposure.
Dameron, O; Gibaud, B; Morandi, X
2004-06-01
The human cerebral cortex anatomy describes the brain organization at the scale of gyri and sulci. It provides landmarks for neurosurgery as well as localization support for functional data analysis and inter-subject data comparison. Existing models of the cortex anatomy either rely on image labeling but fail to represent variability and structural properties, or rely on a conceptual model but miss the inner 3D nature and relations of anatomical structures. This study was therefore conducted to propose a model of sulco-gyral anatomy for the healthy human brain. We hypothesized that both numeric knowledge (i.e., image-based) and symbolic knowledge (i.e., concept-based) have to be represented and coordinated. In addition, the representation of this knowledge should be application-independent in order to be usable in various contexts. Therefore, we devised a symbolic model describing the specialization, composition and spatial organization of cortical anatomical structures. We also collected numeric knowledge, such as 3D models of shape and shape variation, about cortical anatomical structures. For each numeric piece of knowledge, a companion file describes the concept it refers to and the nature of the relationship. Demonstration software performs a mapping between the numeric and the symbolic aspects for browsing the knowledge base.
Concept for the fast modulation of light in amplitude and phase using analog tilt-mirror arrays
NASA Astrophysics Data System (ADS)
Roth, Matthias; Heber, Jörg; Janschek, Klaus
2017-02-01
The full complex, spatial modulation of light at high frame rates is essential for a variety of applications. In particular, emerging techniques applied to scattering media, such as Digital Optical Phase Conjugation and Wavefront Shaping, demand challenging performance parameters. They refer to imaging tasks inside biological media, whose characteristics concerning the transmission and reflection of scattered light may change over time within milliseconds. Thus, these methods call for frame rates in the kilohertz range. Existing solutions typically offer frame rates below 100 Hz, since they rely on liquid crystal spatial light modulators (SLMs). We propose a diffractive MEMS optical system for this application range. It relies on an analog, tilt-type micro mirror array (MMA) based on an established SLM technology whose standard application is grayscale amplitude control. The new MMA system design allows phase manipulation at high speed as well. The article studies properties of the appropriate optical setup by simulating the propagation of light. Relevant test patterns and sensitivity parameters of the system are analyzed. Our results illustrate the main opportunities of the concept, with particular focus on the tilt mirror technology. They indicate a promising path to realizing complex light modulation at frame rates above 1 kHz and resolutions well beyond 10,000 complex pixels.
Maximally Informative Stimuli and Tuning Curves for Sigmoidal Rate-Coding Neurons and Populations
NASA Astrophysics Data System (ADS)
McDonnell, Mark D.; Stocks, Nigel G.
2008-08-01
A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon’s mutual information and Fisher information, and the optimality of Jeffrey’s prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
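As a sketch of the kind of result involved (an illustration under the stated small-variability assumptions, not the paper's full derivation): matching Jeffreys' prior to the stimulus density gives the optimality condition

```latex
\sqrt{J(s)} \;\propto\; p(s),
\qquad
J(s) \;=\; \frac{f'(s)^{2}}{\sigma^{2}\!\big(f(s)\big)} ,
```

where \(f(s)\) is the tuning curve, \(p(s)\) the stimulus density and \(\sigma^{2}(f)\) the mean-variance relationship. For Poisson-like variability, \(\sigma^{2}(f)=f\), this integrates to \(f_{\mathrm{opt}}(s)=f_{\max}\,F(s)^{2}\), whereas constant variance gives \(f_{\mathrm{opt}}(s)=f_{\max}\,F(s)\), with \(F\) the stimulus cumulative distribution function — illustrating the stated dependence of the optimal curve on the mean-variance relationship.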
Moisture Forecast Bias Correction in GEOS DAS
NASA Technical Reports Server (NTRS)
Dee, D.
1999-01-01
Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
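A minimal sketch of a sequential bias estimate driven by observed-minus-forecast residuals (an illustrative scalar simplification; the operational GEOS DAS scheme is spatially varying and more elaborate):

```python
import numpy as np

def update_moisture_bias(b_prev, observed, forecast, gamma=0.1):
    """One step of a sequential (exponential-smoothing) forecast-bias estimate
    built from observed-minus-forecast residuals; gamma and the scalar form
    are illustrative simplifications."""
    omf = np.asarray(observed) - np.asarray(forecast)
    return b_prev + gamma * (omf.mean() - b_prev)

# usage: the running bias estimate is removed from the forecast before the analysis step
```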
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m
2010-04-15
This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.
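For concreteness, the superior-limit risk-sensitive average criterion referred to here is usually written as follows (standard form, stated as background with the usual notation):

```latex
J(x,\pi)
\;=\;
\limsup_{n\to\infty}\;
\frac{1}{\lambda\, n}\,
\log
\mathbb{E}^{\pi}_{x}\!\left[
\exp\!\Big(\lambda \sum_{t=0}^{n-1} C(X_t, A_t)\Big)
\right],
\qquad \lambda>0 ,
```

where \(C\) is the cost function and \(\lambda\) the risk sensitivity coefficient; the inferior-limit criterion replaces \(\limsup\) with \(\liminf\), and the note shows that their optimal values coincide on part of the state space.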
Witnessing entanglement without entanglement witness operators.
Pezzè, Luca; Li, Yan; Li, Weidong; Smerzi, Augusto
2016-10-11
Quantum mechanics predicts the existence of correlations between composite systems that, although puzzling to our physical intuition, enable technologies not accessible in a classical world. Notwithstanding, there is still no efficient general method to theoretically quantify and experimentally detect entanglement of many qubits. Here we propose to detect entanglement by measuring the statistical response of a quantum system to an arbitrary nonlocal parametric evolution. We witness entanglement without relying on the tomographic reconstruction of the quantum state, or the realization of witness operators. The protocol requires two collective settings for any number of parties and is robust against noise and decoherence occurring after the implementation of the parametric transformation. To illustrate its user friendliness we demonstrate multipartite entanglement in different experiments with ions and photons by analyzing published data on fidelity visibilities and variances of collective observables.
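One standard Fisher-information criterion in this spirit, stated as background (the paper's protocol is formulated more generally, in terms of fidelity visibilities and variances of collective observables):

```latex
F_Q\big[\rho, \hat{J}\big] \;>\; N
\quad\Longrightarrow\quad
\rho \ \text{is entangled},
\qquad
F_Q \;\geq\; \frac{\big|\partial_\theta \langle \hat{M}\rangle\big|^{2}}{(\Delta \hat{M})^{2}} ,
```

where \(\hat{J}\) is a collective local generator of the parametric evolution acting on \(N\) qubits and the second inequality lower-bounds the quantum Fisher information \(F_Q\) from the measured mean and variance of any observable \(\hat{M}\), i.e. from the statistical response to the transformation rather than from state tomography or witness operators.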
Identifying Neck and Back Pain in Administrative Data: Defining the right cohort
Siroka, Andrew M.; Shane, Andrea C.; Trafton, Jodie A.; Wagner, Todd H.
2017-01-01
Study design: We reviewed existing methods for identifying patients with neck and back pain in administrative data and compared these methods using data from the Department of Veterans Affairs. Objective: To answer the following questions: 1) what diagnosis codes should be used to identify patients with neck and back pain in administrative data; 2) because the majority of complaints are characterized as non-specific or mechanical, what diagnosis codes should be used to identify patients with non-specific or mechanical problems in administrative data; and 3) what procedure and surgical codes should be used to identify patients who have undergone a surgical procedure on the neck or back. Summary of background data: Musculoskeletal neck and back pain are pervasive problems, associated with chronic pain, disability, and high rates of healthcare utilization. Administrative data have been widely used in formative research, which has largely relied on the original work of Volinn, Cherkin, Deyo and Einstadter and the Back Pain Patient Outcomes Assessment Team first published in 1992. Significant variation in reports of incidence, prevalence, and morbidity associated with these problems may be due to non-standard or conflicting methods to define study cohorts. Methods: A literature review produced seven methods for identifying neck and back pain in administrative data. These code lists were used to search VA data for patients with back and neck problems, and to further categorize each case by spinal segment involved, as non-specific/mechanical, and as surgical or not. Results: There is considerable overlap in most algorithms. However, gaps remain. Conclusions: Gaps are evident in existing methods and a new framework to identify patients with neck and back pain in administrative data is proposed. PMID:22127268
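As an illustration of how such code lists are applied to administrative records, a hypothetical pandas sketch is shown below. The ICD-9 prefixes and toy records are illustrative only and are not a validated cohort definition; a real study should use one of the reviewed code lists.

```python
import pandas as pd

# Illustrative ICD-9-CM prefixes only -- not a validated neck/back pain code list.
NECK_BACK_PREFIXES = ("722", "723", "724")

visits = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "icd9": ["7242", "4019", "7231"],   # toy diagnosis records
})

# keep only visits whose diagnosis code starts with one of the cohort prefixes
cohort = visits[visits["icd9"].str.startswith(NECK_BACK_PREFIXES)]
print(cohort)
```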
Exploring the Impact of Early Decisions in Variable Ordering for Constraint Satisfaction Problems.
Ortiz-Bayliss, José Carlos; Amaya, Ivan; Conant-Pablos, Santiago Enrique; Terashima-Marín, Hugo
2018-01-01
When solving constraint satisfaction problems (CSPs), it is common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. However, this ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differences among the orderings produced by some commonly used heuristics and the way bad first decisions affect the search cost. One of the most important findings of this work confirms the paramount importance of first decisions. Another is the evidence that many of the existing variable ordering heuristics fail to appropriately select the first variable to instantiate. We propose a simple method to improve the early decisions of heuristics; by using it, the performance of the heuristics increases.
Antimatter Requirements and Energy Costs for Near-Term Propulsion Applications
NASA Technical Reports Server (NTRS)
Schmidt, G. R.; Gerrish, H. P.; Martin, J. J.; Smith, G. A.; Meyer, K. J.
1999-01-01
The superior energy density of antimatter annihilation has often been pointed to as the ultimate source of energy for propulsion. However, the limited capacity and very low efficiency of present-day antiproton production methods suggest that antimatter may be too costly to consider for near-term propulsion applications. We address this issue by assessing the antimatter requirements for six different types of propulsion concepts, including two in which antiprotons are used to drive energy release from combined fission/fusion. These requirements are compared against the capacity of both the current antimatter production infrastructure and the improved capabilities that could exist within the early part of next century. Results show that although it may be impractical to consider systems that rely on antimatter as the sole source of propulsive energy, the requirements for propulsion based on antimatter-assisted fission/fusion do fall within projected near-term production capabilities. In fact, a new facility designed solely for antiproton production but based on existing technology could feasibly support interstellar precursor missions and omniplanetary spaceflight with antimatter costs ranging up to $6.4 million per mission.
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model-independent and not restricted to any specific adaptive control method. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
Wang, Guoping; Ding, Xiong; Hu, Jiumei; Wu, Wenshuai; Sun, Jingjing; Mu, Ying
2017-10-24
Existing isothermal nucleic acid amplification (INAA) relying on the strand displacement activity of DNA polymerase usually requires at least two primers. However, in this paper, we report an unusual isothermal multimerization and amplification (UIMA) which needs only one primer and is efficiently initiated by strand-displacing DNA polymerases with reverse transcription activities. On electrophoresis, the products of UIMA present a cascade-shaped band and are confirmed to be multimeric DNAs with repeated target sequences. In contrast to current methods, UIMA produces multimeric DNA simply, because it does not depend on multiple primers or rolling-circle structures. In assays of synthesized single-stranded DNA targets, UIMA shows high sensitivity and specificity, as well as universality. In addition, a plausible mechanism of UIMA is proposed, involving short DNA bending, mismatch extension, and template slippage. UIMA offers a good explanation for why nonspecific amplification easily happens in existing INAAs. As the simplest INAA to date, UIMA provides new insight for deeply understanding INAA and opens a new avenue for thoroughly addressing nonspecific amplification.
Efficient micromagnetics for magnetic storage devices
NASA Astrophysics Data System (ADS)
Escobar Acevedo, Marco Antonio
Micromagnetics is an important component for advancing the understanding and design of magnetic nanostructures. Numerous existing and prospective magnetic devices rely on micromagnetic analysis; these include hard disk drives, magnetic sensors, memories, microwave generators, and magnetic logic. The ability to examine, describe, and predict the magnetic behavior and macroscopic properties of nanoscale magnetic systems is essential for improving existing devices, for progressing in their understanding, and for enabling new technologies. This dissertation describes efficient micromagnetic methods as required for magnetic storage analysis. Their performance and accuracy are demonstrated by studying realistic, complex, and relevant micromagnetic case studies. An efficient methodology for dynamic micromagnetics in large-scale simulations is used to study the writing process in a full-scale model of a magnetic write head. An efficient scheme, tailored for micromagnetics, to find the minimum energy state of a magnetic system is presented; this scheme can be used to calculate hysteresis loops. An efficient scheme, tailored for micromagnetics, to find the minimum energy path between two stable states of a magnetic system is also presented. This minimum energy path is intimately related to thermal stability.
Fail-safe reactivity compensation method for a nuclear reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nygaard, Erik T.; Angelo, Peter L.; Aase, Scott B.
The present invention relates generally to the field of compensation methods for nuclear reactors and, in particular, to a method for fail-safe reactivity compensation in solution-type nuclear reactors. In one embodiment, the fail-safe reactivity compensation method of the present invention augments other control methods for a nuclear reactor. In still another embodiment, the fail-safe reactivity compensation method of the present invention permits one to control a nuclear reaction in a nuclear reactor through a method that does not rely on moving components into or out of a reactor core, nor does the method of the present invention rely on the constant repositioning of control rods within a nuclear reactor in order to maintain a critical state.
NASA Astrophysics Data System (ADS)
Krishnan, Chethan; Maheshwari, Shubham; Bala Subramanian, P. N.
2017-08-01
We write down a Robin boundary term for general relativity. The construction relies on the Neumann result of arXiv:1605.01603 in an essential way. This is unlike in mechanics and (polynomial) field theory, where two formulations of the Robin problem exist: one with Dirichlet as the natural limiting case, and another with Neumann.
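For orientation, the familiar scalar-field analogue of the three boundary-value formulations is (an illustrative sketch, not the gravitational construction of the paper):

```latex
\text{Dirichlet: } \ \phi\big|_{\partial M} \ \text{fixed},
\qquad
\text{Neumann: } \ \partial_n \phi\big|_{\partial M} \ \text{fixed},
\qquad
\text{Robin: } \ \big(\partial_n \phi + \alpha\,\phi\big)\big|_{\partial M} \ \text{fixed},
```

with the Neumann condition recovered as \(\alpha \to 0\) and the Dirichlet condition as \(\alpha \to \infty\) (after rescaling the boundary data) — the two limiting formulations mentioned above.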
New Threat to National Security: Environmental Deterioration
1989-04-10
years of this person’s life; only scattered information exists about the middle span; only at age 42 did Earth begin to flower. Dinosaurs and the...of which rely primarily on fish for their protein and which have inadequate health facilities. There are many unanswered questions and additional
How to Desire Differently: Home Education as a Heterotopia
ERIC Educational Resources Information Center
Pattison, Harriet
2015-01-01
This article explores the co-existence of, and relationship between, alternative education in the form of home education and mainstream schooling. Home education is conceptually subordinate to schooling, relying on schooling for its status as alternative, but also being tied to schooling through the dominant discourse that forms our understandings…
Education of Blind Persons in Ethiopia.
ERIC Educational Resources Information Center
Maru, A. A.; Cook, M. J.
1990-01-01
The paper reviews the historical and cultural attitudes of Ethiopians toward blind children, the education of blind children, the special situation of orphaned blind children, limitations of existing educational models, and development of a new model that relies on elements of community-based rehabilitation and the employment of blind high school…
Evolution, Entropy, & Biological Information
ERIC Educational Resources Information Center
Peterson, Jacob
2014-01-01
A logical question to be expected from students: "How could life develop, that is, change, evolve from simple, primitive organisms into the complex forms existing today, while at the same time there is a generally observed decline and disorganization--the second law of thermodynamics?" The explanations in biology textbooks relied upon by…
77 FR 74484 - Proposed Data Collections Submitted for Public Comment and Recommendations
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... surveillance of school-associated homicides and suicides. The system relies on existing public records and... the United States died violent deaths due to suicide, homicide, and unintentional firearm injuries... suicide occurs in or around school, it becomes a matter of particularly intense public interest and...
The Universality of Role Systems.
ERIC Educational Resources Information Center
Davis, Philip W.
1994-01-01
Outlines a way of conceiving the area of language identified by case or grammatical relation that does not rely on the specification of universal inventory. The alternative proposes the existence of principles of intelligence, which in their operation in language, yield the language performance that is interpreted as ROLES. (Contains 80…
Prison Volunteers: Profiles, Motivations, Satisfaction
ERIC Educational Resources Information Center
Tewksbury, Richard; Dabney, Dean
2004-01-01
Large numbers of correctional institutions rely on volunteers to assist staff in various programs and tasks. At present there exists a paucity of literature describing these programs and/or subjecting them to systematic evaluation. The present study uses self-report data from a sample of active volunteers at a medium-security Southern prison to…
Effects of Organizational Trust on Organizational Learning and Creativity
ERIC Educational Resources Information Center
Jiang, Yi; Chen, Wen-Ke
2017-01-01
In the knowledge economy era, the competitive advantage of an enterprise is established on intangible resources and capability. Trust allows individuals acquiring and exchanging intellectual capitals, especially in ambiguous and uncertain situations, and knowledge exchange relies on the existence of trust. Different from past other industries,…
Appraising Administrative Operations: A Guide for Universities and Colleges.
ERIC Educational Resources Information Center
Griffin, Gerald; Burks, David R.
The guide describes for colleges and universities how to establish and conduct a program of continuous improvement of administrative operations. An objective appraisal-review process is described that relies on mobilizing the pool of managerial and analytical talent already existing at every institution. For decisionmakers, an overview of…
Duarte-Carvajalino, Julio M.; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe
2013-01-01
Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by DW-MR datasets itself. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depends on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis. PMID:23596381
Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe
2013-01-01
Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by DW-MR datasets itself. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depends on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis.
Theoretical Sum Frequency Generation Spectroscopy of Peptides
2015-01-01
Vibrational sum frequency generation (SFG) has become a very promising technique for the study of proteins at interfaces, and it has been applied to important systems such as anti-microbial peptides, ion channel proteins, and human islet amyloid polypeptide. Moreover, so-called “chiral” SFG techniques, which rely on polarization combinations that generate strong signals primarily for chiral molecules, have proven to be particularly discriminatory of protein secondary structure. In this work, we present a theoretical strategy for calculating protein amide I SFG spectra by combining line-shape theory with molecular dynamics simulations. We then apply this method to three model peptides, demonstrating the existence of a significant chiral SFG signal for peptides with chiral centers, and providing a framework for interpreting the results on the basis of the dependence of the SFG signal on the peptide orientation. We also examine the importance of dynamical and coupling effects. Finally, we suggest a simple method for determining a chromophore’s orientation relative to the surface using ratios of experimental heterodyne-detected signals with different polarizations, and test this method using theoretical spectra. PMID:25203677
Strategies for distributing cancer screening decision aids in primary care.
Brackett, Charles; Kearing, Stephen; Cochran, Nan; Tosteson, Anna N A; Blair Brooks, W
2010-02-01
Decision aids (DAs) have been shown to facilitate shared decision making about cancer screening. However, little data exist on optimal strategies for dissemination. Our objective was to compare different decision aid distribution models. Eligible patients received video decision aids for prostate cancer (PSA) or colon cancer screening (CRC) through 4 distribution methods. Outcome measures included DA loans (N), % of eligible patients receiving DA, and patient and provider satisfaction. Automatically mailing DAs to all age/gender appropriate patients led to near universal receipt by screening-eligible patients, but also led to ineligible patients receiving DAs. Three different elective (non-automatic) strategies led to low rates of receipt. Clinician satisfaction was higher when patients viewed the DA before the visit, and this model facilitated implementation of the screening choice. Regardless of timing or distribution method, patient satisfaction was high. An automatic DA distribution method is more effective than relying on individual initiative. Enabling patients to view the DA before the visit is preferred. Systematically offering DAs to all eligible patients before their appointments is the ideal strategy, but may be challenging to implement. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
Tight-binding analysis of Si and GaAs ultrathin bodies with subatomic wave-function resolution
NASA Astrophysics Data System (ADS)
Tan, Yaohua P.; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy B.; Klimeck, Gerhard
2015-08-01
Empirical tight-binding (ETB) methods are widely used in atomistic device simulations. Traditional ways of generating the ETB parameters rely on direct fitting to bulk experiments or theoretical electronic bands. However, ETB calculations based on existing parameters lead to unphysical results in ultrasmall structures like the As-terminated GaAs ultrathin bodies (UTBs). In this work, it is shown that more transferable ETB parameters with a short interaction range can be obtained by a process of mapping ab initio bands and wave functions to ETB models. This process enables the calibration of not only the ETB energy bands but also the ETB wave functions with corresponding ab initio calculations. Based on the mapping process, ETB models of Si and GaAs are parameterized with respect to hybrid functional calculations. Highly localized ETB basis functions are obtained. Both the ETB energy bands and wave functions with subatomic resolution of UTBs show good agreement with the corresponding hybrid functional calculations. The ETB methods can then be used to explain realistically extended devices in nonequilibrium that cannot be tackled with ab initio methods.
Determining the near-surface current profile from measurements of the wave dispersion relation
NASA Astrophysics Data System (ADS)
Smeltzer, Benjamin; Maxwell, Peter; Aesøy, Eirik; Ellingsen, Simen
2017-11-01
The current-induced Doppler shifts of waves can yield information about the background mean flow, providing an attractive method of inferring the current profile in the upper layer of the ocean. We present measurements of waves propagating on shear currents in a laboratory water channel, as well as theoretical investigations of inversion techniques for determining the vertical current structure. Spatial and temporal measurements of the free surface profile obtained using a synthetic Schlieren method are analyzed to determine the wave dispersion relation and Doppler shifts as a function of wavelength. The vertical current profile can then be inferred from the Doppler shifts using an inversion algorithm. Most existing algorithms rely on a priori assumptions of the shape of the current profile, and developing a method that uses less stringent assumptions is a focus of this study, allowing for measurement of more general current profiles. The accuracy of current inversion algorithms are evaluated by comparison to measurements of the mean flow profile from particle image velocimetry (PIV), and a discussion of the sensitivity to errors in the Doppler shifts is presented.
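A brief sketch of the relation typically underlying such inversions, stated as background (the deep-water weighted-average approximation commonly attributed to Stewart and Joy; not necessarily the exact formulation used in this study):

```latex
\tilde{c}(k) \;=\; c_0(k) \;+\; \tilde{U}(k),
\qquad
\tilde{U}(k) \;\approx\; 2k \int_{-\infty}^{0} U(z)\, e^{2kz}\, \mathrm{d}z ,
```

where \(c_0(k)\) is the quiescent-water phase speed and \(\tilde{U}(k)\) the measured Doppler-shift velocity at wavenumber \(k\). Inversion amounts to recovering the profile \(U(z)\) from \(\tilde{U}(k)\); many existing algorithms do so by positing a functional form for \(U(z)\), which is the assumption this study seeks to relax.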
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess reliability and uncertainty of obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, relatively few parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of MHS method on real earthquakes show that our method can capture major features of large earthquake rupture process, and provide information for more detailed rupture history analysis.
Spatial Copula Model for Imputing Traffic Flow Data from Remote Microwave Sensors
Ma, Xiaolei; Du, Bowen; Yu, Bin
2017-01-01
Issues of missing data have become increasingly serious with the rapid increase in the usage of traffic sensors. Analyses of the Beijing ring expressway have shown that up to 50% of microwave sensors have missing values. The imputation of missing traffic data is an urgent problem, although a precise solution is difficult to achieve because of the significant number of missing portions. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods rely only on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies because of their Gaussian assumption. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate the copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential for imputing missing data in large-scale transportation networks. PMID:28934164
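A minimal sketch of the general Gaussian-copula idea (illustrative only; the paper's spatial model and marginal choices are more elaborate): marginals are handled through empirical CDFs, dependence through a fitted correlation matrix, and imputation through the conditional mean in the Gaussian domain. Function names and the plug-in back-transform are illustrative.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(x):
    """Empirical-CDF (rank) transform of one sensor's history to normal scores."""
    return norm.ppf(rankdata(x) / (len(x) + 1.0))

def ecdf_scores(train, new):
    """Map new readings to normal scores through the training ECDF."""
    u = np.searchsorted(np.sort(train), new, side="right") / (len(train) + 1.0)
    return norm.ppf(np.clip(u, 1e-6, 1.0 - 1e-6))

def impute_target(X_train, x_new_others, target):
    """Impute the target sensor at new time stamps from neighbouring sensors.
    X_train: complete (time x sensors) history; x_new_others: new readings of the
    non-target sensors, shape (n_new, n_sensors - 1)."""
    n_sensors = X_train.shape[1]
    Z = np.column_stack([normal_scores(X_train[:, j]) for j in range(n_sensors)])
    R = np.corrcoef(Z, rowvar=False)                       # copula correlation matrix
    others = [j for j in range(n_sensors) if j != target]
    w = np.linalg.solve(R[np.ix_(others, others)], R[target, others])
    Z_new = np.column_stack([ecdf_scores(X_train[:, j], x_new_others[:, i])
                             for i, j in enumerate(others)])
    u_hat = norm.cdf(Z_new @ w)                            # conditional mean in z-space
    return np.quantile(X_train[:, target], u_hat)          # back to traffic-flow units
```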
To cut or not to cut? Assessing the modular structure of brain networks.
Chang, Yu-Teng; Pantazis, Dimitrios; Leahy, Richard M
2014-05-01
A wealth of methods has been developed to identify natural divisions of brain networks into groups or modules, with one of the most prominent being modularity. Compared with the popularity of methods to detect community structure, only a few methods exist to statistically control for spurious modules, relying almost exclusively on resampling techniques. It is well known that even random networks can exhibit high modularity because of incidental concentration of edges, even though they have no underlying organizational structure. Consequently, interpretation of community structure is confounded by the lack of principled and computationally tractable approaches to statistically control for spurious modules. In this paper we show that the modularity of random networks follows a transformed version of the Tracy-Widom distribution, providing for the first time a link between module detection and random matrix theory. We compute parametric formulas for the distribution of modularity for random networks as a function of network size and edge variance, and show that we can efficiently control for false positives in brain and other real-world networks. Copyright © 2014 Elsevier Inc. All rights reserved.
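For reference, the modularity being controlled for is the standard Newman-Girvan quality function (stated here as background, in the usual notation rather than necessarily the paper's):

```latex
Q \;=\; \frac{1}{2m}\sum_{ij}\left(A_{ij}-\frac{k_i k_j}{2m}\right)\delta(c_i,c_j),
```

where \(A\) is the adjacency matrix, \(k_i\) the node degrees, \(m\) the number of edges and \(c_i\) the module assignments. The result described above supplies a parametric null distribution for the maximum of \(Q\) over partitions of comparable random networks, replacing resampling-based significance tests.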
WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Niu, T
2015-06-15
Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies general knowledge of the relatively uniform CT number distribution within one tissue component. Image segmentation is applied to construct a template image where each structure is filled with the same CT number of that specific tissue. By subtracting the ideal template from the CT image, the residuals from various error sources are generated. Since the forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous and low-frequency signals in the line integral. The residual image is thus forward projected and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since the segmentation is not accurate on a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient and a CT angiography scan for carotid artery assessment. Compared to the one without correction, our method reduces the overall CT number error from >200 HU to <35 HU and increases the spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Unlike existing algorithms, our method is assisted only by general anatomical and physical information in CT imaging, without relying on prior knowledge. Our method is thus practical and attractive as a general solution to CT shading correction. This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
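A compact sketch of the iterative loop described above, assuming user-supplied `forward_project` (image to sinogram) and `fdk_reconstruct` (sinogram to image) operators from a CT toolbox; the two-class segmentation, filter window and iteration count are illustrative choices, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import savgol_filter

def iterative_shading_correction(img, forward_project, fdk_reconstruct,
                                 n_iter=5, window=81, order=3):
    """Iteratively estimate and remove low-frequency shading without prior images."""
    corrected = img.astype(float).copy()
    for _ in range(n_iter):
        # crude two-class template: soft tissue vs. air, each filled with its mean value
        tissue = corrected > -300.0
        template = np.where(tissue, corrected[tissue].mean(), -1000.0)
        residual = corrected - template              # shading plus anatomy mismatch
        sino = forward_project(residual)
        # keep only the smooth (low-frequency) part of the residual line integrals
        sino_lf = savgol_filter(sino, window_length=window, polyorder=order, axis=-1)
        compensation = -fdk_reconstruct(sino_lf)     # negative of the estimated error
        corrected = corrected + compensation         # apply the compensation map
    return corrected
```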
Improved spring model-based collaborative indoor visible light positioning
NASA Astrophysics Data System (ADS)
Luo, Zhijie; Zhang, WeiNan; Zhou, GuoFu
2016-06-01
Gaining accuracy in the indoor positioning of individuals is important, as many location-based services rely on the user's current position to provide useful services. Many researchers have studied indoor positioning techniques based on WiFi and Bluetooth. However, these have disadvantages such as low accuracy or high cost. In this paper, we propose an indoor positioning system in which visible light radiated from light-emitting diodes is used to locate the position of receivers. Compared with existing methods using light-emitting diode light, we present a high-precision, simply implemented collaborative indoor visible light positioning system based on an improved spring model. We first estimate coordinate position information using the visible light positioning system, and then use the spring model to correct positioning errors. The system can be employed easily because it does not require additional sensors, and it alleviates the occlusion problem of visible light. We also describe simulation experiments, which confirm the feasibility of our proposed method.
NASA Astrophysics Data System (ADS)
O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.
2013-01-01
We describe a method for determining an optimal centroid-moment tensor solution of an earthquake from a set of static displacements measured using a network of Global Positioning System receivers. Using static displacements observed after the 4 April 2010, MW 7.2 El Mayor-Cucapah, Mexico, earthquake, we perform an iterative inversion to obtain the source mechanism and location, which minimize the least-squares difference between data and synthetics. The efficiency of our algorithm for forward modeling static displacements in a layered elastic medium allows the inversion to be performed in real-time on a single processor without the need for precomputed libraries of excitation kernels; we present simulated real-time results for the El Mayor-Cucapah earthquake. The only a priori information that our inversion scheme needs is a crustal model and approximate source location, so the method proposed here may represent an improvement on existing early warning approaches that rely on foreknowledge of fault locations and geometries.
Link prediction based on local weighted paths for complex networks
NASA Astrophysics Data System (ADS)
Yao, Yabing; Zhang, Ruisheng; Yang, Fan; Yuan, Yongna; Hu, Rongjing; Zhao, Zhili
As a significant problem in complex networks, link prediction aims to find the missing and future links between two unconnected nodes by estimating the existence likelihood of potential links. It plays an important role in understanding the evolution mechanism of networks and has broad applications in practice. In order to improve prediction performance, a variety of structural similarity-based methods that rely on different topological features have been put forward. As one topological feature, the path information between node pairs is utilized to calculate the node similarity. However, many path-dependent methods neglect the different contributions of paths for a pair of nodes. In this paper, a local weighted path (LWP) index is proposed to differentiate the contributions between paths. The LWP index considers the effect of the link degrees of intermediate links and the connectivity influence of intermediate nodes on paths to quantify the path weight in the prediction procedure. The experimental results on 12 real-world networks show that the LWP index outperforms seven other prediction baselines.
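For context, the classical local-path baseline that path-weighted indices such as LWP refine can be sketched as follows; the degree- and connectivity-based weighting of the LWP index itself is not reproduced here.

```python
import numpy as np

def local_path_scores(A, eps=0.01):
    """Classical local-path similarity: count paths of length 2 plus eps-weighted
    paths of length 3 between every node pair (A is a binary adjacency matrix)."""
    A2 = A @ A
    A3 = A2 @ A
    S = A2 + eps * A3
    np.fill_diagonal(S, 0.0)          # ignore self-similarity
    return S                           # rank candidate links by descending score
```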
Convolutional networks for vehicle track segmentation
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2017-10-01
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple and fast models to label track pixels. These models, however, are unable to capture natural track features, such as continuity and parallelism. More powerful but computationally expensive models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3×3 convolutions to segment vehicle tracks. The design of our networks considers the fact that remote sensing applications tend to operate in low power and have limited training data. As a result, we aim for small and efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our six-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
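A minimal sketch of such a network (PyTorch, with illustrative layer widths and dilation rates; the paper's exact architecture and training setup are not reproduced):

```python
import torch
import torch.nn as nn

class TrackSegNet(nn.Module):
    """Small dilated convolutional network for per-pixel track segmentation,
    in the spirit of the six-layer 3x3 design described above."""
    def __init__(self, in_ch=1, width=32):
        super().__init__()
        layers = []
        ch = in_ch
        for d in (1, 1, 2, 4, 8):          # growing receptive field via dilation
            layers += [nn.Conv2d(ch, width, 3, padding=d, dilation=d), nn.ReLU()]
            ch = width
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]   # per-pixel track logit
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# usage: logits = TrackSegNet()(torch.randn(1, 1, 256, 256))
# train with nn.BCEWithLogitsLoss() against binary track masks
```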
Water and Sanitation Technology Citizen Needs Assestment in Kolorai Island
NASA Astrophysics Data System (ADS)
Pracastino Heston, Yudha; Rayi Ayuningtyas, Yonanda
2018-05-01
Kolorai is an island located in Pulau Morotai Regency and lies within the National Tourism Strategic Area (KSPN) of Morotai. It has a land area of 4.33 km² and a population of about 550 people. The residents rely on wells as a source of clean water and have been developing a piping system to distribute it, using a water reservoir located at each resident's house. There are 11 public toilets scattered across Kolorai Island. The emerging problems relate to the availability of energy for distribution and the adequacy of the clean water source quality. The research was conducted to identify the needed solutions in a participatory manner, with technology product support from the Ministry of Public Works and Housing, Research and Development Agency. The research used a qualitative approach with mixed methods. The result is a comprehensive solution to fulfill the island's clean water and sanitation needs.
Campbell, Kieran R; Yau, Christopher
2017-03-15
Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic, non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of the genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.
Software Safety Risk in Legacy Safety-Critical Computer Systems
NASA Technical Reports Server (NTRS)
Hill, Janice; Baggs, Rhoda
2007-01-01
Safety-critical computer systems must be engineered to meet system and software safety requirements. For legacy safety-critical computer systems, software safety requirements may not have been formally specified during development. When process-oriented software safety requirements are levied on a legacy system after the fact, where software development artifacts don't exist or are incomplete, the question becomes 'how can this be done?' The risks associated with only meeting certain software safety requirements in a legacy safety-critical computer system must be addressed should such systems be selected as candidates for reuse. This paper proposes a method for ascertaining formally, a software safety risk assessment, that provides measurements for software safety for legacy systems which may or may not have a suite of software engineering documentation that is now normally required. It relies upon the NASA Software Safety Standard, risk assessment methods based upon the Taxonomy-Based Questionnaire, and the application of reverse engineering CASE tools to produce original design documents for legacy systems.
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
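For reference, the underlying problem is the classical global minimum variance program, whose solution depends on the covariance matrix only through an estimate — hence the focus on robust covariance estimation (standard textbook form, stated as background):

```latex
\min_{\mathbf{w}}\ \mathbf{w}^{\top}\boldsymbol{\Sigma}\,\mathbf{w}
\quad\text{s.t.}\quad \mathbf{w}^{\top}\mathbf{1}=1,
\qquad\Longrightarrow\qquad
\mathbf{w}^{\star}=\frac{\boldsymbol{\Sigma}^{-1}\mathbf{1}}{\mathbf{1}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{1}} ,
```

where \(\boldsymbol{\Sigma}\) is replaced in practice by an estimator such as the robust shrinkage estimator studied above, and the realized risk \(\mathbf{w}^{\star\top}\boldsymbol{\Sigma}\,\mathbf{w}^{\star}\) is the quantity whose consistent estimate drives the online choice of shrinkage intensity.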
Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang
2013-01-01
Frequent rise of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, which are generally practiced by pharmaceutical industry, sometimes yield inferior or even flawed analysis results in extreme cases for interval-censored data. In this article, we examine the limitation of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but robust. Trial design issues involved with interval-censored data comprise another topic to be explored in this article. Unlike right-censored survival data, expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
NASA Astrophysics Data System (ADS)
Truebenbach, Alexandra E.; Darling, Jeremy
2017-06-01
A large fraction of active galactic nuclei (AGN) are 'invisible' in extant optical surveys due to either distance or dust-obscuration. The existence of this large population of dust-obscured, infrared (IR)-bright AGN is predicted by models of galaxy-supermassive black hole coevolution and is required to explain the observed X-ray and IR backgrounds. Recently, IR colour cuts with Wide-field Infrared Survey Explorer have identified a portion of this missing population. However, as the host galaxy brightness relative to that of the AGN increases, it becomes increasingly difficult to differentiate between IR emission originating from the AGN and from its host galaxy. As a solution, we have developed a new method to select obscured AGN using their 20-cm continuum emission to identify the objects as AGN. We created the resulting invisible AGN catalogue by selecting objects that are detected in AllWISE (mid-IR) and FIRST (20 cm), but are not detected in SDSS (optical) or 2MASS (near-IR), producing a final catalogue of 46 258 objects. 30 per cent of the objects are selected by existing selection methods, while the remaining 70 per cent represent a potential previously unidentified population of candidate AGN that are missed by mid-IR colour cuts. Additionally, by relying on a radio continuum detection, this technique is efficient at detecting radio-loud AGN at z ≥ 0.29, regardless of their level of dust obscuration or their host galaxy's relative brightness.
Marquet, Pierre; Longeray, Pierre-Henry; Barlesi, Fabrice; Ameye, Véronique; Augé, Pascale; Cazeneuve, Béatrice; Chatelut, Etienne; Diaz, Isabelle; Diviné, Marine; Froguel, Philippe; Goni, Sylvia; Gueyffier, François; Hoog-Labouret, Natalie; Mourah, Samia; Morin-Surroca, Michèle; Perche, Olivier; Perin-Dureau, Florent; Pigeon, Martine; Tisseau, Anne; Verstuyft, Céline
2015-01-01
Personalized medicine is based on: 1) improved clinical or non-clinical methods (including biomarkers) for a more discriminating and precise diagnosis of diseases; 2) targeted therapies, i.e. the choice of the best drug for each patient among those available; 3) dose adjustment methods to optimize the benefit-risk ratio of the drugs chosen; 4) biomarkers of efficacy, toxicity, treatment discontinuation, relapse, etc. Unfortunately, it is still too often a theoretical concept because of the lack of convenient diagnostic methods or treatments, particularly of drugs corresponding to each subtype of pathology, hence to each patient. Stratified medicine is a component of personalized medicine employing biomarkers and companion diagnostics to target the patients likely to present the best benefit-risk balance for a given active compound. The concept of targeted therapy, mostly used in cancer treatment, relies on the existence of a defined molecular target, involved or not in the pathological process, and/or on the existence of a biomarker able to identify the target population, which should logically be small compared to the population presenting the disease considered. Targeted therapies and biomarkers represent important stakes for the pharmaceutical industry, in terms of market access, return on investment, and image among prescribers. At the same time, they probably represent only the first generation of products resulting from the combination of clinical, pathophysiological and molecular research, i.e. of translational research. © 2015 Société Française de Pharmacologie et de Thérapeutique.
MCore: A High-Order Finite-Volume Dynamical Core for Atmospheric General Circulation Models
NASA Astrophysics Data System (ADS)
Ullrich, P.; Jablonowski, C.
2011-12-01
The desire for increasingly accurate predictions of the atmosphere has driven numerical models to smaller and smaller resolutions, while simultaneously driving up the cost of existing numerical models exponentially. Even with the modern rapid advancement of computational performance, it is estimated that it will take more than twenty years before existing models approach the scales needed to resolve atmospheric convection. However, smarter numerical methods may allow us to glimpse the types of results we would expect from these fine-scale simulations while requiring only a fraction of the computational cost. The next generation of atmospheric models will likely need to rely on both high-order accuracy and adaptive mesh refinement in order to properly capture features of interest. We present our ongoing research on developing a set of "smart" numerical methods for simulating the global non-hydrostatic fluid equations which govern atmospheric motions. We have harnessed a high-order finite-volume based approach in developing an atmospheric dynamical core on the cubed-sphere. This type of method is desirable for applications involving adaptive grids, since it has been shown that spuriously reflected wave modes are intrinsically damped out under this approach. The model further makes use of an implicit-explicit Runge-Kutta-Rosenbrock (IMEX-RKR) time integrator for accurate and efficient coupling of the horizontal and vertical model components. We survey the algorithmic development of the model and present results from idealized dynamical core test cases, as well as give a glimpse at future work with our model.
The use of virtual environments for percentage view analysis.
Schofield, Damian; Cox, Christopher J B
2005-09-01
It is recognised that Visual Impact Assessment (VIA), unlike many other aspects of Environmental Impact Assessments (EIA), relies less upon measurement than upon experience and judgement. Hence, a more structured and consistent approach towards VIA is necessary to reduce the amount of bias and subjectivity. For proposed developments, there are very few quantitative techniques for the evaluation of visibility, and these existing methods can be highly inaccurate and time consuming. Percentage view changes are one of the few quantitative techniques, and the use of computer technology can reduce the inaccuracy and the time spent evaluating the visibility of either existing or proposed developments. For over 10 years, research work undertaken by the authors at the University of Nottingham has employed Computer Graphics (CG) and Virtual Reality (VR) in civilian and industrial contexts for environmental planning, design visualisation, accident reconstruction, risk analysis, data visualisation and training simulators. This paper describes a method to quantitatively assess the visual impact of proposed developments on the landscape using CG techniques. This method allows the determination of accurate percentage view changes with the use of a computer-generated model of the environment and the application of specialist software that has been developed at the University of Nottingham. The principles are easy to understand and therefore planners, authorisation agencies and members of the public can use and understand the results. A case study is shown to demonstrate the application and the capabilities of the technology.
Identifying Causal Variants at Loci with Multiple Signals of Association
Hormozdiari, Farhad; Kostem, Emrah; Kang, Eun Yong; Pasaniuc, Bogdan; Eskin, Eleazar
2014-01-01
Although genome-wide association studies have successfully identified thousands of risk loci for complex traits, only a handful of the biologically causal variants, responsible for association at these loci, have been successfully identified. Current statistical methods for identifying causal variants at risk loci either use the strength of the association signal in an iterative conditioning framework or estimate probabilities for variants to be causal. A main drawback of existing methods is that they rely on the simplifying assumption of a single causal variant at each risk locus, which is typically invalid at many risk loci. In this work, we propose a new statistical framework that allows for the possibility of an arbitrary number of causal variants when estimating the posterior probability of a variant being causal. A direct benefit of our approach is that we predict a set of variants for each locus that under reasonable assumptions will contain all of the true causal variants with a high confidence level (e.g., 95%) even when the locus contains multiple causal variants. We use simulations to show that our approach provides 20–50% improvement in our ability to identify the causal variants compared to the existing methods at loci harboring multiple causal variants. We validate our approach using empirical data from an expression QTL study of CHI3L2 to identify new causal variants that affect gene expression at this locus. CAVIAR is publicly available online at http://genetics.cs.ucla.edu/caviar/. PMID:25104515
Identifying causal variants at loci with multiple signals of association.
Hormozdiari, Farhad; Kostem, Emrah; Kang, Eun Yong; Pasaniuc, Bogdan; Eskin, Eleazar
2014-10-01
Although genome-wide association studies have successfully identified thousands of risk loci for complex traits, only a handful of the biologically causal variants, responsible for association at these loci, have been successfully identified. Current statistical methods for identifying causal variants at risk loci either use the strength of the association signal in an iterative conditioning framework or estimate probabilities for variants to be causal. A main drawback of existing methods is that they rely on the simplifying assumption of a single causal variant at each risk locus, which is typically invalid at many risk loci. In this work, we propose a new statistical framework that allows for the possibility of an arbitrary number of causal variants when estimating the posterior probability of a variant being causal. A direct benefit of our approach is that we predict a set of variants for each locus that under reasonable assumptions will contain all of the true causal variants with a high confidence level (e.g., 95%) even when the locus contains multiple causal variants. We use simulations to show that our approach provides 20-50% improvement in our ability to identify the causal variants compared to the existing methods at loci harboring multiple causal variants. We validate our approach using empirical data from an expression QTL study of CHI3L2 to identify new causal variants that affect gene expression at this locus. CAVIAR is publicly available online at http://genetics.cs.ucla.edu/caviar/. Copyright © 2014 by the Genetics Society of America.
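A simplified Bayesian fine-mapping sketch in the spirit of the two CAVIAR abstracts above, not the authors' software: marginal z-scores are modelled as z | c ~ N(0, S + s^2 S C S), where S is the LD matrix and C = diag(c) marks a causal configuration; the prior variance s^2, the per-SNP prior, and the greedy PIP-based "confidence set" are illustrative simplifications.

```python
# Simplified fine-mapping sketch (not CAVIAR's implementation): enumerate configurations
# with up to max_causal causal variants, weight each by its marginal likelihood and prior,
# then report posterior inclusion probabilities and a greedy 95% causal set.
from itertools import combinations
import numpy as np
from scipy.stats import multivariate_normal

def fine_map(z, ld, max_causal=2, s2=25.0, prior=0.01):
    m = len(z)
    log_post, configs = [], []
    for k in range(max_causal + 1):
        for idx in combinations(range(m), k):
            c = np.zeros(m)
            c[list(idx)] = 1.0
            cov = ld + s2 * ld @ np.diag(c) @ ld          # z | c ~ N(0, S + s^2 S C S)
            ll = multivariate_normal.logpdf(z, mean=np.zeros(m), cov=cov,
                                            allow_singular=True)
            log_post.append(ll + k * np.log(prior) + (m - k) * np.log(1.0 - prior))
            configs.append(set(idx))
    log_post = np.array(log_post)
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    pip = np.array([sum(w[i] for i, cfg in enumerate(configs) if j in cfg)
                    for j in range(m)])
    chosen, covered = set(), 0.0
    for j in np.argsort(pip)[::-1]:                       # greedy, simplified rho = 0.95 set
        chosen.add(int(j))
        covered = sum(w[i] for i, cfg in enumerate(configs) if cfg <= chosen)
        if covered >= 0.95:
            break
    return pip, sorted(chosen)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = 8
    ld = 0.5 * np.eye(m) + 0.5 * np.ones((m, m))          # toy LD matrix
    true_lambda = np.zeros(m); true_lambda[2] = 5.0       # one causal variant, NCP = 5
    z = rng.multivariate_normal(ld @ true_lambda, ld)     # simulated marginal z-scores
    pip, conf_set = fine_map(z, ld)
    print("posterior inclusion probabilities:", np.round(pip, 3))
    print("95% causal set (simplified):", conf_set)
```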
Hayre, C M; Blackman, S; Carlton, K; Eyden, A
2018-02-01
Since the discovery of X-rays by Röntgen in 1895, lead (Pb) has been used to limit ionising radiation for both operators and patients due to its high density and high atomic number (Z = 82). This study explores the attitudes and perceptions of diagnostic radiographers applying Pb protection during general radiographic examinations, an area underexplored within contemporary radiographic environments. This paper presents findings from a wider ethnographic study undertaken in the United Kingdom (UK). Participant observation and semi-structured interviews were the methods of choice. Participant observation enabled the overt researcher to uncover whether Pb remained an essential tool for radiographers. Semi-structured interviews later supported or refuted the limited use of Pb protection by radiographers. These methods enabled the construction of original phenomena within the clinical environment. Two themes are discussed. Firstly, radiographers, underpinned by their own values and beliefs towards radiation risk, identify a dichotomy in applying Pb protection. The cessation of Pb may be linked to cultural myths, relying on the 'word of mouth' of peers and not on the existing evidence base. Secondly, radiographers acknowledge that protecting pregnant patients may be primarily a 'personal choice' in clinical environments, which can alter if a patient asks 'are you going to cover me up?' This paper concludes by affirming the complexities surrounding Pb protection in clinical environments. It is proposed that the use of Pb protection in general radiography may become increasingly fragmented in the future if radiographers continue to rely on cultural norms. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
Sterkers, Yvon; Varlet-Marie, Emmanuelle; Cassaing, Sophie; Brenier-Pinchart, Marie-Pierre; Brun, Sophie; Dalle, Frédéric; Delhaes, Laurence; Filisetti, Denis; Pelloux, Hervé; Yera, Hélène; Bastien, Patrick
2010-01-01
Although screening for maternal toxoplasmic seroconversion during pregnancy is based on immunodiagnostic assays, the diagnosis of clinically relevant toxoplasmosis greatly relies upon molecular methods. A problem is that this molecular diagnosis is subject to variation of performances, mainly due to a large diversity of PCR methods and primers and the lack of standardization. The present multicentric prospective study, involving eight laboratories proficient in the molecular prenatal diagnosis of toxoplasmosis, was a first step toward the harmonization of this diagnosis among university hospitals in France. Its aim was to compare the analytical performances of different PCR protocols used for Toxoplasma detection. Each center extracted the same concentrated Toxoplasma gondii suspension and tested serial dilutions of the DNA using its own assays. Differences in analytical sensitivities were observed between assays, particularly at low parasite concentrations (≤2 T. gondii genomes per reaction tube), with “performance scores” differing by a 20-fold factor among laboratories. Our data stress the fact that differences do exist in the performances of molecular assays in spite of expertise in the matter; we propose that laboratories work toward a detection threshold defined for a best sensitivity of this diagnosis. Moreover, on the one hand, intralaboratory comparisons confirmed previous studies showing that rep529 is a more adequate DNA target for this diagnosis than the widely used B1 gene. But, on the other hand, interlaboratory comparisons showed differences that appear independent of the target, primers, or technology and that hence rely essentially on proficiency and care in the optimization of PCR conditions. PMID:20610670
Giga, Noreen M.; Binakonsky, Jane; Ross, Craig; Siegel, Michael
2011-01-01
Background Flavored alcoholic beverages are popular among underage drinkers. Existing studies that assessed flavored alcoholic beverage use among youth relied upon respondents to correctly classify the beverages they consume, without defining what alcohol brands belong to this category. Objectives To demonstrate a new method for analyzing the consumption of flavored alcoholic beverages among youth on a brand-specific basis, without relying upon youth to correctly classify brands they consume. Methods Using a pre-recruited internet panel developed by Knowledge Networks, we measured the brands of alcohol consumed by a national sample of youth drinkers, ages 16-20 years, in the United States. The sample consisted of 108 youths who had consumed at least one drink of an alcoholic beverage in the past 30 days. We measured the brand-specific consumption of alcoholic beverages within the past 30 days, ascertaining the consumption of 380 alcohol brands, including 14 brands of flavored alcoholic beverages. Results Measuring the brand-specific consumption of flavored alcoholic beverages was feasible. Based on a brand-specific identification of flavored alcoholic beverages, nearly half of youth drinkers in the sample reported having consumed such beverages in the past 30 days. Flavored alcoholic beverage preference was concentrated among the top four brands, which accounted for nearly all of the consumption volume reported in our study. Conclusions and Scientific Significance These findings underscore the need to assess youth alcohol consumption at the brand level and the potential value of such data in better understanding underage youth drinking behavior and the factors that influence it. PMID:21517708
NASA Astrophysics Data System (ADS)
Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha
2017-11-01
Control charts are established as one of the most powerful tools in Statistical Process Control (SPC) and are widely used in industries. Conventional control charts rely on the normality assumption, which is not always the case for industrial data. This paper proposes a new S control chart for monitoring process dispersion using the skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S); skewness correction R chart (SC-R); weighted variance R chart (WV-R); weighted variance S chart (WV-S); and standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detection is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control performance (Type I error) at almost all skewness levels and sample sizes, n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than all the existing control charts for monitoring process dispersion in terms of both Type I error and the probability of detecting a shift.
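A hedged Monte Carlo sketch of the motivating problem, not the paper's SC-S construction: the false-alarm rate of a standard 3-sigma S chart is estimated for normal and for skewed (Weibull) in-control data, both scaled to unit standard deviation. Subgroup size, Weibull shape and replication count are illustrative.

```python
# Hedged sketch: estimated Type I error of a standard S chart under normal vs. Weibull
# in-control data. This does not implement the paper's skewness-corrected (SC-S) limits.
import numpy as np
from math import gamma, sqrt

def c4(n):                              # unbiasing constant for the sample std dev
    return sqrt(2.0 / (n - 1)) * gamma(n / 2.0) / gamma((n - 1) / 2.0)

def false_alarm_rate(sampler, n=5, sigma=1.0, n_subgroups=200000, seed=0):
    rng = np.random.default_rng(seed)
    x = sampler(rng, size=(n_subgroups, n))
    s = x.std(axis=1, ddof=1)
    ucl = c4(n) * sigma + 3.0 * sigma * sqrt(1.0 - c4(n) ** 2)
    lcl = max(0.0, c4(n) * sigma - 3.0 * sigma * sqrt(1.0 - c4(n) ** 2))
    return float(np.mean((s > ucl) | (s < lcl)))

if __name__ == "__main__":
    normal = lambda rng, size: rng.normal(0.0, 1.0, size)
    k = 1.5                                              # Weibull shape (skewed process)
    wb_sd = sqrt(gamma(1 + 2 / k) - gamma(1 + 1 / k) ** 2)
    weibull = lambda rng, size: rng.weibull(k, size) / wb_sd   # rescaled to unit std dev
    print("normal  false-alarm rate:", false_alarm_rate(normal))
    print("weibull false-alarm rate:", false_alarm_rate(weibull))
```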
Visualization and dissemination of global crustal models on virtual globes
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Pan, Xin; Sun, Jian-zhong
2016-05-01
Global crustal models, such as CRUST 5.1 and its descendants, are very useful in a broad range of geoscience applications. The current method for representing the existing global crustal models relies heavily on dedicated computer programs to read and work with those models. Therefore, it is not well suited to visualizing and disseminating global crustal information to non-geological users. This shortcoming is becoming obvious as more and more people from both academic and non-academic institutions are interested in understanding the structure and composition of the crust. There is a pressing need to provide a modern, universal and user-friendly method to represent and visualize the existing global crustal models. In this paper, we present a systematic framework to easily visualize and disseminate the global crustal structure on virtual globes. Based on crustal information exported from the existing global crustal models, we first create a variety of KML-formatted crustal models with different levels of detail (LODs). These KML-formatted models can then be loaded into a virtual globe for 3D visualization and model dissemination. A Keyhole Markup Language (KML) generator (Crust2KML) is developed to automatically convert crustal information obtained from the CRUST 1.0 model into KML-formatted global crustal models, and a web application (VisualCrust) is designed to disseminate and visualize those models over the Internet. The presented framework and associated implementations can be conveniently exported to other applications to support visualizing and analyzing the Earth's internal structure on both regional and global scales in a 3D virtual-globe environment.
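A minimal sketch of emitting one KML placemark that carries a crustal attribute as ExtendedData, using only the Python standard library. The KML element names follow the OGC KML 2.2 schema; the specific fields (latitude, longitude, crustal thickness) are illustrative and are not Crust2KML's actual output format.

```python
# Minimal sketch: write a single KML placemark with a crustal-thickness attribute.
# Element names follow KML 2.2; the data fields are illustrative placeholders.
import xml.etree.ElementTree as ET

def crust_cell_to_kml(lat: float, lon: float, thickness_km: float, path: str) -> None:
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = f"Crust cell ({lat:.1f}, {lon:.1f})"
    ext = ET.SubElement(pm, "ExtendedData")
    data = ET.SubElement(ext, "Data", name="crustal_thickness_km")
    ET.SubElement(data, "value").text = f"{thickness_km:.2f}"
    point = ET.SubElement(pm, "Point")
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"   # KML order: lon,lat,alt
    ET.ElementTree(kml).write(path, xml_declaration=True, encoding="UTF-8")

if __name__ == "__main__":
    crust_cell_to_kml(lat=45.5, lon=-110.5, thickness_km=41.3, path="crust_cell.kml")
```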
Myocardium tracking via matching distributions.
Ben Ayed, Ismail; Li, Shuo; Ross, Ian; Islam, Ali
2009-01-01
The goal of this study is to investigate automatic myocardium tracking in cardiac Magnetic Resonance (MR) sequences using global distribution matching via level-set curve evolution. Rather than relying on the pixelwise information as in existing approaches, distribution matching compares intensity distributions, and consequently, is well-suited to the myocardium tracking problem. Starting from a manual segmentation of the first frame, two curves are evolved in order to recover the endocardium (inner myocardium boundary) and the epicardium (outer myocardium boundary) in all the frames. For each curve, the evolution equation is sought following the maximization of a functional containing two terms: (1) a distribution matching term measuring the similarity between the non-parametric intensity distributions sampled from inside and outside the curve to the model distributions of the corresponding regions estimated from the previous frame; (2) a gradient term for smoothing the curve and biasing it toward high gradient of intensity. The Bhattacharyya coefficient is used as a similarity measure between distributions. The functional maximization is obtained by the Euler-Lagrange ascent equation of curve evolution, and efficiently implemented via level-set. The performance of the proposed distribution matching was quantitatively evaluated by comparisons with independent manual segmentations approved by an experienced cardiologist. The method was applied to ten 2D mid-cavity MR sequences corresponding to ten different subjects. Although neither shape prior knowledge nor curve coupling were used, quantitative evaluation demonstrated that the results were consistent with manual segmentations. The proposed method compares well with existing methods. The algorithm also yields satisfying reproducibility. Distribution matching leads to a myocardium tracking which is more flexible and applicable than existing methods because the algorithm uses only the current data, i.e., does not require training, and consequently, the solution is not bounded to some shape/intensity prior information learned from a finite training set.
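A hedged sketch of the similarity measure named in the abstract, the Bhattacharyya coefficient between two intensity distributions; the level-set curve evolution itself is not reproduced, and binary masks stand in for the regions inside and outside the curve.

```python
# Hedged sketch: Bhattacharyya coefficient between the intensity histograms of two
# regions of an image. Masks stand in for the regions inside/outside the evolving curve.
import numpy as np

def bhattacharyya(image: np.ndarray, mask_a: np.ndarray, mask_b: np.ndarray,
                  bins: int = 64) -> float:
    lo, hi = float(image.min()), float(image.max())
    pa, _ = np.histogram(image[mask_a], bins=bins, range=(lo, hi))
    pb, _ = np.histogram(image[mask_b], bins=bins, range=(lo, hi))
    pa, pb = pa / pa.sum(), pb / pb.sum()           # normalise to discrete distributions
    return float(np.sum(np.sqrt(pa * pb)))          # 1.0 = identical, 0.0 = disjoint

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(0.5, 0.1, (128, 128))
    img[40:90, 40:90] += 0.4                        # brighter "myocardium" patch
    inside = np.zeros_like(img, dtype=bool)
    inside[40:90, 40:90] = True
    print("BC(inside, outside):", round(bhattacharyya(img, inside, ~inside), 3))
```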
Creating a spatially-explicit index: a method for assessing the global wildfire-water risk
NASA Astrophysics Data System (ADS)
Robinne, François-Nicolas; Parisien, Marc-André; Flannigan, Mike; Miller, Carol; Bladon, Kevin D.
2017-04-01
The wildfire-water risk (WWR) has been defined as the potential for wildfires to adversely affect water resources that are important for downstream ecosystems and human water needs for adequate water quantity and quality, therefore compromising the security of their water supply. While tools and methods are numerous for watershed-scale risk analysis, the development of a toolbox for the large-scale evaluation of the wildfire risk to water security has only started recently. In order to provide managers and policy-makers with an adequate tool, we implemented a method for the spatial analysis of the global WWR based on the Driving forces-Pressures-States-Impacts-Responses (DPSIR) framework. This framework relies on the cause-and-effect relationships existing between the five categories of the DPSIR chain. As this approach heavily relies on data, we gathered an extensive set of spatial indicators relevant to fire-induced hydrological hazards and water consumption patterns by human and natural communities. When appropriate, we applied a hydrological routing function to our indicators in order to simulate downstream accumulation of potentially harmful material. Each indicator was then assigned a DPSIR category. We collapsed the information in each category using a principal component analysis in order to extract the most relevant pixel-based information provided by each spatial indicator. Finally, we compiled our five categories using an additive indexation process to produce a spatially-explicit index of the WWR. A thorough sensitivity analysis has been performed in order to understand the relationship between the final risk values and the spatial pattern of each category used during the indexation. For comparison purposes, we aggregated index scores by global hydrological regions, or hydrobelts, to get a sense of regional DPSIR specificities. This rather simple method does not necessitate the use of complex physical models and provides a scalable and efficient tool for the analysis of global water security issues.
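A hedged sketch of the indexation logic described above: indicators grouped by DPSIR category are each collapsed to their first principal component, rescaled to [0, 1], and summed into an additive index. The hydrological routing step and the real indicator set are out of scope; all data below are synthetic.

```python
# Hedged sketch of the DPSIR indexation only: per-category PCA collapse, min-max
# rescaling, then an additive index. Routing and real indicators are omitted.
import numpy as np
from sklearn.decomposition import PCA

def dpsir_index(categories: dict[str, np.ndarray]) -> np.ndarray:
    """categories maps a DPSIR category name to an (n_pixels, n_indicators) array."""
    scores = []
    for name, indicators in categories.items():
        pc1 = PCA(n_components=1).fit_transform(indicators).ravel()   # first principal component
        pc1 = (pc1 - pc1.min()) / (pc1.max() - pc1.min() + 1e-12)     # min-max rescale to [0, 1]
        scores.append(pc1)
    return np.sum(scores, axis=0)                                     # additive risk index

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cats = {c: rng.random((1000, 4)) for c in
            ["driving_forces", "pressures", "states", "impacts", "responses"]}
    risk = dpsir_index(cats)
    print("index range:", float(risk.min()), "to", float(risk.max()))
```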
Current state of ethics literature synthesis: a systematic review of reviews.
Mertz, Marcel; Kahrass, Hannes; Strech, Daniel
2016-10-03
Modern standards for evidence-based decision making in clinical care and public health still rely solely on eminence-based input when it comes to normative ethical considerations. Manuals for clinical guideline development or health technology assessment (HTA) do not explain how to search, analyze, and synthesize relevant normative information in a systematic and transparent manner. In the scientific literature, however, systematic or semi-systematic reviews of ethics literature already exist, and scholarly debate on their opportunities and limitations has recently bloomed. A systematic review was performed of all existing systematic or semi-systematic reviews for normative ethics literature on medical topics. The study further assessed how these reviews report on their methods for search, selection, analysis, and synthesis of ethics literature. We identified 84 reviews published between 1997 and 2015 in 65 different journals and demonstrated an increasing publication rate for this type of review. While most reviews reported on different aspects of search and selection methods, reporting was much less explicit for aspects of analysis and synthesis methods: 31 % did not fulfill any criteria related to the reporting of analysis methods; for example, only 25 % of the reviews reported the ethical approach needed to analyze and synthesize normative information. While reviews of ethics literature are increasingly published, their reporting quality for analysis and synthesis of normative information should be improved. Guiding questions are: What was the applied ethical approach and technical procedure for identifying and extracting the relevant normative information units? What method and procedure was employed for synthesizing normative information? Experts and stakeholders from bioethics, HTA, guideline development, health care professionals, and patient organizations should work together to further develop this area of evidence-based health care.
Cosmology with Gravitational Wave/Fast Radio Burst Associations
NASA Astrophysics Data System (ADS)
Wei, Jun-Jie; Wu, Xue-Feng; Gao, He
2018-06-01
Recently, some theoretical models predicted that a small fraction of fast radio bursts (FRBs) could be associated with gravitational waves (GWs). In this Letter, we discuss the possibility of using GW/FRB association systems, if they are commonly detected in the future, as a complementary cosmic probe. We propose that upgraded standard sirens can be constructed from the joint measurements of luminosity distances D_L derived from GWs and dispersion measures DM_IGM derived from FRBs (i.e., the combination D_L · DM_IGM). Moreover, unlike the traditional standard-siren approach (i.e., the D_L method) and the DM_IGM method that rely on the optimization of the Hubble constant H_0, this D_L · DM_IGM method has the advantage of being independent of H_0. Through Monte Carlo simulations, we prove that the D_L · DM_IGM method is more effective for constraining cosmological parameters than D_L or DM_IGM separately, and that it enables us to achieve accurate multimessenger cosmology from approximately 100 GW/FRB systems. Additionally, even if GW/FRB associations do not exist, the methodology developed here can still be applied to those GWs and FRBs that occur at the same redshifts.
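A short LaTeX sketch, under standard flat-ΛCDM scalings, of why the product D_L · DM_IGM drops the Hubble constant: D_L scales as 1/H_0 while the IGM dispersion measure scales as H_0. Constant prefactors (baryon density, IGM fraction, electron fraction) are absorbed into the proportionality.

```latex
% Minimal sketch (flat-LambdaCDM scalings; constant prefactors absorbed):
\begin{align}
  D_L(z) &= \frac{c\,(1+z)}{H_0}\int_0^z \frac{dz'}{E(z')}, &
  \mathrm{DM_{IGM}}(z) &\propto H_0 \int_0^z \frac{(1+z')\,f_e(z')}{E(z')}\,dz', \\
  D_L \cdot \mathrm{DM_{IGM}} &\propto c\,(1+z)
    \left[\int_0^z \frac{dz'}{E(z')}\right]
    \left[\int_0^z \frac{(1+z')\,f_e(z')}{E(z')}\,dz'\right], &
  E(z) &= \sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda},
\end{align}
% so the product depends on the density parameters through E(z) but not on H_0.
```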
One- and two-stage Arrhenius models for pharmaceutical shelf life prediction.
Fan, Zhewen; Zhang, Lanju
2015-01-01
One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years. Long-term stability studies are required to support such a shelf life claim at registration. However, during drug development, to facilitate formulation and dosage form selection, an accelerated stability study with stressed storage conditions is preferred to quickly obtain a good prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation that describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normality of the errors. In addition, shelf life projection is usually based on the confidence band of a regression line. However, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf life estimation based on accelerated stability testing, and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than nominal levels. Our bootstrap methods gave better coverage and led to a shelf life prediction closer to that based on long-term stability data.
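A hedged sketch of a two-stage-style Arrhenius extrapolation with a nonparametric bootstrap around the shelf-life estimate; the linear degradation model, the 95%-of-label-claim specification limit, the activation energy and the stress temperatures are illustrative choices, not the paper's simulation settings.

```python
# Hedged sketch: two-stage Arrhenius extrapolation plus a nonparametric bootstrap for
# shelf life. Model, spec limit and temperatures are illustrative, not the paper's setup.
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def fit_shelf_life(times, assays, temps_k, spec=95.0, t_store=298.15):
    """Stage 1: degradation rate per stress temperature; Stage 2: Arrhenius extrapolation."""
    rates = np.array([-np.polyfit(t, y, 1)[0] for t, y in zip(times, assays)])
    if np.any(rates <= 0):
        raise ValueError("non-positive estimated degradation rate")
    slope, intercept = np.polyfit(1.0 / temps_k, np.log(rates), 1)
    k_store = np.exp(intercept + slope / t_store)      # rate at the storage temperature
    return (100.0 - spec) / k_store                    # months until potency hits the spec

def bootstrap_shelf_life(times, assays, temps_k, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(n_boot):
        bt, by = [], []
        for t, y in zip(times, assays):                # resample within each temperature
            idx = rng.integers(0, len(t), len(t))
            bt.append(np.asarray(t)[idx])
            by.append(np.asarray(y)[idx])
        try:
            est.append(fit_shelf_life(bt, by, temps_k))
        except ValueError:
            continue
    return np.percentile(est, [5, 50, 95])

if __name__ == "__main__":
    temps_k = np.array([313.15, 323.15, 333.15])       # 40, 50, 60 degC stress conditions
    times = [np.array([0.0, 1.0, 2.0, 3.0, 6.0])] * 3  # months
    true_k = 0.3 * np.exp(83000.0 / R * (1.0 / 313.15 - 1.0 / temps_k))
    rng = np.random.default_rng(1)
    assays = [100.0 - k * t + rng.normal(0.0, 0.15, t.size) for k, t in zip(true_k, times)]
    print("bootstrap shelf life [5th, 50th, 95th percentile], months:",
          np.round(bootstrap_shelf_life(times, assays, temps_k), 1))
```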
Gamifying Video Object Segmentation.
Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela
2017-10-01
Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with human performance. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on the one hand, the capability of humans to correctly identify objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
Border preserving skin lesion segmentation
NASA Astrophysics Data System (ADS)
Kamali, Mostafa; Samei, Golnoosh
2008-03-01
Melanoma is a fatal cancer with a growing incidence rate. However, it can be cured if diagnosed in its early stages. The first step in detecting melanoma is the separation of the skin lesion from healthy skin. There are particular features associated with a malignant lesion whose successful detection relies upon accurately extracted borders. We propose a two-step approach. First, we apply the K-means clustering method (to the 3D RGB space), which extracts relatively accurate borders. In the second step we perform an extra refining step for detecting the fading area around some lesions as accurately as possible. Our method has a number of novelties. Firstly, as the clustering method is directly applied to the 3D color space, we do not overlook the dependencies between different color channels. In addition, it is capable of extracting fine lesion borders up to the pixel level in spite of the difficulties associated with fading areas around the lesion. Performing clustering in different color spaces reveals that the 3D RGB color space is preferred. The application of the proposed algorithm to an extensive database of skin lesions shows that its performance is superior to that of existing methods both in terms of accuracy and computational complexity.
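A hedged sketch of the first stage only, K-means clustering applied directly to the 3-D RGB space of a skin image, with the darker of two clusters taken as the lesion mask; the paper's second, border-refining stage is not reproduced, and the synthetic image is purely illustrative.

```python
# Hedged sketch of the first stage: K-means on raw RGB pixels, darker cluster = lesion.
import numpy as np
from sklearn.cluster import KMeans

def lesion_mask(image_rgb: np.ndarray, n_clusters: int = 2) -> np.ndarray:
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    lesion_cluster = np.argmin(km.cluster_centers_.sum(axis=1))   # darkest centroid
    return (km.labels_ == lesion_cluster).reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.full((80, 80, 3), 200.0) + rng.normal(0, 5, (80, 80, 3))   # synthetic "skin"
    img[25:55, 25:55] -= 120.0                                          # dark "lesion" patch
    mask = lesion_mask(img)
    print("lesion pixels:", int(mask.sum()))
```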
Statistical analysis of secondary particle distributions in relativistic nucleus-nucleus collisions
NASA Technical Reports Server (NTRS)
Mcguire, Stephen C.
1987-01-01
The use is described of several statistical techniques to characterize structure in the angular distributions of secondary particles from nucleus-nucleus collisions in the energy range 24 to 61 GeV/nucleon. The objective of this work was to determine whether there are correlations between emitted particle intensity and angle that may be used to support the existence of the quark gluon plasma. The techniques include chi-square null hypothesis tests, the method of discrete Fourier transform analysis, and fluctuation analysis. We have also used the method of composite unit vectors to test for azimuthal asymmetry in a data set of 63 JACEE-3 events. Each method is presented in a manner that provides the reader with some practical detail regarding its application. Of those events with relatively high statistics, Fe approaches 0 at 55 GeV/nucleon was found to possess an azimuthal distribution with a highly non-random structure. No evidence of non-statistical fluctuations was found in the pseudo-rapidity distributions of the events studied. It is seen that the most effective application of these methods relies upon the availability of many events or single events that possess very high multiplicities.
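A hedged illustration of one of the techniques listed above, a chi-square test of the null hypothesis that secondary-particle azimuthal angles are uniform; the synthetic events and the bin count are arbitrary choices, not data from the study.

```python
# Hedged sketch: chi-square test of azimuthal isotropy on synthetic particle angles.
import numpy as np
from scipy.stats import chisquare

def azimuthal_chi2(phi: np.ndarray, n_bins: int = 12):
    counts, _ = np.histogram(phi, bins=n_bins, range=(0.0, 2.0 * np.pi))
    return chisquare(counts)            # expected frequencies default to uniform

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    iso = rng.uniform(0, 2 * np.pi, 300)                     # isotropic synthetic event
    aniso = np.mod(rng.normal(np.pi, 0.8, 300), 2 * np.pi)   # azimuthally clustered event
    print("isotropic  :", azimuthal_chi2(iso))
    print("anisotropic:", azimuthal_chi2(aniso))
```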
Vortex flows in the solar chromosphere. I. Automatic detection method
NASA Astrophysics Data System (ADS)
Kato, Y.; Wedemeyer, S.
2017-05-01
Solar "magnetic tornadoes" are produced by rotating magnetic field structures that extend from the upper convection zone and the photosphere to the corona of the Sun. Recent studies show that these kinds of rotating features are an integral part of atmospheric dynamics and occur on a large range of spatial scales. A systematic statistical study of magnetic tornadoes is a necessary next step towards understanding their formation and their role in mass and energy transport in the solar atmosphere. For this purpose, we develop a new automatic detection method for chromospheric swirls, meaning the observable signature of solar tornadoes or, more generally, chromospheric vortex flows and rotating motions. Unlike existing studies that rely on visual inspections, our new method combines a line integral convolution (LIC) imaging technique and a scalar quantity that represents a vortex flow on a two-dimensional plane. We have tested two detection algorithms, based on the enhanced vorticity and vorticity strength quantities, by applying them to three-dimensional numerical simulations of the solar atmosphere with CO5BOLD. We conclude that the vorticity strength method is superior compared to the enhanced vorticity method in all aspects. Applying the method to a numerical simulation of the solar atmosphere reveals very abundant small-scale, short-lived chromospheric vortex flows that have not been found previously by visual inspection.
Harriet Martineau: Principal Economic Educator.
ERIC Educational Resources Information Center
O'Donnell, Margaret G.
Although she encountered criticism of her work, Harriet Martineau was the most widely read economics educator of 19th century Great Britain. Martineau wrote for the masses; she was convinced that it was each citizen's civic duty to learn economics. She relied on the body of knowledge which existed in her day: Mill's "Elements of Political…
Evaluating Quality Learning in Higher Education: Re-Examining the Evidence
ERIC Educational Resources Information Center
Lodge, Jason M.; Bonsanquet, Agnes
2014-01-01
The ways in which the value-added benefits of higher education are conceptualised and measured have come under increased scrutiny as universities become more accountable to their funding bodies in a difficult economic climate. Existing approaches for understanding quality learning often rely on measuring the subjective student experience or on…
What's App with that? Selecting Educational Apps for Young Children with Disabilities
ERIC Educational Resources Information Center
More, Cori M.; Travers, Jason C.
2013-01-01
Educational research will likely never be able to keep pace with technological innovation. It therefore will become increasingly important that early childhood professionals rely on existing knowledge to effectively evaluate and integrate emerging technology in the natural environment rather than waiting for a broad platform of research to inform…
China's Social Work Education in the Face of Change
ERIC Educational Resources Information Center
Fang, Yuan
2013-01-01
Between 1952 and 1979, social work was banned as an academic discipline, and social workers relied on experience alone in carrying out their duties. Since then social work training has been offered in universities and vocational schools; and existing social workers have received in-service training. However, social work education is still in its…
Geographic Disparity in Funding for School Nutrition Environments: Evidence from Mississippi Schools
ERIC Educational Resources Information Center
Chang, Yunhee; Carithers, Teresa; Leeke, Shannon; Chin, Felicia
2016-01-01
Background: Despite the federal initiatives on equitable provision of school nutrition programs, geographic disparity in childhood obesity persists. It may be partly because built-in school nutrition environments rely on each school's efficient use of existing operational funds or its ability to obtain expanded financial support. This study…
ERIC Educational Resources Information Center
Long, Nicole M.; Kahana, Michael J.
2017-01-01
Although episodic and semantic memory share overlapping neural mechanisms, it remains unclear how our pre-existing semantic associations modulate the formation of new, episodic associations. When freely recalling recently studied words, people rely on both episodic and semantic associations, shown through temporal and semantic clustering of…
Changing Relations between Learning and Work
ERIC Educational Resources Information Center
Jefferson, Anne L.; Levitan, Barbara
2006-01-01
The change in technology, in communications and in the dynamics of human and institutional interactions has created a need to shift relations between the world of learning and the world of work. New levers need to be added to those currently relied on. Consequently the learning-work relationship that exists between tertiary education institutions…
ERIC Educational Resources Information Center
Couch, Charlie D.
2011-01-01
The persistence patterns of student athletes continues to gain interest among the higher education community, particularly among private, faith-based institutions belonging to the NAIA who continue to rely on student athlete recruitment to optimize overall enrollment patterns. Unfortunately, few studies exist in the literature surrounding student…
The Challenge and the Opportunity of Lexical Inferencing in Language Minority Students
ERIC Educational Resources Information Center
Shahar-Yames, Daphna; Prior, Anat
2018-01-01
Lexical inferencing from text is a powerful tool for vocabulary and reading comprehension enhancement. Lexical inferencing relies on the pre-requisite skills of reading and existing vocabulary, and is also linked to non-verbal inferencing abilities and reading comprehension. In this study, we examined whether Fifth-grade Russian-speaking language…
Early longleaf pine seedling survivorship on hydric soils
Susan Cohen; Joan Walker
2006-01-01
We established a study to evaluate site preparation in restoring longleaf pine on poorly drained sites. Most existing longleaf pine stands occur on drier sites, and traditional approaches to restoring longleaf pine on wetter sites may rely on intensive practices that compromise the integrity of the ground layer vegetation. We applied silvicultural treatments to improve...
Acquisition of German Pluralization Rules in Monolingual and Multilingual Children
ERIC Educational Resources Information Center
Zaretsky, Eugen; Lange, Benjamin P.; Euler, Harald A.; Neumann, Katrin
2013-01-01
Existing studies on plural acquisition in German have relied on small samples and thus hardly deliver generalizable and differentiated results. Here, overgeneralizations of certain plural allomorphs and other tendencies in the acquisition of German plural markers are described on the basis of test data from 7,394 3- to 5-year-old monolingual…
Cities are human phenomena born of the need for economic, social and spiritual interactions among people. Early cities relied on solar energy for their support and as a result growth was often constrained by the local availability of energy and materials. Modern cities can exist ...
76 FR 9375 - Proposed Extension of Existing Information Collection; Sealing of Abandoned Areas
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-17
... prevent potentially explosive or toxic gases from migrating into the active working areas of underground... behind the seal must be monitored to prevent methane from reaching the explosive range. Miners rely on... used; Enhance the quality, utility, and clarity of the information to be collected; and Minimize the...
Education, Life Expectancy and Family Bargaining: The Ben-Porath Effect Revisited
ERIC Educational Resources Information Center
Leker, Laura; Ponthiere, Gregory
2015-01-01
Following Ben-Porath [1967. "The Production of Human Capital and the Life-Cycle of Earnings." "Journal of Political Economy" 75 (3): 352-365], the influence of life expectancy on education and on human capital has attracted much attention among growth theorists. Whereas existing growth models rely on an education decision made…
Adjuncts in Social Work Programs: Good Practice or Unethical?
ERIC Educational Resources Information Center
Pearlman, Catherine A.
2013-01-01
Social work education programs rely heavily on adjunct instructors, as do most academic institutions. This article adds to existing literature on adjuncts by focusing on the unique issues in social work education, using social work values and ethics as a focus. The benefits and detriments for adjuncts, programs, and students in schools of social…
21 CFR 13.25 - Disclosure of data and information by the participants.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Division of Dockets Management— (1) The relevant portions of the existing administrative record of the... other documentary information relied on; and (5) A signed statement that, to the best of the director's... Management all information specified in paragraph (a)(2) through (5) of this section and any objections that...
Effects of chemicals and pathway inhibitors on a human in vitro model of secondary palatal fusion.
The mechanisms of tissue and organ formation during embryonic development are unique, but many tissues like the iris, urethra, heart, neural tube, and palate rely upon common cellular and tissue events including tissue fusion. Few human in vitro assays exist to study human embryo...
Reliable Record Matching for a College Admissions System.
ERIC Educational Resources Information Center
Fitt, Paul D.
Prospective student data, supplied by various national college testing and student search services, can be matched with existing student records in a college admissions database. Instead of relying on one unique record identifier, such as the student's social security number, a technique has been developed that is based on a number of common data…
Light transmittance following midstory removal in a riparian hardwood forest
Bradford J. Ostrom; Edward F. Loewenstein
2006-01-01
Midstory cover may negatively affect the growth of desirable oak reproduction. Where such cover exists, midstory control may be warranted prior to a regeneration harvest so that species that rely on large advance reproduction for regeneration can become established and grow into a more competitive position before overstory removal. Unfortunately, how midstory removals...
Evidence from lattice data for a new particle on the worldsheet of the QCD flux tube.
Dubovsky, Sergei; Flauger, Raphael; Gorbenko, Victor
2013-08-09
We propose a new approach for the calculation of the spectrum of excitations of QCD flux tubes. It relies on the fact that the worldsheet theory is integrable at low energies. With this approach, energy levels can be calculated for much shorter flux tubes than was previously possible, allowing for a quantitative comparison with existing lattice data. The improved theoretical control makes it manifest that existing lattice data provides strong evidence for a new pseudoscalar particle localized on the QCD flux tube--the worldsheet axion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avonto, Cristina; Chittiboyina, Amar G.; Rua, Diego
2015-12-01
Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, ‘HTS-DCYA assay’, is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues, oxidative side reactions and increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity, 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. - Highlights: • A novel fluorescence-based method to detect electrophilic sensitizers is proposed. • A model fluorescent thiol was used to directly quantify the reaction products. • A discussion of the reaction workflow and critical parameters is presented. • The method could provide a useful tool to complement existing chemical assays.
Kirchner, Sebastian; Fothergill, Joanne L; Wright, Elli A; James, Chloe E; Mowat, Eilidh; Winstanley, Craig
2012-06-05
There is growing concern about the relevance of in vitro antimicrobial susceptibility tests when applied to isolates of P. aeruginosa from cystic fibrosis (CF) patients. Existing methods rely on single or a few isolates grown aerobically and planktonically. Predetermined cut-offs are used to define whether the bacteria are sensitive or resistant to any given antibiotic. However, during chronic lung infections in CF, P. aeruginosa populations exist in biofilms and there is evidence that the environment is largely microaerophilic. The stark difference in conditions between bacteria in the lung and those during diagnostic testing has called into question the reliability and even relevance of these tests. Artificial sputum medium (ASM) is a culture medium containing the components of CF patient sputum, including amino acids, mucin and free DNA. P. aeruginosa growth in ASM mimics growth during CF infections, with the formation of self-aggregating biofilm structures and population divergence. The aim of this study was to develop a microtitre-plate assay to study antimicrobial susceptibility of P. aeruginosa based on growth in ASM, which is applicable to both microaerophilic and aerobic conditions. An ASM assay was developed in a microtitre plate format. P. aeruginosa biofilms were allowed to develop for 3 days prior to incubation with antimicrobial agents at different concentrations for 24 hours. After biofilm disruption, cell viability was measured by staining with resazurin. This assay was used to ascertain the sessile cell minimum inhibitory concentration (SMIC) of tobramycin for 15 different P. aeruginosa isolates under aerobic and microaerophilic conditions, and SMIC values were compared to those obtained with standard broth growth. Whilst there was some evidence for increased MIC values for isolates grown in ASM when compared to their planktonic counterparts, the biggest differences were found with bacteria tested in microaerophilic conditions, which showed greatly increased resistance, up to >128-fold, towards tobramycin in the ASM system when compared to assays carried out in aerobic conditions. The lack of association between current susceptibility testing methods and clinical outcome has called the validity of current methods into question. Several in vitro models have been used previously to study P. aeruginosa biofilms. However, these methods rely on surface-attached biofilms, whereas the ASM biofilms resemble those observed in the CF lung. In addition, reduced oxygen concentration in the mucus has been shown to alter the behavior of P. aeruginosa and affect antibiotic susceptibility. Therefore, using ASM under microaerophilic conditions may provide a more realistic environment in which to study antimicrobial susceptibility.
Microencapsulation and Electrostatic Processing Method
NASA Technical Reports Server (NTRS)
Morrison, Dennis R. (Inventor); Mosier, Benjamin (Inventor)
2000-01-01
Methods are provided for forming spherical multilamellar microcapsules having alternating hydrophilic and hydrophobic liquid layers, surrounded by flexible, semi-permeable hydrophobic or hydrophilic outer membranes which can be tailored specifically to control the diffusion rate. The methods of the invention rely on low shear mixing and liquid-liquid diffusion process and are particularly well suited for forming microcapsules containing both hydrophilic and hydrophobic drugs. These methods can be carried out in the absence of gravity and do not rely on density-driven phase separation, mechanical mixing or solvent evaporation phases. The methods include the process of forming, washing and filtering microcapsules. In addition, the methods contemplate coating microcapsules with ancillary coatings using an electrostatic field and free fluid electrophoresis of the microcapsules. The microcapsules produced by such methods are particularly useful in the delivery of pharmaceutical compositions.
Traffic Sign Detection System for Locating Road Intersections and Roundabouts: The Chilean Case.
Villalón-Sepúlveda, Gabriel; Torres-Torriti, Miguel; Flores-Calero, Marco
2017-05-25
This paper presents a traffic sign detection method for signs close to road intersections and roundabouts, such as stop and yield (give way) signs. The proposed method relies on statistical templates built using color information for both segmentation and classification. The segmentation method uses the RGB-normalized (ErEgEb) color space for ROIs (Regions of Interest) generation based on a chromaticity filter, where templates at 10 scales are applied to the entire image. Templates consider the mean and standard deviation of normalized color of the traffic signs to build thresholding intervals where the expected color should lie for a given sign. The classification stage employs the information of the statistical templates over YCbCr and ErEgEb color spaces, for which the background has been previously removed by using a probability function that models the probability that the pixel corresponds to a sign given its chromaticity values. This work includes an analysis of the detection rate as a function of the distance between the vehicle and the sign. Such information is useful to validate the robustness of the approach and is often not included in the existing literature. The detection rates, as a function of distance, are compared to those of the well-known Viola-Jones method. The results show that for distances less than 48 m, the proposed method achieves a detection rate of 87.5 % and 95.4 % for yield and stop signs, respectively. For distances less than 30 m, the detection rate is 100 % for both signs. The Viola-Jones approach has detection rates below 20 % for distances between 30 and 48 m, and barely improves in the 20-30 m range with detection rates of up to 60 % . Thus, the proposed method provides a robust alternative for intersection detection that relies on statistical color-based templates instead of shape information. The experiments employed videos of traffic signs taken in several streets of Santiago, Chile, using a research platform implemented at the Robotics and Automation Laboratory of PUC to develop driver assistance systems.
Traffic Sign Detection System for Locating Road Intersections and Roundabouts: The Chilean Case
Villalón-Sepúlveda, Gabriel; Torres-Torriti, Miguel; Flores-Calero, Marco
2017-01-01
This paper presents a traffic sign detection method for signs close to road intersections and roundabouts, such as stop and yield (give way) signs. The proposed method relies on statistical templates built using color information for both segmentation and classification. The segmentation method uses the RGB-normalized (ErEgEb) color space for ROIs (Regions of Interest) generation based on a chromaticity filter, where templates at 10 scales are applied to the entire image. Templates consider the mean and standard deviation of normalized color of the traffic signs to build thresholding intervals where the expected color should lie for a given sign. The classification stage employs the information of the statistical templates over YCbCr and ErEgEb color spaces, for which the background has been previously removed by using a probability function that models the probability that the pixel corresponds to a sign given its chromaticity values. This work includes an analysis of the detection rate as a function of the distance between the vehicle and the sign. Such information is useful to validate the robustness of the approach and is often not included in the existing literature. The detection rates, as a function of distance, are compared to those of the well-known Viola–Jones method. The results show that for distances less than 48 m, the proposed method achieves a detection rate of 87.5% and 95.4% for yield and stop signs, respectively. For distances less than 30 m, the detection rate is 100% for both signs. The Viola–Jones approach has detection rates below 20% for distances between 30 and 48 m, and barely improves in the 20–30 m range with detection rates of up to 60%. Thus, the proposed method provides a robust alternative for intersection detection that relies on statistical color-based templates instead of shape information. The experiments employed videos of traffic signs taken in several streets of Santiago, Chile, using a research platform implemented at the Robotics and Automation Laboratory of PUC to develop driver assistance systems. PMID:28587071
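A hedged sketch of the RGB-normalized chromaticity filter described in the two records above: a pixel is kept when its normalized colour falls inside mean ± k·std intervals taken from a sign template. The template statistics below are placeholders, not the paper's calibrated values.

```python
# Hedged sketch of the chromaticity filter; template mean/std values are placeholders.
import numpy as np

def chromaticity(image_rgb: np.ndarray) -> np.ndarray:
    rgb = image_rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True) + 1e-9
    return rgb / total                      # (Er, Eg, Eb), each channel in [0, 1]

def template_mask(image_rgb: np.ndarray, mean: np.ndarray, std: np.ndarray,
                  k: float = 2.0) -> np.ndarray:
    c = chromaticity(image_rgb)
    low, high = mean - k * std, mean + k * std
    return np.all((c >= low) & (c <= high), axis=-1)

if __name__ == "__main__":
    # Placeholder "red sign" template: red-dominant chromaticity with a small spread.
    mean = np.array([0.55, 0.25, 0.20])
    std = np.array([0.05, 0.04, 0.04])
    rng = np.random.default_rng(0)
    img = rng.integers(0, 255, (120, 160, 3))
    img[30:60, 40:80] = [180, 70, 60]       # synthetic red patch
    print("candidate pixels:", int(template_mask(img, mean, std).sum()))
```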
Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob
2016-08-01
In many areas of science it is customary to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or sliding windows. However, it is not straightforward to choose grouping criteria, and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. The validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on the evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregating p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
2016-09-08
10.1118/1.4935531. A new radiation detection method relies on high-energy current (HEC) formed by secondary charged particles in the detector material... Keywords: photocurrent, radiation detection, self-powered, thin-film. A self-powered thin-film radiation detector using intrinsic... Program, Lowell, MA 01854. Purpose: We introduce a radiation detection method that relies on high-energy current (HEC) formed by secondary charged particles...
Air Force Maintenance Technician Performance Measurement.
1979-12-28
R G A N I Z A T IO N N A M E A N D A D R S A R E A P HO R U I T N U M B E R AFIT STUDENT AT: Arizona State Univ II. CONTROLLING OFFICE NAME AND...inflated, or provide incomplete and non -current coverage of maintenance organizations. The performance aopraisal method developed relies on subjective...highly inflated, or provided incomplete and non -current coverage of maintenance organizations. The performance appraisal method developed relied on
Witnessing entanglement without entanglement witness operators
Pezzè, Luca; Li, Yan; Li, Weidong; Smerzi, Augusto
2016-01-01
Quantum mechanics predicts the existence of correlations between composite systems that, although puzzling to our physical intuition, enable technologies not accessible in a classical world. Notwithstanding, there is still no efficient general method to theoretically quantify and experimentally detect the entanglement of many qubits. Here we propose to detect entanglement by measuring the statistical response of a quantum system to an arbitrary nonlocal parametric evolution. We witness entanglement without relying on the tomographic reconstruction of the quantum state or the realization of witness operators. The protocol requires two collective settings for any number of parties and is robust against noise and decoherence occurring after the implementation of the parametric transformation. To illustrate its user-friendliness, we demonstrate multipartite entanglement in different experiments with ions and photons by analyzing published data on fidelity visibilities and variances of collective observables. PMID:27681625
A perspective on future directions in aerospace propulsion system simulation
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.
1989-01-01
The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to do interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability and weight before committing to building hardware.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low variance that is constant in time, and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
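As a hedged illustration of the likelihood-ratio (score-based) sensitivity idea, the following toy sketch uses a static exponential model where the sensitivity of the mean is known analytically; the stochastic dynamics, covariance/Fisher-information structure, and parameter-screening machinery described in the abstract are not reproduced.

```python
# Toy sketch of a centered likelihood-ratio sensitivity estimator.
# Static example with X ~ Exponential(rate=theta) and observable f(X) = X,
# so d/dtheta E[X] = -1/theta**2 can be checked analytically.
import numpy as np

theta = 2.0
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0 / theta, size=200_000)

f = x                                   # observable
score = 1.0 / theta - x                 # d/dtheta log p_theta(x) for Exp(theta)

# Centered estimator: sample covariance between observable and score
sens = np.mean((f - f.mean()) * (score - score.mean()))

print(sens, -1.0 / theta**2)            # both close to -0.25
```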
Comparison of 2c- and 3cLIF droplet temperature imaging
NASA Astrophysics Data System (ADS)
Palmer, Johannes; Reddemann, Manuel A.; Kirsch, Valeri; Kneer, Reinhold
2018-06-01
This work presents "pulsed 2D-3cLIF-EET" as a measurement setup for micro-droplet internal temperature imaging. The setup relies on a third color channel that allows correcting spatially changing energy transfer rates between the two applied fluorescent dyes. First measurement results are compared with results of two slightly different versions of the recent "pulsed 2D-2cLIF-EET" method. Results reveal a higher temperature measurement accuracy for the recent 2cLIF setup. Average droplet temperature is determined by the 2cLIF setup with an uncertainty of less than 1 K and a spatial deviation of about 3.7 K. The new 3cLIF approach would become competitive if the existing droplet-size dependency were accounted for by an additional calibration and if the processing algorithm treated spatial measurement errors more appropriately.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Ancient Cosmology, superfine structure of the Universe and Anthropological Principle
NASA Astrophysics Data System (ADS)
Arakelyan, Hrant; Vardanyan, Susan
2015-07-01
Modern cosmology, in its spirit and its conception of the Big Bang, is closer to ancient cosmology than to the cosmological paradigm of the nineteenth century. Repeating the speculations of the ancients, but at the same time using subtle mathematical methods and relying on steadily accumulating empirical material, the modern theory tends towards a quantitative description of nature in which the numerical ratios between the physical constants play an increasing role. Detailed analysis of the influence of the numerical values of physical quantities on the physical state of the universe has revealed remarkable relations called fine and hyperfine tuning. To explain why the observable universe corresponds to a particular set of interrelated fundamental parameters, a speculative anthropic principle was proposed, which focuses on the existence of sentient beings.
Gaps in affiliation indexing in Scopus and PubMed
Schmidt, Cynthia M.; Cox, Roxanne; Fial, Alissa V.; Hartman, Teresa L.; Magee, Martha L.
2016-01-01
Objective The authors sought to determine whether unexpected gaps existed in Scopus's author affiliation indexing of publications written by the University of Nebraska Medical Center or Nebraska Medicine (UNMC/NM) authors during 2014. Methods First, we compared Scopus affiliation identifier search results to PubMed affiliation keyword search results. Then, we searched Scopus using affiliation keywords (UNMC, etc.) and compared the results to PubMed affiliation keyword and Scopus affiliation identifier searches. Results We found that Scopus's records for approximately 7% of UNMC/NM authors' publications lacked appropriate UNMC/NM author affiliation identifiers, and many journals' publishers were supplying incomplete author affiliation information to PubMed. Conclusions Institutions relying on Scopus to track their impact should determine whether Scopus's affiliation identifiers will, in fact, identify all articles published by their authors and investigators. PMID:27076801
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
The Highly Adaptive Lasso Estimator
Benkeser, David; van der Laan, Mark
2017-01-01
Estimation of a regression function is a common goal of statistical learning. We propose a novel nonparametric regression estimator that, in contrast to many existing methods, does not rely on local smoothness assumptions, nor is it constructed using local smoothing techniques. Instead, our estimator respects global smoothness constraints by virtue of falling in a class of right-hand continuous functions with left-hand limits that have variation norm bounded by a constant. Using empirical process theory, we establish a fast minimal rate of convergence of our proposed estimator and illustrate how such an estimator can be constructed using standard software. In simulations, we show that the finite-sample performance of our estimator is competitive with other popular machine learning techniques across a variety of data generating mechanisms. We also illustrate competitive performance in real data examples using several publicly available data sets. PMID:29094111
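One common description of this estimator is an L1-penalized fit over indicator ("zero-order spline") basis functions placed at the observed covariate values. Assuming that reading, the sketch below shows a one-dimensional version with off-the-shelf software; the basis construction and penalty selection are simplifications, not the authors' exact procedure.

```python
# Rough one-dimensional sketch of a highly adaptive lasso-style fit:
# indicator basis functions at the observed covariate values, with an L1
# penalty controlling the variation norm. Basis choice and penalty selection
# here are simplifications, not the authors' exact construction.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, size=300)
y = np.sin(2 * x) + (x > 0.5) + rng.normal(scale=0.3, size=x.size)

# Indicator basis: one column per observed knot, I(x >= knot)
knots = np.sort(x)
X_basis = (x[:, None] >= knots[None, :]).astype(float)

model = LassoCV(cv=5).fit(X_basis, y)        # cross-validated L1 penalty
fitted = model.predict(X_basis)
print("nonzero basis functions:", np.sum(model.coef_ != 0))
```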
Chen, I-Min A; Markowitz, Victor M; Palaniappan, Krishna; Szeto, Ernest; Chu, Ken; Huang, Jinghua; Ratner, Anna; Pillay, Manoj; Hadjithomas, Michalis; Huntemann, Marcel; Mikhailova, Natalia; Ovchinnikova, Galina; Ivanova, Natalia N; Kyrpides, Nikos C
2016-04-26
The exponential growth of genomic data from next generation technologies renders the traditional manual expert curation effort unsustainable. Many genomic systems have included community annotation tools to address the problem. Most of these systems adopted a "Wiki-based" approach to take advantage of existing wiki technologies, but encountered obstacles in issues such as usability, authorship recognition, information reliability and incentive for community participation. Here, we present a different approach, relying on a tightly integrated method rather than a "Wiki-based" one, to support community annotation and user collaboration in the Integrated Microbial Genomes (IMG) system. The IMG approach allows users to use existing IMG data warehouse and analysis tools to add gene, pathway and biosynthetic cluster annotations, to analyze/reorganize contigs, genes and functions using workspace datasets, and to share private user annotations and workspace datasets with collaborators. We show that the annotation effort using IMG can be part of the research process to overcome the user incentive and authorship recognition problems, thus fostering collaboration among domain experts. The usability and reliability issues are addressed by the integration of curated information and analysis tools in IMG, together with DOE Joint Genome Institute (JGI) expert review. By incorporating annotation operations into IMG, we provide an integrated environment for users to perform deeper and extended data analysis and annotation in a single system that can lead to publications and community knowledge sharing as shown in the case studies.
Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N
2017-07-01
Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others, selection of those wavelengths that contribute useful information, and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role and the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.
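As a rough illustration of one preprocessing-plus-regression combination of the kind compared in the study, the sketch below applies a standard normal variate (SNV) correction, used here only as a stand-in since it is not one of the three pre-processing methods evaluated, followed by PLS regression on synthetic spectra.

```python
# Minimal sketch of a preprocessing + calibration pipeline: SNV correction
# (a stand-in, not one of OSC/EMSC/OPLEC) followed by partial least squares
# regression on synthetic single-peak spectra with random baseline offsets.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 120, 200
concentration = rng.uniform(0, 1, n_samples)
peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 80) / 10.0) ** 2)
spectra = (concentration[:, None] * peak
           + rng.normal(scale=0.01, size=(n_samples, n_wavelengths))
           + rng.uniform(0.0, 0.2, size=(n_samples, 1)))    # baseline offsets

def snv(X):
    """Standard normal variate: center and scale each spectrum individually."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

X_train, X_test, y_train, y_test = train_test_split(
    snv(spectra), concentration, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5).fit(X_train, y_train)
print("R^2 on held-out spectra:", pls.score(X_test, y_test))
```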
Changes in Adult Child Caregiver Networks
ERIC Educational Resources Information Center
Szinovacz, Maximiliane E.; Davey, Adam
2007-01-01
Purpose: Caregiving research has typically relied on cross-sectional data that focus on the primary caregiver. This approach neglects the dynamic and systemic character of caregiver networks. Our analyses addressed changes in adult child care networks over a 2-year period. Design and Methods: The study relied on pooled data from Waves 1 through 5…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
..., scientific data or information relied on to support the adequacy of water treatment methods, treatment monitoring results, water testing results, and scientific data or information relied on to support any... recommendations in the Sprout Guides to test spent irrigation water; several comments supported expanded testing...
Soneja, Sutyajeet I; Tielsch, James M; Khatry, Subarna K; Curriero, Frank C; Breysse, Patrick N
2016-03-01
Black carbon (BC) is a major contributor to hydrological cycle change and glacial retreat within the Indo-Gangetic Plain (IGP) and surrounding region. However, significant variability exists for estimates of BC regional concentration. Existing inventories within the IGP suffer from limited representation of rural sources, reliance on idealized point source estimates (e.g., utilization of emission factors or fuel-use estimates for cooking along with demographic information), and difficulty in distinguishing sources. Inventory development utilizes two approaches, termed top-down and bottom-up, which rely on various sources including transport models, emission factors, and remote sensing applications. Large discrepancies exist for BC source attribution throughout the IGP depending on the approach utilized. Cooking with biomass fuels, a major contributor to BC production, has great source apportionment variability. Areas of cookstove and biomass fuel use research that have been recognized as needing attention to improve emission inventory estimates include emission factors, particulate matter speciation, and better quantification of regional/economic sectors. However, limited attention has been given towards understanding ambient small-scale spatial variation of BC between cooking and non-cooking periods in low-resource environments. Understanding the indoor to outdoor relationship of BC emissions due to cooking at a local level is a top priority to improve emission inventories, as many health and climate applications rely upon utilization of accurate emission inventories.
Inference on the Strength of Balancing Selection for Epistatically Interacting Loci
Buzbas, Erkan Ozge; Joyce, Paul; Rosenberg, Noah A.
2011-01-01
Existing inference methods for estimating the strength of balancing selection in multi-locus genotypes rely on the assumption that there are no epistatic interactions between loci. Complex systems in which balancing selection is prevalent, such as sets of human immune system genes, are known to contain components that interact epistatically. Therefore, current methods may not produce reliable inference on the strength of selection at these loci. In this paper, we address this problem by presenting statistical methods that can account for epistatic interactions in making inference about balancing selection. A theoretical result due to Fearnhead (2006) is used to build a multi-locus Wright-Fisher model of balancing selection, allowing for epistatic interactions among loci. Antagonistic and synergistic types of interactions are examined. The joint posterior distribution of the selection and mutation parameters is sampled by Markov chain Monte Carlo methods, and the plausibility of models is assessed via Bayes factors. As a component of the inference process, an algorithm to generate multi-locus allele frequencies under balancing selection models with epistasis is also presented. Recent evidence on interactions among a set of human immune system genes is introduced as a motivating biological system for the epistatic model, and data on these genes are used to demonstrate the methods. PMID:21277883
Condom negotiation: experiences of sexually active young women.
East, Leah; Jackson, Debra; O'Brien, Louise; Peters, Kathleen
2011-01-01
This paper is a report of a study of sexually active young women's experiences of negotiating condom use both before and after diagnosis of a sexually transmitted infection. The male condom is the most efficient method in preventing and reducing the transmission of sexually transmitted infections. However, condom use can be hindered by factors including societal norms and gender roles, which can create difficulties for women in initiating and negotiating condom use in heterosexual partnerships. A feminist narrative approach was used, and ten women's stories were collected via online interviews in 2007. None of the women initiated or negotiated use of the male condom for various reasons. Some relied on their male partners to initiate condom use, some were unable to practise safer sex due to the abuse and unequal gender dynamics that existed in their sexual relationships, and some thought that condom use was not necessary because of a belief that they were in safe and monogamous relationships. Even following diagnosis of a sexually transmitted infection, some women said that they were not empowered enough to initiate condom use with subsequent sexual partners, resulting in continued high-risk sexual behaviour. Successful condom promotion relies on the recognition of the gender factors that impede young women's condom negotiation and use. Strategies that overcome gender dynamics and empower women to negotiate condom use have the ability to promote condom use among this group. © 2010 Blackwell Publishing Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thakur, Gautam S; Bhaduri, Budhendra L; Piburn, Jesse O
Geospatial intelligence has traditionally relied on the use of archived and unvarying data for planning and exploration purposes. In consequence, the tools and methods that are architected to provide insight and generate projections only rely on such datasets. Although this approach has proven effective in several cases, such as land use identification and route mapping, it has severely restricted the ability of researchers to incorporate current information in their work. This approach is inadequate in scenarios requiring real-time information to act and to adjust in ever-changing dynamic environments, such as evacuation and rescue missions. In this work, we propose PlanetSense, a platform for geospatial intelligence that is built to harness the existing power of archived data and add to that the dynamics of real-time streams, seamlessly integrated with sophisticated data mining algorithms and analytics tools for generating operational intelligence on the fly. The platform has four main components: i) GeoData Cloud, a data architecture for storing and managing disparate datasets; ii) a mechanism to harvest real-time streaming data; iii) a data analytics framework; and iv) presentation and visualization through a web interface and RESTful services. Using two case studies, we underpin the necessity of our platform in modeling ambient population and building occupancy at scale.
Alternatives for Jet Engine Control
NASA Technical Reports Server (NTRS)
Leake, R. J.; Sain, M. K.
1976-01-01
Approaches are developed as alternatives to current design methods which rely heavily on linear quadratic and Riccati equation methods. The main alternatives are discussed in two broad categories, local multivariable frequency domain methods and global nonlinear optimal methods.
ERIC Educational Resources Information Center
Harris, Tracy A.
2010-01-01
Community college leaders rely on enrollment management professionals (EMPs) to recruit and retain students, but research does not report the attributes these professionals should possess to contribute to student recruitment and retention. The purpose of this exploratory study was to determine if characteristics exist among EMPs that contribute to…
ERIC Educational Resources Information Center
von der Embse, Nathaniel P.; Iaccarino, Stephanie; Mankin, Ariel; Kilgus, Stephen P.; Magen, Eran
2017-01-01
School systems are the primary providers for the increasing number of children with mental health needs. School-based universal screening offers a valuable way to identify children that would benefit from school-based mental health services. However, many existing screening systems rely on teacher ratings alone and do not incorporate student…
Adult Support and Substance Use among Homeless Youths Who Attend High School
ERIC Educational Resources Information Center
Ferguson, Kristin M.; Xie, Bin
2012-01-01
Background: Despite high rates of substance use among homeless youths, little is known about the interaction of substance-use risk and protective factors. Further, limited research exists on substance use by school-attending homeless youths, as extant studies have relied on street- and shelter-based samples. Objective: The purpose of this study…
ERIC Educational Resources Information Center
Darrah, Brenda
Researchers for small businesses, which may have no access to expensive databases or market research reports, must often rely on information found on the Internet, which can be difficult to find. Although current conventional Internet search engines are now able to index over one billion documents, there are many more documents existing in…
Youth Services Librarians as Managers: A How-To Guide from Budgeting to Personnel.
ERIC Educational Resources Information Center
Staerkel, Kathleen, Comp.; And Others
Administrators of public library youth services departments and managers of school library media centers often rely on broad sources for advice on managing their specialized youth services. This book is designed to assist youth services librarians in becoming well-versed in management skills crucial to the continued existence of quality service to…
Grouping and Emergent Features in Vision: Toward a Theory of Basic Gestalts
ERIC Educational Resources Information Center
Pomerantz, James R.; Portillo, Mary C.
2011-01-01
Gestalt phenomena are often so powerful that mere demonstrations can confirm their existence, but Gestalts have proven hard to define and measure. Here we outline a theory of basic Gestalts (TBG) that defines Gestalts as emergent features (EFs). The logic relies on discovering wholes that are more discriminable than are the parts from which they…
Solar Heating and Cooling of Buildings (Phase O). Volume 1: Executive Summary.
ERIC Educational Resources Information Center
TRW Systems Group, Redondo Beach, CA.
The purpose of this study was to establish the technical and economic feasibility of using solar energy for the heating and cooling of buildings. Five selected building types in 14 selected cities were used to determine loads for space heating, space cooling and dehumidification, and domestic service hot water heating. Relying on existing and…
The Restructuring of Southern Agriculture: Data Needs for Economic and Policy Research.
ERIC Educational Resources Information Center
Skees, Jerry R.; Reed, Michael R.
The changing structure of Southern farming amid the pressures of the farm crisis produces an information gap that has forced policymakers to rely on trial and error in institutional design. Existing data systems monitoring the farm sector either use the county as the primary observation unit, or they survey different individual farmers each year.…
Making Health Information Clear and Readable for the Masses
ERIC Educational Resources Information Center
Staggers, Sydney M.; Brann, Maria
2011-01-01
Many federal agencies rely on print materials to convey important information to the public, and many of the materials are written at a 10th-grade reading level or above, further limiting those individuals with low literacy. As such, the Department of Health and Human Services (DHHS) recommends the application of existing best practices for…
A Comprehensive Review of Helicopter Noise Literature
1975-06-01
Contents excerpt: Broadband Noise; Impulsive Noise; Introduction. ...broadband noise is probably the turbulence in the flow seen by the rotor blades. The prediction of rotor broadband noise based on rotor geometry and... acoustic processes, but rely on generalization of existing test data. The recent impetus to study broadband noise is the result of reducing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-02
... workforce. At a minimum, the subcontractor shall provide-- (A) A brief description of the types of jobs... description may rely on job titles, broader labor categories, or the subcontractor's existing practice for... funded awards of $25,000 or more, to report jobs information to the prime contractor for reporting into...
ERIC Educational Resources Information Center
Bush, Drew; Sieber, Renee; Seiler, Gale; Chandler, Mark
2016-01-01
A gap has existed between the tools and processes of scientists working on anthropogenic global climate change (AGCC) and the technologies and curricula available to educators teaching the subject through student inquiry. Designing realistic scientific inquiry into AGCC poses a challenge because research on it relies on complex computer models,…
Trigonometric Transforms for Image Reconstruction
1998-06-01
applying trigonometric transforms to image reconstruction problems. Many existing linear image reconstruction techniques rely on knowledge of...ancestors. The research performed for this dissertation represents the first time the symmetric convolution-multiplication property of trigonometric...Fourier domain. The traditional representation of these filters will be similar to new trigonometric transform versions derived in later chapters
An Assessment of Children Literacy Development in Nigeria in the Context of EFA 2015 Policy Targets
ERIC Educational Resources Information Center
Ozohu-Suleiman, Yakubu
2012-01-01
The paper analyses the interface between Nigeria's anticipated failure in the Education for All (EFA) 2015 targets and her policy implementation strategies in relation to children literacy. The principal purpose is to locate evidences that may explain the expected failure. The paper relies largely on secondary data and existing literature to…
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
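The abstract gives no formulas, so the sketch below assumes the standard survivor-curve model log10 N(t) = log10 N0 - t/D and the conventional F0 integral with reference temperature 121.1 °C and z = 10 °C; the flat prior, known N0, fixed noise level, and grid approximation are simplifications of the full Bayesian treatment described.

```python
# Hedged sketch: grid posterior for a D-value from survivor-curve data under
# log10 N(t) = log10 N0 - t / D, plus the standard F0 integral. The flat prior,
# known N0, fixed noise level, and grid approximation are simplifying
# assumptions, not the paper's full Bayesian model.
import numpy as np

# Synthetic survivor-curve data (log10 counts at exposure times, minutes)
times = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
true_D, log10_N0, sigma = 1.5, 6.0, 0.15
rng = np.random.default_rng(4)
log10_counts = log10_N0 - times / true_D + rng.normal(scale=sigma, size=times.size)

# Grid posterior over D with a flat prior and Gaussian likelihood on log10 counts
D_grid = np.linspace(0.5, 3.0, 1000)
resid = log10_counts[None, :] - (log10_N0 - times[None, :] / D_grid[:, None])
log_lik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()
print("posterior mean D:", np.sum(D_grid * post))

# F0 for a measured temperature profile (°C), using the conventional
# reference temperature 121.1 °C and z = 10 °C
temps = np.array([110.0, 118.0, 121.1, 121.1, 119.0])   # one reading per minute
z, dt = 10.0, 1.0
F0 = np.sum(10.0 ** ((temps - 121.1) / z) * dt)
print("F0 (equivalent minutes at 121.1 °C):", F0)
```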
Development of Methodologies for IV and V of Neural Networks
NASA Technical Reports Server (NTRS)
Taylor, Brian; Darrah, Marjorie
2003-01-01
Non-deterministic systems often rely upon neural network (NN) technology to "learn" to manage flight systems under controlled conditions using carefully chosen training sets. How can these adaptive systems be certified to ensure that they will become increasingly efficient and behave appropriately in real-time situations? The bulk of Independent Verification and Validation (IV&V) research of non-deterministic software control systems such as Adaptive Flight Controllers (AFC's) addresses NNs in well-behaved and constrained environments such as simulations and strict process control. However, neither substantive research nor effective IV&V techniques have been found to address AFC's learning in real-time and adapting to live flight conditions. Adaptive flight control systems offer good extensibility into commercial aviation as well as military aviation and transportation. Consequently, this area of IV&V represents an area of growing interest and urgency. ISR proposes to further the current body of knowledge to meet two objectives: research the current IV&V methods and assess where these methods may be applied toward a methodology for the V&V of neural networks; and identify effective methods for IV&V of NNs that learn in real-time, including developing a prototype test bed for IV&V of AFC's. Currently, no practical method exists. ISR will meet these objectives through the tasks identified and described below. First, ISR will conduct a literature review of current IV&V technology. To do this, ISR will collect the existing body of research on IV&V of non-deterministic systems and neural networks. ISR will also develop the framework for disseminating this information through specialized training. This effort will focus on developing NASA's capability to conduct IV&V of neural network systems and to provide training to meet the increasing need for IV&V expertise in such systems.
Debris-flow runout predictions based on the average channel slope (ACS)
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Prediction of the runout distance of a debris flow is an important element in the delineation of potentially hazardous areas on alluvial fans and for the siting of mitigation structures. Existing runout estimation methods rely on input parameters that are often difficult to estimate, including volume, velocity, and frictional factors. In order to provide a simple method for preliminary estimates of debris-flow runout distances, we developed a model that provides runout predictions based on the average channel slope (ACS model) for non-volcanic debris flows that emanate from confined channels and deposit on well-defined alluvial fans. This model was developed from 20 debris-flow events in the western United States and British Columbia. Based on a runout estimation method developed for snow avalanches, this model predicts debris-flow runout as an angle of reach from a fixed point in the drainage channel to the end of the runout zone. The best fixed point was found to be the mid-point elevation of the drainage channel, measured from the apex of the alluvial fan to the top of the drainage basin. Predicted runout lengths were more consistent than those obtained from existing angle-of-reach estimation methods. Results of the model compared well with those of laboratory flume tests performed using the same range of channel slopes. The robustness of this model was tested by applying it to three debris-flow events not used in its development: predicted runout ranged from 82 to 131% of the actual runout for these three events. Prediction interval multipliers were also developed so that the user may calculate predicted runout within specified confidence limits. © 2008 Elsevier B.V. All rights reserved.
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.
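As a generic illustration of the polynomial response-surface approach mentioned, the sketch below fits a quadratic surrogate to sampled evaluations of a toy design objective and then optimizes the surrogate; the objective, design ranges, and sample size are placeholders, not the aerodynamic or propulsion cases in the article.

```python
# Generic sketch of a polynomial response-surface surrogate: fit a quadratic
# surface to sampled "experiments" of a design objective, then optimize the
# surrogate. The toy objective and design ranges are placeholder assumptions.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

def objective(x):
    """Placeholder design objective over two design variables."""
    return (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] + 0.1) ** 2

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(40, 2))                      # design of experiments
y = objective(X) + rng.normal(scale=0.02, size=40)         # noisy evaluations

poly = PolynomialFeatures(degree=2)
surrogate = LinearRegression().fit(poly.fit_transform(X), y)

def surrogate_eval(x):
    return surrogate.predict(poly.transform(x.reshape(1, -1)))[0]

result = minimize(surrogate_eval, x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("surrogate optimum near (0.3, -0.1):", result.x)
```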
Tong, Tong; Wolz, Robin; Coupé, Pierrick; Hajnal, Joseph V; Rueckert, Daniel
2013-08-01
We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labeling approaches that rely on comparing image similarities between atlases and target images. In addition, we propose a Fixed Discriminative Dictionary Learning for Segmentation (F-DDLS) strategy, which can learn dictionaries offline and perform segmentations online, enabling a significant speed-up in the segmentation stage. The proposed method has been evaluated for the hippocampus segmentation of 80 healthy ICBM subjects and 202 ADNI images. The robustness of the proposed method, especially of our F-DDLS strategy, was validated by training and testing on different subject groups in the ADNI database. The influence of different parameters was studied and the performance of the proposed method was also compared with that of the nonlocal patch-based approach. The proposed method achieved a median Dice coefficient of 0.879 on 202 ADNI images and 0.890 on 80 ICBM subjects, which is competitive compared with state-of-the-art methods. Copyright © 2013 Elsevier Inc. All rights reserved.
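The sketch below shows only the generic dictionary-learning and sparse-coding building blocks that the method rests on, using off-the-shelf routines and random stand-in patches; the discriminative training, atlas integration, and segmentation step of the proposed approach are not reproduced.

```python
# Generic sketch of dictionary learning and sparse coding (plain
# reconstruction only; the discriminative F-DDLS training and atlas-based
# segmentation of the paper are not reproduced here).
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(6)
patches = rng.normal(size=(500, 64))          # stand-in for 8x8 image patches

dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, max_iter=10,
                          random_state=0)
codes = dico.fit_transform(patches)           # sparse codes for training patches

# Encode and reconstruct a new patch with the learned dictionary
new_patch = rng.normal(size=(1, 64))
code = sparse_encode(new_patch, dico.components_,
                     algorithm="omp", n_nonzero_coefs=5)
reconstruction = code @ dico.components_
print("reconstruction error:", np.linalg.norm(new_patch - reconstruction))
```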
Niarchos, Athanasios; Siora, Anastasia; Konstantinou, Evangelia; Kalampoki, Vasiliki; Lagoumintzis, George; Poulas, Konstantinos
2017-01-01
Over the last few decades, recombinant protein expression has found more and more applications. The cloning of protein-coding genes into expression vectors is required to be directional for proper expression, and versatile in order to facilitate gene insertion in multiple different vectors for expression tests. In this study, the TA-GC cloning method is proposed as a new, simple and efficient method for the directional cloning of protein-coding genes in expression vectors. The presented method features several advantages over existing methods, which tend to be relatively more labour intensive, inflexible or expensive. The proposed method relies on the complementarity between single A- and G-overhangs of the protein-coding gene, obtained after a short incubation with T4 DNA polymerase, and T and C overhangs of the novel vector pET-BccI, created after digestion with the restriction endonuclease BccI. The novel protein-expression vector pET-BccI also facilitates the screening of transformed colonies for recombinant transformants. Evaluation experiments of the proposed TA-GC cloning method showed that 81% of the transformed colonies contained recombinant pET-BccI plasmids, and 98% of the recombinant colonies expressed the desired protein. This demonstrates that TA-GC cloning could be a valuable method for cloning protein-coding genes in expression vectors.
Niarchos, Athanasios; Siora, Anastasia; Konstantinou, Evangelia; Kalampoki, Vasiliki; Poulas, Konstantinos
2017-01-01
Over the last few decades, recombinant protein expression has found more and more applications. The cloning of protein-coding genes into expression vectors is required to be directional for proper expression, and versatile in order to facilitate gene insertion in multiple different vectors for expression tests. In this study, the TA-GC cloning method is proposed as a new, simple and efficient method for the directional cloning of protein-coding genes in expression vectors. The presented method features several advantages over existing methods, which tend to be relatively more labour intensive, inflexible or expensive. The proposed method relies on the complementarity between single A- and G-overhangs of the protein-coding gene, obtained after a short incubation with T4 DNA polymerase, and T and C overhangs of the novel vector pET-BccI, created after digestion with the restriction endonuclease BccI. The novel protein-expression vector pET-BccI also facilitates the screening of transformed colonies for recombinant transformants. Evaluation experiments of the proposed TA-GC cloning method showed that 81% of the transformed colonies contained recombinant pET-BccI plasmids, and 98% of the recombinant colonies expressed the desired protein. This demonstrates that TA-GC cloning could be a valuable method for cloning protein-coding genes in expression vectors. PMID:29091919
Zhou, Ronggang; Chan, Alan H S
2017-01-01
In order to compare existing usability data to ideal goals or to those for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, but often fail to take into account the inherent uncertainties in these judgments in the evaluation process. This paper presents a universal method of usability evaluation by combining the analytic hierarchy process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction of a product. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed by using the fuzzy comprehensive evaluation technique model to characterize fuzzy human judgments. Then with the use of AHP, the weights of usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process.
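As a hedged illustration of the two ingredients combined in the paper, the sketch below derives AHP weights from a pairwise-comparison matrix via its principal eigenvector and then aggregates fuzzy membership degrees with those weights; the comparison matrix, grade set, and membership values are illustrative assumptions, not data from the study.

```python
# Generic sketch: AHP weights (principal eigenvector of a pairwise-comparison
# matrix) combined with a fuzzy comprehensive evaluation over grade categories.
# The matrix and membership values below are illustrative only.
import numpy as np

# Pairwise comparisons among effectiveness, efficiency, satisfaction
A = np.array([[1.0, 2.0, 3.0],
              [1/2., 1.0, 2.0],
              [1/3., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()                     # AHP weights for the three components

# Membership degrees of each component in the grades (poor, fair, good, excellent)
R = np.array([[0.1, 0.2, 0.5, 0.2],       # effectiveness
              [0.0, 0.3, 0.5, 0.2],       # efficiency
              [0.2, 0.3, 0.4, 0.1]])      # satisfaction

B = weights @ R                           # fuzzy comprehensive evaluation vector
print("weights:", np.round(weights, 3))
print("grade memberships:", np.round(B, 3))
```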
NASA Astrophysics Data System (ADS)
Vitasurya, V. R.; Hardiman, G.; Sari, S. R.
2018-01-01
This paper aims to reveal local values used by Brayut villagers to maintain the existence of the traditional house as a dwelling. The transformation of traditional houses proceeds as time passes, influenced by internal aspects related to the needs of residents and external aspects related to regional development by the government. The traditional Javanese house, as a cultural identity of the Javanese people, especially in the village, has also experienced this transformation. Modernization affects local residents' needs, and the government's development program for tourism villages influences demands for change. An unfocused transformation can lead to a total change that can eliminate the cultural identity of the rural Javanese community. The method used is a case study of three models of Javanese houses in Brayut Village. Brayut Tourism Village is a cultural tourism village that relies on tradition as a tourist attraction. The existence of the traditional Javanese house is an important asset for retaining its authenticity as a dwelling. The three models taken as case studies represent the traditional Javanese house types. The result obtained is that the family bond is a major factor in preserving the traditional Javanese house in Brayut Village, Yogyakarta.
Is human fecundity changing? A discussion of research and ...
Fecundity, the biologic capacity to reproduce, is essential for the health of individuals and is, therefore, fundamental for understanding human health at the population level. Given the absence of a population (bio)marker, fecundity is assessed indirectly by various individual-based (e.g. semen quality, ovulation) or couple-based (e.g. time-to-pregnancy) endpoints. Population monitoring of fecundity is challenging, and often defaults to relying on rates of births (fertility) or adverse outcomes such as genitourinary malformations and reproductive site cancers. In light of reported declines in semen quality and fertility rates in some global regions among other changes, the question as to whether human fecundity is changing needs investigation. We review existing data and novel methodological approaches aimed at answering this question from a transdisciplinary perspective. The existing literature is insufficient for answering this question; we provide an overview of currently available resources and novel methods suitable for delineating temporal patterns in human fecundity in future research. This paper is a result of a workshop conducted by the National Institutes of Health on September 20-21, 2015. The paper poses five questions relevant to the topic of "Is human fecundity changing?".
Computing the multifractal spectrum from time series: an algorithmic approach.
Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E
2009-12-01
We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
Smarr, Melissa M; Sapra, Katherine J; Gemmill, Alison; Kahn, Linda G; Wise, Lauren A; Lynch, Courtney D; Factor-Litvak, Pam; Mumford, Sunni L; Skakkebaek, Niels E; Slama, Rémy; Lobdell, Danelle T; Stanford, Joseph B; Jensen, Tina Kold; Boyle, Elizabeth Heger; Eisenberg, Michael L; Turek, Paul J; Sundaram, Rajeshwari; Thoma, Marie E; Buck Louis, Germaine M
2017-03-01
Fecundity, the biologic capacity to reproduce, is essential for the health of individuals and is, therefore, fundamental for understanding human health at the population level. Given the absence of a population (bio)marker, fecundity is assessed indirectly by various individual-based (e.g. semen quality, ovulation) or couple-based (e.g. time-to-pregnancy) endpoints. Population monitoring of fecundity is challenging, and often defaults to relying on rates of births (fertility) or adverse outcomes such as genitourinary malformations and reproductive site cancers. In light of reported declines in semen quality and fertility rates in some global regions among other changes, the question as to whether human fecundity is changing needs investigation. We review existing data and novel methodological approaches aimed at answering this question from a transdisciplinary perspective. The existing literature is insufficient for answering this question; we provide an overview of currently available resources and novel methods suitable for delineating temporal patterns in human fecundity in future research. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.
Savalei, Victoria; Rhemtulla, Mijke
2017-08-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data-that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.
Evolving effective behaviours to interact with tag-based populations
NASA Astrophysics Data System (ADS)
Yucel, Osman; Crawford, Chad; Sen, Sandip
2015-07-01
Tags and other characteristics, externally perceptible features that are consistent among groups of animals or humans, can be used by others to determine appropriate response strategies in societies. This usage of tags can be extended to artificial environments, where agents can significantly reduce cognitive effort spent on appropriate strategy choice and behaviour selection by reusing strategies for interacting with new partners based on their tags. Strategy selection mechanisms developed based on this idea have successfully evolved stable cooperation in games such as the Prisoner's Dilemma, but rely upon payoff sharing and matching methods that limit the applicability of the tag framework. Our goal is to develop a general classification and behaviour selection approach based on the tag framework. We propose and evaluate alternative tag matching and adaptation schemes for a new, incoming individual to select appropriate behaviour against any population member of an existing, stable society. Our proposed approach allows agents to evolve both the optimal tag for the environment as well as appropriate strategies for existing agent groups. We show that these mechanisms will allow for robust selection of optimal strategies by agents entering a stable society and analyse the various environments where this approach is effective.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
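For reference, a minimal method-of-moments (Matheron) variogram estimator for irregularly spaced data is sketched below; the simulated skewed field is only a stand-in for the non-Gaussian throughfall fields analyzed in the study, and no robust or likelihood-based estimators are shown.

```python
# Minimal method-of-moments (Matheron) variogram estimator for irregularly
# spaced point data; the gamma-distributed values are a skewed stand-in for
# throughfall, not the study's simulated fields.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)
coords = rng.uniform(0, 50, size=(150, 2))          # sampling locations on a 50 m plot
values = rng.gamma(shape=2.0, scale=1.0, size=150)  # skewed stand-in for throughfall

dists = pdist(coords)                               # pairwise distances
sq_diffs = pdist(values[:, None], metric="sqeuclidean")  # (z_i - z_j)^2 per pair

bins = np.linspace(0, 25, 11)                       # lag bins up to half the extent
which = np.digitize(dists, bins)
lags, gamma = [], []
for b in range(1, len(bins)):
    mask = which == b
    if mask.any():
        lags.append(dists[mask].mean())
        gamma.append(0.5 * sq_diffs[mask].mean())   # semivariance estimate per lag

print(np.column_stack([lags, gamma]))
```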
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; McManamay, Ryan A; Nagle, Nicholas N
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high resolution spatially explicit estimates for energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirement and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification Acknowledgment This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
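As a hedged illustration of the general Monte-Carlo idea described above (not the ORNL model itself), the sketch below propagates uncertainty in two hypothetical inputs, a household count and a per-household use rate, into a consumption estimate for one block; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Hypothetical inputs for one census block: household count from a public
# table (with sampling error) and a per-household use rate taken from a
# regional survey (with its own uncertainty).
households = rng.normal(loc=412, scale=25, size=n_draws).clip(min=0)
kwh_per_household = rng.lognormal(mean=np.log(900), sigma=0.15, size=n_draws)

block_kwh = households * kwh_per_household

# Summaries that carry the input uncertainty through to the output.
print("mean  :", block_kwh.mean())
print("5-95% :", np.percentile(block_kwh, [5, 95]))
```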
Pasaniuc, Bogdan; Sankararaman, Sriram; Torgerson, Dara G.; Gignoux, Christopher; Zaitlen, Noah; Eng, Celeste; Rodriguez-Cintron, William; Chapela, Rocio; Ford, Jean G.; Avila, Pedro C.; Rodriguez-Santana, Jose; Chen, Gary K.; Le Marchand, Loic; Henderson, Brian; Reich, David; Haiman, Christopher A.; Gonzàlez Burchard, Esteban; Halperin, Eran
2013-01-01
Motivation: Local ancestry analysis of genotype data from recently admixed populations (e.g. Latinos, African Americans) provides key insights into population history and disease genetics. Although methods for local ancestry inference have been extensively validated in simulations (under many unrealistic assumptions), no empirical study of local ancestry accuracy in Latinos exists to date. Hence, interpreting findings that rely on local ancestry in Latinos is challenging. Results: Here, we use 489 nuclear families from the mainland USA, Puerto Rico and Mexico in conjunction with 3204 unrelated Latinos from the Multiethnic Cohort study to provide the first empirical characterization of local ancestry inference accuracy in Latinos. Our approach for identifying errors does not rely on simulations but on the observation that local ancestry in families follows Mendelian inheritance. We measure the rate of local ancestry assignments that lead to Mendelian inconsistencies in local ancestry in trios (MILANC), which provides a lower bound on errors in the local ancestry estimates. We show that MILANC rates observed in simulations underestimate the rate observed in real data, and that MILANC varies substantially across the genome. Second, across a wide range of methods, we observe that loci with large deviations in local ancestry also show enrichment in MILANC rates. Therefore, local ancestry estimates at such loci should be interpreted with caution. Finally, we reconstruct ancestral haplotype panels to be used as reference panels in local ancestry inference and show that ancestry inference is significantly improved by incorporating these reference panels. Availability and implementation: We provide the reconstructed reference panels together with the maps of MILANC rates as a public resource for researchers analyzing local ancestry in Latinos at http://bogdanlab.pathology.ucla.edu. Contact: bpasaniuc@mednet.ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23572411
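A simplified, hypothetical sketch of the Mendelian-consistency idea behind MILANC follows; the paper's actual procedure works on inferred haplotype-level ancestry and differs in detail. Here each individual's locus is summarized as ancestry dosages, and a child call is flagged when no single ancestry copy from each parent can explain it.

```python
from itertools import product

def trio_consistent(child, mother, father):
    """Return True if the child's local-ancestry dosages at a locus can be
    explained by one ancestry copy inherited from each parent.

    Each argument maps ancestry label -> number of copies (0, 1 or 2),
    summing to 2 per individual at a diploid locus.
    """
    ancestries = set(child) | set(mother) | set(father)
    for a_m, a_f in product(ancestries, repeat=2):
        if mother.get(a_m, 0) >= 1 and father.get(a_f, 0) >= 1:
            implied = {k: (k == a_m) + (k == a_f) for k in ancestries}
            if all(child.get(k, 0) == implied[k] for k in ancestries):
                return True
    return False

# Hypothetical calls at one locus (EUR/AFR/NAM dosages).
mother = {"EUR": 1, "NAM": 1}
father = {"EUR": 2}
ok_child  = {"EUR": 2}               # consistent: one EUR copy from each parent
bad_child = {"AFR": 1, "EUR": 1}     # inconsistent: neither parent carries AFR here

print(trio_consistent(ok_child, mother, father))    # True
print(trio_consistent(bad_child, mother, father))   # False
```

The MILANC rate would then be the fraction of trio-locus calls failing such a check, which, as the abstract notes, only lower-bounds the true error rate.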
Learning multiple relative attributes with humans in the loop.
Qian, Buyue; Wang, Xiang; Cao, Nan; Jiang, Yu-Gang; Davidson, Ian
2014-12-01
Semantic attributes have been recognized as a more spontaneous manner to describe and annotate image content. It is widely accepted that image annotation using semantic attributes is a significant improvement to the traditional binary or multiclass annotation due to its naturally continuous and relative properties. Though useful, existing approaches rely on abundant supervision and high-quality training data, which limit their applicability. Two standard methods to overcome small amounts of guidance and low-quality training data are transfer and active learning. In the context of relative attributes, this would entail learning multiple relative attributes simultaneously and actively querying a human for additional information. This paper addresses the two main limitations in existing work: 1) it actively adds humans to the learning loop so that minimal additional guidance can be given and 2) it learns multiple relative attributes simultaneously and thereby leverages dependence amongst them. In this paper, we formulate a joint active learning to rank framework with pairwise supervision to achieve these two aims, which also has other benefits such as the ability to be kernelized. The proposed framework optimizes over a set of ranking functions (measuring the strength of the presence of attributes) simultaneously and dependently on each other. The proposed pairwise queries take the form of "which one of these two pictures is more natural?" These queries can be easily answered by humans. Extensive empirical study on real image data sets shows that our proposed method, compared with several state-of-the-art methods, achieves superior retrieval performance while requiring significantly less human input.
Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J
2016-04-01
Increased adoption of electronic health records has resulted in increased availability of free text clinical data for secondary use. A variety of approaches to obtain actionable information from unstructured free text data exist. These approaches are resource intensive, inherently complex and rely on structured clinical data and dictionary-based approaches. We sought to evaluate the potential to obtain actionable information from free text pathology reports using routinely available tools and approaches that do not depend on dictionary-based approaches. We obtained pathology reports from a large health information exchange and evaluated the capacity to detect cancer cases from these reports using 3 non-dictionary feature selection approaches, 4 feature subset sizes, and 5 clinical decision models: simple logistic regression, naïve Bayes, k-nearest neighbor, random forest, and J48 decision tree. The performance of each decision model was evaluated using sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using automated, informed, and manual feature selection approaches yielded similar results. Furthermore, non-dictionary classification approaches identified cancer cases present in free text reports with evaluation measures approaching and exceeding 80-90% for most metrics. Our methods are feasible and practical approaches for extracting substantial information value from free text medical data, and the results suggest that these methods can perform on par with, if not better than, existing dictionary-based approaches. Given that public health agencies are often under-resourced and lack the technical capacity for more complex methodologies, these results represent potentially significant value to the public health field. Copyright © 2016 Elsevier Inc. All rights reserved.
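A minimal sketch of this kind of non-dictionary pipeline, using n-gram features, automated feature selection, and a simple decision model, is shown below with scikit-learn; the toy reports, labels, and parameter values are hypothetical, and this is not the authors' code or data.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Hypothetical corpus: free-text pathology reports with a cancer/no-cancer label.
reports = ["invasive ductal carcinoma identified ...",
           "benign fibroadenoma, no malignancy seen ...",
           "adenocarcinoma of the colon ...",
           "normal mucosa, negative for dysplasia ..."] * 50
labels = [1, 0, 1, 0] * 50

clf = Pipeline([
    ("tfidf",  TfidfVectorizer(ngram_range=(1, 2), min_df=2)),  # no dictionary needed
    ("select", SelectKBest(chi2, k=20)),                        # automated feature selection
    ("model",  LogisticRegression(max_iter=1000)),              # simple decision model
])

scores = cross_validate(clf, reports, labels,
                        scoring=["accuracy", "roc_auc"], cv=5)
print(scores["test_accuracy"].mean(), scores["test_roc_auc"].mean())
```

On real data, the same pipeline could be rerun with the other feature-selection strategies, subset sizes, and classifiers listed in the abstract.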
NASA Technical Reports Server (NTRS)
1975-01-01
It is shown that many of the basic industries that the U.S. has relied upon in the past for economic growth and development are now so obsolete, so old, and so technologically inferior to those of foreign competitors that the U.S. is losing its international competitive position. The most conservative estimate suggests that it will require $325 billion between now and 1982 merely to meet existing and currently anticipated pollution requirements and that it would take an additional $197 billion to replace outmoded existing facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
For this project with the U.S. Department of Energy Building America team Home Innovation Research Labs, the retrofit insulated panels relied on an enhanced expanded polystyrene (EPS) for thermal resistance of R-4.5/inch, which is an improvement of 10% over conventional (white-colored) EPS. EPS, measured by its life cycle, is an alternative to commonly used extruded polystyrene and spray polyurethane foam. It is a closed-cell product made up of 90% air, and it requires about 85% fewer petroleum products for processing than other rigid foams.
Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement
Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...
2013-12-10
A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
Optimization of monopiles for offshore wind turbines.
Kallehave, Dan; Byrne, Byron W; LeBlanc Thilsted, Christian; Mikkelsen, Kristian Kousgaard
2015-02-28
The offshore wind industry currently relies on subsidy schemes to be competitive with fossil-fuel-based energy sources. For the wind industry to survive, it is vital that costs are significantly reduced for future projects. This can be partly achieved by introducing new technologies and partly through optimization of existing technologies and design methods. One of the areas where costs can be reduced is in the support structure, where better designs, cheaper fabrication and quicker installation might all be possible. The prevailing support structure design is the monopile structure, where the simple design is well suited to mass-fabrication, and the installation approach, based on conventional impact driving, is relatively low-risk and robust for most soil conditions. The range of application of the monopile for future wind farms can be extended by using more accurate engineering design methods, specifically tailored to offshore wind industry design. This paper describes how state-of-the-art optimization approaches are applied to the design of current wind farms and monopile support structures and identifies the main drivers where more accurate engineering methods could impact on a next generation of highly optimized monopiles. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Using DNA chips for identification of tephritid pest species.
Chen, Yen-Hou; Liu, Lu-Yan; Tsai, Wei-Huang; Haymer, David S; Lu, Kuang-Hui
2014-08-01
The ability correctly to identify species in a rapid and reliable manner is critical in many situations. For insects in particular, the primary tools for such identification rely on adult-stage morphological characters. For a number of reasons, however, there is a clear need for alternatives. This paper reports on the development of a new method employing DNA biochip technology for the identification of pest species within the family Tephritidae. The DNA biochip developed and tested here quickly and efficiently identifies and discriminates between several tephritid species, except for some that are members of a complex of closely related taxa and that may in fact not represent distinct biological species. The use of these chips offers a number of potential advantages over current methods. Results can be obtained in less than 5 h using material from any stage of the life cycle and with greater sensitivity than other methods currently available. This technology provides a novel tool for the rapid and reliable identification of several major pest species that may be intercepted in imported fruits or other commodities. The existing chips can also easily be expanded to incorporate additional markers and species as needed. © 2013 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A key feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
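The state-conditioned binning step can be illustrated with a small, hypothetical sketch (not the authors' implementation): model-error realisations recovered over a training period are binned by the previous model state, and the per-bin statistics are then sampled as a state-dependent stochastic parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training-period output: previous resolved-model state and the
# model-error realisation found for the next step by the constrained
# minimisation described above (both flattened over grid points here).
prev_state = rng.uniform(-8, 12, size=5000)
model_error = 0.3 * np.tanh(prev_state / 4) + 0.1 * rng.standard_normal(5000)

# Bin the realisations conditioned on the previous state and collect
# per-bin statistics -> a state-dependent stochastic parameterisation.
edges = np.linspace(prev_state.min(), prev_state.max(), 21)
which = np.clip(np.digitize(prev_state, edges) - 1, 0, len(edges) - 2)

mean_err = np.array([model_error[which == b].mean() for b in range(len(edges) - 1)])
std_err = np.array([model_error[which == b].std() for b in range(len(edges) - 1)])

def sample_error(x, rng):
    """At forecast time, draw an error term from the bin matching state x."""
    b = np.clip(np.digitize(x, edges) - 1, 0, len(edges) - 2)
    return rng.normal(mean_err[b], std_err[b])

print(sample_error(2.5, rng))
```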
NASA Astrophysics Data System (ADS)
Roberts, Brenden; Vidick, Thomas; Motrunich, Olexei I.
2017-12-01
The success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad et al. [Commun. Math. Phys. 356, 65 (2017), 10.1007/s00220-017-2973-z]. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce a practical adaptation of the RRG procedure which, while no longer theoretically guaranteed to converge, finds matrix product state ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in realistic situations. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a treelike manner. We evaluate the algorithm numerically, finding similar performance to density matrix renormalization group (DMRG) in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.
Localizing ECoG electrodes on the cortical anatomy without post-implantation imaging
Gupta, Disha; Hill, N. Jeremy; Adamo, Matthew A.; Ritaccio, Anthony; Schalk, Gerwin
2014-01-01
Introduction: Electrocorticographic (ECoG) grids are placed subdurally on the cortex in people undergoing cortical resection to delineate eloquent cortex. ECoG signals have high spatial and temporal resolution and thus can be valuable for neuroscientific research. The value of these data is highest when they can be related to the cortical anatomy. Existing methods that establish this relationship rely either on post-implantation imaging using computed tomography (CT), magnetic resonance imaging (MRI) or X-Rays, or on intra-operative photographs. For research purposes, it is desirable to localize ECoG electrodes on the brain anatomy even when post-operative imaging is not available or when intra-operative photographs do not readily identify anatomical landmarks. Methods: We developed a method to co-register ECoG electrodes to the underlying cortical anatomy using only a pre-operative MRI, a clinical neuronavigation device (such as BrainLab VectorVision), and fiducial markers. To validate our technique, we compared our results to data collected from six subjects who also had post-grid implantation imaging available. We compared the electrode coordinates obtained by our fiducial-based method to those obtained using existing methods, which are based on co-registering pre- and post-grid implantation images. Results: Our fiducial-based method agreed with the MRI–CT method to within an average of 8.24 mm (mean, median = 7.10 mm) across 6 subjects in 3 dimensions. It showed an average discrepancy of 2.7 mm when compared to the results of the intra-operative photograph method in a 2D coordinate system. As this method does not require post-operative imaging such as CTs, our technique should prove useful for research in intra-operative single-stage surgery scenarios. To demonstrate the use of our method, we applied our method during real-time mapping of eloquent cortex during a single-stage surgery. The results demonstrated that our method can be applied intra-operatively in the absence of post-operative imaging to acquire ECoG signals that can be valuable for neuroscientific investigations. PMID:25379417
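At its core, fiducial-based co-registration is a rigid point-set alignment problem. The sketch below, with invented fiducial coordinates, uses the standard Kabsch/SVD solution to map neuronavigation coordinates into pre-operative MRI space; it is a generic illustration, not the authors' pipeline.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (rotation R, translation t) with
    R @ src[i] + t ~= dst[i], via the Kabsch/SVD method."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Hypothetical fiducial coordinates: the same markers digitised by the
# neuronavigation device (nav) and identified on the pre-op MRI (mri), in mm.
nav = np.array([[10.0, 2.0, 5.0], [40.0, 7.0, 3.0], [25.0, 50.0, 9.0], [12.0, 30.0, 40.0]])
mri = np.array([[12.1, 1.8, 6.2], [42.0, 6.5, 4.1], [27.2, 49.8, 10.0], [14.0, 29.7, 41.3]])

R, t = rigid_fit(nav, mri)
electrode_nav = np.array([30.0, 20.0, 15.0])          # digitised electrode position
print(R @ electrode_nav + t)                          # its location in MRI space
fre = np.linalg.norm((nav @ R.T + t) - mri, axis=1)   # fiducial registration error
print(fre.mean())
```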
Regan, R. Steve; LaFontaine, Jacob H.
2017-10-05
This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.
Al Kazzi, Elie S; Hutfless, Susan
2015-01-01
By 2018, Medicare payments will be tied to quality of care. The Centers for Medicare and Medicaid Services currently use quality-based metric for some reimbursements through their different programs. Existing and future quality metrics will rely on risk adjustment to avoid unfairly punishing those who see the sickest, highest-risk patients. Despite the limitations of the data used for risk adjustment, there are potential solutions to improve the accuracy of these codes by calibrating data by merging databases and compiling information collected for multiple reporting programs to improve accuracy. In addition, healthcare staff should be informed about the importance of risk adjustment for quality of care assessment and reimbursement. As the number of encounters tied to value-based reimbursements increases in inpatient and outpatient care, coupled with accurate data collection and utilization, the methods used for risk adjustment could be expanded to better account for differences in the care delivered in diverse settings.
CoMoDo: identifying dynamic protein domains based on covariances of motion.
Wieninger, Silke A; Ullmann, G Matthias
2015-06-09
Most large proteins are built of several domains, compact units which enable functional protein motions. Different domain assignment approaches exist, which mostly rely on concepts of stability, folding, and evolution. We describe the automatic assignment method CoMoDo, which identifies domains based on protein dynamics. Covariances of atomic fluctuations, here calculated by an Elastic Network Model, are used to group residues into domains of different hierarchical levels. The so-called dynamic domains facilitate the study of functional protein motions involved in biological processes like ligand binding and signal transduction. By applying CoMoDo to a large number of proteins, we demonstrate that dynamic domains exhibit features absent in the commonly assigned structural domains, which can deliver insight into the interactions between domains and between subunits of multimeric proteins. CoMoDo is distributed as free open source software at www.bisb.uni-bayreuth.de/CoMoDo.html .
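CoMoDo itself is distributed at the address above; as a rough, generic illustration of the underlying idea, the sketch below hierarchically clusters residues from a synthetic fluctuation-covariance matrix so that strongly co-moving residues land in the same "dynamic domain". The block structure, distance definition, and cut levels are all assumptions, not the CoMoDo algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)

# Hypothetical residue-residue covariance matrix of atomic fluctuations
# (e.g. from an elastic network model); block structure mimics two domains.
n = 60
C = 0.05 * rng.standard_normal((n, n))
C[:30, :30] += 1.0
C[30:, 30:] += 1.0
C = (C + C.T) / 2

# Correlated motion -> small distance; uncorrelated motion -> large distance.
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
dist = np.clip(1.0 - corr, 0.0, None)
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="average")

# Cutting the tree at different levels gives domains at different
# hierarchical resolutions, analogous to a multi-level assignment.
for k in (2, 4):
    print(k, fcluster(Z, t=k, criterion="maxclust"))
```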
Crew exploration vehicle (CEV) attitude control using a neural-immunology/memory network
NASA Astrophysics Data System (ADS)
Weng, Liguo; Xia, Min; Wang, Wei; Liu, Qingshan
2015-01-01
This paper addresses the problem of the crew exploration vehicle (CEV) attitude control. CEVs are NASA's next-generation human spaceflight vehicles, and they use reaction control system (RCS) jet engines for attitude adjustment, which calls for control algorithms for firing the small propulsion engines mounted on vehicles. In this work, the resultant CEV dynamics combines both actuation and attitude dynamics. Therefore, it is highly nonlinear and even coupled with significant uncertainties. To cope with this situation, a neural-immunology/memory network is proposed. It is inspired by the human memory and immune systems. The control network does not rely on precise system dynamics information. Furthermore, the overall control scheme has a simple structure and demands much less computation as compared with most existing methods, making it attractive for real-time implementation. The effectiveness of this approach is also verified via simulation.
Material parameter estimation with terahertz time-domain spectroscopy.
Dorney, T D; Baraniuk, R G; Mittleman, D M
2001-07-01
Imaging systems based on terahertz (THz) time-domain spectroscopy offer a range of unique modalities owing to the broad bandwidth, subpicosecond duration, and phase-sensitive detection of the THz pulses. Furthermore, the possibility exists for combining spectroscopic characterization or identification with imaging because the radiation is broadband in nature. To achieve this, we require novel methods for real-time analysis of THz waveforms. This paper describes a robust algorithm for extracting material parameters from measured THz waveforms. Our algorithm simultaneously obtains both the thickness and the complex refractive index of an unknown sample under certain conditions. In contrast, most spectroscopic transmission measurements require knowledge of the sample's thickness for an accurate determination of its optical parameters. Our approach relies on a model-based estimation, a gradient descent search, and the total variation measure. We explore the limits of this technique and compare the results with literature data for optical parameters of several different materials.
Social Skills Deficits in a Virtual Environment Among Spanish Children With ADHD.
García-Castellar, Rosa; Jara-Jiménez, Pilar; Sánchez-Chiva, Desirée; Mikami, Amori Y
2018-06-01
Research assessing the social skills of children with ADHD has predominantly relied upon North American samples. In addition, most existing work has been conducted using methodology that fails to use a controlled peer stimulus; such methods may be more vulnerable to cultural influence. We examined the social skills of 52 Spanish children (ages 8-12) with and without ADHD using a controlled Chat Room Task, which simulates a virtual social environment where peers' responses are held constant, so that participants' social skills may be assessed. After statistical control of typing and reading comprehension skills, Spanish children with ADHD gave fewer prosocial comments and had greater difficulty remembering central details from the conversation between the peers, relative to comparison children. The virtual Chat Room Task may be useful to assess social skills deficits using a controlled paradigm, resulting in the identification of common social deficiencies cross-culturally.
Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.
Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai
2017-07-15
Parsimony, including sparsity and low-rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex ℓ1-norm or nuclear norm constraints. However, the results obtained by convex optimization are usually suboptimal relative to solutions of the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm has been proposed by integrating ℓp-norm and Schatten p-norm constraints. Our so-obtained affinity graph can better capture local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is revealed to be more effective and robust compared to five existing algorithms.
Light water reactor lower head failure analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rempe, J.L.; Chavez, S.A.; Thinnes, G.L.
1993-10-01
This document presents the results from a US Nuclear Regulatory Commission-sponsored research program to investigate the mode and timing of vessel lower head failure. Major objectives of the analysis were to identify plausible failure mechanisms and to develop a method for determining which failure mode would occur first in different light water reactor designs and accident conditions. Failure mechanisms, such as tube ejection, tube rupture, global vessel failure, and localized vessel creep rupture, were studied. Newly developed models and existing models were applied to predict which failure mechanism would occur first in various severe accident scenarios. So that a broader range of conditions could be considered simultaneously, calculations relied heavily on models with closed-form or simplified numerical solution techniques. Finite element techniques were employed for analytical model verification and examining more detailed phenomena. High-temperature creep and tensile data were obtained for predicting vessel and penetration structural response.
Imaging of Brown Adipose Tissue: State of the Art
Sampath, Srihari C.; Sampath, Srinath C.; Bredella, Miriam A.; Cypess, Aaron M.
2016-01-01
The rates of diabetes, obesity, and metabolic disease have reached epidemic proportions worldwide. In recent years there has been renewed interest in combating these diseases not only by modifying energy intake and lifestyle factors, but also by inducing endogenous energy expenditure. This approach has largely been stimulated by the recent recognition that brown adipose tissue (BAT)—long known to promote heat production and energy expenditure in infants and hibernating mammals—also exists in adult humans. This landmark finding relied on the use of clinical fluorine 18 fluorodeoxyglucose positron emission tomography/computed tomography, and imaging techniques continue to play a crucial and increasingly central role in understanding BAT physiology and function. Herein, the authors review the origins of BAT imaging, discuss current preclinical and clinical strategies for imaging BAT, and discuss imaging methods that will provide crucial insight into metabolic disease and how it may be treated by modulating BAT activity. © RSNA, 2016 PMID:27322970
Diagnosing dehydration? Blend evidence with clinical observations.
Armstrong, Lawrence E; Kavouras, Stavros A; Walsh, Neil P; Roberts, William O
2016-11-01
The purpose of the review is to provide recommendations to improve clinical decision-making based on the strengths and weaknesses of commonly used hydration biomarkers and clinical assessment methods. There is widespread consensus regarding treatment, but not the diagnosis of dehydration. Even though it is generally accepted that a proper clinical diagnosis of dehydration can only be made biochemically rather than relying upon clinical signs and symptoms, no gold standard biochemical hydration index exists. Other than clinical biomarkers in blood (i.e., osmolality and blood urea nitrogen/creatinine) and in urine (i.e., osmolality and specific gravity), blood pressure assessment and clinical symptoms in the eye (i.e., tear production and palpitating pressure) and the mouth (i.e., thirst and mucous wetness) can provide important information for diagnosing dehydration. We conclude that clinical observations based on a combination of history, physical examination, laboratory values, and clinician experience remain the best approach to the diagnosis of dehydration.
Imaging of Brown Adipose Tissue: State of the Art.
Sampath, Srihari C; Sampath, Srinath C; Bredella, Miriam A; Cypess, Aaron M; Torriani, Martin
2016-07-01
The rates of diabetes, obesity, and metabolic disease have reached epidemic proportions worldwide. In recent years there has been renewed interest in combating these diseases not only by modifying energy intake and lifestyle factors, but also by inducing endogenous energy expenditure. This approach has largely been stimulated by the recent recognition that brown adipose tissue (BAT)-long known to promote heat production and energy expenditure in infants and hibernating mammals-also exists in adult humans. This landmark finding relied on the use of clinical fluorine 18 fluorodeoxyglucose positron emission tomography/computed tomography, and imaging techniques continue to play a crucial and increasingly central role in understanding BAT physiology and function. Herein, the authors review the origins of BAT imaging, discuss current preclinical and clinical strategies for imaging BAT, and discuss imaging methods that will provide crucial insight into metabolic disease and how it may be treated by modulating BAT activity. (©) RSNA, 2016.
Data from clinical notes: a perspective on the tension between structure and flexible documentation
Denny, Joshua C; Xu, Hua; Lorenzi, Nancy; Stead, William W; Johnson, Kevin B
2011-01-01
Clinical documentation is central to patient care. The success of electronic health record system adoption may depend on how well such systems support clinical documentation. A major goal of integrating clinical documentation into electronic health record systems is to generate reusable data. As a result, there has been an emphasis on deploying computer-based documentation systems that prioritize direct structured documentation. Research has demonstrated that healthcare providers value different factors when writing clinical notes, such as narrative expressivity, amenability to the existing workflow, and usability. The authors explore the tension between expressivity and structured clinical documentation, review methods for obtaining reusable data from clinical notes, and recommend that healthcare providers be able to choose how to document patient care based on workflow and note content needs. When reusable data are needed from notes, providers can use structured documentation or rely on post-hoc text processing to produce structured data, as appropriate. PMID:21233086
Leveraging Psychological Insights to Encourage the Responsible Use of Consumer Debt.
Hershfield, Hal E; Sussman, Abigail B; O'Brien, Rourke L; Bryan, Christopher J
2015-11-01
U.S. consumers currently hold $880 billion in revolving debt, with a mean household credit card balance of approximately $6,000. Although economic factors play a role in this societal issue, it is clear that psychological forces also affect consumers' decisions to take on and maintain unmanageable debt balances. We examine three psychological barriers to the responsible use of credit and debt. We discuss the tendency for consumers to (a) make erroneous predictions about future spending habits, (b) rely too heavily on values presented on billing statements, and (c) categorize debt and saving into separate mental accounts. To overcome these obstacles, we urge policymakers to implement methods that facilitate better budgeting of future expenses, modify existing credit card statement disclosures, and allow consumers to easily apply government transfers (such as tax credits) to debt repayment. In doing so, we highlight minimal and inexpensive ways to remedy the debt problem. © The Author(s) 2015.
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and by including far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, Raymond Charles; Beaver, Justin M; Buckner, Mark A
Power system disturbances are inherently complex and can be attributed to a wide range of sources, including both natural and man-made events. Currently, the power system operators are heavily relied on to make decisions regarding the causes of experienced disturbances and the appropriate course of action as a response. In the case of cyber-attacks against a power system, human judgment is less certain since there is an overt attempt to disguise the attack and deceive the operators as to the true state of the system. To enable the human decision maker, we explore the viability of machine learning as a means for discriminating types of power system disturbances, and focus specifically on detecting cyber-attacks where deception is a core tenet of the event. We evaluate various machine learning methods as disturbance discriminators and discuss the practical implications for deploying machine learning systems as an enhancement to existing power system architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Gisbergen, J.G.M.; Meijer, H.E.H.
1991-01-01
The microrheology of polymer blends, as influenced by crosslinks induced in the dispersed phase via electron beam irradiation, is systematically investigated for the model system polystyrene/low density polyethylene (PS/LDPE). Both break-up of threads and coalescence of particles are delayed to a large extent, but are not inhibited completely and occur faster than would be expected for a nonirradiated material with a comparable viscosity. Small amplitude, dynamic rheological measurements indicated that in the irradiated materials a yield stress could exist. In contrast, direct microrheological measurements showed that this yield stress, which would prevent both break-up and coalescence, could not be realized by EB irradiation. Apparently, the direct study of the microrheology of a blend system is important for the prediction of the development of its morphology and it is not possible to rely only on rheological data obtained via other methods.
Experimental determination of entanglement with a single measurement.
Walborn, S P; Souto Ribeiro, P H; Davidovich, L; Mintert, F; Buchleitner, A
2006-04-20
Nearly all protocols requiring shared quantum information--such as quantum teleportation or key distribution--rely on entanglement between distant parties. However, entanglement is difficult to characterize experimentally. All existing techniques for doing so, including entanglement witnesses or Bell inequalities, disclose the entanglement of some quantum states but fail for other states; therefore, they cannot provide satisfactory results in general. Such methods are fundamentally different from entanglement measures that, by definition, quantify the amount of entanglement in any state. However, these measures suffer from the severe disadvantage that they typically are not directly accessible in laboratory experiments. Here we report a linear optics experiment in which we directly observe a pure-state entanglement measure, namely concurrence. Our measurement set-up includes two copies of a quantum state: these 'twin' states are prepared in the polarization and momentum degrees of freedom of two photons, and concurrence is measured with a single, local measurement on just one of the photons.
Evolutionary dynamics on any population structure
NASA Astrophysics Data System (ADS)
Allen, Benjamin; Lippner, Gabor; Chen, Yu-Ting; Fotouhi, Babak; Momeni, Naghmeh; Yau, Shing-Tung; Nowak, Martin A.
2017-03-01
Evolution occurs in populations of reproducing individuals. The structure of a population can affect which traits evolve. Understanding evolutionary game dynamics in structured populations remains difficult. Mathematical results are known for special structures in which all individuals have the same number of neighbours. The general case, in which the number of neighbours can vary, has remained open. For arbitrary selection intensity, the problem is in a computational complexity class that suggests there is no efficient algorithm. Whether a simple solution for weak selection exists has remained unanswered. Here we provide a solution for weak selection that applies to any graph or network. Our method relies on calculating the coalescence times of random walks. We evaluate large numbers of diverse population structures for their propensity to favour cooperation. We study how small changes in population structure—graph surgery—affect evolutionary outcomes. We find that cooperation flourishes most in societies that are based on strong pairwise ties.
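The coalescence-time calculation can be sketched for a small graph. The recurrence below is one standard formulation for two random walkers, one of which (chosen at random) steps at a time; it is offered as an illustration of the idea rather than the authors' exact formulation or code, and the example graph is arbitrary.

```python
import numpy as np

def coalescence_times(W):
    """Pairwise coalescence times tau[i, j] for random walks on a weighted
    graph W, from the recurrence tau_ii = 0 and, for i != j,
        tau_ij = 1 + 0.5 * sum_k (p_ik * tau_kj + p_jk * tau_ik),
    with step probabilities p_ik = w_ik / sum_k w_ik.
    """
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)
    A = np.eye(n * n)                       # unknowns indexed by i*n + j
    b = np.zeros(n * n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue                    # tau_ii = 0 stays as the identity row
            row = i * n + j
            b[row] = 1.0
            for k in range(n):
                A[row, k * n + j] -= 0.5 * P[i, k]
                A[row, i * n + k] -= 0.5 * P[j, k]
    return np.linalg.solve(A, b).reshape(n, n)

# Path graph 0-1-2 with unit edge weights.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
print(np.round(coalescence_times(W), 3))
```

The solve scales as the square of the number of node pairs, so for large networks one would use sparse solvers or exploit structure, as the scalability claims in the abstract imply.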
Swift, B
1998-11-30
Estimating the post-mortem interval in skeletal remains is a notoriously difficult task; forensic pathologists often rely heavily upon experience in recognising morphological appearances. Previous techniques have involved measuring physical or chemical changes within the hydroxyapatite matrix, radiocarbon dating and 90Sr dating, though no individual test has been advocated. Within this paper it is proposed that measuring the equilibrium between two naturally occurring radio-isotopes, 210Po and 210Pb, and comparison with post-mortem examination samples would produce a new method of dating human skeletal remains. Possible limitations exist, notably the effect of diagenesis, time limitations and relative cost, though this technique could provide a relatively accurate means of determining the post-mortem interval. It is therefore proposed that a large study be undertaken to provide a calibration scale against which bones uncovered can be dated.
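As background for the proposed isotope pair, the standard two-member decay relation (using the well-known half-lives of roughly 22.3 years for 210Pb and 138.4 days for 210Po) describes how the daughter activity re-approaches equilibrium after a disturbance; the calibration against post-mortem examination samples proposed in the abstract is a separate, empirical step not sketched here.

```latex
% Activity of the short-lived daughter 210Po growing toward equilibrium with
% the longer-lived 210Pb (the intermediate 210Bi, half-life about 5 days, is
% neglected on this time scale):
\[
A_{\mathrm{Po}}(t) \;=\;
\frac{\lambda_{\mathrm{Po}}}{\lambda_{\mathrm{Po}}-\lambda_{\mathrm{Pb}}}\,
A_{\mathrm{Pb}}(0)\left(e^{-\lambda_{\mathrm{Pb}}t}-e^{-\lambda_{\mathrm{Po}}t}\right)
\;+\; A_{\mathrm{Po}}(0)\,e^{-\lambda_{\mathrm{Po}}t},
\qquad
\lambda_{x}=\frac{\ln 2}{T_{1/2,\,x}} .
\]
```

Because the 210Po decay constant is much larger than that of 210Pb, the activity ratio relaxes toward equilibrium on a time scale of roughly 138 days, which is what would make a measured disequilibrium informative about elapsed time since the system was last disturbed.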
[Non-pharmacologic treatment of arterial hypertension in hemodialysis patients].
Chazot, C; Charra, B
2007-10-01
High blood pressure in dialysis patients is related to extracellular volume excess and the resulting increase in systemic vascular resistance. Scribner described early on the treatment of hypertension with ultrafiltration and a low-salt diet, without any drugs. The dry-weight method relies on the progressive reduction of the post-dialysis body weight until blood pressure is normalized. Additional measures are needed, such as a low-salt diet, a neutral sodium balance during dialysis treatment, withdrawal of antihypertensive drugs, an adequate length of the dialysis session, and patient education. A lag time may exist between the normalization of the extracellular volume and that of blood pressure; it reflects the time needed to correct the hemodynamic consequences of the extracellular volume overload. Moreover, the dry weight may vary in patients undergoing catabolic intercurrent events. The complications of these changes (severe hypertension, pulmonary oedema) must be anticipated by the nephrologist and the staff to avoid additional morbidity to the patient.
[Critique of the additive model of the randomized controlled trial].
Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine
2008-01-01
Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.
An immune clock of human pregnancy
Aghaeepour, Nima; Ganio, Edward A.; Mcilwain, David; Tsai, Amy S.; Tingle, Martha; Van Gassen, Sofie; Gaudilliere, Dyani K.; Baca, Quentin; McNeil, Leslie; Okada, Robin; Ghaemi, Mohammad S.; Furman, David; Wong, Ronald J.; Winn, Virginia D.; Druzin, Maurice L.; El-Sayed, Yaser Y.; Quaintance, Cecele; Gibbs, Ronald; Darmstadt, Gary L.; Shaw, Gary M.; Stevenson, David K.; Tibshirani, Robert; Nolan, Garry P.; Lewis, David B.; Angst, Martin S.; Gaudilliere, Brice
2017-01-01
The maintenance of pregnancy relies on finely tuned immune adaptations. We demonstrate that these adaptations are precisely timed, reflecting an immune clock of pregnancy in women delivering at term. Using mass cytometry, the abundance and functional responses of all major immune cell subsets were quantified in serial blood samples collected throughout pregnancy. Cell signaling–based Elastic Net, a regularized regression method adapted from the elastic net algorithm, was developed to infer and prospectively validate a predictive model of interrelated immune events that accurately captures the chronology of pregnancy. Model components highlighted existing knowledge and revealed previously unreported biology, including a critical role for the interleukin-2–dependent STAT5ab signaling pathway in modulating T cell function during pregnancy. These findings unravel the precise timing of immunological events occurring during a term pregnancy and provide the analytical framework to identify immunological deviations implicated in pregnancy-related pathologies. PMID:28864494
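The regression backbone here is an elastic net; the sketch below shows a plain scikit-learn elastic net with subject-grouped cross-validation on invented immune features and gestational ages. It is only a schematic stand-in for the cell-signaling-based Elastic Net developed in the paper.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import GroupKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 60 samples (serial draws from 20 women), 200 immune
# features (cell frequencies / signaling responses), target = gestational age.
n_samples, n_features = 60, 200
X = rng.standard_normal((n_samples, n_features))
weeks = rng.uniform(8, 40, size=n_samples)
X[:, :10] += 0.05 * weeks[:, None]          # a few informative features
subjects = np.repeat(np.arange(20), 3)      # 3 visits per woman

model = make_pipeline(StandardScaler(),
                      ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, max_iter=5000))

# Keep all samples from one woman in the same fold, so validation mimics
# prediction on an unseen subject.
pred = cross_val_predict(model, X, weeks, cv=GroupKFold(n_splits=5), groups=subjects)
print(np.corrcoef(pred, weeks)[0, 1])
```

The nonzero coefficients of the fitted model would then point to the features driving the inferred "clock", analogous to the immune events highlighted in the abstract.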
Quantifying Motor Impairment in Movement Disorders.
FitzGerald, James J; Lu, Zhongjiao; Jareonsettasin, Prem; Antoniades, Chrystalina A
2018-01-01
Until recently the assessment of many movement disorders has relied on clinical rating scales that despite careful design are inherently subjective and non-linear. This makes accurate and truly observer-independent quantification difficult and limits the use of sensitive parametric statistical methods. At last, devices capable of measuring neurological problems quantitatively are becoming readily available. Examples include the use of oculometers to measure eye movements and accelerometers to measure tremor. Many applications are being developed for use on smartphones. The benefits include not just more accurate disease quantification, but also consistency of data for longitudinal studies, accurate stratification of patients for entry into trials, and the possibility of automated data capture for remote follow-up. In this mini review, we will look at movement disorders with a particular focus on Parkinson's disease, describe some of the limitations of existing clinical evaluation tools, and illustrate the ways in which objective metrics have already been successful.
On the Interactions Between Planetary and Mesoscale Dynamics in the Oceans
NASA Astrophysics Data System (ADS)
Grooms, I.; Julien, K. A.; Fox-Kemper, B.
2011-12-01
Multiple-scales asymptotic methods are used to investigate the interaction of planetary and mesoscale dynamics in the oceans. We find three regimes. In the first, the slow, large-scale planetary flow sets up a baroclinically unstable background which leads to vigorous mesoscale eddy generation, but the eddy dynamics do not affect the planetary dynamics. In the second, the planetary flow feels the effects of the eddies, but appears to be unable to generate them. The first two regimes rely on horizontally isotropic large-scale dynamics. In the third regime, large-scale anisotropy, as exists for example in the Antarctic Circumpolar Current and in western boundary currents, allows the large-scale dynamics to both generate and respond to mesoscale eddies. We also discuss how the investigation may be brought to bear on the problem of parameterization of unresolved mesoscale dynamics in ocean general circulation models.
Forensic Identification of Gender from Fingerprints.
Huynh, Crystal; Brunelle, Erica; Halámková, Lenka; Agudelo, Juliana; Halámek, Jan
2015-11-17
In the past century, forensic investigators have universally accepted fingerprinting as a reliable identification method, which relies mainly on pictorial comparisons. Despite developments to software systems in order to increase the probability and speed of identification, there has been limited success in the efforts that have been made to move away from the discipline's absolute dependence on the existence of a prerecorded matching fingerprint. Here, we have revealed that an information-rich latent fingerprint has not been used to its full potential. In our approach, the content present in the sweat left behind, namely the amino acids, can be used to determine physical attributes such as the gender of the originator. As a result, we were able to focus on the biochemical content in the fingerprint using a biocatalytic assay, coupled with a specially designed extraction protocol, for determining gender rather than focusing solely on the physical image.
Trojan Horse Antibiotics—A Novel Way to Circumvent Gram-Negative Bacterial Resistance?
Tillotson, Glenn S.
2016-01-01
Antibiotic resistance has emerged as a major global health problem. In particular, gram-negative species pose a significant clinical challenge as bacteria develop or acquire more resistance mechanisms. Often, these bacteria possess multiple resistance mechanisms, thus nullifying most of the major classes of drugs. Novel approaches to this issue are urgently required. However, the challenges of developing new agents are immense. Introducing novel agents is fraught with hurdles, thus adapting known antibiotic classes by altering their chemical structure could be a way forward. A chemical addition to existing antibiotics known as a siderophore could be a solution to the gram-negative resistance issue. Siderophore molecules rely on the bacterial innate need for iron ions and thus can utilize a Trojan Horse approach to gain access to the bacterial cell. The current approaches to using this potential method are reviewed. PMID:27773991
A quantum spin-probe molecular microscope
NASA Astrophysics Data System (ADS)
Perunicic, V. S.; Hill, C. D.; Hall, L. T.; Hollenberg, L. C. L.
2016-10-01
Imaging the atomic structure of a single biomolecule is an important challenge in the physical biosciences. Whilst existing techniques all rely on averaging over large ensembles of molecules, the single-molecule realm remains unsolved. Here we present a protocol for 3D magnetic resonance imaging of a single molecule using a quantum spin probe acting simultaneously as the magnetic resonance sensor and source of magnetic field gradient. Signals corresponding to specific regions of the molecule's nuclear spin density are encoded on the quantum state of the probe, which is used to produce a 3D image of the molecular structure. Quantum simulations of the protocol applied to the rapamycin molecule (C51H79NO13) show that the hydrogen and carbon substructure can be imaged at the angstrom level using current spin-probe technology. With prospects for scaling to large molecules and/or fast dynamic conformation mapping using spin labels, this method provides a realistic pathway for single-molecule microscopy.
Trojan Horse Antibiotics-A Novel Way to Circumvent Gram-Negative Bacterial Resistance?
Tillotson, Glenn S
2016-01-01
Antibiotic resistance has emerged as a major global health problem. In particular, gram-negative species pose a significant clinical challenge as bacteria develop or acquire more resistance mechanisms. Often, these bacteria possess multiple resistance mechanisms, thus nullifying most of the major classes of drugs. Novel approaches to this issue are urgently required. However, the challenges of developing new agents are immense. Introducing novel agents is fraught with hurdles, thus adapting known antibiotic classes by altering their chemical structure could be a way forward. A chemical addition to existing antibiotics known as a siderophore could be a solution to the gram-negative resistance issue. Siderophore molecules rely on the bacterial innate need for iron ions and thus can utilize a Trojan Horse approach to gain access to the bacterial cell. The current approaches to using this potential method are reviewed.
Light-Actuated Micromechanical Relays for Zero-Power Infrared Detection
2017-03-01
Zhenyun Qian, Sungho Kang, Vageeswar Rajaram, Cristian Cassella, Nicol E... near-zero power infrared (IR) detection. Differently from any existing switching element, the proposed LMR relies on a plasmonically-enhanced... chip enabling the monolithic fabrication of multiple LMRs connected together to form a logic topology suitable for the detection of specific
ERIC Educational Resources Information Center
Economic Commission for Latin America and the Caribbean, Santiago (Chile).
In areas such as Latin America and the Caribbean, many institutions are setting up databases using existing computerized information and documentation networks, and relying on new microcomputer equipment for their implementation. Increasing awareness of the implications of this practice prompted this conference, which provided the first step of a…
ERIC Educational Resources Information Center
Chankseliani, Maia; Relly, Susan James
2016-01-01
This paper examines the entrepreneurial inclinations of young people who achieved excellence in vocational occupations. We propose a three-capital approach to the study of entrepreneurship. Relying on the existing theories and original qualitative and quantitative data analyses, findings from interviews with 30 entrepreneurial and 10…
ERIC Educational Resources Information Center
Pope Zinsser, Kam Lara
2017-01-01
Research indicates that adjunct faculty continues to grow in the higher education setting. Overall, universities continue to hire adjunct faculty to facilitate online courses and as a cost saving measure. While institutions continue to rely on adjunct faculty, a disconnection exists between the adjunct and the higher education administrators. This…
ERIC Educational Resources Information Center
Radakovic, Nenad
2015-01-01
Research in mathematics education stresses the importance of content knowledge in solving authentic tasks in statistics and in risk-based decision making. Existing research supports the claim that students rely on content knowledge and context expertise to make sense of data. In this article, however, I present evidence that the relationship…
Non-Tenure-Track Faculty's Social Construction of a Supportive Work Environment
ERIC Educational Resources Information Center
Kezar, Adrianna
2013-01-01
Background: The number of non-tenure-track faculty (NTTF), including both full-time (FT) and part-time (PT) positions, has risen to two-thirds of faculty positions across the academy. To date, most of the studies of NTTF have relied on secondary data or large-scale surveys. Few qualitative studies exist that examine the experience, working…
ERIC Educational Resources Information Center
Allison, Caleb; Laxman, Kumar; Lai, Mei
2016-01-01
Existing research shows that high school students do not possess information literacy skills adequate to function in a high-tech society that relies so heavily on information. If students are taught these skills, they struggle to apply them. This small-scale intervention focused on helping Geography students at a low-socioeconomic high school in…
26 CFR 1.6049-5 - Interest and original issue discount subject to reporting after December 31, 1982.
Code of Federal Regulations, 2011 CFR
2011-04-01
.... A payor may rely on documentary evidence if the payor has established procedures to obtain, review....1441-6(c)(3) or (4)); and the payor obtains, reviews, and maintains such documentary evidence in... expired). A new account shall be treated as an existing account if the account holder already holds an...
Staying on Course: The Effects of Savings and Assets on the College Progress of Young Adults
ERIC Educational Resources Information Center
Elliott, William; Beverly, Sondra
2011-01-01
Increasingly, college graduation is seen as a necessary step toward achieving the American Dream. However, large disparities exist in graduation rates. For many families, the current family income is not enough to finance college. Therefore, many young adults have to rely on education loans, which may be difficult to repay, leaving them strapped…
NASA Astrophysics Data System (ADS)
Jorge, Marco G.; Brennand, Tracy A.
2017-07-01
Relict drumlin and mega-scale glacial lineation (positive relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for LSB semi-automated mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method. The normalized closed contour method performed best when applied to a hydrology-based local relief model derived from a multiple direction flow routing algorithm. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.
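A heavily simplified raster sketch of the normalized-closed-contour idea is given below: compute a local relief model, normalize it, take closed regions above a contour threshold as LSB candidates, and keep those passing morphometric rules. The synthetic DTM, window size, threshold, and rule values are all assumptions, and the authors' object-based implementation differs in detail.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.measure import label, regionprops

rng = np.random.default_rng(3)

# Hypothetical 1 m DTM (500 m x 500 m) with a few elongated bumps standing in
# for drumlins, plus a regional slope and noise.
y, x = np.mgrid[0:500, 0:500]
dtm = 0.02 * x + 0.5 * rng.standard_normal((500, 500))
for cx, cy, ln, wd in [(120, 100, 90, 18), (300, 250, 140, 25), (400, 420, 70, 15)]:
    dtm += 6.0 * np.exp(-(((x - cx) / ln) ** 2 + ((y - cy) / wd) ** 2))

# 1) Local relief model: subtract a moving-window mean, then normalise.
local_relief = dtm - uniform_filter(dtm, size=101)
z = (local_relief - local_relief.mean()) / local_relief.std()

# 2) Candidate objects: closed regions above a relief threshold.
candidates = label(z > 1.0)

# 3) Keep only candidates satisfying simple morphometric rules
#    (length, width, elongation ranges loosely inspired by published values).
keep = []
for r in regionprops(candidates):
    length, width = r.major_axis_length, r.minor_axis_length
    if width == 0:
        continue
    if 30 <= length <= 1000 and 10 <= width <= 300 and length / width >= 1.5:
        keep.append(r.label)
print("LSB-like objects:", keep)
```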
Moltke, Ida; Albrechtsen, Anders; Hansen, Thomas v.O.; Nielsen, Finn C.; Nielsen, Rasmus
2011-01-01
All individuals in a finite population are related if traced back long enough and will, therefore, share regions of their genomes identical by descent (IBD). Detection of such regions has several important applications, from answering questions about human evolution to locating regions in the human genome containing disease-causing variants. However, IBD regions can be difficult to detect, especially in the common case where no pedigree information is available. In particular, all existing non-pedigree-based methods can only infer IBD sharing between two individuals. Here, we present a new Markov Chain Monte Carlo method for detection of IBD regions, which does not rely on any pedigree information. It is based on a probabilistic model applicable to unphased SNP data and takes inbreeding, allele frequencies, genotyping errors, and genomic distances into account. Most importantly, it can simultaneously infer IBD sharing among multiple individuals. Through simulations, we show that the simultaneous modeling of multiple individuals makes the method more powerful and accurate than several other non-pedigree-based methods. We illustrate the potential of the method by applying it to data from individuals with breast and/or ovarian cancer, and show that a known disease-causing mutation can be mapped to a 2.2-Mb region using SNP data from only five seemingly unrelated affected individuals. This would not be possible using classical linkage mapping or association mapping. PMID:21493780
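To make concrete how allele frequencies and genotyping error enter an IBD model, here is a toy pairwise likelihood-ratio sketch under Hardy-Weinberg proportions. It is not the multi-individual MCMC method of the paper; the symmetric error model and all frequencies below are illustrative assumptions.

```python
# Toy illustration of how IBD sharing changes genotype likelihoods at a SNP.
# NOT the MCMC method of the paper: a minimal pairwise likelihood-ratio sketch
# assuming Hardy-Weinberg proportions and a simple symmetric genotyping-error model.

import math

def genotype_probs(p):
    """P(genotype) under HWE for allele frequency p (genotype = count of allele '1')."""
    q = 1.0 - p
    return {0: q * q, 1: 2.0 * p * q, 2: p * p}

def pair_prob_ibd1(g1, g2, p):
    """P(g1, g2 | the pair shares exactly one allele IBD)."""
    q = 1.0 - p
    def other(g, shared):  # P(genotype g | one allele is the shared one)
        if shared == 1:
            return {2: p, 1: q, 0: 0.0}[g]
        return {0: q, 1: p, 2: 0.0}[g]
    return p * other(g1, 1) * other(g2, 1) + q * other(g1, 0) * other(g2, 0)

def with_error(prob_fn, g1, g2, p, eps=0.001):
    """Mix the model with a uniform genotyping-error floor of rate eps."""
    return (1 - eps) * prob_fn(g1, g2, p) + eps * (1.0 / 9.0)

def loglik_ratio(genotypes, freqs, eps=0.001):
    """Sum over SNPs of log P(data | IBD1) - log P(data | no IBD)."""
    total = 0.0
    for (g1, g2), p in zip(genotypes, freqs):
        hwe = genotype_probs(p)
        p_ibd0 = (1 - eps) * hwe[g1] * hwe[g2] + eps / 9.0
        p_ibd1 = with_error(pair_prob_ibd1, g1, g2, p, eps)
        total += math.log(p_ibd1) - math.log(p_ibd0)
    return total

# Two individuals sharing the same rare homozygous genotypes -> positive log-ratio,
# i.e. evidence for IBD at this locus window.
print(loglik_ratio(genotypes=[(2, 2), (2, 2), (1, 1)], freqs=[0.05, 0.08, 0.10]))
```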
Capturing the semiotic relationship between terms
NASA Astrophysics Data System (ADS)
Hargood, Charlie; Millard, David E.; Weal, Mark J.
2010-04-01
Tags describing objects on the web are often treated as facts about a resource, whereas it is quite possible that they represent more subjective observations. Existing methods of term expansion expand terms based on dictionary definitions or statistical information on term occurrence. Here we propose the use of a thematic model for term expansion based on semiotic relationships between terms; this has been shown to improve a system's thematic understanding of content and tags and to tease out the more subjective implications of those tags. Such a system relies on a thematic model that must be made by hand. In this article, we explore a method to capture a semiotic understanding of particular terms using a rule-based guide to authoring a thematic model. Experimentation shows that it is possible to capture valid definitions that can be used for semiotic term expansion but that the guide itself may not be sufficient to support this on a large scale. We argue that whilst the formation of super definitions will mitigate some of these problems, the development of an authoring support tool may be necessary to solve others.
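As an illustration of expansion driven by a hand-authored thematic model rather than dictionary definitions or co-occurrence statistics, consider the following sketch. The model structure and every term and theme in it are hypothetical and are not taken from the authors' thematic model.

```python
# Minimal sketch of term expansion driven by a hand-authored thematic model.
# The model structure and all example terms/themes are hypothetical; they only
# illustrate expanding via semiotic (connotative) relationships.

# theme -> {"denotes": core terms, "connotes": subjective/associated terms}
THEMATIC_MODEL = {
    "decay": {"denotes": {"ruin", "rust", "rot"},
              "connotes": {"abandonment", "melancholy", "neglect"}},
    "winter": {"denotes": {"snow", "frost", "ice"},
               "connotes": {"isolation", "stillness", "hardship"}},
}

def expand_term(term, model=THEMATIC_MODEL, include_connotations=True):
    """Return the set of terms thematically related to `term`."""
    expanded = set()
    for theme, rels in model.items():
        members = rels["denotes"] | rels["connotes"]
        if term == theme or term in members:
            expanded |= rels["denotes"]
            if include_connotations:
                expanded |= rels["connotes"]
    expanded.discard(term)
    return expanded

print(expand_term("rust"))   # e.g. {'ruin', 'rot', 'abandonment', ...}
```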
Toward Failure Modeling In Complex Dynamic Systems: Impact of Design and Manufacturing Variations
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; McAdams, Daniel A.; Clancy, Daniel (Technical Monitor)
2001-01-01
When designing vehicle vibration monitoring systems for aerospace devices, it is common to use well-established models of vibration features to determine whether failures or defects exist. Most of the algorithms used for failure detection rely on these models to detect significant changes during a flight environment. In actual practice, however, most vehicle vibration monitoring systems are corrupted by high rates of false alarms and missed detections. Research conducted at the NASA Ames Research Center has determined that a major reason for the high rates of false alarms and missed detections is the numerous sources of statistical variation that are not taken into account in the modeling assumptions. In this paper, we address one such source of variation, namely, the variations caused during the design and manufacturing of the rotating machinery components that make up aerospace systems. We present a novel way of modeling the vibration response by including design variations via probabilistic methods. The results demonstrate initial feasibility of the method, showing great promise in developing a general methodology for designing more accurate aerospace vehicle vibration monitoring systems.
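A generic way to include design and manufacturing variations is to propagate parameter scatter through a response model by Monte Carlo sampling. The sketch below uses a single-degree-of-freedom spring-mass system with hypothetical nominal values and tolerances; it illustrates the idea of probabilistic feature bounds, not the authors' model of rotating-machinery components.

```python
# Generic Monte Carlo sketch of propagating design/manufacturing scatter into a
# vibration feature. The 1-DOF model and all distributions/tolerances are
# hypothetical illustrations, not the probabilistic model of the paper.

import math
import random

random.seed(0)

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    """Undamped natural frequency of a 1-DOF spring-mass system."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

def sample_feature(n_samples=10000,
                   k_nominal=2.0e6, k_cov=0.05,    # stiffness and its relative scatter
                   m_nominal=1.5, m_cov=0.02):     # mass and its relative scatter
    feats = []
    for _ in range(n_samples):
        k = random.gauss(k_nominal, k_cov * k_nominal)
        m = random.gauss(m_nominal, m_cov * m_nominal)
        feats.append(natural_frequency_hz(k, m))
    return feats

feats = sorted(sample_feature())
lo, hi = feats[int(0.005 * len(feats))], feats[int(0.995 * len(feats))]
print(f"99% of nominally identical builds fall in [{lo:.1f}, {hi:.1f}] Hz")
# A monitoring threshold set outside this band is less prone to false alarms
# caused purely by unit-to-unit manufacturing scatter.
```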
Transmission effects in unfolding electronic-vibrational electron-molecule energy-loss spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Shiyang; Khakoo, Murtadha A.; Johnson, Paul V.
2006-03-15
The results of an investigation concerning the sensitivity of conventional unfolding methods applied to electronic-vibrational electron-energy-loss spectra to the transmission efficiency of electron spectrometers are presented. This investigation was made in an effort to understand differences in the differential cross sections for excitation of low-lying electronic states determined experimentally by various groups using electronic-vibrational energy-loss spectra of N₂. In these experiments, very similar spectral unfolding methods were used, which relied on similar Franck-Condon factors. However, the overall analyses of the electron scattering spectra (by the individual groups) resulted in large differences among the differential cross sections determined from these energy-loss spectra. The transmission response of the experimental apparatus to different-energy scattered electrons has often been discussed as a key factor that caused these disagreements. The present investigation shows, in contrast, that the effect of transmission is smaller than that required to independently explain such differences, implying that other systematic effects are responsible for the existing differences between measurements.
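As an illustration of how a transmission correction enters a Franck-Condon-based unfolding, the sketch below performs a non-negative least-squares unfolding of a synthetic spectrum. All state energies, vibrational spacings, Franck-Condon factors, the transmission curve, and the resolution are invented placeholders, not values from the measurements discussed.

```python
# Sketch of Franck-Condon-based unfolding of an energy-loss spectrum with an
# explicit transmission correction. All numbers are synthetic placeholders.

import numpy as np
from scipy.optimize import nnls

energy_loss = np.linspace(6.0, 10.0, 400)            # eV grid
sigma_ev = 0.06                                       # Gaussian resolution (sigma)

# Hypothetical electronic states: (threshold energy, vibrational spacing, FC factors)
states = [
    (6.2, 0.20, [0.5, 0.3, 0.2]),
    (7.4, 0.25, [0.4, 0.4, 0.2]),
    (8.6, 0.15, [0.6, 0.3, 0.1]),
]

def transmission(e_loss, slope=0.03):
    """Hypothetical spectrometer transmission decreasing with energy loss."""
    return 1.0 - slope * (e_loss - e_loss.min())

def design_matrix(e, states, sigma, trans):
    cols = []
    for e_state, spacing, fc in states:
        col = np.zeros_like(e)
        for v, f in enumerate(fc):
            center = e_state + v * spacing
            col += f * np.exp(-0.5 * ((e - center) / sigma) ** 2)
        cols.append(col * trans)                      # transmission enters here
    return np.column_stack(cols)

trans = transmission(energy_loss)
A = design_matrix(energy_loss, states, sigma_ev, trans)

true_intensities = np.array([1.0, 0.6, 0.3])
spectrum = A @ true_intensities + 0.01 * np.random.default_rng(1).normal(size=energy_loss.size)

recovered, _ = nnls(A, spectrum)                      # unfolded relative intensities
print(np.round(recovered, 3))
```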
Survival of Near-Critical Branching Brownian Motion
NASA Astrophysics Data System (ADS)
Berestycki, Julien; Berestycki, Nathanaël; Schweinsberg, Jason
2011-06-01
Consider a system of particles performing branching Brownian motion with negative drift $-\mu$, where $\mu = \sqrt{2-\varepsilon}$, and killed upon hitting zero. Initially there is one particle at $x>0$. Kesten (Stoch. Process. Appl. 7:9-47, 1978) showed that the process survives with positive probability if and only if $\varepsilon>0$. Here we are interested in the asymptotics as $\varepsilon\to 0$ of the survival probability $Q_\mu(x)$. It is proved that if $L=\pi/\sqrt{\varepsilon}$ then for all $x\in\mathbb{R}$, $\lim_{\varepsilon\to 0} Q_\mu(L+x)=\theta(x)\in(0,1)$ exists and is a traveling wave solution of the Fisher-KPP equation. Furthermore, we obtain sharp asymptotics of the survival probability when $x<L$ and $L-x\to\infty$. The proofs rely on probabilistic methods developed by the authors in (Berestycki et al., arXiv:1001.2337, 2010). This completes earlier work by Harris, Harris and Kyprianou (Ann. Inst. Henri Poincaré Probab. Stat. 42:125-145, 2006) and confirms predictions made by Derrida and Simon (Europhys. Lett. 78:60006, 2007), which were obtained using nonrigorous PDE methods.
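For orientation, under the standard normalization of binary branching at unit rate (an assumption not stated in the abstract), the survival probability solves a KPP-type travelling-wave ODE, with the killing at zero entering only through the boundary condition; a sketch of that connection:

```latex
% Sketch only: assumes binary branching at rate 1 and spatial motion with drift $-\mu$.
\[
  \tfrac{1}{2}\,Q_\mu''(x) \;-\; \mu\,Q_\mu'(x) \;+\; Q_\mu(x)\bigl(1 - Q_\mu(x)\bigr) = 0,
  \qquad x > 0, \qquad Q_\mu(0) = 0 .
\]
% With $\mu = \sqrt{2-\varepsilon}$ and the shift $L = \pi/\sqrt{\varepsilon}$, the limit profile
% $\theta(x) = \lim_{\varepsilon\to 0} Q_\mu(L+x)$ satisfies the same ODE with $\mu = \sqrt{2}$,
% i.e. a travelling-wave profile of the Fisher-KPP equation at the critical speed $\sqrt{2}$
% (up to the direction convention).
```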
A Balanced Approach to Adaptive Probability Density Estimation.
Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy
2017-01-01
Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search, which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it to molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
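As a point of reference for the adaptive-smoothing idea, here is a generic nearest-neighbour adaptive-bandwidth kernel density estimate in which the local bandwidth of each sample is set by its distance to its k-th nearest neighbour. It is a textbook-style sketch of the idea, not the BADE algorithm itself; the choice of k and the Gaussian kernel are arbitrary assumptions.

```python
# Generic adaptive-bandwidth KDE: smoothing at each sample is set by its
# k-th-nearest-neighbour distance. A sketch of the general idea, not BADE.

import numpy as np
from scipy.spatial import cKDTree

def adaptive_kde(samples, query_points, k=20):
    """1-D adaptive KDE with per-sample bandwidths from a k-NN search."""
    samples = np.asarray(samples, dtype=float).reshape(-1, 1)
    tree = cKDTree(samples)
    # distance to the k-th nearest neighbour of each sample -> local bandwidth
    knn_dist, _ = tree.query(samples, k=k + 1)        # first hit is the point itself
    bandwidths = np.maximum(knn_dist[:, -1], 1e-12)
    q = np.asarray(query_points, dtype=float).reshape(-1, 1)
    diff = (q - samples.T) / bandwidths               # shape (n_query, n_samples)
    kernels = np.exp(-0.5 * diff**2) / (np.sqrt(2 * np.pi) * bandwidths)
    return kernels.mean(axis=1)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 5000), rng.normal(6, 0.2, 100)])  # uneven sample
grid = np.linspace(-4, 8, 200)
density = adaptive_kde(data, grid)
print(f"integral ~ {np.sum(density) * (grid[1] - grid[0]):.3f}")  # close to 1
```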
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times are better than those of the fastest sequential algorithm, and speed-up scales up to 64 threads.
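To make the data structure concrete, a minimal sequential max-tree construction using union-find (in the style of Berger et al.) is sketched below. It is not the hybrid flooding-and-merging parallel algorithm described above, and the 4-connectivity and grey-level ordering used here are assumptions of the sketch.

```python
# Minimal sequential max-tree via union-find (Berger-style), for illustration
# only; NOT the hybrid parallel algorithm of the paper. 4-connectivity, 2-D image.

import numpy as np

def build_max_tree(img):
    """Return a flat parent array encoding the max-tree of a 2-D grey-level image."""
    h, w = img.shape
    flat = img.ravel()
    n = flat.size
    order = np.argsort(flat, kind="stable")[::-1]     # pixels in decreasing grey level
    parent = np.full(n, -1, dtype=np.int64)
    zpar = np.full(n, -1, dtype=np.int64)             # union-find forest

    def find_root(p):
        while zpar[p] != p:
            zpar[p] = zpar[zpar[p]]                   # path halving
            p = zpar[p]
        return p

    for p in order:
        parent[p] = p
        zpar[p] = p
        y, x = divmod(int(p), w)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                q = ny * w + nx
                if zpar[q] != -1:                     # neighbour already processed
                    r = find_root(q)
                    if r != p:
                        parent[r] = p
                        zpar[r] = p

    for p in order[::-1]:                             # canonicalisation pass
        q = parent[p]
        if flat[parent[q]] == flat[q]:
            parent[p] = parent[q]
    return parent

img = np.array([[1, 1, 0],
                [1, 3, 2],
                [0, 2, 2]], dtype=np.uint8)
print(build_max_tree(img).reshape(img.shape))
```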
Convolutional networks for vehicle track segmentation
Quach, Tu-Thach
2017-08-19
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are unable to capture natural track features such as continuity and parallelism. More powerful, but computationally expensive, models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3-by-3 convolutions to segment vehicle tracks. The design of our networks considers the fact that remote sensing applications tend to operate in low-power settings and have limited training data. As a result, we aim for small, efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our 6-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
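A generic 6-layer dilated 3-by-3 segmentation network in the spirit of the description might look as follows. The channel width, dilation schedule, and loss are assumptions for illustration, not the authors' published configuration.

```python
# Generic 6-layer dilated 3x3 segmentation network (illustrative only; channel
# width, dilation schedule, and loss are assumptions, not the paper's design).

import torch
import torch.nn as nn

class DilatedTrackNet(nn.Module):
    """Six 3x3 convolutions with growing dilation; the last layer emits a per-pixel logit."""
    def __init__(self, in_channels=1, width=32):
        super().__init__()
        dilations = [1, 1, 2, 4, 8, 1]
        layers, c_in = [], in_channels
        for i, d in enumerate(dilations):
            last = i == len(dilations) - 1
            layers.append(nn.Conv2d(c_in, 1 if last else width,
                                    kernel_size=3, padding=d, dilation=d))
            if not last:
                layers.append(nn.ReLU(inplace=True))
            c_in = width
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = DilatedTrackNet()
images = torch.randn(2, 1, 128, 128)                  # batch of single-channel CCD images
logits = model(images)                                 # shape (2, 1, 128, 128)
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits))
print(logits.shape, float(loss))
```

Padding equal to the dilation keeps the spatial size fixed for 3x3 kernels, so the output logits align pixel-for-pixel with the input image.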