Adjustment technique without explicit formation of normal equations /conjugate gradient method/
NASA Technical Reports Server (NTRS)
Saxena, N. K.
1974-01-01
For a simultaneous adjustment of a large geodetic triangulation system, a semi-iterative technique is modified and used successfully. In this semi-iterative technique, known as the conjugate gradient (CG) method, the original observation equations are used directly, so the explicit formation of normal equations is avoided, saving a large amount of computer storage in the case of triangulation systems. The method is suitable even for very poorly conditioned systems, where a solution is obtained only after more iterations. A detailed study of the CG method as applied to large geodetic triangulation systems was carried out, which also considered constraint equations alongside the observation equations. It was programmed and tested on systems ranging from two unknowns and three equations up to 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
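The core idea, running CG on the normal equations while only ever applying A and its transpose so that the normal matrix is never formed, can be sketched in a few lines. Below is a minimal CGLS-style illustration on a toy overdetermined system; it is a sketch of the general technique, not the paper's geodetic code.

```python
import numpy as np

def cgnr(A, b, tol=1e-10, max_iter=None):
    """Solve min ||Ax - b|| by running CG on the normal equations
    A^T A x = A^T b, using only products with A and A^T so the
    (possibly dense) normal matrix is never formed."""
    m, n = A.shape
    max_iter = max_iter or 10 * n
    x = np.zeros(n)
    r = b - A @ x              # residual in observation space
    s = A.T @ r                # normal-equation residual A^T (b - A x)
    p = s.copy()
    gamma = s @ s
    for k in range(max_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, k + 1

# toy overdetermined system: 3 equations, 2 unknowns
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.9])
x, iters = cgnr(A, b)
```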
Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu
2017-01-23
Bistatic forward-looking SAR (BFSAR) is a bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods have been researched thoroughly for stationary scenes. For moving-target imaging with BFSAR, however, the non-cooperative movement of the target induces new issues: (I) large and unknown range cell migration (RCM), including range walk and high-order RCM; (II) spatial variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) that are not only unknown but also nonlinear across different point scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying a keystone transform over the whole received echo; then the relationships among the unknown high-order RCM, the nonlinear spatial variances of the Doppler parameters, and the speed of the mover are established. After that, using an optimization-based nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but the nonlinear spatial variances of the Doppler parameters can also be balanced. Finally, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.
Characterizing unknown systematics in large scale structure surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Nishant; Ho, Shirley; Myers, Adam D.
Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations
NASA Astrophysics Data System (ADS)
Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane
2018-04-01
Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.
Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns
NASA Technical Reports Server (NTRS)
Shaeffer, John
2008-01-01
Matrix methods for solving integral equations via direct LU factorization are presently limited to weeks or months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes up to one million unknowns with thousands of right-hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by exploiting the numerical low-rank nature of spatially blocked unknowns, using the Adaptive Cross Approximation to compress the rank-deficient blocks of the system Z matrix, the L and U factors, the right-hand-side forcing function, and the final current solution. This compressed matrix solution is applied to a frequency-domain EM solution of Maxwell's equations using a standard Method of Moments approach. Compressed matrix storage and operation counts lead to orders-of-magnitude reductions in memory and run time.
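The compression step can be illustrated with a minimal Adaptive Cross Approximation: a rank-deficient block is approximated from a few sampled rows and columns with partial pivoting, never forming the full block. The kernel, sizes, and stopping tolerance below are illustrative, not from the report.

```python
import numpy as np

def aca(get_row, get_col, tol=1e-6, max_rank=50):
    """Adaptive Cross Approximation with partial pivoting: build a
    low-rank factorization A ~= U @ V by sampling individual rows and
    columns of a block, never forming the block itself."""
    U, V, used = [], [], set()
    i = 0                                   # first row pivot
    for _ in range(max_rank):
        used.add(i)
        row = get_row(i).astype(float)
        for u, v in zip(U, V):              # subtract the rank-k approximation so far
            row -= u[i] * v
        j = int(np.argmax(np.abs(row)))     # column pivot
        if abs(row[j]) < 1e-14:
            break
        v = row / row[j]
        col = get_col(j).astype(float)
        for u, w in zip(U, V):
            col -= w[j] * u
        U.append(col)
        V.append(v)
        if np.linalg.norm(col) * np.linalg.norm(v) < tol:
            break                           # crude absolute stopping test
        cand = np.abs(col)
        cand[list(used)] = -1.0             # never reuse a pivot row
        i = int(np.argmax(cand))
    return np.array(U).T, np.array(V)

# a smooth far-field kernel block is numerically rank deficient
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(10.0, 11.0, 200)
K = 1.0 / np.abs(x[:, None] - y[None, :])
U, V = aca(lambda i: K[i, :], lambda j: K[:, j])
rel_err = np.linalg.norm(K - U @ V) / np.linalg.norm(K)
```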
Gabrilovich, Evgeniy
2013-01-01
Background: Postmarket drug safety surveillance largely depends on spontaneous reports by patients and health care providers; hence, less common adverse drug reactions (especially those caused by long-term exposure, multidrug treatments, or those specific to special populations) often elude discovery. Objective: Here we propose a low-cost, fully automated method for continuous monitoring of adverse drug reactions in single drugs and in combinations thereof, and demonstrate the discovery of heretofore-unknown ones. Methods: We used aggregated search data of large populations of Internet users to extract information related to drugs and adverse reactions to them, and correlated these data over time. We further extended our method to identify adverse reactions to combinations of drugs. Results: We validated our method by showing high correlations of our findings with known adverse drug reactions (ADRs). However, although acute early-onset drug reactions are more likely to be reported to regulatory agencies, we show that less acute later-onset ones are better captured in Web search queries. Conclusions: Our method is advantageous in identifying previously unknown adverse drug reactions. These ADRs should be considered as candidates for further scrutiny by medical regulatory authorities, for example, through phase 4 trials. PMID:23778053
Numerical Methods for 2-Dimensional Modeling
1980-12-01
high-order finite element methods, and a multidimensional version of the method of lines, both utilizing an optimized stiff integrator for the time integration. The finite element methods have proved disappointing, but the method of lines has provided an unexpectedly large gain in speed. Two diffusion problems with the same number of unknowns (a 21 x 41 grid), solved by second-order finite element methods, took over seven minutes on the Cray-1
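A small method-of-lines sketch in the spirit of the report: semi-discretize a diffusion problem in space and hand the resulting stiff ODE system to a stiff integrator (here SciPy's BDF as a stand-in for the report's optimized stiff integrator; grid size and final time are illustrative).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for u_t = u_xx on [0, 1] with u(0) = u(1) = 0:
# discretize in space, integrate the stiff ODE system in time with BDF.
N = 41
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

def rhs(t, u):
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2  # central differences
    return du                                            # boundary values pinned at 0

u0 = np.sin(np.pi * x)                                   # initial profile
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-8, atol=1e-10)
# exact solution decays as exp(-pi^2 t); check the final profile
err = np.max(np.abs(sol.y[:, -1] - np.exp(-np.pi**2 * 0.1) * u0))
```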
NASA Astrophysics Data System (ADS)
Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng
2018-03-01
Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using traditional difference method (ny forward runs). Although employment of the adjoint method can reduce the cost to nd forward runs, the adjoint model requires intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using Karhunen-Loeve Expansion (KLE) truncated to nkl order, and it calculates the directional sensitivities (in the directions of nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of unknowns is updated every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
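The covariance-reduction step can be illustrated compactly: a truncated Karhunen-Loeve expansion keeps only the nkl leading eigenpairs of the prior covariance, so a correlated field with ny unknowns is parameterized by far fewer independent coefficients. A toy 1-D sketch with an assumed exponential covariance (not the paper's 3-D HT configuration):

```python
import numpy as np

# Truncated KLE: an ny-dimensional correlated field is described by
# nkl << ny independent coefficients.
ny, nkl = 500, 20
xs = np.linspace(0.0, 1.0, ny)
C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 0.2)   # exponential covariance

w, Phi = np.linalg.eigh(C)                 # eigenvalues in ascending order
w, Phi = w[::-1], Phi[:, ::-1]             # sort descending
lam, phi = w[:nkl], Phi[:, :nkl]           # truncate to nkl modes

# sample one realization and measure how much variance the truncation keeps
xi = np.random.standard_normal(nkl)
field = phi @ (np.sqrt(lam) * xi)
energy = lam.sum() / w.sum()               # fraction of prior variance retained
```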
Mapping of unknown industrial plant using ROS-based navigation mobile robot
NASA Astrophysics Data System (ADS)
Priyandoko, G.; Ming, T. Y.; Achmad, M. S. H.
2017-10-01
This research examines how humans work with a teleoperated unmanned mobile robot to inspect an industrial plant area, resulting in a 2D/3D map for further critical evaluation. The experiment focuses on two parts: the way the human and robot perform remote interactions using a robust method, and the way the robot perceives the surrounding environment as a 2D/3D perspective map. ROS (robot operating system) was utilized as a tool during development and implementation, providing a robust data communication method in the form of messages and topics. RGBD SLAM performs the visual mapping function to construct the 2D/3D map using a Kinect sensor. The results showed that the teleoperated mobile robot system successfully extends human perspective for remote surveillance of a large industrial plant area. It was concluded that the proposed work is a robust solution for large-scale mapping within an unknown building.
Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.
Chen, Mou; Tao, Gang
2016-08-01
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, a radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as compounded disturbances, corresponding disturbance observers are developed for their estimation. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis, and satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results for a mass-spring-damper system illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme.
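The RBFNN approximator at the heart of such schemes is simple to sketch. In the paper the weights would be updated online by an adaptive law; the toy below instead fits them by batch least squares, purely to show the approximation structure f(x) ≈ Wᵀ S(x). Centers, widths, and the stand-in "unknown" function are assumptions.

```python
import numpy as np

# RBF network approximation of an unknown scalar function: f(x) ~= W^T S(x)
# with Gaussian basis functions S.
centers = np.linspace(-3.0, 3.0, 15)          # basis-function centers (assumed)
width = 0.5

def S(x):
    x = np.atleast_1d(x)
    return np.exp(-(x[:, None] - centers[None, :])**2 / (2.0 * width**2))

f = lambda x: x * np.sin(2.0 * x)             # stand-in "unknown" interaction function
xs = np.linspace(-3.0, 3.0, 200)
W, *_ = np.linalg.lstsq(S(xs), f(xs), rcond=None)   # batch fit, not the adaptive law
approx_err = np.max(np.abs(S(xs) @ W - f(xs)))
```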
A Mobile Anchor Assisted Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks
Rodrigues, Joel J. P. C.
2014-01-01
Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints of cost and power consumption make it infeasible to equip each sensor node in the network with a global positioning system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use several mobile anchors, equipped with GPS units, moving among unknown nodes and periodically broadcasting their current locations to help nearby unknown nodes with localization. This paper proposes a mobile anchor assisted localization algorithm based on regular hexagon (MAALRH) in two-dimensional WSNs, which can cover the whole monitoring area with a boundary compensation method. Unknown nodes calculate their positions by trilateration. We compare the MAALRH with the HILBERT, CIRCLES, and S-CURVES algorithms in terms of localization ratio, localization accuracy, and path length. Simulations show that the MAALRH can achieve high localization ratio and localization accuracy when the communication range is not smaller than the trajectory resolution. PMID:25133212
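The trilateration step used by the unknown nodes can be sketched directly: subtracting one range equation from the others removes the quadratic terms and leaves a small linear least-squares solve. Anchor positions and noise level below are illustrative, not from the paper.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized trilateration: subtract the first range equation from
    the others, leaving a linear least-squares system for the position."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# three anchor broadcast positions and noisy range measurements
rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.66]])
true = np.array([4.0, 3.0])
dists = np.linalg.norm(anchors - true, axis=1) + rng.normal(0.0, 0.05, 3)
est = trilaterate(anchors, dists)
```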
Si, Wenjie; Dong, Xunde; Yang, Feifei
2018-03-01
This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. Then, an appropriate Lyapunov-Krasovskii functional and the properties of hyperbolic tangent functions are used to deal with the unknown unmatched time-delay interactions of high-order large-scale systems for the first time. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed that decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges to a small neighborhood of zero. A simulation example further shows the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Handling qualities of large flexible control-configured aircraft
NASA Technical Reports Server (NTRS)
Swaim, R. L.
1979-01-01
The approach taken in this analytical study of flexible-airplane longitudinal handling qualities was to parametrically vary the natural frequencies of two symmetric elastic modes to induce mode interactions with the rigid-body dynamics. Since the structure of the pilot model was unknown for such dynamic interactions, the optimal control pilot modeling method is being applied in conjunction with a pilot rating method.
Fission meter and neutron detection using poisson distribution comparison
Rowland, Mark S; Snyderman, Neal J
2014-11-18
A neutron detector system and method for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The system counts neutrons from the unknown source and detects excess grouped neutrons to identify fission in the source. Comparison of the observed neutron count distribution with a Poisson distribution distinguishes fissile material from non-fissile material.
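The Poisson-comparison idea can be illustrated with a Feynman-style variance-to-mean test on time-gated counts. This is a sketch of the general principle on synthetic arrival times, not the patent's specific processing chain: a purely random source gives variance/mean near 1, while correlated fission-chain neutrons push it above 1.

```python
import numpy as np

def feynman_y(times, gate):
    """Group arrival times into fixed gates and compare the count
    distribution to Poisson: Y = var/mean - 1 is ~0 for a random source
    and > 0 when correlated neutrons arrive in groups."""
    n_gates = int(times.max() / gate)
    counts = np.histogram(times, bins=n_gates, range=(0.0, n_gates * gate))[0]
    return counts.var() / counts.mean() - 1.0

rng = np.random.default_rng(0)

# Poisson (background-like) source: Y should be ~0
poisson_times = np.cumsum(rng.exponential(1e-4, 200_000))
y_bg = feynman_y(poisson_times, gate=1e-3)

# crude correlated source: bursts of several simultaneous neutrons
burst_t = np.cumsum(rng.exponential(5e-4, 20_000))
mult = rng.poisson(3.0, burst_t.size) + 1
fission_times = np.repeat(burst_t, mult)
y_fis = feynman_y(fission_times, gate=1e-3)   # noticeably > 0
```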
Optimization of high-throughput nanomaterial developmental toxicity testing in zebrafish embryos
Nanomaterial (NM) developmental toxicities are largely unknown. With an extensive variety of NMs available, high-throughput screening methods may be of value for initial characterization of potential hazard. We optimized a zebrafish embryo test as an in vivo high-throughput assay...
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
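A minimal stand-in for the distributed scheme: block-Jacobi iteration on the normal equations of the global weighted least-squares problem, where each sub-system repeatedly re-solves its own block of unknowns using only the latest values communicated by neighbours. Convergence of this toy version relies on block diagonal dominance, which the demo enforces artificially; the paper's algorithm, with its scaling parameter and preconditioning, is more general.

```python
import numpy as np

rng = np.random.default_rng(0)
p, nb = 3, 4                              # block size, number of sub-systems
n = p * nb
H = rng.standard_normal((3 * n, n))       # global measurement matrix
y = rng.standard_normal(3 * n)
A, c = H.T @ H, H.T @ y                   # normal equations A x = c
A += 5.0 * n * np.eye(n)                  # enforce dominance for the demo

x = np.zeros(n)
for _ in range(200):                      # block-Jacobi sweeps
    x_new = x.copy()
    for i in range(nb):
        s = slice(i * p, (i + 1) * p)
        # c_i minus the coupling terms contributed by the other blocks
        r = c[s] - A[s, :] @ x + A[s, s] @ x[s]
        x_new[s] = np.linalg.solve(A[s, s], r)
    x = x_new
err = np.linalg.norm(A @ x - c)           # approaches 0 as sweeps accumulate
```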
Probabilistic double guarantee kidnapping detection in SLAM.
Tian, Yang; Ma, Shugen
2016-01-01
For determining whether kidnapping has happened, and which type of kidnapping it is, while a robot performs autonomous tasks in an unknown environment, a double guarantee kidnapping detection (DGKD) method has been proposed. DGKD performs well in relatively small environments; however, our recent work found a limitation of DGKD in large-scale environments. In order to increase the adaptability of DGKD to large-scale environments, an improved method called probabilistic double guarantee kidnapping detection is proposed in this paper, combining the probability of features' positions with the robot's posture. Simulation results demonstrate the validity and accuracy of the proposed method.
Evaluation of Two PCR-based Swine-specific Fecal Source Tracking Assays (Abstract)
Several PCR-based methods have been proposed to identify swine fecal pollution in environmental waters. However, the utility of these assays in identifying swine fecal contamination on a broad geographic scale is largely unknown. In this study, we evaluated the specificity, distr...
Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.
Choi, Yun Ho; Yoo, Sung Jin
2016-12-01
A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.
The development of methods and processes to mass produce nanocomponents, materials with characteristic lengths less than 100 nm, has led to the emergence of a large number of consumer goods (nanoproducts) containing these materials. The unknown health effects and risks associate...
Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei
2016-01-01
Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area with floating nodes, which results in a lower beacon-node distribution density, longer localization times, and more energy consumption. Most current localization algorithms in this field do not take sufficient account of node mobility. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and then their locations can be predicted. The range-based PSO algorithm entails considerable energy consumption and fairly high computational complexity, but because the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes remains succinct, and the method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method achieves higher localization accuracy and better localization coverage rate than some other widely used localization methods in this field. PMID:26861348
Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
2004-01-01
A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.
NASA Astrophysics Data System (ADS)
Xue, Zhaohui; Du, Peijun; Li, Jun; Su, Hongjun
2017-02-01
The generally limited availability of training data relative to the usually high data dimension poses a great challenge to accurate classification of hyperspectral imagery, especially for identifying crops characterized by highly correlated spectra. Traditional parametric classification models are problematic due to the need for non-singular class-specific covariance matrices. In this research, a novel sparse graph regularization (SGR) method is presented, aiming at robust crop mapping using hyperspectral imagery with very few in situ data. The core of SGR lies in propagating labels from known data to unknown data, driven by: (1) the fraction matrix generated for the large unknown data set by using an effective sparse representation algorithm with the few training data serving as the dictionary; (2) the prediction function estimated for the few training data by formulating a regularization model based on a sparse graph. The labels of the large unknown data set can then be obtained by maximizing the posterior probability distribution based on these two ingredients. SGR is discriminative, data-adaptive, robust to noise, and efficient, which distinguishes it from previously proposed approaches, and it has high potential for discriminating crops, especially when facing insufficient training data and a high-dimensional spectral space. The study area is located in the Zhangye basin in the middle reaches of the Heihe watershed, Gansu, China, where eight crop types were mapped with Compact Airborne Spectrographic Imager (CASI) and Shortwave Infrared Airborne Spectrographic Imager (SASI) hyperspectral data. Experimental results demonstrate that the proposed method significantly outperforms other traditional and state-of-the-art methods.
Germain, Ronald N
2017-10-16
A dichotomy exists in the field of vaccinology about the promise versus the hype associated with application of "systems biology" approaches to rational vaccine design. Some feel it is the only way to efficiently uncover currently unknown parameters controlling desired immune responses or discover what elements actually mediate these responses. Others feel that traditional experimental, often reductionist, methods for incrementally unraveling complex biology provide a more solid way forward, and that "systems" approaches are costly ways to collect data without gaining true insight. Here I argue that both views are inaccurate. This is largely because of confusion about what can be gained from classical experimentation versus statistical analysis of large data sets (bioinformatics) versus methods that quantitatively explain emergent properties of complex assemblies of biological components, with the latter reflecting what was previously called "physiology." Reductionist studies will remain essential for generating detailed insight into the functional attributes of specific elements of biological systems, but such analyses lack the power to provide a quantitative and predictive understanding of global system behavior. But by employing (1) large-scale screening methods for discovery of unknown components and connections in the immune system (omics), (2) statistical analysis of large data sets (bioinformatics), and (3) the capacity of quantitative computational methods to translate these individual components and connections into models of emergent behavior (systems biology), we will be able to better understand how the overall immune system functions and to determine with greater precision how to manipulate it to produce desired protective responses. Copyright © 2017 Cold Spring Harbor Laboratory Press; all rights reserved.
ERIC Educational Resources Information Center
Marty, Phillip J.; McDermott, Robert J.
Informational pamphlets about breast self-examination (BSE) and testicular self-examination (TSE) are widely distributed in health care settings, but the pamphlets' effectiveness in promoting knowledge and positive attitudes about these early cancer detection procedures is largely unknown. A study compared pamphlets with alternative methods of…
Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim
2013-01-01
Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…
USDA-ARS's Scientific Manuscript database
An emerging poultry meat quality concern is associated with chicken breast fillets having an uncharacteristically hard or rigid feel (called the wooden breast condition). The cause of the wooden breast condition is still largely unknown, and there is no single objective evaluation method or system k...
NASA Astrophysics Data System (ADS)
Scheingraber, Christoph; Käser, Martin; Allmann, Alexander
2017-04-01
Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention; however, epistemic location uncertainty has so far not been the focus of much research. We propose a new framework for efficient treatment of location uncertainty. To demonstrate the usefulness of this novel method, a large number of synthetic portfolios resembling real-world portfolios are systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte-Carlo variance reduction techniques. The performance of each of the proposed criteria is analyzed.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, while Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
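The reduced-order idea behind Methods 1-3 can be sketched as follows: parameterize the unknown signal by a few leading eigenvectors of an assumed prior covariance and sample only those coefficients, here with a random-walk Metropolis sampler on a toy convolution model. Sizes, step sizes, and noise level are illustrative, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 60, 5
xs = np.arange(n)
C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 8.0)       # assumed prior covariance
w, V = np.linalg.eigh(C)
B = V[:, -k:] * np.sqrt(w[-k:])                            # k leading KLE/PCA modes

kern = np.exp(-0.5 * (np.arange(-5, 6) / 2.0)**2)
kern /= kern.sum()
fwd = lambda m: np.convolve(m, kern, mode="same")          # convolution forward model

m_true = B @ rng.standard_normal(k)
data = fwd(m_true) + 0.02 * rng.standard_normal(n)
logpost = lambda a: (-0.5 * np.sum((data - fwd(B @ a))**2) / 0.02**2
                     - 0.5 * a @ a)                        # Gaussian prior on coefficients

a = np.zeros(k)
lp = logpost(a)
chain = []
for _ in range(5000):                                      # random-walk Metropolis
    prop = a + 0.05 * rng.standard_normal(k)
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        a, lp = prop, lp_prop
    chain.append(a.copy())
m_post = B @ np.mean(chain[1000:], axis=0)                 # posterior-mean model
```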
A transient response analysis of the space shuttle vehicle during liftoff
NASA Technical Reports Server (NTRS)
Brunty, J. A.
1990-01-01
A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods: the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both CMS methods give similar results, and a substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when computing a large number of Lanczos vectors, the input/output computer time grows and increases the overall run time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.
NASA Technical Reports Server (NTRS)
Green, M. J.; Nachtsheim, P. R.
1972-01-01
A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
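The related initial-value strategy is the classic shooting method; the sketch below applies it to the Blasius boundary-layer equation, with a generic 1-D root finder standing in for the paper's inverse-interpolation and least-squares machinery (the truncation length L and bracketing interval are assumptions).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting for a boundary-layer BVP (Blasius: f''' + 0.5 f f'' = 0 with
# f(0) = f'(0) = 0 and f'(inf) = 1): guess the missing initial condition
# f''(0), integrate the initial-value problem, and adjust the guess until
# the asymptotic boundary condition is met.
def rhs(t, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def miss(s, L=10.0):
    sol = solve_ivp(rhs, (0.0, L), [0.0, 0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1] - 1.0          # f'(L) should approach 1

s_star = brentq(miss, 0.1, 1.0)        # known value is about 0.33206
```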
ERIC Educational Resources Information Center
Harrop, Clare; Tu, Nicole; Landa, Rebecca; Kasier, Ann; Kasari, Connie
2018-01-01
Sensory behaviors are widely reported in autism spectrum disorder (ASD). However, the impact of these behaviors on families remains largely unknown. This study explored how caregivers of minimally verbal children with ASD responded to their child's sensory behaviors. Using a mixed-methods approach, we examined two variables for each endorsed child…
NASA Astrophysics Data System (ADS)
Fustes, D.; Manteiga, M.; Dafonte, C.; Arcay, B.; Ulla, A.; Smith, K.; Borrachero, R.; Sordo, R.
2013-11-01
Aims: A new method applied to the segmentation and further analysis of the outliers resulting from the classification of astronomical objects in large databases is discussed. The method is being used in the framework of the Gaia satellite Data Processing and Analysis Consortium (DPAC) activities to prepare automated software tools that will be used to derive basic astrophysical information to be included in the final Gaia archive. Methods: Our algorithm has been tested by means of simulated Gaia spectrophotometry, which is based on SDSS observations and theoretical spectral libraries covering a wide sample of astronomical objects. Self-organizing map networks are used to organize the information in clusters of objects, as homogeneously as possible according to their spectral energy distributions, and to project them onto a 2D grid where the data structure can be visualized. Results: We demonstrate the usefulness of the method by analyzing the spectra that were rejected by the SDSS spectroscopic classification pipeline and thus classified as "UNKNOWN". First, our method can help distinguish between astrophysical objects and instrumental artifacts. Additionally, the application of our algorithm to SDSS objects of unknown nature has allowed us to identify classes of objects with similar astrophysical natures. The method also allows for the potential discovery of hundreds of new objects, such as white dwarfs and quasars. The proposed method is therefore very promising for data exploration and knowledge discovery in very large astronomical databases, such as the archive from the upcoming Gaia mission.
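A minimal self-organizing map conveys the core mechanism: prototype vectors on a 2-D grid are pulled toward each presented sample with a shrinking neighbourhood, so similar inputs end up in nearby cells. This toy version (synthetic 8-D data, illustrative learning schedule) is far smaller than the Gaia/SDSS application.

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0):
    """Minimal SOM: project high-dimensional vectors onto a 2-D grid of
    prototypes while preserving topology."""
    rng = np.random.default_rng(0)
    h, w = grid
    W = rng.standard_normal((h, w, data.shape[1]))
    gy, gx = np.mgrid[0:h, 0:w]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(W - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                            # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5                # shrinking neighbourhood
        nb = np.exp(-((gy - by)**2 + (gx - bx)**2) / (2.0 * sigma**2))
        W += lr * nb[:, :, None] * (x - W)                 # pull neighbourhood toward x
    return W

# two well-separated synthetic "classes"; outliers would map far from both
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(5, 1, (200, 8))])
W = train_som(data)
```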
61. Picking Floor, Large Pile of Waste Rock and Wood ...
61. Picking Floor, Large Pile of Waste Rock and Wood date unknown Historic Photograph, Photographer Unknown; Collection of William Everett, Jr. (Wilkes-Barre, PA), photocopy by Joseph E.B. Elliot - Huber Coal Breaker, 101 South Main Street, Ashley, Luzerne County, PA
Clustering redshift distributions for the Dark Energy Survey
NASA Astrophysics Data System (ADS)
Helsby, Jennifer
Accurate determination of photometric redshifts and their errors is critical for large scale structure and weak lensing studies for constraining cosmology from deep, wide imaging surveys. Current photometric redshift methods suffer from bias and scatter due to incomplete training sets. Exploiting the clustering between a sample of galaxies for which we have spectroscopic redshifts and a sample of galaxies for which the redshifts are unknown can allow us to reconstruct the true redshift distribution of the unknown sample. Here we use this method in both simulations and early data from the Dark Energy Survey (DES) to determine the true redshift distributions of galaxies in photometric redshift bins. We find that cross-correlating with the spectroscopic samples currently used for training provides a useful test of photometric redshifts and provides reliable estimates of the true redshift distribution in a photometric redshift bin. We discuss the use of the cross-correlation method in validating template- or learning-based approaches to redshift estimation and its future use in Stage IV surveys.
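The essence of the clustering-redshift technique can be sketched with a toy pair-count estimator: the excess of close angular pairs between the unknown sample and each spectroscopic slice traces the unknown sample's redshift distribution, up to bias and geometry factors that a real analysis must model. Box size, angular scale, and sample sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def pair_excess(pos_u, pos_s, box=10.0, theta=0.1):
    """Cross-pair count within separation theta, normalized by the
    expectation for unclustered points in a box of side `box`."""
    d = np.linalg.norm(pos_u[:, None, :] - pos_s[None, :, :], axis=2)
    expected = len(pos_u) * len(pos_s) * np.pi * theta**2 / box**2
    return np.sum(d < theta) / expected - 1.0

# three "spectroscopic slices", each clustered around its own centres;
# the unknown photometric sample is drawn from slice 1's clusters
centres = [rng.uniform(0, 10, (30, 2)) for _ in range(3)]
def draw(cl, n):
    return cl[rng.integers(len(cl), size=n)] + rng.normal(0, 0.05, (n, 2))
spec = [draw(c, 1000) for c in centres]
unknown = draw(centres[1], 1500)

dndz = np.array([pair_excess(unknown, s) for s in spec])
dndz = np.clip(dndz, 0.0, None)
dndz /= dndz.sum()          # normalized estimate: peaks strongly at slice 1
```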
ERIC Educational Resources Information Center
Roberts, Andrea L.; Rosario, Margaret; Slopen, Natalie; Calzo, Jerel P.; Austin, S. Bryn
2013-01-01
Objective: Childhood gender nonconformity has been associated with increased risk of caregiver abuse and bullying victimization outside the home, but it is unknown whether as a consequence children who are nonconforming are at higher risk of depressive symptoms. Method: Using data from a large national cohort (N = 10,655), we examined differences…
ERIC Educational Resources Information Center
Rey, Jason Goering
2010-01-01
Online education is a modality of teaching that has proliferated throughout higher education in such a rapid form and without any guidelines that its quality and merit is largely unknown, hotly debated, and still evolving. Institutions have used online education as a method of reducing costs and increasing enrollments and students have flocked to…
Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.
Lijun Long; Jun Zhao
2017-04-01
In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched to the switched setting by designing switched first-order filters, which overcomes the problem of multiple 'explosions of complexity.' A dual common coordinate transformation of all subsystems is also exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers for the subsystems are explicitly designed. It is proved that the approach guarantees semiglobal uniform ultimate boundedness of all signals in the closed-loop system under a class of switching signals with average dwell time, and convergence of the tracking errors to a small neighborhood of the origin. A two-inverted-pendulum system is provided to demonstrate the effectiveness of the proposed method.
13-fold resolution gain through turbid layer via translated unknown speckle illumination
Guo, Kaikai; Zhang, Zibang; Jiang, Shaowei; Liao, Jun; Zhong, Jingang; Eldar, Yonina C.; Zheng, Guoan
2017-01-01
Fluorescence imaging through a turbid layer holds great promise for various biophotonics applications. Conventional wavefront shaping techniques aim to create and scan a focus spot through the turbid layer; finding the correct input wavefront without direct access to the target plane remains a critical challenge. In this paper, we explore a new strategy for imaging through a turbid layer with a large field of view. In our setup, a fluorescence sample is sandwiched between two turbid layers. Instead of generating one focus spot via wavefront shaping, we use an unshaped beam to illuminate the turbid layer and generate an unknown speckle pattern at the target plane over a wide field of view. By tilting the input wavefront, we raster scan the unknown speckle pattern via the memory effect and capture the corresponding low-resolution fluorescence images through the turbid layer. Different from wavefront-shaping-based single-spot scanning, the proposed approach employs many spots (i.e., speckles) in parallel to extend the field of view. Based on all captured images, we jointly recover the fluorescence object, the unknown optical transfer function of the turbid layer, the translation step size, and the unknown speckle pattern. Without direct access to the object plane or knowledge of the turbid layer, we demonstrate a 13-fold resolution gain through the turbid layer using the reported strategy. We also demonstrate the use of this technique to improve the resolution of a low-numerical-aperture objective lens, obtaining both a large field of view and high resolution at the same time. The reported method provides insight for developing new fluorescence imaging platforms and may find applications in deep-tissue imaging. PMID:29359102
Experiments In Characterizing Vibrations Of A Structure
NASA Technical Reports Server (NTRS)
Yam, Yeung; Hadaegh, Fred Y.; Bayard, David S.
1993-01-01
Report discusses experiments conducted to test methods of identification of vibrational and coupled rotational/vibrational modes of flexible structure. Report one in series that chronicle development of integrated system of methods, sensors, actuators, analog and digital signal-processing equipment, and algorithms to suppress vibrations in large, flexible structure even when dynamics of structure partly unknown and/or changing. Two prior articles describing aspects of research, "Autonomous Frequency-Domain Indentification" (NPO-18099), and "Automated Characterization Of Vibrations Of A Structure" (NPO-18141).
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
A numerical method for measuring capacitive soft sensors through one channel
NASA Astrophysics Data System (ADS)
Tairych, Andreas; Anderson, Iain A.
2018-03-01
Soft capacitive stretch sensors are well suited for unobtrusive wearable body motion capture. Conventional sensing methods measure sensor capacitances through separate channels; in sensing garments with many sensors, this results in high wiring complexity and a large footprint of rigid sensing circuit boards. We have developed a more efficient sensing method that detects multiple sensors through only one channel and one set of wires. It is based on an R-C transmission line assembled from capacitive conductive-fabric stretch sensors and external resistors. The unknown capacitances are identified by solving a system of nonlinear equations. These equations are established by modelling and continuously measuring transmission line reactances at different frequencies. Solving these equations numerically with a Newton-Raphson solver for the unknown capacitances enables real-time reading of all sensors. The method was verified with a three-sensor prototype capable of detecting both individually and simultaneously stretched sensors. Instead of using three channels and six wires to detect the sensors, the task was achieved with only one channel and two wires.
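A sketch of the one-channel identification idea, under an assumed ladder topology (shunt sensor capacitors separated by known series resistors, with illustrative component values, not the paper's circuit): model the line's input impedance at several probe frequencies and solve the resulting nonlinear system for the capacitances with a Newton-type least-squares solver.

```python
import numpy as np
from scipy.optimize import least_squares

R = [100e3, 100e3, 100e3]        # known series resistors (ohms); illustrative

def z_in(C, w):
    """Input impedance of the R-C ladder at angular frequency w,
    computed by recursion from the far end of the line."""
    z = R[2] + 1.0 / (1j * w * C[2])
    z = R[1] + 1.0 / (1j * w * C[1] + 1.0 / z)
    return R[0] + 1.0 / (1j * w * C[0] + 1.0 / z)

freqs = 2.0 * np.pi * np.array([200.0, 1e3, 5e3, 2e4])   # probe frequencies (rad/s)
C_true = np.array([1.0e-9, 2.2e-9, 3.3e-9])
meas = np.array([z_in(C_true, w) for w in freqs])         # "measured" impedances

def resid(c_nF):                  # solve in nF units to keep the problem well scaled
    z = np.array([z_in(c_nF * 1e-9, w) for w in freqs])
    return np.concatenate([(z - meas).real, (z - meas).imag])

fit = least_squares(resid, x0=np.array([2.0, 2.0, 2.0]), method="lm")
C_est = fit.x * 1e-9              # recovers C_true in this noise-free demo
```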
Dykema, John A; Keith, David W; Anderson, James G; Weisenstein, Debra
2014-12-28
Although solar radiation management (SRM) through stratospheric aerosol methods has the potential to mitigate impacts of climate change, our current knowledge of stratospheric processes suggests that these methods may entail significant risks. In addition to the risks associated with current knowledge, the possibility of 'unknown unknowns' exists that could significantly alter the risk assessment relative to our current understanding. While laboratory experimentation can improve the current state of knowledge and atmospheric models can assess large-scale climate response, they cannot capture possible unknown chemistry or represent the full range of interactive atmospheric chemical physics. Small-scale, in situ experimentation under well-regulated circumstances can begin to remove some of these uncertainties. This experiment, provisionally titled the stratospheric controlled perturbation experiment, is under development and will only proceed with transparent and predominantly governmental funding and independent risk assessment. We describe the scientific and technical foundation for performing, under external oversight, small-scale experiments to quantify the risks posed by SRM to activation of halogen species and subsequent erosion of stratospheric ozone. The paper's scope includes selection of the measurement platform, relevant aspects of stratospheric meteorology, operational considerations and instrument design and engineering.
Supervised de novo reconstruction of metabolic pathways from metabolome-scale compound sets
Kotera, Masaaki; Tabei, Yasuo; Yamanishi, Yoshihiro; Tokimatsu, Toshiaki; Goto, Susumu
2013-01-01
Motivation: The metabolic pathway is an important biochemical reaction network involving enzymatic reactions among chemical compounds. However, it is assumed that a large number of metabolic pathways remain unknown, and many reactions are still missing even in known pathways. Therefore, the most important challenge in metabolomics is the automated de novo reconstruction of metabolic pathways, which includes the elucidation of previously unknown reactions to bridge the metabolic gaps. Results: In this article, we develop a novel method to reconstruct metabolic pathways from a large compound set in the reaction-filling framework. We define feature vectors representing the chemical transformation patterns of compound–compound pairs in enzymatic reactions using chemical fingerprints. We apply a sparsity-induced classifier to learn what we refer to as 'enzymatic-reaction likeness', i.e. whether compound pairs are possibly converted to each other by enzymatic reactions. The originality of our method lies in the search for potential reactions among many compounds at a time, in the extraction of reaction-related chemical transformation patterns and in the large-scale applicability owing to the computational efficiency. In the results, we demonstrate the usefulness of our proposed method on the de novo reconstruction of 134 metabolic pathways in the Kyoto Encyclopedia of Genes and Genomes (KEGG). Our comprehensively predicted reaction networks of 15 698 compounds enable us to suggest many potential pathways and to increase research productivity in metabolomics. Availability: Software is available upon request. Supplementary materials are available at http://web.kuicr.kyoto-u.ac.jp/supp/kot/ismb2013/. Contact: goto@kuicr.kyoto-u.ac.jp PMID:23812977
Methane Leak Detection and Emissions Quantification with UAVs
NASA Astrophysics Data System (ADS)
Barchyn, T.; Fox, T. A.; Hugenholtz, C.
2016-12-01
Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, 'drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges. UAVs typically collect large volumes of data that are close to source (due to limited range) and often lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position ('leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind direction data, distance downwind from source, and the non-Gaussian shape of close range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with close range plumes as measured with UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
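The iterative-fit localization can be sketched with a simple Gaussian plume forward model whose source position and emission rate are recovered by nonlinear least squares from synthetic transect measurements. The dispersion coefficients, flight geometry, and noise level are illustrative assumptions, not those of the Alberta field study.

```python
import numpy as np
from scipy.optimize import least_squares

def plume(x, y, z, q, u, h=2.0):
    """Ground-reflected Gaussian plume concentration for a point source of
    strength q at height h, wind speed u along +x (illustrative sigmas)."""
    x = np.maximum(x, 1e-3)                    # model valid downwind only
    sy = 0.22 * x / np.sqrt(1.0 + 1e-4 * x)
    sz = 0.20 * x
    return (q / (2.0 * np.pi * sy * sz * u)
            * np.exp(-0.5 * (y / sy)**2)
            * (np.exp(-0.5 * ((z - h) / sz)**2) + np.exp(-0.5 * ((z + h) / sz)**2)))

# synthetic UAV transect measurements downwind of an unknown source
rng = np.random.default_rng(3)
u_wind, src, q_true = 4.0, np.array([0.0, 5.0]), 0.5      # wind, source (x0, y0), rate
pts = np.column_stack([rng.uniform(40, 120, 300), rng.uniform(-30, 40, 300)])
c = plume(pts[:, 0] - src[0], pts[:, 1] - src[1], 3.0, q_true, u_wind)
c += rng.normal(0.0, 0.02 * c.max(), c.size)

def resid(p):                                  # p = [x0, y0, q]
    return plume(pts[:, 0] - p[0], pts[:, 1] - p[1], 3.0, p[2], u_wind) - c

fit = least_squares(resid, x0=[10.0, 0.0, 1.0],
                    bounds=([-50.0, -50.0, 0.0], [39.0, 50.0, 10.0]))
x0_est, y0_est, q_est = fit.x
```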
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter estimates the state and the unknown measurement biases simultaneously in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its capability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Domain Derivatives in Dielectric Rough Surface Scattering
2015-01-01
…and require the gradient of the objective function in the unknown model parameter vector at each stage of iteration. For large N, finite differencing becomes numerically intensive, and an efficient alternative is domain differentiation, in which the full gradient is obtained by solving a single … derivative calculation of the gradient for a locally perturbed dielectric interface. The method is non-variational, and algebraic in nature in that it…
Evaluation of respondent-driven sampling.
McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required when interpreting findings based on the sampling method.
Multi-color incomplete Cholesky conjugate gradient methods for vector computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poole, E.L.
1986-01-01
This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse symmetric positive definite matrix with non-zero elements lying only along a few diagonals of the matrix. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns in the linear system are used to obtain p-color matrices for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p) length vector operations in both the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method. (N is the number of unknowns and p is a small constant.) A p-colored matrix is a matrix that can be partitioned into a p x p block matrix where the diagonal blocks are diagonal matrices. The matrix is stored by diagonals and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, then some of the overhead associated with vector startups can be eliminated in the matrix vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.
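The defining property of a p-color ordering is easy to verify numerically. The sketch below builds the 5-point Poisson matrix, applies a red-black (p = 2) permutation, and checks that both diagonal blocks are diagonal, which is what permits the long-vector solves described above; the CYBER 205 vectorization itself is not reproduced.

```python
# Red-black (2-color) ordering of the 2D 5-point Poisson matrix.
import numpy as np
import scipy.sparse as sp

n = 8                                   # grid is n x n
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
S = sp.diags([-1, -1], [-1, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(S, sp.identity(n))).tocsr()

colors = (np.add.outer(np.arange(n), np.arange(n)) % 2).ravel()  # color of each gridpoint
order = np.concatenate([np.flatnonzero(colors == 0), np.flatnonzero(colors == 1)])
P = A[order][:, order]                  # permuted 2-color matrix

nred = int((colors == 0).sum())
for name, block in [("red-red", P[:nred, :nred]), ("black-black", P[nred:, nred:])]:
    off_diag = np.abs((block - sp.diags(block.diagonal())).toarray()).max()
    print(name, "block max off-diagonal:", off_diag)   # both print 0.0
```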
An iterative method for the localization of a neutron source in a large box (container)
NASA Astrophysics Data System (ADS)
Dubinski, S.; Presler, O.; Alfassi, Z. B.
2007-12-01
The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial positioning of the calibrating source is the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
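As a much-simplified stand-in for the inverse problem being solved, the toy below fits a source position and strength to four detector count rates under an idealized 1/r² response; the published method instead iterates the position of an external calibrating source, so this conveys only the flavor of the localization step.

```python
# Toy point-source localization from four detector count rates.
import numpy as np
from scipy.optimize import least_squares

detectors = np.array([[0, 0, 0], [46, 0, 0], [0, 42, 0], [46, 42, 20]], float)  # cm

def rates(p):                        # idealized 1/r^2 detector response
    pos, strength = p[:3], p[3]
    return strength / ((detectors - pos) ** 2).sum(axis=1)

true = np.array([20.0, 15.0, 8.0, 5000.0])      # position (cm) and strength
measured = rates(true) * (1 + 0.02 * np.random.default_rng(2).normal(size=4))

fit = least_squares(lambda p: rates(p) - measured, x0=[10.0, 10.0, 5.0, 1000.0])
print("estimated position (cm):", fit.x[:3])
```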
Principal Component Geostatistical Approach for large-dimensional inverse problems
Kitanidis, P K; Lee, J
2014-01-01
The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, is high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free in terms of the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementation of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
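The matrix-free ingredient can be sketched in a few lines: Jacobian-vector products J·v are obtained from forward-model runs alone, so the m × n Jacobian is never formed. The forward model h below is hypothetical.

```python
# Jacobian-vector products without forming the Jacobian.
import numpy as np

def h(s):
    """Hypothetical forward model mapping m unknowns to 3 observations."""
    return np.array([s @ s, np.sin(s).sum(), s[0] * s[-1]])

def jac_vec(h, s, v, eps=1e-6):
    """Approximate J(s) @ v with two forward runs; J itself is never built."""
    return (h(s + eps * v) - h(s - eps * v)) / (2 * eps)

m = 1000
s = np.linspace(0.0, 1.0, m)
v = np.random.default_rng(3).normal(size=m)
# In the approach above, only K such products are needed per iteration --
# one per leading principal component of the prior covariance, K << m.
print(jac_vec(h, s, v))
```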
Hair Testing for Drugs of Abuse and New Psychoactive Substances in a High-Risk Population.
Salomone, Alberto; Palamar, Joseph J; Gerace, Enrico; Di Corcia, Daniele; Vincenti, Marco
2017-06-01
Hundreds of new psychoactive substances (NPS) have emerged in the drug market over the last decade. Few drug surveys in the USA, however, ask about use of NPS, so prevalence and correlates of use are largely unknown. A large portion of NPS use is unintentional or unknown, as NPS are common adulterants in drugs like ecstasy/Molly, and most NPS are rapidly eliminated from the body, limiting the efficacy of urine, blood and saliva testing. We utilized a novel method of examining the prevalence of NPS use in a high-risk population utilizing hair testing. Hair samples from high-risk nightclub and dance music attendees were tested for 82 drugs and metabolites (including NPS) using ultra-high performance liquid chromatography-tandem mass spectrometry. Eighty samples collected from different parts of the body were analyzed, 57 of which tested positive for at least one substance, either a traditional or new drug. Among these, 26 samples tested positive for at least one NPS, the most common being butylone (25 samples). Other new drugs detected include methylone, methoxetamine, 5/6-APB, α-PVP and 4-FA. Hair analysis proved a powerful tool to gain objective biological drug-prevalence information, free from possible biases of unintentional or unknown intake and untruthful reporting of use. Such testing can be used actively or retrospectively to validate survey responses and inform research on consumption patterns, including intentional and unknown use, polydrug use, occasional NPS intake and frequent or heavy use. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A Novel In-Beam Delayed Neutron Counting Technique for Characterization of Special Nuclear Materials
NASA Astrophysics Data System (ADS)
Bentoumi, G.; Rogge, R. B.; Andrews, M. T.; Corcoran, E. C.; Dimayuga, I.; Kelly, D. G.; Li, L.; Sur, B.
2016-12-01
A delayed neutron counting (DNC) system, where the sample to be analyzed remains stationary in a thermal neutron beam outside of the reactor, has been developed at the National Research Universal (NRU) reactor of the Canadian Nuclear Laboratories (CNL) at Chalk River. The new in-beam DNC is a novel approach for non-destructive characterization of special nuclear materials (SNM) that could enable identification and quantification of fissile isotopes within a large and shielded sample. Despite the orders of magnitude reduction in neutron flux, the in-beam DNC method can be as informative as the conventional in-core DNC for most cases while offering practical advantages and mitigated risk when dealing with large radioactive samples of unknown origin. This paper addresses (1) the qualification of in-beam DNC using a monochromatic thermal neutron beam in conjunction with a proven counting apparatus designed originally for in-core DNC, and (2) application of in-beam DNC to an examination of large sealed capsules containing unknown radioactive materials. Initial results showed that the in-beam DNC setup permits non-destructive analysis of bulky and gamma shielded samples. The method does not lend itself to trace analysis, and at best could only reveal the presence of a few milligrams of 235U via the assay of in-beam DNC total counts. Through analysis of DNC count rates, the technique could be used in combination with other neutron or gamma techniques to quantify isotopes present within samples.
Lan, Hui; Carson, Rachel; Provart, Nicholas J; Bonner, Anthony J
2007-09-21
Arabidopsis thaliana is the model species of current plant genomic research with a genome size of 125 Mb and approximately 28,000 genes. The function of half of these genes is currently unknown. The purpose of this study is to infer gene function in Arabidopsis using machine-learning algorithms applied to large-scale gene expression data sets, with the goal of identifying genes that are potentially involved in plant response to abiotic stress. Using in-house and publicly available data, we assembled a large set of gene expression measurements for A. thaliana. Using those genes of known function, we first evaluated and compared the ability of basic machine-learning algorithms to predict which genes respond to stress. Predictive accuracy was measured using ROC50 and precision curves derived through cross-validation. To improve accuracy, we developed a method for combining these classifiers using a weighted-voting scheme. The combined classifier was then trained on genes of known function and applied to genes of unknown function, identifying genes that potentially respond to stress. Visual evidence corroborating the predictions was obtained using electronic Northern analysis. Three of the predicted genes were chosen for biological validation. Gene knockout experiments confirmed that all three are involved in a variety of stress responses. The biological analysis of one of these genes (At1g16850) is presented here, where it is shown to be necessary for the normal response to temperature and NaCl. Supervised learning methods applied to large-scale gene expression measurements can be used to predict gene function. However, the ability of basic learning methods to predict stress response varies widely and depends heavily on how much dimensionality reduction is used. Our method of combining classifiers can improve the accuracy of such predictions - in this case, predictions of genes involved in stress response in plants - and it effectively chooses the appropriate amount of dimensionality reduction automatically. The method provides a useful means of identifying genes in A. thaliana that potentially respond to stress, and we expect it would be useful in other organisms and for other gene functions.
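The combination step can be rendered compactly: base classifiers vote with weights derived from their cross-validated accuracy on genes of known function. The sketch below uses synthetic data and generic scikit-learn classifiers in place of the paper's expression data, classifiers, and ROC50 evaluation.

```python
# Weighted-voting combination of base classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=50, random_state=0)
models = [LogisticRegression(max_iter=1000), KNeighborsClassifier(), GaussianNB()]

# Weight each base classifier by its cross-validated accuracy on "known" genes.
weights = np.array([cross_val_score(m, X, y, cv=5).mean() for m in models])
weights /= weights.sum()

for m in models:
    m.fit(X, y)

def predict_unknowns(X_unknown):
    votes = np.stack([m.predict(X_unknown) for m in models])  # rows of 0/1 votes
    return (weights @ votes >= 0.5).astype(int)               # weighted majority

print("weights:", np.round(weights, 3), "predictions:", predict_unknowns(X[:5]))
```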
Critical Assessment of Small Molecule Identification 2016: automated methods.
Schymanski, Emma L; Ruttkies, Christoph; Krauss, Martin; Brouard, Céline; Kind, Tobias; Dührkop, Kai; Allen, Felicity; Vaniya, Arpana; Verdegem, Dries; Böcker, Sebastian; Rousu, Juho; Shen, Huibin; Tsugawa, Hiroshi; Sajed, Tanvir; Fiehn, Oliver; Ghesquière, Bart; Neumann, Steffen
2017-03-27
The fourth round of the Critical Assessment of Small Molecule Identification (CASMI) Contest (www.casmi-contest.org) was held in 2016, with two new categories for automated methods. This article covers the 208 challenges in Categories 2 and 3, without and with metadata, from organization, participation, results and post-contest evaluation of CASMI 2016 through to perspectives for future contests and small molecule annotation/identification. The Input Output Kernel Regression (CSI:IOKR) machine learning approach performed best in "Category 2: Best Automatic Structural Identification-In Silico Fragmentation Only", won by Team Brouard with 41% challenge wins. The winner of "Category 3: Best Automatic Structural Identification-Full Information" was Team Kind (MS-FINDER), with 76% challenge wins. The best methods were able to achieve over 30% Top 1 ranks in Category 2, with all methods ranking the correct candidate in the Top 10 in around 50% of challenges. This success rate rose to 70% Top 1 ranks in Category 3, with candidates in the Top 10 in over 80% of the challenges. The machine learning and chemistry-based approaches are shown to perform in complementary ways. The improvement in (semi-)automated fragmentation methods for small molecule identification has been substantial. The achieved high rates of correct candidates in the Top 1 and Top 10, despite large candidate numbers, open up great possibilities for high-throughput annotation of untargeted analysis for "known unknowns". As more high quality training data becomes available, the improvements in machine learning methods will likely continue, but the alternative approaches still provide valuable complementary information. Improved integration of experimental context will also improve identification success further for "real life" annotations. The true "unknown unknowns" remain to be evaluated in future CASMI contests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashida, Misa; Malac, Marek; Egerton, Ray F.
Electron tomography is a method whereby a three-dimensional reconstruction of a nanoscale object is obtained from a series of projected images measured in a transmission electron microscope. We developed an electron-diffraction method to measure the tilt and azimuth angles, with Kikuchi lines used to align a series of diffraction patterns obtained with each image of the tilt series. Since it is based on electron diffraction, the method is not affected by sample drift and is not sensitive to sample thickness, whereas tilt angle measurement and alignment using fiducial-marker methods are affected by both sample drift and thickness. The accuracy of the diffraction method benefits reconstructions with a large number of voxels, where both high spatial resolution and a large field of view are desired. The diffraction method allows both the tilt and azimuth angle to be measured, while fiducial marker methods typically treat the tilt and azimuth angle as an unknown parameter. The diffraction method can be also used to estimate the accuracy of the fiducial marker method, and the sample-stage accuracy. A nano-dot fiducial marker measurement differs from a diffraction measurement by no more than ±1°.
Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome
2013-06-01
A combination of the multilevel fast multipole method (MLFMM) and the boundary element method (BEM) can solve large scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a nanoparticle gold-silver heterodimer calculated with MLFMM is compared with unmodified BEM calculations.
Bayesian methods for characterizing unknown parameters of material models
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
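A minimal sketch of the posterior-characterization idea, assuming a one-dimensional unknown, an invented forward model, and a plain Metropolis sampler in place of the paper's machinery (the SROM solution stage is omitted):

```python
# Posterior of an unknown model parameter from noisy measurements.
import numpy as np

rng = np.random.default_rng(4)
g = lambda theta: theta ** 2                     # invented observable g(theta)
data = g(1.5) + 0.1 * rng.normal(size=20)        # measurements, sigma = 0.1

def log_post(theta):
    if not 0.0 < theta < 5.0:                    # uniform prior on (0, 5)
        return -np.inf
    return -0.5 * np.sum((data - g(theta)) ** 2) / 0.1 ** 2

theta = 1.0
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])                  # discard burn-in
print("posterior mean/std of the unknown parameter:", post.mean(), post.std())
```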
Mining high-throughput experimental data to link gene and function.
Blaby-Haas, Crysten E; de Crécy-Lagard, Valérie
2011-04-01
Nearly 2200 genomes that encode around 6 million proteins have now been sequenced. Around 40% of these proteins are of unknown function, even when function is loosely and minimally defined as 'belonging to a superfamily'. In addition to in silico methods, the swelling stream of high-throughput experimental data can give valuable clues for linking these unknowns with precise biological roles. The goal is to develop integrative data-mining platforms that allow the scientific community at large to access and utilize this rich source of experimental knowledge. To this end, we review recent advances in generating whole-genome experimental datasets, where this data can be accessed, and how it can be used to drive prediction of gene function. Copyright © 2011 Elsevier Ltd. All rights reserved.
Label-assisted mass spectrometry for the acceleration of reaction discovery and optimization
NASA Astrophysics Data System (ADS)
Cabrera-Pardo, Jaime R.; Chai, David I.; Liu, Song; Mrksich, Milan; Kozmin, Sergey A.
2013-05-01
The identification of new reactions expands our knowledge of chemical reactivity and enables new synthetic applications. Accelerating the pace of this discovery process remains challenging. We describe a highly effective and simple platform for screening a large number of potential chemical reactions in order to discover and optimize previously unknown catalytic transformations, thereby revealing new chemical reactivity. Our strategy is based on labelling one of the reactants with a polyaromatic chemical tag, which selectively undergoes a photoionization/desorption process upon laser irradiation, without the assistance of an external matrix, and enables rapid mass spectrometric detection of any products originating from such labelled reactants in complex reaction mixtures without any chromatographic separation. This method was successfully used for high-throughput discovery and subsequent optimization of two previously unknown benzannulation reactions.
Bayesian geostatistics in health cartography: the perspective of malaria.
Patil, Anand P; Gething, Peter W; Piel, Frédéric B; Hay, Simon I
2011-06-01
Maps of parasite prevalences and other aspects of infectious diseases that vary in space are widely used in parasitology. However, spatial parasitological datasets rarely, if ever, have sufficient coverage to allow exact determination of such maps. Bayesian geostatistics (BG) is a method for finding a large sample of maps that can explain a dataset, in which maps that do a better job of explaining the data are more likely to be represented. This sample represents the knowledge that the analyst has gained from the data about the unknown true map. BG provides a conceptually simple way to convert these samples to predictions of features of the unknown map, for example regional averages. These predictions account for each map in the sample, yielding an appropriate level of predictive precision.
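The conversion from map samples to predictions is simple enough to show directly; the posterior samples below are fabricated, standing in for the output of a BG fit.

```python
# From posterior map samples to a regional-average prediction with uncertainty.
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_pixels = 2000, 100
maps = rng.beta(2, 8, size=(n_samples, n_pixels))  # sampled prevalence maps

region = np.arange(30, 60)                         # pixels making up the region
regional_avg = maps[:, region].mean(axis=1)        # one value per sampled map

mean = regional_avg.mean()
lo, hi = np.percentile(regional_avg, [2.5, 97.5])  # 95% credible interval
print(f"regional prevalence: {mean:.3f} ({lo:.3f}-{hi:.3f})")
```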
The Inverse Bagging Algorithm: Anomaly Detection by Inverse Bootstrap Aggregating
NASA Astrophysics Data System (ADS)
Vischia, Pietro; Dorigo, Tommaso
2017-03-01
For data sets populated by a very well modeled process and by another process of unknown probability density function (PDF), a desired feature when manipulating the fraction of the unknown process (either for enhancing it or suppressing it) is to avoid modifying the kinematic distributions of the well modeled one. A bootstrap technique is used to identify sub-samples rich in the well modeled process, and to classify each event according to the frequency with which it is part of such sub-samples. Comparisons with general MVA algorithms will be shown, as well as a study of the asymptotic properties of the method, making use of a public domain data set that models a typical search for new physics as performed at hadronic colliders such as the Large Hadron Collider (LHC).
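A toy rendering of the idea, with an invented sample-level statistic and threshold: draw bootstrap sub-samples, keep those that look rich in the well modeled process, and score each event by how often it lands in the kept sub-samples.

```python
# Toy inverse-bagging: event scores from membership in "background-like" sub-samples.
import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 1.0, 900),    # well modeled process
                    rng.normal(2.0, 0.5, 100)])   # unknown-PDF process
n = len(x)

in_kept = np.zeros(n)       # times an event enters a kept sub-sample
drawn = np.zeros(n)         # times an event is drawn at all
for _ in range(2000):
    idx = rng.integers(0, n, n)                   # bootstrap sub-sample
    members = np.unique(idx)
    drawn[members] += 1
    if x[idx].mean() < 0.15:                      # invented "background-like" cut
        in_kept[members] += 1

score = in_kept / np.maximum(drawn, 1)            # well-modeled-likeness per event
print("mean score, well modeled vs unknown:", score[:900].mean(), score[900:].mean())
```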
Riahi, Aouatef; Kharrat, Maher; Lariani, Imen; Chaabouni-Bouhamed, Habiba
2014-12-01
Germline deleterious mutations in the BRCA1/BRCA2 genes are associated with an increased risk for the development of breast and ovarian cancer. Given the large size of these genes, the detection of such mutations represents a considerable technical challenge. Therefore, the development of cost-effective and rapid methods to identify these mutations became a necessity. High resolution melting analysis (HRM) is a rapid and efficient technique extensively employed as a high-throughput mutation scanning method. The purpose of our study was to assess the specificity and sensitivity of HRM for BRCA1 and BRCA2 gene scanning. As a first step, we estimated the ability of HRM to detect mutations in a set of 21 heterozygous samples harboring 8 different known BRCA1/BRCA2 variations, all samples having been preliminarily investigated by direct sequencing; we then performed a blinded HRM analysis of a set of 68 further sporadic samples of unknown genotype. All tested heterozygous BRCA1/BRCA2 variants were easily identified. However, the HRM assay revealed a further alteration that we had not initially searched for (one unclassified variant). Furthermore, sequencing confirmed all the HRM-detected mutations in the set of unknown samples, including homozygous changes, indicating that in this cohort, with the optimized assays, the mutation detection sensitivity and specificity were 100%. HRM is a simple, rapid and efficient scanning method for known and unknown BRCA1/BRCA2 germline mutations. Consequently, the method will allow for the economical screening of recurrent mutations in the Tunisian population.
Mossotti, Victor G.
2014-01-01
Marble for the Tomb of the Unknown Soldier at Arlington National Cemetery was cut from the Colorado Yule Marble Quarry in 1931. Although anecdotal reports suggest that cracks were noticed in the main section of the monument shortly after its installation at the Arlington National Cemetery in Arlington, Virginia, detailed documentation of the extent of cracking did not appear until 1963. Although debate continues as to whether the main section of the Tomb of the Unknowns monument should be repaired or replaced, Mr. John S. Haines of Glenwood Springs, Colorado, in anticipation of the permanent closing of the Yule Quarry, donated a 58-ton block of Yule Marble, the so-called Haines block, as a potential backup. The brief study reported here was conducted during mid-summer 2009 at the behest of the superintendent of Arlington National Cemetery. The field team entered the subterranean Yule Marble Quarry with the Chief Extraction Engineer in order to contrast the method used for extraction of the Haines block with the method that was probably used to extract the marble block that is now cracked. Based on surficial inspection and shallow coring of the Haines block, and on the nature of crack propagation in Yule Marble as judged by close inspection of a large collection of surrogate Yule Marble blocks, the team found the block to be structurally sound and cosmetically equivalent to the marble used for the current monument. If the Haines block were needed, it would be an appropriate replacement for the existing cracked section of the Tomb of the Unknown Soldier Monument.
An Exploratory Analysis of Economic Factors in the Navy Total Force Strength Model (NTFSM)
2015-12-01
NTFSM is still in the testing phase and its overall behavior is largely unknown. In particular, the analysts that NTFSM was designed to help are…
DNA attachment to support structures
Balhorn, Rodney L.; Barry, Christopher H.
2002-01-01
Microscopic beads or other structures are attached to nucleic acids (DNA) using a terminal transferase. The transferase adds labeled dideoxy nucleotide bases to the ends of linear strands of DNA. The labels, such as the antigens digoxigenin and biotin, bind to the antibody compounds or other appropriate complementary ligands, which are bound to the microscopic beads or other support structures. The method does not require the synthesis of a synthetic oligonucleotide probe. The method can be used to tag or label DNA even when the DNA has an unknown sequence, has blunt ends, or is a very large fragment (e.g., >500 kilobase pairs).
A nonintrusive laser interferometer method for measurement of skin friction
NASA Technical Reports Server (NTRS)
Monson, D. J.
1982-01-01
A method is described for monitoring the changing thickness of a thin oil film subject to an aerodynamic shear stress using two focused laser beams. The measurement is then simply analyzed in terms of the surface skin friction of the flow. The analysis includes the effects of arbitrarily large pressure and skin friction gradients, gravity, and time varying oil temperature. It may also be applied to three dimensional flows with unknown direction. Applications are presented for a variety of flows including two dimensional flows, three dimensional swirling flows, separated flow, supersonic high Reynolds number flows, and delta wing vortical flows.
Basu, Sumanta; Duren, William; Evans, Charles R; Burant, Charles F; Michailidis, George; Karnovsky, Alla
2017-05-15
Recent technological advances in mass spectrometry, development of richer mass spectral libraries and data processing tools have enabled large scale metabolic profiling. Biological interpretation of metabolomics studies heavily relies on knowledge-based tools that contain information about metabolic pathways. Incomplete coverage of different areas of metabolism and lack of information about non-canonical connections between metabolites limits the scope of applications of such tools. Furthermore, the presence of a large number of unknown features, which cannot be readily identified, but nonetheless can represent bona fide compounds, also considerably complicates biological interpretation of the data. Leveraging recent developments in the statistical analysis of high-dimensional data, we developed a new Debiased Sparse Partial Correlation algorithm (DSPC) for estimating partial correlation networks and implemented it as a Java-based CorrelationCalculator program. We also introduce a new version of our previously developed tool Metscape that enables building and visualization of correlation networks. We demonstrate the utility of these tools by constructing biologically relevant networks and in aiding identification of unknown compounds. http://metscape.med.umich.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
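For orientation, partial correlation itself is computed from the precision matrix: with Theta the inverse covariance, the partial correlation of metabolites i and j is -Theta_ij / sqrt(Theta_ii * Theta_jj). The sketch below uses the classical estimator on synthetic data; DSPC's contribution, not reproduced here, is debiasing a sparse estimate so this remains valid when metabolites outnumber samples.

```python
# Partial correlations from the precision matrix (classical estimator).
import numpy as np

rng = np.random.default_rng(7)
n = 200
z0 = rng.normal(size=n)
z1 = z0 + rng.normal(size=n)          # direct link 0-1
z2 = z1 + rng.normal(size=n)          # direct link 1-2; 0-2 only indirect
Z = np.column_stack([z0, z1, z2])

theta = np.linalg.inv(np.cov(Z, rowvar=False))    # precision matrix
d = np.sqrt(np.diag(theta))
pcor = -theta / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)

print("partial corr 0-1:", round(pcor[0, 1], 2))  # strong: direct edge
print("partial corr 0-2:", round(pcor[0, 2], 2))  # near zero: no direct edge
```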
Beulens, Joline W J; van der Schouw, Yvonne T; Moons, Karel G M; Boshuizen, Hendriek C; van der A, Daphne L; Groenwold, Rolf H H
2013-04-01
Moderate alcohol consumption is associated with a reduced type 2 diabetes risk, but the biomarkers that explain this relation are unknown. The most commonly used method to estimate the proportion explained by a biomarker is the difference method. However, influence of alcohol-biomarker interaction on its results is unclear. G-estimation method is proposed to accurately assess proportion explained, but how this method compares with the difference method is unknown. In a case-cohort study of 2498 controls and 919 incident diabetes cases, we estimated the proportion explained by different biomarkers on the relation between alcohol consumption and diabetes using the difference method and sequential G-estimation method. Using the difference method, high-density lipoprotein cholesterol explained the relation between alcohol and diabetes by 78% (95% confidence interval [CI], 41-243), whereas high-sensitivity C-reactive protein (-7.5%; -36.4 to 1.8) or blood pressure (-6.9; -26.3 to -0.6) did not explain the relation. Interaction between alcohol and liver enzymes led to bias in proportion explained with different outcomes for different levels of liver enzymes. G-estimation method showed comparable results, but proportions explained were lower. The relation between alcohol consumption and diabetes may be largely explained by increased high-density lipoprotein cholesterol but not by other biomarkers. Ignoring exposure-mediator interactions may result in bias. The difference and G-estimation methods provide similar results. Copyright © 2013 Elsevier Inc. All rights reserved.
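The difference method itself reduces to comparing the exposure coefficient with and without the candidate mediator: proportion explained = (beta_crude - beta_adjusted) / beta_crude. A sketch on simulated data, ignoring the interaction issue raised above:

```python
# Difference method for proportion explained, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 5000
alcohol = rng.normal(size=n)
hdl = 0.5 * alcohol + rng.normal(size=n)          # candidate mediator
p = 1 / (1 + np.exp(1.0 + 0.3 * hdl))             # outcome risk runs via HDL only
diabetes = (rng.uniform(size=n) < p).astype(float)

m_total = sm.Logit(diabetes, sm.add_constant(alcohol)).fit(disp=0)
m_adj = sm.Logit(diabetes, sm.add_constant(np.column_stack([alcohol, hdl]))).fit(disp=0)

b_total, b_adj = m_total.params[1], m_adj.params[1]
print("proportion explained by HDL:", (b_total - b_adj) / b_total)  # close to 1
```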
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, Edmond
Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the factorization that is desired.
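A small dense sketch of the bilinear-constraint view for an ILU(0) factorization: each retained entry of L and U must satisfy (LU)_ij = a_ij, and these equations are swept with a Jacobi-style fixed-point iteration, a synchronous stand-in for the asynchronous updates described above.

```python
# Fixed-point iteration on the bilinear equations of an ILU(0) factorization.
import numpy as np

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
n = A.shape[0]
L, U = np.eye(n), np.triu(A)           # initial guess; L has a unit diagonal

for _ in range(10):                    # Jacobi-style sweeps over all equations
    Lk, Uk = L.copy(), U.copy()
    for i in range(n):
        for j in range(n):
            if A[i, j] == 0.0:
                continue               # ILU(0): keep the sparsity pattern of A
            s = A[i, j] - sum(Lk[i, k] * Uk[k, j] for k in range(min(i, j)))
            if i > j:
                L[i, j] = s / Uk[j, j]    # bilinear equation for l_ij
            else:
                U[i, j] = s               # bilinear equation for u_ij

print("residual on the pattern:", np.abs((L @ U - A)[A != 0.0]).max())
```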
Comparison of methods for the detection of gravitational waves from unknown neutron stars
NASA Astrophysics Data System (ADS)
Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.
2016-12-01
Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time-domain F-statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second derivative frequency in the search parameter space does not degrade search sensitivity for signals with physically plausible second derivative frequencies. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.
NASA Technical Reports Server (NTRS)
Wang, Ren H.
1991-01-01
A method of combined use of magnetic vector potential (MVP) based finite element (FE) formulations and magnetic scalar potential (MSP) based FE formulations for computation of three-dimensional (3D) magnetostatic fields is developed. This combined MVP-MSP 3D-FE method leads to a considerable reduction, by nearly a factor of 3, in the number of unknowns in comparison to the number of unknowns which must be computed in global MVP based FE solutions. This method allows one to incorporate portions of iron cores sandwiched in between coils (conductors) in current-carrying regions. Thus, it greatly simplifies the geometries of current carrying regions (in comparison with the exclusive MSP based methods) in electric machinery applications. A unique feature of this approach is that the global MSP solution is single valued in nature, that is, no branch cut is needed. This is again an advantage over the exclusive MSP based methods. A Newton-Raphson procedure with a concept of an adaptive relaxation factor was developed and successfully used in solving the 3D-FE problem with magnetic material anisotropy and nonlinearity. Accordingly, this combined MVP-MSP 3D-FE method is most suited for solution of large scale global type magnetic field computations in rotating electric machinery with very complex magnetic circuit geometries, as well as nonlinear and anisotropic material properties.
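The adaptive relaxation concept can be sketched on a scalar surrogate: the step is scaled back when the residual grows, as can happen on strongly nonlinear material curves, and relaxed toward 1 as convergence sets in. The residual function and the halving/growing schedule below are assumptions, not the paper's exact rule.

```python
# Newton-Raphson with an adaptive relaxation factor on a scalar surrogate.
import numpy as np

def residual(x):                 # stand-in for the nonlinear FE residual
    return np.tanh(3 * x) + 0.5 * x - 0.7

def jacobian(x):
    return 3 / np.cosh(3 * x) ** 2 + 0.5

x, alpha = 0.0, 1.0
r = residual(x)
for it in range(50):
    x_try = x - alpha * r / jacobian(x)     # relaxed Newton step
    r_try = residual(x_try)
    if abs(r_try) < abs(r):                 # accept; loosen the relaxation
        x, r = x_try, r_try
        alpha = min(1.0, 1.5 * alpha)
    else:                                   # reject; tighten it
        alpha *= 0.5
    if abs(r) < 1e-12:
        break
print(f"converged to x = {x:.6f} in {it + 1} iterations")
```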
Ergül, Özgür
2011-11-01
Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.
Tracey, Matthew P; Pham, Dianne; Koide, Kazunori
2015-07-21
Neither palladium nor platinum is an endogenous biological metal. Imaging palladium in biological samples, however, is becoming increasingly important because bioorthogonal organometallic chemistry involves palladium catalysis. In addition to being an imaging target, palladium has been used to fluorometrically image biomolecules. In these cases, palladium species are used as imaging-enabling reagents. This review article discusses these fluorometric methods. Platinum-based drugs are widely used as anticancer drugs, yet their mechanism of action remains largely unknown. We discuss fluorometric methods for imaging or quantifying platinum in cells or biofluids. These methods include the use of chemosensors to directly detect platinum, fluorescently tagging platinum-based drugs, and utilizing post-labeling to elucidate distribution and mode of action.
Leapfrog variants of iterative methods for linear algebra equations
NASA Technical Reports Server (NTRS)
Saylor, Paul E.
1988-01-01
Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
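For a constant parameter alpha, the leapfrog recurrence follows from composing two Richardson steps: with r_k = b - A x_k, one gets x_{k+2} = x_k + 2*alpha*r_k - alpha^2 * A r_k. The sketch below checks that the even iterates of both forms coincide; the paper's interest extends to varying parameters and the grand-leap case.

```python
# Richardson's method vs. its leapfrog (even-iterate) variant, constant alpha.
import numpy as np

rng = np.random.default_rng(9)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)              # SPD, so Richardson converges
b = rng.normal(size=50)
alpha = 1.0 / np.linalg.eigvalsh(A).max()  # constant iteration parameter

x_conv = np.zeros(50)
for _ in range(200):                       # conventional: every iterate
    x_conv = x_conv + alpha * (b - A @ x_conv)

x_leap = np.zeros(50)
for _ in range(100):                       # leapfrog: even iterates only
    r = b - A @ x_leap
    x_leap = x_leap + 2 * alpha * r - alpha ** 2 * (A @ r)

print("iterate 200, conventional vs leapfrog:", np.linalg.norm(x_conv - x_leap))
```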
Window-based method for approximating the Hausdorff in three-dimensional range imagery
Koch, Mark W [Albuquerque, NM
2009-06-02
One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
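The directed Hausdorff distance is the maximum nearest-neighbor distance from one point set to the other, which makes a KD-tree sketch natural. The capped-radius "window" variant below is a loose stand-in for the approximation idea, not the patented algorithm.

```python
# Directed Hausdorff between 3D point clouds, plus a capped-radius variant.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(10)
template = rng.uniform(size=(500, 3))                  # 3D model points
probe = template + 0.01 * rng.normal(size=(500, 3))    # noisy observed points

tree = cKDTree(probe)
d, _ = tree.query(template)         # nearest probe point for each template point
print("directed Hausdorff (template -> probe):", d.max())

# Windowed variant: searches are capped at the window radius, so partial
# (occluded) matches are scored by the fraction of points matched in-window.
window = 0.05
d_w, _ = tree.query(template, distance_upper_bound=window)  # inf when unmatched
print("fraction matched within window:", np.isfinite(d_w).mean())
```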
Kuang, Li; Yu, Long; Huang, Lan; Wang, Yin; Ma, Pengju; Li, Chuanbin; Zhu, Yujia
2018-05-14
With the rapid development of cyber-physical systems (CPS), building cyber-physical systems with high quality of service (QoS) has become an urgent requirement in both academia and industry. During the procedure of building cyber-physical systems, it has been found that a large number of functionally equivalent services exist, so it becomes an urgent task to recommend suitable services from the large number of services available in CPS. However, since it is time-consuming, and even impractical, for a single user to invoke all of the services in CPS to experience their QoS, a robust QoS prediction method is needed to predict unknown QoS values. A commonly used method in QoS prediction is collaborative filtering; however, it is hard to deal with the data sparsity and cold start problems, and meanwhile most of the existing methods ignore the data credibility issue. Hence, in order to solve both of these challenging problems, in this paper, we design a framework of QoS prediction for CPS services, and propose a personalized QoS prediction approach based on reputation and location-aware collaborative filtering. Our approach first calculates the reputation of users by using the Dirichlet probability distribution, so as to identify untrusted users and process their unreliable data, and then it exploits the geographic neighborhood at three levels to improve the similarity calculation of users and services. Finally, the data from geographical neighbors of users and services are fused to predict the unknown QoS values. The experiments using real datasets show that our proposed approach outperforms other existing methods in terms of accuracy, efficiency, and robustness.
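Two ingredients can be shown in miniature: a Beta reputation (the two-outcome special case of the Dirichlet) built from counts of feedback judged consistent vs. inconsistent, and a reputation-weighted prediction of an unknown QoS value. All numbers are invented, and the three-level geographic neighborhood is omitted.

```python
# Beta/Dirichlet-style user reputation and a reputation-weighted QoS prediction.
import numpy as np

# Feedback counts per user: judged consistent vs. inconsistent with others.
consistent = np.array([40.0, 5.0, 22.0])
inconsistent = np.array([2.0, 30.0, 3.0])
reputation = (consistent + 1) / (consistent + inconsistent + 2)  # posterior mean

# Reported response times (s) of one service; user 1 has low reputation,
# so its outlier report is discounted in the prediction.
reports = np.array([0.31, 2.90, 0.35])
predicted = np.sum(reputation * reports) / np.sum(reputation)
print("reputations:", np.round(reputation, 2), "predicted QoS:", round(predicted, 2))
```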
Bayesian power spectrum inference with foreground and target contamination treatment
NASA Astrophysics Data System (ADS)
Jasche, J.; Lavaux, G.
2017-10-01
This work presents a joint and self-consistent Bayesian treatment of various foreground and target contaminations when inferring cosmological power spectra and three-dimensional density fields from galaxy redshift surveys. This is achieved by introducing additional block-sampling procedures for unknown coefficients of foreground and target contamination templates to the previously presented ARES framework for Bayesian large-scale structure analyses. As a result, the method infers jointly and fully self-consistently three-dimensional density fields, cosmological power spectra, luminosity-dependent galaxy biases, noise levels of the respective galaxy distributions, and coefficients for a set of a priori specified foreground templates. In addition, this fully Bayesian approach permits detailed quantification of correlated uncertainties amongst all inferred quantities and correctly marginalizes over observational systematic effects. We demonstrate the validity and efficiency of our approach in obtaining unbiased estimates of power spectra via applications to realistic mock galaxy observations that are subject to stellar contamination and dust extinction. While simultaneously accounting for galaxy biases and unknown noise levels, our method reliably and robustly infers three-dimensional density fields and corresponding cosmological power spectra from deep galaxy surveys. Furthermore, our approach correctly accounts for joint and correlated uncertainties between unknown coefficients of foreground templates and the amplitudes of the power spectrum. This effect amounts to correlations and anti-correlations of up to 10 per cent across wide ranges in Fourier space.
Huang, Lei
2015-01-01
To solve the problem in which conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Time-varying estimators are used to obtain the estimated mean and variance of the unknown observation noise. Using the robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
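The core trick, shown for an AR(2) surrogate of gyro noise: treat the model coefficients as a static state vector and estimate them recursively with a Kalman filter. The robust handling of unknown observation-noise statistics is omitted, so this is only the skeleton of the method.

```python
# AR coefficients as the state of a Kalman filter (recursive least squares).
import numpy as np

rng = np.random.default_rng(11)
a_true = np.array([0.6, -0.2])                 # AR(2) coefficients to recover
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a_true @ y[k - 2:k][::-1] + 0.1 * rng.normal()

theta = np.zeros(2)              # state: the unknown AR coefficients
P = 10.0 * np.eye(2)             # state covariance
R = 0.1 ** 2                     # observation noise variance (assumed known here)
for k in range(2, 500):
    H = y[k - 2:k][::-1]         # regressor row [y_{k-1}, y_{k-2}]
    S = H @ P @ H + R            # innovation variance
    K = P @ H / S                # Kalman gain
    theta = theta + K * (y[k] - H @ theta)
    P = P - np.outer(K, H @ P)

print("estimated AR coefficients:", np.round(theta, 3))
```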
Off-Policy Actor-Critic Structure for Optimal Control of Unknown Systems With Disturbances.
Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai; Zhang, Huaguang
2016-05-01
An optimal control method is developed for unknown continuous-time systems with unknown disturbances in this paper. The integral reinforcement learning (IRL) algorithm is presented to obtain the iterative control. Off-policy learning is used to allow the dynamics to be completely unknown. Neural networks are used to construct critic and action networks. It is shown that if there are unknown disturbances, off-policy IRL may not converge or may be biased. To reduce the influence of unknown disturbances, a disturbance compensation controller is added. It is proven that the weight errors are uniformly ultimately bounded based on Lyapunov techniques. Convergence of the Hamiltonian function is also proven. The simulation study demonstrates the effectiveness of the proposed optimal control method for unknown systems with disturbances.
Implementation of an improved adaptive-implicit method in a thermal compositional simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, T.B.
1988-11-01
A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, had an improvement of up to 28% of the CPU time over the fully implicit method.
1989-07-01
…of these gains gives a measure of the total amount of damping supplied by the actuators and colocated velocity sensors. In this sense, the sum of the… disturbances are assumed to be unknown but bounded time-varying processes. Second, inequality constraints on outputs (measurements) and inputs (con…
Plasma filtering techniques for nuclear waste remediation
Gueroult, Renaud; Hobbs, David T.; Fisch, Nathaniel J.
2015-04-24
Nuclear waste cleanup is challenged by the handling of feed stocks that are both unknown and complex. Plasma filtering, operating on dissociated elements, offers advantages over chemical methods in processing such wastes. The costs incurred by plasma mass filtering for nuclear waste pretreatment, before ultimate disposal, are similar to those for chemical pretreatment. However, significant savings might be achieved in minimizing the waste mass. As a result, this advantage may be realized over a large range of chemical waste compositions, thereby addressing the heterogeneity of legacy nuclear waste.
NASA Astrophysics Data System (ADS)
Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.
2018-03-01
A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations are determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O(N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 10⁴, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 10⁴ to 10⁵ unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
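For readers unfamiliar with the machinery, a bare-bones V(1,1)-cycle for the 1D Poisson problem with a weighted Jacobi smoother shows where the O(N) behavior comes from; the paper's solver adds nonsymmetric operators, stretched grids, density variation, and MPI parallelism, none of which appear here.

```python
# Minimal multigrid V-cycle for -u'' = f on [0, 1] with u(0) = u(1) = 0.
import numpy as np

def smooth(u, f, h, w=2/3):                  # one weighted-Jacobi sweep
    u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    n = len(u) - 1
    if n == 2:                               # coarsest grid: solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = smooth(u, f, h)                      # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)                # restrict the residual (full weighting)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    e = np.zeros_like(u)                     # prolong the coarse correction
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)               # post-smooth

n = 256
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)           # exact solution is sin(pi x)
u = np.zeros(n + 1)
for cycle in range(8):
    u = v_cycle(u, f, h)
    print(cycle, np.abs(residual(u, f, h)).max())   # residual drops each cycle
```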
Online feature selection with streaming features.
Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan
2013-05-01
We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
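A toy sketch of the streaming selection loop; simple correlation thresholds stand in for the statistical relevance and redundancy tests that OSFS actually uses:

    import numpy as np

    def osfs_stream(features, y, rel_thresh=0.1, red_thresh=0.7):
        """Features arrive one by one; keep a feature if it correlates with the
        target (relevance) and is not nearly collinear with an already-selected
        feature (redundancy). A stand-in for OSFS's conditional tests."""
        selected = []
        for j, f in enumerate(features):
            if abs(np.corrcoef(f, y)[0, 1]) < rel_thresh:
                continue                       # weakly relevant: discard
            redundant = any(abs(np.corrcoef(f, features[k])[0, 1]) > red_thresh
                            for k in selected)
            if not redundant:
                selected.append(j)
            # a full OSFS implementation would also re-test earlier selections
        return selected

    rng = np.random.default_rng(0)
    y = rng.normal(size=2000)
    feats = [y + rng.normal(scale=s, size=2000) for s in (0.5, 0.5, 50.0)]
    print(osfs_stream(feats, y))   # typically [0]: feature 1 redundant, 2 irrelevant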
Free-decay time-domain modal identification for large space structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Vanhorn, David A.; Doiron, Harold H.
1992-01-01
Concept definition studies for the Modal Identification Experiment (MIE), a proposed space flight experiment for the Space Station Freedom (SSF), have demonstrated advantages and compatibility of free-decay time-domain modal identification techniques with the on-orbit operational constraints of large space structures. Since practical experience with modal identification using actual free-decay responses of large space structures is very limited, several numerical and test data reduction studies were conducted. Major issues and solutions were addressed, including closely-spaced modes, wide frequency range of interest, data acquisition errors, sampling delay, excitation limitations, nonlinearities, and unknown disturbances during free-decay data acquisition. The data processing strategies developed in these studies were applied to numerical simulations of the MIE, test data from a deployable truss, and launch vehicle flight data. Results of these studies indicate free-decay time-domain modal identification methods can provide accurate modal parameters necessary to characterize the structural dynamics of large space structures.
A nonintrusive laser interferometer method for measurement of skin friction
NASA Technical Reports Server (NTRS)
Monson, D. J.
1983-01-01
A method is described for monitoring the changing thickness of a thin oil film subject to an aerodynamic shear stress using two focused laser beams. The measurement is then simply analyzed in terms of the surface skin friction of the flow. The analysis includes the effects of arbitrarily large pressure and skin friction gradients, gravity, and time varying oil temperature. It may also be applied to three dimensional flows with unknown direction. Applications are presented for a variety of flows, including two dimensional flows, three dimensional swirling flows, separated flow, supersonic high Reynolds number flows, and delta wing vortical flows. Previously announced in STAR as N83-12393
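For reference, the standard thin-oil-film relation that underlies such instruments, sketched here for the constant-shear case only (the paper's full analysis also includes pressure gradients, gravity, and time-varying oil temperature): with oil viscosity \mu and wall shear stress \tau_w, the film thickness h(x,t) obeys

    \frac{\partial h}{\partial t}+\frac{\partial}{\partial x}\!\left(\frac{\tau_w h^{2}}{2\mu}\right)=0,
    \qquad
    h(x,t)=\frac{\mu x}{\tau_w t}
    \quad\Longrightarrow\quad
    \tau_w=\frac{\mu x}{h\,t},

so interferometric tracking of h at a known distance x from the film's leading edge yields the skin friction directly.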
NASA Technical Reports Server (NTRS)
Newcomb, John
2004-01-01
The end-to-end test would verify the complex sequence of events from lander separation to landing. Due to the large distances involved and the significant delay time in sending a command and receiving verification, the lander needed to operate autonomously after it separated from the orbiter. It had to sense conditions, make decisions, and act accordingly. We were flying into a relatively unknown set of conditions: a Martian atmosphere of unknown pressure, density, and consistency, to land on a surface of unknown altitude and unknown bearing strength.
2010-01-01
Background: Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results: This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions: Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504
Variable Grid Traveltime Tomography for Near-surface Seismic Imaging
NASA Astrophysics Data System (ADS)
Cai, A.; Zhang, J.
2017-12-01
We present a new traveltime tomography algorithm that images the subsurface with variable grids adapted automatically to geological structures. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is the conventional approach for near-surface imaging. However, regularization on regular, evenly spaced grids assumes uniform resolution; from a geophysical point of view, long-wavelength, large-scale structures can be resolved reliably, while details along geological boundaries are difficult to resolve. We therefore solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates grid cells within those structures for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried-basalt model with one horizontal layer; with variable-grid traveltime tomography, the resulting model recovers the top-layer velocity and the basalt blocks more accurately while using fewer grid cells. The field data were collected in an oil field in China, in an area where the subsurface structures are predominantly layered. The data set includes 476 shots at a 10-meter spacing and 1735 receivers at a 10-meter spacing. First-arrival traveltimes picked from the seismograms are used for tomography; the reciprocal errors of most shots are between 2 ms and 6 ms. Conventional tomography produces fluctuating layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a well-resolved flat layer and fewer artifacts. Moreover, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids within small structures. Variable-grid traveltime tomography thus provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.
NASA Astrophysics Data System (ADS)
Safari, A.; Sharifi, M. A.; Amjadiparvar, B.
2010-05-01
The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear system can be solved with iterative or direct solvers. However, the runtime of direct methods, or of iterative solvers without a suitable preconditioner, increases tremendously; this is why a more sophisticated method is needed to solve linear systems with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method that splits the normal matrix of the system into several smaller overlapping submatrices. In each iteration step, it successively solves the linear systems associated with the submatrices obtained from the splitting. This reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations from both the accuracy and runtime points of view. Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low-Low Satellite-to-Satellite Tracking
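For intuition, a minimal multiplicative Schwarz sweep on a small SPD system; the block choice is illustrative, and the paper's splitting of the 14641-unknown normal matrix is of course more elaborate:

    import numpy as np

    def multiplicative_schwarz(A, b, blocks, sweeps=100):
        """Sweep over overlapping index blocks, solving each local subsystem
        exactly and immediately updating the iterate (a block Gauss-Seidel
        over subdomains, i.e. multiplicative Schwarz)."""
        x = np.zeros_like(b)
        for _ in range(sweeps):
            for idx in blocks:
                r = b - A @ x                     # current global residual
                x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        return x

    # toy usage on an SPD system with two overlapping blocks
    n = 10
    A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    blocks = [np.arange(0, 7), np.arange(4, 10)]  # overlap on indices 4..6
    x = multiplicative_schwarz(A, b, blocks)
    print(np.linalg.norm(A @ x - b))              # small residual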
Tsuji, Shintarou; Nishimoto, Naoki; Ogasawara, Katsuhiko
2008-07-20
Although large medical texts are stored in electronic format, they are seldom reused because of the difficulty of processing narrative texts by computer. Morphological analysis is a key technology for extracting medical terms correctly and automatically. This process parses a sentence into its smallest unit, the morpheme. Phrases consisting of two or more technical terms, however, cause morphological analysis software to fail in parsing the sentence and to output unprocessed terms as "unknown words." The purpose of this study was to reduce the number of unknown words in medical narrative text processing. The number of unknown words produced when parsing text from the national examination for radiologists was compared with and without additional dictionaries. The ratio of unknown words was reduced from 1.0% to 0.36% by adding terminologies of radiological technology, MeSH, and ICD-10 labels. The terminology of radiological technology was the most effective resource, accounting for a 0.62% reduction on its own. These results clearly show the importance of dictionary selection and of tracking trends in unknown words. The potential of this investigation is to make available a large body of clinical information that would otherwise be inaccessible for applications other than manual health care review by personnel.
Jogler, Christian; Waldmann, Jost; Huang, Xiaoluo; Jogler, Mareike; Glöckner, Frank Oliver; Mascher, Thorsten; Kolter, Roberto
2012-12-01
Members of the Planctomycetes clade share many unusual features for bacteria. Their cytoplasm contains membrane-bound compartments, they lack peptidoglycan and FtsZ, they divide by polar budding, and they are capable of endocytosis. Planctomycete genomes have remained enigmatic, generally being quite large (up to 9 Mb), and on average, 55% of their predicted proteins are of unknown function. Importantly, proteins related to the unusual traits of Planctomycetes remain largely unknown. Thus, we embarked on bioinformatic analyses of these genomes in an effort to predict proteins that are likely to be involved in compartmentalization, cell division, and signal transduction. We used three complementary strategies. First, we defined the Planctomycetes core genome and subtracted genes of well-studied model organisms. Second, we analyzed the gene content and synteny of morphogenesis and cell division genes and combined both methods using a "guilt-by-association" approach. Third, we identified signal transduction systems as well as sigma factors. These analyses provide a manageable list of candidate genes for future genetic studies and provide evidence for complex signaling in the Planctomycetes akin to that observed for bacteria with complex life-styles, such as Myxococcus xanthus.
Underworld results as a triple (shopping list, posterior, priors)
NASA Astrophysics Data System (ADS)
Quenette, S. M.; Moresi, L. N.; Abramson, D.
2013-12-01
When studying long-term lithosphere deformation and other such large-scale, spatially distinct and behaviour-rich problems, there is a natural trade-off between the meaning of a model, the observations used to validate the model, and the ability to compute over this space. For example, many models of varying lithologies, rheological properties and underlying physics may reasonably match (or not match) observables. To compound this problem, each realisation is computationally intensive, requiring high resolution, algorithm tuning and code tuning to contemporary computer hardware. It is often intractable to use sampling-based assimilation methods, but with better optimisation the window of tractability becomes wider. The ultimate goal is to find a sweet spot where a formal assimilation method is used and where a model conforms to observations. It is natural to think of this as an inverse problem, in which the underlying physics may be fixed and the rheological properties, and possibly the lithologies themselves, are unknown. What happens when we push this approach and treat some portion of the underlying physics as an unknown? At its extreme this is an intractable problem. However, there is an analogy here with how we develop software for these scientific problems. What happens when we treat the changing part of a largely complete code as an unknown, where the changes are working towards this sweet spot? When posed as a Bayesian inverse problem the result is a triple: the model changes, the real priors and the real posterior. Not only does this give meaning to the process by which a code changes, it forms a mathematical bridge from an inverse problem to compiler optimisations given such changes. As a stepping-stone example we show a regional-scale heat flow model with constraining observations, and the inverse process including increasing complexity in the software. The implementation uses Underworld-GT (Underworld plus research extras to import geology and export geothermic measures, etc.). Underworld uses StGermain, an early (partial) implementation of the theories described here.
Information loss method to measure node similarity in networks
NASA Astrophysics Data System (ADS)
Li, Yongli; Luo, Peng; Wu, Chong
2014-09-01
Similarity measurement for the network node has been paid increasing attention in the field of statistical physics. In this paper, we propose an entropy-based information loss method to measure the node similarity. The whole model is established based on this idea that less information loss is caused by seeing two more similar nodes as the same. The proposed new method has relatively low algorithm complexity, making it less time-consuming and more efficient to deal with the large scale real-world network. In order to clarify its availability and accuracy, this new approach was compared with some other selected approaches on two artificial examples and synthetic networks. Furthermore, the proposed method is also successfully applied to predict the network evolution and predict the unknown nodes' attributions in the two application examples.
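A hedged sketch of the central idea: the entropy gained by merging two nodes' neighbor distributions (here the Jensen-Shannon divergence, used as a stand-in for the authors' specific information-loss measure) quantifies how much information is lost by "seeing two nodes as the same":

    import numpy as np

    def node_distribution(adj, i):
        """Represent node i by its normalized row of the adjacency matrix."""
        row = adj[i].astype(float)
        return row / row.sum()

    def info_loss_similarity(adj, i, j, eps=1e-12):
        """Similarity is high when little information is lost by merging the
        two neighbor distributions (JS divergence in [0, 1] with log base 2)."""
        p, q = node_distribution(adj, i), node_distribution(adj, j)
        m = 0.5*(p + q)
        def H(d):
            d = d[d > eps]
            return -np.sum(d*np.log2(d))
        js = H(m) - 0.5*H(p) - 0.5*H(q)   # entropy gained by merging = info lost
        return 1.0 - js

    adj = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]])
    print(info_loss_similarity(adj, 0, 1))   # 0 and 1 share most neighbors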
Genomic and genotyping characterization of haplotype-based polymorphic microsatellites in Prunus
USDA-ARS?s Scientific Manuscript database
Efficient utilization of microsatellites in genetic studies remains impeded largely due to the unknown status of their primer reliability, chromosomal location, and allele polymorphism. Discovery and characterization of microsatellite polymorphisms in a taxon will disclose the unknowns and gain new ...
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo
1988-01-01
The problem of calculating the stability of steady state solutions of differential equations is addressed. Leading eigenvalues of large matrices that arise from discretization are calculated, and an efficient multigrid method for solving these problems is presented. The resulting grid functions are used as initial approximations for appropriate eigenvalue problems. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a nonstandard way in which the right-hand side of the coarse grid equations involves unknown parameters to be solved on the coarse grid. This leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem are presented which demonstrate the effectiveness of the method.
Hu, Erzhong; Nosato, Hirokazu; Sakanashi, Hidenori; Murakawa, Masahiro
2013-01-01
Capsule endoscopy is a patient-friendly endoscopy broadly utilized in gastrointestinal examination. However, the efficacy of diagnosis is restricted by the large quantity of images. This paper presents a modified anomaly detection method by which both known and unknown anomalies in capsule endoscopy images of the small intestine are expected to be detected. To achieve this goal, this paper introduces feature extraction using a non-linear color conversion and Higher-order Local Auto-Correlation (HLAC) features, and makes use of image partitioning and a subspace method for anomaly detection. Experiments are conducted on several major anomalies with combinations of the proposed techniques. As a result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.
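A minimal sketch of the subspace-method scoring (PCA reconstruction error over feature vectors); the synthetic features below are stand-ins for HLAC features computed on image partitions:

    import numpy as np

    def subspace_anomaly_score(x, mean, basis):
        """Project a feature vector onto the 'normal' PCA subspace and score
        by the reconstruction error: large error suggests an anomaly."""
        c = (x - mean) @ basis
        recon = mean + basis @ c
        return np.linalg.norm(x - recon)

    # learn the normal subspace from features of healthy images (synthetic here)
    rng = np.random.default_rng(2)
    normal = (rng.normal(size=(500, 5)) @ rng.normal(size=(5, 25))
              + 0.1*rng.normal(size=(500, 25)))
    mean = normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
    basis = Vt[:5].T                      # top principal directions as columns

    x_normal = normal[0]
    x_anomalous = x_normal + 10*rng.normal(size=25)
    print(subspace_anomaly_score(x_normal, mean, basis))     # small
    print(subspace_anomaly_score(x_anomalous, mean, basis))  # large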
Emergency First Responders' Experience with Colorimetric Detection Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandra L. Fox; Keith A. Daum; Carla J. Miller
2007-10-01
Nationwide, first responders from state and federal support teams respond to hazardous materials incidents, industrial chemical spills, and potential weapons of mass destruction (WMD) attacks. Although first responders have sophisticated chemical, biological, radiological, and explosive detectors available for assessment of the incident scene, simple colorimetric detectors have a role in response actions. The large number of colorimetric chemical detection methods available on the market can make the selection of the proper methods difficult. Although each detector has unique aspects to provide qualitative or quantitative data about the unknown chemicals present, not all detectors provide consistent, accurate, and reliable results. Included here, in a consumer-report-style format, we provide "boots on the ground" information directly from first responders about how well colorimetric chemical detection methods meet their needs in the field and how they procure these methods.
Diversity of ARSACS mutations in French-Canadians.
Thiffault, I; Dicaire, M J; Tetreault, M; Huang, K N; Demers-Lamarche, J; Bernard, G; Duquette, A; Larivière, R; Gehring, K; Montpetit, A; McPherson, P S; Richter, A; Montermini, L; Mercier, J; Mitchell, G A; Dupré, N; Prévost, C; Bouchard, J P; Mathieu, J; Brais, B
2013-01-01
The growing number of spastic ataxia of Charlevoix-Saguenay (SACS) gene mutations reported worldwide has broadened the clinical phenotype of autosomal recessive spastic ataxia of Charlevoix-Saguenay (ARSACS). The identification of Quebec ARSACS cases without two known SACS mutations led to the development of a multi-modal genomic strategy to uncover mutations in this large gene and explore phenotype variability. We searched for SACS mutations by combining various methods in 20 cases with a classical French-Canadian ARSACS phenotype but without two identified mutations, and in a group of 104 sporadic or recessive spastic ataxia cases of unknown cause. Western blots of lymphoblast protein from cases with different genotypes were probed to establish whether they still expressed sacsin. A total of 12 mutations, including 7 novel ones, were uncovered in Quebec ARSACS cases. The screening of the 104 spastic ataxia cases of unknown cause for 98 SACS mutations did not uncover carriers of two mutations. Compound heterozygotes for one missense SACS mutation were found to minimally express sacsin. The large number of SACS mutations present even in Quebec suggests that the size of the gene alone may explain the great genotypic diversity. This study does not support an expanding ARSACS phenotype in the French-Canadian population. Most mutations lead to loss of function, though phenotypic variability in other populations may reflect partial loss of function with preservation of some sacsin expression. Our results also highlight the challenge of SACS mutation screening and the necessity of developing new-generation sequencing methods to ensure low-cost complete gene sequencing.
Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Traditional automatic navigation methods for bio-robots are constrained to configured environments and thus cannot be applied to tasks in unknown environments. By treating bio-robots in the same way as mechanical robots, with no consideration of the animal's own innate abilities, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method for unknown environments using only reward stimulation and distance measurement. By exploiting the rat's habit of thigmotaxis and its reward-seeking behavior, this method incorporates the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that this method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work might lay a solid foundation for the application of ratbots and also has significant implications for the automatic navigation of other bio-robots.
Comparative study of methods for recognition of an unknown person's action from a video sequence
NASA Astrophysics Data System (ADS)
Hori, Takayuki; Ohya, Jun; Kurumisawa, Jun
2009-02-01
This paper proposes a tensor decomposition based method that can recognize an unknown person's action from a video sequence, where the unknown person is not included in the database (tensor) used for the recognition. The tensor consists of persons, actions, and time-series image features. For the observed unknown person's action, one of the actions stored in the tensor is assumed. Using the motion signature obtained from this assumption, the unknown person's actions are synthesized, and the actions of one of the persons in the tensor are replaced by the synthesized actions. Then, the core tensor for the replaced tensor is computed. This process is repeated over the actions and persons. For each iteration, the difference between the replaced and original core tensors is computed; the assumption that gives the minimal difference is taken as the action recognition result. For the time-series image features stored in the tensor and extracted from the observed video sequence, a feature based on the contour shape of the human body silhouette is used. To show its validity, the proposed method is experimentally compared with the Nearest Neighbor rule and a Principal Component Analysis based method. Experiments on seven kinds of actions performed by 33 persons show that the proposed method achieves better recognition accuracies for the seven actions than the other methods.
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables, whereas they are fixed quantities in traditional statistics, which is not founded on Bayes' theorem. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
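A hedged sketch of the error-propagation application: the expectation and covariance matrix of a nonlinearly transformed random vector are estimated by Monte Carlo with no derivatives and no linearization error (the transform f below is a made-up example):

    import numpy as np

    rng = np.random.default_rng(42)

    # measurements x with known covariance; nonlinear transform y = f(x)
    mu_x = np.array([1.0, 2.0])
    cov_x = np.array([[0.04, 0.01],
                      [0.01, 0.09]])

    def f(x):
        return np.array([x[0]*x[1], np.sin(x[0]) + x[1]**2])

    # sample, transform, take sample moments
    samples = rng.multivariate_normal(mu_x, cov_x, size=200_000)
    ys = np.apply_along_axis(f, 1, samples)
    print("E[y]  =", ys.mean(axis=0))
    print("Cov y =", np.cov(ys, rowvar=False))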
On Space Exploration and Human Error: A Paper on Reliability and Safety
NASA Technical Reports Server (NTRS)
Bell, David G.; Maluf, David A.; Gawdiak, Yuri
2005-01-01
NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk, and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: a) known-known, b) known-unknown, and c) unknown-unknown. It is almost certain that space exploration will partially experience known or unknown risks similar to those embedded in the Apollo, Shuttle, or Station missions unless something alters how NASA perceives and manages safety and reliability.
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem, to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term the Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm, with small and controllable losses in the accuracy of the state and parameter estimation.
Reconstructing high-dimensional two-photon entangled states via compressive sensing
Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan
2014-01-01
Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems. PMID:25306850
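A minimal sketch of singular value thresholding for generic low-rank recovery from a subset of entries (the Cai-Candes-Shen iteration); the paper's modified version additionally enforces the physical constraints of a density matrix (Hermitian, positive semidefinite, unit trace), which are omitted here:

    import numpy as np

    def svt_complete(M_obs, mask, tau=None, delta=1.2, iters=300):
        """Recover a low-rank matrix from observed entries: alternate a
        singular-value shrinkage step with a gradient step on the data fit."""
        m, n = M_obs.shape
        tau = tau if tau is not None else 5*np.sqrt(m*n)
        Y = np.zeros_like(M_obs)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt    # shrink singular values
            Y += delta * mask * (M_obs - X)            # step on observed entries
        return X

    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 40))  # rank-3 ground truth
    mask = rng.random((40, 40)) < 0.4                        # observe 40% of entries
    X = svt_complete(A*mask, mask)
    print(np.linalg.norm(X - A)/np.linalg.norm(A))           # relative error, small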
NASA Technical Reports Server (NTRS)
Martin, William G.; Cairns, Brian; Bal, Guillaume
2014-01-01
This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
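A hedged toy of the adjoint pattern on a discrete linear model A(p)u = s with misfit J = 0.5*||Cu - d||^2: one forward solve and one adjoint solve yield the gradient with respect to all parameters at once, regardless of their number. The parameterization A_of below is invented purely for illustration:

    import numpy as np

    def A_of(p):
        # toy parameterization: diagonal "material properties" on a fixed coupling
        return np.diag(p) + 0.1*np.ones((p.size, p.size))

    def dA_dp(p):
        # derivative of A with respect to each parameter p_k
        return [np.diag(e) for e in np.eye(p.size)]

    def misfit_and_gradient(p, s, d, C):
        A = A_of(p)
        u = np.linalg.solve(A, s)                # forward solve
        r = C @ u - d
        lam = np.linalg.solve(A.T, C.T @ r)      # adjoint solve
        J = 0.5 * r @ r
        grad = np.array([-lam @ (dAk @ u) for dAk in dA_dp(p)])
        return J, grad

    p = np.array([2.0, 3.0, 4.0])
    s = np.ones(3); d = np.zeros(3); C = np.eye(3)
    J, g = misfit_and_gradient(p, s, d, C)
    # finite-difference check of the first gradient component
    eps = 1e-6; p2 = p.copy(); p2[0] += eps
    J2, _ = misfit_and_gradient(p2, s, d, C)
    print(g[0], (J2 - J)/eps)    # the two numbers should agree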
Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
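A minimal sketch of the POD step alone (the GA wrapper is omitted): a reduced basis is built from solution snapshots via the SVD, and the Galerkin-projected system is solved in place of the full model. The toy operator and snapshot choice are illustrative:

    import numpy as np

    def pod_basis(snapshots, r):
        """POD: the leading r left singular vectors of the snapshot matrix form
        the reduced basis that best captures the snapshots in the l2 sense."""
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :r]

    rng = np.random.default_rng(0)
    n, r = 500, 10
    A = np.eye(n) + 0.01*rng.normal(size=(n, n))          # toy full-order operator
    snaps = np.column_stack([np.linalg.solve(A, rng.normal(size=n))
                             for _ in range(40)])          # representative solutions
    Phi = pod_basis(snaps, r)

    x_true = Phi @ rng.normal(size=r)       # a state inside the POD subspace
    b = A @ x_true
    Ar = Phi.T @ A @ Phi                    # r x r reduced operator
    x_red = Phi @ np.linalg.solve(Ar, Phi.T @ b)
    print(np.linalg.norm(x_red - x_true))   # tiny: the reduced model is exact here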
Detecting fission from special nuclear material sources
Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA
2012-06-05
A neutron detector system for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and processes large volumes of data in real time, directly into information that a first responder can use to discriminate materials. The system counts neutrons from the unknown source and detects excess grouped neutrons to identify fission in the unknown source. The system includes a graphing component that displays the plot of the neutron distribution from the unknown source over a Poisson distribution, and a plot of neutrons due to background or environmental sources. The system further includes a known neutron source placed in proximity to the unknown source to actively interrogate it, in order to accentuate differences in neutron emission from the unknown source relative to Poisson distributions and/or environmental sources.
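One standard way to quantify "excess grouped neutrons" is the Feynman variance-to-mean statistic; a hedged sketch with a crude compound-Poisson stand-in for fission multiplicity (not the patented system's actual processing):

    import numpy as np

    def feynman_y(counts):
        """Y = Var/Mean - 1 for neutron counts in fixed time gates: zero for a
        Poisson (random) source, positive when fission chains produce
        time-correlated 'grouped' neutrons."""
        c = np.asarray(counts, dtype=float)
        return c.var() / c.mean() - 1.0

    rng = np.random.default_rng(7)
    background = rng.poisson(5.0, size=100_000)      # uncorrelated source
    # crude fissile stand-in: correlated bursts, each emitting several neutrons
    bursts = rng.poisson(1.0, size=100_000)
    fissile = np.array([rng.poisson(3.0, size=k).sum() for k in bursts])
    print(feynman_y(background))   # ~ 0
    print(feynman_y(fissile))      # > 0: excess grouped neutrons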
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-07-17
In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
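For reference, the S-system form and the steady-state linearization such a method exploits (standard power-law formalism; here V_i denotes the steady-state flux through pool X_i, for which production equals degradation at the steady state X^*):

    \frac{dX_i}{dt}=\alpha_i\prod_{j}X_j^{g_{ij}}-\beta_i\prod_{j}X_j^{h_{ij}},
    \qquad
    \left.\frac{\partial \dot{X}_i}{\partial X_k}\right|_{X^{*}}
    =\frac{V_i\,\bigl(g_{ik}-h_{ik}\bigr)}{X_k^{*}}.

Since the Jacobian entries depend linearly on the kinetic orders g_{ik} and h_{ik} once the fluxes V_i and the steady state X^* are known, the parameters can be constrained without iterative non-linear curve fitting.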
Lockley, Martin G; McCrea, Richard T; Buckley, Lisa G; Lim, Jong Deock; Matthews, Neffra A; Breithaupt, Brent H; Houck, Karen J; Gierliński, Gerard D; Surmik, Dawid; Kim, Kyung Soo; Xing, Lida; Kong, Dal Yong; Cart, Ken; Martin, Jason; Hadden, Glade
2016-01-07
Relationships between non-avian theropod dinosaurs and extant and fossil birds are a major focus of current paleobiological research. Despite extensive phylogenetic and morphological support, behavioural evidence is mostly ambiguous and does not usually fossilize. Thus, inferences that dinosaurs, especially theropods, displayed behaviour analogous to modern birds are intriguing but speculative. Here we present extensive and geographically widespread physical evidence of substrate scraping behavior by large theropods considered as compelling evidence of "display arenas" or leks, and consistent with "nest scrape display" behaviour among many extant ground-nesting birds. Large scrapes, up to 2 m in diameter, occur abundantly at several Cretaceous sites in Colorado. They constitute a previously unknown category of large dinosaurian trace fossil, inferred to fill gaps in our understanding of early phases in the breeding cycle of theropods. The trace makers were probably lekking species that were seasonally active at large display arena sites. Such scrapes indicate stereotypical avian behaviour hitherto unknown among Cretaceous theropods, and most likely associated with territorial activity in the breeding season. The scrapes most probably occur near nesting colonies, as yet unknown or no longer preserved in the immediate study areas. Thus, they provide clues to paleoenvironments where such nesting sites occurred.
Early signs of recovery of Acropora palmata in St. John, US Virgin Islands
Muller, E.M.; Rogers, Caroline S.; van Woesik, R.
2014-01-01
Since the 1980s, diseases have caused significant declines in the population of the threatened Caribbean coral Acropora palmata. Yet it is largely unknown whether the population densities have recovered from these declines and whether there have been any recent shifts in size-frequency distributions toward large colonies. It is also unknown whether colony size influences the risk of disease infection, the most common stressor affecting this species. To address these unknowns, we examined A. palmata colonies at ten sites around St. John, US Virgin Islands, in 2004 and 2010. The prevalence of white-pox disease was highly variable among sites, ranging from 0 to 53%, and the disease preferentially targeted large colonies. We found that colony density did not change significantly over the 6-year period, although six out of ten sites showed higher densities through time. The size-frequency distributions of coral colonies at all sites were positively skewed in both 2004 and 2010; however, most sites showed a temporal shift toward more large-sized colonies. This increase in large-sized colonies occurred despite the presence of white-pox disease, a severe bleaching event, and several storms. This study provides evidence of slow recovery of the A. palmata population around St. John despite the persistence of several stressors.
Employing Machine-Learning Methods to Study Young Stellar Objects
NASA Astrophysics Data System (ADS)
Moore, Nicholas
2018-01-01
Vast amounts of data exist in the astronomical data archives, and yet a large number of sources remain unclassified. We developed a multi-wavelength pipeline to classify infrared sources. The pipeline uses supervised machine learning methods to classify objects into the appropriate categories. The program is fed data that is already classified to train it, and is then applied to unknown catalogues. The primary use for such a pipeline is the rapid classification and cataloging of data that would take a much longer time to classify otherwise. While our primary goal is to study young stellar objects (YSOs), the applications extend beyond the scope of this project. We present preliminary results from our analysis and discuss future applications.
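A minimal sketch of such a supervised classification pipeline using scikit-learn; the features and labels below are synthetic stand-ins for the multi-wavelength catalogue data:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    X_train = rng.normal(size=(1000, 4))            # e.g. IR colors per source
    y_train = (X_train[:, 0] + X_train[:, 1] > 0)   # stand-in class labels (YSO?)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)                        # train on classified sources

    X_unknown = rng.normal(size=(5, 4))              # unclassified catalogue rows
    print(clf.predict(X_unknown))                    # predicted classes
    print(clf.predict_proba(X_unknown))              # class probabilities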
Martín, Verónica; Mavian, Carla; López Bueno, Alberto; de Molina, Antonio; Díaz, Eduardo; Andrés, Germán; Alcami, Antonio; Alejo, Alí
2015-10-01
Amphibian-like ranaviruses include pathogens of fish, amphibians, and reptiles that have recently evolved from a fish-infecting ancestor. The molecular determinants of host range and virulence in this group are largely unknown, and currently fish infection models are lacking. We show that European sheatfish virus (ESV) can productively infect zebrafish, causing a lethal pathology, and describe a method for the generation of recombinant ESV, establishing a useful model for the study of fish ranavirus infections. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Air quality assessment for land disposal of industrial wastes
NASA Astrophysics Data System (ADS)
Shen, Thomas T.
1982-07-01
Air pollution from hazardous waste landfills and lagoons is largely unknown. Routine monitoring of toxic air contaminants associated with hazardous waste facilities is difficult and very costly. The method presented in this paper would be useful for air quality assessment in the absence of monitoring data. It may be used as a screening process to examine the question of whether or not volatilization is considered to be significant for a given contaminant and also to evaluate permit applications for new hazardous waste facilities concerning waste volatilization problems.
Chaotic Traversal (CHAT): Very Large Graphs Traversal Using Chaotic Dynamics
NASA Astrophysics Data System (ADS)
Changaival, Boonyarit; Rosalie, Martin; Danoy, Grégoire; Lavangnananda, Kittichai; Bouvry, Pascal
2017-12-01
Graph traversal algorithms find applications in various fields such as routing problems, natural language processing, and even database querying. The exploration can be considered a first stepping stone toward knowledge extraction from the graph, which is now a popular topic. Classical solutions such as Breadth First Search (BFS) and Depth First Search (DFS) require huge amounts of memory for exploring very large graphs. In this research, we present a novel memoryless graph traversal algorithm, Chaotic Traversal (CHAT), which integrates chaotic dynamics to traverse large unknown graphs via the Lozi map and the Rössler system. To compare the effects of various dynamics on our algorithm, we present an original way to explore a parameter space using a bifurcation diagram with respect to the topological structure of attractors. The resulting algorithm is efficient and undemanding of resources, and is therefore very suitable for partial traversal of very large and/or unknown environment graphs. CHAT using the Lozi map proves superior to the commonly known Random Walk in terms of the number of nodes visited (coverage percentage) and computation time when the environment is unknown and memory usage is restricted.
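A hedged sketch of the idea: iterate the Lozi map and let the chaotic state, rather than a stored visited-set, choose the next neighbor. The neighbor-selection rule below is illustrative, not CHAT's exact scheme:

    import numpy as np

    def lozi_step(x, y, a=1.7, b=0.5):
        """One iteration of the Lozi map (classic chaotic parameter values)."""
        return 1.0 - a*abs(x) + y, b*x

    def chat_like_traversal(neighbors, start, steps=50):
        """Memoryless chaos-driven traversal: the chaotic state decides which
        neighbor to move to next, with no memory of visited nodes."""
        x, y = 0.1, 0.1
        node, visited = start, [start]
        for _ in range(steps):
            x, y = lozi_step(x, y)
            nbrs = neighbors[node]
            node = nbrs[int(abs(x)*1e6) % len(nbrs)]  # chaotic value -> neighbor
            visited.append(node)
        return visited

    graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
    print(chat_like_traversal(graph, start=0, steps=10))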
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Richard O.; O'Brien, Robert F.; Wilson, John E.
2003-09-01
It may not be feasible to completely survey large tracts of land suspected of containing minefields. It is desirable to develop a characterization protocol that will confidently identify minefields within these large land tracts if they exist. Naturally, surveying areas of greatest concern and most likely locations would be necessary but will not provide the needed confidence that an unknown minefield has not eluded detection. Once minefields are detected, methods are needed to bound the area that will require detailed mine detection surveys. The US Department of Defense Strategic Environmental Research and Development Program (SERDP) is sponsoring the development of statistical survey methods and tools for detecting potential UXO targets. These methods may be directly applicable to demining efforts. Statistical methods are employed to determine the optimal geophysical survey transect spacing needed to confidently detect target areas of a critical size, shape, and anomaly density. Other methods under development determine the proportion of a land area that must be surveyed to confidently conclude that no UXO are present. Adaptive sampling schemes are also being developed as an approach for bounding the target areas. These methods and tools will be presented and the status of relevant research in this area will be discussed.
Geological and hydrogeological investigations in west Malaysia
NASA Technical Reports Server (NTRS)
Ahmad, J. B. (Principal Investigator); Khoon, S. Y.
1977-01-01
The author has identified the following significant results. Large structures along the east coast of the peninsula were discovered. Of particular significance were the circular structures which were believed to be associated with mineralization and whose existence was unknown. The distribution of the younger sediments along the east coast appeared to be more widespread than previously indicated. Along the Pahang coast on the southern end, small traces of raised beach lines were noted up to six miles inland. The existence of these beach lines was unknown due to their isolation in large coastal swamps.
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
Multiple nodes transfer alignment for airborne missiles based on inertial sensor network
NASA Astrophysics Data System (ADS)
Si, Fan; Zhao, Yan
2017-09-01
Transfer alignment is an important initialization method for airborne missiles because the alignment accuracy largely determines the performance of the missile. However, traditional alignment methods are limited by complicated and unknown flexure angle, and cannot meet the actual requirement when wing flexure deformation occurs. To address this problem, we propose a new method that uses the relative navigation parameters between the weapons and fighter to achieve transfer alignment. First, in the relative inertial navigation algorithm, the relative attitudes and positions are constantly computed in wing flexure deformation situations. Secondly, the alignment results of each weapon are processed using a data fusion algorithm to improve the overall performance. Finally, the feasibility and performance of the proposed method were evaluated under two typical types of deformation, and the simulation results demonstrated that the new transfer alignment method is practical and has high-precision.
Lan, D; Hu, Y D; Zhu, Q; Li, D Y; Liu, Y P
2015-07-28
The direction of production (meat or eggs) for indigenous chicken breeds is currently unknown; this knowledge gap, combined with the development of chicken genome-wide association studies, led us to investigate differences in specific loci between broiler and layer chickens using bioinformatic methods. In addition, we analyzed the distribution of these seven identified loci in four Chinese indigenous chicken breeds, Caoke chicken, Jiuyuan chicken, Sichuan mountain chicken, and Tibetan chicken, using direct DNA sequencing, and analyzed the data using bioinformatic methods. Based on the results, we suggest that Caoke chicken could be developed for meat production, while Jiuyuan chicken could be developed for egg production. As Sichuan mountain chicken and Tibetan chicken exhibited large polymorphisms, these breeds could be improved by changing their living environment.
Linear reduction method for predictive and informative tag SNP selection.
He, Jingwu; Westbrooks, Kelly; Zelikovsky, Alexander
2005-01-01
Constructing a complete human haplotype map is helpful when associating complex diseases with their related SNPs. Unfortunately, the number of SNPs is very large and it is costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNPs that should be sequenced to a small number of informative representatives called tag SNPs. In this paper, we propose a new linear algebra-based method for selecting and using tag SNPs. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs predicted from selected linearly independent tag SNPs. Our experiments show that for sufficiently long haplotypes, knowing only 0.4% of all SNPs, the proposed linear reduction method predicts an unknown haplotype with an error rate below 2%, based on 10% of the population.
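A minimal sketch of the linear reduction idea on toy data: tag SNPs are a maximal linearly independent set of SNP columns, and the remaining SNPs of a new haplotype are predicted from the linear combinations fitted on training haplotypes:

    import numpy as np

    def select_tag_snps(H, tol=1e-8):
        """Greedily pick linearly independent SNP columns (tag SNPs) so that
        every other SNP column is a linear combination of the tags."""
        tags = []
        for j in range(H.shape[1]):
            if np.linalg.matrix_rank(H[:, tags + [j]], tol=tol) > len(tags):
                tags.append(j)
        return tags

    # training haplotypes: rows = individuals, columns = SNPs (0/1 alleles)
    H = np.array([[0, 0, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 1, 0],
                  [1, 1, 0, 1]])
    tags = select_tag_snps(H)
    W, *_ = np.linalg.lstsq(H[:, tags], H, rcond=None)  # express all SNPs via tags

    h_new = np.array([1, 1, 0])                  # observed tag alleles, new sample
    pred = np.clip(np.rint(h_new @ W), 0, 1)     # full predicted haplotype
    print(tags)   # [0, 1, 2]
    print(pred)   # [1. 1. 0. 1.]  (SNP 3 duplicates SNP 0 in this toy data)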
Model Calibration with Censored Data
Cao, Fang; Ba, Shan; Brenneman, William A.; ...
2017-06-28
Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.
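A hedged, much-simplified sketch of how censoring can enter a calibration likelihood: a Tobit-style CDF term replaces the density for censored points. The toy model and parameter are invented, and the full Kennedy-O'Hagan treatment with a model-discrepancy Gaussian process is omitted:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def model(x, theta):
        """Toy computer model with an unknown calibration parameter theta."""
        return theta * x

    def neg_log_lik(params, x, y, censored, sigma=0.5):
        """Left-censored points (y known only to lie below a detection limit)
        contribute log P(Y <= limit) instead of a log-density."""
        theta = params[0]
        mu = model(x, theta)
        ll = norm.logpdf(y[~censored], mu[~censored], sigma).sum()
        ll += norm.logcdf(y[censored], mu[censored], sigma).sum()
        return -ll

    rng = np.random.default_rng(5)
    x = rng.uniform(0, 4, size=200)
    y = 1.5*x + rng.normal(scale=0.5, size=200)
    limit = 1.0
    censored = y < limit
    y = np.where(censored, limit, y)    # censored values recorded as the limit

    fit = minimize(neg_log_lik, x0=[1.0], args=(x, y, censored))
    print(fit.x)    # estimate of theta, close to the true 1.5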
Improved mapping of radio sources from VLBI data by least-squares fit
NASA Technical Reports Server (NTRS)
Rodemich, E. R.
1985-01-01
A method is described for producing improved mapping of radio sources from Very Long Baseline Interferometry (VLBI) data. The method described is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If researchers use the radio mapping source deviation to measure the closeness of this fit to the observed values, they are led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.
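To make the minimization idea concrete, here is a hedged toy sketch in which a one-dimensional brightness vector is fitted to model visibilities by an off-the-shelf iterative minimizer. Antenna gains are omitted for brevity, and the Fourier model and all names are illustrative assumptions rather than the author's algorithm.

```python
# Toy least-squares fit of a brightness vector to visibility data.
import numpy as np
from scipy.optimize import minimize

npix, nvis = 8, 40
rng = np.random.default_rng(1)
u = rng.uniform(-4.0, 4.0, nvis)                     # toy baseline coordinates
x = np.arange(npix)
F = np.exp(-2j * np.pi * np.outer(u, x) / npix)      # 1-D Fourier kernel

b_true = rng.uniform(0, 1, npix)
v_obs = F @ b_true + 0.01 * rng.normal(size=nvis)

def misfit(b):
    r = F @ b - v_obs
    return np.real(np.vdot(r, r))                    # squared residual norm

res = minimize(misfit, np.ones(npix), method="L-BFGS-B",
               bounds=[(0, None)] * npix)            # non-negative brightness
print(np.round(res.x - b_true, 2))                   # recovery error per pixel
```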
EFFECTS OF LARGE-SCALE POULTRY FARMS ON AQUATIC MICROBIAL COMMUNITIES: A MOLECULAR INVESTIGATION.
The effects of large-scale poultry production operations on water quality and human health are largely unknown. Poultry litter is frequently applied as fertilizer to agricultural lands adjacent to large poultry farms. Run-off from the land introduces a variety of stressors into t...
Dykema, John A.; Keith, David W.; Anderson, James G.; Weisenstein, Debra
2014-01-01
Although solar radiation management (SRM) through stratospheric aerosol methods has the potential to mitigate impacts of climate change, our current knowledge of stratospheric processes suggests that these methods may entail significant risks. In addition to the risks associated with current knowledge, the possibility of ‘unknown unknowns’ exists that could significantly alter the risk assessment relative to our current understanding. While laboratory experimentation can improve the current state of knowledge and atmospheric models can assess large-scale climate response, they cannot capture possible unknown chemistry or represent the full range of interactive atmospheric chemical physics. Small-scale, in situ experimentation under well-regulated circumstances can begin to remove some of these uncertainties. This experiment—provisionally titled the stratospheric controlled perturbation experiment—is under development and will only proceed with transparent and predominantly governmental funding and independent risk assessment. We describe the scientific and technical foundation for performing, under external oversight, small-scale experiments to quantify the risks posed by SRM to activation of halogen species and subsequent erosion of stratospheric ozone. The paper's scope includes selection of the measurement platform, relevant aspects of stratospheric meteorology, operational considerations and instrument design and engineering. PMID:25404681
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative, using our hypothesis testing method to analyze Mini Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our result shows that despite a large amount of missing data, accelerated decline did occur for MMSE among AD patients. Our finding supports the clinical belief in the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
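The test logic, fitting a single line under the null and a bilinear model with a profiled change point under the alternative, then calibrating the statistic by parametric bootstrap, can be sketched as follows. This is a simplified per-subject version with our own variable names, not the paper's random-effects implementation.

```python
# Hedged sketch: bilinear change-point test with a parametric bootstrap null.
import numpy as np

def fit_linear(t, y):
    X = np.column_stack([np.ones_like(t), t])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    return res[0] if res.size else ((y - X @ beta) ** 2).sum()

def fit_bilinear(t, y):
    # profile over candidate change points, keep the smallest SSE
    best = np.inf
    for tau in t[2:-2]:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        best = min(best, ((y - X @ beta) ** 2).sum())
    return best

def change_point_test(t, y, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    stat = fit_linear(t, y) - fit_bilinear(t, y)      # SSE reduction
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.sqrt(((y - X @ beta) ** 2).mean())
    null = []
    for _ in range(n_boot):                           # simulate under H0: one line
        yb = X @ beta + rng.normal(0, sigma, len(t))
        null.append(fit_linear(t, yb) - fit_bilinear(t, yb))
    return np.mean(np.array(null) >= stat)            # bootstrap p-value

t = np.arange(10.0)
y = 28 - 0.5 * t - 1.5 * np.maximum(t - 5, 0) \
    + np.random.default_rng(2).normal(0, 0.5, 10)
print(change_point_test(t, y))                        # small p: change point found
```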
A Corpus-Based Approach for Automatic Thai Unknown Word Recognition Using Boosting Techniques
NASA Astrophysics Data System (ADS)
Techo, Jakkrit; Nattee, Cholwich; Theeramunkong, Thanaruk
While classification techniques can be applied for automatic unknown word recognition in a language without word boundaries, they face the problem of unbalanced datasets, where the number of positive unknown word candidates is dominantly smaller than that of negative candidates. To solve this problem, this paper presents a corpus-based approach that introduces a so-called group-based ranking evaluation technique into ensemble learning in order to generate a sequence of classification models that later collaborate to select the most probable unknown word from multiple candidates. Given a classification model, the group-based ranking evaluation (GRE) is applied to construct a training dataset for learning the succeeding model, by weighting each of its candidates according to their ranks and correctness when the candidates of an unknown word are considered as one group. A number of experiments have been conducted on a large Thai medical text to evaluate the performance of the proposed group-based ranking evaluation approach, namely V-GRE, compared to the conventional naïve Bayes classifier and our vanilla version without ensemble learning. As a result, the proposed method achieves an accuracy of 90.93±0.50% when the first rank is selected, and 97.26±0.26% when the top-ten candidates are considered; these are 8.45% and 6.79% improvements over the conventional record-based naïve Bayes classifier and the vanilla version, respectively. Applying only the best features yields 93.93±0.22% and up to 98.85±0.15% accuracy for top-1 and top-10, respectively, improvements of 3.97% and 9.78% over naïve Bayes and the vanilla version. Finally, an error analysis is given.
Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad
2018-06-01
This study addresses the adaptive output tracking control of a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. First, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Second, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Third, a neural network, the Razumikhin Lemma, the variable separation approach, and a smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourth, asymmetric barrier Lyapunov functions are employed to prevent violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that all variables in the closed-loop system are bounded and semiglobal asymptotic tracking is ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones; (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and fewer tuning parameters. Finally, computer-simulation studies from the robotic field are given to demonstrate the effectiveness of the proposed controller. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kotthoff, Matthias; Bücking, Mark
2018-01-01
Per- and polyfluoroalkyl substances (PFAS) represent a versatile group of ubiquitously occurring chemicals of increasing regulatory concern. The past years have led to an ever-expanding portfolio of detected anthropogenic PFAS in numerous products encountered in daily life. Yet no clear picture of the full range of individual substances that comprise PFAS is available, and this challenges the analytical and engineering sciences. Authorities struggle to cope with uncertainties in managing the risk of harm posed by PFAS, a result of an incomplete understanding of the range of compounds present in differing products. There are analytical uncertainties in identifying PFAS and estimating their concentrations, and large parts of the total PFAS load remain unknown at the level of individual molecules. Four major trends from the chemical perspective will shape PFAS research for the next decade. (1) Mobility: a wide and dynamic distribution of short-chain PFAS due to their high polarity, persistency and volatility. (2) Substitution of regulated substances: the ban or restriction of individual molecules will lead to replacement with substitutes of similar concern. (3) Increase in structural diversity of existing PFAS molecules: the introduction of, e.g., hydrogen and chlorine atoms instead of fluorine, as well as branching and cross-linking, leads to a high versatility of unknown target molecules. (4) Unknown "Dark Matter": the amount, identity, formation pathways, and transformation dynamics of polymers and PFAS precursors are largely unknown. These directions require optimized analytical setups, especially multi-methods, and semi-specific tools to determine PFAS sum parameters in any relevant matrix. PMID:29675408
Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng
2014-01-01
The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators with a large number of assets, and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance whether or not the factors are known, which is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
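As a rough illustration of the factor-based risk estimation being assessed, the sketch below builds the plug-in covariance estimator from observed factors (loadings from a regression plus a diagonal idiosyncratic part) and evaluates a portfolio risk. The H-CLUB bound itself is not reproduced, and all quantities are synthetic assumptions.

```python
# Hedged sketch of a factor-based portfolio risk estimate with known factors.
import numpy as np

rng = np.random.default_rng(6)
T, p, k = 500, 50, 3                           # days, assets, factors
F = rng.normal(size=(T, k))                    # observed factor returns
B = rng.normal(size=(p, k))                    # true loadings (for simulation)
R = F @ B.T + 0.5 * rng.normal(size=(T, p))    # simulated asset returns

B_hat = np.linalg.lstsq(F, R, rcond=None)[0].T # regress returns on factors
U = R - F @ B_hat.T                            # idiosyncratic residuals
Sigma_hat = B_hat @ np.cov(F.T) @ B_hat.T + np.diag(U.var(axis=0))

w = np.ones(p) / p                             # equal-weight portfolio
risk_hat = w @ Sigma_hat @ w                   # estimated portfolio variance
print(risk_hat)
```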
A multiwave range test for obstacle reconstructions with unknown physical properties
NASA Astrophysics Data System (ADS)
Potthast, Roland; Schulz, Jochen
2007-08-01
We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A 'range test' for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533-547] was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern for only one plane wave is given. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We will further clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhauser, Basel, 1986, pp. 93-102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Gottingen, 1999]. In particular, we propose a new version of the Kirsch-Kress method using the range test and a new approach to the singular sources method based on the range test and potential method. Numerical examples of reconstructions for all four methods are provided.
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I error inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: (1) a small oral cancer screening study in individuals with Fanconi anemia and (2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with a sample size greater than 50, requires no Type I error correction. Further, we provide a flexible and accurate approach to bound the Type I error below a goal level for studies with small sample sizes.
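The internal pilot mechanics, planning with design-stage guesses and then recomputing the sample size from interim estimates of prevalence and variance, can be sketched as below. The normal-approximation formula and all numbers are illustrative assumptions, not the authors' exact design.

```python
# Hedged sketch of an internal pilot sample-size re-estimation step.
import numpy as np
from scipy.stats import norm

def planned_n(delta, sd, prevalence, alpha=0.05, power=0.9):
    """Cases needed for a paired accuracy difference, scaled by prevalence."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_cases = (z * sd / delta) ** 2
    return int(np.ceil(n_cases / prevalence))

# design-stage guesses
n0 = planned_n(delta=0.10, sd=0.35, prevalence=0.05)

# ...accrue a fraction of n0, then re-estimate the nuisance parameters...
prev_hat, sd_hat = 0.03, 0.42          # interim estimates (toy values)
n1 = planned_n(delta=0.10, sd=sd_hat, prevalence=prev_hat)
print(n0, "->", max(n0, n1))           # continue to the larger of the two
```

In small samples the paper additionally adjusts the critical value at the final analysis; that correction is not shown here.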
On the unsupervised analysis of domain-specific Chinese texts
Deng, Ke; Bol, Peter K.; Li, Kate J.; Liu, Jun S.
2016-01-01
With the growing availability of digitized text data both publicly and privately, there is a great need for effective computational tools to automatically extract information from texts. Because the Chinese language differs most significantly from alphabet-based languages in not specifying word boundaries, most existing Chinese text-mining methods require a prespecified vocabulary and/or a large relevant training corpus, which may not be available in some applications. We introduce an unsupervised method, top-down word discovery and segmentation (TopWORDS), for simultaneously discovering and segmenting words and phrases from large volumes of unstructured Chinese texts, and propose ways to order discovered words and conduct higher-level context analyses. TopWORDS is particularly useful for mining online and domain-specific texts where the underlying vocabulary is unknown or the texts of interest differ significantly from available training corpora. When outputs from TopWORDS are fed into context analysis tools such as topic modeling, word embedding, and association pattern finding, the results are as good as or better than that from using outputs of a supervised segmentation method. PMID:27185919
Swing-free transport of suspended loads. Summer research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basher, A.M.H.
1996-02-01
Transportation of large objects using a traditional bridge crane can induce pendulum motion (swing) of the object. In environments such as a factory, the energy contained in the swinging mass can be large, and attempts to move the mass onto a target while it is still swinging can cause considerable damage. Oscillations must be damped or allowed to decay before the next process can take place. Stopping the swing can be accomplished by moving the bridge in a manner that counteracts the swing, which can sometimes be done by a skilled operator, or by waiting for the swing to damp sufficiently that the object can be moved to the target without risk of damage. One method that can be utilized for oscillation suppression is input preshaping. The validity of this method depends on exact knowledge of the system dynamics. The method can be modified to provide some degree of robustness with respect to unknown dynamics, but at the cost of the speed of the transient response. This report describes investigations on the development of a controller to dampen the oscillations.
USDA-ARS?s Scientific Manuscript database
Apple trees, either abandoned or cared for, are common on the North American landscape. These trees can live for decades, and therefore represent a record of large- and small-scale agricultural practices through time. Here, we assessed the genetic diversity and identity of 330 unknown apple trees in...
A Size Exclusion Chromatography Laboratory with Unknowns for Introductory Students
ERIC Educational Resources Information Center
McIntee, Edward J.; Graham, Kate J.; Colosky, Edward C.; Jakubowski, Henry V.
2015-01-01
Size exclusion chromatography is an important technique in the separation of biological and polymeric samples by molecular weight. While a number of laboratory experiments have been published that use this technique for the purification of large molecules, this is the first report of an experiment that focuses on purifying an unknown small…
Application of incremental unknowns to the Burgers equation
NASA Technical Reports Server (NTRS)
Choi, Haecheon; Temam, Roger
1993-01-01
In this article, we make a few remarks on the role that attractors and inertial manifolds play in fluid mechanics problems. We then describe the role of incremental unknowns for approximating attractors and inertial manifolds when finite difference multigrid discretizations are used. The relation with direct numerical simulation and large eddy simulation is also mentioned.
Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.
Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun
2017-10-03
This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework. An unknown input is considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. The weighted H∞ performance level is considered to ensure robustness. In addition, the weighted H₋ performance level is introduced, which can increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of this paper.
NASA Astrophysics Data System (ADS)
Huang, Chen; Chi, Yu-Chieh
2017-12-01
The key element in Kohn-Sham (KS) density functional theory is the exchange-correlation (XC) potential. We recently proposed the exchange-correlation potential patching (XCPP) method with the aim of directly constructing a high-level XC potential in a large system by patching the locally computed, high-level XC potentials throughout the system. In this work, we investigate the patching of the exact exchange (EXX) and the random phase approximation (RPA) correlation potentials. A major challenge of XCPP is that a cluster's XC potential, obtained by solving the optimized effective potential equation, is only determined up to an unknown constant. Without fully determining the clusters' XC potentials, the patched system's XC potential is "uneven" in real space and may cause non-physical results. Here, we developed a simple method to determine this unknown constant. The performance of XCPP-RPA is investigated on three one-dimensional systems: H20, H10Li8, and the stretching of the H19-H bond. We investigated two definitions of EXX: (i) the definition based on the adiabatic connection and fluctuation dissipation theorem (ACFDT) and (ii) the Hartree-Fock (HF) definition. With ACFDT-type EXX, effective error cancellations were observed between the patched EXX and the patched RPA correlation potentials. Such error cancellations were absent for the HF-type EXX, which was attributed to the fact that for systems with fractional occupation numbers, the integral of the HF-type EXX hole is not -1. The KS spectra and band gaps from XCPP agree reasonably well with the benchmarks as we make the clusters large.
Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.
1999-10-14
Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a "pseudo-solid" mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Also discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulation include problem decomposition to distribute computational work equally across an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speedups for fixed problem size, a class of problems of immediate practical importance.
Li, Yongming; Tong, Shaocheng
The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.
A robust sparse-modeling framework for estimating schizophrenia biomarkers from fMRI.
Dillon, Keith; Calhoun, Vince; Wang, Yu-Ping
2017-01-30
Our goal is to identify the brain regions most relevant to mental illness using neuroimaging. State-of-the-art machine learning methods commonly suffer from repeatability difficulties in this application, particularly when using large and heterogeneous populations for samples. We revisit both dimensionality reduction and sparse modeling and recast them in a common optimization-based framework. This allows us to combine the benefits of both types of methods in an approach we call unambiguous components. We use this to estimate the image component with a constrained variability that is best correlated with the unknown disease mechanism. We apply the method to the estimation of neuroimaging biomarkers for schizophrenia, using task fMRI data from a large multi-site study. The proposed approach yields an improvement in both robustness of the estimate and classification accuracy. We find that unambiguous components incorporate roughly two thirds of the same brain regions as the sparsity-based methods LASSO and elastic net, while roughly one third of the selected regions differ. Further, unambiguous components achieve superior classification accuracy in differentiating cases from controls. Unambiguous components provide a robust way to estimate important regions of imaging data. Copyright © 2016 Elsevier B.V. All rights reserved.
Pérez-Hernández, Guillermo; Noé, Frank
2016-12-13
Analysis of molecular dynamics, for example using Markov models, often requires the identification of order parameters that are good indicators of the rare events, i.e., good reaction coordinates. Recently, it has been shown that time-lagged independent component analysis (TICA) finds the linear combinations of input coordinates that optimally represent the slow kinetic modes and may serve to define reaction coordinates between the metastable states of the molecular system. A limitation of the method is that both computing time and memory requirements scale with the square of the number of input features. For large protein systems, this exacerbates the use of extensive feature sets such as the distances between all pairs of residues or even heavy atoms. Here we derive a hierarchical TICA (hTICA) method that approximates the full TICA solution by a hierarchical, divide-and-conquer calculation. By using hTICA on distances between heavy atoms, we identify previously unknown relaxation processes in the bovine pancreatic trypsin inhibitor.
NASA Astrophysics Data System (ADS)
Cautun, Marius; van de Weygaert, Rien; Jones, Bernard J. T.; Frenk, Carlos S.; Hellwing, Wojciech A.
2015-01-01
One of the important unknowns of current cosmology concerns the effects of the large scale distribution of matter on the formation and evolution of dark matter haloes and galaxies. One main difficulty in answering this question lies in the absence of a robust and natural way of identifying the large scale environments and their characteristics. This work summarizes the NEXUS+ formalism, which extends and improves our multiscale scale-space MMF method. The new algorithm is very successful in tracing the Cosmic Web components, mainly due to its novel filtering of the density in logarithmic space. The method, due to its multiscale and hierarchical character, has the advantage of detecting all the cosmic structures, either prominent or tenuous, without preference for a certain size or shape. The resulting filamentary and wall networks can easily be characterized by their direction, thickness, mass density and density profile. These additional environmental properties allow us to investigate not only the effect of environment on haloes, but also how it correlates with the environment characteristics.
Fire control method and analytical model for large liquid hydrocarbon pool fires
NASA Technical Reports Server (NTRS)
Fenton, D. L.
1986-01-01
The dominant parameter governing the behavior of a liquid hydrocarbon (JP-5) pool fire is wind speed. The most effective method of controlling wind speed in the vicinity of a large circular (10 m dia.) pool fire is a set of concentric screens located outside the perimeter. Because the detailed behavior of the pool fire structure within one pool fire diameter is unknown, an analytical model supported by careful experiments is under development. As a first step toward this development, a regional pool fire model was constructed for the no-wind condition consisting of three zones -- liquid fuel, combustion, and plume -- where the predicted variables are the mass burning rate and the characteristic temperatures of the combustion and plume zones. This zone pool fire model can be modified to incorporate plume bending by wind, radiation absorption by soot particles, and a different ambient air flow entrainment rate. Results from the zone model are given for a pool diameter of 1.3 m and are found to reproduce values in the literature.
Spatial organization of chromatin domains and compartments in single chromosomes
NASA Astrophysics Data System (ADS)
Wang, Siyuan; Su, Jun-Han; Beliveau, Brian; Bintu, Bogdan; Moffitt, Jeffrey; Wu, Chao-Ting; Zhuang, Xiaowei
The spatial organization of chromatin critically affects genome function. Recent chromosome-conformation-capture studies have revealed topologically associating domains (TADs) as a conserved feature of chromatin organization, but how TADs are spatially organized in individual chromosomes remains unknown. Here, we developed an imaging method for mapping the spatial positions of numerous genomic regions along individual chromosomes and traced the positions of TADs in human interphase autosomes and X chromosomes. We observed that chromosome folding deviates from the ideal fractal-globule model at large length scales and that TADs are largely organized into two compartments spatially arranged in a polarized manner in individual chromosomes. Active and inactive X chromosomes adopt different folding and compartmentalization configurations. These results suggest that the spatial organization of chromatin domains can change in response to regulation.
A chemical proteomics approach for global analysis of lysine monomethylome profiling.
Wu, Zhixiang; Cheng, Zhongyi; Sun, Mingwei; Wan, Xuelian; Liu, Ping; He, Tieming; Tan, Minjia; Zhao, Yingming
2015-02-01
Methylation of lysine residues on histone proteins is known to play an important role in chromatin structure and function. However, non-histone protein substrates of this modification remain largely unknown. An effective approach for system-wide analysis of protein lysine methylation, particularly lysine monomethylation, is lacking. Here we describe a chemical proteomics approach for global screening for monomethyllysine substrates, involving chemical propionylation of monomethylated lysine, affinity enrichment of the modified monomethylated peptides, and HPLC/MS/MS analysis. Using this approach, we identified with high confidence 446 lysine monomethylation sites in 398 proteins, including three previously unknown histone monomethylation marks, representing the largest data set of protein lysine monomethylation described to date. Our data not only confirms previously discovered lysine methylation substrates in the nucleus and spliceosome, but also reveals new substrates associated with diverse biological processes. This method hence offers a powerful approach for dynamic study of protein lysine monomethylation under diverse cellular conditions and in human diseases. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
Parameter Estimation for a Pulsating Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason; Wimer, Nicholas; Lapointe, Caelan; Hayden, Torrey; Grooms, Ian; Rieker, Greg; Hamlington, Peter
2017-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown parameters, such as flow properties and boundary conditions, in numerical simulations of real-world engineering systems. Here we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a direct numerical simulation (DNS) with known boundary conditions and problem parameters, while the ABC procedure utilizes lower fidelity large eddy simulations. Using spatially-sparse statistics from the 2D buoyant jet DNS, we show that the ABC method provides accurate predictions of true jet inflow parameters. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for predicting flow information, such as boundary conditions, that can be difficult to determine experimentally.
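A minimal ABC rejection sampler shows the core of the approach: draw parameters from a prior, simulate, and keep the draws whose summary statistic falls within a tolerance of the "truth" statistic. The toy simulator, prior, and tolerance below are our own assumptions standing in for the LES runs and DNS statistics.

```python
# Hedged sketch of ABC rejection sampling for one unknown inflow parameter.
import numpy as np

rng = np.random.default_rng(3)
truth_stat = 2.0                          # sparse statistic from the "truth" run

def simulate(theta):
    # stand-in for a low-fidelity simulation returning the same statistic
    return theta + rng.normal(0, 0.1)

def abc_rejection(n_samples=20000, eps=0.05):
    prior = rng.uniform(0.0, 5.0, n_samples)      # prior over the parameter
    stats = np.array([simulate(t) for t in prior])
    return prior[np.abs(stats - truth_stat) < eps]  # accepted draws

post = abc_rejection()
print(post.mean(), post.std())            # approximate posterior summary
```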
Simple scheme for encoding and decoding a qubit in unknown state for various topological codes
Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał
2015-01-01
We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, defected-lattice code, topological subsystem code and 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for a noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. Similar schemes can be built for two other codes. We show that the fidelity of the protected qubit in the noisy scenario in a large code size limit is of , where p is the probability of error on a single qubit per time step. Regarding the Haah code, we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905
Capture-recapture studies for multiple strata including non-markovian transitions
Brownie, C.; Hines, J.E.; Nichols, J.D.; Pollock, K.H.; Hestbeck, J.B.
1993-01-01
We consider capture-recapture studies where release and recapture data are available from each of a number of strata on every capture occasion. Strata may, for example, be geographic locations or physiological states. Movement of animals among strata occurs with unknown probabilities, and estimation of these unknown transition probabilities is the objective. We describe a computer routine for carrying out the analysis under a model that assumes Markovian transitions and under reduced-parameter versions of this model. We also introduce models that relax the Markovian assumption and allow 'memory' to operate (i.e., allow dependence of the transition probabilities on the previous state). For these models, we suggest an analysis based on a conditional likelihood approach. Methods are illustrated with data from a large study on Canada geese (Branta canadensis) banded in three geographic regions. The assumption of Markovian transitions is rejected convincingly for these data, emphasizing the importance of the more general models that allow memory.
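In the simplest Markovian reading of such data, movement probabilities can be estimated from stratified release-recapture counts. The sketch below ignores survival and detection probabilities, which the paper's models estimate jointly, so it is only a didactic simplification.

```python
# Toy sketch: row-wise MLE of Markovian movement probabilities among strata.
import numpy as np

# counts[i, j] = animals released in stratum i, next recaptured in stratum j
counts = np.array([[120,  30, 10],
                   [ 25, 140, 35],
                   [  8,  40, 90]])

psi_hat = counts / counts.sum(axis=1, keepdims=True)  # transition estimates
print(np.round(psi_hat, 3))

# A 'memory' model would condition on the previous stratum as well,
# giving one such matrix per previous state.
```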
Robust Coordination for Large Sets of Simple Rovers
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian
2006-01-01
The ability to coordinate sets of rovers in an unknown environment is critical to the long-term success of many of NASA's exploration missions. Such coordination policies must be able to adapt in unmodeled or partially modeled domains and must be robust against environmental noise and rover failures. In addition, such coordination policies must accommodate a large number of rovers without excessive and burdensome hand-tuning. In this paper we present a distributed coordination method that addresses these issues in the domain of controlling a set of simple rovers. The application of these methods allows reliable and efficient robotic exploration in dangerous, dynamic, and previously unexplored domains. Most control policies for space missions are directly programmed by engineers or created through the use of planning tools, and are appropriate for single-rover missions or missions requiring the coordination of a small number of rovers. Such methods typically require significant amounts of domain knowledge and are difficult to scale to large numbers of rovers. The method described in this article aims to address cases where a large number of rovers need to coordinate to solve a complex time-dependent problem in a noisy environment. In this approach, each rover decomposes a global utility, representing the overall goal of the system, into rover-specific utilities that properly assign credit to the rover's actions. Each rover then has the responsibility to create a control policy that maximizes its own rover-specific utility. We show a method of creating rover utilities that are "aligned" with the global utility, such that when the rovers maximize their own utility, they also maximize the global utility. In addition, we show that our method creates rover utilities that allow the rovers to create their control policies quickly and reliably. Our distributed learning method allows large sets of rovers to be used in unmodeled domains while providing robustness against rover failures and changing environments. In experimental simulations we show that our method scales well with large numbers of rovers in addition to being robust against noisy sensor inputs and noisy servo control, achieving up to a 400% performance improvement over standard machine learning methods.
Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek
2018-03-01
Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Brendon J.; Foreman-Mackey, Daniel; Hogg, David W., E-mail: bj.brewer@auckland.ac.nz
We present and implement a probabilistic (Bayesian) method for producing catalogs from images of stellar fields. The method is capable of inferring the number of sources N in the image and can also handle the challenges introduced by noise, overlapping sources, and an unknown point-spread function. The luminosity function of the stars can also be inferred, even when the precise luminosity of each star is uncertain, via the use of a hierarchical Bayesian model. The computational feasibility of the method is demonstrated on two simulated images with different numbers of stars. We find that our method successfully recovers the input parameter values along with principled uncertainties even when the field is crowded. We also compare our results with those obtained from the SExtractor software. While the two approaches largely agree about the fluxes of the bright stars, the Bayesian approach provides more accurate inferences about the faint stars and the number of stars, particularly in the crowded case.
NASA Astrophysics Data System (ADS)
Liu, Tingting; Liu, Hai; Chen, Zengzhao; Chen, Yingying; Wang, Shengming; Liu, Zhi; Zhang, Hao
2018-05-01
Infrared (IR) spectra are the fingerprints of molecules, and the spectral band locations closely relate to the structure of a molecule. Thus, specimen identification can be performed based on IR spectroscopy. However, spectrally overlapping components prevent the specific identification of hyperfine molecular information of different substances. In this paper, we propose a fast blind reconstruction approach for IR spectra based on sparse and redundant representations over a dictionary. The proposed method recovers the spectrum with a discrete wavelet transform dictionary adapted to its content. The experimental results demonstrate that the proposed method outperforms other state-of-the-art methods. The proposed method also removes the instrument aging issue to a large extent, making reconstructed IR spectra a more convenient tool for extracting and interpreting the features of an unknown material.
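The sparse-representation idea can be illustrated with a small orthogonal matching pursuit over a generic dictionary. The paper uses a discrete wavelet transform dictionary, whereas the random dictionary and all sizes below are illustrative assumptions.

```python
# Hedged sketch: sparse spectral recovery by orthogonal matching pursuit.
import numpy as np

def omp(D, y, k):
    """Greedy OMP: pick k dictionary atoms that best explain the spectrum y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
D = rng.normal(size=(200, 400))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(400); x_true[[5, 50, 300]] = [1.0, -0.7, 0.4]
y = D @ x_true + 0.01 * rng.normal(size=200)
print(np.nonzero(omp(D, y, 3))[0])        # should recover atoms 5, 50, 300
```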
Li, Yongming; Ma, Zhiyao; Tong, Shaocheng
2017-09-01
The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for large-scale stochastic nonlinear systems of pure-feedback form. The nonlinear systems considered in this paper possess unstructured uncertainties, unknown interconnected terms and unknown nonaffine nonlinear faults. Fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problems of structured uncertainties can be solved. An adaptive fuzzy state observer is designed to solve the nonmeasurable state problem. By combining barrier Lyapunov function theory with adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability, and the system outputs are constrained to a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.
Aeromagnetic Survey in Afghanistan: A Website for Distribution of Data
Abraham, Jared D.; Anderson, Eric D.; Drenth, Benjamin J.; Finn, Carol A.; Kucks, Robert P.; Lindsay, Charles R.; Phillips, Jeffrey D.; Sweeney, Ronald E.
2007-01-01
Afghanistan's geologic setting indicates significant natural resource potential. While important mineral deposits and petroleum resources have been identified, much of the country's potential remains unknown. Airborne geophysical surveys are a well-accepted and cost-effective method for obtaining information on the geological setting of an area without the need to be physically located on the ground. Because of the security situation and the large areas of Afghanistan that have not been covered by geophysical exploration methods, a regional airborne geophysical survey was proposed. Acting upon the request of the Islamic Republic of Afghanistan Ministry of Mines, the U.S. Geological Survey contracted with the Naval Research Laboratory to jointly conduct an airborne geophysical and remote sensing survey of Afghanistan.
Half-blind remote sensing image restoration with partly unknown degradation
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
The problem of image restoration has been extensively studied for its practical importance and theoretical interest. This paper mainly discusses the problem of image restoration with a partly unknown kernel: the form of the degradation kernel is known, but its parameters are unknown. With this model, we must estimate the parameters of the Gaussian kernel and the true image simultaneously. For this new problem, a total variation restoration model is proposed and an intersect direction iteration algorithm is designed. Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measurement (SSIM) are used to measure the performance of the method. Numerical results show that the kernel parameters can be estimated accurately, and the new method achieves both much higher PSNR and much higher SSIM than the expectation maximization (EM) method in many cases. In addition, the accuracy of the estimation is not sensitive to noise. Furthermore, even though the support of the kernel is unknown, this method can still produce an accurate estimate.
Rowland, Mark S. [Alamo, CA]; Snyderman, Neal J. [Berkeley, CA]
2012-04-10
A neutron detector system for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and in real time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system counts neutrons from the unknown source and detects excess grouped neutrons to identify fission in the unknown source.
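The "excess grouped neutrons" idea can be illustrated with a variance-to-mean (Feynman-Y-style) statistic: a non-fissile (Poisson) source gives a counts-per-gate variance equal to the mean, while correlated fission chains push the variance above the mean. The gate model and numbers below are toy assumptions, not the patented processing chain.

```python
# Toy sketch: flagging fission via excess variance in gated neutron counts.
import numpy as np

def excess_ratio(counts_per_gate):
    c = np.asarray(counts_per_gate, dtype=float)
    return c.var() / c.mean() - 1.0        # 0 for Poisson, > 0 for fission

rng = np.random.default_rng(7)
background = rng.poisson(4.0, 10000)       # non-fissile: ratio near 0
bursts = rng.poisson(0.5, 10000)           # fission events per gate
fissile = np.array([rng.poisson(4.0) + rng.poisson(3.0 * b) for b in bursts])

for name, data in [("background", background), ("fissile", fissile)]:
    print(name, round(excess_ratio(data), 3))
```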
New Primary Standards for Establishing SI Traceability for Moisture Measurements in Solid Materials
NASA Astrophysics Data System (ADS)
Heinonen, M.; Bell, S.; Choi, B. Il; Cortellessa, G.; Fernicola, V.; Georgin, E.; Hudoklin, D.; Ionescu, G. V.; Ismail, N.; Keawprasert, T.; Krasheninina, M.; Aro, R.; Nielsen, J.; Oğuz Aytekin, S.; Österberg, P.; Skabar, J.; Strnad, R.
2018-01-01
A European research project METefnet addresses a fundamental obstacle to improving energy-intensive drying process control: due to ambiguous reference analysis methods and insufficient methods for estimating uncertainty in moisture measurements, the achievable accuracy in the past was limited and measurement uncertainties were largely unknown. This paper reports the developments in METefnet that provide a sound basis for the SI traceability: four new primary standards for realizing the water mass fraction were set up, analyzed and compared to each other. The operation of these standards is based on combining sample weighing with different water vapor detection techniques: cold trap, chilled mirror, electrolytic and coulometric Karl Fischer titration. The results show that an equivalence of 0.2 % has been achieved between the water mass fraction realizations and that the developed methods are applicable to a wide range of materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing their statistical properties are two challenging tasks. To aid in these tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method depends on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
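Two of the sampling families compared in such studies are easy to sketch. The snippet below contrasts node and edge sampling on a scale-free graph by how well the sampled mean degree tracks the original; the graph model, rates, and statistic are illustrative choices, not the benchmark's full protocol.

```python
# Hedged sketch: node vs. edge sampling on a scale-free graph.
import random
import networkx as nx

random.seed(8)
G = nx.barabasi_albert_graph(2000, 3)      # original mean degree near 6

def node_sample(G, rate):
    nodes = random.sample(list(G.nodes), int(rate * G.number_of_nodes()))
    return G.subgraph(nodes)

def edge_sample(G, rate):
    edges = random.sample(list(G.edges), int(rate * G.number_of_edges()))
    return nx.Graph(edges)

for rate in (0.1, 0.3, 0.5):
    ns, es = node_sample(G, rate), edge_sample(G, rate)
    print(rate,
          round(2 * ns.number_of_edges() / max(ns.number_of_nodes(), 1), 2),
          round(2 * es.number_of_edges() / max(es.number_of_nodes(), 1), 2))
```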
Searching molecular structure databases with tandem mass spectra using CSI:FingerID
Dührkop, Kai; Shen, Huibin; Meusel, Marvin; Rousu, Juho; Böcker, Sebastian
2015-01-01
Metabolites provide a direct functional signature of cellular state. Untargeted metabolomics experiments usually rely on tandem MS to identify the thousands of compounds in a biological sample. Today, the vast majority of metabolites remain unknown. We present a method for searching molecular structure databases using tandem MS data of small molecules. Our method computes a fragmentation tree that best explains the fragmentation spectrum of an unknown molecule. We use the fragmentation tree to predict the molecular structure fingerprint of the unknown compound using machine learning. This fingerprint is then used to search a molecular structure database such as PubChem. Our method is shown to improve on the competing methods for computational metabolite identification by a considerable margin. PMID:26392543
NASA Astrophysics Data System (ADS)
Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng
2002-03-01
The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot) efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region, Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions. In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.
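The strong/weak decomposition underlying the accelerated matrix-vector product can be shown structurally: near interactions are applied exactly, far interactions through a compressed operator. The low-rank SVD below is only a stand-in for the actual spectral (NSA) evaluation, and the kernel and sizes are toy assumptions.

```python
# Structural sketch of a strong/weak split matrix-vector multiply.
import numpy as np

n, Ls, r = 400, 10.0, 8
x = np.linspace(0.0, 40.0, n)
A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))    # toy interaction kernel

near = np.abs(x[:, None] - x[None, :]) <= Ls         # strong-region mask
A_strong = np.where(near, A, 0.0)                    # exact near interactions
U, s, Vt = np.linalg.svd(np.where(near, 0.0, A))     # compress the weak region
U, s, Vt = U[:, :r], s[:r], Vt[:r]

def matvec(v):
    return A_strong @ v + U @ (s * (Vt @ v))         # exact near + fast far

v = np.random.default_rng(9).normal(size=n)
print(np.max(np.abs(matvec(v) - A @ v)))             # approximation error (small)
```

The smooth far-field part of such kernels is numerically low-rank, which is why the compressed term can stand in for the full far-field sum; the real NSA achieves this through the spectral Green's function representation rather than an SVD.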
Component spectra extraction from terahertz measurements of unknown mixtures.
Li, Xian; Hou, D B; Huang, P J; Cai, J H; Zhang, G X
2015-10-20
The aim of this work is to extract component spectra from unknown mixtures in the terahertz region. To that end, a method, hard modeling factor analysis (HMFA), was applied to resolve terahertz spectral matrices collected from the unknown mixtures. This method does not require any expertise of the user and allows the consideration of nonlinear effects such as peak variations or peak shifts. It describes the spectra using a peak-based nonlinear mathematic model and builds the component spectra automatically by recombination of the resolved peaks through correlation analysis. Meanwhile, modifications on the method were made to take the features of terahertz spectra into account and to deal with the artificial baseline problem that troubles the extraction process of some terahertz spectra. In order to validate the proposed method, simulated wideband terahertz spectra of binary and ternary systems and experimental terahertz absorption spectra of amino acids mixtures were tested. In each test, not only the number of pure components could be correctly predicted but also the identified pure spectra had a good similarity with the true spectra. Moreover, the proposed method associated the molecular motions with the component extraction, making the identification process more physically meaningful and interpretable compared to other methods. The results indicate that the HMFA method with the modifications can be a practical tool for identifying component terahertz spectra in completely unknown mixtures. This work reports the solution to this kind of problem in the terahertz region for the first time, to the best of the authors' knowledge, and represents a significant advance toward exploring physical or chemical mechanisms of unknown complex systems by terahertz spectroscopy.
Molecular toolbox for the identification of unknown genetically modified organisms.
Ruttink, Tom; Demeyer, Rolinde; Van Gulck, Elke; Van Droogenbroeck, Bart; Querci, Maddalena; Taverniers, Isabel; De Loose, Marc
2010-03-01
Competent laboratories monitor genetically modified organisms (GMOs) and products derived thereof in the food and feed chain in the framework of labeling and traceability legislation. In addition, screening is performed to detect the unauthorized presence of GMOs including asynchronously authorized GMOs or GMOs that are not officially registered for commercialization (unknown GMOs). Currently, unauthorized or unknown events are detected by screening blind samples for commonly used transgenic elements, such as p35S or t-nos. If (1) positive detection of such screening elements shows the presence of transgenic material and (2) all known GMOs are tested by event-specific methods but are not detected, then the presence of an unknown GMO is inferred. However, such evidence is indirect because it is based on negative observations and inconclusive because the procedure does not identify the causative event per se. In addition, detection of unknown events is hampered in products that also contain known authorized events. Here, we outline alternative approaches for analytical detection and GMO identification and develop new methods to complement the existing routine screening procedure. We developed a fluorescent anchor-polymerase chain reaction (PCR) method for the identification of the sequences flanking the p35S and t-nos screening elements. Thus, anchor-PCR fingerprinting allows the detection of unique discriminative signals per event. In addition, we established a collection of in silico calculated fingerprints of known events to support interpretation of experimentally generated anchor-PCR GM fingerprints of blind samples. Here, we first describe the molecular characterization of a novel GMO, which expresses recombinant human intrinsic factor in Arabidopsis thaliana. Next, we purposefully treated the novel GMO as a blind sample to simulate how the new methods lead to the molecular identification of a novel unknown event without prior knowledge of its transgene sequence. The results demonstrate that the new methods complement routine screening procedures by providing direct conclusive evidence and may also be useful to resolve masking of unknown events by known events.
Method for identifying known materials within a mixture of unknowns
Wagner, John S.
2000-01-01
One or both of two methods and systems are used to determine the concentration of a known material in an unknown mixture on the basis of the measured interaction of electromagnetic waves with the mixture. One technique is to use a multivariate-analysis patch technique to develop, by an evolutionary algorithm, a library of optimized patches of spectral signatures of known materials containing only those pixels most descriptive of the known materials. The identity and concentration of the known materials within the unknown mixture are then determined by minimizing the residuals between the measurements from the library of optimized patches and the measurements from the same pixels of the unknown mixture. Another technique is to train a neural network by a genetic algorithm to determine the identity and concentration of known materials in the unknown mixture. The two techniques may be combined into an expert system providing cross-checks for accuracy.
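The residual-minimization idea can be illustrated with a simple stand-in: non-negative least squares against a spectral library recovers concentrations of known materials from a measured mixture. The library, noise level, and use of scipy's nnls are illustrative assumptions; the patent's patch selection by evolutionary algorithm is not reproduced here:

```python
# Sketch of the residual-minimization idea: estimate concentrations of known
# materials in an unknown mixture by non-negative least squares against a
# spectral library. Illustrative, not the patented patch/evolutionary method.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
library = rng.random((200, 3))          # 200 spectral "pixels" x 3 known materials
true_conc = np.array([0.5, 0.0, 0.3])
mixture = library @ true_conc + 0.01 * rng.standard_normal(200)

conc, residual = nnls(library, mixture)  # minimizes ||library @ c - mixture||
print("estimated concentrations:", conc.round(2))
```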
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups, where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one per group, are then run, each treating the other unknown parameters appearing in its regression equation as if they were known perfectly, with those values provided by the recursive least squares estimates from the other groups. This enables the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. The invention is presented with application to the identification of mass and thruster properties for a thruster-controlled spacecraft.
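A toy version of the grouped identification conveys the idea: a model that is nonlinear in the full parameter vector is linear in each group when the other group is held at its current estimate, so simple linear estimators can be alternated. In this sketch, batch least squares stands in for the recursive least squares updates, and the bilinear model is an assumed example, not the patented spacecraft formulation:

```python
# Sketch of the grouped-identification idea: y = a*u + (a*b)*v is nonlinear
# in (a, b) jointly, but linear in each parameter when the other is held at
# its current estimate, so two simple estimators can run alternately.
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(500), rng.standard_normal(500)
a_true, b_true = 2.0, -0.5
y = a_true * u + a_true * b_true * v + 0.01 * rng.standard_normal(500)

a_hat, b_hat = 1.0, 0.0                  # initial guesses
for _ in range(20):
    # solve for a with b fixed: y = a * (u + b_hat * v)
    x = u + b_hat * v
    a_hat = (x @ y) / (x @ x)
    # solve for b with a fixed: y - a_hat*u = (a_hat*b) * v
    b_hat = (v @ (y - a_hat * u)) / (a_hat * (v @ v))
print(a_hat, b_hat)                      # converges near (2.0, -0.5)
```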
Design of a DNA chip for detection of unknown genetically modified organisms (GMOs).
Nesvold, Håvard; Kristoffersen, Anja Bråthen; Holst-Jensen, Arne; Berdal, Knut G
2005-05-01
Unknown genetically modified organisms (GMOs) have not undergone a risk evaluation, and hence might pose a danger to health and environment. There are, today, no methods for detecting unknown GMOs. In this paper we propose a novel method intended as a first step in an approach for detecting unknown genetically modified (GM) material in a single plant. A model is designed where biological and combinatorial reduction rules are applied to a set of DNA chip probes containing all possible sequences of uniform length n, creating probes capable of detecting unknown GMOs. The model is theoretically tested for Arabidopsis thaliana Columbia, and the probabilities for detecting inserts and receiving false positives are assessed for various parameters for this organism. From a theoretical standpoint, the model looks very promising but should be tested further in the laboratory. The model and algorithms will be available upon request to the corresponding author.
Harnessing Diversity towards the Reconstructing of Large Scale Gene Regulatory Networks
Yamanaka, Ryota; Kitano, Hiroaki
2013-01-01
Elucidating gene regulatory networks (GRNs) from large-scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus-driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy of combining many algorithms does not always lead to performance improvement commensurate with the cost of consensus and (ii) TopkNet, integrating only high-performance algorithms, provides a significant performance improvement compared with the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be useful for determining potentially optimal algorithms for reconstruction of unknown regulatory networks: if the expression data associated with a known regulatory network are similar to those for an unknown regulatory network, the algorithms found optimal for the known network can be repurposed to infer the unknown one. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework TopkNet, together with the similarity measure proposed in this study, provides a powerful strategy towards harnessing the wisdom of the crowds in reconstruction of unknown regulatory networks. PMID:24278007
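The consensus step can be pictured with a small rank-aggregation sketch: each algorithm scores candidate regulator-to-target edges, per-algorithm ranks are averaged, and the top-k edges are retained. The scores below are hypothetical and the aggregation rule is an assumption for illustration, not the TopkNet implementation:

```python
# Sketch of consensus GRN inference by rank averaging: each algorithm scores
# candidate regulator->target edges; ranks are averaged and top-k edges kept.
# Hypothetical scores; not the TopkNet implementation.
import numpy as np

edges = ["g1->g2", "g1->g3", "g2->g3", "g3->g1"]
scores = np.array([            # rows: algorithms, cols: edges
    [0.9, 0.2, 0.7, 0.1],
    [0.8, 0.3, 0.9, 0.2],
    [0.7, 0.1, 0.8, 0.4],
])
ranks = scores.argsort(axis=1).argsort(axis=1)   # higher score -> higher rank
consensus = ranks.mean(axis=0)                   # average rank per edge
top_k = [edges[i] for i in np.argsort(consensus)[::-1][:2]]
print(top_k)                                     # the two most-supported edges
```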
Method and apparatus for sensor fusion
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)
1991-01-01
A method and apparatus for the fusion of data from optical and radar sensors by an error-minimization procedure are presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. It involves deriving an incomplete surface model from an optical sensor, with the unknown characteristics of the surface represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and observed RCS values, and improving the surface model from the results of the comparison. The theoretical RCS may be computed from the surface model in several ways; one RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source, and the optical image provides that independent information.
NASA Astrophysics Data System (ADS)
Kotthoff, Matthias; Bücking, Mark
2018-04-01
Per- and polyfluoroalkyl substances (PFAS) represent a versatile group of ubiquitously occurring chemicals of increasing regulatory concern. The past years have led to an ever-expanding portfolio of detected anthropogenic PFAS in numerous products encountered in daily life. Yet no clear picture of the full range of individual substances that comprise PFAS is available, and this challenges the analytical and engineering sciences. Authorities struggle to cope with uncertainties in managing the risk of harm posed by PFAS. This is a result of an incomplete understanding of the range of compounds present in differing products: there are analytical uncertainties in identifying PFAS and in estimating the concentrations of the total PFAS load, and many individual molecules remain unknown. Four major trends from the chemical perspective will shape PFAS research for the next decade. (1) Mobility: a wide and dynamic distribution of short-chain PFAS due to their high polarity, persistency and volatility. (2) Substitution of regulated substances: the ban or restriction of individual molecules will lead to replacement with substitutes of similar concern. (3) Increase in structural diversity of existing PFAS molecules: the introduction of, e.g., hydrogen and chlorine atoms instead of fluorine, as well as branching and cross-linking, leads to a high versatility of unknown target molecules. (4) Unknown "dark matter": the amount, identity, formation pathways, and transformation dynamics of polymers and PFAS precursors are largely unknown. These directions require optimized analytical setups, especially multi-methods, and semi-specific tools to determine PFAS sum parameters in any relevant matrix.
Towards de novo identification of metabolites by analyzing tandem mass spectra.
Böcker, Sebastian; Rasche, Florian
2008-08-15
Mass spectrometry is among the most widely used technologies in proteomics and metabolomics. Being a high-throughput method, it produces large amounts of data that necessitate automated analysis of the spectra. Clearly, database search methods for protein analysis can easily be adapted to analyze metabolite mass spectra. But for metabolites, de novo interpretation of spectra is even more important than for protein data, because metabolite spectra databases cover only a small fraction of naturally occurring metabolites: even the model plant Arabidopsis thaliana has a large number of enzymes whose substrates and products remain unknown. The field of bio-prospection searches biologically diverse areas for metabolites which might serve as pharmaceuticals. De novo identification of metabolite mass spectra requires new concepts and methods since, unlike proteins, metabolites possess a non-linear molecular structure. In this work, we introduce a method for fully automated de novo identification of metabolites from tandem mass spectra. Mass spectrometry data is usually assumed to be insufficient for identification of molecular structures, so we estimate the molecular formula of the unknown metabolite, a crucial step for its identification. The method first calculates all molecular formulas that explain the parent peak mass. Then, a graph is built whose vertices correspond to molecular formulas of all peaks in the fragmentation mass spectra and whose edges correspond to hypothetical fragmentation steps. Our algorithm then calculates the maximum-scoring subtree of this graph: each peak in the spectra must be scored at most once, so the subtree contains only one explanation per peak. Unfortunately, finding this subtree is NP-hard. We suggest three exact algorithms (including one fixed parameter tractable algorithm) as well as two heuristics to solve the problem. Tests on real mass spectra show that the FPT algorithm and the heuristics solve the problem suitably fast and provide excellent results: for all 32 test compounds the correct solution was among the top five suggestions; for 26 compounds the first suggestion of the exact algorithm was correct. http://www.bio.inf.uni-jena.de/tandemms
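The first step, finding molecular formulas that explain the parent peak mass, can be sketched by brute force over CHNO compositions; the paper's algorithms are far more efficient and also handle the subsequent fragmentation-tree computation, which is omitted here:

```python
# Sketch of the first step: enumerate CHNO molecular formulas whose
# monoisotopic mass matches the parent peak within a tolerance.
# Illustrative brute force; the paper uses far more efficient algorithms.
from itertools import product

MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def candidate_formulas(parent_mass, tol=0.005, max_atoms=20):
    hits = []
    for c, h, n, o in product(range(max_atoms), repeat=4):
        m = c*MASS["C"] + h*MASS["H"] + n*MASS["N"] + o*MASS["O"]
        if abs(m - parent_mass) <= tol:
            hits.append(f"C{c}H{h}N{n}O{o}")
    return hits

print(candidate_formulas(180.063388))   # "C6H12N0O6" (glucose) should appear
```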
NASA Astrophysics Data System (ADS)
Naseralavi, S. S.; Salajegheh, E.; Fadaee, M. J.; Salajegheh, J.
2014-06-01
This paper presents a technique for damage detection in structures under unknown periodic excitations using the transient displacement response. The method is capable of identifying the damage parameters without finding the input excitations. We first define the concept of displacement space as a linear space in which each point represents the displacements of the structure under a given excitation and initial condition. Roughly speaking, the method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Building on this geometrical viewpoint, an equation called the kernel parallelization equation (KPE) is derived for damage detection under unknown periodic excitations, and a sensitivity-based algorithm for solving the KPE is proposed accordingly. The method is evaluated via three case studies under periodic excitations, which confirm its efficiency.
Long Valley Caldera-Mammoth Mountain unrest: The knowns and unknowns
Hill, David P.
2017-01-01
This perspective is based largely on my study of the Long Valley Caldera (California, USA) over the past 40 years. Here, I’ll examine the “knowns” and the “known unknowns” of the complex tectonic–magmatic system of the Long Valley Caldera volcanic complex. I will also offer a few brief thoughts on the “unknown unknowns” of this system.
Epidemiology of neuroendocrine cancers in an Australian population.
Luke, Colin; Price, Timothy; Townsend, Amanda; Karapetis, Christos; Kotasek, Dusan; Singhal, Nimit; Tracey, Elizabeth; Roder, David
2010-06-01
The aim was to explore incidence, mortality and case survivals for invasive neuroendocrine cancers in an Australian population and consider cancer control implications. Directly age-standardised incidence and mortality rates were investigated from 1980 to 2006, plus disease-specific survivals. Annual incidence per 100,000 increased from 1.7 in 1980-1989 to 3.3 in 2000-2006. A corresponding mortality increase was not observed, although numbers of deaths were low, reducing statistical power. Increases in incidence affected both sexes and were more evident for female lung, large bowel (excluding appendix), and unknown primary site. Common sites were lung (25.9%), large bowel (23.3%) (40.9% were appendix), small intestine (20.6%), unknown primary (15.0%), pancreas (6.5%), and stomach (3.7%). Site distribution did not vary by sex (p = 0.260). Younger ages at diagnosis applied for lung (p = 0.002) and appendix (p < 0.001) and older ages for small intestine (p < 0.001) and unknown primary site (p < 0.001). Five-year survival was 68.5% for all sites combined, with secular increases (p < 0.001). After adjusting for age and diagnostic period, survivals were higher for appendix and lower for unknown primary site, pancreas, and colon (excluding appendix). Incidence rates are increasing. Research is needed into possible aetiological factors for lung and large-bowel sites, including tobacco smoking, and excess body weight and lack of exercise, respectively; and Crohn's disease as a possible precursor condition.
Large-scale structure prediction by improved contact predictions and model quality assessment.
Michel, Mirco; Menéndez Hurtado, David; Uziela, Karolis; Elofsson, Arne
2017-07-15
Accurate contact predictions can be used for predicting the structure of proteins. Until recently these methods were limited to very big protein families, decreasing their utility. However, recent progress by combining direct coupling analysis with machine learning methods has made it possible to predict accurate contact maps for smaller families. To what extent these predictions can be used to produce accurate models of the families is not known. We present the PconsFold2 pipeline that uses contact predictions from PconsC3, the CONFOLD folding algorithm and model quality estimations to predict the structure of a protein. We show that the model quality estimation significantly increases the number of models that can reliably be identified. Finally, we apply PconsFold2 to 6379 Pfam families of unknown structure and find that PconsFold2 can, with an estimated 90% specificity, predict the structure of up to 558 Pfam families of unknown structure. Out of these, 415 have not been reported before. Datasets as well as models of all the 558 Pfam families are available at http://c3.pcons.net/. All programs used here are freely available. Contact: arne@bioinfo.se.
Geodynamic Effects of Ocean Tides: Progress and Problems
NASA Technical Reports Server (NTRS)
Ray, Richard
1999-01-01
Satellite altimetry, particularly Topex/Poseidon, has markedly improved our knowledge of global tides, thereby allowing significant progress on some longstanding problems in geodynamics. This paper reviews some of that progress. Emphasis is given to global-scale problems, particularly those falling within the mandate of the new IERS Special Bureau for Tides: angular momentum, gravitational field, geocenter motion. For this discussion I use primarily the new ocean tide solutions GOT99.2, CSR4.0, and TPXO.4 (for which G. Egbert has computed inverse-theoretic error estimates), and I concentrate on new results in angular momentum and gravity and their solid-earth implications. One example is a new estimate of the effective tidal Q at the M_2 frequency, based on combining these ocean models with tidal estimates from satellite laser ranging. Three especially intractable problems are also addressed: (1) determining long-period tides in the Arctic [large unknown effect on the inertia tensor, particularly for Mf]; (2) determining the global psi_1 tide [large unknown effect on interpretations of gravimetry for the near-diurnal free wobble]; and (3) determining radiational tides [large unknown temporal variations at important frequencies]. Problems (2) and (3) are related.
Agüera, Ana; Martínez Bueno, María Jesús; Fernández-Alba, Amadeo R
2013-06-01
Since the so-called emerging contaminants were established as a new group of pollutants of environmental concern, a great effort has been devoted to understanding their distribution, fate and effects in the environment. After more than 20 years of work, knowledge about these contaminants has improved significantly, but a large information gap remains on the growing number of new potential contaminants that are appearing, and especially on their unpredictable transformation products. Although the environmental problem arising from emerging contaminants must be addressed from an interdisciplinary point of view, analytical chemistry clearly plays an important role as the first step of the study, as it allows establishing the presence of chemicals in the environment, estimating their concentration levels, identifying sources and determining their degradation pathways. These tasks involve serious difficulties requiring different analytical solutions adjusted to purpose. Thus, the complexity of the matrices requires highly selective analytical methods; the large number and variety of compounds potentially present in the samples demand the application of wide-scope methods; the low concentrations at which these contaminants are present require high detection sensitivity; and the characterisation of unknowns places high demands on confirmation and structural information. New developments in analytical instrumentation have been applied to overcome these difficulties. Equally important has been the development of new specific software packages intended for data acquisition and, in particular, for post-run analysis. The use of sophisticated software tools has enabled successful screening analyses, determining several hundred analytes, and has assisted in the timely structural elucidation of unknown compounds.
Method for genetic identification of unknown organisms
Colston, Jr., Billy W.; Fitch, Joseph P.; Hindson, Benjamin J.; Carter, Chance J.; Beer, Neil Reginald
2016-08-23
A method of rapid, genome- and proteome-based identification of unknown pathogenic or non-pathogenic organisms in a complex sample. The entire sample is analyzed by creating millions of emulsion-encapsulated microdroplets, each containing a single pathogenic or non-pathogenic organism-sized particle and the appropriate reagents for amplification. Following amplification, the amplified product is analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Vesselinov, Velimir V.; Stanev, Valentin
The ShiftNMFk1.2 code, or as we call it, GreenNMFk, implements a hybrid algorithm combining unsupervised adaptive machine learning with a Green's function inverse method. GreenNMFk allows efficient, high-performance de-mixing and feature extraction of a multitude of nonnegative signals that change their shape while propagating through a medium. The signals are mixed and recorded by a network of uncorrelated sensors. The code couples Non-negative Matrix Factorization (NMF) with an inverse-analysis Green's function method. GreenNMFk synergistically decomposes the recorded mixtures, finds the number of unknown sources, and uses the Green's function of the governing partial differential equation to identify the unknown sources and their characteristics. GreenNMFk can be applied directly to any problem governed by a known parabolic partial differential equation where mixtures of an unknown number of sources are measured at multiple locations. The full GreenNMFk method is the subject of LANL U.S. Patent application S133364.000 (August 2017). The ShiftNMFk 1.2 version here is a toy version of the method that can work with a limited number of unknown sources (4 or fewer).
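The NMF de-mixing stage alone can be sketched with scikit-learn: nonnegative sensor records are factored into source signatures and mixing weights. The data are synthetic, and the Green's-function coupling and the automatic estimation of the number of sources, which are the heart of GreenNMFk, are not shown:

```python
# Sketch of the NMF de-mixing stage alone: factor nonnegative sensor records
# into source signatures and mixing weights. The actual GreenNMFk couples this
# with a Green's-function inverse model and estimates the number of sources.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
sources = np.abs(rng.standard_normal((2, 300)))   # 2 unknown source signals
mixing = np.abs(rng.standard_normal((6, 2)))      # 6 sensors
records = mixing @ sources                        # observed mixtures

model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(records)   # estimated mixing weights (6 x 2)
H = model.components_              # estimated source signatures (2 x 300)
```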
Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M
2017-01-01
Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
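The binomial idea can be sketched as follows: given the relative coverages of two fragments, a random-ligation contact probability is formed and the observed pair count is tested against a binomial null. The coverage-based probability below is a simplification for illustration, not the GOTHiC package internals:

```python
# Sketch of the binomial idea: given fragment coverages, the chance that a
# read pair links fragments i and j at random is approximated from relative
# coverage, and the observed count is tested against Binomial(N, p_ij).
from scipy.stats import binom

N = 1_000_000                 # total read pairs
cov_i, cov_j = 0.002, 0.003   # relative coverage of the two fragments
p_ij = 2 * cov_i * cov_j      # random-ligation contact probability
observed = 25                 # read pairs linking i and j

p_value = binom.sf(observed - 1, N, p_ij)   # P(X >= observed) under the null
print(p_value)                # small p-value -> likely a true interaction
```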
Czihal, M; Tatò, F; Förster, S; Rademacher, A; Schulze-Koops, H; Hoffmann, U
2010-01-01
To evaluate the clinical characteristics and imaging results (CDS, 18-FDG-PET) of patients with large vessel giant cell arteritis (LV-GCA) presenting as fever of unknown origin (FUO). From a series of 82 patients with GCA, we identified 8 patients with FUO as the initial disease manifestation. Clinical characteristics and results of CDS and 18-FDG-PET were analysed, and patients with FUO were compared with those with other clinical manifestations of GCA. 18-FDG-PET scans were available for 6/8 patients, revealing enhanced tracer uptake in the thoracic aorta and the aortic branches in all patients. CDS was performed in 8/8 patients, with detection of hypoechogenic wall thickening related to LV-GCA in 7/8 patients. Subjects with FUO were significantly younger (60.9 vs. 69.3 years, p<0.01) and had a stronger humoral inflammatory response (CRP 12.6 vs. 7.1 mg/dl, p<0.01; ESR 110 vs. 71 mm/hour, p<0.01) compared with the other GCA patients. LV-GCA should be considered an important differential diagnosis in patients with FUO. In addition to 18-FDG-PET, which is known to be a valuable method in the diagnostic work-up of FUO, we recommend CDS of the supraaortal and femoropopliteal arteries for the initial diagnostic work-up.
2018-01-01
Background: Degenerative Cervical Myelopathy (DCM) is a syndrome of subacute cervical spinal cord compression due to spinal degeneration. Although DCM is thought to be common, many fundamental questions, such as the natural history and epidemiology of DCM, remain unknown. In order to answer these, access to a large cohort of patients with DCM is required. With its unrivalled and efficient reach, the Internet has become an attractive tool for medical research and may overcome these limitations in DCM. The most effective recruitment strategy, however, is unknown. Objective: To compare the efficacy of fee-based advertisement with alternative free recruitment strategies for a DCM Internet health survey. Methods: An Internet health survey (SurveyMonkey) accessed by a new DCM Internet platform (myelopathy.org) was created. Using multiple survey collectors and the website's Google Analytics, the efficacy of fee-based recruitment strategies (Google AdWords) and free alternatives (including Facebook, Twitter, and myelopathy.org) were compared. Results: Overall, 760 surveys (513 [68%] fully completed) were accessed, 305 (40%) from fee-based strategies and 455 (60%) from free alternatives. Accounting for researcher time, fee-based strategies were more expensive ($7.8 per response compared to $3.8 per response for free alternatives) and identified a less motivated audience (click-through rate of 5% compared to 57% using free alternatives) but were more time-efficient for the researcher (2 minutes per response compared to 16 minutes per response for free methods). Facebook was the most effective free strategy, providing 239 (31%) responses, where a single message to 4 existing communities yielded 133 (18%) responses within 7 days. Conclusions: The Internet can efficiently reach large numbers of patients. Free and fee-based recruitment strategies both have merits. Facebook communities are a rich resource for Internet researchers. PMID:29402760
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and thereby obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), the homotopy perturbation method (HPM), and the optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems. PMID:25811858
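The parameter-estimation step can be sketched with an evolutionary optimizer minimizing a residual-based fitness function over collocation points. Here scipy's differential evolution stands in for the paper's genetic algorithm, and a toy logistic-type NODE replaces the generalized Burgers'-Fisher equation:

```python
# Sketch of the parameter-estimation step: unknown coefficients of a trial
# travelling-wave profile are found by minimizing the squared NODE residual
# with an evolutionary optimizer (differential evolution stands in for the
# paper's genetic algorithm). Toy NODE u' = u(1 - u) for illustration.
import numpy as np
from scipy.optimize import differential_evolution

x = np.linspace(-5, 5, 200)

def fitness(params):
    a, b = params
    u = a / (1.0 + np.exp(b * x))        # trial travelling-wave profile
    du = np.gradient(u, x)
    residual = du - u * (1.0 - u)        # residual of the toy NODE
    return np.sum(residual**2)

result = differential_evolution(fitness, bounds=[(0.1, 3.0), (-3.0, 3.0)], seed=0)
print(result.x)   # ~ (1.0, -1.0): the exact logistic-wave parameters
```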
NASA Astrophysics Data System (ADS)
Unke, Oliver T.; Meuwly, Markus
2018-06-01
Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol-1 for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.
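The energy decomposition can be sketched in a few lines: the total energy is a sum of atomic contributions, each predicted from a local-environment descriptor by a small network. The descriptor and the random, untrained weights below are placeholders; the published model uses learned weights and far richer descriptors:

```python
# Minimal sketch of the decomposition: total energy is a sum of atomic
# contributions, each predicted from a local descriptor by a tiny network.
# Toy descriptor and random weights, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 1)), np.zeros(8)   # 1 feature -> 8 hidden units
W2 = rng.standard_normal(8)

def atomic_energy(descriptor):
    h = np.tanh(W1[:, 0] * descriptor + b1)
    return W2 @ h

def total_energy(positions):
    E = 0.0
    for i, ri in enumerate(positions):
        # toy local descriptor: inverse-distance sum over neighbours
        d = sum(1.0 / np.linalg.norm(ri - rj)
                for j, rj in enumerate(positions) if j != i)
        E += atomic_energy(d)            # locality makes this parallelizable
    return E

positions = rng.standard_normal((5, 3))  # 5 atoms in 3D
print(total_energy(positions))
```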
Yang, Ziheng; Zhu, Tianqi
2018-02-20
The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
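The polarized behavior is easy to reproduce in a toy setting: comparing two equally wrong models for coin-flip data, the posterior probability of one model swings toward 0 or 1 as the sample grows instead of settling at 1/2. The binomial setup is an assumed illustration, not the paper's phylogenetic analysis:

```python
# Toy illustration of the polarized-posterior effect: data come from a fair
# coin (p = 0.5), but the two candidate models (p = 0.45 and p = 0.55) are
# equally wrong; with equal priors, the posterior swings to an extreme as n grows.
import numpy as np

rng = np.random.default_rng(2)
for n in [100, 10_000, 1_000_000]:
    k = rng.binomial(n, 0.5)                      # heads count under the truth
    log_lik = lambda p: k*np.log(p) + (n-k)*np.log(1-p)
    l1, l2 = log_lik(0.45), log_lik(0.55)         # two equally wrong models
    post1 = 1.0 / (1.0 + np.exp(l2 - l1))         # posterior P(model 1)
    print(n, round(post1, 4))                     # drifts toward 0 or 1
```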
Procedures of determining organic trace compounds in municipal sewage sludge-a review.
Lindholm-Lehto, Petra C; Ahkola, Heidi S J; Knuutinen, Juha S
2017-02-01
Sewage sludge is the largest by-product generated during the wastewater treatment process. Since large amounts of sludge are produced, different ways of disposal have been introduced. One tempting option is to use it as fertilizer on agricultural fields due to its high content of inorganic nutrients. This, however, can be limited by the amount of trace contaminants in the sewage sludge, which contains not only a variety of microbiological pollutants and pathogens but also inorganic and organic contaminants. The bioavailability and the effects of trace contaminants on soil microorganisms are still largely unknown, as are their mixture effects. Therefore, there is a need to analyze the sludge to test its suitability before further use. In this article, a variety of sampling, pretreatment, extraction, and analysis methods are reviewed. Additionally, different organic trace compounds often found in sewage sludge and their methods of analysis are compiled. In addition to traditional Soxhlet extraction, the most common extraction methods for organic contaminants in sludge include ultrasonic extraction (USE), supercritical fluid extraction (SFE), microwave-assisted extraction (MAE), and pressurized liquid extraction (PLE), followed by instrumental analysis based on gas or liquid chromatography and mass spectrometry.
15. Photographic copy of photograph dated ca. 1929; Photographer unknown; ...
15. Photographic copy of photograph dated ca. 1929; Photographer unknown; Original in Rath collection at Grout Museum, Waterloo, Iowa; Filed under: Rath Packing Company, Box 4; THE RATH COMPLEX IN THE LATE 1920S; LOOKING WEST FROM 18TH STREET; LARGE BUILDING AT CENTER IS HOG KILL (BUILDING 40) - Rath Packing Company, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA
Spectrometry of the Earth using Neutrino Oscillations
Rott, C.; Taketa, A.; Bose, D.
2015-01-01
The unknown constituents of the interior of our home planet have provoked the human imagination and driven scientific exploration. We herein demonstrate that large neutrino detectors could be used in the near future to significantly improve our understanding of the Earth’s inner chemical composition. Neutrinos, which are naturally produced in the atmosphere, traverse the Earth and undergo oscillations that depend on the Earth’s electron density. The Earth’s chemical composition can be determined by combining observations from large neutrino detectors with seismic measurements of the Earth’s matter density. We present a method that will allow us to perform a measurement that can distinguish between composition models of the outer core. We show that the next-generation large-volume neutrino detectors can provide sufficient sensitivity to reject extreme cases of outer core composition. In the future, dedicated instruments could be capable of distinguishing between specific Earth composition models and thereby reshape our understanding of the inner Earth in previously unimagined ways. PMID:26489447
Aeromagnetic surveys in Afghanistan: An updated website for distribution of data
Shenwary, Ghulam Sakhi; Kohistany, Abdul Hakim; Hussain, Sardar; Ashan, Said; Mutty, Abdul Salam; Daud, Mohammad Ahmad; Wussow, Michael D.; Sweeney, Ronald E.; Phillips, Jeffrey D.; Lindsay, Charles R.; Kucks, Robert P.; Finn, Carol A.; Drenth, Benjamin J.; Anderson, Eric D.; Abraham, Jared D.; Liang, Robert T.; Jarvis, James L.; Gardner, Joan M.; Childers, Vicki A.; Ball, David C.; Brozena, John M.
2011-01-01
Because of its geologic setting, Afghanistan has the potential to contain substantial natural resources. Although valuable mineral deposits and petroleum resources have been identified, much of the country's potential remains unknown. Airborne geophysical surveys are a well-accepted and cost-effective method for obtaining information about the geological setting of an area without the need to be physically located on the ground. Owing to the current security situation and the large areas of the country that have not been evaluated by geophysical exploration methods, a regional airborne geophysical survey was proposed. Acting upon the request of the Islamic Republic of Afghanistan Ministry of Mines, the U.S. Geological Survey contracted with the Naval Research Laboratory to jointly conduct an airborne geophysical and remote sensing survey of Afghanistan.
A novel finite element analysis of three-dimensional circular crack
NASA Astrophysics Data System (ADS)
Ping, X. C.; Wang, C. G.; Cheng, L. P.
2018-06-01
A novel singular element containing part of the circular crack front is established to solve the singular stress fields of circular cracks using the numerical series eigensolutions of singular stress fields. The element is derived from the Hellinger-Reissner variational principle and can be directly incorporated into existing 3D brick elements. The singular stress fields are determined as system unknowns appearing as displacement nodal values. Numerical studies are conducted to demonstrate the simplicity of the proposed technique in handling fracture problems of circular cracks. Use of the novel singular element avoids mesh refinement near the crack-front domain without loss of accuracy or convergence speed. Compared with conventional finite element methods and existing analytical methods, the present method is more suitable for dealing with complicated structures with a large number of elements.
NASA Astrophysics Data System (ADS)
Qiang, Ji
2017-10-01
A three-dimensional (3D) Poisson solver with longitudinal periodic and transverse open boundary conditions can have important applications in the beam physics of particle accelerators. In this paper, we present a fast, efficient method to solve the Poisson equation using a spectral finite-difference method. This method uses a computational domain that contains only the charged particle beam and has a computational complexity of O(N_u log N_mode), where N_u is the total number of unknowns and N_mode is the maximum number of longitudinal or azimuthal modes. Compared with using an artificial boundary condition in a large extended computational domain, this saves both computational time and memory. The new 3D Poisson solver is parallelized using the message passing interface (MPI) on multi-processor computers and shows reasonable parallel performance up to hundreds of processor cores.
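The longitudinal spectral piece can be sketched in one dimension: on a periodic grid, each Fourier mode of the potential is obtained by dividing the corresponding mode of the charge density by k². The paper combines such a spectral treatment with a finite-difference handling of the open transverse directions, which is omitted in this illustration:

```python
# Sketch of the periodic spectral piece only: solve u'' = -rho on a periodic
# grid by FFT, dividing each Fourier mode by k^2. The transverse open-boundary
# finite-difference part of the paper's solver is omitted.
import numpy as np

n = 256
z = np.linspace(0, 2*np.pi, n, endpoint=False)
rho = np.sin(3*z)                         # periodic charge density

k = np.fft.fftfreq(n, d=2*np.pi/n) * 2*np.pi   # integer angular wavenumbers
rho_hat = np.fft.fft(rho)
u_hat = np.zeros_like(rho_hat)
nz = k != 0
u_hat[nz] = rho_hat[nz] / k[nz]**2        # from -k^2 u_hat = -rho_hat
u = np.fft.ifft(u_hat).real

print(np.max(np.abs(u - np.sin(3*z)/9)))  # ~0: exact solution is sin(3z)/9
```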
Input-output identification of controlled discrete manufacturing systems
NASA Astrophysics Data System (ADS)
Estrada-Vargas, Ana Paula; López-Mellado, Ernesto; Lesage, Jean-Jacques
2014-03-01
The automated construction of discrete event models from observations of an external system's behaviour is addressed. This problem, often referred to as system identification, allows obtaining models of ill-known (or even unknown) systems. In this article, an identification method for discrete event systems (DESs) controlled by a programmable logic controller is presented. The method can process a large quantity of observed long sequences of input/output signals generated by the controller and yields an interpreted Petri net model describing the closed-loop behaviour of the automated DES. The proposed technique allows the identification of actual complex systems because it is sufficiently efficient and well adapted to cope with both the technological characteristics of industrial controllers and data collection requirements. Based on polynomial-time algorithms, the method is implemented as an efficient software tool which constructs and draws the model automatically; an overview of this tool is given through a case study dealing with an automated manufacturing system.
Do Our Means of Inquiry Match our Intentions?
Petscher, Yaacov
2016-01-01
A key stage of the scientific method is the analysis of data, yet despite the variety of methods available to researchers, analyses are most frequently distilled to a model that focuses on the average relation between variables. Although research questions are frequently conceived with broad inquiry in mind, most regression methods are limited in comprehensively evaluating how observed behaviors are related to each other. Quantile regression is a largely unknown yet well-suited analytic technique, similar to traditional regression analysis but allowing a more systematic approach to understanding complex associations among observed phenomena in the psychological sciences. Data from the National Education Longitudinal Study of 1988/2000 are used to illustrate how quantile regression overcomes the limitation of average associations in linear regression by showing that psychological well-being and sex each relate differentially to reading achievement depending on one's level of reading achievement. PMID:27486410
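A minimal sketch with statsmodels shows the point: fitting the 10th, 50th and 90th conditional percentiles of a heteroscedastic outcome yields slopes that differ across the distribution, which a single least-squares fit would average away. The data are synthetic:

```python
# Sketch of quantile regression: fit the conditional 10th, 50th and 90th
# percentiles of y given x, exposing associations that differ across the
# outcome distribution. Synthetic heteroscedastic data for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5*x + (0.2 + 0.3*x) * rng.standard_normal(500)
df = pd.DataFrame({"x": x, "y": y})

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("y ~ x", df).fit(q=q)
    print(q, fit.params["x"].round(3))   # slope varies with the quantile
```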
Teaching history-taking: where are we?
Nardone, D. A.; Reuler, J. B.; Girard, D. E.
1980-01-01
Knowledge in history-taking has increased rapidly over the last twenty years. Currently the principles to be taught include "conduct," "content," and "diagnostic reasoning." However, inattentiveness of medical schools, reluctance of busy faculty to be involved, and increasing enrollments have resulted in difficulties in teaching these skills. Studies have shown a beneficial short-term effect of teaching these materials on interview performance but it is unknown whether this effect is long-lasting. The methods for instruction include the bedside and videotape models utilizing the concept of the fifteen-minute interview technique, programmed instruction, patient instructors, and direct student feedback. Future research should focus on identifying strategies in diagnostic reasoning, developing graduated competency criteria for trainees at different levels of their education, refining methods to evaluate large numbers of students, measuring outcomes of effective training such as compliance, and comparing costs and effectiveness of various methods. In addition, there remains the need to establish an association of course directors. PMID:7405275
Effectiveness and value of massage skills training during pre-registration nurse education.
Cook, Neal F; Robinson, Jacqueline
2006-10-01
The integration of complementary and alternative medicine (CAM) interventions into healthcare practice is becoming more popular, and such interventions are frequently accessed by patients. Various disciplines have integrated CAM techniques education into the preparation of their practitioners in response, but this varies widely, as does its success. Students' experiences of such education at the pre-registration stage are largely unknown in the UK, and methods by which to successfully achieve effective learning in this arena are largely unreported in the literature. This study had three specific aims: to examine the perspectives of pre-registration nursing students on being taught massage skills during pre-registration nurse education; to identify the learning and development that occurs during massage skills training; and to identify methods of enhancing the provision of such skills training and its experience. This paper demonstrates the value of integrating complementary therapies into nurse education, developing the holistic approach of student nurses and their concept of caring. In addition, it contributes significantly to the knowledge base on the effectiveness and value of CAM education in nurse preparation, highlighting the high value students place on CAM education and demonstrating notable development in the preparation of holistic practitioners. The method utilised also yielded ways to improve the delivery of such education, and demonstrates how creative teaching methods can motivate and enhance effective learning.
Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.
2013-01-01
Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, the shot-receiver geometry, and the frequency content and added noise of the data. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and of the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever-increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this is not possible, the two optimal methods were found to perform adequately.
Ramos, Caroline L.; Fonseca, Fernanda L.; Rodrigues, Jessica; Guimarães, Allan J.; Cinelli, Leonardo P.; Miranda, Kildare; Nimrichter, Leonardo; Casadevall, Arturo; Travassos, Luiz R.
2012-01-01
In prior studies, we demonstrated that glucuronoxylomannan (GXM), the major capsular polysaccharide of the fungal pathogen Cryptococcus neoformans, interacts with chitin oligomers at the cell wall-capsule interface. The structural determinants regulating these carbohydrate-carbohydrate interactions, as well as the functions of these structures, have remained unknown. In this study, we demonstrate that glycan complexes composed of chitooligomers and GXM are formed during fungal growth and macrophage infection by C. neoformans. To investigate the required determinants for the assembly of chitin-GXM complexes, we developed a quantitative scanning electron microscopy-based method using different polysaccharide samples as inhibitors of the interaction of chitin with GXM. This assay revealed that chitin-GXM association involves noncovalent bonds and large GXM fibers and depends on the N-acetyl amino group of chitin. Carboxyl and O-acetyl groups of GXM are not required for polysaccharide-polysaccharide interactions. Glycan complex structures composed of cryptococcal GXM and chitin-derived oligomers were tested for their ability to induce pulmonary cytokines in mice. They were significantly more efficient than either GXM or chitin oligomers alone in inducing the production of lung interleukin 10 (IL-10), IL-17, and tumor necrosis factor alpha (TNF-α). These results indicate that association of chitin-derived structures with GXM through their N-acetyl amino groups generates glycan complexes with previously unknown properties. PMID:22562469
Wire connector classification with machine vision and a novel hybrid SVM
NASA Astrophysics Data System (ADS)
Chauhan, Vedang; Joshi, Keyur D.; Surgenor, Brian W.
2018-04-01
A machine vision-based system has been developed and tested that uses a novel hybrid Support Vector Machine (SVM) in a part inspection application with clear plastic wire connectors. The application required the system to differentiate between 4 different known styles of connectors plus one unknown style, for a total of 5 classes. The requirement to handle an unknown class is what necessitated the hybrid approach. The system was trained with the 4 known classes and tested with 5 classes (the 4 known plus the 1 unknown). The hybrid classification approach used two layers of SVMs: one layer was semi-supervised and the other layer was supervised. The semi-supervised SVM was a special case of unsupervised machine learning that classified test images as one of the 4 known classes (to accept) or as the unknown class (to reject). The supervised SVM classified test images as one of the 4 known classes and consequently would give false positives (FPs). Two methods were tested. The difference between the methods was that the order of the layers was switched. The method with the semi-supervised layer first gave an accuracy of 80% with 20% FPs. The method with the supervised layer first gave an accuracy of 98% with 0% FPs. Further work is being conducted to see if the hybrid approach works with other applications that have an unknown class requirement.
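The two-layer idea can be sketched with scikit-learn, using a one-class SVM as the accept/reject gate in front of a supervised multiclass SVM. The synthetic features and the specific estimators below are assumptions for illustration, not the authors' vision pipeline:

```python
# Sketch of the two-layer hybrid: a OneClassSVM first accepts or rejects a
# sample as belonging to the known styles, then a supervised SVC assigns one
# of the 4 known classes to accepted samples. Synthetic features only.
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
X_known = rng.standard_normal((400, 5)) + np.repeat(np.arange(4), 100)[:, None]
y_known = np.repeat(np.arange(4), 100)

gate = OneClassSVM(gamma="scale", nu=0.05).fit(X_known)   # novelty gate
clf = SVC(kernel="rbf").fit(X_known, y_known)             # 4-class classifier

def classify(x):
    if gate.predict(x.reshape(1, -1))[0] == -1:
        return "unknown"                  # rejected by the semi-supervised layer
    return int(clf.predict(x.reshape(1, -1))[0])

print(classify(X_known[0]), classify(rng.standard_normal(5) + 40))
```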
Stakia, Paraskevi; Lagos, Panagiotis; Gourgiotis, Stavros; Tzilalis, Vasilios D; Aloizos, Stavros; Salemis, Nikolaos S
2009-01-01
Cancers of unknown primary site (CUPs) constitute a clinical entity which accounts for 3-5% of all solid tumor patients. They are metastatic solid tumors whose fundamental characteristic is the absence of an identifiable site of the primary tumor. We report the case of a completely asymptomatic 34-year-old man with a huge palpable mass found incidentally in the left abdomen. All investigations were normal. During the operation, a large mass was identified 2 cm below the left renal artery, displacing and encompassing the great retroperitoneal vessels and the left ureter. A complete resection of the mass was performed, and histological examination revealed a solitary retroperitoneal lymph node categorized as metastatic adenocarcinoma of unknown primary site. It is essential to recognize the high incidence of patients with cancer who present with CUP. Early surgical excision of the metastatic lesion followed by adjuvant combination chemotherapy should be considered for patients with only a single site of malignancy.
A different approach to estimate nonlinear regression model using numerical methods
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent or steepest ascent algorithm, the method of scoring, and the method of quadratic hill-climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a quite different way. Principles of matrix calculus are used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, discusses an analytical approach to the gradient algorithm methods in a different way. The paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
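A one-parameter Gauss-Newton iteration makes the update rule concrete: with residuals r and Jacobian J at the current estimate, each step solves the linearized least-squares problem J Δb ≈ r. The exponential model and data below are a toy assumption, not the paper's derivation:

```python
# Sketch of Gauss-Newton for nonlinear regression y = exp(b*x) + noise:
# iterate b <- b + (J'J)^{-1} J' r with residuals r and Jacobian J
# evaluated at the current estimate. Toy one-parameter case.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.exp(0.7 * x) + 0.01 * rng.standard_normal(50)

b = 0.1                                   # starting value
for _ in range(10):
    f = np.exp(b * x)
    r = y - f                             # residuals
    J = (x * f)[:, None]                  # Jacobian d f / d b
    b += float(np.linalg.lstsq(J, r, rcond=None)[0])  # solve J db = r
print(b)                                  # ~0.7
```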
[Fast discrimination of edible vegetable oil based on Raman spectroscopy].
Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng
2012-07-01
A novel method for fast discrimination of edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils of known classes. Based on their original Raman spectra, baseline correction and normalization were applied to obtain standard spectra. Two characteristic peaks describing the degree of unsaturation of a vegetable oil were selected as feature vectors, and the centers of all classes were then calculated. For an edible vegetable oil of unknown class, the same pretreatment and feature extraction methods are used. The Euclidean distances between the feature vector of the unknown sample and the center of each class are calculated, and the class of the unknown sample is finally determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clusters of each class were more distinct and the between-class distances much larger with the new feature extraction method than with PCA. The above classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
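The classification rule reduces to nearest-centroid matching in the two-peak feature space, as the following sketch with synthetic feature values shows (the class names and numbers are illustrative, not the paper's data):

```python
# Sketch of the minimum-distance rule: each oil class is reduced to the mean
# of its two-peak feature vectors, and an unknown sample takes the class of
# the nearest centroid. Synthetic feature values for illustration.
import numpy as np

features = {                    # (peak1, peak2) features per training sample
    "olive":     np.array([[0.82, 0.31], [0.80, 0.33], [0.83, 0.30]]),
    "sunflower": np.array([[0.55, 0.62], [0.57, 0.60], [0.54, 0.63]]),
}
centroids = {k: v.mean(axis=0) for k, v in features.items()}

def classify(sample):
    return min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))

print(classify(np.array([0.56, 0.61])))   # -> "sunflower"
```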
Umesh, Achary; Gowda, Guru S; Kumar, Channaveerachari Naveen; Srinivas, Dwarakanath; Dawn, Bharath Rose; Botta, Ragasudha; Yadav, Ravi; Math, Suresh Bada
2017-01-01
Objectives: A large number of unknown patients without any personal, family, or other identification details represent a unique problem in the neurological emergency services of developing countries like India, in a context of legal, humanitarian, and treatment issues. These patients pose a diagnostic and management challenge to treating physicians and staff, and there are sparse data on them. The objective of this study was to describe the clinical, sociodemographic, and investigational profile of "unknown" patients. Materials and Methods: We performed a retrospective chart review of all "unknown" patients admitted under the Neurology Emergency Service at a tertiary care neuropsychiatry center in a South Indian metropolitan city from January 2002 to December 2011. Clinical and sociodemographic characteristics and clinical outcomes of the sample were analyzed. Results: A total of 151 unknown patients were admitted during the 10 years. Of these, 134 (88.7%) were males with a mean age of 43.8 ± 14.8 years, and 95 (63%) were aged >40 years. Among them, 147 (97.4%) were from the urban vicinity, 126 (83.6%) were brought by police, and 75 (49.7%) were registered as medico-legal cases. Only 3 (2%) patients had normal sensorium, whereas 101 (66.9%) presented with loss of consciousness. Forty-one (27.2%) unknown patients had a seizure disorder, 37 (24.5%) had metabolic encephalopathy, 26 (17.2%) had a stroke, 9 (6%) had neuro-infection, and 17 (11.3%) had a head injury. Deranged liver functions were seen in 65 (43%), renal derangement in 37 (24.5%), dyselectrolytemia in 42 (27.8%), and abnormal brain imaging findings in 95 (62.9%) patients. There were 14 (9.3%) deaths. Conclusions: Our findings demonstrate that seizures, metabolic causes, and neuro-infections were the primary reasons for admission of unknown patients to the neuro-emergency service. These data from a novel Indian study show the common causes of admission of unknown patients in neurology; this pattern can be useful to guide the approach of healthcare providers in India. PMID:28615894
Zhong, Zhixiong; Zhu, Yanzheng; Ahn, Choon Ki
2018-07-01
In this paper, we address the problem of reachable set estimation for continuous-time Takagi-Sugeno (T-S) fuzzy systems subject to unknown output delays. Based on the reachable set concept, a new controller design method is also discussed for such systems. An effective method is developed to attenuate the negative impact of the unknown output delays, which would otherwise likely degrade the performance or stability of the system. First, an augmented fuzzy observer is proposed to enable synchronous estimation of the system state and the disturbance term owing to the unknown output delays, which ensures that the reachable set of the estimation error is bounded via the intersection operation of ellipsoids. Then, a compensation technique is employed to eliminate the influence of the unknown output delays on the system performance. Finally, the effectiveness and correctness of the obtained theory are verified on the tracking control of autonomous underwater vehicles.
Visualizing frequent patterns in large multivariate time series
NASA Astrophysics Data System (ADS)
Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.
2011-01-01
The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.
Accuracy of a simplified method for shielded gamma-ray skyshine sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassett, M.S.; Shultis, J.K.
1989-11-01
Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
Wegener, Michael; Huber, Florian; Bolli, Christoph; Jenne, Carsten; Kirsch, Stefan F
2015-01-12
Phosphane and N-heterocyclic carbene ligated gold(I) chlorides can be effectively activated by Na[Me3NB12Cl11] (1) under silver-free conditions. This activation method with a weakly coordinating closo-dodecaborate anion was shown to be suitable for a large variety of reactions known to be catalyzed by homogeneous gold species, ranging from carbocyclizations to heterocyclizations. Additionally, the capability of 1 in a previously unknown conversion of 5-silyloxy-1,6-allenynes was demonstrated. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An introduction to instrumental variables analysis: part 1.
Bennett, Derrick A
2010-01-01
There are several examples in the medical literature where the associations of treatment effects predicted by observational studies have been refuted by evidence from subsequent large-scale randomised trials. This is because non-experimental studies are subject to confounding, and confounding cannot be entirely eliminated even if all known confounders have been measured in the study, as there may be unknown confounders. The aim of this 2-part methodological primer is to introduce an emerging methodology for estimating treatment effects using observational data in the absence of good randomised evidence, known as the method of instrumental variables. Copyright © 2010 S. Karger AG, Basel.
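A minimal numerical sketch of the core instrumental-variables estimator (two-stage least squares in its single-instrument Wald form) may make the idea concrete; the data-generating values and variable names below are illustrative, not taken from the primer itself:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Simulated data with an unmeasured confounder u that biases the
    # naive regression of outcome y on treatment x.
    u = rng.normal(size=n)                        # unknown confounder
    z = rng.normal(size=n)                        # instrument: affects x, not y directly
    x = 0.8 * z + u + rng.normal(size=n)          # treatment
    y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # true causal effect of x is 2.0

    # Naive OLS estimate of the effect (biased upward by u).
    beta_ols = np.polyfit(x, y, 1)[0]

    # Instrumental-variables (Wald) estimate: cov(z, y) / cov(z, x).
    beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

    print(f"OLS: {beta_ols:.2f} (biased), IV: {beta_iv:.2f} (close to 2.0)")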
Polymer separations by liquid interaction chromatography: principles - prospects - limitations.
Radke, Wolfgang
2014-03-28
Most heterogeneities of polymers with respect to different structural features cannot be resolved by size exclusion chromatography (SEC) alone, the most frequently applied mode of polymer chromatography. Instead, methods of interaction chromatography have become increasingly important. However, despite these increasing applications, the principles and potential of polymer interaction chromatography are still often unknown to a large number of polymer scientists. The present review explains the principles of the different modes of polymer chromatography. Based on selected examples, it is shown which separation techniques can be successfully applied for separations with respect to the different structural features of polymers. Copyright © 2013 Elsevier B.V. All rights reserved.
Parallel human genome analysis: microarray-based expression monitoring of 1000 genes.
Schena, M; Shalon, D; Heller, R; Chai, A; Brown, P O; Davis, R W
1996-01-01
Microarrays containing 1046 human cDNAs of unknown sequence were printed on glass with high-speed robotics. These 1.0-cm² DNA "chips" were used to quantitatively monitor differential expression of the cognate human genes using a highly sensitive two-color hybridization assay. Array elements that displayed differential expression patterns under given experimental conditions were characterized by sequencing. The identification of known and novel heat shock and phorbol ester-regulated genes in human T cells demonstrates the sensitivity of the assay. Parallel gene analysis with microarrays provides a rapid and efficient method for large-scale human gene discovery. PMID:8855227
Music therapy as a non-pharmacological treatment for epilepsy.
Liao, Huan; Jiang, Guohui; Wang, Xuefeng
2015-01-01
Epilepsy is one of the most common neurological diseases. Currently, the primary methods of treatment include pharmacological and surgical treatment. However, approximately one-third of patients exhibit refractory epilepsy, so a novel approach to epilepsy treatment is necessary. Several studies have confirmed that music therapy can be effective at reducing seizures and epileptiform discharges, providing a new option for clinicians in the treatment of epilepsy. Although the underlying mechanism of music therapy is unknown, it may be related to resonance, mirror neurons, dopamine pathways, and parasympathetic activation. Large-sample, multicenter, randomized, double-blind, and more effectively designed studies are needed in future music therapy research.
Timoshenko, J.; Shivhare, A.; Scott, R. W.; ...
2016-06-30
We adopted ab initio X-ray absorption near edge structure (XANES) modelling for structural refinement of local environments around metal impurities in a large variety of materials. Our method enables both direct modelling, where the candidate structures are known, and inverse modelling, where unknown structural motifs are deciphered from the experimental spectra. We also present estimates of systematic errors and their influence on the stability and accuracy of the obtained results. We illustrate our approach by following the evolution of the local environment of palladium atoms in palladium-doped gold thiolate clusters upon chemical and thermal treatments.
Culture care theory: a major contribution to advance transcultural nursing knowledge and practices.
Leininger, Madeleine
2002-07-01
This article is focused on the major features of the Culture Care Diversity and Universality theory as a central contributing theory to advance transcultural nursing knowledge and to use the findings in teaching, research, practice, and consultation. It remains one of the oldest, most holistic, and most comprehensive theories to generate knowledge of diverse and similar cultures worldwide. The theory has been a powerful means to discover largely unknown knowledge in nursing and the health fields. It provides a new mode to assure culturally competent, safe, and congruent transcultural nursing care. The purpose, goal, assumptive premises, ethnonursing research method, criteria, and some findings are highlighted.
Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers
NASA Technical Reports Server (NTRS)
Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
A method of minimizing numerical errors and improving nonlinear stability and accuracy associated with low Mach number computational aeroacoustics (CAA) is proposed. The method operates on two levels. At the governing-equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity of acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form, with the new unknowns being the small changes of the conservative variables with respect to their large stagnation values. At the numerical-scheme level, a stable sixth-order central interior scheme with third-order boundary schemes that satisfies the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (summation-by-parts property) is employed.
Dalton, Jane; Booth, Andrew; Noyes, Jane; Sowden, Amanda J
2017-08-01
Systematic reviews of quantitative evidence are well established in health and social care. Systematic reviews of qualitative evidence are increasingly available, but volume, topics covered, methods used, and reporting quality are largely unknown. We provide a descriptive overview of systematic reviews of qualitative evidence assessing health and social care interventions included on the Database of Abstracts of Reviews of Effects (DARE). We searched DARE for reviews published between January 1, 2009, and December 31, 2014. We extracted data on review content and methods, summarized narratively, and explored patterns over time. We identified 145 systematic reviews conducted worldwide (64 in the UK). Interventions varied but largely covered treatment or service delivery in community and hospital settings. There were no discernible patterns over time. Critical appraisal of primary studies was conducted routinely. Most reviews were poorly reported. Potential exists to use systematic reviews of qualitative evidence when driving forward user-centered health and social care. We identify where more research is needed and propose ways to improve review methodology and reporting. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and the German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth gravitational products have found the widest possible multidisciplinary applications in the Earth sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the condition of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is thus groundless, mathematically and physically. Given Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive globally uniformly convergent solutions to Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models. Since the solutions are globally uniformly convergent, they are, theoretically speaking, able to extract the smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the globally uniformly convergent solutions to Newton's governing differential equations as a condition adjustment model with unknown parameters or, equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
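The paper's measurement-based perturbation theory is not reproduced here, but the underlying estimation problem it describes (jointly estimating unknown equation parameters and unknown initial conditions of a nonlinear ODE from noisy tracking data) can be illustrated with a toy two-body problem; the model, noise level, and starting guesses below are assumptions made purely for illustration:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Toy analogue of orbit determination: estimate an unknown force
    # parameter mu AND the unknown initial state of planar two-body
    # motion x'' = -mu * x / |x|^3 from noisy position measurements.
    def rhs(t, s, mu):
        x, y, vx, vy = s
        r3 = (x * x + y * y) ** 1.5
        return [vx, vy, -mu * x / r3, -mu * y / r3]

    t_obs = np.linspace(0.0, 5.0, 60)
    true_mu, true_s0 = 1.0, [1.0, 0.0, 0.0, 1.1]
    truth = solve_ivp(rhs, (0, 5), true_s0, t_eval=t_obs, args=(true_mu,), rtol=1e-9)
    rng = np.random.default_rng(1)
    obs = truth.y[:2] + 1e-3 * rng.normal(size=truth.y[:2].shape)  # noisy x, y

    def residuals(p):
        mu, s0 = p[0], p[1:]
        sol = solve_ivp(rhs, (0, 5), s0, t_eval=t_obs, args=(mu,), rtol=1e-9)
        return (sol.y[:2] - obs).ravel()

    # Both the equation parameter and the initial conditions are treated
    # as unknowns, echoing the paper's point that they must be estimated
    # jointly rather than fixing the initial partial derivatives to zero.
    fit = least_squares(residuals, x0=[0.9, 1.05, 0.05, -0.05, 1.0])
    print("estimated mu and initial state:", fit.x.round(4))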
Yago, Kazuhiro; Yanagita, Soshi; Aono, Maki; Matsuo, Ken; Shimada, Hideto
2009-06-01
A 76-year-old man presented with fever of unknown origin and renal dysfunction. Laboratory examination revealed anemia, thrombocytopenia, hypoalbuminemia, proteinuria, and elevations of C-reactive protein, lactic dehydrogenase, creatinine and ferritin. (18)F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) imaging showed FDG accumulation in the renal cortex and spleen. Based on the imaging study, renal biopsy was performed and histological diagnosis of intravascular large B-cell lymphoma (IVLBCL) was made. Renal impairment due to IVLBCL is uncommon and is often difficult to diagnose early. FDG-PET/CT may be a useful tool for the early diagnosis of IVLBCL.
Self-expressive Dictionary Learning for Dynamic 3D Reconstruction.
Zheng, Enliang; Ji, Dinghuang; Dunn, Enrique; Frahm, Jan-Michael
2017-08-22
We target the problem of sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our proposed compressed sensing framework poses the estimation of 3D structure as a problem of dictionary learning, where the dictionary is defined as an aggregation of the temporally varying 3D structures. Given the smooth motion of dynamic objects, we observe that any element in the dictionary can be well approximated by a sparse linear combination of other elements in the same dictionary (i.e., self-expression). Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams and motion smoothness across estimates from common video sources. We further analyze the reconstructability of our approach under different capture scenarios, and its comparison and relation to existing methods. Experimental results on large amounts of synthetic data as well as real imagery demonstrate the effectiveness of our approach.
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is (1) piecewise constant or (2) continuous with a bounded rate. We also consider the application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficient condition for an average dwell time result is presented using a complete-type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well-characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete-type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying delay. This extension ensures stability robustness to time delay in the control design for all values of time delay less than the known upper bound. Model transformation is used to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented, together with a comparison with the corresponding constant and piecewise constant delay cases. Numerical simulations are performed to illustrate the theoretical results of this paper.
Hu, Ligang; Cai, Yong; Jiang, Guibin
2016-01-01
Laboratory experiments suggest that polymeric Cr(III) can exist in aqueous solution for a relatively long period of time. However, the occurrence of polymeric Cr(III) has not been reported in environmental media, due partly to the lack of methods for speciating polymeric Cr. We observed an unknown Cr species during the course of a study on speciation of Cr in the leachates of chromated-copper-arsenate (CCA)-treated wood. Efforts were made to identify the structure of the unknown Cr species. Considering the forms of Cr present in CCA-treated wood, we mainly focused on determining whether the unknown species was polymeric Cr(III), a complex of Cr/As, or a complex of Cr with dissolved organic matter (DOM). To evaluate whether polymeric Cr(III) is prevalent in wood leachates, high performance liquid chromatography coupled with inductively coupled plasma mass spectrometry (HPLC-ICPMS) was used for simultaneous speciation of monomeric Cr(III), polymeric Cr(III), and Cr(VI). In addition to wood leachates, where polymeric Cr(III) ranged from 39.1 to 67.4%, the occurrence of the unknown Cr species in other environmental matrices, including surface waters, tap and waste waters, was also investigated. It was found that polymeric Cr(III) can exist in environmental samples containing μg/L levels of Cr, at up to 60% of total Cr, suggesting that polymeric Cr(III) may be present to a significant extent in natural environments. Failure to quantify polymeric Cr(III) would lead to underestimation of total Cr and bias in Cr speciation. The environmental implications of the presence of polymeric Cr(III) species in the environment deserve further study. PMID:27156211
ERIC Educational Resources Information Center
Wang, Lijuan; McArdle, John J.
2008-01-01
The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…
NASA Astrophysics Data System (ADS)
Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.
2016-07-01
Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10⁶ or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
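The "Jacobian-free" ingredient of PCGA-style inversion can be sketched as follows: the action of the Jacobian on a vector is approximated with one extra forward-model run, and only the leading principal components of the prior covariance need such products. The toy forward model and covariance below are placeholders, not the paper's sandbox experiment:

    import numpy as np

    # Jacobian-free matrix-vector product at the heart of PCGA-style
    # inversion: J @ v is approximated by one extra forward run, so no
    # full Jacobian is ever formed. forward() is a stand-in model here.
    def forward(s):
        # toy nonlinear forward model mapping parameters to observations
        A = np.arange(1.0, 13.0).reshape(4, 3)
        return np.tanh(A @ s)

    def jacobian_action(forward, s, v, delta=1e-6):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(forward(s))
        eps = delta / nv
        return (forward(s + eps * v) - forward(s)) / eps

    s = np.array([0.1, -0.2, 0.3])
    # PCGA needs J applied to a handful of principal components of the
    # prior covariance (plus the current estimate), not to every unknown.
    Q = np.diag([1.0, 0.5, 0.25])               # toy prior covariance
    w, V = np.linalg.eigh(Q)
    pcs = V[:, np.argsort(w)[::-1][:2]]         # leading 2 principal components
    actions = [jacobian_action(forward, s, pcs[:, k]) for k in range(2)]
    print(np.column_stack(actions))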
Multi-Target State Extraction for the SMC-PHD Filter
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-01-01
The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large number of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via a clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun
2017-03-01
H∞ control is a powerful method for solving the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics are mostly unknown, and identification of the dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and the value function are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility for solving the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.
Shang, Fengjun; Jiang, Yi; Xiong, Anping; Su, Wen; He, Li
2016-11-18
With the integrated development of the Internet, wireless sensor technology, cloud computing, and the mobile Internet, there has been a lot of attention given to research about, and applications of, the Internet of Things. A wireless sensor network (WSN) is one of the important information technologies in the Internet of Things; it integrates multiple technologies to detect and gather information in a network environment by mutual cooperation, using a variety of methods to process and analyze data, implement awareness, and perform tests. This paper mainly researches the localization algorithm of sensor nodes in a wireless sensor network. Firstly, a multi-granularity region partition is proposed to divide the location region. In the range-based method, the received signal strength indicator (RSSI) is used to estimate distance, and the optimal RSSI value is computed by the Gaussian fitting method. Furthermore, a Voronoi diagram is used to divide the region: each anchor node is regarded as the center of a region, the whole positioning region is divided into several regions, the sub-regions of neighboring nodes are combined into triangles, and the unknown node is locked into the resulting area. Secondly, the multi-granularity regional division and the Lagrange multiplier method are used to calculate the final coordinates. Because nodes are influenced by many factors in practical applications, two kinds of positioning methods are designed. When the unknown node is inside the positioning unit, we use the method of vector similarity and then the centroid algorithm to calculate the ultimate coordinates of the unknown node, as sketched below. When the unknown node is outside the positioning unit, we establish a Lagrange equation containing the constraint condition to calculate the initial coordinates, and then use the Taylor expansion formula to correct the coordinates of the unknown node. In addition, this localization method has been validated in a real environment.
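As a rough sketch of the range-based step and the centroid step described above, the following assumes a log-distance path-loss model with illustrative calibration constants; the paper's Gaussian-fitted RSSI, Voronoi partition, and Lagrange/Taylor refinement are not reproduced:

    import numpy as np

    # Log-distance path-loss model: rssi(d) = rssi0 - 10 * n * log10(d / d0).
    # Inverting it gives a range estimate from a measured RSSI value.
    RSSI0, N_EXP, D0 = -40.0, 2.7, 1.0   # assumed calibration constants

    def rssi_to_distance(rssi):
        return D0 * 10 ** ((RSSI0 - rssi) / (10.0 * N_EXP))

    # Anchors with known coordinates and the RSSI each one measured
    # from the unknown node.
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    rssi = np.array([-62.0, -55.0, -58.0, -66.0])
    d = rssi_to_distance(rssi)

    # Weighted centroid: anchors that look closer get more weight,
    # a common final step when the unknown node lies inside the unit.
    w = 1.0 / d
    estimate = (anchors * w[:, None]).sum(axis=0) / w.sum()
    print("estimated position:", estimate.round(2))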
Supervised Detection of Anomalous Light Curves in Massive Astronomical Catalogs
NASA Astrophysics Data System (ADS)
Nun, Isadora; Pichara, Karim; Protopapas, Pavlos; Kim, Dae-Won
2014-09-01
The development of synoptic sky surveys has led to a massive amount of data for which the resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all the information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered an outlier insofar as it has a low joint probability. By leaving out one of the classes in the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables, and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to perform a deeper analysis.
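A compressed sketch of the voting-based outlier idea follows; for simplicity it replaces the Bayesian network over vote distributions with a kernel density estimate, and uses synthetic features rather than light-curve features, so it illustrates the mechanism rather than the authors' pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KernelDensity
    from sklearn.datasets import make_classification

    # Train a random forest on "known" variability classes, then score new
    # objects by how probable their vote vector is under the training
    # distribution. A kernel density estimate stands in here for the
    # Bayesian network the paper uses to model the joint voting distribution.
    X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                               n_classes=3, n_clusters_per_class=1, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    votes_train = rf.predict_proba(X)               # per-class vote fractions
    kde = KernelDensity(bandwidth=0.1).fit(votes_train)

    X_new = np.vstack([X[:3], np.full((1, 8), 6.0)])  # 3 normal + 1 odd object
    log_p = kde.score_samples(rf.predict_proba(X_new))

    # A low joint probability of the vote vector marks an outlier candidate.
    threshold = np.percentile(kde.score_samples(votes_train), 1)
    print("log-probabilities:", log_p.round(1))
    print("outlier flags:", log_p < threshold)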
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions), as well as the uncertainty in the number of major pollution sources and in the identifiability conditions, has been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models, while simultaneously accounting for parameter uncertainty in estimated source-specific exposures when assessing source-specific health effects, is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions, along with their uncertainties and associated health effects estimates, but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from previously conducted workshops/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of the health effects parameters the uncertainty in estimated source contributions that the previous studies ignored. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Wright, T. J.
2006-12-01
We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When the fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining a slip distribution is to first determine the fault geometry by minimizing the squared misfit under the assumption of uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform-slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault; in this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to balance the competing requirements of model resolution and estimation error in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of the observed data to the smoothness constraints is objectively determined. In this study, by using ABIC in determining the optimal dip angle, we resolve the non-linearity of the inverse problem. We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than previously reported.
Self-guided method to search maximal Bell violations for unknown quantum states
NASA Astrophysics Data System (ADS)
Yang, Li-Kai; Chen, Geng; Zhang, Wen-Hao; Peng, Xing-Xiang; Yu, Shang; Ye, Xiang-Jun; Li, Chuan-Feng; Guo, Guang-Can
2017-11-01
In recent decades, a great variety of research and applications concerning Bell nonlocality have been developed with the advent of quantum information science. Given that Bell nonlocality can be revealed by the violation of a family of Bell inequalities, finding the maximal Bell violation (MBV) for unknown quantum states becomes an important and inevitable task during Bell experiments. In this paper we introduce a self-guided method to find MBVs for unknown states using a stochastic gradient ascent algorithm (SGA), by parametrizing the corresponding Bell operators. For the three investigated systems (two qubit, three qubit, and two qutrit), this method can ascertain the MBV of general two-setting inequalities within 100 iterations. Furthermore, we prove SGA is also feasible when facing more complex Bell scenarios, e.g., d-setting, d-outcome Bell inequalities. Moreover, compared to other possible methods, SGA exhibits significant superiority in efficiency, robustness, and versatility.
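A small self-contained sketch of gradient ascent on a CHSH value illustrates the flavor of the approach; it restricts measurements to the Z-X plane, uses a finite-difference gradient with a little exploration noise in place of the paper's SGA details, and takes the two-qubit singlet as the "unknown" state:

    import numpy as np

    # Maximal-Bell-violation search by gradient ascent over measurement
    # angles, with each observable A(t) = cos(t) Z + sin(t) X restricted
    # to the Z-X plane. A small decaying noise term stands in for the
    # paper's stochastic gradient ascent and helps leave saddle points.
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)

    def obs(t):
        return np.cos(t) * Z + np.sin(t) * X

    def chsh(angles, rho):
        a1, a2, b1, b2 = angles
        E = lambda a, b: np.real(np.trace(rho @ np.kron(obs(a), obs(b))))
        return E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

    # "Unknown" state for the demo: the singlet, whose MBV is 2*sqrt(2).
    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())

    rng = np.random.default_rng(0)
    angles, lr, h = rng.uniform(0, np.pi, 4), 0.1, 1e-5
    for k in range(400):
        grad = np.array([(chsh(angles + h * e, rho) - chsh(angles - h * e, rho)) / (2 * h)
                         for e in np.eye(4)])
        angles += lr * grad + rng.normal(scale=0.05 / (1 + k), size=4)

    print(f"found MBV {chsh(angles, rho):.4f} vs 2*sqrt(2) = {2 * np.sqrt(2):.4f}")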
Bick, Christian; Kolodziejski, Christoph; Timme, Marc
2014-09-01
Predictive feedback control is an easy-to-implement method to stabilize unknown unstable periodic orbits in chaotic dynamical systems. It is severely limited, however, because asymptotic convergence speed decreases with stronger instabilities, which in turn are typical for larger target periods, rendering it harder to effectively stabilize periodic orbits of large period. Here, we study stalled chaos control, where the application of control is stalled to make use of the chaotic, uncontrolled dynamics, and introduce an adaptation paradigm to overcome this limitation and speed up convergence. This modified control scheme is not only capable of stabilizing more periodic orbits than the original predictive feedback control but also speeds up convergence for typical chaotic maps, as illustrated in both theory and application. The proposed adaptation scheme provides a way to tune parameters online, yielding a broadly applicable, fast chaos control that converges reliably, even for periodic orbits of large period.
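For reference, plain (non-stalled, non-adaptive) predictive feedback control is only a few lines; the sketch below stabilizes the period-1 orbit of the logistic map with a control gain chosen from the linearized stability condition:

    import numpy as np

    # Predictive (prediction-based) feedback control on the logistic map
    # f(x) = 4 x (1 - x): the control u_n = K * (f(x_n) - x_n) vanishes on
    # the fixed point x* = 3/4, so the target orbit itself is unchanged.
    def f(x):
        return 4.0 * x * (1.0 - x)

    def run(K, x0=0.3, steps=60):
        x, traj = x0, []
        for _ in range(steps):
            x = f(x) + K * (f(x) - x)
            traj.append(x)
        return np.array(traj)

    # Uncontrolled multiplier at x* is f'(x*) = -2 (unstable). With control
    # the multiplier becomes (1 + K) f'(x*) - K = -2 - 3K, stable for
    # K in (-1, -1/3); K = -0.6 gives multiplier -0.2.
    free = run(K=0.0)
    controlled = run(K=-0.6)
    print("uncontrolled tail:", free[-3:].round(4))        # still chaotic
    print("controlled tail:  ", controlled[-3:].round(4))  # settles at 0.75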
Knebel, H.J.; Folger, D.W.
1976-01-01
New seismic-reflection data show that large sand waves near the head of Wilmington Canyon on the Atlantic Outer Continental Shelf have a spacing of 100-650 m and a relief of 2-9 m. The bedforms trend northwest and are asymmetrical, the steeper slopes being toward the south or west. Vibracore sediments indicate that the waves apparently have formed on a substrate of relict nearshore sediments. Although the age of the original bedforms is unknown, the asymmetry is consistent with the dominant westerly to southerly drift in this area which has been determined by other methods; the asymmetry, therefore, is probably modern. Observations in the sand-wave area from a submersible during August 1975 revealed weak bottom currents, sediment bioturbation, unrippled microtopography, and lack of scour. Thus, the asymmetry may be maintained by periodic water motion, possibly associated with storms or perhaps with flow in the canyon head. © 1976.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yiming, E-mail: yangyiming1988@outlook.com
Minor phases make considerable contributions to the mechanical and physical properties of metals and alloys. Unfortunately, it is difficult to identify unknown minor phases in a bulk polycrystalline material using conventional metallographic methods. Here, a non-destructive method based on three-dimensional X-ray diffraction (3DXRD) is developed to solve this problem. Simulation results demonstrate that this method is simultaneously able to identify minor phase grains and reveal their positions, orientations and sizes within bulk alloys. According to systematic simulations, the 3DXRD method is practicable for an extensive sample set, including polycrystalline alloys with hexagonal, orthorhombic and cubic minor phases. Experiments were also conducted to confirm the simulation results. The results for a bulk sample of aluminum alloy AA6061 show that the crystal grains of an unexpected γ-Fe (austenite) phase can be identified, three-dimensionally and nondestructively. Therefore, we conclude that the 3DXRD method is a powerful tool for the identification of unknown minor phases in bulk alloys belonging to a variety of crystal systems. This method also has the potential to be used for in situ observations of the effects of minor phases on the crystallographic behaviors of alloys. - Highlights: •A method based on 3DXRD is developed for identification of unknown minor phases. •Grain position, orientation and size are simultaneously acquired. •A systematic simulation demonstrated the applicability of the proposed method. •Experimental results on an AA6061 sample confirmed the practicability of the method.
Su, Xiaoquan; Xu, Jian; Ning, Kang
2012-10-01
Effectively comparing different microbial communities (also referred to as 'metagenomic samples' here) on a large scale has long intrigued scientists: given a set of unknown samples, find similar metagenomic samples from a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest, against which any metagenomic sample could then be searched to find the most similar metagenomic sample(s). However, on one hand, current databases with a large number of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; on the other hand, methods to measure the similarity of metagenomic data work well only for small sets of samples by pairwise comparison. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we have proposed a novel method, Meta-Storms, that can systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database by a fast scoring function based on quantitative phylogeny and (iv) managing the database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic datasets from the public domain and in-house facilities, and tested the Meta-Storms method on them. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and it achieves accuracies similar to the current popular significance testing-based methods. The Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. ningkang@qibebt.ac.cn. Supplementary data are available at Bioinformatics online.
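A highly simplified picture of the search step: if each sample is reduced to a flat, normalized taxon-abundance vector, the database query becomes a nearest-neighbor search under a community similarity measure. Bray-Curtis similarity is used below purely for illustration; Meta-Storms' actual scoring function works on the hierarchical taxonomy:

    import numpy as np

    # Simplified stand-in for a metagenomic sample search: each sample is a
    # normalized taxon-abundance vector, and the database query returns the
    # most similar samples by Bray-Curtis similarity. (Meta-Storms itself
    # scores matches on a hierarchical taxonomy, not on a flat vector.)
    def bray_curtis_similarity(a, b):
        return 1.0 - np.abs(a - b).sum() / (a + b).sum()

    rng = np.random.default_rng(3)
    taxa = 50
    database = {f"sample_{i:03d}": rng.dirichlet(np.ones(taxa)) for i in range(1000)}

    query = rng.dirichlet(np.ones(taxa))
    scores = {name: bray_curtis_similarity(query, v) for name, v in database.items()}
    for name in sorted(scores, key=scores.get, reverse=True)[:5]:
        print(name, round(scores[name], 4))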
Adaptive neural control for a class of nonlinear time-varying delay systems with unknown hysteresis.
Liu, Zhi; Lai, Guanyu; Zhang, Yun; Chen, Xin; Chen, Chun Lung Philip
2014-12-01
This paper investigates the fusion of an unknown-direction hysteresis model with adaptive neural control techniques for time-delayed continuous-time nonlinear systems without strict-feedback form. In contrast with previous works on the hysteresis phenomenon, the direction of the modified Bouc-Wen hysteresis model investigated here is unknown. To reduce the computational burden in the adaptation mechanism, an optimized adaptation method is successfully applied to the control design. Based on the Lyapunov-Krasovskii method, two neural-network-based adaptive control algorithms are constructed to guarantee that all the system states and adaptive parameters remain bounded, and that the tracking error converges to an adjustable neighborhood of the origin. Finally, some numerical examples are provided to validate the effectiveness of the proposed control methods.
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
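The paper estimates time skews jointly with model parameters in the frequency domain; as a much simpler illustration of the underlying quantity, the sketch below recovers a relative skew between two noisy measurement channels by cross-correlation (the signal, noise level, and skew are made up):

    import numpy as np

    # Minimal illustration of recovering a relative time skew between two
    # measurement channels by cross-correlation (the paper itself estimates
    # skews jointly with model parameters in the frequency domain).
    dt = 0.01                                   # sample period, s
    t = np.arange(0, 10, dt)
    rng = np.random.default_rng(2)
    signal = np.sin(2 * np.pi * 0.7 * t) + 0.5 * np.sin(2 * np.pi * 1.3 * t)

    skew_samples = 17                           # unknown skew: 0.17 s
    ch_a = signal + 0.02 * rng.normal(size=t.size)
    ch_b = np.roll(signal, skew_samples) + 0.02 * rng.normal(size=t.size)

    lags = np.arange(-50, 51)
    xcorr = [np.dot(ch_a, np.roll(ch_b, -k)) for k in lags]
    est = lags[int(np.argmax(xcorr))]
    print(f"estimated skew: {est * dt:.2f} s (true 0.17 s)")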
Spencer, Nick J; Hibberd, Timothy J; Travis, Lee; Wiklendt, Lukasz; Costa, Marcello; Hu, Hongzhen; Brookes, Simon J; Wattchow, David A; Dinning, Phil G; Keating, Damien J; Sorensen, Julian
2018-05-28
The enteric nervous system (ENS) contains millions of neurons essential for organization of motor behaviour of the intestine. It is well established that the large intestine requires ENS activity to drive propulsive motor behaviours. However, the firing pattern of the ENS underlying propagating neurogenic contractions of the large intestine remained unknown. To identify this, we used high-resolution neuronal imaging with electrophysiology from neighbouring smooth muscle. Myoelectric activity underlying propagating neurogenic contractions along the murine large intestine (referred to as colonic migrating motor complexes, CMMCs) consisted of prolonged bursts of rhythmic depolarizations at a frequency of ∼2 Hz. Temporal coordination of this activity in the smooth muscle over large spatial fields (∼7 mm, longitudinally) was dependent on the ENS. During quiescent periods between neurogenic contractions, recordings from large populations of enteric neurons, in mice of either sex, revealed ongoing activity. The onset of neurogenic contractions was characterized by the emergence of temporally synchronized activity across large populations of excitatory and inhibitory neurons. This neuronal firing pattern was rhythmic and temporally synchronized across large numbers of ganglia at ∼2 Hz. ENS activation preceded smooth muscle depolarization, indicating that the rhythmic depolarizations in smooth muscle were controlled by the firing of enteric neurons. The cyclical emergence of temporally coordinated firing of large populations of enteric neurons represents a unique neural motor pattern outside the central nervous system. This is the first direct observation of rhythmic firing in the ENS underlying rhythmic electrical depolarizations in smooth muscle, and the pattern of neuronal activity we identified underlies the generation of CMMCs. SIGNIFICANCE STATEMENT: How the enteric nervous system (ENS) generates neurogenic contractions of smooth muscle in the gastrointestinal (GI) tract has been a long-standing mystery in vertebrates. It is well known that myogenic pacemaker cells (interstitial cells of Cajal, ICC) exist in the GI tract and generate rhythmic myogenic contractions. However, the mechanisms underlying the generation of rhythmic neurogenic contractions of smooth muscle in the GI tract remained unknown. We developed a high-resolution neuronal imaging method with electrophysiology to address this issue. This technique revealed a novel pattern of rhythmic coordinated neuronal firing in the ENS that had never been identified. Rhythmic neuronal firing in the ENS was found to generate rhythmic neurogenic depolarizations in smooth muscle that underlie contraction of the GI tract. Copyright © 2018 the authors.
Interactive Social Neuroscience to Study Autism Spectrum Disorder
Rolison, Max J.; Naples, Adam J.; McPartland, James C.
2015-01-01
Individuals with autism spectrum disorder (ASD) demonstrate difficulty with social interactions and relationships, but the neural mechanisms underlying these difficulties remain largely unknown. While social difficulties in ASD are most apparent in the context of interactions with other people, most neuroscience research investigating ASD has provided limited insight into the complex dynamics of these interactions. The development of novel, innovative “interactive social neuroscience” methods to study the brain in contexts with two interacting humans is a necessary advance for ASD research. Studies applying an interactive neuroscience approach to study two brains engaging with one another have revealed significant differences in neural processes during interaction compared to observation in brain regions that are implicated in the neuropathology of ASD. Interactive social neuroscience methods are crucial in clarifying the mechanisms underlying the social and communication deficits that characterize ASD. PMID:25745371
Method for calibrating a Fourier transform ion cyclotron resonance mass spectrometer
Smith, Richard D.; Masselon, Christophe D.; Tolmachev, Aleksey
2003-08-19
A method for improving the calibration of a Fourier transform ion cyclotron resonance mass spectrometer wherein the frequency spectrum of a sample has been measured and the frequency (f) and intensity (I) of at least three species having known mass-to-charge (m/z) ratios and one species having an unknown (m/z) ratio have been identified. The method uses the known (m/z) ratios, frequencies, and intensities of the at least three species to calculate coefficients A, B, and C, wherein the mass-to-charge ratio of at least one of the three species is given by (m/z)_i = A/f_i + B/f_i^2 + C·G(I_i)/f_i^Q, wherein f_i is the detected frequency of the species, G(I_i) is a predetermined function of the intensity of the species, and Q is a predetermined exponent. Using the calculated values for A, B, and C, the mass-to-charge ratio of the unknown species is calculated as (m/z)_ii = A/f_ii + B/f_ii^2 + C·G(I_ii)/f_ii^Q, wherein f_ii is the measured frequency of the unknown species, and I_ii is the measured intensity of the unknown species.
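Because the calibration law is linear in A, B, and C once G(I) and Q are fixed, three calibrant species determine the coefficients through a 3×3 linear system. The sketch below assumes G(I) = I and Q = 2 and uses synthetic frequencies, intensities, and m/z values; these choices are illustrative, not the patent's:

    import numpy as np

    # With the calibration law (m/z) = A/f + B/f**2 + C*G(I)/f**Q, three
    # calibrant species give a 3x3 linear system in A, B, C. Here G(I) = I
    # and Q = 2 are illustrative choices; the numbers are synthetic.
    Q = 2
    G = lambda I: I

    f_known = np.array([107_000.0, 156_000.0, 214_000.0])   # detected frequencies
    I_known = np.array([1.0, 0.6, 0.3])                     # intensities
    mz_known = np.array([1000.21, 686.05, 500.12])          # known m/z values

    M = np.column_stack([1.0 / f_known,
                         1.0 / f_known**2,
                         G(I_known) / f_known**Q])
    A, B, C = np.linalg.solve(M, mz_known)

    # Apply the calibration to an unknown species.
    f_u, I_u = 130_500.0, 0.8
    mz_u = A / f_u + B / f_u**2 + C * G(I_u) / f_u**Q
    print(f"A={A:.3e}, B={B:.3e}, C={C:.3e}, unknown m/z = {mz_u:.4f}")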
Lu, Feng; Matsushita, Yasuyuki; Sato, Imari; Okabe, Takahiro; Sato, Yoichi
2015-10-01
We propose an uncalibrated photometric stereo method that works with general and unknown isotropic reflectances. Our method uses a pixel intensity profile, which is a sequence of radiance intensities recorded at a pixel under unknown varying directional illumination. We show that for general isotropic materials and uniformly distributed light directions, the geodesic distance between intensity profiles is linearly related to the angular difference of their corresponding surface normals, and that the intensity distribution of the intensity profile reveals reflectance properties. Based on these observations, we develop two methods for surface normal estimation: one for a general setting that uses only the recorded intensity profiles, the other for the case where a BRDF database is available while the exact BRDF of the target scene is still unknown. Quantitative and qualitative evaluations are conducted using both synthetic and real-world scenes, which show state-of-the-art accuracy of less than 10 degrees without using reference data and 5 degrees with reference data for all 100 materials in the MERL database.
"A Marriage on the Rocks": An Unknown Letter by William H. Kilpatrick about His Project Method
ERIC Educational Resources Information Center
Knoll, Michael
2010-01-01
William H. Kilpatrick is known worldwide as "Mr. Project Method." But the origin of his celebrated paper of 1918 has never been explored. The discovery of a hitherto unknown letter reveals that Kilpatrick was an educational entrepreneur who, without regard for language and tradition, adopted the term "project" and used it in a provocative new way…
USDA-ARS?s Scientific Manuscript database
Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the compu...
Ochiai, Nobuo; Mitsui, Kazuhisa; Sasamoto, Kikuo; Yoshimura, Yuta; David, Frank; Sandra, Pat
2014-09-05
A method is developed for the identification of sulfur compounds in tobacco smoke extract. The method is based on large volume injection (LVI) of 10 μL of tobacco smoke extract followed by selectable one-dimensional ((1)D) or two-dimensional ((2)D) gas chromatography (GC) coupled to a hybrid quadrupole time-of-flight mass spectrometer (Q-TOF-MS) using electron ionization (EI) and positive chemical ionization (PCI), with parallel sulfur chemiluminescence detection (SCD). In order to identify each individual sulfur compound, sequential heart-cuts of 28 sulfur fractions from (1)D GC to (2)D GC were performed with the three MS detection modes (SCD/EI-TOF-MS, SCD/PCI-TOF-MS, and SCD/PCI-Q-TOF-MS). Thirty sulfur compounds were positively identified by MS library search, linear retention indices (LRI), molecular mass determination using PCI accurate mass spectra, formula calculation using EI and PCI accurate mass spectra, and structure elucidation using collision-activated dissociation (CAD) of the protonated molecule. Additionally, 11 molecular formulas were obtained for unknown sulfur compounds. The determined values of the identified and unknown sulfur compounds were in the range of 10-740 ng/mg total particulate matter (TPM) (RSD: 1.2-12%, n=3). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah
2018-02-01
Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR-based drug design to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of the Fisher and Laplacian criteria, which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied the method and other feature selection methods to three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method perform better than the other models. This indicates the efficiency of the proposed method in selecting the relevant descriptors using compounds with known and unknown activities. The results of this study show that compounds with known and unknown activities can both be helpful in improving the performance of the combined Fisher- and Laplacian-based feature selection methods.
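For reference, the supervised Fisher score that the proposed method builds on is straightforward to compute; the sketch below scores descriptors on synthetic labeled data and omits the Laplacian and semi-supervised extensions described above:

    import numpy as np

    # Supervised Fisher score for feature ranking: between-class scatter of
    # feature means over within-class variance. (The paper extends this idea
    # with distance metrics built from continuous activities and a Laplacian
    # term for the unlabeled compounds; only the Fisher part is shown here.)
    def fisher_scores(X, y):
        classes = np.unique(y)
        overall_mean = X.mean(axis=0)
        num = np.zeros(X.shape[1])
        den = np.zeros(X.shape[1])
        for c in classes:
            Xc = X[y == c]
            num += Xc.shape[0] * (Xc.mean(axis=0) - overall_mean) ** 2
            den += Xc.shape[0] * Xc.var(axis=0)
        return num / den

    rng = np.random.default_rng(4)
    # 40 compounds x 5 descriptors; descriptor 0 separates the two classes.
    y = np.repeat([0, 1], 20)
    X = rng.normal(size=(40, 5))
    X[:, 0] += 3.0 * y
    print("descriptor scores:", fisher_scores(X, y).round(3))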
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-01-01
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass properties. In this paper, gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem with unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748
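The abstract's real-time least-squares identification can be illustrated with a generic recursive least-squares (RLS) estimator; the regressor and parameter vectors below are placeholders, not the paper's dynamic model of the space robot.

```python
# A minimal recursive least-squares (RLS) sketch of the kind of real-time
# estimator described above; phi and theta are generic placeholders.
import numpy as np

class RLS:
    def __init__(self, n, lam=1.0):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = 1e6 * np.eye(n)        # large initial covariance
        self.lam = lam                  # forgetting factor

    def update(self, phi, y):
        # Standard RLS gain, estimate, and covariance updates.
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)
        self.theta += k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Toy usage: recover theta_true = [2.0, -1.0] from noisy linear measurements.
rng = np.random.default_rng(0)
est, theta_true = RLS(2), np.array([2.0, -1.0])
for _ in range(200):
    phi = rng.normal(size=2)
    est.update(phi, phi @ theta_true + 0.01 * rng.normal())
print(est.theta)  # approaches [2.0, -1.0]
```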
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters of the electric motor, and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input, and determines a motor management strategy for the electric motor based thereon.
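A hedged sketch of the patent's idea: infer the unknown parameter by matching the known parameters against a table of reference motors. The reference values and the k-nearest-neighbour averaging are illustrative assumptions, not the patented processing logic.

```python
# Hedged sketch: estimate an unknown motor parameter from a reference-motor
# table. The reference data and the k-NN averaging are illustrative assumptions.
import numpy as np

def estimate_unknown(known, ref_known, ref_unknown, k=3):
    """Average the unknown parameter over the k reference motors whose
    known parameters are closest (Euclidean) to the target motor's."""
    d = np.linalg.norm(ref_known - known, axis=1)
    nearest = np.argsort(d)[:k]
    return ref_unknown[nearest].mean()

# Reference motors: columns = (rated power kW, rated current A);
# unknown parameter = rotor inertia (values are invented for illustration).
ref_known = np.array([[5.5, 11.0], [7.5, 14.5], [11.0, 21.0], [15.0, 28.0]])
ref_inertia = np.array([0.021, 0.035, 0.062, 0.095])
print(estimate_unknown(np.array([9.0, 17.0]), ref_known, ref_inertia))
```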
State estimation of spatio-temporal phenomena
NASA Astrophysics Data System (ADS)
Yu, Dan
This dissertation addresses the state estimation problem for spatio-temporal phenomena that can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using the Kalman Filter (KF) is computationally intractable, and hence a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of the KF can be improved by placing sensors at feasible locations; therefore, the sensor scheduling problem of placing multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is demonstrated on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known. We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input statistics from the output data by solving an appropriate least squares problem, then fit an AR model to the recovered input statistics and construct an innovations model of the unknown inputs using the eigensystem realization algorithm. The proposed algorithm is shown to outperform the augmented two-stage Kalman Filter (ASKF) and the unbiased minimum-variance (UMV) algorithm in several examples. Finally, we propose a framework to place multiple mobile sensors to optimize the long-term performance of the KF in the estimation of the state of a PDE. The major challenges are that placing multiple sensors is an NP-hard problem, and that the optimization problem is non-convex in general. In this dissertation, we first construct an ROM using the RPOD* algorithm, and then reduce the feasible sensor locations to a subset using the ROM. The Information Space Receding Horizon Control (I-RHC) approach and a modified Monte Carlo Tree Search (MCTS) approach are applied to solve the sensor scheduling problem using the subset. Various applications are provided to demonstrate the performance of the proposed approach.
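The RPOD/RPOD* algorithms are not spelled out in the abstract; as a rough illustration of the randomized model-reduction idea, this sketch extracts a low-rank basis from a snapshot matrix with a generic randomized SVD (Halko-style range finder). All sizes are toy assumptions.

```python
# Generic randomized-SVD sketch of how a low-rank basis for a reduced order
# model can be extracted cheaply from a snapshot matrix; not the RPOD* algorithm.
import numpy as np

def randomized_basis(A, r, oversample=10):
    """Return r approximate left singular vectors of snapshot matrix A."""
    rng = np.random.default_rng(0)
    Y = A @ rng.normal(size=(A.shape[1], r + oversample))  # sketch the range
    Q, _ = np.linalg.qr(Y)
    U, s, _ = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :r], s[:r]

# Toy usage: a 2000-DOF system whose snapshots are near rank 5.
A = np.random.rand(2000, 5) @ np.random.rand(5, 100)
Phi, s = randomized_basis(A, 5)
print(s)  # leading singular values captured by the reduced basis
```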
NASA Astrophysics Data System (ADS)
Zhu, Qiao; Yue, Jun-Zhou; Liu, Wei-Qun; Wang, Xu-Dong; Chen, Jun; Hu, Guang-Di
2017-04-01
This work is focused on the active vibration control of a piezoelectric cantilever beam, where an adaptive feedforward controller (AFC) is utilized to reject vibration with unknown multiple frequencies. First, the experimental setup and its mathematical model are introduced. Because the channel between the disturbance and the vibration output is unknown in practice, the concept of an equivalent input disturbance (EID) is employed to place an equivalent disturbance in the input channel. In this situation, vibration control can be achieved by setting the control input to the identified EID. Then, for an EID with known multiple frequencies, the AFC is introduced to reject the vibration perfectly, but it is sensitive to the frequencies. In order to accurately identify the unknown frequencies of the EID in the presence of random disturbances and unmodeled nonlinear dynamics, a time-frequency-analysis (TFA) method is employed. Consequently, a TFA-based AFC algorithm is proposed for active vibration control with unknown frequencies. Finally, four experimental cases are given to illustrate the efficiency of the proposed TFA-based AFC algorithm.
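The paper's time-frequency-analysis step is not detailed here; as a hedged stand-in, this sketch identifies unknown disturbance frequencies by picking local maxima of a windowed FFT magnitude spectrum.

```python
# Hedged stand-in for the frequency-identification step: FFT peak picking.
import numpy as np

def dominant_frequencies(x, fs, n_peaks=2):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Keep only local maxima so side bins of one peak are not double-counted.
    i = np.arange(1, len(spec) - 1)
    peaks = i[(spec[i] > spec[i - 1]) & (spec[i] >= spec[i + 1])]
    return freqs[peaks[np.argsort(spec[peaks])[-n_peaks:]]]

# Toy usage: a vibration signal with components at 17 Hz and 40 Hz plus noise.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 17 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
x += 0.1 * np.random.randn(t.size)
print(sorted(dominant_frequencies(x, fs)))  # approximately [17.0, 40.0]
```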
Advanced Computational Framework for Environmental Management ZEM, Version 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vesselinov, Velimir V.; O'Malley, Daniel; Pandey, Sachin
2016-11-04
Typically, environmental management problems require analysis of large and complex data sets originating from concurrent data streams with different data collection frequencies and pedigree. These big data sets require on-the-fly integration into a series of models of different complexity for various types of model analyses, where the data are applied as soft and hard model constraints. This is needed to provide fast iterative model analyses based on the latest available data to guide decision-making. Furthermore, the data and models are associated with uncertainties. The uncertainties are probabilistic (e.g., measurement errors) and non-probabilistic (unknowns, e.g., alternative conceptual models characterizing site conditions). To address all of these issues, we have developed an integrated framework for real-time data and model analyses for environmental decision-making called ZEM. The framework allows for seamless, on-the-fly integration of data and modeling results for robust and scientifically defensible decision-making, applying advanced decision analysis tools such as Bayesian Information-Gap Decision Theory (BIG-DT). The framework also includes advanced optimization methods capable of dealing with a large number of unknown model parameters, and surrogate (reduced order) modeling capabilities based on support vector regression techniques. The framework is coded in Julia, a state-of-the-art high-performance programming language (http://julialang.org). The ZEM framework is open source, released under the GPL v3 license, and can be applied to any environmental management site.
Potocki, J K; Tharp, H S
1993-01-01
The success of treating cancerous tissue with heat depends on the temperature elevation, the amount of tissue elevated to that temperature, and the length of time that the tissue temperature is elevated. In clinical situations the temperature of most of the treated tissue volume is unknown, because only a small number of temperature sensors can be inserted into the tissue. A state space model based on a finite difference approximation of the bioheat transfer equation (BHTE) is developed for identification purposes. A full-order extended Kalman filter (EKF) is designed to estimate both the unknown blood perfusion parameters and the temperature at unmeasured locations. Two reduced-order estimators are designed as computationally less intensive alternatives to the full-order EKF. Simulation results show that the success of the estimation scheme depends strongly on the number and location of the temperature sensors. Superior results occur when a temperature sensor exists in each unknown blood perfusion zone, and the number of sensors is at least as large as the number of unknown perfusion zones. Unacceptable results occur when there are more unknown perfusion parameters than temperature sensors, or when the sensors are placed in locations that do not sample the unknown perfusion information.
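The BHTE state-space model itself is not reproduced in the abstract; the following generic extended-Kalman-filter step shows the estimator skeleton such a scheme builds on. The functions f, h and their Jacobians F, H are placeholders, not the bioheat model.

```python
# Generic extended-Kalman-filter predict/update cycle; f, h and their
# Jacobians are placeholders, not the BHTE model from the paper.
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF cycle for x' = f(x) + w, z = h(x) + v."""
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    y = z - h(x_pred)                                # innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new

# Toy usage: scalar random walk observed directly.
f = lambda x: x
F = lambda x: np.eye(1)
h = lambda x: x
H = lambda x: np.eye(1)
x, P = np.zeros(1), np.eye(1)
for z in [1.0, 1.1, 0.9]:
    x, P = ekf_step(x, P, np.array([z]), f, F, h, H,
                    0.01 * np.eye(1), 0.1 * np.eye(1))
print(x)
```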
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads, using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data-adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, NB regression shows inflated Type-I error rates, but the classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make the Type-I error rates of all methods close to the nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data-adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. FL regression has power comparable to NB regression. We conclude that implementing the data-adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-12-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
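A minimal numerical sketch of the Tikhonov branch of such a method, under assumed discretization choices (uniform grid, crude quadrature that skips the singular node, second-difference smoothing); it is not the authors' exact scheme.

```python
# Hedged sketch of Tikhonov-regularized Abel inversion on a uniform grid.
import numpy as np

def abel_matrix(r):
    """Forward operator g(y_i) = 2 * sum_j f(r_j) r_j / sqrt(r_j^2 - y_i^2) dr,
    with a crude quadrature that skips the singular node r_j = y_i."""
    n, dr = len(r), r[1] - r[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):  # contributions only from r_j > y_i
            A[i, j] = 2 * r[j] / np.sqrt(r[j] ** 2 - r[i] ** 2) * dr
    return A

n = 100
r = np.linspace(0, 1, n)
f_true = np.exp(-((r / 0.4) ** 2))          # model radial profile
A = abel_matrix(r)
g = A @ f_true + 0.01 * np.random.randn(n)  # noisy projection data

# Tikhonov: minimize ||A f - g||^2 + lam^2 ||D2 f||^2 (smoothness penalty).
lam = 0.05
D2 = np.diff(np.eye(n), 2, axis=0)          # second-difference operator
f_rec = np.linalg.lstsq(np.vstack([A, lam * D2]),
                        np.r_[g, np.zeros(n - 2)], rcond=None)[0]
print(np.max(np.abs(f_rec - f_true)))       # reconstruction error
```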
Fast Markerless Tracking for Augmented Reality in Planar Environment
NASA Astrophysics Data System (ADS)
Basori, Ahmad Hoirul; Afif, Fadhil Noer; Almazyad, Abdulaziz S.; AbuJabal, Hamza Ali S.; Rehman, Amjad; Alkawaz, Mohammed Hazim
2015-12-01
Markerless tracking for augmented reality should not only be accurate but also fast enough to provide seamless synchronization between real and virtual beings. Reported methods show that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors as a camera pose predictor. To align the augmentation relative to camera motion, the tracking method substitutes feature-based camera estimation with a combination of inertial sensors and a complementary filter, providing a more dynamic response. The proposed method managed to track an unknown environment with faster processing time than available feature-based approaches. Moreover, the proposed method can sustain its estimation in situations where feature-based tracking loses track. The sensor-fusion tracking performed the task at about 22.97 FPS, up to five times faster than the feature-based tracking method used for comparison. Therefore, the proposed method can be used to track unknown environments without depending on the number of features in the scene, while requiring lower computational cost.
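The complementary-filter fusion of gyro and accelerometer data can be sketched for a single axis as follows; the gain, bias, and noise levels are illustrative assumptions, not the paper's tuning.

```python
# Hedged single-axis sketch of the complementary-filter idea: blend integrated
# gyro rate (good short-term) with accelerometer tilt (drift-free long-term).
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.95, angle0=0.0):
    angles, angle = [], angle0
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        angles.append(angle)
    return np.array(angles)

# Toy usage: true tilt ramps to 0.2 rad; gyro has bias, accelerometer has noise.
dt, n = 0.01, 500
true = np.linspace(0.0, 0.2, n)
gyro = np.gradient(true, dt) + 0.05            # biased rate measurement
accel = true + 0.02 * np.random.randn(n)       # noisy but unbiased angle
print(complementary_filter(gyro, accel, dt)[-1])  # close to 0.2 despite the bias
```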
Nadzirin, Nurul; Firdaus-Raih, Mohd
2012-10-08
Proteins of uncharacterized function form a large part of many currently available biological databases, and this situation exists even in the Protein Data Bank (PDB). Our analysis of recent PDB data revealed that only 42.53% of the PDB entries (1084 coordinate files) categorized under "unknown function" are true examples of proteins of unknown function at this point in time. The remaining 1465 entries annotated as such appear to be candidates for annotation re-assessment, based on the availability of direct functional characterization experiments for the protein itself, or for homologous sequences or structures, thus enabling computational function inference.
Three-dimensional cinematography with control object of unknown shape.
Dapena, J; Harman, E A; Miller, J A
1982-01-01
A technique for the reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which encloses the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.
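The paper's calibration pipeline is not reproduced here, but the core two-camera reconstruction step can be illustrated with standard linear (DLT-style) triangulation; the projection matrices and the point are synthetic.

```python
# Standard linear two-view triangulation; P1, P2 and the point are synthetic,
# not the paper's calibration data.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3D point from two 3x4 projection matrices and the
    corresponding image coordinates (u, v) in each view."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy usage: two cameras looking at the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted camera
X = np.array([1.0, 2.0, 10.0, 1.0])
uv1 = (P1 @ X)[:2] / (P1 @ X)[2]
uv2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, uv1, uv2))  # approximately [1, 2, 10]
```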
Radar Cross Section Prediction for Coated Perfect Conductors with Arbitrary Geometries.
1986-01-01
... equivalent electric and magnetic surface currents as the desired unknowns. Triangular patch modelling is applied to the boundary surfaces. The method of ... matrix inversion for the unknown surface current coefficients. Huygens' principle is again applied to calculate the scattered electric field produced ...
New Finite Difference Methods Based on IIM for Inextensible Interfaces in Incompressible Flows
Li, Zhilin; Lai, Ming-Chih
2012-01-01
In this paper, new finite difference methods based on the augmented immersed interface method (IIM) are proposed for simulating an inextensible moving interface in an incompressible two-dimensional flow. The mathematical models arise from studying the deformation of red blood cells in mathematical biology. The governing equations are incompressible Stokes or Navier-Stokes equations with an unknown surface tension, which should be determined in such a way that the surface divergence of the velocity is zero along the interface. Thus, the area enclosed by the interface and the total length of the interface should be conserved during the evolution process. Because of the nonlinear and coupling nature of the problem, direct discretization by applying the immersed boundary or immersed interface method yields complex nonlinear systems to be solved. In our new methods, we treat the unknown surface tension as an augmented variable so that the augmented IIM can be applied. Since finding the unknown surface tension is essentially an inverse problem that is sensitive to perturbations, our regularization strategy is to introduce a controlled tangential force along the interface, which leads to a least squares problem. For Stokes equations, the forward solver at one time level involves solving three Poisson equations with an interface. For Navier-Stokes equations, we propose a modified projection method that can enforce the pressure jump condition corresponding directly to the unknown surface tension. Several numerical experiments show good agreement with other results in the literature and reveal some interesting phenomena. PMID:23795308
Advanced Structural Analyses by Third Generation Synchrotron Radiation Powder Diffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakata, M.; Aoyagi, S.; Ogura, T.
2007-01-19
Since the advent of 3rd generation Synchrotron Radiation (SR) sources such as SPring-8, the capabilities of SR powder diffraction have increased greatly, not only in accurate structure refinement but also in ab initio structure determination. In this study, advanced structural analyses by 3rd generation SR powder diffraction, based on the Large Debye-Scherrer camera installed at BL02B2, SPring-8, are described. Because of the high angular resolution and high counting statistics of powder data collected at BL02B2, SPring-8, ab initio structure determination can cope with molecular crystals of 65 atoms, including H atoms. For the structure refinements, it is found that a kind of Maximum Entropy Method in which several atoms are omitted from the phase calculation becomes very important for refining the structural details of a fairly large molecule in a crystal. It should be emphasized that unless the structure obtained by a Genetic Algorithm (GA) or some other ab initio structure determination method using real-space structural knowledge is refined very precisely, it is not possible to tell whether that structure is correct. In order to determine and/or refine the crystal structures of rather complicated molecules, we cannot overemphasize the importance of the 3rd generation SR sources.
Sub-word image clustering in Farsi printed books
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2015-02-01
Most OCR systems are designed for the recognition of a single page. In the case of unfamiliar typefaces, low-quality paper and degraded prints, the performance of these products drops sharply. However, an OCR system can use the redundancy of word occurrences in large documents to improve recognition results. In this paper, we propose a sub-word image clustering method for applications dealing with large printed documents. We assume that the whole document is printed in a single unknown font with low print quality. Our proposed method finds clusters of equivalent sub-word images with an incremental algorithm. Due to the low print quality, we propose an image matching algorithm for measuring the distance between two sub-word images, based on the Hamming distance and the ratio of the area to the perimeter of the connected components. We built a ground-truth dataset of more than 111000 sub-word images to evaluate our method. All of these images were extracted from an old Farsi book. We cluster all of these sub-words, including isolated letters and even punctuation marks, and the centers of the resulting clusters are labeled manually. We show that all sub-words of the book can be recognized with more than 99.7% accuracy by assigning the label of each cluster center to all of its members.
Liao, J. G.; Mcmurry, Timothy; Berg, Arthur
2014-01-01
Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology. PMID:23934072
Cultural Resources Investigations, Cross Basin Channel Realignments, Atchafalaya Basin, Louisiana
1990-12-01
... of the currently-planned Old Atchafalaya River Area. On Upper Grand River, opposite the mouth of Bayou Pigeon (Figure 13), Moore reported another mound ... might have been buried by the large amount of recent sedimentation. To illustrate this point, Kniffen referred to the mound "opposite Bayou Pigeon" (16 ... Site listing (fragment): Lake Natchez Ridge, Shell Ridge, Unknown Prehistoric; 16 IV 15, Mound at Bayou Pigeon, Mound, Unknown Prehistoric; 16 IV 156, Alabama-Bayou Des Ourses, Mound
14. Photographic copy of photograph dated ca. 1925; Photographer unknown; ...
14. Photographic copy of photograph dated ca. 1925; Photographer unknown; Original in Rath collection at Iowa State University Libraries, Department of Special Collection, Ames, Iowa; Filed under: Rath Packing Company, Public Relations, Symbol N, Box 106, File 6: THE RATH COMPLEX IN THE MID 1920; LARGE BUILDING TO LEFT OF SMOKESTACK IS HOG KILL (BUILDING 40); LOOKING NORTH FROM ACROSS CEDAR RIVER - Rath Packing Company, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA
NASA Astrophysics Data System (ADS)
O'Malley, D.; Le, E. B.; Vesselinov, V. V.
2015-12-01
We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, and it provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
Worldwide spread of the Ponseti method for clubfoot
Shabtai, Lior; Specht, Stacy C; Herzenberg, John E
2014-01-01
The Ponseti method has become the gold standard for the treatment of idiopathic clubfoot. Its safety and efficacy have been demonstrated extensively in the literature, leading to increased use around the world over the last two decades, as demonstrated by the increase in Ponseti-related PubMed publications from many countries. We found evidence of Ponseti activity in 113 of 193 United Nations members. The contribution of many organizations which provide resources to healthcare practitioners in low- and middle-income countries, as well as Ponseti champions and modern communication technology, has helped to spread the Ponseti method around the world. Despite this, there are many countries where the Ponseti method is not being used, as well as many large countries in which the extent of activity is unknown. With its low rate of complications, low cost, and high effectiveness, this method has unlimited potential to treat clubfoot in both developed and undeveloped countries. Our listing of countries that have not yet shown evidence of Ponseti activity will help non-governmental organizations to target those countries which still need the most help. PMID:25405086
2017-01-01
Mass-spectrometry-based, high-throughput proteomics experiments produce large amounts of data. While typically acquired to answer specific biological questions, these data can also be reused in orthogonal ways to reveal new biological knowledge. We here present a novel method for such orthogonal reuse of public proteomics data. Our method elucidates biological relationships between proteins based on the co-occurrence of these proteins across human experiments in the PRIDE database. The majority of the significantly co-occurring protein pairs detected by our method have been successfully mapped to existing biological knowledge. The validity of our novel method is substantiated by the observation that extremely few pairs formed by random association between the same set of proteins can be mapped to existing knowledge. Moreover, using literature searches and the STRING database, we were able to derive meaningful biological associations for unannotated protein pairs that were detected using our method, further illustrating that as-yet unknown associations present highly interesting targets for follow-up analysis. PMID:28480704
Ikeda, Mitsuru
2017-01-01
Information extraction and knowledge discovery regarding adverse drug reactions (ADRs) from large-scale clinical texts are very useful and much-needed processes. Two major difficulties of this task are the lack of domain experts for labeling examples and the intractable processing of unstructured clinical texts. Although most previous work has addressed these issues by applying semi-supervised learning for the former and a word-based approach for the latter, such approaches face complexity in acquiring initial labeled data and ignore the structured sequences of natural language. In this study, we propose automatic data labeling by distant supervision, where knowledge bases are exploited to assign an entity-level relation label for each drug-event pair in texts; we then use patterns to characterize the ADR relation. Multiple-instance learning with an expectation-maximization method is employed to estimate model parameters. The method applies transductive learning to iteratively reassign the probability of each unknown drug-event pair at training time. In experiments with 50,998 discharge summaries, we evaluate our method by varying a large number of parameters, that is, pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. Based on these evaluations, our proposed method outperforms the word-based feature for NB-EM (iEM), MILR, and TSVM with F1 score improvements of 11.3%, 9.3%, and 6.5%, respectively. PMID:29090077
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle
2016-08-01
Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for the solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
Effective implementation of the weak Galerkin finite element methods for the biharmonic equation
Mu, Lin; Wang, Junping; Ye, Xiu
2017-07-06
The weak Galerkin (WG) methods have been introduced in [11, 12, 17] for solving the biharmonic equation. The purpose of this paper is to develop an algorithm to implement the WG methods effectively. This is achieved by eliminating local unknowns to obtain a global system with a significant reduction in size. In fact, this reduced global system is equivalent to the Schur complement of the WG methods. The unknowns of the Schur complement of the WG method are those defined on the element boundaries. The equivalence of the WG method and its Schur complement is established. The numerical results demonstrate the effectiveness of this new implementation technique.
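A toy illustration of the static-condensation idea behind this implementation: eliminate interior unknowns so that only boundary unknowns remain in the global Schur-complement system. The block matrices below are random stand-ins, not actual WG stiffness blocks.

```python
# Schur-complement elimination of interior unknowns from an SPD block system
# [A B; B^T C] [x_i; x_b] = [f; g]; the matrices are toy data.
import numpy as np

rng = np.random.default_rng(1)
n_i, n_b = 6, 3                      # interior and boundary unknowns
M = rng.normal(size=(n_i + n_b, n_i + n_b))
K = M @ M.T + np.eye(n_i + n_b)      # SPD system matrix
A, B, C = K[:n_i, :n_i], K[:n_i, n_i:], K[n_i:, n_i:]
f, g = rng.normal(size=n_i), rng.normal(size=n_b)

# Reduced boundary system: (C - B^T A^{-1} B) x_b = g - B^T A^{-1} f
S = C - B.T @ np.linalg.solve(A, B)
x_b = np.linalg.solve(S, g - B.T @ np.linalg.solve(A, f))
x_i = np.linalg.solve(A, f - B @ x_b)   # recover interior unknowns locally

print(np.allclose(K @ np.r_[x_i, x_b], np.r_[f, g]))  # True
```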
Across-cohort QC analyses of GWAS summary statistics from complex traits
Chen, Guo-Bo; Lee, Sang Hong; Robinson, Matthew R; Trzaskowski, Maciej; Zhu, Zhi-Xiang; Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Kutalik, Zoltán; Loos, Ruth J F; Frayling, Timothy M; Hirschhorn, Joel N; Yang, Jian; Wray, Naomi R; Visscher, Peter M
2017-01-01
Genome-wide association studies (GWASs) have been successful in discovering SNP trait associations for many quantitative traits and common diseases. Typically, the effect sizes of SNP alleles are very small and this requires large genome-wide association meta-analyses (GWAMAs) to maximize statistical power. A trend towards ever-larger GWAMA is likely to continue, yet dealing with summary statistics from hundreds of cohorts increases logistical and quality control problems, including unknown sample overlap, and these can lead to both false positive and false negative findings. In this study, we propose four metrics and visualization tools for GWAMA, using summary statistics from cohort-level GWASs. We propose methods to examine the concordance between demographic information, and summary statistics and methods to investigate sample overlap. (I) We use the population genetics Fst statistic to verify the genetic origin of each cohort and their geographic location, and demonstrate using GWAMA data from the GIANT Consortium that geographic locations of cohorts can be recovered and outlier cohorts can be detected. (II) We conduct principal component analysis based on reported allele frequencies, and are able to recover the ancestral information for each cohort. (III) We propose a new statistic that uses the reported allelic effect sizes and their standard errors to identify significant sample overlap or heterogeneity between pairs of cohorts. (IV) To quantify unknown sample overlap across all pairs of cohorts, we propose a method that uses randomly generated genetic predictors that does not require the sharing of individual-level genotype data and does not breach individual privacy. PMID:27552965
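Step (II) can be illustrated with a small simulation: principal component analysis of reported allele frequencies across cohorts recovers ancestry grouping. The frequency matrix below is simulated; real GWAMA summary data would replace it.

```python
# Hedged sketch of PCA on cohort-level allele frequencies; the data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_cohorts, n_snps = 12, 5000
ancestry = np.repeat([0.0, 1.0], 6)                    # two ancestral groups
base = rng.uniform(0.05, 0.95, size=n_snps)
shift = rng.normal(0, 0.05, size=n_snps)               # ancestry-driven drift
freq = np.clip(base + np.outer(ancestry, shift), 0.01, 0.99)
freq += rng.normal(0, 0.01, size=freq.shape)           # cohort-level noise

X = freq - freq.mean(axis=0)                           # center per SNP
U, s, _ = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * s[0]
print(np.round(pc1, 2))  # first PC separates the two ancestry groups
```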
On the unreasonable effectiveness of the post-Newtonian approximation in gravitational physics
Will, Clifford M.
2011-01-01
The post-Newtonian approximation is a method for solving Einstein’s field equations for physical systems in which motions are slow compared to the speed of light and where gravitational fields are weak. Yet it has proven to be remarkably effective in describing certain strong-field, fast-motion systems, including binary pulsars containing dense neutron stars and binary black hole systems inspiraling toward a final merger. The reasons for this effectiveness are largely unknown. When carried to high orders in the post-Newtonian sequence, predictions for the gravitational-wave signal from inspiraling compact binaries will play a key role in gravitational-wave detection by laser-interferometric observatories. PMID:21447714
Evidence for the timing of sea-level events during MIS 3
NASA Astrophysics Data System (ADS)
Siddall, M.
2005-12-01
Four large sea-level peaks of millennial-scale duration occur during MIS 3. In addition smaller peaks may exist close to the sensitivity of existing methods to derive sea level during these periods. Millennial-scale changes in temperature during MIS 3 are well documented across much of the planet and are linked in some unknown, yet fundamental way to changes in ice volume / sea level. It is therefore highly likely that the timing of the sea level events during MIS 3 will prove to be a 'Rosetta Stone' for understanding millennial scale climate variability. I will review observational and mechanistic arguments for the variation of sea level on Antarctic, Greenland and absolute time scales.
Inflammatory Cells and Proteases in Abdominal Aortic Aneurysm and its Complications.
Haiying, Jiang; Sasaki, Takeshi; Jin, Enze; Kuzuya, Masafumi; Cheng, Xianwu
2018-05-30
Abdominal aortic aneurysm (AAA), a common disease among elderly individuals, involves the progressive dilatation of the abdominal aorta as a consequence of degeneration. The mechanisms of AAA formation, development and rupture are largely unknown. Surgical repair is the only available method of treatment, since the lack of knowledge regarding the pathogenesis of AAA has hindered the development of suitable medical treatments, particularly the development of drugs. In this review, we describe the inflammatory cells and proteases that may be involved in the formation and development of AAA. This knowledge can contribute to the development of new drugs for AAA.
Sampling Versus Filtering in Large-Eddy Simulations
NASA Technical Reports Server (NTRS)
Debliquy, O.; Knaepen, B.; Carati, D.; Wray, A. A.
2004-01-01
An LES formalism in which the filter operator is replaced by a sampling operator is proposed. The unknown quantities that appear in the LES equations originate only from inadequate resolution (discretization errors). The resulting viewpoint seems to make a link between finite difference approaches and finite element methods. Sampling operators are shown to commute with nonlinearities and to be purely projective. Moreover, their use allows an unambiguous definition of the LES numerical grid. The price to pay is that sampling never commutes with spatial derivatives and the commutation errors must be modeled. It is shown that models for the discretization errors may be treated using the dynamic procedure. Preliminary results, using the Smagorinsky model, are very encouraging.
NASA Astrophysics Data System (ADS)
Ma, Lin
2017-11-01
This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.
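As a hedged back-of-envelope companion: for an ideal taut string, f_n = (n/2L)*sqrt(T/m), which inverts to T = 4*m*L^2*f1^2. The paper's method refines exactly this estimate by modeling sag, flexural stiffness, and unknown end restraints.

```python
# Taut-string tension estimate from the fundamental frequency; a deliberately
# simplified baseline, not the paper's finite difference identification scheme.
def tension_from_frequency(f1_hz, length_m, mass_per_m):
    """Tension (N) of an ideal taut string: T = 4 * m * L^2 * f1^2."""
    return 4.0 * mass_per_m * length_m**2 * f1_hz**2

# Toy usage: 100 m cable, 60 kg/m, measured fundamental frequency 1.1 Hz.
print(tension_from_frequency(1.1, 100.0, 60.0))  # about 2.9e6 N
```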
Simulating chemical reactions in ionic liquids using QM/MM methodology.
Acevedo, Orlando
2014-12-18
The use of ionic liquids as a reaction medium for chemical reactions has dramatically increased in recent years due in large part to the numerous reported advances in catalysis and organic synthesis. In some extreme cases, ionic liquids have been shown to induce mechanistic changes relative to conventional solvents. Despite the large interest in the solvents, a clear understanding of the molecular factors behind their chemical impact is largely unknown. This feature article reviews our efforts developing and applying mixed quantum and molecular mechanical (QM/MM) methodology to elucidate the microscopic details of how these solvents operate to enhance rates and alter mechanisms for industrially and academically important reactions, e.g., Diels-Alder, Kemp eliminations, nucleophilic aromatic substitutions, and β-eliminations. Explicit solvent representation provided the medium dependence of the activation barriers and atomic-level characterization of the solute-solvent interactions responsible for the experimentally observed "ionic liquid effects". Technical advances are also discussed, including a linear-scaling pairwise electrostatic interaction alternative to Ewald sums, an efficient polynomial fitting method for modeling proton transfers, and the development of a custom ionic liquid OPLS-AA force field.
Novel Neuroimaging Methods to Understand How HIV Affects the Brain
Thompson, Paul
2015-01-01
In much of the developed world, the HIV epidemic has largely been controlled by anti-retroviral treatment. Even so, there is growing concern that HIV-infected individuals may be at risk for accelerated brain aging, and a range of cognitive impairments. What promotes or resists these changes is largely unknown. There is also interest in discovering factors that promote resilience to HIV, and combat its adverse effects in children. Here we review recent developments in brain imaging that reveal how the virus affects the brain. We relate these brain changes to changes in blood markers, cognitive function, and other patient outcomes or symptoms, such as apathy or neuropathic pain. We focus on new and emerging techniques, including new variants of brain MRI. Diffusion tensor imaging, for example, can map the brain’s structural connections while fMRI can uncover functional connections. Finally, we suggest how large-scale global research alliances, such as ENIGMA, may resolve controversies over effects where evidence is now lacking. These efforts pool scans from tens of thousands of individuals, and offer a source of power not previously imaginable for brain imaging studies. PMID:25902966
Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.
Sun, Kangkang; Sui, Shuai; Tong, Shaocheng
2018-04-01
This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form and with unknown nonlinear functions. Fuzzy logic systems are introduced to learn the unknown dynamics and cost functions, respectively, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. By using the backstepping decentralized feedforward control scheme, the considered interconnected large-scale nonlinear system in strict feedback form is transformed into an equivalent affine large-scale nonlinear system. Subsequently, an optimal decentralized fuzzy adaptive control scheme is constructed. The whole optimal decentralized fuzzy adaptive controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller ensures that all the variables of the control system are uniformly ultimately bounded and that the cost functions are minimized. Two simulation examples are provided to illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.
Atmospheric turbulence profiling with unknown power spectral density
NASA Astrophysics Data System (ADS)
Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny
2018-04-01
Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows information about the atmosphere to be retrieved from telescope data is so-called SLODAR, where the atmospheric turbulence profile is estimated based on correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the joint estimation problem of the turbulence profile above ground simultaneously with the unknown power spectral density at the ground is ill-posed, and we propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Also, our methods can accurately locate local perturbations in non-Kolmogorov power spectral densities.
Recchia, Gabriel L; Louwerse, Max M
2016-11-01
Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically. Copyright © 2015 Cognitive Science Society, Inc.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and unknown samples; then the Euclidean distance between the net analyte signal of each unknown sample and those of the calibration samples was calculated and utilized as a similarity index. According to the defined similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
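A hedged sketch of the local-regression idea, with plain Euclidean distance standing in for the paper's net-analyte-signal distance and scikit-learn's PLSRegression as the local model; the synthetic "spectra" are illustrative.

```python
# For each unknown sample, select the most similar calibration spectra and fit
# a small PLS model on just those; distance metric simplified to Euclidean.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def local_pls_predict(X_cal, y_cal, x_new, k=20, n_components=3):
    d = np.linalg.norm(X_cal - x_new, axis=1)
    local = np.argsort(d)[:k]                         # local calibration set
    model = PLSRegression(n_components=n_components)
    model.fit(X_cal[local], y_cal[local])
    return float(model.predict(x_new[None, :])[0, 0])

# Toy usage: synthetic "spectra" with a smooth nonlinear property.
rng = np.random.default_rng(3)
X_cal = rng.normal(size=(200, 50))
y_cal = np.sin(X_cal[:, 0]) + 0.1 * rng.normal(size=200)
print(local_pls_predict(X_cal, y_cal, X_cal[0]))
```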
On the apparent insignificance of the randomness of flexible joints on large space truss dynamics
NASA Technical Reports Server (NTRS)
Koch, R. M.; Klosner, J. M.
1993-01-01
Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.
Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...
2016-06-09
Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires many fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation carries little information on the K distribution, the data were compressed using the zero-th temporal moment of the breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
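The data-compression step can be illustrated directly: reduce each breakthrough curve to a mean travel time. The abstract equates its zero-th-moment compression with the mean travel time; this sketch computes the standard normalized first moment on a synthetic curve.

```python
# Mean travel time of a breakthrough curve via the normalized first temporal
# moment; the curve is synthetic and the quadrature is simple trapezoidal.
import numpy as np

def mean_travel_time(t, c):
    """Normalized first moment of a breakthrough curve c(t)."""
    return np.trapz(t * c, t) / np.trapz(c, t)

# Toy usage: a tracer pulse arriving around t = 40.
t = np.linspace(0, 200, 2001)
c = np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2)
print(mean_travel_time(t, c))  # approximately 40
```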
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then, the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
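A toy sketch of the multilevel Monte Carlo estimator this sampling method feeds: E[Q_L] is approximated as E[Q_0] plus the sum over levels of E[Q_l - Q_(l-1)], with many cheap coarse samples and few expensive fine ones. The "model" is a synthetic function, not a PDE solve.

```python
# Minimal MLMC estimator sketch; Q is a synthetic stand-in for a PDE output
# whose discretization bias shrinks as the level increases.
import numpy as np

def Q(level, xi):
    """Stand-in for a PDE output at mesh level `level` (finer = more accurate)."""
    bias = 2.0 ** -(level + 1)          # discretization error shrinks with level
    return np.sin(xi) + bias * np.cos(3 * xi)

rng = np.random.default_rng(4)
samples_per_level = [100000, 10000, 1000]   # many coarse, few fine samples
estimate = 0.0
for level, n in enumerate(samples_per_level):
    xi = rng.normal(size=n)
    if level == 0:
        estimate += Q(0, xi).mean()
    else:
        estimate += (Q(level, xi) - Q(level - 1, xi)).mean()  # level correction
print(estimate)  # close to E[sin(xi)] = 0, up to residual fine-level bias
```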
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
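A minimal sketch of the kriging prediction underlying the lookup scheme, assuming a squared-exponential covariance with unit prior variance; the paper's covariance settings and the Redis/locality-hashing machinery are omitted:

import numpy as np

def kriging_predict(X, y, x_star, length=0.5, noise=1e-8):
    """Simple (GP-style) kriging with zero mean and SE covariance."""
    def k(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    k_star = k(X, x_star[None, :])[:, 0]
    w = np.linalg.solve(K, k_star)            # kriging weights
    mean = w @ y
    var = 1.0 - k_star @ w                    # prior variance is 1 here
    return mean, var

X = np.random.rand(16, 3)                     # stand-in for neighboring input states
y = np.sin(X.sum(axis=1))                     # stored flux responses
mu, var = kriging_predict(X, y, np.array([0.4, 0.2, 0.7]))
print(mu, var)  # fall back to a full MD evaluation when var exceeds a tolerance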
Hesford, Andrew J.; Waag, Robert C.
2010-01-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
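A minimal 2-D sketch of the FFT idea: a Green's-function convolution over a regular grid of sources evaluated with zero-padded FFTs instead of a direct O(N^2) sum. The kernel, grid, and regularization of the self term are illustrative assumptions (the paper works in three dimensions inside the FMM):

import numpy as np

n, h, k0 = 64, 0.01, 2 * np.pi * 5            # grid size, spacing, wavenumber
src = np.zeros((n, n), complex)
src[20, 30] = 1.0                              # a point contrast source

# Sampled Green's function on a doubled (zero-padded) grid to avoid wrap-around.
idx = np.arange(2 * n)
idx = np.minimum(idx, 2 * n - idx)             # circulant distances
r = h * np.hypot(idx[:, None], idx[None, :])
r[0, 0] = h / 2                                # regularize the self term
g = np.exp(1j * k0 * r) / (4 * np.pi * r)

field = np.fft.ifft2(np.fft.fft2(g) * np.fft.fft2(src, (2 * n, 2 * n)))[:n, :n]
print(field.shape)                             # scattered field on the original grid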
Non-Contact Temperature Measurement (NCTM) requirements for drop and bubble physics
NASA Technical Reports Server (NTRS)
Hmelo, Anthony B.; Wang, Taylor G.
1989-01-01
Many of the materials research experiments to be conducted in the Space Processing program require a non-contaminating method of manipulating and controlling weightless molten materials. In these experiments, the melt is positioned and formed within a container without physically contacting the container's wall. An acoustic method, which was developed by Professor Taylor G. Wang before coming to Vanderbilt University from the Jet Propulsion Laboratory, has demonstrated the capability of positioning and manipulating room temperature samples. This was accomplished in an earth-based laboratory with a zero-gravity environment of short duration. However, many important facets of high temperature containerless processing technology have not been established yet, nor can they be established from the room temperature studies, because the details of the interaction between an acoustic field and a molten sample are largely unknown. Drop dynamics, bubble dynamics, coalescence behavior of drops and bubbles, electromagnetic and acoustic levitation methods applied to molten metals, and thermal streaming are among the topics discussed.
Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai
2017-03-01
This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control, and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL performs both policy evaluation and policy improvement within the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. A gradient descent algorithm updates the critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of the Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.
Neuro-adaptive backstepping control of SISO non-affine systems with unknown gain sign.
Ramezani, Zahra; Arefi, Mohammad Mehdi; Zargarzadeh, Hassan; Jahed-Motlagh, Mohammad Reza
2016-11-01
This paper presents two neuro-adaptive controllers for a class of uncertain single-input, single-output (SISO) nonlinear non-affine systems with unknown gain sign. The first is a state-feedback approach, in which a neuro-adaptive state-feedback controller is constructed based on the backstepping technique. The second is an observer-based controller in which K-filters are designed to estimate the system states. The proposed method relaxes the requirement for a priori knowledge of the control gain sign by utilizing Nussbaum-type functions. In these methods, neural networks are employed to approximate the unknown nonlinear functions. The proposed adaptive control schemes guarantee that all the closed-loop signals are semi-globally uniformly ultimately bounded (SGUUB). Finally, the theoretical results are numerically verified through simulation examples. Simulation results show the effectiveness of the proposed methods.
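A minimal sketch of the Nussbaum-gain mechanism for an unknown control gain sign, on a toy scalar plant; N(k) = k^2 cos(k) and the adaptation law are standard textbook choices, not the paper's controller:

import numpy as np

b = -2.0                    # unknown control gain; its sign is unknown to the controller
x, k, dt = 1.0, 0.0, 1e-3
for _ in range(20000):
    N = k**2 * np.cos(k)    # Nussbaum-type function
    u = N * x               # control using the Nussbaum gain
    x += dt * (x + b * u)   # unstable plant x' = x + b*u
    k += dt * x * x         # adaptation law k' = x^2
print(abs(x))               # |x| settles near zero despite the unknown sign of b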
Protein Structure Determination using Metagenome sequence data
Ovchinnikov, Sergey; Park, Hahnbeom; Varghese, Neha; Huang, Po-Ssu; Pavlopoulos, Georgios A.; Kim, David E.; Kamisetty, Hetunandan; Kyrpides, Nikos C.; Baker, David
2017-01-01
Despite decades of work by structural biologists, there are still ~5200 protein families with unknown structure outside the range of comparative modeling. We show that Rosetta structure prediction guided by residue-residue contacts inferred from evolutionary information can accurately model proteins that belong to large families, and that metagenome sequence data more than triples the number of protein families with sufficient sequences for accurate modeling. We then integrate metagenome data, contact-based structure matching, and Rosetta structure calculations to generate models for 614 protein families with currently unknown structures; 206 are membrane proteins and 137 have folds not represented in the PDB. This approach provides representative models for large protein families, originally envisioned as the goal of the protein structure initiative, at a fraction of the cost. PMID:28104891
Comparison of statistical tests for association between rare variants and binary traits.
Bacanu, Silviu-Alin; Nelson, Matthew R; Whittaker, John C
2012-01-01
Genome-wide association studies have found thousands of common genetic variants associated with a wide variety of diseases and other complex traits. However, a large portion of the predicted genetic contribution to many traits remains unknown. One plausible explanation is that some of the missing variation is due to the effects of rare variants. Nonetheless, the statistical analysis of rare variants is challenging. A commonly used method is to contrast, within the same region (gene), the frequency of minor alleles at rare variants between cases and controls. However, this strategy is most useful under the assumption that the tested variants have similar effects. We previously proposed a method that can accommodate heterogeneous effects in the analysis of quantitative traits. Here we extend this method to binary traits and accommodate covariates. We use simulations for a variety of causal and covariate impact scenarios to compare the performance of the proposed method to standard logistic regression, C-alpha, SKAT, and EREC. We found that (i) logistic regression methods perform well when the heterogeneity of the effects is not extreme, and (ii) SKAT and EREC perform well under all tested scenarios but can be computationally intensive. Consequently, it is computationally preferable to use a two-step strategy: first select promising genes by faster methods, then analyze the selected genes using SKAT/EREC. To select promising genes one can use (1) regression methods when effect heterogeneity is assumed to be low and the covariates explain a non-negligible part of the trait variability, (2) C-alpha when heterogeneity is assumed to be large and covariates explain a small fraction of the trait variability, and (3) the proposed trend and heterogeneity test when the heterogeneity is assumed to be non-trivial and the covariates explain a large fraction of the trait variability.
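A minimal sketch of the gene-level burden idea discussed above: collapse rare-variant minor-allele counts into one score per subject and fit it together with covariates. The data are simulated stand-ins, and sklearn is used for brevity; a real analysis would use a formal likelihood-ratio or score test rather than inspecting a coefficient:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, m = 2000, 25                                # subjects, rare variants in one gene
geno = rng.binomial(2, 0.005, size=(n, m))     # rare-variant genotype counts
covar = rng.normal(size=(n, 2))                # e.g. age and an ancestry PC
burden = geno.sum(axis=1, keepdims=True)       # collapsed burden score per subject
logit = 0.5 * burden[:, 0] + covar @ np.array([0.3, -0.2]) - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated case/control status

X = np.hstack([burden, covar])
model = LogisticRegression().fit(X, y)
print(model.coef_[0][0])                       # estimated burden effect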
NASA Astrophysics Data System (ADS)
Capozzi, Francesco; Lisi, Eligio; Marrone, Antonio
2016-04-01
Within the standard 3ν oscillation framework, we illustrate the status of currently unknown oscillation parameters: the θ23 octant, the mass hierarchy (normal or inverted), and the possible CP-violating phase δ, as derived by a (preliminary) global analysis of oscillation data available in 2015. We then discuss some challenges that will be faced by future, high-statistics analyses of spectral data, starting with one-dimensional energy spectra in reactor experiments, and concluding with two-dimensional energy-angle spectra in large-volume atmospheric experiments. It is shown that systematic uncertainties in the spectral shapes can noticeably affect the prospective sensitivities to unknown oscillation parameters, in particular to the mass hierarchy.
Hydration and Cooling Practices Among Farmworkers in Oregon and Washington
Bethel, Jeffrey W.; Spector, June T.; Krenz, Jennifer
2018-01-01
Objectives Although recommendations for preventing occupational heat-related illness among farmworkers include hydration and cooling practices, the extent to which these recommendations are universally practiced is unknown. The objective of this analysis was to compare hydration and cooling practices between farmworkers in Oregon and Washington. Methods A survey was administered to a purposive sample of Oregon and Washington farmworkers. Data collected included demographics, work history and current work practices, hydration practices, access and use of cooling measures, and headwear and clothing worn. Results Oregon farmworkers were more likely than those in Washington to consume beverages containing sugar and/or caffeine. Workers in Oregon more frequently reported using various cooling measures compared with workers in Washington. Availability of cooling measures also varied between the two states. Conclusions These results highlight the large variability between workers in two states regarding access to and use of methods to stay cool while working in the heat. PMID:28402203
Conformational transition of membrane-associated terminally-acylated HIV-1 Nef
Akgun, Bulent; Satija, Sushil; Nanda, Hirsh; Pirrone, Gregory F.; Shi, Xiaomeng; Engen, John R.; Kent, Michael S.
2013-01-01
Many proteins are post-translationally modified by acylation targetting them to lipid membranes. While methods such as X-ray crystallography and NMR are available to determine the structure of folded proteins in solution, the precise position of folded domains relative to a membrane remains largely unknown. We used neutron and X-ray reflection methods to measure the displacement of the core domain of HIV Nef from lipid membranes upon insertion of the N-terminal myristate group. Nef is one of several HIV-1 accessory proteins and an essential factor in AIDS progression. Upon insertion of the myristate and residues from the N-terminal arm, Nef transitions from a closed to open conformation that positions the core domain 70 Å from the lipid headgroups. This work rules out speculation that the Nef core remains closely associated with the membrane to optimize interactions with the cytoplasmic domain of MHC-1. PMID:24035710
Computational Prediction and Functional Analysis of Prokaryotic Propionylation.
Wang, Li-Na; Shi, Shao-Ping; Wen, Ping-Ping; Zhou, Zhi-You; Qiu, Jian-Ding
2017-11-27
Identification and systematic analysis of candidates for protein propionylation are crucial steps for understanding its molecular mechanisms and biological functions. Although several proteome-scale methods have been performed to delineate potential propionylated proteins, the majority of lysine-propionylated substrates and their role in pathological physiology still remain largely unknown. By gathering various databases and literature, experimental prokaryotic propionylation data were collated and trained in a support vector machine with various features via a three-step feature selection method. A novel online tool for seeking potential lysine-propionylated sites (PropSeek, http://bioinfo.ncu.edu.cn/PropSeek.aspx) was built. Independent test results of leave-one-out and n-fold cross-validation were similar to each other, showing that PropSeek is a stable and robust predictor with satisfying performance. Meanwhile, analyses of Gene Ontology, Kyoto Encyclopedia of Genes and Genomes pathways, and protein-protein interactions implied a potential role of prokaryotic propionylation in protein synthesis and metabolism.
Gene and translation initiation site prediction in metagenomic sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyatt, Philip Douglas; LoCascio, Philip F; Hauser, Loren John
2012-01-01
Gene prediction in metagenomic sequences remains a difficult problem. Current sequencing technologies do not achieve sufficient coverage to assemble the individual genomes in a typical sample; consequently, sequencing runs produce a large number of short sequences whose exact origin is unknown. Since these sequences are usually smaller than the average length of a gene, algorithms must make predictions based on very little data. We present MetaProdigal, a metagenomic version of the gene prediction program Prodigal that can identify genes in short, anonymous coding sequences with a high degree of accuracy. The novel value of the method consists of enhanced translation initiation site identification, the ability to identify sequences that use alternate genetic codes, and confidence values for each gene call. We compare the results of MetaProdigal with other methods and conclude with a discussion of future improvements.
Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis
2015-01-01
This paper presents a new approach to self-calibrating BCI control of reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, by using a robust likelihood function and an ad-hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration hint that both the quality of recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890
Trophic groups and modules: two levels of group detection in food webs
Gauzens, Benoit; Thébault, Elisa; Lacroix, Gérard; Legendre, Stéphane
2015-01-01
Within food webs, species can be partitioned into groups according to various criteria. Two notions have received particular attention: trophic groups (TGs), which have been used for decades in the ecological literature, and more recently, modules. The relationship between these two group concepts remains unknown in empirical food webs. While recent developments in network theory have led to efficient methods for detecting modules in food webs, the determination of TGs (groups of species that are functionally similar) is largely based on subjective expert knowledge. We develop a novel algorithm for TG detection. We apply this method to empirical food webs and show that aggregation into TGs allows for the simplification of food webs while preserving their information content. Furthermore, we reveal a two-level hierarchical structure where modules partition food webs into large bottom–top trophic pathways, whereas TGs further partition these pathways into groups of species with similar trophic connections. This provides new perspectives for the study of dynamical and functional consequences of food-web structure, bridging topological and dynamical analysis. TGs have a clear ecological meaning and are found to provide a trade-off between network complexity and information loss. PMID:25878127
MoCha: Molecular Characterization of Unknown Pathways.
Lobo, Daniel; Hammelman, Jennifer; Levin, Michael
2016-04-01
Automated methods for the reverse-engineering of complex regulatory networks are paving the way for the inference of mechanistic comprehensive models directly from experimental data. These novel methods can infer not only the relations and parameters of the known molecules defined in their input datasets, but also unknown components and pathways identified as necessary by the automated algorithms. Identifying the molecular nature of these unknown components is a crucial step for making testable predictions and experimentally validating the models, yet no specific and efficient tools exist to aid in this process. To this end, we present here MoCha (Molecular Characterization), a tool optimized for the search of unknown proteins and their pathways from a given set of known interacting proteins. MoCha uses the comprehensive dataset of protein-protein interactions provided by the STRING database, which currently includes more than a billion interactions from over 2,000 organisms. MoCha is highly optimized, performing typical searches within seconds. We demonstrate the use of MoCha with the characterization of unknown components from reverse-engineered models from the literature. MoCha is useful for working on network models by hand or as a downstream step of a model inference engine workflow and represents a valuable and efficient tool for the characterization of unknown pathways using known data from thousands of organisms. MoCha and its source code are freely available online under the GPLv3 license.
NASA Astrophysics Data System (ADS)
Wang, Xingjian; Shi, Cun; Wang, Shaoping
2017-07-01
Hybrid actuation systems with dissimilar redundant actuators, composed of a hydraulic actuator (HA) and an electro-hydrostatic actuator (EHA), have been applied on modern civil aircraft to improve reliability. However, a force fighting problem arises due to the different dynamic performances of the HA and EHA. This paper proposes an extended state observer (ESO)-based motion synchronisation control method. To cope with the unavailability of state signals, a well-designed ESO is utilised to observe the unmeasured HA and EHA state variables. In particular, the extended state of the ESO can estimate the lumped effect of the unknown external disturbances acting on the control surface, the nonlinear dynamics, uncertainties, and the coupling term between HA and EHA. Based on the observed states of the ESO, motion synchronisation controllers are presented to make the HA and EHA simultaneously track the desired motion trajectories generated by a trajectory generator. Additionally, the unknown disturbances and the coupling terms can be compensated for by using the extended state of the proposed ESO. Finally, comparative simulation results indicate that the proposed ESO-based motion synchronisation controller achieves a large reduction in force fighting between the HA and EHA.
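A minimal sketch of a linear extended state observer on a toy second-order plant: the third observer state tracks the lumped disturbance, which is the role the extended state plays above. The bandwidth-parameterized gains and the sinusoidal disturbance are illustrative assumptions:

import numpy as np

dt, w0 = 1e-3, 30.0                       # step size, observer bandwidth
l1, l2, l3 = 3 * w0, 3 * w0**2, w0**3     # bandwidth-parameterized observer gains
x = np.array([0.0, 0.0])                  # true plant state [pos, vel]
z = np.zeros(3)                           # observer state [pos, vel, disturbance]
b0, u = 1.0, 0.5
for i in range(5000):
    d = np.sin(2 * np.pi * 0.5 * i * dt)  # unknown lumped disturbance
    x += dt * np.array([x[1], b0 * u + d])
    e = x[0] - z[0]                       # measured-output innovation
    z += dt * np.array([z[1] + l1 * e,
                        z[2] + b0 * u + l2 * e,
                        l3 * e])
print(z[2], d)                            # extended state z3 tracks the disturbance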
Coaching the exploration and exploitation in active learning for interactive video retrieval.
Wei, Xiao-Yong; Yang, Zhen-Qun
2013-03-01
Conventional active learning approaches for interactive video/image retrieval usually assume the query distribution is unknown, as it is difficult to estimate with only a limited number of labeled instances available. This easily puts the system in a dilemma: whether to explore the feature space in uncertain areas for a better understanding of the query distribution, or to harvest in certain areas for more relevant instances. In this paper, we propose a novel approach called coached active learning that makes the query distribution predictable through training and, therefore, avoids the risk of searching a completely unknown space. The estimated distribution, which provides a more global view of the feature space, can be used to schedule not only the timing but also the step sizes of the exploration and the exploitation in a principled way. The results of experiments on a large-scale data set from TRECVID 2005-2009 validate the efficiency and effectiveness of our approach, which demonstrates encouraging performance when facing domain shift, outperforms eight conventional active learning methods, and shows superiority to six state-of-the-art interactive video retrieval systems.
Rotander, Anna; Kärrman, Anna; Toms, Leisa-Maree L; Kay, Margaret; Mueller, Jochen F; Gómez Ramos, María José
2015-02-17
Fluorinated surfactant-based aqueous film-forming foams (AFFFs) are made up of per- and polyfluorinated alkyl substances (PFAS) and are used to extinguish fires involving highly flammable liquids. The use of perfluorooctanesulfonic acid (PFOS) and other perfluoroalkyl acids (PFAAs) in some AFFF formulations has been linked to substantial environmental contamination. Recent studies have identified a large number of novel and infrequently reported fluorinated surfactants in different AFFF formulations. In this study, a strategy based on a case-control approach using quadrupole time-of-flight tandem mass spectrometry (QTOF-MS/MS) and advanced statistical methods has been used to extract and identify known and unknown PFAS in human serum associated with AFFF-exposed firefighters. Two target sulfonic acids [PFOS and perfluorohexanesulfonic acid (PFHxS)], three non-target acids [perfluoropentanesulfonic acid (PFPeS), perfluoroheptanesulfonic acid (PFHpS), and perfluorononanesulfonic acid (PFNS)], and four unknown sulfonic acids (Cl-PFOS, ketone-PFOS, ether-PFHxS, and Cl-PFHxS) were exclusively or significantly more frequently detected at higher levels in firefighters compared to controls. The application of this strategy has allowed for identification of previously unreported fluorinated chemicals in a timely and cost-efficient way.
Causal mapping of emotion networks in the human brain: Framework and initial findings.
Dubois, Julien; Oya, Hiroyuki; Tyszka, J Michael; Howard, Matthew; Eberhardt, Frederick; Adolphs, Ralph
2017-11-13
Emotions involve many cortical and subcortical regions, prominently including the amygdala. It remains unknown how these multiple network components interact, and it remains unknown how they cause the behavioral, autonomic, and experiential effects of emotions. Here we describe a framework for combining a novel technique, concurrent electrical stimulation with fMRI (es-fMRI), together with a novel analysis, inferring causal structure from fMRI data (causal discovery). We outline a research program for investigating human emotion with these new tools, and provide initial findings from two large resting-state datasets as well as case studies in neurosurgical patients with electrical stimulation of the amygdala. The overarching goal is to use causal discovery methods on fMRI data to infer causal graphical models of how brain regions interact, and then to further constrain these models with direct stimulation of specific brain regions and concurrent fMRI. We conclude by discussing limitations and future extensions. The approach could yield anatomical hypotheses about brain connectivity, motivate rational strategies for treating mood disorders with deep brain stimulation, and could be extended to animal studies that use combined optogenetic fMRI.
Protein function prediction using neighbor relativity in protein-protein interaction network.
Moosavi, Sobhan; Rahgozar, Masoud; Rahimi, Amir
2013-04-01
There is a large gap between the number of discovered proteins and the number of functionally annotated ones. Due to the high cost of determining protein function by wet-lab research, function prediction has become a major task for computational biology and bioinformatics. Some researchers utilize protein interaction information to predict function for un-annotated proteins. In this paper, we propose a novel approach called "Neighbor Relativity Coefficient" (NRC) based on interaction network topology, which estimates the functional similarity between two proteins. NRC is calculated for each pair of proteins based on their graph-based features, including distance, common neighbors, and the number of paths between them. To ascribe function to an un-annotated protein, NRC estimates a weight for each neighbor to transfer its annotation to the unknown protein. Finally, the unknown protein is annotated with the top-scoring transferred functions. We also investigate the effect of using different coefficients for various types of functions. The proposed method has been evaluated on Saccharomyces cerevisiae and Homo sapiens interaction networks. The performance analysis demonstrates that NRC yields better results in comparison with previous protein function prediction approaches that utilize interaction networks.
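A minimal sketch of neighbor-weighted annotation transfer on an interaction graph, loosely in the spirit of NRC; the Jaccard-over-distance weighting and the toy graph are assumptions for illustration, not the paper's coefficient:

import networkx as nx

G = nx.karate_club_graph()                      # stand-in for a PPI network
annotations = {0: {"kinase"}, 2: {"kinase"}, 33: {"transport"}}
target = 1                                      # un-annotated protein

scores = {}
for v, funcs in annotations.items():
    shared = len(set(G[target]) & set(G[v]))    # common neighbors
    union = len(set(G[target]) | set(G[v]))
    dist = nx.shortest_path_length(G, target, v)
    w = (shared / union) / dist                 # similarity weight for this neighbor
    for f in funcs:
        scores[f] = scores.get(f, 0.0) + w
print(max(scores, key=scores.get))              # top-scoring transferred function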
Porous extraction paddle: a solid phase extraction technique for studying the urine metabolome
Shao, Gang; MacNeil, Michael; Yao, Yuanyuan; Giese, Roger W.
2016-01-01
RATIONALE A method was needed to accomplish solid phase extraction of a large urine volume in a convenient way where resources are limited, towards a goal of metabolome and xenobiotic exposome analysis at another, distant location. METHODS A porous extraction paddle (PEP) was set up, comprising a porous nylon bag containing extraction particles that is flattened and immobilized between two stainless steel meshes. Stirring the PEP after attachment to a shaft of a motor mounted on the lid of the jar containing the urine accomplishes extraction. The bag contained a mixture of nonpolar and partly nonpolar particles to extract a diversity of corresponding compounds. RESULTS Elution of a urine-exposed, water-washed PEP with aqueous methanol containing triethylammonium acetate (conditions intended to give a complete elution), followed by MALDI-TOF/TOF-MS, demonstrated that a diversity of compounds had been extracted ranging from uric acid to peptides. CONCLUSION The PEP allows the user to extract a large liquid sample in a jar simply by turning on a motor. The technique will be helpful in conducting metabolomics and xenobiotic exposome studies of urine, encouraging the extraction of large volumes to set up a convenient repository sample (e.g. 2 g of exposed adsorbent in a cryovial) for shipment and re-analysis in various ways in the future, including scaled-up isolation of unknown chemicals for identification. PMID:27624170
NASA Astrophysics Data System (ADS)
Yang, Xinxin; Ge, Shuzhi Sam; He, Wei
2018-04-01
In this paper, both the closed-form dynamics and adaptive robust tracking control of a space robot with two-link flexible manipulators under unknown disturbances are developed. The dynamic model of the system is derived using the assumed-modes approach and the Lagrangian method. The flexible manipulators are represented as Euler-Bernoulli beams. Based on the singular perturbation technique, the displacements/joint angles and flexible modes are modelled as slow and fast variables, respectively. A sliding mode control is designed for trajectory tracking of the slow subsystem under unknown but bounded disturbances, and an adaptive sliding mode control is derived for the slow subsystem under unknown, slowly time-varying disturbances. An optimal linear quadratic regulator method is proposed for the fast subsystem to damp out the vibrations of the flexible manipulators. Theoretical analysis validates the stability of the proposed composite controller. Numerical simulation results demonstrate the performance of the closed-loop flexible space robot system.
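A minimal sketch of the sliding-mode idea for the slow subsystem: drive the tracking error onto a surface s = c*e + e_dot and hold it there despite a bounded unknown disturbance. The double-integrator plant, gains, and smoothed switching term are toy choices, not the paper's model:

import numpy as np

dt, c, k = 1e-3, 5.0, 3.0
q, qd = 0.0, 0.0                                    # joint angle and rate
for i in range(10000):
    t = i * dt
    r, rd, rdd = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
    e, ed = q - r, qd - rd
    s = c * e + ed                                  # sliding surface
    d = 0.5 * np.sin(3 * t)                         # unknown bounded disturbance
    u = rdd - c * ed - k * np.tanh(s / 0.01)        # smoothed switching law, k > |d|
    qdd = u + d                                     # double-integrator joint model
    q, qd = q + dt * qd, qd + dt * qdd
print(abs(q - np.sin(10.0)))                        # small tracking error at t = 10 s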
Davies, Benjamin; Kotter, Mark
2018-02-05
Degenerative Cervical Myelopathy (DCM) is a syndrome of subacute cervical spinal cord compression due to spinal degeneration. Although DCM is thought to be common, many fundamental questions, such as the natural history and epidemiology of DCM, remain unknown. In order to answer these, access to a large cohort of patients with DCM is required. With its unrivalled and efficient reach, the Internet has become an attractive tool for medical research and may overcome these limitations in DCM. The most effective recruitment strategy, however, is unknown. The objective was to compare the efficacy of fee-based advertisement with alternative free recruitment strategies for a DCM Internet health survey. An Internet health survey (SurveyMonkey) accessed via a new DCM Internet platform (myelopathy.org) was created. Using multiple survey collectors and the website's Google Analytics, the efficacy of fee-based recruitment strategies (Google AdWords) and free alternatives (including Facebook, Twitter, and myelopathy.org) was compared. Overall, 760 surveys (513 [68%] fully completed) were accessed, 305 (40%) from fee-based strategies and 455 (60%) from free alternatives. Accounting for researcher time, fee-based strategies were more expensive ($7.80 per response compared to $3.80 per response for free alternatives) and identified a less motivated audience (click-through rate of 5% compared to 57% using free alternatives) but were more time efficient for the researcher (2 minutes per response compared to 16 minutes per response for free methods). Facebook was the most effective free strategy, providing 239 (31%) responses, where a single message to 4 existing communities yielded 133 (18%) responses within 7 days. The Internet can efficiently reach large numbers of patients. Free and fee-based recruitment strategies both have merits. Facebook communities are a rich resource for Internet researchers.
McDougall, Carmel; Woodcroft, Ben J.
2016-01-01
In nature, numerous mechanisms have evolved by which organisms fabricate biological structures with an impressive array of physical characteristics. Some examples of metazoan biological materials include the highly elastic byssal threads by which bivalves attach themselves to rocks, biomineralized structures that form the skeletons of various animals, and spider silks that are renowned for their exceptional strength and elasticity. The remarkable properties of silks, which are perhaps the best studied biological materials, are the result of the highly repetitive, modular, and biased amino acid composition of the proteins that compose them. Interestingly, similar levels of modularity/repetitiveness and similar bias in amino acid composition have been reported in proteins that are components of structural materials in other organisms; however, the exact nature and extent of this similarity, and its functional and evolutionary relevance, are unknown. Here, we investigate this similarity and use sequence features common to silks and other known structural proteins to develop a bioinformatics-based method to identify similar proteins from large-scale transcriptome and whole-genome datasets. We show that a large number of proteins identified using this method have roles in biological material formation throughout the animal kingdom. Despite the similarity in sequence characteristics, most of the silk-like structural proteins (SLSPs) identified in this study appear to have evolved independently and are restricted to a particular animal lineage. Although the exact function of many of these SLSPs is unknown, the apparent independent evolution of proteins with similar sequence characteristics in divergent lineages suggests that these features are important for the assembly of biological materials. The identification of these characteristics enables the generation of testable hypotheses regarding the mechanisms by which these proteins assemble and direct the construction of biological materials with diverse morphologies. The SilkSlider predictor software developed here is available at https://github.com/wwood/SilkSlider. PMID:27415783
ERIC Educational Resources Information Center
Buchenroth-Martin, Cynthia; DiMartino, Trevor; Martin, Andrew P.
2017-01-01
Collaborative learning in small groups is commonly implemented as a part of student-centered curricula. In large-enrollment courses, details of the interactions among students as a consequence of working in collaborative groups are often unknown but are important because how students interact influences the effectiveness of peer learning. We…
USDA-ARS?s Scientific Manuscript database
Ticks serve as biological vectors for a wide variety of bacterial pathogens which must be able to efficiently colonize specific tick tissues prior to transmission. The bacterial determinants of tick colonization are largely unknown, a knowledge gap attributed in large part to the paucity of tools t...
ERIC Educational Resources Information Center
Garavan, Thomas N.; Carbery, Ronan; O'Malley, Grace; O'Donnell, David
2010-01-01
Much remains unknown in the increasingly important field of e-learning in organizations. Drawing on a large-scale survey of employees (N = 557) who had opportunities to participate in voluntary e-learning activities, the factors influencing participation in e-learning are explored in this empirical paper. It is hypothesized that key variables…
Wallau, Gabriel Luz; Capy, Pierre; Loreto, Elgion; Le Rouzic, Arnaud; Hua-Van, Aurélie
2016-04-01
Transposable elements (TEs) are genomic repeated sequences that display complex evolutionary patterns. They are usually inherited vertically, but can occasionally be transmitted between sexually independent species, through so-called horizontal transposon transfers (HTTs). Recurrent HTTs are thought to be essential in the life cycle of TEs, which are otherwise destined for eventual decay. HTTs also impact host genome evolution. However, the extent of HTTs in eukaryotes is largely unknown, due to the lack of efficient, statistically supported methods that can be applied to multiple-species sequence data sets. Here, we developed a new automated method, available as an R package "vhica", that discriminates whether a given TE family was vertically or horizontally transferred, and potentially infers donor and receptor species. The method is well suited for TE sequences extracted from complete genomes, and applicable to multiple TEs and species at the same time. We first validated our method using Drosophila TE families with well-known evolutionary histories, displaying both HTTs and vertical transmission. We then tested 26 different lineages of mariner elements recently characterized in 20 Drosophila genomes, and found HTTs in 24 of them. Furthermore, several independent HTT events could often be detected within the same mariner lineage. The VHICA (Vertical and Horizontal Inheritance Consistence Analysis) method thus appears to be a valuable tool for analyzing the evolutionary history of TEs across a large range of species.
Matrix completion by deep matrix factorization.
Fan, Jicong; Cheng, Jieyu
2018-02-01
Conventional methods of matrix completion are linear methods that are not effective in handling data with nonlinear structure. Recently, a few researchers have attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations remain. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods, which are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting, and collaborative filtering. The experimental results verify that DMF provides higher matrix completion accuracy than existing methods and is applicable to large matrices.
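A minimal sketch of the DMF idea, assuming a small PyTorch decoder: low-dimensional latent inputs and network weights are optimized jointly against only the observed entries, and missing entries are then read off the network output. Sizes, architecture, and optimizer settings are illustrative, not the paper's configuration:

import torch

m, n, d = 100, 80, 5
truth = torch.randn(m, d) @ torch.randn(d, n)        # low-rank ground truth
mask = torch.rand(m, n) < 0.3                        # 30% of entries observed

Z = torch.randn(m, d, requires_grad=True)            # latent variables (network inputs)
net = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, n))    # nonlinear decoder network
opt = torch.optim.Adam([Z] + list(net.parameters()), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(Z) - truth)[mask] ** 2).mean()      # fit observed entries only
    loss.backward()
    opt.step()
rmse = ((net(Z) - truth)[~mask] ** 2).mean().sqrt()  # error on the missing entries
print(float(rmse))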
Pesavento, Maria; Alberti, Giancarla; Biesuz, Raffaela
2009-01-12
Different experimental approaches have been suggested in the last few decades to determine metal species in complex matrices of unknown composition, such as environmental waters. The methods are mainly focused on the determination of single species or groups of species. The more recent developments in trace element speciation are reviewed, focusing on methods for labile and free metal determination. Electrochemical procedures with low detection limits, such as anodic stripping voltammetry (ASV) and competing ligand exchange with adsorptive cathodic stripping voltammetry (CLE-AdCSV), have been widely employed in metal distribution studies in natural waters. Other electrochemical methods, such as stripping chronopotentiometry and AGNES, seem promising for evaluating the free metal concentration at the low levels of environmental samples. Separation techniques based on ion exchange (IE) and complexing resins (CR), and microseparation methods such as the Donnan membrane technique (DMT), diffusive gradients in thin-film gels (DGT), and the permeation liquid membrane (PLM), are among the non-electrochemical methods largely used in this field and reviewed in the text. Under appropriate conditions such techniques make possible the evaluation of the free metal ion concentration.
Photographic copy of photograph, photographer unknown, August 1912 (original print ...
Photographic copy of photograph, photographer unknown, August 1912 (original print located at U.S. Bureau of Reclamation Upper Columbia Area Office, Yakima, Washington). "A VIEW OF METHOD OF DAM CONSTRUCTION" - Kachess Dam, Kachess River, 1.5 miles north of Interstate 90, Easton, Kittitas County, WA
Mudalige, Thilak K; Qu, Haiou; Linder, Sean W
2015-11-13
Engineered nanoparticles are available in large numbers of commercial products claiming various health benefits. Nanoparticle absorption, distribution, metabolism, excretion, and toxicity in a biological system depend on particle size, so the determination of size and size distribution is essential for full characterization. Number-based average size and size distribution are major parameters for full characterization of a nanoparticle sample. In the case of polydispersed samples, large numbers of particles are needed to obtain accurate size distribution data. Herein, we report a rapid methodology, demonstrating improved nanoparticle recovery and excellent size resolution, for the characterization of gold nanoparticles in dietary supplements using asymmetric flow field-flow fractionation coupled with visible absorption spectrometry and inductively coupled plasma mass spectrometry. A linear relationship between gold nanoparticle size and retention time was observed and used for the characterization of unknown samples. The particle size results from unknown samples were compared to results from traditional size analysis by transmission electron microscopy and found to have less than a 5% deviation in size over the range from 7 to 30 nm.
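A minimal sketch of the calibration step: fit the reported linear size-retention relationship on standards, then size an unknown from its retention time. The numbers below are illustrative, not the paper's data:

import numpy as np

size_nm = np.array([7.0, 10.0, 15.0, 20.0, 30.0])         # standards of known size
retention_min = np.array([4.1, 5.0, 6.4, 7.9, 10.8])      # measured retention times
slope, intercept = np.polyfit(retention_min, size_nm, 1)  # linear calibration

unknown_rt = 6.0
print(slope * unknown_rt + intercept)                     # estimated diameter (nm)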
M-MRAC Backstepping for Systems with Unknown Virtual Control Coefficients
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2015-01-01
The paper presents an over-parametrization-free, certainty-equivalence, state-feedback backstepping adaptive control design method for systems of any relative degree with unmatched uncertainties and unknown virtual control coefficients. It uses a fast prediction model, independent of the control design, to estimate the unknown parameters. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters. The benefits of the approach are demonstrated in numerical simulations.
Unknown sequence amplification: Application to in vitro genome walking in Chlamydia trachomatis L2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Copley, C.G.; Boot, C.; Bundell, K.
1991-01-01
A recently described technique, Chemical Genetics' unknown sequence amplification method, which requires only one specific oligonucleotide, has broadened the applicability of the polymerase chain reaction to DNA of unknown sequence. The authors have adapted this technique to the study of the genome of Chlamydia trachomatis, an obligate intracellular bacterium, and describe modifications that significantly improve the utility of this approach. These techniques allow for rapid genomic analysis entirely in vitro, using DNA of limited quantity or purity.
Hegde, Shivanand; Hegde, Shrilakshmi; Zimmermann, Martina; Flöck, Martina; Spergser, Joachim; Rosengarten, Renate
2015-01-01
Mycoplasmas possess complex pathogenicity determinants that are largely unknown at the molecular level. Mycoplasma agalactiae serves as a useful model to study the molecular basis of mycoplasma pathogenicity. The generation and in vivo screening of a transposon mutant library of M. agalactiae were employed to unravel its host colonization factors. Tn4001mod mutants were sequenced using a novel sequencing method, and functionally heterogeneous pools containing 15 to 19 selected mutants were screened simultaneously through two successive cycles of sheep intramammary infections. A PCR-based negative selection method was employed to identify mutants that failed to colonize the udders and draining lymph nodes in the animals. A total of 14 different mutants found to be absent from ≥95% of samples were identified and subsequently verified via a second round of stringent confirmatory screening where 100% absence was considered attenuation. Using this criterion, seven mutants with insertions in genes MAG1050, MAG2540, MAG3390, uhpT, eutD, adhT, and MAG4460 were not recovered from any of the infected animals. Among the attenuated mutants, many contain disruptions in hypothetical genes, implying their previously unknown role in M. agalactiae pathogenicity. These data indicate the putative role of functionally different genes, including hypothetical ones, in the pathogenesis of M. agalactiae. Defining the precise functions of the identified genes is anticipated to increase our understanding of M. agalactiae infections and to develop successful intervention strategies against it. PMID:25916984
NASA Astrophysics Data System (ADS)
Holtorf, Hauke; Guitton, Marie-Christine; Reski, Ralf
2002-04-01
Functional genome analysis of plants has entered the high-throughput stage. The complete genome information from key species such as Arabidopsis thaliana and rice is now available and will further boost the application of a range of new technologies to functional plant gene analysis. To broadly assign functions to unknown genes, different fast and multiparallel approaches are currently used and developed. These new technologies are based on known methods but are adapted and improved to accommodate comprehensive, large-scale gene analysis, i.e., such techniques are novel in the sense that their design allows researchers to analyse many genes at the same time and at an unprecedented pace. Such methods allow analysis of the different constituents of the cell that help to deduce gene function, namely the transcripts, proteins and metabolites. Similarly, the phenotypic variations of entire mutant collections can now be analysed in a much faster and more efficient way than before. The different methodologies have developed into their own fields within the functional genomics technological platform and are termed transcriptomics, proteomics, metabolomics and phenomics. Gene function, however, cannot be inferred solely by using one such approach. Rather, it is only by bringing together all the information collected by different functional genomic tools that one will be able to unequivocally assign functions to unknown plant genes. This review focuses on current technical developments and their impact on the field of plant functional genomics. The lower plant Physcomitrella is introduced as a new model system for gene function analysis, owing to its high rate of homologous recombination.
Identification of signalling cascades involved in red blood cell shrinkage and vesiculation.
Kostova, Elena B; Beuger, Boukje M; Klei, Thomas R L; Halonen, Pasi; Lieftink, Cor; Beijersbergen, Roderick; van den Berg, Timo K; van Bruggen, Robin
2015-04-16
Even though red blood cell (RBC) vesiculation is a well-documented phenomenon, notably in the context of RBC aging and blood transfusion, the exact signalling pathways and kinases involved in this process remain largely unknown. We have established a screening method for RBC vesicle shedding using the Ca(2+) ionophore ionomycin, a rapid and efficient means of promoting vesiculation. In order to identify novel pathways stimulating vesiculation in RBC, we screened two libraries: the Library of Pharmacologically Active Compounds (LOPAC) and the Selleckchem Kinase Inhibitor Library for their effects on RBC from healthy donors. We investigated compounds triggering vesiculation and compounds inhibiting vesiculation induced by ionomycin. We identified 12 LOPAC compounds, nine kinase inhibitors and one kinase activator which induced RBC shrinkage and vesiculation. Thus, we discovered several novel pathways involved in vesiculation including G protein-coupled receptor (GPCR) signalling, the phosphoinositide 3-kinase (PI3K)-Akt (protein kinase B) pathway, the Jak-STAT (Janus kinase-signal transducer and activator of transcription) pathway and the Raf-MEK (mitogen-activated protein kinase kinase)-ERK (extracellular signal-regulated kinase) pathway. Moreover, we demonstrated a link between casein kinase 2 (CK2) and RBC shrinkage via regulation of the Gardos channel activity. In addition, our data showed that inhibition of several kinases with unknown functions in mature RBC, including Alk (anaplastic lymphoma kinase) kinase and vascular endothelial growth factor receptor 2 (VEGFR-2), induced RBC shrinkage and vesiculation.
Diversity of Marine-Derived Fungal Cultures Exposed by DNA Barcodes: The Algorithm Matters
Andreakis, Nikos; Høj, Lone; Kearns, Philip; Hall, Michael R.; Ericson, Gavin; Cobb, Rose E.; Gordon, Benjamin R.; Evans-Illidge, Elizabeth
2015-01-01
Marine fungi are an understudied group of eukaryotic microorganisms characterized by unresolved genealogies and unstable classification. Whereas DNA barcoding via the nuclear ribosomal internal transcribed spacer (ITS) provides a robust and rapid tool for fungal species delineation, accurate classification of fungi is often arduous given the large number of partial or unknown barcodes and misidentified isolates deposited in public databases. This situation is perpetuated by a paucity of cultivable fungal strains available for phylogenetic research linked to these data sets. We analyze ITS barcodes produced from a subsample (290) of 1781 cultured isolates of marine-derived fungi in the Bioresources Library located at the Australian Institute of Marine Science (AIMS). Our analysis revealed high levels of under-explored fungal diversity. The majority of isolates were ascomycetes, including representatives of the subclasses Eurotiomycetidae, Hypocreomycetidae, Sordariomycetidae, Pleosporomycetidae, Dothideomycetidae, Xylariomycetidae and Saccharomycetidae. The phylum Basidiomycota was represented by isolates affiliated with the genera Tritirachium and Tilletiopsis. BLAST searches revealed 26 unknown OTUs and 50 isolates corresponding to previously uncultured, unidentified fungal clones. This study makes a significant addition to the availability of barcoded, culturable marine-derived fungi for detailed future genomic and physiological studies. We also demonstrate the influence of commonly used alignment algorithms and genetic distance measures on the accuracy and comparability of estimating Operational Taxonomic Units (OTUs) by the Automatic Barcode Gap Discovery (ABGD) method. Large-scale biodiversity screening programs that combine datasets using algorithmic OTU delineation pipelines need to ensure compatible algorithms have been used, because the algorithm matters. PMID:26308620
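A minimal sketch of barcode-gap clustering in the spirit of ABGD: take the largest gap in the sorted pairwise-distance distribution as a threshold and cut a single-linkage tree there to obtain OTUs. The simulated point clouds stand in for real ITS sequence distances:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.05, size=(10, 4)) for c in (0.0, 1.0, 2.0)])
d = pdist(pts)                                   # condensed pairwise distances

ds = np.sort(d)
gap = int(np.argmax(np.diff(ds)))                # largest "barcode gap"
threshold = (ds[gap] + ds[gap + 1]) / 2.0
otus = fcluster(linkage(d, method="single"), threshold, criterion="distance")
print(len(set(otus)))                            # expect 3 OTUs for 3 clusters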
Fan, T W; Lane, A N; Pedler, J; Crowley, D; Higashi, R M
1997-08-15
Root exudates in the rhizosphere are vital to the normal life cycle of plants. A key factor is phytometallophores, which function in the nutritional acquisition of iron and zinc and are likely to be important in the uptake of pollutant metals by plants. Unraveling the biochemistry of these compounds is tedious using traditional analyses, which also fall short in providing the overall chemical composition or in detecting unknown or unexpected organic ligands in the exudates. Here, we demonstrate a comprehensive analysis of the exudate composition directly by 1H and 13C multidimensional NMR and silylation GC-MS. The advantages are (a) minimal sample preparation, with no loss of unknown compounds, and reduced net analysis time; (b) structure-based analysis for universal detection and identification; and (c) simultaneous analysis of a large number of constituents in a complex mixture. Using barley root exudates, a large number of common organic and amino acids were identified. Three derivatives of mugineic acid phytosiderophores were also determined, the major one being 3-epihydroxymugineic acid, for which complete 1H and 13C NMR assignments were obtained. Quantification of all major components using these methods revealed a sevenfold increase in total exudation under moderate iron deficiency, with 3-epihydroxymugineic acid comprising approximately 22% of the exudate mixture. As iron deficiency increased, total quantities of exudate per gram of root remained unchanged, but the relative quantity of carbon allocated to phytosiderophore increased to approximately 50% of the total exudate in response to severe iron deficiency.
NASA Technical Reports Server (NTRS)
Shue, Jack
2004-01-01
The end-to-end test would verify the complex sequence of events from lander separation to landing. Due to the large distances involved and the significant delay time in sending a command and receiving verification, the lander needed to operate autonomously after it separated from the orbiter. It had to sense conditions, make decisions, and act accordingly. We were flying into a relatively unknown set of conditions: a Martian atmosphere of unknown pressure, density, and consistency, and a landing surface of unknown altitude and bearing strength. In order to touch down safely on Mars, the lander had to orient itself for descent and entry, modulate itself to maintain proper lift, pop a parachute, jettison its aeroshell, deploy landing legs and radar, ignite a terminal descent engine, and fly a given trajectory to the surface. Once on the surface, it would determine its orientation, raise the high-gain antenna, perform a sweep to locate Earth, and begin transmitting information. It was this complicated, autonomous sequence that the end-to-end test was to simulate.
Visual Target Tracking in the Presence of Unknown Observer Motion
NASA Technical Reports Server (NTRS)
Williams, Stephen; Lu, Thomas
2009-01-01
Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
Housworth, E A; Martins, E P
2001-01-01
Statistical randomization tests in evolutionary biology often require a set of random, computer-generated trees. For example, earlier studies have shown how large numbers of computer-generated trees can be used to conduct phylogenetic comparative analyses even when the phylogeny is uncertain or unknown. These methods were limited, however, in that (in the absence of molecular sequence or other data) they allowed users to assume that no phylogenetic information was available or that all possible trees were known. Intermediate situations where only a taxonomy or other limited phylogenetic information (e.g., polytomies) are available are technically more difficult. The current study describes a procedure for generating random samples of phylogenies while incorporating limited phylogenetic information (e.g., four taxa belong together in a subclade). The procedure can be used to conduct comparative analyses when the phylogeny is only partially resolved or can be used in other randomization tests in which large numbers of possible phylogenies are needed.
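As an illustration of the idea described above, here is a small, hypothetical sketch (not the authors' procedure) of generating random rooted topologies while forcing a user-specified subset of taxa to form a clade: the constrained subtree is built first and then treated as a single tip during random joining. Taxon names are invented.

```python
# Hedged sketch: random trees under a partial phylogenetic constraint.
import random

def random_join(tips):
    """Randomly join tips pairwise until one rooted (nested-tuple) tree remains."""
    nodes = list(tips)
    while len(nodes) > 1:
        a, b = random.sample(range(len(nodes)), 2)
        merged = (nodes[a], nodes[b])
        nodes = [n for i, n in enumerate(nodes) if i not in (a, b)]
        nodes.append(merged)
    return nodes[0]

def random_tree_with_clade(taxa, clade):
    """Random topology on `taxa` in which the members of `clade` stay together."""
    subtree = random_join(list(clade))     # resolve the constrained clade internally
    rest = [t for t in taxa if t not in clade]
    return random_join(rest + [subtree])   # the subtree then acts as one tip

random.seed(1)
taxa = ["t1", "t2", "t3", "t4", "t5", "t6"]
print(random_tree_with_clade(taxa, {"t1", "t2", "t3", "t4"}))
```

Repeated calls yield the random sample of constrained phylogenies that such randomization tests require.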
Formal Models of the Network Co-occurrence Underlying Mental Operations.
Bzdok, Danilo; Varoquaux, Gaël; Grisel, Olivier; Eickenberg, Michael; Poupon, Cyril; Thirion, Bertrand
2016-06-01
Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition. PMID:27310288
Prioritizing Environmental Risk of Prescription Pharmaceuticals
Dong, Zhao; Senn, David B.; Moran, Rebecca E.
2015-01-01
Low levels of pharmaceutical compounds have been detected in aquatic environments worldwide, but the human and ecological health risks associated with low-dose environmental exposure are largely unknown due to the large number of these compounds and a lack of information. Therefore, prioritization and ranking methods are needed for screening target compounds for research and risk assessment. Previous efforts to rank pharmaceutical compounds have often focused on occurrence data and have paid less attention to removal mechanisms such as human metabolism. This study proposes a simple prioritization approach that is based on the number of prescriptions and toxicity information, accounts for metabolism and wastewater treatment removal, and can be applied to unmeasured compounds. The approach was applied to the 200 most-prescribed drugs in the U.S. in 2009. Our results showed that under-studied compounds such as levothyroxine and montelukast sodium received the highest scores, suggesting the importance of removal mechanisms in influencing the ranking, and the need for future environmental research to include other less-studied but potentially harmful pharmaceutical compounds. PMID:22813724
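The arithmetic of such a ranking can be sketched as below. This is a hedged, hypothetical scoring function in the spirit of the abstract, not the study's actual formula; all field names and example values are invented.

```python
# Hedged sketch: a priority score built from prescription volume, attenuated
# by human metabolism and wastewater-treatment removal, weighted by toxicity.
def environmental_priority(prescriptions, frac_excreted, frac_passing_wwtp,
                           toxicity_weight):
    """Higher score = higher predicted environmental loading times hazard."""
    predicted_load = prescriptions * frac_excreted * frac_passing_wwtp
    return predicted_load * toxicity_weight

drugs = {
    "drug_A": dict(prescriptions=70e6, frac_excreted=0.9,
                   frac_passing_wwtp=0.5, toxicity_weight=2.0),
    "drug_B": dict(prescriptions=20e6, frac_excreted=0.1,
                   frac_passing_wwtp=0.2, toxicity_weight=1.0),
}
ranked = sorted(drugs, key=lambda d: environmental_priority(**drugs[d]),
                reverse=True)
print(ranked)   # drug_A outranks drug_B despite similar prescription counts
```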
A cytogenetics study of Hydrodroma despiciens (Müller, 1776) (Acari: Hydrachnellae: Hydrodromidae).
Onrat, Serap Tutgun; Aşçi, Ferruh; Ozkan, Muhlis
2006-06-30
The karyotypes of water mites (Acari: Hydrachnellae: Hydrodromidae) are largely unknown. The present investigation is the first report of a study designed to characterize the chromosomes of water mites. The study was carried out with specimens of Hydrodroma despiciens collected from Eber Lake in Afyon, Turkey. Several different methods were tried to obtain chromosomes of this species. However, somatic cell culture proved to be the most effective for the preparation of chromosomes. In the present study, we determined the diploid chromosome number of Hydrodroma despiciens to be 2n = 16. However, a large metacentric chromosome was found in each metaphase, which we believed to be the X chromosome. We could not determine the sex chromosomes of this species. This study is the first approach to the cytogenetic characterization of this water mite group. Furthermore, these cytogenetic data will contribute to the understanding of the phylogenetic relationship among water mites. To our knowledge, this is the first report on the cytogenetics of water mites.
Photographic copy of photograph, photographer unknown, August 1912 (original print ...
Photographic copy of photograph, photographer unknown, August 1912 (original print located at U.S. Bureau of Reclamation Upper Columbia Area Office, Yakima, Washington). "METHOD OF CONSTRUCTING DAM AFTER REMOVING OF TRESTLE" - Kachess Dam, Kachess River, 1.5 miles north of Interstate 90, Easton, Kittitas County, WA
Gene-culture coevolution in the age of genomics
Richerson, Peter J.; Boyd, Robert; Henrich, Joseph
2010-01-01
The use of socially learned information (culture) is central to human adaptations. We investigate the hypothesis that the process of cultural evolution has played an active, leading role in the evolution of genes. Culture normally evolves more rapidly than genes, creating novel environments that expose genes to new selective pressures. Many human genes that have been shown to be under recent or current selection are changing as a result of new environments created by cultural innovations. Some changed in response to the development of agricultural subsistence systems in the Early and Middle Holocene. Alleles coding for adaptations to diets rich in plant starch (e.g., amylase copy number) and to epidemic diseases evolved as human populations expanded (e.g., sickle cell and G6PD deficiency alleles that provide protection against malaria). Large-scale scans using patterns of linkage disequilibrium to detect recent selection suggest that many more genes evolved in response to agriculture. Genetic change in response to the novel social environment of contemporary modern societies is also likely to be occurring. The functional effects of most of the alleles under selection during the last 10,000 years are currently unknown. Also unknown is the role of paleoenvironmental change in regulating the tempo of hominin evolution. Although the full extent of culture-driven gene-culture coevolution is thus far unknown for the deeper history of the human lineage, theory and some evidence suggest that such effects were profound. Genomic methods promise to have a major impact on our understanding of gene-culture coevolution over the span of hominin evolutionary history. PMID:20445092
Reconstruction of phonon relaxation times from systems featuring interfaces with unknown properties
NASA Astrophysics Data System (ADS)
Forghani, Mojtaba; Hadjiconstantinou, Nicolas G.
2018-05-01
We present a method for reconstructing the phonon relaxation-time function τ_ω = τ(ω) (including polarization) and associated phonon free-path distribution from thermal spectroscopy data for systems featuring interfaces with unknown properties. Our method does not rely on the effective thermal-conductivity approximation or a particular physical model of the interface behavior. The reconstruction is formulated as an optimization problem in which the relaxation times are determined as functions of frequency by minimizing the discrepancy between the experimentally measured temperature profiles and solutions of the Boltzmann transport equation for the same system. Interface properties such as transmissivities are included as unknowns in the optimization; however, because for the thermal spectroscopy problems considered here the reconstruction is not very sensitive to the interface properties, the transmissivities are only approximately reconstructed and can be considered as byproducts of the calculation whose primary objective is the accurate determination of the relaxation times. The proposed method is validated using synthetic experimental data obtained from Monte Carlo solutions of the Boltzmann transport equation. The method is shown to remain robust in the presence of uncertainty (noise) in the measurement.
High-precision numerical integration of equations in dynamics
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
An important requirement for the process of solving differential equations in Dynamics, such as the equations of the motion of celestial bodies and, in particular, the motion of cosmic robotic systems, is high accuracy over large time intervals. One of the effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
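The advantage of polynomial right-hand sides can be shown with a minimal sketch, assuming the simplest polynomial ODE x' = x^2 with x(0) = 1: Taylor coefficients then follow from a Cauchy-product recursion, with no symbolic differentiation. The exact solution 1/(1 - t) has all coefficients equal to 1, which makes the recursion easy to verify.

```python
# Minimal sketch: Taylor coefficients for x' = x^2, x(0) = 1, via the
# Cauchy-product recursion a_{n+1} = (1/(n+1)) * sum_{k=0..n} a_k a_{n-k}.
def taylor_coeffs(x0, order):
    a = [x0]
    for n in range(order):
        cauchy = sum(a[k] * a[n - k] for k in range(n + 1))  # coeff of t^n in x^2
        a.append(cauchy / (n + 1))                           # integrate termwise
    return a

def evaluate(a, t):
    return sum(c * t**k for k, c in enumerate(a))            # naive summation

a = taylor_coeffs(1.0, 20)
print(a[:5])                              # [1.0, 1.0, 1.0, 1.0, 1.0]
print(evaluate(a, 0.1), 1 / (1 - 0.1))    # series value vs exact solution
```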
Mining high-throughput experimental data to link gene and function
Blaby-Haas, Crysten E.; de Crécy-Lagard, Valérie
2011-01-01
Nearly 2200 genomes encoding some 6 million proteins have now been sequenced. Around 40% of these proteins are of unknown function even when function is loosely and minimally defined as “belonging to a superfamily”. In addition to in silico methods, the swelling stream of high-throughput experimental data can give valuable clues for linking these “unknowns” with precise biological roles. The goal is to develop integrative data-mining platforms that allow the scientific community at large to access and utilize this rich source of experimental knowledge. To this end, we review recent advances in generating whole-genome experimental datasets, where this data can be accessed, and how it can be used to drive prediction of gene function. PMID:21310501
Azadmanesh, Jahaun; Trickel, Scott R.; Weiss, Kevin L.; ...
2017-03-29
Superoxide dismutases (SODs) are enzymes that protect against oxidative stress by dismutation of superoxide into oxygen and hydrogen peroxide through cyclic reduction and oxidation of the active-site metal. The complete enzymatic mechanisms of SODs are unknown since data on the positions of hydrogen are limited. Here, we present methods for large crystal growth and neutron data collection of human manganese SOD (MnSOD) using perdeuteration and the MaNDi beamline at Oak Ridge National Laboratory. The crystal from which the human MnSOD data set was obtained is the crystal with the largest unit-cell edge (240 Å) from which data have been collected via neutron diffraction to sufficient resolution (2.30 Å) where hydrogen positions can be observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pereira, Ana I.; ALGORITMI, University of Minho; Lima, José
There are several approaches to humanoid robot gait planning. This problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria, such as energy minimization, acceleration, and step length, among others. The energy consumption can also be reduced with elastic elements coupled to each joint. The present paper addresses an optimization method, Stretched Simulated Annealing, that runs in an accurate and stable simulation model to find the optimal gait combined with elastic elements. Final results demonstrate that optimization is a valid gait planning technique.
Imaging Fluorescent Combustion Species in Gas Turbine Flame Tubes: On Complexities in Real Systems
NASA Technical Reports Server (NTRS)
Hicks, Y. R.; Locke, R. J.; Anderson, R. C.; Zaller, M.; Schock, H. J.
1997-01-01
Planar laser-induced fluorescence (PLIF) is used to visualize the flame structure via OH, NO, and fuel imaging in kerosene-burning gas turbine combustor flame tubes. When compared to simple gaseous hydrocarbon flames and hydrogen flames, flame tube testing complexities include spectral interferences from large fuel fragments, unknown turbulence interactions, high pressure operation, and the concomitant need for windows and remote operation. Complications of these and other factors as they apply to image analysis are considered. Because both OH and gas turbine engine fuels (commercial and military) can be excited and detected using OH transition lines, a narrowband and a broadband detection scheme are compared and the benefits and drawbacks of each method are examined.
Technique for Solving Electrically Small to Large Structures for Broadband Applications
NASA Technical Reports Server (NTRS)
Jandhyala, Vikram; Chowdhury, Indranil
2011-01-01
Fast iterative algorithms are often used for solving Method of Moments (MoM) systems, having a large number of unknowns, to determine current distribution and other parameters. The most commonly used fast methods include the fast multipole method (FMM), the precorrected fast Fourier transform (PFFT), and low-rank QR compression methods. These methods reduce the O(N^2) memory and time requirements to O(N log N) by compressing the dense MoM system so as to exploit the physics of Green's function interactions. FFT-based techniques for solving such problems are efficient for space-filling and uniform structures, but their performance substantially degrades for non-uniformly distributed structures due to the inherent need to employ a uniform global grid. FMM or QR techniques are better suited than FFT techniques; however, neither the FMM nor the QR technique can be used at all frequencies. This method has been developed to efficiently solve for a desired parameter of a system or device that can include both electrically large FMM elements and electrically small QR elements. The system or device is set up as an oct-tree structure that can include regions of both the FMM type and the QR type. The system is enclosed with a cube at a 0th level, splitting the cube at the 0th level into eight child cubes. This forms cubes at a 1st level, recursively repeating the splitting process for cubes at successive levels until a desired number of levels is created. For each cube that is thus formed, neighbor lists and interaction lists are maintained. An iterative solver is then used to determine a first matrix vector product for any electrically large elements as well as a second matrix vector product for any electrically small elements that are included in the structure. These matrix vector products for the electrically large and small elements are combined, and a net delta for the combination of the matrix vector products is determined. The iteration continues until a net delta is obtained that is within the predefined limits. The matrix vector products that were last obtained are used to solve for the desired parameter. The solution for the desired parameter is then presented to a user in a tangible form; for example, on a display.
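The oct-tree bookkeeping described above can be sketched briefly. This is a hedged illustration of recursive cube splitting only; the stopping rule (leaf size) is a placeholder assumption, and neither the FMM/QR tagging criterion nor the neighbor/interaction lists of the method are reproduced.

```python
# Hedged sketch: recursively split a bounding cube into eight children and
# record which elements each cube contains, down to a fixed depth.
import numpy as np

class Cube:
    def __init__(self, center, half, level):
        self.center = np.asarray(center, float)
        self.half, self.level = half, level      # half = half edge length
        self.children, self.elements = [], []

def build_octree(points, cube, max_level, leaf_size=8):
    cube.elements = points
    if cube.level == max_level or len(points) <= leaf_size:
        return cube                               # leaf cube
    for dx in (-1, 1):
        for dy in (-1, 1):
            for dz in (-1, 1):
                c = cube.center + 0.5 * cube.half * np.array([dx, dy, dz])
                child = Cube(c, 0.5 * cube.half, cube.level + 1)
                inside = [p for p in points
                          if np.all(p >= c - 0.5 * cube.half)
                          and np.all(p < c + 0.5 * cube.half)]  # half-open boxes
                if inside:
                    cube.children.append(
                        build_octree(inside, child, max_level, leaf_size))
    return cube

rng = np.random.default_rng(0)
pts = list(rng.uniform(-1, 1, size=(200, 3)))
root = build_octree(pts, Cube([0, 0, 0], 1.0, 0), max_level=3)
print(len(root.children), "child cubes at level 1")
```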
The use of biochemical methods in extraterrestrial life detection
NASA Astrophysics Data System (ADS)
McDonald, Gene
2006-08-01
Instrument development for in situ extraterrestrial life detection focuses primarily on the ability to distinguish between biological and non-biological material, mostly through chemical analysis for potential biosignatures (e.g., biogenic minerals, enantiomeric excesses). In contrast, biochemical analysis techniques commonly applied to Earth life focus primarily on the exploration of cellular and molecular processes, not on the classification of a given system as biological or non-biological. This focus has developed because of the relatively large functional gap between life and non-life on Earth today. Life on Earth is very diverse from an environmental and physiological point of view, but is highly conserved from a molecular point of view. Biochemical analysis techniques take advantage of this similarity of all terrestrial life at the molecular level, particularly through the use of biologically-derived reagents (e.g., DNA polymerases, antibodies), to enable analytical methods with enormous sensitivity and selectivity. These capabilities encourage consideration of such reagents and methods for use in extraterrestrial life detection instruments. The utility of this approach depends in large part on the (unknown at this time) degree of molecular compositional differences between extraterrestrial and terrestrial life. The greater these differences, the less useful laboratory biochemical techniques will be without significant modification. Biochemistry and molecular biology methods may need to be "de-focused" in order to produce instruments capable of unambiguously detecting a sufficiently wide range of extraterrestrial biochemical systems. Modern biotechnology tools may make that possible in some cases.
Chapter 11 - Post-hurricane fuel dynamics and implications for fire behavior (Project SO-EM-F-12-01)
Shanyue Guan; G. Geoff. Wang
2018-01-01
Hurricanes have long been a powerful and recurring disturbance in many coastal forest ecosystems. Intense hurricanes often produce a large amount of dead fuels in their affected forests. How the post-hurricane fuel complex changes with time, due to decomposition and management such as salvage, and its implications for fire behavior remain largely unknown....
ERIC Educational Resources Information Center
Dempsey, Ian
2014-01-01
The extent to which school students continue to receive special education services over time is largely unknown because longitudinal studies are rare in this area. The present study examined a large Australian longitudinal database to track the status of children who received special education support in 2006 and whether they continued to access…
Compatible-strain mixed finite element methods for incompressible nonlinear elasticity
NASA Astrophysics Data System (ADS)
Faghih Shojaei, Mostafa; Yavari, Arash
2018-05-01
We introduce a new family of mixed finite elements for incompressible nonlinear elasticity - compatible-strain mixed finite element methods (CSFEMs). Based on a Hu-Washizu-type functional, we write a four-field mixed formulation with the displacement, the displacement gradient, the first Piola-Kirchhoff stress, and a pressure-like field as the four independent unknowns. Using the Hilbert complexes of nonlinear elasticity, which describe the kinematics and the kinetics of motion, we identify the solution spaces of the independent unknown fields. In particular, we define the displacement in H^1, the displacement gradient in H(curl), the stress in H(div), and the pressure field in L^2. The test spaces of the mixed formulations are chosen to be the same as the corresponding solution spaces. Next, in a conforming setting, we approximate the solution and the test spaces with some piecewise polynomial subspaces of them. Among these approximation spaces are the tensorial analogues of the Nédélec and Raviart-Thomas finite element spaces of vector fields. This approach results in compatible-strain mixed finite element methods that satisfy both the Hadamard compatibility condition and the continuity of traction at the discrete level independently of the refinement level of the mesh. By considering several numerical examples, we demonstrate that CSFEMs have a good performance for bending problems and for bodies with complex geometries. CSFEMs are capable of capturing very large strains and accurately approximating stress and pressure fields. Using CSFEMs, we do not observe any numerical artifacts, e.g., checkerboarding of pressure, hourglass instability, or locking in our numerical examples. Moreover, CSFEMs provide an efficient framework for modeling heterogeneous solids.
Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...
2016-08-10
Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
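For context, the O(N^2) baseline that the FMM replaces looks like the following. This is a hedged, brute-force reference implementation of a Coulomb-type pairwise sum, not the paper's kernel-independent FMM; the point sets and charges are synthetic.

```python
# Hedged sketch: direct O(N^2) evaluation of phi(x_j) = sum_i q_i / |x_j - x_i|,
# the cost that motivates fast-multipole acceleration in the abstract above.
import numpy as np

def direct_potential(points, charges):
    """O(N^2) pairwise sum, skipping the singular self term."""
    n = len(points)
    phi = np.zeros(n)
    for j in range(n):
        r = np.linalg.norm(points - points[j], axis=1)
        mask = r > 0                       # exclude self-interaction
        phi[j] = np.sum(charges[mask] / r[mask])
    return phi

rng = np.random.default_rng(42)
pts = rng.uniform(size=(500, 3))
q = rng.standard_normal(500)
print(direct_potential(pts, q)[:3])        # doubling N quadruples the work
```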
Update: Cytokine Dysregulation in Chronic Nonbacterial Osteomyelitis (CNO)
Hofmann, Sigrun R.; Roesen-Wolff, Angela; Hahn, Gabriele; Hedrich, Christian M.
2012-01-01
Chronic nonbacterial osteomyelitis (CNO) with its most severe form chronic recurrent multifocal osteomyelitis (CRMO) is a non-bacterial osteitis of yet unknown origin. Secondary to the absence of both high-titer autoantibodies and autoreactive T lymphocytes, and the association with other autoimmune diseases, it was recently reclassified as an autoinflammatory disorder of the musculoskeletal system. Since its etiology is largely unknown, the diagnosis is based on clinical criteria, and treatment is empiric and not always successful. In this paper, we summarize recent advances in the understanding of possible etiopathogenetic mechanisms in CNO. PMID:22685464
A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.
Hu, Di; Sarosh, Ali; Dong, Yun-Feng
2012-03-01
Reaction wheels are one of the most critical components of the satellite attitude control system; therefore, correct diagnosis of their faults is essential for efficient operation of these spacecraft. The known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this method fails to work correctly when a new or unknown fault occurs. In such cases, an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one of the unsupervised algorithms, although it has its own limitations; however, in this paper a novel method is proposed for conditioning of the KFCM method (C-KFCM) so that it can be effectively used for fault diagnosis of both known and unknown faults, as in satellite reaction wheels. The C-KFCM approach involves determination of exact class centers from the data of known faults; in this way, a discrete number of fault classes is determined at the start. Similarity parameters are derived and determined for each of the fault data points. Thereafter, depending on the similarity threshold, each data point is assigned a class label. The high-similarity points fall into one of the 'known-fault' classes while the low-similarity points are labeled as 'unknown-faults'. Simulation results show that, as compared to supervised algorithms such as neural networks, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose the faults to an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
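The thresholding step described above can be sketched compactly. This is a hedged illustration of the known/unknown labeling idea, not the C-KFCM algorithm itself; the class centers, the kernel width sigma, and the 0.6 threshold are invented for demonstration.

```python
# Hedged sketch: Gaussian-kernel similarity to known-fault class centers;
# samples whose best similarity falls below a threshold are labeled unknown.
import numpy as np

def kernel_similarity(x, center, sigma=1.0):
    return np.exp(-np.sum((x - center) ** 2) / (2 * sigma ** 2))

def classify(x, centers, threshold=0.6):
    sims = {label: kernel_similarity(x, c) for label, c in centers.items()}
    best = max(sims, key=sims.get)
    return best if sims[best] >= threshold else "unknown-fault"

centers = {"friction-fault": np.array([1.0, 0.2]),      # from known-fault data
           "bus-voltage-fault": np.array([-0.8, 1.1])}  # (hypothetical)
print(classify(np.array([0.9, 0.3]), centers))   # near a known class center
print(classify(np.array([5.0, -4.0]), centers))  # far from all -> unknown-fault
```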
NASA Astrophysics Data System (ADS)
Wang, L. M.
2017-09-01
A novel model-free adaptive sliding mode strategy is proposed for a generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to external disturbances. To address the difficulties arising from limited knowledge about the master-slave system and to overcome the adverse effects of external disturbances on the generalized projective synchronization, radial basis function neural networks are used to approximate the packaged unknown master system and the packaged unknown slave system (including the external disturbances). Consequently, based on sliding mode techniques and neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized projective synchronization error. The main contribution of this paper is that a control strategy is provided for the generalized projective synchronization between two entirely unknown fractional-order chaotic systems subject to unknown external disturbances, and the proposed control strategy only requires that the master system has the same fractional orders as the slave system. Moreover, the proposed method allows us to achieve all kinds of generalized projective chaos synchronizations by tuning the user-defined parameters to the desired values. Simulation results show the effectiveness of the proposed method and the robustness of the controlled system.
Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Poole, E. L.
1986-01-01
In this research, we are concerned with the solution on vector computers of linear systems of equations, Ax = b, where A is a large, sparse symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-colored matrix is a matrix which can be partitioned into a p×p block matrix where the diagonal blocks are diagonal matrices.) This algorithm, which is based on a no-fill strategy, achieves O(N/p) length vector operations in both the decomposition of A and in the forward and back solves necessary at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and also to compare an ICCG method to conjugate gradient without preconditioning or another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
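The payoff of a multi-color ordering can be seen in the simplest case, p = 2 (red-black), for the 1-D Laplacian: no two same-colored unknowns are coupled, so each color is updated in one long vector operation. This is a minimal sketch of that structure in NumPy, not the CYBER 205 block ICCG implementation.

```python
# Hedged sketch: red-black Gauss-Seidel sweeps for tridiag(-1, 2, -1) x = b.
# All red unknowns update together, then all black ones: two vector ops/sweep.
import numpy as np

n = 10
x = np.zeros(n)
b = np.ones(n)
red, black = np.arange(0, n, 2), np.arange(1, n, 2)

def neighbor_sum(x, idx):
    left = np.where(idx - 1 >= 0, x[np.maximum(idx - 1, 0)], 0.0)
    right = np.where(idx + 1 < n, x[np.minimum(idx + 1, n - 1)], 0.0)
    return left + right                      # zero Dirichlet values off the ends

for _ in range(200):                         # sweeps by color
    x[red] = (b[red] + neighbor_sum(x, red)) / 2.0     # one long vector update
    x[black] = (b[black] + neighbor_sum(x, black)) / 2.0
print(np.round(x, 3))                        # approaches the exact solve of Ax=b
```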
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly to observations through theoretical models, prior information on model parameters, and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets, and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
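The core trick, resolving the linear parameters analytically inside a Monte Carlo loop over the non-linear ones, can be shown on a toy problem. This is a hedged sketch under invented assumptions (an exponential model with a linear amplitude and a non-linear decay rate, known noise level), not the authors' geodetic inversion.

```python
# Hedged sketch: Metropolis sampling over the non-linear rate; the linearly
# entering amplitude is eliminated by an analytic least-squares solve per step.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 80)
y = 2.5 * np.exp(-0.7 * t) + 0.05 * rng.standard_normal(t.size)  # synthetic data

def misfit(rate):
    g = np.exp(-rate * t)              # basis vector for the linear parameter
    amp = g @ y / (g @ g)              # analytic least-squares amplitude
    r = y - amp * g
    return r @ r, amp

rate = 1.0
chi2, _ = misfit(rate)
samples = []
for _ in range(5000):                  # Metropolis over the rate only
    prop = rate + 0.05 * rng.standard_normal()
    chi2_p, _ = misfit(prop)
    if np.log(rng.uniform()) < (chi2 - chi2_p) / (2 * 0.05 ** 2):
        rate, chi2 = prop, chi2_p      # accept
    samples.append(rate)

print("posterior mean rate ~", np.mean(samples[1000:]))  # close to the true 0.7
```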
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy that combines an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction preprocessing with principal component analysis (PCA) and DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety.
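An iterative spline baseline correction in the spirit of ICSF can be sketched as follows: fit a smooth spline, clip the spectrum down to the fit wherever it exceeds the fit (i.e., at peaks), and refit. The smoothing factor, iteration count, and synthetic spectrum below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: iterative spline-fit baseline estimation for a spectrum.
import numpy as np
from scipy.interpolate import UnivariateSpline

def iterative_spline_baseline(x, y, n_iter=10, smooth=60.0):
    work = y.copy()
    for _ in range(n_iter):
        base = UnivariateSpline(x, work, s=smooth)(x)
        work = np.minimum(work, base)    # suppress peaks above current baseline
    return base

x = np.linspace(0, 100, 400)
baseline_true = 0.02 * x + 3.0
peaks = 5 * np.exp(-((x - 40) ** 2) / 4) + 3 * np.exp(-((x - 70) ** 2) / 2)
y = baseline_true + peaks + 0.05 * np.random.default_rng(0).standard_normal(x.size)

corrected = y - iterative_spline_baseline(x, y)
print(round(corrected.max(), 2), corrected[:5].round(2))  # peaks kept, baseline ~0
```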
Pang, Susan; Cowen, Simon
2017-12-13
We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. This technique is based on standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
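The numerical idea can be illustrated with a toy standard-additions calculation: assume the signal is linear in the log of the total (endogenous plus spike) concentration, and grid-search the candidate endogenous level that makes the fit most linear. The response model, spike levels, and grid are invented, and this is only one plausible reading of the abstract's numerical method.

```python
# Hedged sketch: recover an unknown endogenous concentration from spiked wells
# by maximizing the linearity (R^2) of signal vs. log10(c0 + spike).
import numpy as np

spikes = np.array([0.0, 5.0, 10.0, 20.0])         # known added concentrations
c0_true = 7.0                                     # unknown endogenous level
signal = 12.0 * np.log10(c0_true + spikes) + 1.5  # idealized assay response

def linearity(c0):
    x = np.log10(c0 + spikes)
    slope, intercept = np.polyfit(x, signal, 1)
    resid = signal - (slope * x + intercept)
    ss_tot = np.sum((signal - signal.mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / ss_tot      # R^2 of the linear fit

candidates = np.linspace(0.5, 30.0, 600)
best = candidates[np.argmax([linearity(c) for c in candidates])]
print("estimated endogenous concentration:", round(best, 2))   # ~7.0
```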
Direct Analysis in Real Time Mass Spectrometry for Characterization of Large Saccharides.
Ma, Huiying; Jiang, Qing; Dai, Diya; Li, Hongli; Bi, Wentao; Chen, David Da Yong
2018-03-06
Polysaccharide characterization poses the most difficult challenge to available analytical technologies compared to other types of biomolecules. Plant polysaccharides are reported to have numerous medicinal values, but their effect can be different based on the types of plants, and even regions of production and conditions of cultivation. However, the molecular basis of the differences of these polysaccharides is largely unknown. In this study, direct analysis in real time mass spectrometry (DART-MS) was used to generate polysaccharide fingerprints. Large saccharides can break down into characteristic small fragments in the DART source via pyrolysis, and the products are then detected by high resolution MS. Temperature was shown to be a crucial parameter for the decomposition of large polysaccharides. The general behavior of carbohydrates in DART-MS was also studied through the investigation of a number of mono- and oligosaccharide standards. The chemical formula and putative ionic forms of the fragments were proposed based on accurate mass with less than 10 ppm mass errors. Multivariate data analysis shows the clear differentiation of different plant species. Intensities of marker ions compared among samples also showed obvious differences. The combination of DART-MS analysis and mechanochemical extraction method used in this work demonstrates a simple, fast, and high throughput analytical protocol for the efficient evaluation of molecular features in plant polysaccharides.
Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.
2015-01-01
The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not and the level of variation in the estimated values of its relevant features are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single-trial were evaluated independently by two human observers and two automated algorithms taken from existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen’s κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significantly large differences between detection and estimation of quantitative features among methods. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044
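A toy version of the exact penalty path makes the "finite penalty constant" point concrete. The sketch below (not the authors' ODE-based path-following algorithm) minimizes a quadratic under one linear inequality with an absolute-value penalty and warm-starts along increasing values of rho; the problem data are invented.

```python
# Hedged sketch: exact (hinge) penalty for min (x1-2)^2 + (x2-1)^2
# subject to x1 + x2 <= 1; the path ends at the constrained optimum (1, 0)
# once rho exceeds the optimal multiplier (here 2), i.e., for finite rho.
import numpy as np
from scipy.optimize import minimize

def penalized(x, rho):
    f = (x[0] - 2) ** 2 + (x[1] - 1) ** 2
    return f + rho * max(0.0, x[0] + x[1] - 1.0)   # nonsmooth exact penalty

x = np.array([2.0, 1.0])                           # unconstrained minimizer
for rho in [0.5, 1.0, 2.0, 4.0, 8.0]:
    res = minimize(penalized, x, args=(rho,), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12})
    x = res.x                                      # warm start along the path
    print(f"rho={rho:4.1f}  x={np.round(x, 4)}  "
          f"violation={max(0.0, x.sum() - 1.0):.4f}")
```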
Ozawa, Tatsuhiko; Kondo, Masato; Isobe, Masaharu
2004-01-01
The 3' rapid amplification of cDNA ends (3' RACE) is widely used to isolate cDNA of unknown 3' flanking sequences. However, the conventional 3' RACE often fails to amplify cDNA from a large transcript if there is a long distance between the 5' gene-specific primer and the poly(A) stretch, since the conventional 3' RACE utilizes a 3' oligo-dT-containing primer complementary to the poly(A) tail of mRNA at first-strand cDNA synthesis. To overcome this problem, we have developed an improved 3' RACE method suitable for the isolation of cDNA derived from very large transcripts. By using an oligonucleotide containing a random 9-mer together with a GC-rich sequence for suppression PCR at first-strand cDNA synthesis, we have been able to amplify cDNA from a very large transcript, such as the microtubule-actin crosslinking factor 1 (MACF1) gene, which encodes a 20 kb transcript. When there is no splicing variant, our highly specific amplification allows us to perform direct sequencing of 3' RACE products without requiring cloning in bacterial hosts. Thus, this stepwise 3' RACE walking will aid rapid characterization of the 3' structure of a gene, even when it encodes a very large transcript.
NASA Astrophysics Data System (ADS)
Habibi, Hamed; Rahimi Nohooji, Hamed; Howard, Ian
2017-09-01
Power maximization has always been a practical consideration in wind turbines. The question of how to achieve optimal power capture, especially when the system dynamics are nonlinear and the actuators are subject to unknown faults, is significant. This paper studies the control methodology for variable-speed variable-pitch wind turbines, including the effects of uncertain nonlinear dynamics, system fault uncertainties, and unknown external disturbances. The nonlinear model of the wind turbine is presented, and the problem of maximizing extracted energy is formulated by designing the optimal desired states. With the known system, a model-based nonlinear controller is designed; then, to handle uncertainties, the unknown nonlinearities of the wind turbine are estimated by utilizing radial basis function neural networks. The adaptive neural fault tolerant control is designed to be passively robust to model uncertainties, disturbances including wind speed and model noise, and completely unknown actuator faults including generator torque and pitch actuator torque. The Lyapunov direct method is employed to prove that the closed-loop system is uniformly bounded. Simulation studies are performed to verify the effectiveness of the proposed method.
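The approximation tool this controller (and several of the adaptive control papers in this collection) relies on is the radial basis function network. Below is a standalone, hedged sketch of RBF function approximation with a batch least-squares fit; the target function, centers, and widths are illustrative, and the paper's online adaptive weight-update laws are not reproduced.

```python
# Hedged sketch: approximating an "unknown" nonlinearity with Gaussian RBFs,
# linear in the output weights (the structure adaptive laws typically update).
import numpy as np

def rbf_features(x, centers, width=0.5):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

x = np.linspace(-2, 2, 100)
f = np.tanh(2 * x) + 0.1 * x ** 2          # stand-in for unknown dynamics
centers = np.linspace(-2, 2, 15)

Phi = rbf_features(x, centers)
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)   # batch fit instead of adaptation
print("max approximation error:", np.abs(Phi @ w - f).max())
```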
Yang, Xiong; Liu, Derong; Wang, Ding; Wei, Qinglai
2014-07-01
In this paper, a reinforcement-learning-based direct adaptive control is developed to deliver a desired tracking performance for a class of discrete-time (DT) nonlinear systems with unknown bounded disturbances. We investigate multi-input-multi-output unknown nonaffine nonlinear DT systems and employ two neural networks (NNs). By using the Implicit Function Theorem, an action NN is used to generate the control signal, and it is also designed to cancel the nonlinearity of the unknown DT systems for the purpose of utilizing feedback linearization methods. On the other hand, a critic NN is applied to estimate the cost function, which satisfies the recursive equations derived from heuristic dynamic programming. The weights of both the action NN and the critic NN are directly updated online instead of by offline training. By utilizing Lyapunov's direct method, the closed-loop tracking errors and the NN estimated weights are demonstrated to be uniformly ultimately bounded. Two numerical examples are provided to show the effectiveness of the present approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
Li, Da-Peng; Li, Dong-Juan; Liu, Yan-Jun; Tong, Shaocheng; Chen, C L Philip
2017-10-01
This paper deals with the tracking control problem for a class of nonlinear multiple input multiple output unknown time-varying delay systems with full state constraints. To overcome the challenges caused by the simultaneous appearance of unknown time-varying delays and full state constraints in the systems, an adaptive control method is presented for such systems for the first time. Appropriate Lyapunov-Krasovskii functionals and a separation technique are employed to eliminate the effect of unknown time-varying delays. Barrier Lyapunov functions are employed to prevent violation of the full state constraints. The singular problems are dealt with by introducing the signal function. Finally, it is proven that the proposed method guarantees good tracking performance of the system output, that all states remain in the constrained interval, and that all the closed-loop signals are bounded, provided that appropriate design parameters are chosen. The practicability of the proposed control technique is demonstrated by a simulation study in this paper.
Selection of core animals in the Algorithm for Proven and Young using a simulation model.
Bradford, H L; Pocrnić, I; Fragomeni, B O; Lourenco, D A L; Misztal, I
2017-12-01
The Algorithm for Proven and Young (APY) enables the implementation of single-step genomic BLUP (ssGBLUP) in large, genotyped populations by separating genotyped animals into core and non-core subsets and creating a computationally efficient inverse for the genomic relationship matrix (G). As APY became the choice for large-scale genomic evaluations in BLUP-based methods, a common question is how to choose the animals in the core subset. We compared several core definitions to answer this question. Simulations comprised a moderately heritable trait for 95,010 animals and 50,000 genotypes for animals across five generations. Genotypes consisted of 25,500 SNP distributed across 15 chromosomes. Genotyping errors and missing pedigree were also mimicked. Core animals were defined based on individual generations, equal representation across generations, and at random. For a sufficiently large core size, core definitions had the same accuracies and biases, even if the core animals had imperfect genotypes. When genotyped animals had unknown parents, accuracy and bias were significantly better (p ≤ .05) for random and across generation core definitions. © 2017 The Authors. Journal of Animal Breeding and Genetics Published by Blackwell Verlag GmbH.
System for identifying known materials within a mixture of unknowns
Wagner, John S.
1999-01-01
One or both of two methods and systems are used to determine concentration of a known material in an unknown mixture on the basis of the measured interaction of electromagnetic waves upon the mixture. One technique is to utilize a multivariate analysis patch technique to develop a library of optimized patches of spectral signatures of known materials containing only those pixels most descriptive of the known materials by an evolutionary algorithm. Identity and concentration of the known materials within the unknown mixture is then determined by minimizing the residuals between the measurements from the library of optimized patches and the measurements from the same pixels from the unknown mixture. Another technique is to train a neural network by the genetic algorithm to determine the identity and concentration of known materials in the unknown mixture. The two techniques may be combined into an expert system providing cross checks for accuracy.
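The residual-minimization step described in this patent abstract has a familiar numerical core: fit a library of known-material spectra to the measured mixture and read off the concentrations. Below is a hedged sketch using nonnegative least squares on a synthetic three-component library; it illustrates the fitting idea only, not the patented patch/evolutionary-algorithm machinery.

```python
# Hedged sketch: estimate nonnegative concentrations of known materials by
# least-squares fitting of library spectra to a measured mixture spectrum.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
wavelengths = np.linspace(400, 800, 120)
library = np.stack([np.exp(-((wavelengths - mu) ** 2) / 800)
                    for mu in (480, 580, 700)], axis=1)  # known-material spectra

true_conc = np.array([0.6, 0.0, 0.3])
mixture = library @ true_conc + 0.01 * rng.standard_normal(wavelengths.size)

conc, residual = nnls(library, mixture)
print("estimated concentrations:", np.round(conc, 3))    # ~ [0.6, 0.0, 0.3]
```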
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated on-line for these MIMO systems. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set with the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation
Wang, Jinfeng; Li, Hong; Fang, Zhichao
2014-01-01
We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L^2-norm for the scalar unknown u and a priori error estimates in the (L^2)^2-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H^1-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153
Human anatomy: let the students tell us how to teach.
Davis, Christopher R; Bates, Anthony S; Ellis, Harold; Roberts, Alice M
2014-01-01
Anatomy teaching methods have evolved as the medical undergraduate curriculum has modernized. Traditional teaching methods of dissection, prosection, tutorials and lectures are now supplemented by anatomical models and e-learning. Despite these changes, the preferences of medical students and anatomy faculty towards both traditional and contemporary teaching methods and tools are largely unknown. This study quantified medical student and anatomy faculty opinion on various aspects of anatomical teaching at the Department of Anatomy, University of Bristol, UK. A questionnaire was used to explore the perceived effectiveness of different anatomical teaching methods and tools among anatomy faculty (AF) and medical students in year one (Y1) and year two (Y2). A total of 370 preclinical medical students entered the study (76% response rate). Responses were quantified and intergroup comparisons were made. All students and AF were strongly in favor of access to cadaveric specimens and supported traditional methods of small-group teaching with medically qualified demonstrators. Other teaching methods, including e-learning, anatomical models and surgical videos, were considered useful educational tools. In several areas there was disharmony between the opinions of AF and medical students. This study emphasizes the importance of collecting student preferences to optimize teaching methods used in the undergraduate anatomy curriculum. © 2013 American Association of Anatomists.
Bourlier, Christophe; Kubické, Gildas; Déchamps, Nicolas
2008-04-01
A fast, exact numerical method based on the method of moments (MM) is developed to calculate the scattering from an object below a randomly rough surface. Déchamps et al. [J. Opt. Soc. Am. A 23, 359 (2006)] have recently developed the PILE (propagation-inside-layer expansion) method for a stack of two one-dimensional rough interfaces separating homogeneous media. From the inversion of the impedance matrix by blocks (in which two impedance matrices of each interface and two coupling matrices are involved), this method allows one to calculate separately and exactly the multiple-scattering contributions inside the layer, in which the inverses of the impedance matrices of each interface are involved. Our purpose here is to apply this method to an object below a rough surface. In addition, to invert a matrix of large size, the forward-backward spectral acceleration (FB-SA) approach of complexity O(N) (N is the number of unknowns on the interface) proposed by Chou and Johnson [Radio Sci. 33, 1277 (1998)] is applied. The new method, PILE combined with FB-SA, is tested on perfectly conducting circular and elliptic cylinders located below a dielectric rough interface obeying a Gaussian process with Gaussian and exponential height autocorrelation functions.
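The block-inversion structure that PILE exploits can be sketched on a two-interface system: with the coupled equations written as [[Z1, C12], [C21, Z2]], a Schur-complement solve touches only per-interface blocks. The dense random stand-ins below are illustrative, not actual MoM impedance matrices, and the acceleration of the block solves (FB-SA) is not reproduced.

```python
# Hedged sketch: Schur-complement solve of a 2x2 block system, verified
# against the monolithic solve.
import numpy as np

rng = np.random.default_rng(7)
n1, n2 = 60, 40
Z1 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)   # well-conditioned stand-ins
Z2 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
C12 = 0.1 * rng.standard_normal((n1, n2))              # coupling blocks
C21 = 0.1 * rng.standard_normal((n2, n1))
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

S = Z2 - C21 @ np.linalg.solve(Z1, C12)                # Schur complement
x2 = np.linalg.solve(S, b2 - C21 @ np.linalg.solve(Z1, b1))
x1 = np.linalg.solve(Z1, b1 - C12 @ x2)

full = np.block([[Z1, C12], [C21, Z2]])
assert np.allclose(np.concatenate([x1, x2]),
                   np.linalg.solve(full, np.concatenate([b1, b2])))
print("block solve matches the monolithic solve")
```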
Inferring terrestrial photosynthetic light use efficiency of temperate ecosystems from space
Thomas Hilker; Nicholas C. Coops; Forest G. Hall; Caroline J. Nichol; Alexei Lyapustin; T. Andrew Black; Michael A. Wulder; Ray Leuning; Alan Barr; David Y. Hollinger; Bill Munger; Compton J. Tucker
2011-01-01
Terrestrial ecosystems absorb about 2.8 Gt C yr⁻¹, which is estimated to be about a quarter of the carbon emitted from fossil fuel combustion. However, the uncertainties of this sink are large, on the order of ±40%, with spatial and temporal variations largely unknown. One of the largest factors contributing to the uncertainty is photosynthesis,...
N. S. Wagenbrenner; S. H. Chung; B. K. Lamb
2017-01-01
Wind erosion of soils burned by wildfire contributes substantial particulate matter (PM) in the form of dust to the atmosphere, but the magnitude of this dust source is largely unknown. It is important to accurately quantify dust emissions because they can impact human health, degrade visibility, exacerbate dust-on-snow issues (including snowmelt timing, snow chemistry...
Laterodorsal Nucleus of the Thalamus: A Processor of Somatosensory Inputs
BEZDUDNAYA, TATIANA; KELLER, ASAF
2009-01-01
The laterodorsal (LD) nucleus of the thalamus has been considered a “higher order” nucleus that provides inputs to limbic cortical areas. Although its functions are largely unknown, it is often considered to be involved in spatial learning and memory. Here we provide evidence that LD is part of a hitherto unknown pathway for processing somatosensory information. Juxtacellular and extracellular recordings from LD neurons reveal that they respond to vibrissa stimulation with short latency (median = 7 ms) and large magnitude responses (median = 1.2 spikes/stimulus). Most neurons (62%) had large receptive fields, responding to six or more individual vibrissae. Electrical stimulation of the trigeminal nucleus interpolaris (SpVi) evoked short latency responses (median = 3.8 ms) in vibrissa-responsive LD neurons. Labeling produced by anterograde and retrograde neuroanatomical tracers confirmed that LD neurons receive direct inputs from SpVi. Electrophysiological and neuroanatomical analyses also revealed that LD projects upon the cingulate and retrosplenial cortex, but has only sparse projections to the barrel cortex. These findings suggest that LD is part of a novel processing stream involved in spatial orientation and learning related to somatosensory cues. PMID:18273888
Model and Data Reduction for Control, Identification and Compressed Sensing
NASA Astrophysics Data System (ADS)
Kramer, Boris
This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations (AREs), which arise in optimal control, filtering and model reduction. We present a projection based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10⁶ unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition, by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples: a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter dependent dynamical systems. We address this by using local parametric reduced order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed sensing based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise, and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes as well as a Boussinesq flow application.
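The projection idea behind the Riccati solver of Chapter 3 can be illustrated in a few lines. The sketch below is a minimal stand-in, not the dissertation's algorithm: it replaces the POD basis with a random orthonormal basis, and the sizes and the stability construction for A are invented for the demo.

```python
# Sketch of projection-based solution of the algebraic Riccati equation
# A'X + XA - X B R^{-1} B' X + Q = 0. A proper POD basis (as in the
# dissertation) would be built from simulated trajectories; here a random
# orthonormal V merely illustrates the reduce-solve-lift pattern.
import numpy as np
from scipy.linalg import solve_continuous_are

n, m, r = 400, 2, 40                      # full order, inputs, reduced order
rng = np.random.default_rng(0)
A = -2.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # stable by construction
B = rng.standard_normal((n, m))
Q, R = np.eye(n), np.eye(m)

V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal basis (POD stand-in)
Ar, Br = V.T @ A @ V, V.T @ B                      # Galerkin projection
Qr = V.T @ Q @ V
Qr = 0.5 * (Qr + Qr.T)                             # enforce exact symmetry
Xr = solve_continuous_are(Ar, Br, Qr, R)           # small dense ARE
X_approx = V @ Xr @ V.T                            # low-rank approximation of X

# Residual of the full ARE as a quality check (a basis adapted to the
# problem would drive this much lower than a random one does).
res = A.T @ X_approx + X_approx @ A \
      - X_approx @ B @ np.linalg.solve(R, B.T @ X_approx) + Q
print("relative residual:", np.linalg.norm(res) / np.linalg.norm(Q))
```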
An improved method for bivariate meta-analysis when within-study correlations are unknown.
Hong, Chuan; D Riley, Richard; Chen, Yong
2018-03-01
Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, has become increasingly popular in recent years. An attractive feature of multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require knowledge of the within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al. proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (i.e., when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show that a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of the individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (e.g., m ≥ 50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through 2 meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.
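To make the "robust variance" idea concrete, here is a minimal sketch for the simpler univariate case: an inverse-variance weighted pooled estimate with both the model-based variance and a sandwich-type robust variance. This illustrates the general principle only, not Riley's estimator; the data, the weights, and the assumption of a known between-study variance are invented for the demo.

```python
# Sandwich-type robust variance for a weighted pooled estimate.
# theta_hat = sum(w_i y_i) / sum(w_i); the robust form uses squared
# residuals instead of trusting the assumed variances.
import numpy as np

rng = np.random.default_rng(1)
m = 60                                    # number of studies
true_theta, tau2 = 0.3, 0.05              # mean effect, between-study variance
v = rng.uniform(0.02, 0.2, size=m)        # within-study variances
y = rng.normal(true_theta, np.sqrt(tau2 + v))

w = 1.0 / (v + tau2)                      # weights (tau2 treated as known here)
theta_hat = np.sum(w * y) / np.sum(w)

var_model = 1.0 / np.sum(w)                                    # model-based
var_robust = np.sum(w**2 * (y - theta_hat)**2) / np.sum(w)**2  # sandwich form
print(theta_hat, var_model, var_robust)
```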
Industrialized timber structures.
DOT National Transportation Integrated Search
1974-01-01
It was recently learned that a number of innovations in structural timber components are available to the construction industry, but that they were largely unknown to bridge designers. The purpose of this study was to develop for the Department a fea...
Associations of endothelial function and air temperature in diabetic subjects
Background and Objective: Epidemiological studies consistently show that air temperature is associated with changes in cardiovascular morbidity and mortality. However, the biological mechanisms underlying the association remain largely unknown. As one index of endothelial functio...
Promoting Community Health Resources: Preferred Communication Strategies
USDA-ARS?s Scientific Manuscript database
Background: Community health promotion efforts involve communicating resource information to priority populations. Which communication strategies are most effective is largely unknown for specific populations. Objective: A random-dialed telephone survey was conducted to assess health resource comm...
Parametric system identification of catamaran for improving controller design
NASA Astrophysics Data System (ADS)
Timpitak, Surasak; Prempraneerach, Pradya; Pengwang, Eakkachai
2018-01-01
This paper presents the estimation of a simplified dynamic model for only the surge and yaw motions of a catamaran, using system identification (SI) techniques to determine the associated unknown parameters. These methods enhance the design process for the motion control system of an Unmanned Surface Vehicle (USV). The simulation results demonstrate an effective way to solve for damping forces and to determine added masses by applying least-squares and AutoRegressive eXogenous (ARX) methods. Both methods are then evaluated according to the estimated parametric errors from the vehicle's dynamic model. The ARX method, which yields better estimation accuracy, can then be applied to identify unknown parameters as well as to help improve the controller design of a real unmanned catamaran.
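A minimal sketch of an ARX fit by ordinary least squares, the core of the identification step described above. The toy first-order system and all numbers are invented; the catamaran's surge/yaw dynamics are not reproduced.

```python
# Least-squares ARX(na, nb) fit:
# y[k] = -a1*y[k-1] - ... - ana*y[k-na] + b1*u[k-1] + ... + bnb*u[k-nb] + e[k]
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Estimate ARX coefficients by ordinary least squares."""
    n0 = max(na, nb)
    rows = []
    for k in range(n0, len(y)):
        # regressor: past outputs (negated) then past inputs, most recent first
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]          # AR coefficients, input coefficients

# Toy data: y[k] = 0.8*y[k-1] + 0.5*u[k-1] + noise
rng = np.random.default_rng(2)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

a, b = fit_arx(u, y, na=1, nb=1)
print(a, b)   # expect roughly [-0.8] and [0.5]
```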
Distenfeld, Carl H.
1978-01-01
A method for measuring the dose-equivalent for exposure to an unknown and/or time-varying neutron flux, which comprises simultaneously exposing a plurality of neutron detecting elements of different types to a neutron flux and combining the measured responses of the various detecting elements by means of a function, whose value is an approximate measure of the dose-equivalent, which is substantially independent of the energy spectra of the flux. Also, a personnel neutron dosimeter, which is useful in carrying out the above method, comprising a plurality of various neutron detecting elements in a single housing suitable for personnel to wear while working in a radiation area.
A new serotyping method for Klebsiella species: evaluation of the technique.
Riser, E; Noone, P; Bonnet, M L
1976-01-01
A new indirect fluorescent typing method for Klebsiella species is compared with an established method, capsular swelling. The fluorescent antibody (FA) technique was tested with standards and unknowns, and the results were checked by capsular swelling. Several unknowns were sent away for confirmation of typing by capsular swelling. The FA method was also tried by a technician in the routine department for blind identification of standards. Fluorescence typing gives close correlation with the established capsular swelling technique but has greater sensitivity; allows more economical use of expensive antisera; possesses greater objectivity, as it requires less operator skill in the reading of results; resolves most of the cross-reactions observed with capsular swelling; and has a higher percent success rate in identification. PMID:777043
Automated adaptive inference of phenomenological dynamical models.
Daniels, Bryan C; Nemenman, Ilya
2015-08-21
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
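The complexity/data trade-off the paper exploits can be illustrated with a much simpler model-selection toy: fit polynomials of increasing degree and let BIC pick the complexity the data support. This is not the authors' adaptive-inference machinery, just a sketch of the principle on invented data.

```python
# Toy illustration of adapting model complexity to the available data:
# fit polynomials of increasing degree and select by BIC.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 25)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.size)  # unknown "true" dynamics

def bic(degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1
    return n * np.log(np.mean(resid**2)) + k * np.log(n)  # fit + complexity penalty

degrees = range(1, 10)
best = min(degrees, key=bic)
print("selected degree:", best)   # a low-degree fit wins with only 25 points
```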
The Information Available to a Moving Observer on Shape with Unknown, Isotropic BRDFs.
Chandraker, Manmohan
2016-07-01
Psychophysical studies show that motion cues inform about shape even with unknown reflectance. Recent works in computer vision have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to surface depth, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show differential stereo may not determine shape for general BRDFs, but suffices to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that differential stereo yields the surface depth for an unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of the choice of reconstruction method. We also illustrate trends shared by theories on shape from differential motion of light source, object or camera, to relate the hardness of surface reconstruction to the complexity of the imaging setup.
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
Semi-automated surface mapping via unsupervised classification
NASA Astrophysics Data System (ADS)
D'Amore, M.; Le Scaon, R.; Helbert, J.; Maturilli, A.
2017-09-01
Due to the increasing volume of data returned from space missions, the human search for correlations and identification of interesting features becomes more and more unfeasible. Statistical extraction of features via machine learning methods will increase the scientific output of remote sensing missions and aid the discovery of as-yet-unknown features hidden in datasets. These methods exploit algorithms trained on features from multiple instruments, returning classification maps that explore intra-dataset correlations and allow for the discovery of unknown features. We present two applications, one for Mercury and one for Vesta.
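A hedged sketch of the unsupervised-classification step: cluster per-pixel spectra with k-means and read the labels as a surface map. The spectral cube, band count, and class count below are invented; the actual features and algorithms used for Mercury and Vesta are not reproduced.

```python
# Unsupervised surface classification sketch: k-means on pixel spectra.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
rows, cols, bands = 100, 120, 6
cube = rng.random((rows, cols, bands))       # stand-in for a spectral data cube

pixels = cube.reshape(-1, bands)             # one spectrum per pixel
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
class_map = labels.reshape(rows, cols)       # per-pixel classification map
print(np.bincount(labels))                   # pixel count per surface class
```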
2014-06-01
This high-throughput method has utility for evaluating a diversity of natural materials with unknown complex odor blends that can then be down-selected for further study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and the 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
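One simple conservative-estimation tactic of the kind such studies compare is a one-sided normal tolerance bound, sketched below for bounding the 95th percentile from eight samples. It assumes near-normal data, which the report deliberately does not; the sample values and confidence targets are invented.

```python
# One-sided upper tolerance bound on the 95th percentile from few samples,
# using the noncentral-t construction (valid under a normality assumption).
import numpy as np
from scipy.stats import norm, nct

rng = np.random.default_rng(5)
x = rng.normal(10.0, 2.0, size=8)        # only 8 samples of the response

n = x.size
p, conf = 0.95, 0.90                     # cover the 95th percentile with 90% confidence
k = nct.ppf(conf, df=n - 1, nc=norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)
upper_bound = x.mean() + k * x.std(ddof=1)
print(upper_bound)                       # conservative bound on the 95th percentile
```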
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel; Wang, Z. J.
2004-01-01
A three-dimensional, high-order, conservative, and efficient discontinuous spectral volume (SV) method for the solutions of Maxwell's equations on unstructured grids is presented. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) method, but instead of using a Galerkin finite-element formulation, the SV method is based on a finite-volume approach to attain a simpler formulation. Conventional unstructured finite-volume methods require data reconstruction based on the least-squares formulation using neighboring cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every cell at each time step, or store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In the SV method, one starts with a relatively coarse grid of triangles or tetrahedra, called spectral volumes (SVs), and partitions each SV into a number of structured subcells, called control volumes (CVs), that support a polynomial expansion of a desired degree of precision. The unknowns are cell averages over CVs. If all the SVs are partitioned in a geometrically similar manner, the reconstruction becomes universal as a weighted sum of unknowns, and only a few universal coefficients need to be stored for the surface integrals over CV faces. Since the solution is discontinuous across the SV boundaries, a Riemann solver is thus necessary to maintain conservation. In the paper, multi-parameter and symmetric SV partitions, up to quartic for the triangle and cubic for the tetrahedron, are first presented. The corresponding weight coefficients for CV face integrals in terms of CV cell averages for each partition are analytically determined. These discretization formulas are then applied to the integral form of the Maxwell equations. All numerical procedures for the outer boundary, material interface, zonal interface, and interior SV faces are unified with a single characteristic formulation. The load balancing in a massively parallel computing environment is therefore easier to achieve. A parameter is introduced in the Riemann solver to control the strength of the smoothing term. Important aspects of the data structure and its effects on communication and the optimum use of cache memory are discussed. Results are presented for plane TE and TM waves incident on a perfectly conducting cylinder for up to fifth order of accuracy, and a plane wave incident on a perfectly conducting sphere for up to fourth order of accuracy. Comparisons are made with exact solutions for these cases.
2011-01-01
Background Health professions education programs use simulation for teaching and maintaining clinical procedural skills. Simulated learning activities are also becoming useful methods of instruction for interprofessional education. The simulation environment for interprofessional training allows participants to explore collaborative ways of improving communicative aspects of clinical care. Simulation has shown communication improvement within and between health care professions, but the impacts of teamwork simulation on perceptions of others' interprofessional practices and one's own attitudes toward teamwork are largely unknown. Methods A single-arm intervention study tested the association between simulated team practice and measures of interprofessional collaboration, nurse-physician relationships, and attitudes toward health care teams. Participants were 154 post-licensure nurses, allied health professionals, and physicians. Self- and proxy-report survey measurements were taken before simulation training and two and six weeks after. Results Multilevel modeling revealed little change over the study period. Variation in interprofessional collaboration and attitudes was largely attributable to between-person characteristics. A constructed categorical variable indexing 'leadership capacity' found that participants with highest and lowest values were more likely to endorse shared team leadership over physician centrality. Conclusion Results from this study indicate that focusing interprofessional simulation education on shared leadership may provide the most leverage to improve interprofessional care. PMID:21443779
Trophic groups and modules: two levels of group detection in food webs.
Gauzens, Benoit; Thébault, Elisa; Lacroix, Gérard; Legendre, Stéphane
2015-05-06
Within food webs, species can be partitioned into groups according to various criteria. Two notions have received particular attention: trophic groups (TGs), which have been used for decades in the ecological literature, and more recently, modules. The relationship between these two group concepts remains unknown in empirical food webs. While recent developments in network theory have led to efficient methods for detecting modules in food webs, the determination of TGs (groups of species that are functionally similar) is largely based on subjective expert knowledge. We develop a novel algorithm for TG detection. We apply this method to empirical food webs and show that aggregation into TGs allows for the simplification of food webs while preserving their information content. Furthermore, we reveal a two-level hierarchical structure where modules partition food webs into large bottom-top trophic pathways, whereas TGs further partition these pathways into groups of species with similar trophic connections. This provides new perspectives for the study of dynamical and functional consequences of food-web structure, bridging topological and dynamical analysis. TGs have a clear ecological meaning and are found to provide a trade-off between network complexity and information loss. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
WATER QUALITY MONITORING OF PHARMACEUTICALS ...
The demand on freshwater to sustain the needs of the growing population is of worldwide concern. Often this water is used, treated, and released for reuse by other communities. The anthropogenic contaminants present in this water may include complex mixtures of pesticides, prescription and nonprescription drugs, personal care and common consumer products, industrial and domestic-use materials, and degradation products of these compounds. Although the fate of these pharmaceuticals and personal care products (PPCPs) in wastewater treatment facilities is largely unknown, the limited data that do exist suggest that many of these chemicals survive treatment and some others are returned to their biologically active form via deconjugation of metabolites. Traditional water sampling methods (i.e., grab or composite samples) often require the concentration of large amounts of water to detect trace levels of PPCPs. A passive sampler, the polar organic chemical integrative sampler (POCIS), has been developed to integratively concentrate the trace levels of these chemicals, determine the time-weighted average water concentrations, and provide a method of estimating the potential exposure of aquatic organisms to these complex mixtures of waterborne contaminants. The POCIS (U.S. Patent number 6,478,961) consists of a hydrophilic microporous membrane, acting as a semipermeable barrier, enveloping various solid-phase sorbents that retain the sampled chemicals. Sampling rates f
Flick, Tawnya G.; Leib, Ryan D.; Williams, Evan R.
2010-01-01
Accurate and rapid quantitation is advantageous to identify counterfeit and substandard pharmaceutical drugs. A standard-free electrospray ionization mass spectrometry method is used to directly determine the dosage in the prescription and over-the-counter drugs, Tamiflu®, Sudafed®, and Dramamine®. A tablet of each drug was dissolved in aqueous solution, filtered, and introduced into solutions containing a known concentration of either L-tryptophan, L-phenylalanine or prednisone as clustering agents. The active ingredient(s) incorporates statistically into large clusters of the clustering agent where effects of differential ionization/detection are substantially reduced. From the abundances of large clusters, the dosages of the active ingredients in each of the tablets were determined to typically better than 20% accuracy even when the ionization/detection efficiency of the individual components differed by over 100×. Although this unorthodox method for quantitation is not as accurate as using conventional standards, it has the advantages that it is fast, it can be applied to mixtures where the identities of the analytes are unknown, and it can be used when suitable standards may not be readily available, such as schedule I or II controlled substances or new designer drugs that have not previously been identified. PMID:20092258
Optimization Methods for Spiking Neurons and Networks
Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph
2011-01-01
Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265
A study revealing the key aroma compounds of steamed bread made by Chinese traditional sourdough*
Zhang, Guo-hua; Wu, Tao; Sadiq, Faizan A.; Yang, Huan-yi; Liu, Tong-jie; Ruan, Hui; He, Guo-qing
2016-01-01
Aroma of Chinese steamed bread (CSB) is one of the important parameters that determines the overall quality attributes and consumer acceptance. However, the aroma profile of CSB still remains poorly understood, mainly because of relying on only a single method for aroma extraction in previous studies. Therefore, the objective of this study was to determine the volatile aroma compounds of five different samples of CSB using three different aroma extraction methods, namely solid-phase microextraction (SPME), simultaneous distillation–extraction (SDE), and purge and trap (P&T). All samples showed a unique aroma profile, which could be attributed to their unique microbial consortia. (E)-2-Nonenal and (E,E)-2,4-decadienal were the most prevalent aromatic compounds revealed by SDE, which have not been reported previously, while ethanol and acetic acid proved to be the most dominant compounds by both SPME and P&T. Our approach of combining three different aroma extraction methods provided better insights into the aroma profile of CSB, which had remained largely unknown in previous studies. PMID:27704748
Optimal control in microgrid using multi-agent reinforcement learning.
Li, Fu-Dong; Wu, Min; He, Yong; Chen, Xin
2012-11-01
This paper presents an improved reinforcement learning method to minimize electricity costs on the premise of satisfying the power balance and generation limits of units in a microgrid with grid-connected mode. First, the microgrid control requirements are analyzed and the objective function of optimal control for the microgrid is proposed. Then, a state variable, "Average Electricity Price Trend," which expresses the most probable transitions of the system, is developed so as to reduce the complexity and randomness of the microgrid, and a multi-agent architecture including agents, state variables, action variables and a reward function is formulated. Furthermore, dynamic hierarchical reinforcement learning, based on the change rate of a key state variable, is established to carry out optimal policy exploration. The analysis shows that the proposed method is beneficial in handling the problem of the "curse of dimensionality" and speeds up learning in unknown large-scale environments. Finally, the simulation results under JADE (Java Agent Development Framework) demonstrate the validity of the presented method for optimal control of a microgrid with grid-connected mode. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
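The reinforcement-learning core can be sketched with plain tabular Q-learning on an abstract toy environment. The paper's hierarchical multi-agent scheme, its "Average Electricity Price Trend" state variable, and the microgrid simulator are not reproduced; the states, actions, and rewards below are invented.

```python
# Generic tabular Q-learning sketch with epsilon-greedy exploration.
import numpy as np

n_states, n_actions = 10, 3
rng = np.random.default_rng(6)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    """Toy environment: random next state; cost is lowest for action 0."""
    return rng.integers(n_states), -float(a)    # (next state, reward)

s = 0
for _ in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])  # TD update
    s = s_next

print(np.argmax(Q, axis=1))   # learned greedy policy (should prefer action 0)
```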
Compressive sensing of high betweenness centrality nodes in networks
NASA Astrophysics Data System (ADS)
Mahyar, Hamidreza; Hasheminezhad, Rouzbeh; Ghalebi K., Elahe; Nazemian, Ali; Grosu, Radu; Movaghar, Ali; Rabiee, Hamid R.
2018-05-01
Betweenness centrality is a prominent centrality measure expressing the importance of a node within a network, in terms of the fraction of shortest paths passing through that node. Nodes with high betweenness centrality have significant impacts on the spread of influence and ideas in social networks, the user activity in mobile phone networks, the contagion process in biological networks, and the bottlenecks in communication networks. Thus, identifying the k highest betweenness centrality nodes in networks is of great interest in many applications. In this paper, we introduce CS-HiBet, a new method to efficiently detect the top-k betweenness centrality nodes in networks, using compressive sensing. CS-HiBet can perform as a distributed algorithm by using only the local information at each node. Hence, it is applicable to large real-world and unknown networks in which global approaches are usually unrealizable. The performance of the proposed method is evaluated by extensive simulations on several synthetic and real-world networks. The experimental results demonstrate that CS-HiBet outperforms the best existing methods with notable improvements.
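The compressive-sensing ingredient can be illustrated with a generic sparse-recovery sketch: reconstruct a sparse "importance" vector from a few random linear measurements with LASSO. CS-HiBet's graph-based local measurement design is not reproduced; the dimensions and measurement matrix are invented.

```python
# Generic compressive-sensing recovery of a sparse vector via LASSO.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n, m, k = 200, 40, 5                      # nodes, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)  # few "important" nodes

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # compressed measurements

x_hat = Lasso(alpha=0.01, max_iter=10000).fit(A, y).coef_
print(np.sort(np.argsort(x_hat)[-k:]))    # indices recovered as top-k
print(np.sort(np.argsort(x)[-k:]))        # ground-truth top-k
```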
Looking back on a decade of barcoding crustaceans
Raupach, Michael J.; Radulovici, Adriana E.
2015-01-01
Species identification represents a pivotal component for large-scale biodiversity studies and conservation planning but represents a challenge for many taxa when using morphological traits only. Consequently, alternative identification methods based on molecular markers have been proposed. In this context, DNA barcoding has become a popular and accepted method for the identification of unknown animals across all life stages by comparison to a reference library. In this review we examine the progress of barcoding studies for the Crustacea using the Web of Science database from 2003 to 2014. All references were classified in terms of taxonomy covered, subject area (identification/library, genetic variability, species descriptions, phylogenetics, methods, pseudogenes/numts), habitat, geographical area, authors, journals, citations, and the use of the Barcode of Life Data Systems (BOLD). Our analysis revealed a total of 164 barcoding studies for crustaceans, with a preference for malacostracan crustaceans, in particular Decapoda, and for building reference libraries in order to identify organisms. So far, BOLD has not established itself as a popular informatics platform among carcinologists, although it offers many advantages for standardized data storage, analyses and publication. PMID:26798245
A new fast direct solver for the boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2017-09-01
A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank matrix. The hierarchical off-diagonal low-rank matrix can be decomposed into the multiplication of several diagonal block matrices. The inverse of the hierarchical off-diagonal low-rank matrix can be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximate the coefficient matrix of the BEM with the hierarchical off-diagonal low-rank matrix is proposed. Compared to the current fast direct solver based on the hierarchical off-diagonal low-rank matrix, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with the total number of unknowns exceeding 200,000 are presented. The results show that the new fast direct solver can solve large 3-D BEM models accurately and with better efficiency than the conventional BEM.
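The Sherman-Morrison-Woodbury identity at the heart of the solver is easy to check numerically. The sketch below inverts a diagonal-plus-low-rank matrix using only an r-by-r dense inverse and compares against brute force; the hierarchical matrix machinery of the paper is not reproduced, and the matrix sizes are invented.

```python
# Sherman-Morrison-Woodbury:
# (D + U V^T)^{-1} = D^{-1} - D^{-1} U (I + V^T D^{-1} U)^{-1} V^T D^{-1}
import numpy as np

rng = np.random.default_rng(8)
n, r = 300, 10
D = np.diag(rng.uniform(1.0, 2.0, n))        # easy-to-invert diagonal part
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))

Dinv = np.diag(1.0 / np.diag(D))             # O(n) inverse of the diagonal part
core = np.linalg.inv(np.eye(r) + V.T @ Dinv @ U)   # only an r-by-r inverse
Ainv_smw = Dinv - Dinv @ U @ core @ V.T @ Dinv

Ainv_dense = np.linalg.inv(D + U @ V.T)      # brute-force reference
print(np.max(np.abs(Ainv_smw - Ainv_dense)))  # ~1e-12
```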
Jopp-van Well, Eilin; Gehl, Axel; Säring, Dennis; Amling, Michael; Hahn, Michael; Sperhake, Jan; Augustin, Christa; Krebs, Oliver; Püschel, Klaus
2016-01-01
The article reports on the exhumation and identification of unknown soldiers from the Second World War. With the help of medicolegal investigation and reconstruction methods, an American pilot presumably murdered by a shot to the head (lynch law) and an interned Italian soldier could be identified after about 70 years and brought back home.
ERIC Educational Resources Information Center
Pavel, John T.; Hyde, Erin C.; Bruch, Martha D.
2012-01-01
This experiment introduced general chemistry students to the basic concepts of organic structures and to the power of spectroscopic methods for structure determination. Students employed a combination of IR and NMR spectroscopy to perform de novo structure determination of unknown alcohols, without being provided with a list of possible…
Zaikin, Alexey; Míguez, Joaquín
2017-01-01
We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al. (2007). By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem, there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087
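For readers unfamiliar with these samplers, here is a plain Metropolis-Hastings sketch for one unknown parameter of a noisy observation model, a much-reduced stand-in for the PMH, NPMC, and ABC-SMC schemes compared in the paper. The model, prior, and proposal scale are invented.

```python
# Random-walk Metropolis-Hastings for one unknown rate parameter theta
# of an exponential-decay observation model with Gaussian noise.
import numpy as np

rng = np.random.default_rng(9)
theta_true = 1.5
t = np.linspace(0, 2, 20)
y = np.exp(-theta_true * t) + 0.05 * rng.standard_normal(t.size)  # noisy data

def log_post(theta):
    if theta <= 0:
        return -np.inf                      # flat prior on theta > 0
    resid = y - np.exp(-theta * t)
    return -0.5 * np.sum(resid**2) / 0.05**2

samples, theta = [], 1.0
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal()     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

print(np.mean(samples[5000:]))              # posterior mean, near 1.5
```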
Fan, Quan-Yong; Yang, Guang-Hong
2016-01-01
This paper is concerned with the problem of integral sliding-mode control for a class of nonlinear systems with input disturbances and unknown nonlinear terms through the adaptive actor-critic (AC) control method. The main objective is to design a sliding-mode control methodology based on the adaptive dynamic programming (ADP) method, so that the closed-loop system with time-varying disturbances is stable and the nearly optimal performance of the sliding-mode dynamics can be guaranteed. In the first step, a neural network (NN)-based observer and a disturbance observer are designed to approximate the unknown nonlinear terms and estimate the input disturbances, respectively. Based on the NN approximations and disturbance estimations, the discontinuous part of the sliding-mode control is constructed to eliminate the effect of the disturbances and attain the expected equivalent sliding-mode dynamics. Then, the ADP method with AC structure is presented to learn the optimal control for the sliding-mode dynamics online. Reconstructed tuning laws are developed to guarantee the stability of the sliding-mode dynamics and the convergence of the weights of critic and actor NNs. Finally, the simulation results are presented to illustrate the effectiveness of the proposed method.
Troussier, Idriss; Klausner, Guillaume; Morinière, Sylvain; Blais, Eivind; Jean-Christophe Faivre; Champion, Ambroise; Geoffrois, Lionnel; Pflumio, Carole; Babin, Emmanuel; Maingon, Philippe; Thariat, Juliette
2018-02-01
Cervical lymphadenopathies of unknown primary represent 3% of head and neck cancers. Their diagnostic work-up has changed considerably in recent years. This systematic review of the literature provides an update on diagnostic developments and their potential therapeutic impact. In recent years, changes in epidemiology-based prognostic factors such as human papilloma virus (HPV) cancers, advances in imaging, and minimally invasive surgery have been integrated into the management of cervical lymphadenopathies of unknown primary. In particular, the systematic use of PET scanning and the increasing practice of robotic or laser surgery have contributed to an increasing detection rate of primary cancers. These allow more adapted and personalized treatments. The impact of changes in the eighth TNM staging system is discussed. The management of cervical lymphadenopathies of unknown primary cancer has changed significantly in the last 10 years. On the other hand, these practice changes will have to be assessed. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.
Rectal cancer and Fournier’s gangrene - current knowledge and therapeutic options
Bruketa, Tomislav; Majerovic, Matea; Augustin, Goran
2015-01-01
Fournier’s gangrene (FG) is a rapidly progressive bacterial infection that involves the subcutaneous fascia and part of the deep fascia but spares the muscle in the scrotal, perianal and perineal region. The incidence has increased dramatically, while the reported incidence of rectal cancer-induced FG is unknown but extremely low. The pathophysiology and clinical presentation of rectal cancer-induced FG per se do not differ from those of other causes. Only rectal cancer-specific symptoms before presentation can lead to the diagnosis. The diagnosis of rectal cancer-induced FG should be excluded in every patient with blood on digital rectal examination, when urogenital and dermatological causes are excluded and when fever or sepsis of unknown origin is present with perianal symptomatology. Therapeutic options are more complex than for other forms of FG. First, the causative rectal tumor should be removed. The survival of patients with rectal cancer resection is reported as 100%, while with colostomy it is 80%. The preferred method of rectal resection has not been defined. Second, oncological treatment should be administered, but the timing should be adjusted to the resolution of the FG and sometimes to the healing of the plastic reconstructive procedures that are commonly needed for the reconstruction of large perineal, scrotal and lower abdominal wall defects. PMID:26290629
An approximate solution for interlaminar stresses in laminated composites: Applied mechanics program
NASA Technical Reports Server (NTRS)
Rose, Cheryl A.; Herakovich, Carl T.
1992-01-01
An approximate solution for interlaminar stresses in finite-width, laminated composites subjected to uniform extensional and bending loads is presented. The solution is based upon the principle of minimum complementary energy and an assumed, statically admissible stress state, derived by considering local material mismatch effects and global equilibrium requirements. The stresses in each layer are approximated by polynomial functions of the thickness coordinate, multiplied by combinations of exponential functions of the in-plane coordinate, expressed in terms of fourteen unknown decay parameters. Imposing the stationary condition of the laminate complementary energy with respect to the unknown variables yields a system of fourteen nonlinear algebraic equations for the parameters. Newton's method is implemented to solve this system. Once the parameters are known, the stresses can be easily determined at any point in the laminate. Results are presented for through-thickness and interlaminar stress distributions for angle-ply, cross-ply (symmetric and unsymmetric laminates), and quasi-isotropic laminates subjected to uniform extension and bending. It is shown that the solution compares well with existing finite element solutions and represents an improved approximate solution for interlaminar stresses, primarily at interfaces where global equilibrium is satisfied by the in-plane stresses, but a large local mismatch in properties requires the presence of interlaminar stresses.
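Newton's method for a small nonlinear algebraic system, of the kind used above for the fourteen decay parameters, can be sketched generically. The complementary-energy equations themselves are not reproduced; the toy two-equation system below is invented.

```python
# Generic Newton iteration with a finite-difference Jacobian.
import numpy as np

def newton(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((f.size, x.size))
        for j in range(x.size):              # finite-difference Jacobian column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - f) / h
        x = x - np.linalg.solve(J, f)        # Newton step
    return x

# Toy nonlinear algebraic system with the same flavor:
F = lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] - np.exp(-v[1])])
print(newton(F, [1.0, 1.0]))
```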
Sauvage, François-Ludovic; Picard, Nicolas; Saint-Marcoux, Franck; Gaulier, Jean-Michel; Lachâtre, Gérard; Marquet, Pierre
2009-09-01
LC coupled to single-stage (LC-MS) and tandem (LC-MS/MS) mass spectrometry is recognized as among the most powerful analytical tools for metabolic studies in drug discovery. In this article, we describe five cases illustrating the utility of screening for xenobiotic metabolites in the routine analysis of forensic samples using LC-MS/MS. Analyses were performed using a previously published LC-MS/MS general unknown screening (GUS) procedure developed on a hybrid linear ion trap-tandem mass spectrometer. In each of the cases presented, the presence of metabolites of xenobiotics was suspected after analyzing urine samples. In two cases, the parent drug was also detected and the metabolites were merely useful to confirm drug intake, but in three other cases, metabolite detection was of actual forensic interest. The presented results indicate that: (i) the GUS procedure developed is useful to detect a large variety of drug metabolites, which would hardly have been detected using targeted methods in the context of clinical or forensic toxicology; (ii) metabolite structures can generally be inferred from their "enhanced" product ion scan spectra; and (iii) structure confirmation can be achieved through in vitro metabolic experiments or through the analysis of urine samples from individuals taking the parent drug.
NASA Astrophysics Data System (ADS)
Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.
2015-07-01
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.
Iliev, Filip L.; Stanev, Valentin G.; Vesselinov, Velimir V.
2018-01-01
Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; or mixtures of acoustic or seismic signals recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found. PMID:29518126
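Plain NMF on instantaneous mixtures is the starting point the paper extends. The sketch below uses scikit-learn's NMF on invented nonnegative sources and mixing; the delay/shift estimation and source-number selection that are the paper's contribution are not reproduced.

```python
# Blind decomposition of instantaneous nonnegative mixtures with NMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(10)
t = np.linspace(0, 1, 400)
S = np.vstack([np.abs(np.sin(8 * np.pi * t)),   # two nonnegative sources
               np.exp(-5 * t)])
A = rng.uniform(0.2, 1.0, size=(6, 2))          # mixing matrix (6 sensors)
X = A @ S                                       # recorded mixtures

model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)      # estimated mixing (up to scale/permutation)
H = model.components_           # estimated sources
print(W.shape, H.shape)         # (6, 2) (2, 400)
```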
Radiative flux and forcing parameterization error in aerosol-free clear skies.
Pincus, Robert; Mlawer, Eli J; Oreopoulos, Lazaros; Ackerman, Andrew S; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A; Cady-Pereira, Karen E; Cole, Jason N S; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M
2015-07-16
Radiation parameterizations in GCMs are more accurate than their predecessors. Errors in estimates of 4×CO2 forcing are large, especially for solar radiation. Errors depend on atmospheric state, so the global mean error is unknown.
Positive-unlabeled learning for disease gene identification
Yang, Peng; Li, Xiao-Li; Mei, Jian-Ping; Kwoh, Chee-Keong; Ng, See-Kiong
2012-01-01
Background: Identifying disease genes from the human genome is an important but challenging task in biomedical research. Machine learning methods can be applied to discover new disease genes based on the known ones. Existing machine learning methods typically use the known disease genes as the positive training set P and the unknown genes as the negative training set N (a non-disease gene set does not exist) to build classifiers to identify new disease genes from the unknown genes. However, such classifiers are actually built from a noisy negative set N, as there can be unknown disease genes in N itself. As a result, the classifiers do not perform as well as they could. Results: Instead of treating the unknown genes as negative examples in N, we treat them as an unlabeled set U. We design a novel positive-unlabeled (PU) learning algorithm PUDI (PU learning for disease gene identification) to build a classifier using P and U. We first partition U into four sets, namely, a reliable negative set RN, a likely positive set LP, a likely negative set LN and a weak negative set WN. Weighted support vector machines are then used to build a multi-level classifier based on the four training sets and the positive training set P to identify disease genes. Our experimental results demonstrate that our proposed PUDI algorithm outperformed the existing methods significantly. Conclusion: The proposed PUDI algorithm is able to identify disease genes more accurately by treating the unknown data more appropriately as an unlabeled set U instead of a negative set N. Given that many machine learning problems in biomedical research do involve positive and unlabeled data instead of negative data, it is possible that the machine learning methods for these problems can be further improved by adopting PU learning methods, as we have done here for disease gene identification. Availability and implementation: The executable program and data are available at http://www1.i2r.a-star.edu.sg/∼xlli/PUDI/PUDI.html. Contact: xlli@i2r.a-star.edu.sg or yang0293@e.ntu.edu.sg. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22923290
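A hedged two-step PU-learning sketch in the spirit of (but much simpler than) PUDI: step 1 trains P vs. U and harvests "reliable negatives" from the lowest-scoring unlabeled points; step 2 retrains on P vs. those. The features, threshold, and logistic model are invented stand-ins for PUDI's four partitions and weighted SVMs.

```python
# Two-step positive-unlabeled learning sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
P = rng.normal(+1.0, 1.0, size=(50, 5))                 # labeled positives
U = np.vstack([rng.normal(+1.0, 1.0, size=(30, 5)),     # hidden positives in U
               rng.normal(-1.0, 1.0, size=(120, 5))])   # true negatives in U

# Step 1: naive classifier P vs. U, then keep the lowest-scoring half of U
X1 = np.vstack([P, U])
y1 = np.r_[np.ones(len(P)), np.zeros(len(U))]
scores = LogisticRegression(max_iter=1000).fit(X1, y1).predict_proba(U)[:, 1]
RN = U[scores < np.quantile(scores, 0.5)]               # reliable negatives

# Step 2: final classifier on P vs. reliable negatives only
X2 = np.vstack([P, RN])
y2 = np.r_[np.ones(len(P)), np.zeros(len(RN))]
clf = LogisticRegression(max_iter=1000).fit(X2, y2)
print(clf.predict(U[:5]))    # hidden positives now mostly classified as 1
```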
Hegde, Shivanand; Hegde, Shrilakshmi; Zimmermann, Martina; Flöck, Martina; Spergser, Joachim; Rosengarten, Renate; Chopra-Dewasthaly, Rohini
2015-07-01
Mycoplasmas possess complex pathogenicity determinants that are largely unknown at the molecular level. Mycoplasma agalactiae serves as a useful model to study the molecular basis of mycoplasma pathogenicity. The generation and in vivo screening of a transposon mutant library of M. agalactiae were employed to unravel its host colonization factors. Tn4001mod mutants were sequenced using a novel sequencing method, and functionally heterogeneous pools containing 15 to 19 selected mutants were screened simultaneously through two successive cycles of sheep intramammary infections. A PCR-based negative selection method was employed to identify mutants that failed to colonize the udders and draining lymph nodes in the animals. A total of 14 different mutants found to be absent from ≥ 95% of samples were identified and subsequently verified via a second round of stringent confirmatory screening where 100% absence was considered attenuation. Using this criterion, seven mutants with insertions in genes MAG1050, MAG2540, MAG3390, uhpT, eutD, adhT, and MAG4460 were not recovered from any of the infected animals. Among the attenuated mutants, many contain disruptions in hypothetical genes, implying their previously unknown role in M. agalactiae pathogenicity. These data indicate the putative role of functionally different genes, including hypothetical ones, in the pathogenesis of M. agalactiae. Defining the precise functions of the identified genes is anticipated to increase our understanding of M. agalactiae infections and to develop successful intervention strategies against it. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
A Genealogical Look at Shared Ancestry on the X Chromosome.
Buffalo, Vince; Mount, Stephen M; Coop, Graham
2016-09-01
Close relatives can share large segments of their genome identical by descent (IBD) that can be identified in genome-wide polymorphism data sets. There are a range of methods to use these IBD segments to identify relatives and estimate their relationship. These methods have focused on sharing on the autosomes, as they provide a rich source of information about genealogical relationships. We hope to learn additional information about recent ancestry through shared IBD segments on the X chromosome, but currently lack the theoretical framework to use this information fully. Here, we fill this gap by developing probability distributions for the number and length of X chromosome segments shared IBD between an individual and an ancestor k generations back, as well as between half- and full-cousin relationships. Due to the inheritance pattern of the X and the fact that X homologous recombination occurs only in females (outside of the pseudoautosomal regions), the number of females along a genealogical lineage is a key quantity for understanding the number and length of the IBD segments shared among relatives. When inferring relationships among individuals, the number of female ancestors along a genealogical lineage will often be unknown. Therefore, our IBD segment length and number distributions marginalize over this unknown number of recombinational meioses through a distribution of recombinational meioses we derive. By using Bayes' theorem to invert these distributions, we can estimate the number of female ancestors between two relatives, giving us details about the genealogical relations between individuals not possible with autosomal data alone. Copyright © 2016 by the Genetics Society of America.
Stark, Peter C.; Kuske, Cheryl R.; Mullen, Kenneth I.
2002-01-01
A method for quantitating dsDNA in an aqueous sample solution containing an unknown amount of dsDNA. A first aqueous test solution containing a known amount of a fluorescent dye-dsDNA complex and at least one fluorescence-attenuating contaminant is prepared. The fluorescence intensity of the test solution is measured. The first test solution is diluted by a known amount to provide a second test solution having a known concentration of dsDNA. The fluorescence intensity of the second test solution is measured. Additional diluted test solutions are similarly prepared until a sufficiently dilute test solution having a known amount of dsDNA is prepared whose fluorescence intensity is not attenuated upon further dilution. The value of the maximum absorbance of this solution between 200 and 900 nanometers (nm), referred to herein as the threshold absorbance, is measured. A sample solution having an unknown amount of dsDNA and an absorbance identical to that of the sufficiently dilute test solution at the same chosen wavelength is prepared. Dye is then added to the sample solution to form the fluorescent dye-dsDNA complex, after which the fluorescence intensity of the sample solution is measured and the quantity of dsDNA in the sample solution is determined. Once the threshold absorbance of a sample solution obtained from a particular environment has been determined, any similarly prepared sample solution taken from a similar environment and having the same value for the threshold absorbance can be quantified for dsDNA by adding a large excess of dye to the sample solution and measuring its fluorescence intensity.
NASA Astrophysics Data System (ADS)
Zhou, Mowei; Yan, Jing; Romano, Christine A.; Tebo, Bradley M.; Wysocki, Vicki H.; Paša-Tolić, Ljiljana
2018-01-01
Manganese oxidation is an important biogeochemical process that is largely regulated by bacteria through enzymatic reactions. However, the detailed mechanism is poorly understood due to challenges in isolating and characterizing these unknown enzymes. A manganese oxidase, Mnx, from Bacillus sp. PL-12 has been successfully overexpressed in active form as a protein complex with a molecular mass of 211 kDa. We have recently used surface induced dissociation (SID) and ion mobility-mass spectrometry (IM-MS) to release and detect folded subcomplexes for determining subunit connectivity and quaternary structure. The data from the native mass spectrometry experiments led to a plausible structural model of this multicopper oxidase, which has been difficult to study by conventional structural biology methods. It was also revealed that each Mnx subunit binds a variable number of copper ions. Because of the heterogeneity of the protein and limited mass resolution, ambiguities in assigning some of the observed peaks remained as a barrier to fully understanding the role of metals and potential unknown ligands in Mnx. In this study, we performed SID in a modified Fourier transform-ion cyclotron resonance (FTICR) mass spectrometer. The high mass accuracy and resolution offered by FTICR unveiled unexpected artificial modifications on the protein that had previously been thought to be iron-bound species based on lower resolution spectra. Additionally, isotopically resolved spectra of the released subcomplexes revealed the metal-binding stoichiometry at different structural levels. This method holds great potential for in-depth characterization of metalloproteins and protein-ligand complexes.
Knief, Claudia
2015-01-01
Methane-oxidizing bacteria are characterized by their capability to grow on methane as their sole source of carbon and energy. Cultivation-dependent and -independent methods have revealed that this functional guild of bacteria comprises a substantial diversity of organisms. In particular, the use of cultivation-independent methods targeting a subunit of the particulate methane monooxygenase (pmoA) as a functional marker for the detection of aerobic methanotrophs has resulted in thousands of sequences representing "unknown methanotrophic bacteria." This limits data interpretation due to restricted information about these uncultured methanotrophs. A few groups of uncultivated methanotrophs are assumed to play important roles in methane oxidation in specific habitats, while the biology behind other sequence clusters remains largely unknown. The discovery of evolutionarily related monooxygenases in non-methanotrophic bacteria and of pmoA paralogs in methanotrophs means that sequence clusters of uncultivated organisms have to be interpreted with care. This review article describes the present diversity of cultivated and uncultivated aerobic methanotrophic bacteria based on pmoA gene sequence diversity. It summarizes current knowledge about cultivated and major clusters of uncultivated methanotrophic bacteria and evaluates the habitat specificity of these bacteria at different levels of taxonomic resolution. Habitat specificity exists for diverse lineages and at different taxonomic levels. Methanotrophic genera such as Methylocystis and Methylocaldum are identified as generalists, but they harbor habitat-specific methanotrophs at the species level. This finding implies that future studies should consider these diverging preferences at different taxonomic levels when analyzing methanotrophic communities. PMID:26696968
Singular value decomposition for collaborative filtering on a GPU
NASA Astrophysics Data System (ADS)
Kato, Kimikazu; Hosino, Tikara
2010-06-01
Collaborative filtering predicts customers' unknown preferences from known preferences. In a collaborative filtering computation, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computational phase is decreased. In this application, SVD means a roughly approximated factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm for computing the SVD toward a solution of an open competition called the "Netflix Prize". The algorithm utilizes an iterative method so that the error of the approximation improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and is shown to be efficient by experiment.
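To make the iterative factorization concrete, here is a minimal NumPy sketch of a Webb/Funk-style SGD update; the hyperparameters, the pure-Python loop, and the toy data are illustrative stand-ins for the paper's CUDA kernels.

```python
import numpy as np

def funk_svd(ratings, n_factors=10, lr=0.005, reg=0.02, n_epochs=50):
    """Approximate R ~= P @ Q.T by SGD over the observed entries only.

    ratings: list of (user, item, value) triples.
    Returns user factors P and item factors Q.
    """
    n_users = 1 + max(u for u, _, _ in ratings)
    n_items = 1 + max(i for _, i, _ in ratings)
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, n_factors))
    Q = 0.1 * rng.standard_normal((n_items, n_factors))
    for _ in range(n_epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            # Gradient step on the squared error with L2 regularization;
            # the approximation error shrinks at each pass, as in Webb's method.
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q

# Toy usage: predict an unobserved preference.
R = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
P, Q = funk_svd(R, n_factors=2)
print(P[2] @ Q[0])  # predicted rating of user 2 for item 0
```

On a GPU, the inner loop over observed ratings is what gets parallelized; the update rule itself is unchanged.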
Hopping in the Crowd to Unveil Network Topology.
Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco
2018-04-13
We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.
Green genes: bioinformatics and systems-biology innovations drive algal biotechnology.
Reijnders, Maarten J M F; van Heck, Ruben G A; Lam, Carolyn M C; Scaife, Mark A; dos Santos, Vitor A P Martins; Smith, Alison G; Schaap, Peter J
2014-12-01
Many species of microalgae produce hydrocarbons, polysaccharides, and other valuable products in significant amounts. However, large-scale production of algal products is not yet competitive against non-renewable alternatives from fossil fuel. Metabolic engineering approaches will help to improve productivity, but the exact metabolic pathways and the identities of the majority of the genes involved remain unknown. Recent advances in bioinformatics and systems-biology modeling coupled with increasing numbers of algal genome-sequencing projects are providing the means to address this. A multidisciplinary integration of methods will provide synergy for a systems-level understanding of microalgae, and thereby accelerate the improvement of industrially valuable strains. In this review we highlight recent advances and challenges to microalgal research and discuss future potential. Copyright © 2014 Elsevier Ltd. All rights reserved.
Schoepp, Randal J; Morin, Michelle D; Martinez, Mark J; Kulesh, David A; Hensley, Lisa; Geisbert, Thomas W; Brady, Daniel R; Jahrling, Peter B
2004-01-01
Smallpox disease has been eradicated from the human population since 1979, but is again a concern because of its potential use as an agent of bioterrorism or biowarfare. World Health Organization-sanctioned repositories of infectious Variola virus are known to occur in both Russia and the United States, but many believe other undeclared and unregulated sources of the virus could exist. Thus, validation of improved methods for definitive identification of smallpox virus in diagnostic specimens is urgently needed. In this paper, we describe the discovery of suspected Variola infected human tissue, fixed and preserved for decades in largely unknown solutions, and the use of routine histology, electron microscopy, and ultimately DNA extraction and fluorogenic 5' nuclease (TaqMan) assays for its identification and confirmation.
Liu, Tong; Su, Qi-Ping; Yang, Jin-Hu; Zhang, Yu; Xiong, Shao-Jie; Liu, Jin-Ming; Yang, Chui-Ping
2017-08-01
A qudit (d-level quantum system) has a large Hilbert space and thus can be used to achieve many quantum information and communication tasks. Here, we propose a method to transfer arbitrary d-dimensional quantum states (known or unknown) between two superconducting transmon qudits coupled to a single cavity. The state transfer can be performed by employing resonant interactions only. In addition, quantum states can be deterministically transferred without measurement. Numerical simulations show that high-fidelity transfer of quantum states between two superconducting transmon qudits (d ≤ 5) is feasible with current circuit QED technology. This proposal is quite general and can be applied to accomplish the same task with natural or artificial atoms of a ladder-type level structure coupled to a cavity or resonator.
Bio-nano interactions detected by nanochannel electrophoresis.
Luan, Binquan
2016-08-01
Engineered nanoparticles have been widely used in industry and are present in many consumer products. However, their long-term biosafety is largely unknown. Here, a nanochannel-electrophoresis-based method is proposed for detecting potential bio-nano interactions that may lead to damage to human health and/or the biological environment. Through proof-of-concept molecular dynamics simulations, it was demonstrated that the transport of a protein-nanoparticle complex is very different from that of the protein alone. By monitoring the change in ionic current induced by a transported analyte, as well as the transport velocity of the analyte, the complex (with bio-nano interaction) can be clearly distinguished from the protein alone (with no interaction with the tested nanoparticles). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Knowns and unknowns in metabolomics identified by multidimensional NMR and hybrid MS/NMR methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingol, Kerem; Brüschweiler, Rafael
Metabolomics continues to make rapid progress through the development of new and better methods and their applications to gain insight into the metabolism of a wide range of different biological systems from a systems biology perspective. Customization of NMR databases and search tools allows the faster and more accurate identification of known metabolites, whereas the identification of unknowns, without a need for extensive purification, requires new strategies to integrate NMR with mass spectrometry, cheminformatics, and computational methods. For some applications, the use of covalent and non-covalent attachments in the form of labeled tags or nanoparticles can significantly reduce the complexity of these tasks.
NASA Technical Reports Server (NTRS)
Hall, A. Daniel (Inventor); Davies, Francis J. (Inventor)
2007-01-01
Method and system are disclosed for determining individual string resistance in a network of strings when the current through a parallel connected string is unknown and when the voltage across a series connected string is unknown. The method/system of the invention involves connecting one or more frequency-varying impedance components with known electrical characteristics to each string and applying a frequency-varying input signal to the network of strings. The frequency-varying impedance components may be one or more capacitors, inductors, or both, and are selected so that each string is uniquely identifiable in the output signal resulting from the frequency-varying input signal. Numerical methods, such as non-linear regression, may then be used to resolve the resistance associated with each string.
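A hedged sketch of the idea, assuming each string carries a known, distinct capacitor and that the complex network admittance can be measured at several probe frequencies (the patent's exact circuit and signals are not reproduced here); scipy's nonlinear least squares stands in for the numerical methods the patent mentions.

```python
import numpy as np
from scipy.optimize import least_squares

# Known, deliberately distinct capacitors uniquely tag each string (assumption).
C = np.array([1e-6, 2.2e-6, 4.7e-6])          # farads
omega = 2 * np.pi * np.logspace(1, 5, 40)     # probe frequencies, rad/s

def network_admittance(R, omega):
    # Parallel strings: each string is an unknown R_i in series with known C_i.
    Z = R[None, :] + 1.0 / (1j * omega[:, None] * C[None, :])
    return (1.0 / Z).sum(axis=1)

# Synthetic "measurement" generated with the true (unknown) resistances.
R_true = np.array([10.0, 47.0, 100.0])
Y_meas = network_admittance(R_true, omega)

def residual(R):
    Y = network_admittance(R, omega)
    return np.concatenate([(Y - Y_meas).real, (Y - Y_meas).imag])

fit = least_squares(residual, x0=np.full(3, 50.0), bounds=(0, np.inf))
print(fit.x)   # recovers ~[10, 47, 100] ohms
```

Because each capacitor shifts its string's frequency response differently, every string is uniquely identifiable in the output signal, which is what makes the regression well posed.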
NASA Astrophysics Data System (ADS)
Hao, San-Ru; Hou, Bo-Yu; Xi, Xiao-Qiang; Yue, Rui-Hong
2003-02-01
In this paper we generalize standard teleportation to the conclusive teleportation case, which can teleport an arbitrary d-dimensional N-particle unknown state via a partially entangled quantum channel. We show that only if the quantum channel satisfies a constraint condition can the most general d-dimensional N-particle unknown state be perfectly and conclusively teleported. We also present a method for optimal conclusive teleportation of the N-particle states and for constructing the joint POVM which can discern the quantum states on the sender's (Alice's) side. Two typical examples are given so that one can see how our method works. The project was supported in part by the National Natural Science Foundation of China under Grant No. 19975036 and the Foundation of the Science and Technology Committee of Hunan Province of China under Grant No. 21000205.
An Efficient Solution Method for Multibody Systems with Loops Using Multiple Processors
NASA Technical Reports Server (NTRS)
Ghosh, Tushar K.; Nguyen, Luong A.; Quiocho, Leslie J.
2015-01-01
This paper describes a multibody dynamics algorithm formulated for parallel implementation on multiprocessor computing platforms using the divide-and-conquer approach. The system of interest is a general topology of rigid and elastic articulated bodies with or without loops. The algorithm divides the multibody system into a number of smaller sets of bodies in chain or tree structures, called "branches", at convenient joints called "connection points", and uses an Order-N (O(N)) approach to formulate the dynamics of each branch in terms of the unknown spatial connection forces. The equations of motion for the branches, leaving the connection forces as unknowns, are implemented in separate processors in parallel for computational efficiency, and the equations for all the unknown connection forces are synthesized and solved in one or several processors. The performances of two implementations of this divide-and-conquer algorithm on multiple processors are compared with an existing method implemented on a single processor.
A new statistical method for design and analyses of component tolerance
NASA Astrophysics Data System (ADS)
Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam
2017-03-01
Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method is needed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerance. We use the percentile method (PM) to estimate the distribution parameters. The findings indicate that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerance. Moreover, in the case of assembled sets, a wider tolerance for each component with the same target performance can be utilized.
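A minimal sketch of the percentile method for the generalized lambda distribution in the RS parameterization, assuming the standard Karian-Dudewicz percentile statistics with u = 0.1; the starting values and the least-squares solve are pragmatic choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def gld_quantile(u, lam):
    # RS parameterization: Q(u) = l1 + (u**l3 - (1-u)**l4) / l2
    l1, l2, l3, l4 = lam
    return l1 + (u**l3 - (1.0 - u)**l4) / l2

def percentile_stats(q):
    # Karian-Dudewicz percentile statistics (location, spread, skew, tail weight).
    p = {u: q(u) for u in (0.1, 0.25, 0.5, 0.75, 0.9)}
    return np.array([p[0.5],
                     p[0.9] - p[0.1],
                     (p[0.5] - p[0.1]) / (p[0.9] - p[0.5]),
                     (p[0.75] - p[0.25]) / (p[0.9] - p[0.1])])

def fit_gld_pm(data):
    target = percentile_stats(lambda u: float(np.quantile(data, u)))
    resid = lambda lam: percentile_stats(lambda u: gld_quantile(u, lam)) - target
    # Heuristic start near a normal-like GLD shape (an assumption).
    x0 = [target[0], 0.5 / target[1], 0.13, 0.13]
    return least_squares(resid, x0).x

rng = np.random.default_rng(1)
lam = fit_gld_pm(rng.normal(10.0, 0.05, 5000))
# Tolerance limits as the central 99.73% span (the +/-3 sigma analog):
print(gld_quantile(0.00135, lam), gld_quantile(0.99865, lam))
```

Once the four lambdas are fitted, tolerance limits are read straight off the quantile function, with no normality assumption anywhere in the chain.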
Treatment Patterns for Cervical Carcinoma In Situ in Michigan, 1998-2003
Patel, Divya A.; Saraiya, Mona; Copeland, Glenn; Cote, Michele L.; Datta, S. Deblina; Sawaya, George F.
2015-01-01
Objective To characterize population-level surgical treatment patterns for cervical carcinoma in situ (CIS) reported to the Michigan Cancer Surveillance Program (MCSP), and to inform data collection strategies. Methods All cases of cervical carcinoma in situ (CIS) (including cervical intraepithelial neoplasia grade 3 and adenocarcinoma in situ [AIS]) reported to the MCSP during 1998–2003 were identified. First course of treatment (ablative procedure, cone biopsy, loop electrosurgical excisional procedure [LEEP], hysterectomy, unspecified surgical treatment, no surgical treatment, unknown if surgically treated) was described by histology, race, and age at diagnosis. Results Of 17,022 cases of cervical CIS, 82.8% were squamous CIS, 3% AIS/adenosquamous CIS, and 14.2% unspecified/other CIS. Over half (54.7%) of cases were diagnosed in women under age 30. Excisional treatments (LEEP, 32.3% and cone biopsy, 17.3%) were most common, though substantial proportions had no reported treatment (17.8%) or unknown treatment (21.1%). Less common were hysterectomy (7.2%) and ablative procedures (2.6%). LEEP was the most common treatment for squamous cases, while hysterectomy was the most common treatment for AIS/adenosquamous CIS cases. Across histologic types, a sizeable proportion of women diagnosed ≤30 years of age underwent excision, either LEEP (20%–38.7%) or cone biopsy (13.7%–44%). Conclusion Despite evidence suggesting it may be safer than and as effective as excision, ablation was rarely used for treating cervical squamous CIS. These population-based data indicate some notable differences in treatment by histology and age at diagnosis, with observed patterns appearing consistent with consensus guidelines in place at the time of study, but favoring more aggressive procedures. Future data collection strategies may need to validate treatment information, including the large proportion of no or unknown treatment. PMID:24002133
Irie, Miho; Hayakawa, Eisuke; Fujimura, Yoshinori; Honda, Youhei; Setoyama, Daiki; Wariishi, Hiroyuki; Hyodo, Fuminori; Miura, Daisuke
2018-01-29
Clinical application of the major anticancer drug, cisplatin, is limited by severe side effects, especially acute kidney injury (AKI) caused by nephrotoxicity. The detailed metabolic mechanism is still largely unknown. Here, we used an integrated technique combining mass spectrometry imaging (MSI) and liquid chromatography-mass spectrometry (LC-MS) to visualize the diverse spatiotemporal metabolic dynamics in the mouse kidney after cisplatin dosing. Biological responses to cisplatin were detected more sensitively, within 24 h, as a metabolic alteration, which is much earlier than possible with the conventional clinical chemistry method of blood urea nitrogen (BUN) measurement. Region-specific changes (e.g., medulla and cortex) in metabolites related to DNA damage and energy generation were observed over the 72-h exposure period. Therefore, this metabolomics approach may become a novel strategy for elucidating early renal responses to cisplatin, prior to the detection of kidney damage by conventional methods. Copyright © 2018. Published by Elsevier Inc.
Full-potential modeling of blade-vortex interactions
NASA Technical Reports Server (NTRS)
Jones, H. E.; Caradonna, F. X.
1986-01-01
A comparison is made of four different models for predicting the unsteady loading induced by a vortex passing close to an airfoil. (1) The first model approximates the vortex effect as a change in the airfoil angle of attack. (2) The second model is related to the first but, instead of imposing only a constant velocity on the airfoil, the distributed effect of the vortex is computed and used; this is analogous to a lifting surface method. (3) The third model specifies a branch cut discontinuity in the potential field: the vortex is modeled as a jump in potential across the branch cut, the edge of which represents the center of the vortex. (4) The fourth model represents the vortex by expressing the potential as the sum of a known potential due to the vortex and an unknown perturbation due to the airfoil. The purpose of the current study is to investigate the four vortex models described above and to determine their relative merits and suitability for use in large three-dimensional codes.
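Model (1) reduces to a single formula: the vortex-induced upwash is folded into an equivalent angle-of-attack change. A small sketch with illustrative numbers (not the paper's test cases):

```python
import numpy as np

# A 2-D point vortex of circulation Gamma at distance d below the airfoil
# induces an upwash v = Gamma / (2*pi*d); combining it with the freestream U
# gives an equivalent angle-of-attack shift, and thin-airfoil theory then
# gives the corresponding lift response.
U, Gamma, d = 100.0, 20.0, 2.0            # m/s, m^2/s, m (illustrative)
v = Gamma / (2.0 * np.pi * d)             # induced upwash, m/s
dalpha = np.arctan2(v, U)                 # equivalent AoA shift, rad
dCl = 2.0 * np.pi * dalpha                # thin-airfoil lift-curve slope
print(np.degrees(dalpha), dCl)
```

Models (2) through (4) refine exactly this picture: (2) distributes the induced velocity along the chord instead of using one value, while (3) and (4) carry the vortex directly in the potential formulation.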
Beckwith, Marianne Sandvold; Beckwith, Kai Sandvold; Sikorski, Pawel; Skogaker, Nan Tostrup
2015-01-01
Mycobacteria pose a threat to world health today, with pathogenic and opportunistic bacteria causing tuberculosis and non-tuberculous disease in large parts of the population. Much is still unknown about the interplay between bacteria and host during infection and disease, and more research is needed to meet the challenge of drug resistance and inefficient vaccines. This work establishes a reliable and reproducible method for performing correlative imaging of human macrophages infected with mycobacteria at ultra-high resolution and in 3D. Focused Ion Beam/Scanning Electron Microscopy (FIB/SEM) tomography is applied, together with confocal fluorescence microscopy for localization of appropriately infected cells. The method is based on an Aclar poly(chloro-tri-fluoro)ethylene substrate, micropatterned into an advantageous geometry by a simple thermomoulding process. The platform increases the throughput and quality of FIB/SEM tomography analyses, and was successfully applied to detail the intracellular environment of a whole mycobacterium-infected macrophage in 3D. PMID:26406896
Genovo: De Novo Assembly for Metagenomes
NASA Astrophysics Data System (ADS)
Laserson, Jonathan; Jojic, Vladimir; Koller, Daphne
Next-generation sequencing technologies produce a large number of noisy reads from the DNA in a sample. Metagenomics and population sequencing aim to recover the genomic sequences of the species in the sample, which could be of high diversity. Methods geared towards single sequence reconstruction are not sensitive enough when applied in this setting. We introduce a generative probabilistic model of read generation from environmental samples and present Genovo, a novel de novo sequence assembler that discovers likely sequence reconstructions under the model. A Chinese restaurant process prior accounts for the unknown number of genomes in the sample. Inference is made by applying a series of hill-climbing steps iteratively until convergence. We compare the performance of Genovo to three other short read assembly programs across one synthetic dataset and eight metagenomic datasets created using the 454 platform, the largest of which has 311k reads. Genovo's reconstructions cover more bases and recover more genes than the other methods, and yield a higher assembly score.
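The Chinese restaurant process prior can be sketched in a few lines; the seating probabilities follow the standard CRP rule, with alpha and the toy sizes chosen for illustration rather than taken from Genovo.

```python
import numpy as np

def crp_assignments(n_reads, alpha, seed=0):
    """Draw cluster (genome) labels for n_reads reads from a CRP prior.

    alpha controls how readily new genomes (tables) are opened; the number
    of clusters is unbounded a priori, matching the unknown number of
    genomes in a metagenomic sample.
    """
    rng = np.random.default_rng(seed)
    counts, labels = [], []
    for n in range(n_reads):
        # Existing cluster k with prob c_k/(n+alpha); a new one with alpha/(n+alpha).
        probs = np.array(counts + [alpha], dtype=float) / (n + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)          # open a new genome
        else:
            counts[k] += 1
        labels.append(k)
    return labels

print(crp_assignments(20, alpha=1.5))
```

In the full model these assignments are latent variables updated by the hill-climbing steps, not sampled once up front as in this toy draw.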
UO2 fuel pellets fabrication via Spark Plasma Sintering using non-standard molybdenum die
NASA Astrophysics Data System (ADS)
Papynov, E. K.; Shichalin, O. O.; Mironenko, A. Yu; Tananaev, I. G.; Avramenko, V. A.; Sergienko, V. I.
2018-02-01
The article investigates spark plasma sintering (SPS) of commercial uranium dioxide (UO2) powder of ceramic origin into highly dense fuel pellets using a non-standard die instead of the usual graphite die. An alternative and formerly unknown method is suggested for fabricating UO2 fuel pellets by SPS that avoids the typical problems related to undesirable carbon diffusion. The influence of SPS parameters on the chemical composition and quality of the UO2 pellets has been studied, and the main advantages and drawbacks of SPS consolidation of UO2 in a non-standard molybdenum die have been identified. The method is very promising due to the high quality of the final product (density 97.5-98.4% of theoretical, absence of carbon traces, mean grain size below 3 μm) and mild sintering conditions (temperature 1100 °C, pressure 141.5 MPa, sintering time 25 min). The results are of interest for the development and possible application of SPS in large-scale production of nuclear ceramic fuel.
Extending Measurements to En=30 MeV and Beyond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, Dana Lynn
The majority of energy release in the fission process is due to the kinetic energy of the fission fragments. Average total kinetic energy (TKE) measurements for the major actinides over a wide range of incident neutron energies were performed at LANSCE using a Frisch-gridded ionization chamber. The experiments and results for 238U(n,f) and 235U(n,f) will be presented, including ⟨TKE⟩(En), ⟨TKE⟩(A), and mass yield distributions as a function of neutron energy. A preliminary ⟨TKE⟩(En) for 239Pu(n,f) will also be shown. The ⟨TKE⟩(En) shows a clear structure at multichance fission thresholds for all the reactions that we studied. The fragment masses are determined using the iterative double energy (2E) method, with a resolution of ΔA = 4-5 amu. The correction for the prompt fission neutrons is the main source of uncertainty, especially at high incident neutron energies, since the behavior of ν̄(A,En) is largely unknown.
Peterson, Gunnel; Nilsson, David; Trygg, Johan; Falla, Deborah; Dedering, Åsa; Wallman, Thorne; Peolsson, Anneli
2015-10-16
Chronic whiplash-associated disorder (WAD) is common after whiplash injury, with considerable personal, social, and economic burden. Despite decades of research, the factors responsible for continuing pain and disability are largely unknown, and diagnostic tools are lacking. Here, we report a novel model of mechanical ventral neck muscle function derived from non-invasive, real-time ultrasound measurements. We calculated the deformation area and deformation rate in 23 individuals with persistent WAD and compared them to 23 sex- and age-matched controls. Multivariate statistics were used to analyse interactions between ventral neck muscles, revealing a different interplay between muscles in individuals with WAD and healthy controls. Although a cause-and-effect relation cannot be established from these data, for the first time we reveal a method capable of detecting different neck muscle interplay in people with WAD. This non-invasive method stands to make a major breakthrough in the assessment and diagnosis of people following whiplash trauma.
Striking circadian neuron diversity and cycling of Drosophila alternative splicing.
Wang, Qingqing; Abruzzi, Katharine C; Rosbash, Michael; Rio, Donald C
2018-06-04
Although alternative pre-mRNA splicing (AS) significantly diversifies the neuronal proteome, the extent of AS is still unknown due in part to the large number of diverse cell types in the brain. To address this complexity issue, we used an annotation-free computational method to analyze and compare the AS profiles between small specific groups of Drosophila circadian neurons. The method, the Junction Usage Model (JUM), allows the comprehensive profiling of both known and novel AS events from specific RNA-seq libraries. The results show that many diverse and novel pre-mRNA isoforms are preferentially expressed in one class of clock neuron and also absent from the more standard Drosophila head RNA preparation. These AS events are enriched in potassium channels important for neuronal firing, and there are also cycling isoforms with no detectable underlying transcriptional oscillations. The results suggest massive AS regulation in the brain that is also likely important for circadian regulation. © 2018, Wang et al.
Phage phenomics: Physiological approaches to characterize novel viral proteins
Sanchez, Savannah E. [San Diego State Univ., San Diego, CA (United States); Cuevas, Daniel A. [San Diego State Univ., San Diego, CA (United States); Rostron, Jason E. [San Diego State Univ., San Diego, CA (United States); Liang, Tiffany Y. [San Diego State Univ., San Diego, CA (United States); Pivaroff, Cullen G. [San Diego State Univ., San Diego, CA (United States); Haynes, Matthew R. [San Diego State Univ., San Diego, CA (United States); Nulton, Jim [San Diego State Univ., San Diego, CA (United States); Felts, Ben [San Diego State Univ., San Diego, CA (United States); Bailey, Barbara A. [San Diego State Univ., San Diego, CA (United States); Salamon, Peter [San Diego State Univ., San Diego, CA (United States); Edwards, Robert A. [San Diego State Univ., San Diego, CA (United States); Argonne National Lab. (ANL), Argonne, IL (United States); Burgin, Alex B. [Broad Institute, Cambridge, MA (United States); Segall, Anca M. [San Diego State Univ., San Diego, CA (United States); Rohwer, Forest [San Diego State Univ., San Diego, CA (United States)
2018-06-21
Current investigations into phage-host interactions are dependent on extrapolating knowledge from (meta)genomes. Interestingly, 60 - 95% of all phage sequences share no homology to currently annotated proteins. As a result, a large proportion of phage genes are annotated as hypothetical. This reality heavily affects the annotation of both structural and auxiliary metabolic genes. Here we present phenomic methods designed to capture the physiological response(s) of a selected host during expression of one of these unknown phage genes. Multi-phenotype Assay Plates (MAPs) are used to monitor the diversity of host substrate utilization and subsequent biomass formation, while metabolomics provides by-product analysis by monitoring metabolite abundance and diversity. Both tools are used simultaneously to provide a phenotypic profile associated with expression of a single putative phage open reading frame (ORF). Representative results for both methods are compared, highlighting the phenotypic profile differences of a host carrying either putative structural or metabolic phage genes. In addition, the visualization techniques and high-throughput computational pipelines that facilitated experimental analysis are presented.
NASA Astrophysics Data System (ADS)
Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang
2016-09-01
Reduced order models (ROMs) based on snapshots of high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the region where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated-mean-squared-error prediction algorithm while showing the same level of prediction accuracy.
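A hedged sketch of the indicator idea: plain fuzzy c-means clusters the existing sample sites, and the cluster carrying the largest fuzzily weighted surrogate error marks where to add snapshots. The clustering and the error weighting are generic stand-ins for the paper's exact strategy.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; a stand-in for the paper's clustering step."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        W = U**m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard membership update: u_ik = 1 / sum_j (d_ik/d_ij)^(2/(m-1)).
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d**p * (1.0 / d**p).sum(axis=1, keepdims=True))
    return centers, U

# Cluster the ROM's sampling sites; the cluster whose members carry the
# largest validation error marks the region where new snapshots are added.
X = np.random.default_rng(1).random((40, 2))       # existing sample sites
err = np.random.default_rng(2).random(40)          # surrogate error at each site
centers, U = fuzzy_cmeans(X)
worst = np.argmax(U.T @ err)                       # fuzzily weighted error per cluster
print("add new samples near", centers[worst])
```

The fuzzy memberships matter here: a site near a cluster boundary contributes its error to several clusters, so the indicator is not oversensitive to how the partition falls.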
Rapid Assessment of Genotoxicity by Flow Cytometric Detection of Cell Cycle Alterations.
Bihari, Nevenka
2017-01-01
Flow cytometry is a convenient method for the determination of genotoxic effects of environmental pollution and can reveal genotoxic compounds in unknown environmental mixtures. It is especially suitable for the analysis of large numbers of samples during monitoring programs. The speed of detection is one of the advantages of this technique, which permits the acquisition of 10⁴-10⁵ cells per sample in 5 min. This method can rapidly detect cell cycle alterations resulting from DNA damage. The outcome of such an analysis is a diagram of DNA content across the cell cycle which indicates cell proliferation, G2 arrests, G1 delays, apoptosis, and ploidy. Here, we present the flow cytometric procedure for rapid assessment of genotoxicity via detection of cell cycle alterations. The described protocol simplifies the analysis of genotoxic effects in marine environments and is suitable for monitoring purposes. It uses marine mussel cells in the analysis and can be adapted to investigations on a broad range of marine invertebrates.
Generalized Minimum-Time Follow-up Approaches Applied to Electro-Optical Sensor Tasking
NASA Astrophysics Data System (ADS)
Murphy, T. S.; Holzinger, M. J.
This work proposes a methodology for tasking sensors to search an area of state space for a particular object, group of objects, or class of objects. This work creates a general unified mathematical framework for analyzing reacquisition, search, scheduling, and custody operations. In particular, this work looks at searching for unknown space object(s) with prior knowledge in the form of a set, which can be defined via an uncorrelated track, a region of state space, or a variety of other methods. The follow-up tasking can occur from a variable location and time, which often requires searching a large region of the sky. This work analyzes the area of a search region over time to inform a time-optimal search method. The simulation work analyzes search regions relative to a particular sensor and tests a tasking algorithm that searches through the region. The tasking algorithm is also validated on a reacquisition problem with a telescope system at Georgia Tech.
Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data
NASA Astrophysics Data System (ADS)
Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.
2018-03-01
We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.
A local immunization strategy for networks with overlapping community structure
NASA Astrophysics Data System (ADS)
Taghavian, Fatemeh; Salehi, Mostafa; Teimouri, Mehdi
2017-02-01
Since full-coverage treatment is not feasible due to limited resources, we need to utilize an immunization strategy to effectively distribute the available vaccines. On the other hand, the structure of the contact network among people has a significant impact on epidemics of infectious diseases (such as SARS and influenza) in a population. Therefore, network-based immunization strategies aim to reduce the spreading rate by removing the vaccinated nodes from the contact network. Such strategies try to identify the more important nodes in epidemics spreading over a network. In this paper, we address the effect of nodes that overlap several communities on epidemic spreading. The proposed strategy is an optimized random-walk-based selection of these nodes. The whole process is local, i.e. it requires contact network information only at the level of nodes. Thus, it is applicable to large-scale and unknown networks in which global methods are usually unrealizable. Our simulation results on different synthetic and real networks show that the proposed method outperforms existing local methods in most cases. In particular, for networks with strong community structures, high overlapping membership of nodes, or small communities, the proposed method shows better performance.
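A simplified stand-in for the proposed strategy, assuming a plain (unoptimized) random walk and a networkx toy graph; it keeps the key property that only node-level neighbor lists are ever queried.

```python
import networkx as nx
import numpy as np

def local_walk_immunize(G, start, budget, walk_len=5000, seed=0):
    """Pick vaccination targets by visit frequency of a local random walk.

    A simplified stand-in for the paper's optimized walk: frequently visited
    nodes tend to bridge communities (overlap nodes), and the walker only
    ever asks for the neighbors of its current node -- local information.
    """
    rng = np.random.default_rng(seed)
    visits = {}
    node = start
    for _ in range(walk_len):
        nbrs = list(G.neighbors(node))
        if not nbrs:
            break
        node = nbrs[rng.integers(len(nbrs))]
        visits[node] = visits.get(node, 0) + 1
    return sorted(visits, key=visits.get, reverse=True)[:budget]

G = nx.connected_caveman_graph(6, 8)     # toy graph with community structure
print(local_walk_immunize(G, start=0, budget=5))
```

No global quantity (degree sequence, community partition, centrality) is ever computed, which is what makes this family of strategies applicable to large or only partially known networks.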
NASA Technical Reports Server (NTRS)
Barkeshli, Kasra; Volakis, John L.
1991-01-01
The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature, which eliminates the need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
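The essence of CGFFT is that the discretized integral operator is a convolution, so conjugate gradients only ever needs a fast matrix-vector product and the dense system matrix is never formed. A one-dimensional sketch with an illustrative kernel, scaled so the operator stays positive definite:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# The integral operator acts as a circular convolution, so its matrix-vector
# product costs O(n log n) via the FFT. Kernel, scaling, and right-hand side
# are illustrative stand-ins, not an actual electromagnetic Green's function.
n = 256
idx = np.arange(n)
kernel = 0.1 / (1.0 + np.abs(idx - n // 2))       # symmetric, contraction-scaled
K = np.fft.rfft(np.roll(kernel, -n // 2))         # kernel spectrum

def matvec(u):
    # Apply (I + G) u with the convolution evaluated in Fourier space.
    return u + np.fft.irfft(K * np.fft.rfft(u), n)

A = LinearOperator((n, n), matvec=matvec)
b = np.sin(2 * np.pi * idx / n)                   # incident-field stand-in
u, info = cg(A, b)                                # CG never needs A's entries
print(info, np.linalg.norm(matvec(u) - b))        # info == 0 on convergence
```

Memory scales with the grid, not its square, which is exactly the saving the abstract attributes to avoiding explicit formation of the system matrix.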
Methods for the computation of detailed geoids and their accuracy
NASA Technical Reports Server (NTRS)
Rapp, R. H.; Rummel, R.
1975-01-01
Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.
The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory
Bosbach, Wolfram A.
2015-01-01
Background The finite element method has complemented research in the field of network mechanics in the past years in numerous studies about various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivational aspects for these numerical studies. The widespread availability of high performance computing facilities has been the enabler for the simulation of sufficiently large systems. Objectives and Motivation In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as a simplification method. Results and Conclusions The obtained material stiffness is not a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603
[Using neural networks based template matching method to obtain redshifts of normal galaxies].
Xu, Xin; Luo, A-li; Wu, Fu-chao; Zhao, Yong-heng
2005-06-01
Galaxies can be divided into two classes: normal galaxies (NG) and active galaxies (AG). In order to determine NG redshifts, an automatic and effective method is proposed in this paper, which consists of the following three main steps: (1) From the template of a normal galaxy, two sets of samples are simulated, one with redshifts of 0.0-0.3, the other of 0.3-0.5; PCA is then used to extract the principal components, and the training samples are projected onto the principal component subspace to obtain characteristic spectra. (2) The characteristic spectra are used to train a Probabilistic Neural Network to obtain a Bayes classifier. (3) An unknown real NG spectrum is first input to this Bayes classifier to determine the possible range of its redshift; then template matching is invoked to locate the redshift value within the estimated range. Compared with the traditional template matching technique over an unconstrained range, our proposed method not only halves the computational load, but also increases the estimation accuracy. As a result, the proposed method is particularly useful for the automatic processing of spectra produced by large-scale sky survey projects.
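A compressed sketch of the two-stage idea: a coarse classifier (stubbed out here) narrows the redshift range, and chi-square template matching runs only within it. The template, grids, and noise level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
loglam = np.linspace(3.5, 3.9, 2000)                 # log10(wavelength) grid

def redshifted(template, z):
    # In log-wavelength, redshift is a simple shift of the spectrum.
    return np.interp(loglam, loglam + np.log10(1 + z), template)

template = np.exp(-0.5 * ((loglam - 3.7) / 0.004)**2)  # one emission line

def best_z(spec, z_lo, z_hi):
    zs = np.arange(z_lo, z_hi, 1e-3)
    # Chi-square template matching over the classifier-selected range only.
    chi2 = [np.sum((spec - redshifted(template, z))**2) for z in zs]
    return zs[np.argmin(chi2)]

spec = redshifted(template, 0.123) + 0.05 * rng.standard_normal(loglam.size)
coarse_range = (0.0, 0.3)        # pretend the PCA+PNN classifier returned this bin
print(best_z(spec, *coarse_range))   # ~0.123, at half the full-range cost
```

The halved computational load claimed in the abstract falls straight out of this structure: the expensive matching loop runs over one redshift bin instead of two.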
Bootstrap Methods: A Very Leisurely Look.
ERIC Educational Resources Information Center
Hinkle, Dennis E.; Winstead, Wayland H.
The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for estimating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
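The same idea in a few lines of Python rather than SAS; the statistic, sample, and replicate count are arbitrary choices for illustration.

```python
import numpy as np

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Bootstrap standard error of an arbitrary statistic (a Python stand-in
    for the article's SAS routine): resample with replacement, recompute the
    statistic, and take the spread of the replicates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.std(reps, ddof=1)

x = np.random.default_rng(1).exponential(2.0, size=50)
print(bootstrap_se(x, np.median))   # SE of the sample median, no formula needed
```

The appeal the article emphasizes is visible here: the same routine covers statistics (medians, discriminant coefficients, factor loadings) whose standard errors have no convenient closed form.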
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-25
A novel strategy that combines an iteratively cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction preprocessing with principal component analysis (PCA) and DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information in the differences of relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
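A simplified reading of the ICSF step: fit a cubic smoothing spline, clip the signal to the fit so peaks stop pulling the baseline upward, and iterate. The smoothing factor and iteration count are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def icsf_baseline(x, y, n_iter=30, s_factor=1.0):
    """Iterative cubic-spline baseline estimate (sketch of the ICSF idea).

    Points above the current spline are clipped to it each round, so sharp
    Raman peaks are progressively excluded and the spline settles onto the
    slowly varying background.
    """
    w = y.copy()
    for _ in range(n_iter):
        spl = UnivariateSpline(x, w, k=3, s=s_factor * len(x) * np.var(w))
        w = np.minimum(w, spl(x))
    return spl(x)

x = np.linspace(0, 100, 500)
raman = 50 * np.exp(-0.5 * ((x - 40) / 1.5)**2)           # a SERS peak
baseline = 0.002 * (x - 30)**2 + 5                         # curved background
y = raman + baseline + np.random.default_rng(0).normal(0, 0.3, x.size)
corrected = y - icsf_baseline(x, y)
print(np.abs(corrected - raman).mean())   # small residual vs. the known peak
```

Baseline removal of this kind is what lets the downstream DPLS classifier work from relative peak intensities rather than from the varying fluorescence background.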
Fu, Yue; Chai, Tianyou
2016-12-01
Regarding two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics, this paper presents an online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair. First, for known systems, the simultaneous policy updating algorithm (SPUA) is reviewed, and a new analytical method to prove its convergence is presented. Then, based on the SPUA, without using a priori knowledge of any system dynamics, an online algorithm is proposed to simultaneously learn in real time either the minimal nonnegative solution of the Hamilton-Jacobi-Isaacs (HJI) equation or, for linear systems as a special case, the generalized algebraic Riccati equation, along with the optimal policy pair. The approximate solution to the HJI equation and the admissible policy pair are re-expressed via the approximation theorem. The unknown constants or weights of each are identified simultaneously using the recursive least squares method. The convergence of the online algorithm to the optimal solutions is proved. A practical online algorithm is also developed. Simulation results illustrate the effectiveness of the proposed method.
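The weight-identification step can be illustrated with generic recursive least squares; this is a stand-in for the paper's identifier, not the full online SPUA loop.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step: refine unknown weights theta from a
    regressor vector phi and a scalar measurement y (generic RLS, standing in
    for the paper's value/policy weight identifier)."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # correct by the innovation
    P = (P - np.outer(k, Pphi)) / lam      # covariance downdate
    return theta, P

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])        # unknown weights to identify
theta, P = np.zeros(3), 1e3 * np.eye(3)
for _ in range(200):
    phi = rng.standard_normal(3)
    y = phi @ w_true + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)    # converges to ~[2, -1, 0.5]
```

Because the update is recursive, each new measurement refines the estimate in constant time, which is what makes simultaneous online identification feasible.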
Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.
Sokoloski, Sacha
2017-09-01
In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
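A toy finite-state Bayes filter showing the predict/update cycle; in the paper the transition model is unknown and a trained network approximates the prediction step, whereas here T is given for illustration.

```python
import numpy as np

# Prediction with the transition model, then a Bayes-rule update with the
# response likelihood. The matrices are toy values.
T = np.array([[0.9, 0.1],      # T[i, j] = P(next state j | current state i)
              [0.2, 0.8]])
L = np.array([[0.7, 0.3],      # L[o, s] = P(observation o | state s)
              [0.3, 0.7]])

def bayes_filter_step(belief, obs):
    predicted = T.T @ belief               # marginalize over the previous state
    posterior = L[obs] * predicted         # Bayes' rule with the likelihood
    return posterior / posterior.sum()     # renormalize

belief = np.array([0.5, 0.5])
for obs in [0, 0, 1, 0]:
    belief = bayes_filter_step(belief, obs)
print(belief)   # posterior over the two stimulus states
```

The note's training problem is, in effect, learning the `predicted = T.T @ belief` line from data when T is not available.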
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Li, Deng; Lu, Xiaoming; Cheng, Xinyi; Wang, Liwei
2014-10-01
Continuous crystal-based positron emission tomography (PET) detectors could be an ideal alternative to current high-resolution pixelated PET detectors if the issues of high-performance γ interaction position estimation and its real-time implementation are solved. Unfortunately, existing position estimators are not very feasible for implementation on field-programmable gate arrays (FPGAs). In this paper, we propose a new self-organizing map neural network-based nearest neighbor (SOM-NN) positioning scheme aimed not only at providing high performance, but also at being realistic for FPGA implementation. Benefiting from the SOM feature mapping mechanism, the large set of input reference events at each calibration position is approximated by a small set of prototypes, and the computation of the nearest-neighbor search for unknown events is largely reduced. Using our experimental data, the scheme was evaluated, optimized and compared with the smoothed k-NN method. The spatial resolutions in full-width-at-half-maximum (FWHM) of the two methods, averaged over the center axis of the detector, were 1.87 ±0.17 mm and 1.92 ±0.09 mm, respectively. The test results show that the SOM-NN scheme has positioning performance equivalent to the smoothed k-NN method, but its amount of computation is only about one-tenth that of the smoothed k-NN method. In addition, the algorithm structure of the SOM-NN scheme is more feasible for implementation on an FPGA. It has the potential to realize real-time position estimation on an FPGA with a high event-processing throughput.
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on the resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to carry out the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
Organic Spectroscopy Laboratory: Utilizing IR and NMR in the Identification of an Unknown Substance
ERIC Educational Resources Information Center
Glagovich, Neil M.; Shine, Timothy D.
2005-01-01
A laboratory experiment that emphasizes the interpretation of both infrared (IR) and nuclear magnetic resonance (NMR) spectra in the elucidation of the structure of an unknown compound was developed. The method helps students determine ¹H- and ¹³C-NMR spectra from the structures of compounds and to…
Antibiotic resistance genes and residual antimicrobials in cattle feedlot surface soil
USDA-ARS?s Scientific Manuscript database
Cattle feedlot soils receive manure containing both antibiotic residues and antibiotic resistant bacteria. The fates of these constituents are largely unknown with potentially serious consequences for increased antibiotic resistance in the environment. Determine if common antimicrobials (tetracycl...
Quantitative real-time imaging of glutathione
USDA-ARS?s Scientific Manuscript database
Glutathione plays many important roles in biological processes; however, the dynamic changes of glutathione concentrations in living cells remain largely unknown. Here, we report a reversible reaction-based fluorescent probe—designated as RealThiol (RT)—that can quantitatively monitor the real-time ...
Signaling hierarchy regulating human endothelial cell development
USDA-ARS?s Scientific Manuscript database
Our present knowledge of the regulation of mammalian endothelial cell differentiation has been largely derived from studies of mouse embryonic development. However, unique mechanisms and hierarchy of signals that govern human endothelial cell development are unknown and, thus, explored in these stud...
The caprine abomasal microbiome
USDA-ARS?s Scientific Manuscript database
Parasitism is considered the number one health problem in small ruminants. The barber's pole worm Haemonchus contortus infection in goats elicits a strong host immune response. However, the effect of the parasitic infection on the structure and function of the gut microbiome remains largely unknown....
NASA Astrophysics Data System (ADS)
Secmen, Mustafa
2011-10-01
This paper introduces the performance of an electromagnetic target recognition method in the resonance scattering region, which combines the pseudospectrum Multiple Signal Classification (MUSIC) algorithm with principal component analysis (PCA). The aim of this method is to classify an "unknown" target as one of the "known" targets in an aspect-independent manner. The suggested method initially collects the late-time portions of noise-free time-scattered signals obtained from different reference aspect angles of the known targets. Afterward, these signals are used to obtain MUSIC spectra in the real frequency domain, which have super-resolution ability and noise resistance. In the final step, PCA is applied to these spectra in order to reduce dimensionality and obtain only one feature vector per known target. In the decision stage, the noise-free or noisy scattered signal of an unknown (test) target from an unknown aspect angle is first obtained. Subsequently, the MUSIC algorithm is applied to this test signal, and the resulting test vector is compared with the feature vectors of the known targets one by one. Finally, the highest correlation gives the type of the test target. The method is applied to wire models of airplane targets, and it is shown that it can tolerate considerable noise levels even though it uses only a few reference aspect angles. Moreover, the runtime of the method for a test target is sufficiently low, which makes the method suitable for real-time applications.
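The pseudospectrum step can be illustrated with textbook MUSIC on a toy two-source problem; the uniform sampling model and all parameters are assumptions, not the paper's scattered-field setup or its PCA stage.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, grid):
    """Pseudospectrum MUSIC: project candidate steering vectors onto the
    noise subspace of the sample covariance; near-orthogonality (a peak)
    flags a true frequency."""
    M = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
    En = vecs[:, : M - n_sources]              # noise-subspace eigenvectors
    m = np.arange(M)
    spec = np.empty(len(grid))
    for i, f in enumerate(grid):
        a = np.exp(2j * np.pi * f * m)         # steering vector at frequency f
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return spec

# Two sources in noise; the pseudospectrum peaks recover their frequencies.
rng = np.random.default_rng(0)
n, snaps = 16, 200
S = np.exp(2j * np.pi * np.outer(np.arange(n), [0.12, 0.31]))
A = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
X = S @ A + 0.1 * (rng.standard_normal((n, snaps))
                   + 1j * rng.standard_normal((n, snaps)))
grid = np.linspace(0.0, 0.5, 500)
spec = music_spectrum(X, n_sources=2, grid=grid)
peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
print(np.sort(grid[peaks[np.argsort(spec[peaks])[-2:]]]))   # ~[0.12, 0.31]
```

The super-resolution and noise resistance cited in the abstract come from this subspace split: resolution is not limited by the FFT bin width, and noise energy is averaged into the covariance estimate.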
Curvelet-domain multiple matching method combined with cubic B-spline function
NASA Astrophysics Data System (ADS)
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
Because the large number of surface-related multiples in marine data can seriously degrade the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination methods are based on data-driven theory. However, the elimination effect is often unsatisfactory owing to amplitude and phase errors, and although the subsequent curvelet-domain multiple-primary separation method achieves better results, its poor computational efficiency has prevented wide application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficients; second, the cubic B-spline function is applied to these basis points to reconstruct the full matching array; third, a constrained solving equation is built from the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize a fast sparsity-constrained solution of the multiple matching problem. Moreover, a soft-threshold method is used to further improve performance. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the L1-norm-constrained solving procedure. Applications to both synthetic and field data validate the practicability and validity of the method.
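A minimal 1-D sketch of the matching step under stated assumptions: a single trace, SciPy's CubicSpline in place of an explicit B-spline basis, L-BFGS-B standing in for a hand-rolled BFGS loop, and illustrative values for the number of basis points, the penalty weight, and the soft threshold.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

def match_multiples(data, predicted, n_basis=12, lam=1e-2):
    """Estimate a smooth matching-coefficient curve from a few basis
    points, expand it to trace length with a cubic spline, and solve
    the penalized misfit with L-BFGS-B; a soft threshold then
    suppresses small residual leakage."""
    n = len(data)
    knots = np.linspace(0, n - 1, n_basis)
    t = np.arange(n)

    def expand(c):
        return CubicSpline(knots, c)(t)   # basis points -> full array

    def objective(c):
        resid = data - expand(c) * predicted
        return np.sum(resid ** 2) + lam * np.sum(np.abs(c))

    res = minimize(objective, np.ones(n_basis), method="L-BFGS-B")
    primaries = data - expand(res.x) * predicted
    thr = 0.05 * np.max(np.abs(primaries))          # illustrative threshold
    return np.sign(primaries) * np.maximum(np.abs(primaries) - thr, 0.0)
```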
Highly adaptive tests for group differences in brain functional connectivity.
Kim, Junghi; Pan, Wei
2015-01-01
Resting-state functional magnetic resonance imaging (rs-fMRI) and other technologies have been offering evidence and insights showing that altered brain functional networks are associated with neurological illnesses such as Alzheimer's disease. Exploring the brain networks of clinical populations compared to those of controls would be a key inquiry to reveal underlying neurological processes related to such illnesses. For such a purpose, group-level inference is a necessary first step in order to establish whether there are any genuinely disrupted brain subnetworks. Such an analysis is also challenging due to the high dimensionality of the parameters in a network model and high noise levels in neuroimaging data. We are still in the early stage of method development, as highlighted by Varoquaux and Craddock (2013): "there is currently no unique solution, but a spectrum of related methods and analytical strategies" to learn and compare brain connectivity. In practice, the important issue of how to choose several critical parameters in estimating a network, such as which association measure to use and what sparsity the estimated network should have, has not been carefully addressed, largely because the answers are as yet unknown. For example, even though the choice of tuning parameters in model estimation has been extensively discussed in the literature, as will be shown here, a parameter choice that is optimal for network estimation may not be optimal in the current context of hypothesis testing. Arbitrarily choosing or mis-specifying such parameters may lead to extremely low-powered tests. Here we develop highly adaptive tests to detect group differences in brain connectivity while accounting for the unknown optimal choices of some tuning parameters. The proposed tests combine statistical evidence against a null hypothesis from multiple sources across a range of plausible tuning parameter values, reflecting uncertainty about the unknown truth. These highly adaptive tests are not only easy to use, but also robustly high-powered across various scenarios. The usage and advantages of these novel tests are demonstrated on an Alzheimer's disease dataset and simulated data.
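A minimal sketch of the combine-across-tuning-parameters idea: a minP-style permutation test over candidate sparsity levels. The thresholding rule and the sum-of-squared-differences statistic are illustrative stand-ins, not the authors' exact tests.

```python
import numpy as np

def adaptive_network_test(group_a, group_b, sparsities=(0.05, 0.1, 0.2, 0.4),
                          n_perm=500, seed=0):
    """group_a/group_b: per-subject connectivity matrices, shape
    (n_subjects, p, p). For each sparsity level, networks are
    thresholded and a group-difference statistic computed; evidence is
    combined by the minimum permutation p-value across levels,
    recalibrated with the same permutations."""
    rng = np.random.default_rng(seed)
    X = np.concatenate([group_a, group_b])
    labels = np.array([0] * len(group_a) + [1] * len(group_b))

    def stats(lab):
        out = []
        for s in sparsities:
            thr = np.quantile(np.abs(X), 1 - s)      # keep strongest edges
            nets = np.where(np.abs(X) >= thr, X, 0.0)
            diff = nets[lab == 0].mean(0) - nets[lab == 1].mean(0)
            out.append(np.sum(diff ** 2))
        return np.array(out)

    T_obs = stats(labels)
    T_perm = np.array([stats(rng.permutation(labels)) for _ in range(n_perm)])
    # per-sparsity p-values for the observed data and each permutation
    p_obs = (1 + (T_perm >= T_obs).sum(0)) / (1 + n_perm)
    p_perm = (1 + (T_perm[:, None, :] >= T_perm[None, :, :]).sum(0)) / (1 + n_perm)
    # adaptive statistic: minimum p across tuning values, recalibrated
    return (1 + (p_perm.min(1) <= p_obs.min()).sum()) / (1 + n_perm)
```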
Method for evaluating moisture tensions of soils using spectral data
NASA Technical Reports Server (NTRS)
Peterson, John B. (Inventor)
1982-01-01
A method is disclosed that permits evaluation of soil moisture using remote sensing. Spectral measurements at a plurality of different wavelengths are taken for sample soils, and the resulting bidirectional reflectance factor (BRF) measurements are submitted to regression analysis to develop prediction equations from the observed relationships. Soil of unknown reflectance and unknown soil moisture tension is thereafter analyzed for bidirectional reflectance, and the resulting data are used to determine the soil moisture tension of the soil as well as to predict the bidirectional reflectance of the soil at other moisture tensions.
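A minimal sketch of the calibration-then-prediction workflow, assuming a linear prediction equation and synthetic placeholder BRF/tension values; the patent's actual regression form is not specified here.

```python
import numpy as np

# Calibration: BRF at several wavelengths for soils of known moisture
# tension. Values below are synthetic placeholders, not measured data.
brf_known = np.array([
    [0.42, 0.51, 0.60],   # dry soil, high reflectance
    [0.35, 0.44, 0.52],
    [0.28, 0.36, 0.43],
    [0.21, 0.28, 0.34],   # wet soil, low reflectance
])
tension_known = np.array([15.0, 5.0, 1.0, 0.3])   # bar

# Fit a linear prediction equation: tension ~ BRF (least squares).
A = np.column_stack([brf_known, np.ones(len(brf_known))])
coef, *_ = np.linalg.lstsq(A, tension_known, rcond=None)

# Apply to a soil of unknown tension from its measured BRF.
brf_unknown = np.array([0.30, 0.39, 0.47])
tension_est = np.append(brf_unknown, 1.0) @ coef
print(f"estimated moisture tension: {tension_est:.2f} bar")
```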
[Effects of azadirachtin on rice plant volatiles induced by Nilaparvata lugens].
Lu, Hai-Yan; Liu, Fang; Zhu, Shu-De; Zhang, Qing
2010-01-01
Using solid-phase microextraction (SPME), a total of twenty-five volatiles were collected from rice plants infested by Nilaparvata lugens; after azadirachtin application, fourteen of them were identified by gas chromatography coupled with mass spectrometry (GC-MS), mainly nine kinds of sesquiterpenes. Compared with healthy rice plants, the plants attacked by N. lugens emitted more kinds of volatiles, including limonene, linalool, methyl salicylate, unknown 6, unknown 7, zingiberene, nerolidol, and hexadecane. Applying azadirachtin did not result in the production of new kinds of volatiles, but it affected the relative concentrations of the volatiles induced by N. lugens. The proportions of limonene, linalool, methyl salicylate, unknown 6, zingiberene, and hexadecane changed markedly with the concentration of applied azadirachtin, while those of methyl salicylate, unknown 6, unknown 7, zingiberene, and nerolidol changed significantly with the number of days after azadirachtin application. Azadirachtin concentration, rice variety, and N. lugens density had significant interactive effects on the relative concentrations of all tested N. lugens-induced volatiles.
Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan
2017-09-01
While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has remained frustratingly hard to predict new materials from theory, even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining, for the first time, a global optimization method with neural network (NN) techniques. The global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel to generate a global training data set, the fitting of which by an atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method, with particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is used as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have thermodynamic stability similar to that of the common TiO2 rutile phase, and the kinetic stability of one of them is further confirmed by SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.
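A minimal sketch of the simultaneous energy/force training idea on a toy 1-D potential. The one-hidden-layer network, the force-loss weight lam, and the use of SciPy's L-BFGS-B with numerical gradients are illustrative simplifications of the atom-centered NN training described above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D potential standing in for a material PES.
def true_energy(x):
    return 0.1 * x ** 2 + np.sin(2.0 * x)

x_train = np.linspace(-3, 3, 60)
e_train = true_energy(x_train)
f_train = -(0.2 * x_train + 2.0 * np.cos(2.0 * x_train))   # F = -dE/dx

H = 16   # hidden units of a one-hidden-layer network

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

def nn_energy_force(p, x):
    w1, b1, w2, b2 = unpack(p)
    z = np.outer(x, w1) + b1                   # (n, H) pre-activations
    e = np.tanh(z) @ w2 + b2                   # predicted energies
    f = -(1.0 - np.tanh(z) ** 2) @ (w1 * w2)   # analytic F = -dE/dx
    return e, f

def loss(p, lam=0.1):
    e, f = nn_energy_force(p, x_train)
    # simultaneous energy + force fit, as in the SSW-NN training
    return np.mean((e - e_train) ** 2) + lam * np.mean((f - f_train) ** 2)

p0 = 0.1 * np.random.default_rng(0).standard_normal(3 * H + 1)
res = minimize(loss, p0, method="L-BFGS-B")
print("final loss:", res.fun)
```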
Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens
2017-12-01
The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges from a self-adjoint and positive-definite matrix whose maximal eigenvalue is computed by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much-reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
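A minimal sketch of the power-iteration step, assuming the scaling matrix is available only as a matrix-vector product; the 2x2 stand-in matrix and the inverse-square-root scaling convention are illustrative assumptions, not the paper's exact normalization.

```python
import numpy as np

def max_eigenvalue(apply_A, n, iters=50, seed=0):
    """Power iteration for the maximal eigenvalue of a self-adjoint,
    positive-definite operator given only as a matrix-vector product."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = apply_A(v)
        lam = v @ w                  # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
    return lam

# Example: scale an unknown block so its partial derivatives are
# numerically balanced against the other unknowns.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # stand-in for one block's operator
lam_max = max_eigenvalue(lambda v: A @ v, 2)
scale = 1.0 / np.sqrt(lam_max)
print(f"lambda_max ~ {lam_max:.4f}, scaling factor ~ {scale:.4f}")
```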
Chang, Jinyuan; Zhou, Wen; Zhou, Wen-Xin; Wang, Lan
2017-03-01
Comparing large covariance matrices has important applications in modern genomics, where scientists are often interested in understanding whether relationships (e.g., dependencies or co-regulations) among a large number of genes vary between different biological states. We propose a computationally fast procedure for testing the equality of two large covariance matrices when the dimensions of the covariance matrices are much larger than the sample sizes. A distinguishing feature of the new procedure is that it imposes no structural assumptions on the unknown covariance matrices. Hence, the test is robust with respect to various complex dependence structures that frequently arise in genomics. We prove that the proposed procedure is asymptotically valid under weak moment conditions. As an interesting application, we derive a new gene clustering algorithm which shares the same desirable property of avoiding restrictive structural assumptions for high-dimensional genomics data. Using an asthma gene expression dataset, we illustrate how the new test helps compare the covariance matrices of the genes across different gene sets/pathways between the disease group and the control group, and how the gene clustering algorithm provides new insights into the way gene clustering patterns differ between the two groups. The proposed methods have been implemented in the R package HDtest and are available on CRAN. © 2016, The International Biometric Society.
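A minimal sketch of a max-type, structure-free comparison of two covariance matrices. The entrywise standardization and permutation calibration are illustrative stand-ins for the paper's asymptotic theory; the authors' own implementation is the R package HDtest.

```python
import numpy as np

def cov_equality_test(X, Y, n_perm=500, seed=0):
    """Permutation version of a max-type test of H0: Sigma_X = Sigma_Y.

    Statistic: the largest standardized squared difference between
    corresponding sample-covariance entries, with no structural
    assumptions on the covariance matrices."""
    rng = np.random.default_rng(seed)

    def stat(a, b):
        n1, n2 = len(a), len(b)
        s1, s2 = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
        ca, cb = a - a.mean(0), b - b.mean(0)
        # entrywise variance estimates used for standardization
        v1 = np.var(ca[:, :, None] * ca[:, None, :], axis=0)
        v2 = np.var(cb[:, :, None] * cb[:, None, :], axis=0)
        return np.max((s1 - s2) ** 2 / (v1 / n1 + v2 / n2 + 1e-12))

    T = stat(X, Y)
    Z = np.vstack([X, Y])
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        count += stat(Z[idx[:len(X)]], Z[idx[len(X):]]) >= T
    return (1 + count) / (1 + n_perm)   # permutation p-value
```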
Clinic Network Collaboration and Patient Tracing to Maximize Retention in HIV Care.
McMahon, James H; Moore, Richard; Eu, Beng; Tee, Ban-Kiem; Chen, Marcus; El-Hayek, Carol; Street, Alan; Woolley, Ian; Buggie, Andrew; Collins, Danielle; Medland, Nicholas; Hoy, Jennifer
2015-01-01
Understanding retention and loss to follow-up in HIV care, in particular the number of people with unknown outcomes, is critical to maximise the benefits of antiretroviral therapy. Individual-level data are not available for these outcomes in Australia, which has an HIV epidemic predominantly focused amongst men who have sex with men. A network of the 6 main HIV clinical care sites was established in the state of Victoria, Australia. Individuals who had accessed care at these sites between February 2011 and June 2013, as assessed by HIV viral load testing, but had not accessed care between June 2013 and February 2014 were considered individuals with potentially unknown outcomes. For this group, an intervention combining cross-referencing of clinical data between sites and phone tracing of individuals with unknown outcomes was performed. 4966 people were in care in the network, and before the intervention estimates of retention ranged from 85.9% to 95.8% while the proportion with unknown outcomes ranged from 1.3% to 5.5%. After the intervention, retention increased to 91.4%-98.8% and unknown outcomes decreased to 0.1%-2.4% (p<.01 for all sites for both outcomes). The most common reasons for disengagement from care were being too busy to attend or feeling well. For those with unknown outcomes prior to the intervention, documented active psychiatric illness at the last visit was associated with not re-entering care (p = 0.04). The network demonstrated low numbers of people with unknown outcomes and high levels of retention in care. The increased retention and reduced unknown outcomes identified after the intervention largely reflected confirmation of clinic transfers, while a smaller number of individuals were successfully re-engaged in care. Factors associated with disengagement from care were identified. Systems to monitor patient retention, care transfer and minimise disengagement will maximise individual- and population-level outcomes for populations with HIV.
Radiocarbon dating of large termite mounds of the miombo woodland of Katanga, DR Congo
NASA Astrophysics Data System (ADS)
Erens, Hans; Boudin, Mathieu; Mees, Florias; Dumon, Mathijs; Mujinya, Basile; Van Strydonck, Mark; Baert, Geert; Boeckx, Pascal; Van Ranst, Eric
2015-04-01
The miombo woodlands of South Katanga (D.R. Congo) are characterized by a high spatial density of large conical termite mounds built by Macrotermes falciger (3-5 mounds ha-1; ~5 m high, ~15 m in diameter). The time it takes for these mounds to attain this size is still largely unknown. In this study, the age of four of these mounds is determined by 14C-dating the acid-insoluble organic carbon fraction of samples taken along the central vertical axis of two active and two abandoned mounds. The age sequence in the active mounds is erratic, but the results for the abandoned mounds show a logical increase of 14C-age with depth. The ages measured at 50 cm above ground level were 2335-2119 cal yr BP for the large abandoned mound (630 cm high) and 796-684 cal yr BP for the small abandoned mound (320 cm high). Cold-water-extractable organic carbon (CWEOC) measurements combined with spectroscopic analysis revealed that the lower parts of the active mounds may have been contaminated with recent carbon that leached from the active nest. Nonetheless, this method appears to provide reliable age estimates of large, abandoned termite mounds, which are older than previously estimated. Furthermore, historical mound growth rates seem to correspond to past temperature changes, suggesting a relation between past environmental conditions and mound occupancy. Keywords: 14C, water-extractable carbon, low-temperature combustion
Xu, Min; Wang, Yemin; Zhao, Zhilong; Gao, Guixi; Huang, Sheng-Xiong; Kang, Qianjin; He, Xinyi; Lin, Shuangjun; Pang, Xiuhua; Deng, Zixin
2016-01-01
Genome sequencing projects in the last decade revealed numerous cryptic biosynthetic pathways for unknown secondary metabolites in microbes, revitalizing drug discovery from microbial metabolites by approaches called genome mining. In this work, we developed a heterologous expression and functional screening approach for genome mining from genomic bacterial artificial chromosome (BAC) libraries in Streptomyces spp. We demonstrate mining from a strain of Streptomyces rochei, which is known to produce streptothricins and borrelidin, by expressing its BAC library in the surrogate host Streptomyces lividans SBT5, and screening for antimicrobial activity. In addition to the successful capture of the streptothricin and borrelidin biosynthetic gene clusters, we discovered two novel linear lipopeptides and their corresponding biosynthetic gene cluster, as well as a novel cryptic gene cluster for an unknown antibiotic from S. rochei. This high-throughput functional genome mining approach can be easily applied to other streptomycetes, and it is very suitable for the large-scale screening of genomic BAC libraries for bioactive natural products and the corresponding biosynthetic pathways. IMPORTANCE Microbial genomes encode numerous cryptic biosynthetic gene clusters for unknown small metabolites with potential biological activities. Several genome mining approaches have been developed to activate and bring these cryptic metabolites to biological tests for future drug discovery. Previous sequence-guided procedures relied on bioinformatic analysis to predict potentially interesting biosynthetic gene clusters. In this study, we describe an efficient approach based on heterologous expression and functional screening of a whole-genome library for the mining of bioactive metabolites from Streptomyces. The usefulness of this function-driven approach was demonstrated by the capture of four large biosynthetic gene clusters for metabolites of various chemical types, including streptothricins, borrelidin, two novel lipopeptides, and one unknown antibiotic from Streptomyces rochei Sal35. The transfer, expression, and screening of the library were all performed in a high-throughput way, so that this approach is scalable and adaptable to industrial automation for next-generation antibiotic discovery. PMID:27451447
A scanning acoustic microscope discriminates cancer cells in fluid
NASA Astrophysics Data System (ADS)
Miura, Katsutoshi; Yamamoto, Seiji
2015-10-01
Scanning acoustic microscopy (SAM) can discriminate lesions in tissue sections by assessing the speed of sound (SOS) or attenuation of sound (AOS) through tissues within a few minutes and without staining; however, its clinical use in cytological diagnosis has been unknown. We applied a thin-layer preparation method to observe benign and malignant effusions using SAM. Although SAM is inferior to light microscopy in detecting nuclear features, it can differentiate malignant from benign cells by their higher SOS and AOS values and by the large irregular cell clusters that are typical features of carcinomas. Moreover, each single malignant cell exhibits characteristic cytoplasmic features such as a large size, irregular borders and secretory or cytoskeletal content. By adjusting the observation range, malignant cells are easily differentiated from benign cells using SAM. Subtle changes in the functional and structural heterogeneity of tumour cells could be followed through the different digital data provided by SAM. SAM can be a useful tool for screening malignant cells in effusions before light microscopic observation. The higher AOS values of malignant cells compared with benign cells support the feasibility of a novel sonodynamic therapy for malignant effusions.
Mapping the Schizophrenia Genes by Neuroimaging: The Opportunities and the Challenges
2018-01-01
Schizophrenia (SZ) is a heritable brain disease originating from a complex interaction of genetic and environmental factors. The genes underpinning the neurobiology of SZ are largely unknown, but recent data provide strong evidence that genetic variations, such as single nucleotide polymorphisms, make the brain vulnerable to the risk of SZ. Structural and functional brain mapping of these genetic variations is essential for the development of agents and tools for better diagnosis, treatment and prevention of SZ. Addressing this, neuroimaging methods in combination with genetic analysis have been increasingly used for almost 20 years. This invited paper outlines the opportunities of this approach, so-called imaging genetics, along with its limitations for SZ research. While problems such as reproducibility, genetic effect size, specificity and sensitivity remain, opportunities such as multivariate analysis, the development of multisite consortia for large-scale data collection, and the emergence of non-candidate-gene (hypothesis-free) approaches to neuroimaging genetics are likely to contribute to rapid progress in gene discovery, in addition to gene validation studies related to SZ. PMID:29324666
Online Cross-Validation-Based Ensemble Learning
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2017-01-01
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419
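A minimal sketch of online cross-validation over a small candidate library, using scikit-learn's SGDRegressor as the online learners; prequential scoring (score each new batch before training on it) stands in for the paper's online cross-validation scheme, and the two candidates are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

class OnlineCVEnsemble:
    """Score every candidate on each incoming batch before updating it,
    track cumulative loss, and predict with the current best learner."""
    def __init__(self):
        self.models = {
            "sgd_small_step": SGDRegressor(learning_rate="constant", eta0=0.01),
            "sgd_large_step": SGDRegressor(learning_rate="constant", eta0=0.1),
        }
        self.cum_loss = {name: 0.0 for name in self.models}
        self.seen = False

    def update(self, X, y):
        for name, m in self.models.items():
            if self.seen:   # prequential: evaluate on unseen data first
                self.cum_loss[name] += np.mean((m.predict(X) - y) ** 2)
            m.partial_fit(X, y)
        self.seen = True

    def predict(self, X):
        best = min(self.cum_loss, key=self.cum_loss.get)
        return self.models[best].predict(X)

# Streaming usage on synthetic batches
rng = np.random.default_rng(1)
ens = OnlineCVEnsemble()
for _ in range(20):
    X = rng.standard_normal((50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)
    ens.update(X, y)
print(ens.predict(rng.standard_normal((2, 3))))
```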
Adaptive 3D Face Reconstruction from Unconstrained Photo Collections.
Roth, Joseph; Tong, Yiying; Liu, Xiaoming
2016-12-07
Given a photo collection of "unconstrained" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach that adapts to photo collections with high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, followed by a novel photometric stereo formulation to recover the fine details under a coarse-to-fine scheme. Our scheme incorporates a structural-similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. Reconstruction performance is evaluated through a novel quality measure, in the absence of ground-truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.
Lai, Yung-Lien
2017-01-01
The existing literature on turnover intent among correctional staff conducted in Western societies focuses on the impact of individual-level factors; the possible effects of institutional contexts have been largely overlooked. Moreover, the relationships of various multidimensional conceptualizations of both job satisfaction and organizational commitment to turnover intent are still largely unknown. Using data collected by a self-reported survey of 676 custody staff employed in 22 Taiwanese correctional facilities during April of 2011, the present study expands upon theoretical models developed in Western societies and examines the effects of both individual and institutional factors on turnover intent simultaneously. Results from the use of the hierarchical linear modeling (HLM) statistical method indicate that, at the individual-level, supervisory versus non-supervisory status, job stress, job dangerousness, job satisfaction, and organizational commitment consistently produce a significant association with turnover intent after controlling for personal characteristics. Specifically, three distinct forms of organizational commitment demonstrated an inverse impact on turnover intent. Among institutional-level variables, custody staff who came from a larger facility reported higher likelihood of thinking about quitting their job. © The Author(s) 2015.
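A minimal sketch of a random-intercept hierarchical linear model of the kind named above, using statsmodels on synthetic stand-in data; the variable names and effect sizes are hypothetical, not the survey's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey: staff nested within facilities.
rng = np.random.default_rng(2)
n_fac, n_staff = 22, 30
df = pd.DataFrame({
    "facility": np.repeat(np.arange(n_fac), n_staff),
    "job_stress": rng.standard_normal(n_fac * n_staff),
    "commitment": rng.standard_normal(n_fac * n_staff),
    "facility_size": np.repeat(rng.integers(100, 3000, n_fac), n_staff),
})
fac_effect = np.repeat(0.5 * rng.standard_normal(n_fac), n_staff)
df["turnover_intent"] = (0.4 * df.job_stress - 0.6 * df.commitment
                         + 0.0002 * df.facility_size + fac_effect
                         + rng.standard_normal(len(df)))

# Random intercept for facility captures the institutional context.
model = smf.mixedlm("turnover_intent ~ job_stress + commitment + facility_size",
                    df, groups=df["facility"]).fit()
print(model.summary())
```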
Willem de Smalen, Allard; Mor, Siobhan M.
2017-01-01
Rift Valley fever (RVF) is an emerging, vector-borne viral zoonosis that has significantly impacted public health, livestock health and production, and food security over the last three decades across large regions of the African continent and the Arabian Peninsula. The potential for expansion of RVF outbreaks within and beyond the range of previous occurrence is unknown. Despite many large national and international epidemics, the landscape epidemiology of RVF remains obscure, particularly with respect to the ecological roles of wildlife reservoirs and surface water features. The current investigation modeled RVF risk throughout Africa and the Arabian Peninsula as a function of a suite of biotic and abiotic landscape features using machine learning methods. Intermittent wetland, wild Bovidae species richness and sheep density were associated with increased landscape suitability to RVF outbreaks. These results suggest the role of wildlife hosts and distinct hydrogeographic landscapes in RVF virus circulation and subsequent outbreaks may be underestimated. These results await validation by studies employing a deeper, field-based interrogation of potential wildlife hosts within high risk taxa. PMID:28742814
Mapping the Human Toxome by Systems Toxicology
Bouhifd, Mounir; Hogberg, Helena T.; Kleensang, Andre; Maertens, Alexandra; Zhao, Liang; Hartung, Thomas
2014-01-01
Toxicity testing typically involves studying adverse health outcomes in animals subjected to high doses of toxicants, with subsequent extrapolation to expected human responses at lower doses. The low throughput of current toxicity testing approaches (which are largely the same for industrial chemicals, pesticides and drugs) has led to a backlog of more than 80,000 chemicals to which human beings are potentially exposed and whose potential toxicity remains largely unknown. By employing new testing strategies that use predictive, high-throughput, cell-based assays (of human origin) to evaluate perturbations in key pathways, referred to as pathways of toxicity, and to conduct targeted testing against those pathways, we can begin to greatly accelerate our ability to test the vast "storehouses" of chemical compounds using a rational, risk-based approach to chemical prioritization, and to provide test results that are more predictive of human toxicity than current methods. The NIH Transformative Research Grant project Mapping the Human Toxome by Systems Toxicology aims at developing the tools for pathway mapping, annotation and validation, as well as the respective knowledge base to share this information. PMID:24443875
A Method for Analyzing A+2 Isotope Patterns for Use in Undergraduate Organic Courses
ERIC Educational Resources Information Center
Gross, Ray A.
2007-01-01
A novel ratio method is developed and automated for finding the bromine-chlorine-sulfur stoichiometry in the molecular formula of an unknown. This method is also useful in spectrometric analysis or beginning organic chemistry.
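A minimal sketch of an A+2 ratio calculation of this kind: each element contributes a binomial generating polynomial, and the observed [A, A+2, A+4] intensities are matched against candidate Br/Cl/S combinations. The abundance ratios are approximate literature values, and the automated method in the article may differ in detail.

```python
import numpy as np
from itertools import product
from math import comb

# Approximate heavy/light isotope abundance ratios: 81Br/79Br, 37Cl/35Cl, 34S/32S.
RATIOS = {"Br": 0.973, "Cl": 0.320, "S": 0.044}

def a2_pattern(n_br, n_cl, n_s):
    """Relative A, A+2, A+4, ... intensities for a Br/Cl/S combination.

    Each element contributes a factor (1 + r*x)^n to a generating
    polynomial whose k-th coefficient is the A+2k intensity."""
    poly = np.array([1.0])
    for elem, n in (("Br", n_br), ("Cl", n_cl), ("S", n_s)):
        factor = np.array([comb(n, k) * RATIOS[elem] ** k for k in range(n + 1)])
        poly = np.convolve(poly, factor)
    return poly / poly[0]   # normalize so A = 1

def assign(observed, max_atoms=3):
    """Pick the Br/Cl/S stoichiometry whose predicted pattern best
    matches the observed [A, A+2, A+4] intensities (A normalized to 1)."""
    best, best_err = None, np.inf
    for b, c, s in product(range(max_atoms + 1), repeat=3):
        pred = a2_pattern(b, c, s)
        pred = np.pad(pred, (0, max(0, len(observed) - len(pred))))[:len(observed)]
        err = np.sum((pred - observed) ** 2)
        if err < best_err:
            best, best_err = (b, c, s), err
    return best

# Example: an A+2 peak at ~65% of A suggests two chlorines.
print(assign(np.array([1.0, 0.65, 0.11])))   # -> (0, 2, 0)
```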