The Chebyshev-Legendre method: Implementing Legendre methods on Chebyshev points
NASA Technical Reports Server (NTRS)
Don, Wai Sun; Gottlieb, David
1993-01-01
We present a new collocation method for the numerical solution of partial differential equations. This method uses the Chebyshev collocation points, but because of the way the boundary conditions are implemented, it has all the advantages of the Legendre methods. In particular, L2 estimates can be obtained easily for hyperbolic and parabolic problems.
Bringing numerous methods for expression and promoter analysis to a public cloud computing service.
Polanski, Krzysztof; Gao, Bo; Mason, Sam A; Brown, Paul; Ott, Sascha; Denby, Katherine J; Wild, David L
2018-03-01
Every year, a large number of novel algorithms are introduced to the scientific community for a myriad of applications, but using these across different research groups is often troublesome, due to suboptimal implementations and specific dependency requirements. This does not have to be the case, as public cloud computing services can easily house tractable implementations within self-contained dependency environments, making the methods easily accessible to a wider public. We have taken 14 popular methods, the majority related to expression data or promoter analysis, brought these up to a solid implementation standard, and housed the tools in isolated Docker containers, which we integrated into the CyVerse Discovery Environment, making them easily usable for a wide community as part of the CyVerse UK project. The integrated apps can be found at http://www.cyverse.org/discovery-environment, while the raw code is available at https://github.com/cyversewarwick and the corresponding Docker images are housed at https://hub.docker.com/r/cyversewarwick/. Contact: info@cyverse.warwick.ac.uk or D.L.Wild@warwick.ac.uk. Supplementary data are available at Bioinformatics online.
A finite-element toolbox for the stationary Gross-Pitaevskii equation with rotation
NASA Astrophysics Data System (ADS)
Vergez, Guillaume; Danaila, Ionut; Auliac, Sylvain; Hecht, Frédéric
2016-12-01
We present a new numerical system using classical finite elements with mesh adaptivity for computing stationary solutions of the Gross-Pitaevskii equation. The programs are written as a toolbox for FreeFem++ (www.freefem.org), a free finite-element software package available for all existing operating systems. This offers the advantage of hiding all technical issues related to the implementation of the finite element method, making it easy to code various numerical algorithms. Two robust and optimized numerical methods were implemented to minimize the Gross-Pitaevskii energy: a steepest descent method based on Sobolev gradients and a minimization algorithm based on the state-of-the-art optimization library Ipopt. For both methods, mesh adaptivity strategies are used to reduce the computational time and increase the local spatial accuracy when vortices are present. Different run cases are made available for 2D and 3D configurations of Bose-Einstein condensates in rotation. An optional graphical user interface is also provided, making it easy to run predefined cases or cases with user-defined parameter files. We also provide several post-processing tools (such as the identification of quantized vortices) that can help in extracting physical features from the simulations. The toolbox is extremely versatile and can be easily adapted to deal with different physical models.
A progressive gradient moment nulling design technique.
Pipe, J G; Chenevert, T L
1991-05-01
A method is presented for designing motion-compensated gradients in a progressive manner. The method is easily applicable to many types of waveforms, and can compensate for any order of motion. It can be implemented graphically or numerically. Underlying theory and examples of its application are provided.
On a categorial aspect of knowledge representation
NASA Astrophysics Data System (ADS)
Tataj, Emanuel; Mulawka, Jan; Nieznański, Edward
Adequate representation is crucial for modeling any type of data. To faithfully present and describe the relevant section of the world, it is necessary to select a method that can easily be implemented on a computer system and that supports further description and reasoning. The main objective of this contribution is to present methods of knowledge representation using a categorial approach, and then to identify its main advantages for computer implementation. The categorial aspect of knowledge representation is considered in the realisation of semantic networks. This method borrows well-known metaphysical properties for the data modeling process. Potential topics for further development of categorial semantic network implementations are also outlined.
ERIC Educational Resources Information Center
Minor, Darrell P.
2005-01-01
In "Beyond Pascal's Triangle" the author demonstrates ways of using "Pascal-like" triangles to expand polynomials raised to powers in a fairly quick and easy fashion. The recursive method could easily be implemented within a spreadsheet, or simply by using paper and pencil. An explanation of why the method works follows the several examples that are…
Improved Margin of Error Estimates for Proportions in Business: An Educational Example
ERIC Educational Resources Information Center
Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael
2015-01-01
This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
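As context for the adjusted Wald interval mentioned above, here is a minimal Python sketch of the standard Agresti & Coull construction (the formula is the published one; the numbers in the example are hypothetical, and the paper itself works in a spreadsheet setting):

```python
import math

def adjusted_wald(successes, n, z=1.96):
    """Agresti-Coull 'adjusted Wald' confidence interval for a proportion.

    Adds z^2/2 pseudo-successes and z^2/2 pseudo-failures before
    applying the ordinary Wald formula (z = 1.96 for ~95% coverage).
    """
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 2 successes out of 10 -- an "extreme" small sample
print(adjusted_wald(2, 10))
```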
A Simple Microsoft Excel Method to Predict Antibiotic Outbreaks and Underutilization.
Miglis, Cristina; Rhodes, Nathaniel J; Avedissian, Sean N; Zembower, Teresa R; Postelnick, Michael; Wunderink, Richard G; Sutton, Sarah H; Scheetz, Marc H
2017-07-01
Benchmarking strategies are needed to promote the appropriate use of antibiotics. We have adapted a simple regressive method in Microsoft Excel that is easily implementable and creates predictive indices. This method trends consumption over time and can identify periods of over- and underuse at the hospital level. Infect Control Hosp Epidemiol 2017;38:860-862.
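The abstract does not reproduce the Excel formulas, so the sketch below is only a loose analog in Python: fit a linear trend to monthly consumption and flag months falling outside a two-standard-deviation band. All data values and thresholds are hypothetical.

```python
import numpy as np

# Hypothetical monthly antibiotic consumption (DDD per 1000 patient-days)
use = np.array([52, 55, 51, 58, 60, 57, 63, 61, 66, 88, 64, 49], float)
t = np.arange(len(use))

# Fit a linear trend, as a stand-in for the Excel regression
slope, intercept = np.polyfit(t, use, 1)
predicted = slope * t + intercept
resid_sd = np.std(use - predicted, ddof=2)

# Flag months falling outside ~2 standard deviations of the trend
for month, (obs, pred) in enumerate(zip(use, predicted)):
    if abs(obs - pred) > 2 * resid_sd:
        label = "overuse" if obs > pred else "underuse"
        print(f"month {month}: {obs:.0f} vs predicted {pred:.1f} -> possible {label}")
```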
Robust location of optical fiber modes via the argument principle method
NASA Astrophysics Data System (ADS)
Chen, Parry Y.; Sivan, Yonatan
2017-05-01
We implement a robust, globally convergent root search method for transcendental equations guaranteed to locate all complex roots within a specified search domain, based on Cauchy's residue theorem. Although several implementations of the argument principle already exist, ours has several advantages: it allows singularities within the search domain and branch points are not fatal to the method. Furthermore, our implementation is simple and is written in MATLAB, fulfilling the need for an easily integrated implementation which can be readily modified to accommodate the many variations of the argument principle method, each of which is suited to a different application. We apply the method to the step index fiber dispersion relation, which has become topical due to the recent proliferation of high index contrast fibers. We also find modes with permittivity as the eigenvalue, catering to recent numerical methods that expand the radiation of sources using eigenmodes.
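A minimal sketch of the core argument-principle ingredient: counting the zeros of an analytic function inside a circular contour from the winding number of f(z). The authors' MATLAB code goes further and locates the roots, which this sketch does not attempt; it assumes f is analytic and nonzero on the contour itself.

```python
import numpy as np

def count_zeros(f, center=0.0, radius=1.0, n=4096):
    """Count zeros (minus poles) of f inside a circle via the argument
    principle: the winding number of f(z) as z traverses the contour."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    w = f(z)
    # Accumulate the change in arg f(z); total change / 2*pi = winding number
    dphase = np.angle(w[np.r_[1:n, 0]] / w)
    return int(round(dphase.sum() / (2.0 * np.pi)))

# f(z) = z^3 - 1 has three roots inside a circle of radius 1.5
print(count_zeros(lambda z: z**3 - 1.0, radius=1.5))  # -> 3
```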
Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods
NASA Astrophysics Data System (ADS)
Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi
2018-03-01
Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated word recognition is one of the key and most difficult problems in TWS. Most existing methods of Tibetan abbreviated word recognition are rule-based approaches, which need vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition, and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective, can be combined easily with the segmentation model, and significantly improves Tibetan word segmentation performance.
Distributed Kernelized Locality-Sensitive Hashing for Faster Image Based Navigation
2015-03-26
Facebook, Google, and Yahoo!. Current methods for image retrieval become problematic when implemented on image datasets that can easily reach billions of...correlations. Tech industry leaders like Facebook, Google, and Yahoo! sort and index even larger volumes of "big data" daily. When attempting to process...open source implementation of Google's MapReduce programming paradigm [13] which has been used for many different things. Using Apache Hadoop, Yahoo
Non-Intrusive Pressure/Multipurpose Sensor and Method
NASA Technical Reports Server (NTRS)
Smith, William C. (Inventor)
2001-01-01
Method and apparatus are provided for determining pressure using a non-intrusive sensor that is easily attachable to the plumbing of a pressurized system. A bent mode implementation and a hoop mode implementation of the invention are disclosed. Each of these implementations is able to nonintrusively measure pressure while fluid is flowing. In addition, each implementation may be used to measure mass flow rate simultaneously with pressure. An ultra low noise control system is provided for making pressure measurements during gas flow. The control system includes two tunable digital bandpass filters with center frequencies that are responsive to a clock frequency. The clock frequency is divided by a factor of N to produce a driving vibrational signal for resonating a metal sensor section.
NASA Technical Reports Server (NTRS)
Vaidman, Lev
1994-01-01
Possible realistic implementations of a method for interaction-free measurements, due to Elitzur and Vaidman, are proposed and discussed. It is argued that the effect can be easily demonstrated in an optical laboratory.
Detecting Lower Bounds to Quantum Channel Capacities.
Macchiavello, Chiara; Sacchi, Massimiliano F
2016-04-08
We propose a method to detect lower bounds to quantum capacities of a noisy quantum communication channel by means of a few measurements. The method is easily implementable and does not require any knowledge about the channel. We test its efficiency by studying its performance for most well-known single-qubit noisy channels and for the generalized Pauli channel in an arbitrary finite dimension.
An easily implemented static condensation method for structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.
1990-01-01
A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
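For orientation, a minimal numpy sketch of the static condensation (Schur complement) step itself, treated as a black box acting on an assembled stiffness system; the toy 3-DOF spring chain is illustrative only and is not from the paper:

```python
import numpy as np

def condense(K, f, keep):
    """Static condensation of a stiffness system K u = f onto the
    retained DOFs `keep`, eliminating the rest via a Schur complement."""
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(K.shape[0]), keep)
    Kbb = K[np.ix_(keep, keep)]
    Kbi = K[np.ix_(keep, drop)]
    Kii = K[np.ix_(drop, drop)]
    Kib = K[np.ix_(drop, keep)]
    Kc = Kbb - Kbi @ np.linalg.solve(Kii, Kib)   # condensed stiffness
    fc = f[keep] - Kbi @ np.linalg.solve(Kii, f[drop])  # condensed load
    return Kc, fc

# Toy 3-DOF spring chain, condensed onto the two end DOFs
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
f = np.array([0.0, 1.0, 0.0])
Kc, fc = condense(K, f, keep=[0, 2])
print(Kc, fc)
```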
Exact solution of some linear matrix equations using algebraic methods
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
Algebraic methods are used to construct the exact solution P of the linear matrix equation PA + BP = - C, where A, B, and C are matrices with real entries. The emphasis of this equation is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution to the problem. The paper is divided into six sections which include the proof of the basic lemma, the Liapunov equation, and the computer implementation for the rational, integer and modular algorithms. Two numerical examples are given and the entire calculation process is depicted.
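As a numerical companion to the equation treated here, modern SciPy can solve PA + BP = -C directly as a Sylvester equation; the paper's point is the exact algebraic solution, so the sketch below (with arbitrary example matrices) serves only as a cross-check.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Solve P A + B P = -C numerically (the paper derives P by finite
# algebraic procedures; SciPy's solver is a convenient numerical check).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0, 0.0], [1.0, 2.0]])
C = np.eye(2)

# solve_sylvester(a, b, q) solves a X + X b = q, so pass (B, A, -C)
P = solve_sylvester(B, A, -C)
print(np.allclose(P @ A + B @ P, -C))  # True
```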
NASA Technical Reports Server (NTRS)
Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; Scola, Salvatore; Tobin, Steven; McLeod, Shawn; Mannu, Sergio; Guglielmo, Corrado; Moeller, Timothy
2013-01-01
The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle in 2015. A detailed thermal model of the SAGE III payload has been developed in Thermal Desktop (TD). Several novel methods have been implemented to facilitate efficient payload-level thermal analysis, including: the use of a design of experiments (DOE) methodology to determine the worst-case orbits for SAGE III while on ISS; the use of TD assemblies to move payloads from the Dragon trunk to the Enhanced Operational Transfer Platform (EOTP) to its final home on the Expedite the Processing of Experiments to Space Station (ExPRESS) Logistics Carrier (ELC)-4; the incorporation of older models in varying unit sets; the ability to change units easily (including hardcoded logic blocks); case-based logic to facilitate activating heaters and active elements for varying scenarios within a single model; the incorporation of several coordinate frames to easily map to structural models with differing geometries and locations; and streamlined results processing using an Excel-based text file plotter developed in-house at LaRC. This document presents an overview of the SAGE III thermal model and describes the development and implementation of these efficiency-improving analysis methods.
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
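A minimal sketch of the Q-profile idea in the plain random-effects meta-analysis case (no covariates), using bisection via brentq rather than the paper's Newton-Raphson procedure; the study data are invented for illustration, and the meta-regression extension (with k - p degrees of freedom) is not shown.

```python
import numpy as np
from scipy import stats, optimize

def q_profile_ci(y, v, alpha=0.05):
    """Q-profile confidence interval for the between-study variance tau^2
    in a plain random-effects meta-analysis (no covariates)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def Q(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    # Q is decreasing in tau^2, so the lower bound solves Q = upper quantile
    lo_target = stats.chi2.ppf(1 - alpha / 2, k - 1)
    hi_target = stats.chi2.ppf(alpha / 2, k - 1)
    bracket = 10.0 * np.var(y)   # crude upper bracket; may need widening
    lower = optimize.brentq(lambda t: Q(t) - lo_target, 0.0, bracket) if Q(0) > lo_target else 0.0
    upper = optimize.brentq(lambda t: Q(t) - hi_target, 0.0, bracket) if Q(0) > hi_target else 0.0
    return lower, upper

y = [0.3, 0.1, 0.5, 0.8, 0.2]       # study effect estimates (invented)
v = [0.04, 0.09, 0.05, 0.06, 0.08]  # within-study variances (invented)
print(q_profile_ci(y, v))
```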
Robust regression on noisy data for fusion scaling laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be; Laboratoire de Physique des Plasmas de l'ERM - Laboratorium voor Plasmafysica van de KMS
2014-11-15
We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.
Comparison of Modern Methods for Analyzing Repeated Measures Data with Missing Values
ERIC Educational Resources Information Center
Vallejo, G.; Fernandez, M. P.; Livacic-Rojas, P. E.; Tuero-Herrero, E.
2011-01-01
Missing data are a pervasive problem in many psychological applications in the real world. In this article we study the impact of dropout on the operational characteristics of several approaches that can be easily implemented with commercially available software. These approaches include the covariance pattern model based on an unstructured…
Sybil--efficient constraint-based modelling in R.
Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J
2013-11-13
Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
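sybil itself is an R library; purely to illustrate what a flux-balance analysis (FBA) step computes, here is a minimal Python analog on an invented toy network, posed as the linear program maximize c^T v subject to S v = 0 and flux bounds:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance analysis: maximize the biomass flux subject to
# steady state S v = 0 and flux bounds. (Invented network: uptake of A,
# conversion A -> B, biomass drain of B, and an alternative leak of B.)
S = np.array([
    [ 1, -1,  0,  0],   # metabolite A: uptake - conversion
    [ 0,  1, -1, -1],   # metabolite B: produced - biomass - leak
])
c = np.array([0, 0, 1, 0])          # objective: flux 2 (biomass)
bounds = [(0, 10)] * 4

# linprog minimizes, so negate the objective to maximize
res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x, -res.fun)  # optimal flux distribution and biomass rate
```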
A simple finite element method for non-divergence form elliptic equation
Mu, Lin; Ye, Xiu
2017-03-01
Here, we develop a simple finite element method for solving second order elliptic equations in non-divergence form by combining the least squares concept with discontinuous approximations. This simple method has a symmetric and positive definite system and can be easily analyzed and implemented. The method also admits general meshes with polytopal elements and hanging nodes. We prove that our finite element solution converges to the true solution as the mesh size approaches zero. Numerical examples demonstrate the robustness and flexibility of the method.
Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper presents a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method embeds dynamic test requirements in dynamic models, so that dynamic test requirement tracking can be generated easily. It automatically produces standardized test requirements and test documentation, addressing inconsistencies and gaps in document content and improving efficiency.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
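A minimal sketch of the two-component mixture idea on z-scores, fixing the null component at N(0,1) (a "theoretical null" variant; the paper's exact fitting choices may differ) and estimating the posterior null probabilities by EM on synthetic data:

```python
import numpy as np
from scipy.stats import norm

def fit_null_mixture(z, iters=200):
    """EM fit of f(z) = pi0*N(0,1) + (1-pi0)*N(mu,sigma^2) and the
    posterior probability that each gene is null (the N(0,1) component)."""
    pi0, mu, sigma = 0.9, np.mean(z), np.std(z)
    for _ in range(iters):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1 - pi0) * norm.pdf(z, mu, sigma)
        tau = f0 / (f0 + f1)               # E-step: P(null | z_i)
        pi0 = tau.mean()                   # M-step updates
        w = 1 - tau
        mu = np.sum(w * z) / np.sum(w)
        sigma = np.sqrt(np.sum(w * (z - mu) ** 2) / np.sum(w))
    return tau, pi0, mu, sigma

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(2.5, 1.2, 100)])
tau, pi0, mu, sigma = fit_null_mixture(z)
print(f"pi0={pi0:.2f}, mu={mu:.2f}, sigma={sigma:.2f}")
```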
Become a Star: Teaching the Process of Design and Implementation of an Intelligent System
ERIC Educational Resources Information Center
Venables, Anne; Tan, Grace
2005-01-01
Teaching future knowledge engineers, the necessary skills for designing and implementing intelligent software solutions required by business, industry and research today, is a very tall order. These skills are not easily taught in traditional undergraduate computer science lectures; nor are the practical experiences easily reinforced in laboratory…
QUEST - A Bayesian adaptive psychometric method
NASA Technical Reports Server (NTRS)
Watson, A. B.; Pelli, D. G.
1983-01-01
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
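A minimal sketch of a QUEST-style procedure: maintain a posterior over threshold on a log-intensity grid, place each trial at the posterior mode, and update by Bayes' rule through an assumed Weibull psychometric function. All parameter values here are illustrative, not those of the paper.

```python
import numpy as np

# Minimal QUEST-style adaptive staircase on a log-intensity grid
rng = np.random.default_rng(1)
grid = np.linspace(-2, 2, 401)          # candidate thresholds (log units)
log_prior = -0.5 * grid ** 2            # Gaussian prior guess
true_thresh, beta, gamma, lam = 0.3, 3.5, 0.5, 0.02

def p_correct(x, t):
    """Weibull psychometric function of log intensity x, threshold t."""
    return gamma + (1 - gamma - lam) * (1 - np.exp(-10 ** (beta * (x - t))))

posterior = log_prior.copy()
for trial in range(40):
    x = grid[np.argmax(posterior)]                # place trial at the mode
    correct = rng.random() < p_correct(x, true_thresh)
    p = p_correct(x, grid)
    posterior += np.log(p if correct else 1 - p)  # Bayesian update

print("threshold estimate:", grid[np.argmax(posterior)])
```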
A Hot-Wire Method Based Thermal Conductivity Measurement Apparatus for Teaching Purposes
ERIC Educational Resources Information Center
Alvarado, S.; Marin, E.; Juarez, A. G.; Calderon, A.; Ivanov, R.
2012-01-01
The implementation of an automated system based on the hot-wire technique is described for the measurement of the thermal conductivity of liquids using equipment easily available in modern physics laboratories at high schools and universities (basically a precision current source and a voltage meter, a data acquisition card, a personal computer…
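For the ideal line-source model underlying the hot-wire technique, the temperature rise grows as dT = (q / 4*pi*k) ln(t) + const, so the conductivity k follows from the slope of dT versus ln(t). A sketch with synthetic data (all values assumed):

```python
import numpy as np

q = 1.2                     # heater power per unit length, W/m (assumed)
k_true = 0.285              # W/(m K), used only to fabricate the data
t = np.linspace(1.0, 10.0, 50)                   # seconds
dT = q / (4 * np.pi * k_true) * np.log(t) + 0.4  # ideal line-source rise
dT += np.random.default_rng(2).normal(0, 0.005, t.size)  # sensor noise

# Conductivity from the slope of dT against ln(t)
slope = np.polyfit(np.log(t), dT, 1)[0]
print("estimated k =", q / (4 * np.pi * slope), "W/(m K)")
```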
FPGA Implementation of Heart Rate Monitoring System.
Panigrahy, D; Rakshit, M; Sahu, P K
2016-03-01
This paper describes a field programmable gate array (FPGA) implementation of a system that calculates the heart rate from the Electrocardiogram (ECG) signal. After heart rate calculation, tachycardia, bradycardia or a normal heart rate can easily be detected. ECG is a diagnostic tool routinely used to assess the electrical activity and muscular function of the heart. Heart rate is calculated by detecting the R peaks in the ECG signal. Providing a portable, continuous heart rate monitoring system for patients using ECG requires dedicated hardware. FPGA provides easy testability and allows faster implementation and verification options for a new design. We have proposed a five-stage methodology using basic VHDL blocks such as addition, multiplication and data conversion (real to fixed point and vice versa). Our proposed heart rate calculation (R-peak detection) method has been validated using the first channel of the 48 ECG records of the MIT-BIH arrhythmia database. It shows an accuracy of 99.84%, a sensitivity of 99.94% and a positive predictive value of 99.89%. Our proposed method outperforms other well-known methods on pathological ECG signals and was successfully implemented on an FPGA.
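The paper's five-stage VHDL pipeline is hardware; purely to illustrate the R-peak-to-heart-rate computation, here is a crude software sketch with a threshold detector and refractory period on a synthetic trace:

```python
import numpy as np

def heart_rate(ecg, fs):
    """Crude R-peak detector: threshold at 60% of the signal maximum and
    enforce a 250 ms refractory period, then convert RR intervals to BPM.
    (The paper's five-stage FPGA pipeline is more elaborate than this.)"""
    thresh = 0.6 * np.max(ecg)
    refractory = int(0.25 * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thresh and ecg[i] >= ecg[i-1] and ecg[i] >= ecg[i+1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    rr = np.diff(peaks) / fs                  # RR intervals in seconds
    return 60.0 / rr.mean() if len(rr) else float("nan")

# Synthetic 1 Hz spike train sampled at 360 Hz (the MIT-BIH sampling rate)
fs = 360
ecg = np.zeros(10 * fs)
ecg[fs // 2::fs] = 1.0
print(heart_rate(ecg, fs))  # ~60 BPM
```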
Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Reddy, C. J.
2011-01-01
This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.
Simplified formulae for the estimation of offshore wind turbines clutter on marine radars.
Grande, Olatz; Cañizo, Josune; Angulo, Itziar; Jenn, David; Danoon, Laith R; Guerra, David; de la Vega, David
2014-01-01
The potential impact that offshore wind farms may cause on nearby marine radars should be considered before the wind farm is installed. Strong radar echoes from the turbines may degrade radars' detection capability in the area around the wind farm. Although conventional computational methods provide accurate results of scattering by wind turbines, they are not directly implementable in software tools that can be used to conduct the impact studies. This paper proposes a simple model to assess the clutter that wind turbines may generate on marine radars. This method can be easily implemented in the system modeling software tools for the impact analysis of a wind farm in a real scenario.
Optimum aerodynamic design via boundary control
NASA Technical Reports Server (NTRS)
Jameson, Antony
1994-01-01
These lectures describe the implementation of optimization techniques based on control theory for airfoil and wing design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. Recently the method has been implemented in an alternative formulation which does not depend on conformal mapping, so that it can more easily be extended to treat general configurations. The method has also been extended to treat the Euler equations, and results are presented for both two and three dimensional cases, including the optimization of a swept wing.
Smoothing of climate time series revisited
NASA Astrophysics Data System (ADS)
Mann, Michael E.
2008-08-01
We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850-2007 and to various surrogate global mean temperature series from 1850-2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically out-performs certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.
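The published algorithm adaptively weights three boundary constraints (roughly: minimum norm, minimum slope, minimum roughness); the sketch below is only a loose analog that lowpass-filters the series under three padding rules and keeps the best mean-square fit. It is not the paper's code, and all data are synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def adaptive_smooth(y, cutoff=0.1):
    """Loose analog of the adaptive smoother: lowpass the series under
    three boundary treatments (roughly, minimum norm / slope / roughness)
    and keep whichever tracks the raw series best in mean-square terms."""
    b, a = butter(4, cutoff)   # 4th-order lowpass, normalized cutoff
    candidates = [filtfilt(b, a, y, padtype=p)
                  for p in ("constant", "even", "odd")]
    mse = [np.mean((y - s) ** 2) for s in candidates]
    return candidates[int(np.argmin(mse))]

rng = np.random.default_rng(3)
years = np.arange(1850, 2008)
y = 0.005 * (years - 1850) + rng.normal(0, 0.1, years.size)  # fake anomalies
print(adaptive_smooth(y)[-5:])
```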
An application of artificial neural networks to experimental data approximation
NASA Technical Reports Server (NTRS)
Meade, Andrew J., Jr.
1993-01-01
As an initial step in the evaluation of networks, a feedforward architecture is trained to approximate experimental data by the backpropagation algorithm. Several drawbacks were detected and an alternative learning algorithm was then developed to partially address the drawbacks. This noniterative algorithm has a number of advantages over the backpropagation method and is easily implemented on existing hardware.
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
Dietz, Dennis C.
2014-01-01
A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
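The paper evaluates the expected cost analytically from the first two moments of the service-time distribution; the sketch below instead estimates the same waiting-plus-idle cost by plain Monte Carlo, with a time-dependent no-show probability. All parameter values are invented.

```python
import numpy as np

def expected_cost(slots, mean, var, p_show, cw=1.0, ci=0.5, sims=20000):
    """Monte Carlo estimate of waiting + idle cost for an appointment
    schedule. slots: appointment times (minutes); service times are drawn
    from a gamma distribution matching the given mean and variance."""
    rng = np.random.default_rng(4)
    shape, scale = mean**2 / var, var / mean
    total = 0.0
    for _ in range(sims):
        free = 0.0                        # time the server next becomes free
        cost = 0.0
        for j, t in enumerate(slots):
            if rng.random() > p_show(j):  # no-show, time-dependent
                continue
            start = max(t, free)
            cost += cw * (start - t) + ci * max(0.0, t - free)
            free = start + rng.gamma(shape, scale)
        total += cost
    return total / sims

slots = [0, 15, 30, 45, 60]
print(expected_cost(slots, mean=14.0, var=25.0,
                    p_show=lambda j: 0.9 - 0.02 * j))
```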
Generalized Flip-Flop Input Equations Based on a Four-Valued Boolean Algebra
NASA Technical Reports Server (NTRS)
Tucker, Jerry H.; Tapia, Moiez A.
1996-01-01
A procedure is developed for obtaining generalized flip-flop input equations, and a concise method is presented for representing these equations. The procedure is based on solving a four-valued characteristic equation of the flip-flop, and can encompass flip-flops that are too complex to approach intuitively. The technique is presented using Karnaugh maps, but could easily be implemented in software.
Brown, A M
2001-06-01
The objective of the present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
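The same iterative least-squares fit that SOLVER performs can be reproduced outside Excel; here is a minimal sketch with scipy.optimize.least_squares on an invented Michaelis-Menten example (the paper itself is about the Excel workflow):

```python
import numpy as np
from scipy.optimize import least_squares

# SOLVER-style iterative least squares in Python: fit a user-supplied
# function y = f(x) by minimizing the sum of squared residuals.
def model(params, x):
    vmax, km = params
    return vmax * x / (km + x)          # e.g. Michaelis-Menten kinetics

x = np.array([0.5, 1, 2, 4, 8, 16.0])
y = np.array([0.29, 0.47, 0.68, 0.84, 0.93, 1.02])  # invented data

result = least_squares(lambda p: model(p, x) - y, x0=[1.0, 1.0])
print("fitted vmax, km:", result.x)
```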
Efficient design of CMOS TSC checkers
NASA Technical Reports Server (NTRS)
Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling
1990-01-01
This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.
Light-Cone Effect of Radiation Fields in Cosmological Radiative Transfer Simulations
NASA Astrophysics Data System (ADS)
Ahn, Kyungjin
2015-02-01
We present a novel method to implement time-delayed propagation of radiation fields in cosmological radiative transfer simulations. Time-delayed propagation of radiation fields requires construction of retarded-time fields by tracking the location and lifetime of radiation sources along the corresponding light-cones. Cosmological radiative transfer simulations have, until now, ignored this "light-cone effect" or implemented ray-tracing methods that are computationally demanding. We show that radiative transfer calculation of the time-delayed fields can be easily achieved in numerical simulations when periodic boundary conditions are used, by calculating the time-discretized retarded-time Green's function using the Fast Fourier Transform (FFT) method and convolving it with the source distribution. We also present a direct application of this method to the long-range radiation field of Lyman-Werner band photons, which is important in high-redshift astrophysics with the first stars.
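A minimal sketch of the central trick on a small periodic grid: the field today is accumulated over past time bins, each bin contributing the source map of that epoch convolved (via FFT) with a Green's function supported on the shell the radiation has had time to cross. Grid size, kernel, and source density are all invented for illustration.

```python
import numpy as np

N, L, c = 64, 100.0, 1.0                 # grid, box size, signal speed
x = np.indices((N, N, N)) * (L / N)
r = np.sqrt(((x - L / 2) ** 2).sum(axis=0))   # distance from box center

def shell_kernel(r_in, r_out):
    """1/r^2 flux kernel truncated to the shell r_in < r <= r_out."""
    G = np.where((r > r_in) & (r <= r_out),
                 1.0 / (4 * np.pi * np.maximum(r, L / N) ** 2), 0.0)
    return np.fft.fftshift(G)            # move the kernel center to origin

rng = np.random.default_rng(5)
field = np.zeros((N, N, N))
for k in range(4):                       # four past time bins
    # sources alive during bin k (random toy map)
    sources = (rng.random((N, N, N)) < 1e-3).astype(float)
    G = shell_kernel(c * 10.0 * k, c * 10.0 * (k + 1))
    # periodic convolution via FFT, exploiting the periodic boundaries
    field += np.fft.ifftn(np.fft.fftn(sources) * np.fft.fftn(G)).real

print(field.mean())
```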
Factorized Runge-Kutta-Chebyshev Methods
NASA Astrophysics Data System (ADS)
O'Sullivan, Stephen
2017-05-01
The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures. Preserving 7 digits for accuracy at 16 digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth-order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
Optimization of magnet end-winding geometry
NASA Astrophysics Data System (ADS)
Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.
1994-03-01
A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.
Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua
2015-04-01
In this paper, we propose a heart sound envelope extraction system. The system was implemented in LabVIEW based on the Hilbert-Huang transform (HHT). We first used a sound card to collect the heart sound, and then implemented the complete program of signal acquisition, pretreatment and envelope extraction in LabVIEW based on the theory of the HHT. Finally, we used a test case to show that the system can easily collect heart sounds, preprocess them, and extract the envelope. The system retains and displays the characteristics of the heart sound envelope well, and its program and methods are relevant to other research areas, such as vibration and voice analysis.
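The system described is LabVIEW-based and built on the full HHT (empirical mode decomposition plus Hilbert spectral analysis); the sketch below shows only the final envelope step, the magnitude of the analytic signal, on a synthetic heart-sound-like trace:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

# Envelope extraction sketch: band-limit the heart sound, then take the
# magnitude of the analytic signal. (The full system first applies
# empirical mode decomposition, which is omitted here for brevity.)
fs = 2000
t = np.arange(0, 2.0, 1 / fs)
# Synthetic S1/S2-like bursts at a ~1 Hz heart rate
heart = (np.sin(2 * np.pi * 40 * t) * (np.mod(t, 1.0) < 0.1)
         + 0.6 * np.sin(2 * np.pi * 60 * t)
         * (np.abs(np.mod(t, 1.0) - 0.35) < 0.04))

b, a = butter(4, [25 / (fs / 2), 150 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, heart)
envelope = np.abs(hilbert(filtered))
# Smooth the envelope with a short lowpass for display
b2, a2 = butter(2, 20 / (fs / 2))
print(filtfilt(b2, a2, envelope)[:5])
```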
1986-07-31
designer will be able to more rapidly assemble a total software package from perfected modules that can be easily debugged or replaced with more...antinuclear interactions e. gravitational effects of antimatter 2. possible machine parameters and lattice design 3. electron and stochastic cooling needs 4...implementation, reliability requirements; development of design environments and of experimental methodology; technology transfer methods from
Verification of Methods for Assessing the Sustainability of Monitored Natural Attenuation (MNA)
2013-01-01
sugars TOC total organic carbon TSR thermal source removal USACE U.S. Army Corps of Engineers USEPA U.S. Environmental Protection Agency USGS...the SZD function for long-term DNAPL dissolution simulations. However, the sustainability assessment was easily implemented using an alternative...neutral sugars [THNS]). Chapelle et al. (2009) suggested THAA and THNS as measures of the bioavailability of organic carbon based on an analysis of
Tsai, Chung-Yu
2017-07-01
A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vittoria, Fabio A., E-mail: fabio.vittoria.12@ucl.ac.uk; Diemoz, Paul C.; Research Complex at Harwell, Harwell Oxford Campus, OX11 0FA Didcot
2014-03-31
We propose two different approaches to retrieve x-ray absorption, refraction, and scattering signals using a one dimensional scan and a high resolution detector. The first method can be easily implemented in existing procedures developed for edge illumination to retrieve absorption and refraction signals, giving comparable image quality while reducing exposure time and delivered dose. The second method tracks the variations of the beam intensity profile on the detector through a multi-Gaussian interpolation, allowing the additional retrieval of the scattering signal.
Meckes, David G
2014-01-01
The identification and characterization of herpes simplex virus protein interaction complexes are fundamental to understanding the molecular mechanisms governing the replication and pathogenesis of the virus. Recent advances in affinity-based methods, mass spectrometry configurations, and bioinformatics tools have greatly increased the quantity and quality of protein-protein interaction datasets. In this chapter, detailed and reliable methods that can easily be implemented are presented for the identification of protein-protein interactions using cryogenic cell lysis, affinity purification, trypsin digestion, and mass spectrometry.
Tsai, Tzung-Cheng; Hsu, Yeh-Liang; Ma, An-I; King, Trevor; Wu, Chang-Huei
2007-08-01
"Telepresence" is an interesting field that includes virtual reality implementations with human-system interfaces, communication technologies, and robotics. This paper describes the development of a telepresence robot called Telepresence Robot for Interpersonal Communication (TRIC) for the purpose of interpersonal communication with the elderly in a home environment. The main aim behind TRIC's development is to allow elderly populations to remain in their home environments, while loved ones and caregivers are able to maintain a higher level of communication and monitoring than via traditional methods. TRIC aims to be a low-cost, lightweight robot, which can be easily implemented in the home environment. Under this goal, decisions on the design elements included are discussed. In particular, the implementation of key autonomous behaviors in TRIC to increase the user's capability of projection of self and operation of the telepresence robot, in addition to increasing the interactive capability of the participant as a dialogist are emphasized. The technical development and integration of the modules in TRIC, as well as human factors considerations are then described. Preliminary functional tests show that new users were able to effectively navigate TRIC and easily locate visual targets. Finally the future developments of TRIC, especially the possibility of using TRIC for home tele-health monitoring and tele-homecare visits are discussed.
NASA Astrophysics Data System (ADS)
Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter
2018-03-01
The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's and self-consistency) are (i) it allows the flexibility to choose large varieties of initial guesses without diverging, (ii) it is easy to implement especially at higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT -symmetric models.
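A minimal analog of the four OSR steps for a linear symmetric eigenproblem, using inverse iteration as the fixed-point map; this illustrates the structure of the recipe, not the authors' implementation, and the particle-in-a-box example is invented.

```python
import numpy as np

def osr_like_modes(H, nmodes=3, iters=60):
    """Orthogonalized fixed-point iteration in the spirit of the OSR
    recipe: (i) fixed-point map u <- M u, (ii) renormalize each step,
    (iii) Gram-Schmidt against already-found modes, (iv) iterate.
    Here M = H^{-1} (inverse iteration), so the lowest states of H are
    the dominant fixed points. An analog, not the authors' code."""
    Minv = np.linalg.inv(H)               # fine at demo size
    modes, energies = [], []
    rng = np.random.default_rng(7)
    for _ in range(nmodes):
        u = rng.normal(size=H.shape[0])
        for _ in range(iters):
            for v in modes:               # (iii) project out found modes
                u -= (v @ u) * v
            u = Minv @ u                  # (i) fixed-point map
            u /= np.linalg.norm(u)        # (ii) renormalization
        modes.append(u)
        energies.append(u @ H @ u)        # Rayleigh-quotient eigenvalue
    return np.array(energies), np.array(modes)

# 1D particle in a box: H = -d^2/dx^2 on a uniform grid, Dirichlet ends
n = 200
h = 1.0 / (n + 1)
H = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
E, _ = osr_like_modes(H, nmodes=3)
print(E / np.pi**2)   # close to [1, 4, 9], the box eigenvalues
```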
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.
2012-08-01
This paper presents a simple and practical approach to obtaining the continuous relaxation and retardation spectra of asphalt concrete directly from complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is obtained directly from the master curves, which are readily available from standard characterization tests of the linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can easily be used for modeling viscoelastic behavior. In this research, asphalt concrete specimens were tested for linearly viscoelastic characterization. The test data were used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform, and are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from them matched the approximate solutions very well. It is observed that the shape of the spectra depends on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process: Prony-series coefficients can be obtained from the continuous spectra and used in numerical analysis such as finite element analysis.
A post-processing method to simulate the generalized RF sheath boundary condition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myra, James R.; Kohno, Haruhiko
2017-10-23
For applications of ICRF power in fusion devices, control of RF sheath interactions is of great importance. A sheath boundary condition (SBC) was previously developed to provide an effective surface impedance for the interaction of the RF sheath with the waves. The SBC enables the surface power flux and rectified potential energy available for sputtering to be calculated. For legacy codes which cannot easily implement the SBC, or to speed convergence in codes which do implement it, we consider here an approximate method to simulate SBCs by post-processing results obtained using other, e.g. conducting wall, boundary conditions. The basic approximation is that the modifications resulting from the generalized SBC are driven by a fixed incoming wave which could be either a fast wave or a slow wave. Finally, the method is illustrated in slab geometry and compared with exact numerical solutions; it is shown to work very well.
Cockpit weather radar display demonstrator and ground-to-air sferics telemetry system
NASA Technical Reports Server (NTRS)
Nickum, J. D.; Mccall, D. L.
1982-01-01
The results of two methods of obtaining timely and accurate severe weather presentations in the cockpit are detailed. The first method described is a course up display of uplinked weather radar data. This involves the construction of a demonstrator that will show the feasibility of producing a course up display in the cockpit of the NASA simulator at Langley. A set of software algorithms was designed that could easily be implemented, along with data tapes generated to provide the cockpit simulation. The second method described involves the uplinking of sferic data from a ground based 3M-Ryan Stormscope. The technique involves transfer of the data on the CRT of the Stormscope to a remote CRT. This sferic uplink and display could also be included in an implementation on the NASA cockpit simulator, allowing evaluation of pilot responses based on real Stormscope data.
Non-invasive acoustic-based monitoring of uranium in solution and H/D ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pantea, Cristian; Beedle, Christopher Craig; Sinha, Dipen N.
The primary objective of this project is to adapt existing non-invasive acoustic techniques (Swept-Frequency Acoustic Interferometry and the Gaussian-pulse acoustic technique) for the purpose of demonstrating the ability to quantify U or H/D ratios in solution. Furthermore, a successful demonstration will provide an easily implemented, low cost, and non-invasive method for remote and unattended uranium mass measurements for the International Atomic Energy Agency (IAEA).
An operational approach to high resolution agro-ecological zoning in West-Africa.
Le Page, Y; Vasconcelos, Maria; Palminha, A; Melo, I Q; Pereira, J M C
2017-01-01
The objective of this work is to develop a simple methodology for high resolution crop suitability analysis under current and future climate, easily applicable and useful in Least Developed Countries. The approach addresses both regional planning in the context of climate change projections and pre-emptive short-term rural extension interventions based on same-year agricultural season forecasts, all implemented with off-the-shelf resources. The developed tools are applied operationally in a case study in three regions of Guinea-Bissau, and the results obtained, as well as the advantages and limitations of the methods applied, are discussed. In this paper we show how a simple approach can easily generate information on climate vulnerability and how it can be used operationally in rural extension services.
Island custom blocking technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carabetta, R.J.
The technique of Island blocking is being used more frequently since the advent of our new head and neck blocking techniques and the implementation of a newly devised lung protocol. The system presented affords the mould room personnel a quick and accurate means of island block fabrication without the constant remeasuring or subtle shifting to approximate correct placement. The cookie cutter is easily implemented into any department's existing block cutting techniques. The device is easily and inexpensively made either in a machine shop or acquired by contacting the author.
Newton-Euler Dynamic Equations of Motion for a Multi-body Spacecraft
NASA Technical Reports Server (NTRS)
Stoneking, Eric
2007-01-01
The Magnetospheric MultiScale (MMS) mission employs a formation of spinning spacecraft with several flexible appendages and thruster-based control. To understand the complex dynamic interaction of thruster actuation, appendage motion, and spin dynamics, each spacecraft is modeled as a tree of rigid bodies connected by spherical or gimballed joints. The method presented facilitates assembling by inspection the exact, nonlinear dynamic equations of motion for a multibody spacecraft suitable for solution by numerical integration. The building block equations are derived by applying Newton's and Euler's equations of motion to an "element" consisting of two bodies and one joint (spherical and gimballed joints are considered separately). Patterns in the "mass" and "force" matrices guide assembly by inspection of a general N-body tree-topology system. Straightforward linear algebra operations are employed to eliminate extraneous constraint equations, resulting in a minimum-dimension system of equations to solve. This method thus combines a straightforward, easily-extendable, easily-mechanized formulation with an efficient computer implementation.
Exact solution of some linear matrix equations using algebraic methods
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1977-01-01
A study is made of solution methods for linear matrix equations, including Lyapunov's equation, using methods of modern algebra. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution to the problem. The action f_BA is introduced and a basic lemma is proven. The equation PA + BP = -C as well as the Lyapunov equation are analyzed. Algorithms are given for the solution of the Lyapunov equation, with comments on their arithmetic complexity. The equation P - A'PA = Q is studied and numerical examples are given.
Application of IR imaging for free-surface velocity measurement in liquid-metal systems
Hvasta, M. G.; Kolemen, E.; Fisher, A.
2017-01-05
Measuring free-surface, liquid-metal flow velocity is challenging to do in a reliable and accurate manner. This paper presents a non-invasive, easily calibrated method of measuring the surface velocities of open-channel liquid-metal flows using an IR camera. Unlike other spatially limited methods, this IR camera particle tracking technique provides full field-of-view data that can be used to better understand open-channel flows and determine surface boundary conditions. Lastly, this method could be implemented and automated for a wide range of liquid-metal experiments, even if they operate at high-temperatures or within strong magnetic fields.
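The paper tracks surface features in IR images; a common minimal alternative for estimating frame-to-frame surface displacement is FFT phase correlation, sketched below on synthetic frames (surface velocity then follows from shift x pixel size x frame rate):

```python
import numpy as np

def phase_correlation(frame1, frame2):
    """Estimate the integer-pixel shift between two IR frames by phase
    correlation; surface velocity = shift * pixel_size * frame_rate.
    (A stand-in for the paper's tracking of surface features.)"""
    F1, F2 = np.fft.fft2(frame1), np.fft.fft2(frame2)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts
    if dy > frame1.shape[0] // 2: dy -= frame1.shape[0]
    if dx > frame1.shape[1] // 2: dx -= frame1.shape[1]
    return dy, dx

rng = np.random.default_rng(6)
a = rng.random((128, 128))
b = np.roll(a, (3, -5), axis=(0, 1))   # frame advected by (3, -5) pixels
print(phase_correlation(b, a))          # -> (3, -5)
```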
The Implementation of Multiple Lifestyle Interventions in Two Organizations
Engbers, L. H.; Van Empelen, P.; De Moes, K. J.; Wittink, H.; Gründemann, R.; van Mechelen, W.
2014-01-01
Objective: To evaluate the implementation of a multicomponent lifestyle intervention at two different worksites. Methods: Data on eight process components were collected by means of questionnaires and interviews. Data on the effectiveness were collected using questionnaires. Results: The program was implemented partly as planned, and 84.0% (max 25) and 85.7% (max 14) of all planned interventions were delivered at the university and hospital, respectively. Employees showed high reach (96.6%) and overall participation (75.1%) but moderate overall satisfaction rates (6.8 ± 1.1). Significant intervention effects were found for days of fruit consumption (β = 0.44 days/week, 95% CI: 0.02 to 0.85) in favor of the intervention group. Conclusions: The study showed successful reach, dose, and maintenance but moderate fidelity and satisfaction. Mainly relatively simple and easily implemented interventions were chosen, which were effective only in improving employees’ days of fruit consumption. PMID:25376415
The ONIOM molecular dynamics method for biochemical applications: cytidine deaminase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2007-03-22
We derived and implemented the ONIOM-molecular dynamics (MD) method for biochemical applications. The implementation allows the characterization of the functions of real enzymes taking account of their thermal motion. In this method, direct MD is performed by calculating the ONIOM energy and gradients of the system on the fly. We describe the first application of this ONIOM-MD method to cytidine deaminase. The environmental effects on the substrate in the active site are examined. The ONIOM-MD simulations show that the product uridine is strongly perturbed by the thermal motion of the environment and dissociates easily from the active site.
1992-12-01
cm2 heat flux which must be transferred by the buoyancy-induced gas flow. A survey of electronic cooling literature can easily demonstrate how large...Toward Implementation of a Certification Framework for Reusable Software Modules (Dr. Allen S. Parrish) 15 Data Association Problems in Multisensor Data...next section and the reader is referred to [5] for additional details of the analysis. Then the method is applied to a dipole element with straight
Robot Control Based On Spatial-Operator Algebra
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Jain, Abhinandan
1992-01-01
Method for mathematical modeling and control of robotic manipulators based on spatial-operator algebra providing concise representation and simple, high-level theoretical framework for solution of kinematical and dynamical problems involving complicated temporal and spatial relationships. Recursive algorithms derived immediately from abstract spatial-operator expressions by inspection. Transition from abstract formulation through abstract solution to detailed implementation of specific algorithms to compute solution greatly simplified. Complicated dynamical problems like two cooperating robot arms solved more easily.
Exact solutions of fractional mBBM equation and coupled system of fractional Boussinesq-Burgers
NASA Astrophysics Data System (ADS)
Javeed, Shumaila; Saif, Summaya; Waheed, Asif; Baleanu, Dumitru
2018-06-01
The new exact solutions of nonlinear fractional partial differential equations (FPDEs) are established by adopting the first integral method (FIM). The Riemann-Liouville (R-L) derivative and the local conformable derivative definitions are used to deal with the fractional order derivatives. The proposed method is applied to obtain exact solutions for the space-time fractional modified Benjamin-Bona-Mahony (mBBM) equation and the coupled time-fractional Boussinesq-Burgers equation. The suggested technique is easily applicable and effective, and can be implemented successfully to obtain solutions for different types of nonlinear FPDEs.
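The abstract names the local conformable derivative without defining it. As a reference sketch, the definition commonly used in this literature (due to Khalil et al.) is the following; the reduction on the right assumes f is differentiable:

```latex
T_\alpha(f)(t) = \lim_{\varepsilon \to 0}
  \frac{f\!\left(t + \varepsilon\, t^{\,1-\alpha}\right) - f(t)}{\varepsilon},
  \qquad t > 0,\; 0 < \alpha \le 1,
\qquad\text{and, for differentiable } f,\quad
T_\alpha(f)(t) = t^{\,1-\alpha} f'(t).
```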
NASA Astrophysics Data System (ADS)
Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick
2017-12-01
In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However, the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms), an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
Development of the Tensoral Computer Language
NASA Technical Reports Server (NTRS)
Ferziger, Joel; Dresselhaus, Eliot
1996-01-01
The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
Thole, Henning
2011-01-01
While methods for the production of guidelines (evidence analysis, assessment, adaptation) have been continually refined over the past years, there is a lack of instruments for the production of easily understandable synopses. We define a methodological approach for summarizing synopses with Spidernet diagrams. Tables of synopses can be generated with distinct information to condense the main results into one Spidernet diagram. This is possible for both the entire synopsis and parts of it. Guideline comparisons require detailed analyses on the one hand and easily understandable presentations of their results on the other. Guideline synopses can be substantially supported by graphic presentation of the results of the synopsis. Graphic synopsis is also helpful in other cases; it may be used, for example, to summarise HTA reports, systematic reviews or guidelines. Copyright © 2011. Published by Elsevier GmbH.
Polarisation in spin-echo experiments: Multi-point and lock-in measurements
NASA Astrophysics Data System (ADS)
Tamtögl, Anton; Davey, Benjamin; Ward, David J.; Jardine, Andrew P.; Ellis, John; Allison, William
2018-02-01
Spin-echo instruments are typically used to measure diffusive processes and the dynamics and motion in samples on ps and ns time scales. A key aspect of the spin-echo technique is to determine the polarisation of a particle beam. We present two methods for measuring the spin polarisation in spin-echo experiments. The current method in use is based on taking a number of discrete readings. The implementation of a new method involves continuously rotating the spin and measuring its polarisation after being scattered from the sample. A control system running on a microcontroller is used to perform the spin rotation and to calculate the polarisation of the scattered beam based on a lock-in amplifier. First experimental tests of the method on a helium spin-echo spectrometer show that it is clearly working and that it has advantages over the discrete approach, i.e., it can track changes of the beam properties throughout the experiment. Moreover, we show that real-time numerical simulations can perfectly describe a complex experiment and can be easily used to develop improved experimental methods prior to a first hardware implementation.
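As a minimal sketch of the lock-in principle described above — assuming a detector signal sampled at rate fs while the spin is rotated at a known reference frequency f_ref (both hypothetical parameters) — the polarisation amplitude and phase could be recovered as follows:

```python
# Sketch of software lock-in detection for a polarisation measurement:
# demodulate the detector signal against quadrature references at the
# spin-rotation frequency and low-pass by averaging.
import numpy as np

def lock_in(signal, fs, f_ref):
    t = np.arange(len(signal)) / fs
    ref_c = np.cos(2 * np.pi * f_ref * t)
    ref_s = np.sin(2 * np.pi * f_ref * t)
    x = 2 * np.mean(signal * ref_c)     # in-phase component
    y = 2 * np.mean(signal * ref_s)     # quadrature component
    amplitude = np.hypot(x, y)          # proportional to beam polarisation
    phase = np.arctan2(y, x)            # spin-echo phase
    return amplitude, phase
```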
Lax-Friedrichs sweeping scheme for static Hamilton-Jacobi equations
NASA Astrophysics Data System (ADS)
Kao, Chiu Yen; Osher, Stanley; Qian, Jianliang
2004-05-01
We propose a simple, fast sweeping method based on the Lax-Friedrichs monotone numerical Hamiltonian to approximate viscosity solutions of arbitrary static Hamilton-Jacobi equations in any number of spatial dimensions. By using the Lax-Friedrichs numerical Hamiltonian, we can easily obtain the solution at a specific grid point in terms of its neighbors, so that a Gauss-Seidel type nonlinear iterative method can be utilized. Furthermore, by incorporating a group-wise causality principle into the Gauss-Seidel iteration by following a finite group of characteristics, we have an easy-to-implement, sweeping-type, and fast convergent numerical method. However, unlike other methods based on the Godunov numerical Hamiltonian, some computational boundary conditions are needed in the implementation. We give a simple recipe which enforces a version of discrete min-max principle. Some convergence analysis is done for the one-dimensional eikonal equation. Extensive 2-D and 3-D numerical examples illustrate the efficiency and accuracy of the new approach. To our knowledge, this is the first fast numerical method based on discretizing the Hamilton-Jacobi equation directly without assuming convexity and/or homogeneity of the Hamiltonian.
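A minimal sketch of the scheme for the 1D eikonal equation |u'| = f with u = 0 at both ends of [0, 1]: alternating-direction Gauss-Seidel sweeps apply the Lax-Friedrichs update u_i = (f_i − H(p_i)) h/σ + (u_{i+1} + u_{i−1})/2, here with the minimum taken against the previous value as a causality-preserving variant when starting from a large initial guess.

```python
# Lax-Friedrichs sweeping sketch for the 1D eikonal equation |u'| = f
# on [0, 1] with u(0) = u(1) = 0. For H(p) = |p|, sigma >= max|H'(p)| = 1.
import numpy as np

def lf_sweep_eikonal(f, sigma=1.0, n_sweeps=50):
    n = len(f)
    h = 1.0 / (n - 1)
    u = np.full(n, 1e6)
    u[0] = u[-1] = 0.0                                   # boundary condition
    for _ in range(n_sweeps):
        for rng in (range(1, n - 1), range(n - 2, 0, -1)):  # alternate sweeps
            for i in rng:
                p = (u[i + 1] - u[i - 1]) / (2 * h)      # central gradient
                cand = (f[i] - abs(p)) * h / sigma + 0.5 * (u[i + 1] + u[i - 1])
                u[i] = min(u[i], cand)                   # keep the causal value
    return u

u = lf_sweep_eikonal(np.ones(101))   # approximates the distance min(x, 1 - x)
```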
Pistorio, Salvatore G; Nigudkar, Swati S; Stine, Keith J; Demchenko, Alexei V
2016-10-07
The development of a useful methodology for simple, scalable, and transformative automation of oligosaccharide synthesis that easily interfaces with existing methods is reported. The automated synthesis can now be performed using accessible equipment where the reactants and reagents are delivered by the pump or the autosampler and the reactions can be monitored by the UV detector. The HPLC-based platform for automation is easy to setup and adapt to different systems and targets.
Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.
2015-01-01
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.
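A sketch of the NN-SMS12L idea under stated assumptions: daily flows at a donor gage are standardized by monthly means and standard deviations of log flow, then rescaled with the target site's monthly statistics (which, per the report, would come from the 24 regional regressions). All names below are hypothetical.

```python
# Sketch of streamflow transfer by monthly log-standardization (NN-SMS12L
# style). donor_q: daily flow Series with a DatetimeIndex; *_mu, *_sd:
# length-12 arrays of monthly mean/sd of log10 flow at each site.
import numpy as np
import pandas as pd

def transfer_streamflow(donor_q, donor_mu, donor_sd, target_mu, target_sd):
    month = donor_q.index.month - 1                      # 0-based month index
    z = (np.log10(donor_q.values) - donor_mu[month]) / donor_sd[month]
    return pd.Series(10 ** (target_mu[month] + z * target_sd[month]),
                     index=donor_q.index)
```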
ERIC Educational Resources Information Center
Bast, Lotus S.; Due, Pernille; Ersbøll, Annette K.; Damsgaard, Mogens T.; Andersen, Anette
2017-01-01
Background: Assessment of implementation is essential for the evaluation of school-based preventive activities. Interventions are more easily implemented in schools if detailed instructional manuals, lesson plans, and materials are provided; however, implementation may also be affected by other factors than the intervention itself--for example,…
Flexible Residential Smart Grid Simulation Framework
NASA Astrophysics Data System (ADS)
Xiang, Wang
Different scheduling and coordination algorithms controlling household appliances' operations can potentially lead to energy consumption reduction and/or load balancing in conjunction with different electricity pricing methods used in smart grid programs. In order to easily implement different algorithms and evaluate their efficiency against other ideas, a flexible simulation framework is desirable in both research and business fields. However, such a platform is currently lacking or underdeveloped. In this thesis, we provide a simulation framework to focus on demand side residential energy consumption coordination in response to different pricing methods. This simulation framework, equipped with an appliance consumption library using realistic values, aims to closely represent the average usage of different types of appliances. The simulation results of traditional usage yield close matching values compared to surveyed real life consumption records. Several sample coordination algorithms, pricing schemes, and communication scenarios are also implemented to illustrate the use of the simulation framework.
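As a toy illustration of the framework's core evaluation step (all prices and loads invented), a coordination algorithm can be scored by the cost of its appliance schedule under a time-of-use tariff:

```python
# Toy scoring of an hourly appliance schedule against a time-of-use
# tariff; numbers are illustrative, not from the thesis.
import numpy as np

hours = np.arange(24)
price = np.where((hours >= 17) & (hours < 21), 0.30, 0.10)  # $/kWh, peak 17-21

def daily_cost(schedule_kwh):
    """schedule_kwh: 24-element array of hourly consumption in kWh."""
    return float(np.dot(schedule_kwh, price))

naive = np.zeros(24); naive[18:20] = 1.2        # appliance run at peak
shifted = np.zeros(24); shifted[22:24] = 1.2    # coordinated to off-peak
print(daily_cost(naive), daily_cost(shifted))   # 0.72 vs 0.24
```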
NASA Astrophysics Data System (ADS)
Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads
2017-03-01
We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10^6 atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.
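As a generic NEGF sketch — not TRANSIESTA's or TBTRANS's actual API — the transmission a post-processing step evaluates for a two-electrode device is the Landauer expression T(E) = Tr[Γ_L G^r Γ_R G^a]:

```python
# Generic two-electrode NEGF transmission, given a device Hamiltonian H
# and retarded electrode self-energies sigma_L, sigma_R (square matrices).
import numpy as np

def transmission(E, H, sigma_L, sigma_R, eta=1e-6):
    n = H.shape[0]
    G_r = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)   # left broadening matrix
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)   # right broadening matrix
    return np.trace(gamma_L @ G_r @ gamma_R @ G_r.conj().T).real
```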
On the Quantification of Cellular Velocity Fields.
Vig, Dhruv K; Hamby, Alex E; Wolgemuth, Charles W
2016-04-12
The application of flow visualization in biological systems is becoming increasingly common in studies ranging from intracellular transport to the movements of whole organisms. In cell biology, the standard method for measuring cell-scale flows and/or displacements has been particle image velocimetry (PIV); however, alternative methods exist, such as optical flow constraint. Here we review PIV and optical flow, focusing on the accuracy and efficiency of these methods in the context of cellular biophysics. Although optical flow is not as common, a relatively simple implementation of this method can outperform PIV and is easily augmented to extract additional biophysical/chemical information such as local vorticity or net polymerization rates from speckle microscopy. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
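A minimal sketch of the optical flow constraint approach in its Lucas-Kanade least-squares form, solving I_x u + I_y v + I_t = 0 over a local window (window radius and inputs are illustrative):

```python
# Lucas-Kanade style optical flow at one point: solve the brightness
# constancy constraint in least squares over a (2r+1)^2 window.
import numpy as np

def lucas_kanade(im0, im1, y, x, r=7):
    Iy, Ix = np.gradient(im0.astype(float))          # spatial gradients
    It = im1.astype(float) - im0.astype(float)       # temporal gradient
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # The (u, v) field over all points could then be post-processed for
    # local vorticity or net polymerization rates, as the review notes.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```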
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, J.H.
1984-05-01
Model design, implementation, and quality assurance procedures can have a significant impact on the effectiveness and long-term utility of any modeling approach. The Regional Oxidant Modeling System (ROMS) is exceptionally complex because it treats all chemical and physical processes thought to affect ozone concentration on a regional scale. Thus, to effectively illustrate useful design and implementation techniques, this paper describes the general modeling framework which forms the basis of the ROMS. This framework is flexible enough to allow straightforward update or replacement of the chemical kinetics mechanism and/or any theoretical formulations of the physical processes. Use of the Jackson Structured Programming (JSP) method to implement this modeling framework has not only increased programmer productivity and the quality of the resulting programs, but also has provided standardized program design, dynamic documentation, and easily maintainable and transportable code. A summary of the JSP method is presented to encourage modelers to pursue this technique in their own model development efforts. In addition, since data preparation is such an integral part of a successful modeling system, the ROMS processor network is described with emphasis on the internal quality control techniques.
Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel
2015-01-01
Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industry) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users who want to develop, use and maintain predictive models in corporate environments. The technologies used by eTOXlab (web services, VM, object-oriented programming) provide an elegant solution to common practical issues; the system can be installed easily in heterogeneous environments and integrates well with other software. Moreover, the system provides a simple and safe solution for building models with confidential structures that can be shared without disclosing sensitive information.
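A hypothetical sketch — not eTOXlab's actual class names — of the inheritance pattern the abstract describes: a child model overrides a single step and inherits everything else, including exposure as a prediction service:

```python
# Hypothetical parent/child model classes illustrating override-based
# customization; names and methods are invented for illustration only.
class BaseQSARModel:
    def build(self, structures, activities):
        X = self.compute_descriptors(structures)
        self.fit(X, activities)

    def compute_descriptors(self, structures):
        raise NotImplementedError

    def fit(self, X, y):
        ...  # default learner, validation, persistence, service exposure

class MyLogPModel(BaseQSARModel):
    def compute_descriptors(self, structures):
        # Override only the descriptor step; fitting, prediction and the
        # web-service behaviour are inherited from the parent model.
        return [[s.mol_weight, s.logp] for s in structures]
```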
Scaling of mode shapes from operational modal analysis using harmonic forces
NASA Astrophysics Data System (ADS)
Brandt, A.; Berardengo, M.; Manzoni, S.; Cigada, A.
2017-10-01
This paper presents a new method for scaling mode shapes obtained by means of operational modal analysis. The method is capable of scaling mode shapes on any structure, including structures with closely coupled modes, and can be used in the presence of ambient vibration from traffic or wind loads, etc. Harmonic excitation can be accomplished relatively easily using general-purpose actuators, even at the force levels necessary for driving large structures such as bridges and high-rise buildings. The signal processing necessary for mode shape scaling by the proposed method is simple, and the method can easily be implemented in most measurement systems capable of generating a sine wave output. The tests necessary to scale the modes are short compared to typical operational modal analysis test time. The proposed method is thus easy to apply and inexpensive relative to some other methods for scaling mode shapes that are available in the literature. Although it is not necessary per se, we propose to excite the structure at, or close to, the eigenfrequencies of the modes to be scaled, since this provides better signal-to-noise ratio in the response sensors, thus permitting the use of smaller actuators. An extensive experimental activity on a real structure was carried out and the results reported demonstrate the feasibility and accuracy of the proposed method. Since the method utilizes harmonic excitation for the mode shape scaling, we propose to call the method OMAH.
Arridge, S R; Dehghani, H; Schweiger, M; Okada, E
2000-01-01
We present a method for handling nonscattering regions within diffusing domains. The method develops from an iterative radiosity-diffusion approach using Green's functions that was computationally slow. Here we present an improved implementation using a finite element method (FEM) that is direct. The fundamental idea is to introduce extra equations into the standard diffusion FEM to represent nondiffusive light propagation across a nonscattering region. By appropriate mesh node ordering the computational time is not much greater than for diffusion alone. We compare results from this method with those from a discrete ordinate transport code, and with Monte Carlo calculations. The agreement is very good, and, in addition, our scheme allows us to easily model time-dependent and frequency domain problems.
Laser penetration spike welding: a welding tool enabling novel process and design opportunities
NASA Astrophysics Data System (ADS)
Dijken, Durandus K.; Hoving, Willem; De Hosson, J. Th. M.
2002-06-01
A novel laser welding method for sheet metal is presented. This laser spike welding method is capable of bridging large gaps between sheet metal plates. Novel constructions can be designed and manufactured; examples are lightweight metal-epoxy multilayers and constructions having additional strength with respect to rigidity and impact resistance. The capability to bridge large gaps allows higher dimensional tolerances in production. The required laser systems are commercially available and are easily implemented in existing production lines. The lasers are highly reliable, the resulting spike welds are realized quickly, and the cost price per weld is very low.
The use of music in aged care facilities: A mixed-methods study.
Garrido, Sandra; Dunne, Laura; Perz, Janette; Chang, Esther; Stevens, Catherine J
2018-02-01
Music is frequently used in aged care, being easily accessible and cost-effective. Research indicates that certain types of musical engagement hold greater benefits than others. However, it is not clear how effectively music is utilized in aged care facilities and what the barriers are to its further use. This study used a mixed-methods paradigm, surveying 46 aged care workers and conducting in-depth interviews with 5, to explore how music is used in aged care facilities in Australia, staff perceptions of the impact of music on residents, and the barriers to more effective implementation of music in aged care settings.
The lab without walls: a deployable approach to tropical infectious diseases.
Inglis, Timothy J J
2013-04-01
The Laboratory Without Walls is a modular field application of molecular biology that provides clinical laboratory support in resource-limited, remote locations. The current repertoire arose from early attempts to deliver clinical pathology and public health investigative services in remote parts of tropical Australia, to address the shortcomings of conventional methods when faced with emerging infectious diseases. Advances in equipment platforms and reagent chemistry have enabled rapid progress, but also ensure the Laboratory Without Walls is subject to continual improvement. Although new molecular biology methods may lead to more easily deployable clinical laboratory capability, logistic and technical governance issues continue to act as important constraints on wider implementation.
A computer program for predicting nonlinear uniaxial material responses using viscoplastic models
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Thompson, R. L.
1984-01-01
A computer program was developed for predicting nonlinear uniaxial material responses using viscoplastic constitutive models. Four specific models, i.e., those due to Miller, Walker, Krieg-Swearengen-Rhode, and Robinson, are included. Any other unified model is easily implemented into the program in the form of subroutines. Analysis features include stress-strain cycling, creep response, stress relaxation, thermomechanical fatigue loop, or any combination of these responses. An outline is given on the theoretical background of uniaxial constitutive models, analysis procedure, and numerical integration methods for solving the nonlinear constitutive equations. In addition, a discussion on the computer program implementation is also given. Finally, seven numerical examples are included to demonstrate the versatility of the computer program developed.
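A sketch of the plug-in structure described above: the unified model enters as a subroutine returning the inelastic strain rate. The power-law model below is purely illustrative, not one of the four cited models, and the integration is a simple explicit Euler step rather than the program's actual scheme.

```python
# Uniaxial strain-controlled response with a pluggable viscoplastic
# model; the power-law flow rule is illustrative only.
import numpy as np

def inelastic_rate(stress, A=1e-12, n=5.0):
    return A * np.sign(stress) * abs(stress) ** n   # illustrative flow rule

def uniaxial_response(strain_history, dt, E=200e3):
    stress, eps_in, out = 0.0, 0.0, []
    for eps_tot in strain_history[1:]:
        # Explicit Euler: accumulate inelastic strain, then update stress
        eps_in += inelastic_rate(stress) * dt
        stress = E * (eps_tot - eps_in)
        out.append(stress)
    return np.array(out)
```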
Fiber pushout test: A three-dimensional finite element computational simulation
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Chamis, Christos C.
1990-01-01
A fiber pushthrough process was computationally simulated using a three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict the fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from the pushthrough load can easily be separated into its two components: one that comes from frictional stresses, and the other that comes from chemical adhesion between fiber and matrix together with the mechanical interlocking that develops due to shrinkage of the composite during the phase change in processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on interfacial bond strength, and to interpret interfacial bond quality.
Acrylic Resin Molding Based Head Fixation Technique in Rodents.
Roh, Mootaek; Lee, Kyungmin; Jang, Il-Sung; Suk, Kyoungho; Lee, Maan-Gee
2016-01-12
Head fixation is a technique of immobilizing an animal's head by attaching a head-post to the skull for rigid clamping. Traditional head fixation requires surgical attachment of metallic frames to the skull. The attached frames are then clamped to a stationary platform, resulting in immobilization of the head. However, metallic frames for head fixation have been technically difficult to design and implement in a general laboratory environment. In this study, we provide a novel head fixation method. Using a custom-made head fixation bar, a head mounter is constructed during implantation surgery. After the application of acrylic resin for affixing implants such as electrodes and cannulae to the skull, additional resin is applied on top of that to build a mold matching the port of the fixation bar. The molded head mounter serves as a guide rail: investigators conveniently fixate the animal's head by inserting the head mounter into the port of the fixation bar. This method is easily applicable whenever implantation surgery using dental acrylics is necessary, and might be useful for laboratories that cannot easily fabricate CNC-machined metal head-posts.
Detection of microbial concentration in ice-cream using the impedance technique.
Grossi, M; Lanzoni, M; Pompei, A; Lazzarini, R; Matteuzzi, D; Riccò, B
2008-06-15
The detection of microbial concentration, essential for safe and high quality food products, is traditionally made with the plate count technique, which is reliable but also slow and not easily realized in automatic form, as required for direct use in industrial machines. To this purpose, the method based on impedance measurements represents an attractive alternative, since it can produce results in about 10 h, instead of the 24-48 h needed by standard plate counts, and can be easily realized in automatic form. In this paper such a method has been experimentally studied in the case of ice-cream products. In particular, all main ice-cream compositions of real interest have been considered and no nutrient media has been used to dilute the samples. A measurement set-up has been realized using benchtop instruments for impedance measurements on samples whose bacteria concentration was independently measured by means of standard plate counts. The obtained results clearly indicate that impedance measurement represents a feasible and reliable technique to detect total microbial concentration in ice-cream, suitable to be implemented as an embedded system for industrial machines.
3D scan line method for identifying void fabric of granular materials
NASA Astrophysics Data System (ADS)
Theocharis, Alexandros I.; Vairaktaris, Emmanouil; Dafalias, Yannis F.
2017-06-01
Among the approaches for measuring the void phase of porous or fractured media, the scan line approach is a simplified "graphical" method, mainly used in image-processing-related procedures. In soil mechanics, the application of the scan line method is related to soil fabric, which is important in characterizing the anisotropic mechanical response of soils. Void fabric is of particular interest, since graphical approaches are well defined experimentally and most of them, like the scan line method, can also be easily used in numerical experiments. This is in contrast to the definition of fabric based on contact normal vectors, which are extremely difficult to determine, especially in physical experiments. The scan line method was proposed by Oda et al [1] and later implemented by Ghedia and O'Sullivan [2]. A modified method based on DEM analysis instead of image measurements of fabric has been previously proposed and implemented by the authors in a 2D scheme [3-4]. In this work, a 3D extension of the modified scan line definition is presented using PFC 3D®. The results show trends clearly similar to the 2D case and the same behaviour of fabric anisotropy.
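A 2D sketch of the scan line measurement on a binary image (True marking void): intercept lengths are collected along parallel lines, and repeating this over a fan of orientations yields the directional statistics used to characterize void fabric. The details below are illustrative, not the authors' exact procedure.

```python
# Collect void intercept lengths along horizontal scan lines of a
# binary image (True = void). Rotating the image and repeating gives
# the directional distribution behind a void fabric measure.
import numpy as np

def void_intercepts_horizontal(img):
    lengths = []
    for row in img:
        run = 0
        for v in row:
            if v:
                run += 1
            elif run:
                lengths.append(run); run = 0
        if run:
            lengths.append(run)   # run touching the image edge
    return lengths
```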
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
NASA Astrophysics Data System (ADS)
Nagarajan, K.; Shashidharan Nair, C. K.
2007-07-01
The channelled spectrum employing polarized light interference is a very convenient method for the study of dispersion of birefringence. However, while using this method, the absolute order of the polarized light interference fringes cannot be determined easily. Approximate methods are therefore used to estimate the order. One of the approximations is that the dispersion of birefringence across neighbouring integer order fringes is negligible. In this paper, we show how this approximation can cause errors. A modification is reported whereby the error in the determination of absolute fringe order can be reduced using fractional orders instead of integer orders. The theoretical background for this method supported with computer simulation is presented. An experimental arrangement implementing these modifications is described. This method uses a Constant Deviation Spectrometer (CDS) and a Soleil Babinet Compensator (SBC).
Akins, Ralitsa B.; Handal, Gilbert A.
2009-01-01
Objective Although there is an expectation for outcomes-oriented training in residency programs, the reality is that few guidelines and examples exist as to how to provide this type of education and training. We aimed to improve patient care outcomes in our pediatric residency program by using quality improvement (QI) methods, tools, and approaches. Methods A series of QI projects were implemented over a 3-year period in a pediatric residency program to improve patient care outcomes and teach the residents how to use QI methods, tools, and approaches. Residents experienced practice-based learning and systems-based assessment through group projects and review of their own patient outcomes. Resident QI experiences were reviewed quarterly by the program director and were a mandatory part of resident training portfolios. Results Using QI methodology, we were able to improve management of children with obesity, to achieve high compliance with the national patient safety goals, improve the pediatric hotline service, and implement better patient flow in resident continuity clinic. Conclusion Based on our experiences, we conclude that to successfully implement QI projects in residency programs, QI techniques must be formally taught, the opportunities for resident participation must be multiple and diverse, and QI outcomes should be incorporated in resident training and assessment so that they experience the benefits of the QI intervention. The lessons learned from our experiences, as well as the projects we describe, can be easily deployed and implemented in other residency programs. PMID:21975995
NASA Astrophysics Data System (ADS)
Pescaru, A.; Oanta, E.; Axinte, T.; Dascalescu, A.-D.
2015-11-01
Computer aided engineering is based on models of the phenomena which are expressed as algorithms. The implementations of the algorithms are usually software applications processing a large volume of numerical data, regardless of the size of the input data. Thus, finite element method applications used to have an input data generator that created the entire volume of geometrical data, starting from the initial geometrical information and the parameters stored in the input data file. Moreover, there were several data processing stages, such as: renumbering of the nodes, meant to minimize the bandwidth of the system of equations to be solved; computation of the equivalent nodal forces; computation of the element stiffness matrix; assembly of the system of equations; solving the system of equations; and computation of the secondary variables. Modern software applications use pre-processing and post-processing programs to easily handle the information. Beyond this example, CAE applications use various stages of complex computation, the accuracy of the final results being of particular interest. Over time, the development of CAE applications has been a constant concern of the authors, and the accuracy of the results was a very important target. The paper presents the various computing techniques which were imagined and implemented in the resulting applications: finite element method programs, finite difference element method programs, applied general numerical methods applications, data generators, graphical applications, and experimental data reduction programs. In this context, the use of extended precision data types was one of the solutions, the limitations being imposed by the size of the memory which may be allocated. To avoid memory-related problems the data was stored in files. To minimize execution time, part of the file was accessed using dynamic memory allocation facilities. One of the most important outcomes of the paper is the design of a library which includes the optimized solutions previously tested, that may be used for the easy development of original CAE cross-platform applications. Last but not least, besides the generality of the data type solutions, the work targets the development of a software library for the easy development of node-based CAE applications, each node having several known or unknown parameters, the system of equations being automatically generated and solved.
Note: A simple image processing based fiducial auto-alignment method for sample registration.
Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne
2015-08-01
A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
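The paper's implementation uses MATLAB/LabVIEW; an equivalent, assumed workflow with OpenCV's Hough circle transform might look like the following (file name and parameter values are hypothetical):

```python
# Locate circular fiducials and report center and diameter in pixels;
# a fiducial of known physical diameter then gives the distance
# self-calibration described in the abstract.
import cv2
import numpy as np

img = cv2.imread("fiducials.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)                      # suppress speckle noise
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=80)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"center=({x}, {y}), diameter={2 * r} px")
```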
TECHNICAL DESIGN NOTE: Picosecond resolution programmable delay line
NASA Astrophysics Data System (ADS)
Suchenek, Mariusz
2009-11-01
The note presents implementation of a programmable delay line for digital signals. The tested circuit has a subnanosecond delay range programmable with a resolution of picoseconds. Implementation of the circuit was based on low-cost components, easily available on the market.
1978-12-01
The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a₀ + a₁t + a₂t²). The thinning programs are competitive in both execution
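A sketch of the basic thinning algorithm: candidate events are generated from a homogeneous Poisson process at a dominating rate λ* ≥ λ(t) and accepted with probability λ(t)/λ*. The quadratic-exponential intensity from the abstract is used as the example; the coefficient values are invented.

```python
# Thinning (acceptance-rejection) simulation of a nonhomogeneous
# Poisson process with intensity lam(t) on [0, T].
import numpy as np

def thinning(lam, lam_max, T, rng=np.random.default_rng()):
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)   # homogeneous candidate gap
        if t > T:
            return np.array(events)
        if rng.random() < lam(t) / lam_max:   # thinning acceptance test
            events.append(t)

a0, a1, a2 = 0.0, 1.0, -0.5                   # illustrative coefficients
lam = lambda t: np.exp(a0 + a1 * t + a2 * t ** 2)
lam_max = float(lam(-a1 / (2 * a2)))          # peak of lam at t = -a1/(2*a2)
times = thinning(lam, lam_max, T=3.0)
```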
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
JCell--a Java-based framework for inferring regulatory networks from time series data.
Spieth, C; Supper, J; Streichert, F; Speer, N; Zell, A
2006-08-15
JCell is a Java-based application for reconstructing gene regulatory networks from experimental data. The framework provides several algorithms to identify genetic and metabolic dependencies based on experimental data conjoint with mathematical models to describe and simulate regulatory systems. Owing to the modular structure, researchers can easily implement new methods. JCell is a pure Java application with additional scripting capabilities and thus widely usable, e.g. on parallel or cluster computers. The software is freely available for download at http://www-ra.informatik.uni-tuebingen.de/software/JCell.
2010-06-01
Different approaches were used to model a MEMS DM as a grating in Zemax software. First, a 2D grating was directly modeled as a combination of two 1D...method of modeling a MEMS DM in Zemax was implemented by combining two 1D gratings. Due to the fact that ZEMAX allows to easily use 1D physical...optics shows the far field diffraction pattern, which in the Zemax geometrical model shows up as distinct spots, each one corresponding to a specific
autokonf - A Configuration Script Generator Implemented in Perl
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reus, J F
This paper discusses configuration scripts in general and the scripting language issues involved. A brief description of GNU autoconf is provided along with a contrasting overview of autokonf, a configuration script generator implemented in Perl, whose macros are implemented in Perl, generating a configuration script in Perl. It is very portable, easily extensible, and readily mastered.
A Markovian Entropy Measure for the Analysis of Calcium Activity Time Series.
Marken, John P; Halleran, Andrew D; Rahman, Atiqur; Odorizzi, Laura; LeFew, Michael C; Golino, Caroline A; Kemper, Peter; Saha, Margaret S
2016-01-01
Methods to analyze the dynamics of calcium activity often rely on visually distinguishable features in time series data such as spikes, waves, or oscillations. However, systems such as the developing nervous system display a complex, irregular type of calcium activity which makes the use of such methods less appropriate. Instead, for such systems there exists a class of methods (including information theoretic, power spectral, and fractal analysis approaches) which use more fundamental properties of the time series to analyze the observed calcium dynamics. We present a new analysis method in this class, the Markovian Entropy measure, an easily implemented calcium time series analysis method that represents the observed calcium activity as a realization of a Markov process and describes its dynamics in terms of the level of predictability underlying the transitions between the states of the process. We applied our method and other commonly used calcium analysis methods to a dataset from Xenopus laevis neural progenitors, which displays irregular calcium activity, and a dataset from murine synaptic neurons, which displays activity time series that are well-described by visually distinguishable features. We find that the Markovian Entropy measure is able to distinguish between biologically distinct populations in both datasets, and that it can separate biologically distinct populations to a greater extent than other methods in the dataset exhibiting irregular calcium activity. These results support the benefit of using the Markovian Entropy measure to analyze calcium dynamics, particularly for studies using time series data which do not exhibit easily distinguishable features.
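A rough sketch of a Markovian entropy computation, assuming quantile-based state discretization; the paper's exact binning and normalization may differ:

```python
# Discretize a calcium trace into states, estimate the transition
# matrix, and compute the entropy rate of the resulting Markov chain.
import numpy as np

def markovian_entropy(trace, n_states=8):
    edges = np.quantile(trace, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(trace, edges)               # states in 0..n_states-1
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    pi = counts.sum(axis=1) / counts.sum()           # empirical state frequencies
    logP = np.where(P > 0, np.log2(P), 0.0)
    return float(-(pi[:, None] * P * logP).sum())    # bits per transition
```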
Measures of Agreement Between Many Raters for Ordinal Classifications
Nelson, Kerrie P.; Edwards, Don
2015-01-01
Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to wide variability observed between physicians’ ratings, many large-scale studies have been conducted to quantify agreement between multiple experts’ ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients’ test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid
2004-01-01
Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for use in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented by using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
A novel in vivo method for lung segment movement tracking
NASA Astrophysics Data System (ADS)
Leira, H. O.; Tangen, G. A.; Hofstad, E. F.; Langø, T.; Amundsen, T.
2012-02-01
Knowledge about lung movement in health and disease is sparse. Current evaluation methods, such as CT, MRI and external view, have significant limitations. To study respiratory movement for image guided tumour diagnostics and respiratory physiology, we needed a method that overcomes these limitations. We fitted balloon catheters with electromagnetic sensors, and placed them in lung lobes of ventilated pigs. The sensors sensed their position at 40 Hz in an electromagnetic tracking field with a precision of ∼0.5 mm. The method was evaluated by recording sensor movement in different body positions and at different tidal volumes. No ‘gold standard’ exists for lung segment tracking, so our results were compared to ‘common knowledge’. The sensors were easily placed, showed no clinically relevant position drift and yielded sub-millimetre accuracy. Our measurements fit ‘common knowledge’, as increased ventilation volume increased respiratory movement, and the right lung moved significantly less in the right than in the left lateral position. The novel method for tracking lung segment movements during respiration was easy to implement and yielded high spatial and temporal resolution, and the equipment parts are reusable. It is easy to implement as a research tool for lung physiology, navigated bronchoscopy and radiation therapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien, C; Elgorriaga, I; McConaghy, C
2001-07-03
Emerging CMOS and MEMS technologies enable the implementation of a large number of wireless distributed microsensors that can be easily and rapidly deployed to form highly redundant, self-configuring, and ad hoc sensor networks. To facilitate ease of deployment, these sensors should operate on battery for extended periods of time. A particular challenge in maintaining extended battery lifetime lies in achieving communications with low power. This paper presents a direct-sequence spread-spectrum modem architecture that provides robust communications for wireless sensor networks while dissipating very low power. The modem architecture has been verified in an FPGA implementation that dissipates only 33 mW for both transmission and reception. The implementation can be easily mapped to an ASIC technology, with an estimated power performance of less than 1 mW.
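A toy sketch of the direct-sequence principle — not the paper's modem design: each data bit is spread by a pseudo-noise (PN) chip sequence and recovered by correlation, which is what confers robustness to noise and interference. The PN code below is arbitrary.

```python
# Direct-sequence spreading and correlation-based despreading of a
# short bit stream; the PN chip sequence is invented for illustration.
import numpy as np

pn = np.array([1, -1, 1, 1, -1, 1, -1, -1, -1, 1, 1])   # length-11 PN code

def spread(bits):
    symbols = 2 * np.asarray(bits) - 1                   # map 0/1 -> -1/+1
    return (symbols[:, None] * pn).ravel()               # one code per bit

def despread(chips):
    corr = chips.reshape(-1, len(pn)) @ pn               # correlate with code
    return (corr > 0).astype(int)

noisy = spread([1, 0, 1]) + np.random.normal(0, 1.0, 33)
print(despread(noisy))                                   # recovers [1 0 1]
```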
Aerodynamic shape optimization of wing and wing-body configurations using control theory
NASA Technical Reports Server (NTRS)
Reuther, James; Jameson, Antony
1995-01-01
This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.
An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]
NASA Technical Reports Server (NTRS)
Buratynski, E. K.; Caughey, D. A.
1984-01-01
An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.
Fast Laplace solver approach to pore-scale permeability
NASA Astrophysics Data System (ADS)
Arns, C. H.; Adler, P. M.
2018-02-01
We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using an approximation based on the Poiseuille equation to assign local conductivities and a Laplace solver to compute the flow. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
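A 2D sketch of the approach under stated assumptions: the Euclidean distance map of the pore space sets a Poiseuille-like local conductivity k ∝ d², and a Laplace problem for the pressure is solved on the pore pixels with a unit pressure drop imposed crudely through extra conductances at the inlet and outlet columns. A connected pore space is assumed; the permeability would follow from the net flux through the inlet face.

```python
# Distance-map-based Laplace solve for the pressure in a 2D pore image
# (pore: boolean array, True = fluid). Illustrative, not the paper's code.
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def pressure_field(pore):
    d = distance_transform_edt(pore)          # distance to nearest solid
    k = d ** 2                                # local conductivity ~ d^2
    ny, nx = pore.shape
    idx = -np.ones(pore.shape, int)
    idx[pore] = np.arange(pore.sum())
    A = lil_matrix((pore.sum(), pore.sum()))
    b = np.zeros(pore.sum())
    for (y, x), i in np.ndenumerate(idx):
        if i < 0:
            continue
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            yy, xx = y + dy, x + dx
            if 0 <= yy < ny and 0 <= xx < nx and idx[yy, xx] >= 0:
                g = 2 * k[y, x] * k[yy, xx] / (k[y, x] + k[yy, xx] + 1e-30)
                A[i, i] -= g                  # harmonic-mean link conductance
                A[i, idx[yy, xx]] += g
        if x == 0:                            # inlet: p = 1 via unit conductance
            A[i, i] -= 1.0; b[i] -= 1.0
        elif x == nx - 1:                     # outlet: p = 0
            A[i, i] -= 1.0
    return spsolve(A.tocsr(), b), idx
```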
RNA interactome capture in yeast.
Beckmann, Benedikt M
2017-04-15
RNA-binding proteins (RBPs) are key players in post-transcriptional regulation of gene expression in eukaryotic cells. To be able to unbiasedly identify RBPs in Saccharomyces cerevisiae, we developed a yeast RNA interactome capture protocol which employs RNA labeling, covalent UV crosslinking of RNA and proteins at 365 nm wavelength (photoactivatable-ribonucleoside-enhanced crosslinking, PAR-CL) and finally purification of the protein-bound mRNA. The method can be easily implemented in common workflows and takes about 3 days to complete. Next to a comprehensive explanation of the method, we focus on our findings about the choice of crosslinking in yeast and discuss the rationale of individual steps in the protocol. Copyright © 2016. Published by Elsevier Inc.
Game On, Science - How Video Game Technology May Help Biologists Tackle Visualization Challenges
Lv, Zhihan; Tek, Alex; Da Silva, Franck; Empereur-mot, Charly; Chavent, Matthieu; Baaden, Marc
2013-01-01
The video games industry develops ever more advanced technologies to improve rendering, image quality, ergonomics and user experience of their creations providing very simple to use tools to design new games. In the molecular sciences, only a small number of experts with specialized know-how are able to design interactive visualization applications, typically static computer programs that cannot easily be modified. Are there lessons to be learned from video games? Could their technology help us explore new molecular graphics ideas and render graphics developments accessible to non-specialists? This approach points to an extension of open computer programs, not only providing access to the source code, but also delivering an easily modifiable and extensible scientific research tool. In this work, we will explore these questions using the Unity3D game engine to develop and prototype a biological network and molecular visualization application for subsequent use in research or education. We have compared several routines to represent spheres and links between them, using either built-in Unity3D features or our own implementation. These developments resulted in a stand-alone viewer capable of displaying molecular structures, surfaces, animated electrostatic field lines and biological networks with powerful, artistic and illustrative rendering methods. We consider this work as a proof of principle demonstrating that the functionalities of classical viewers and more advanced novel features could be implemented in substantially less time and with less development effort. Our prototype is easily modifiable and extensible and may serve others as starting point and platform for their developments. A webserver example, standalone versions for MacOS X, Linux and Windows, source code, screen shots, videos and documentation are available at the address: http://unitymol.sourceforge.net/. PMID:23483961
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-01-01
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant functions (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. PMID:26921716
A Comparison of Ffowcs Williams-Hawkings Solvers for Airframe Noise Applications
NASA Technical Reports Server (NTRS)
Lockard, David P.
2002-01-01
This paper presents a comparison between two implementations of the Ffowcs Williams and Hawkings equation for airframe noise applications. Airframe systems are generally moving at constant speed and not rotating, so these conditions are used in the current investigation. Efficient and easily implemented forms of the equations applicable to subsonic, rectilinear motion of all acoustic sources are used. The assumptions allow the derivation of a simple form of the equations in the frequency-domain, and the time-domain method uses the restrictions on the motion to reduce the work required to find the emission time. The comparison between the frequency domain method and the retarded time formulation reveals some of the advantages of the different approaches. Both methods are still capable of predicting the far-field noise from nonlinear near-field flow quantities. Because of the large input data sets and potentially large numbers of observer positions of interest in three-dimensional problems, both codes utilize the message passing interface to divide the problem among different processors. Example problems are used to demonstrate the usefulness and efficiency of the two schemes.
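For orientation, the general Ffowcs Williams and Hawkings equation that both implementations start from can be written, in a standard textbook transcription (not the paper's specialized frequency-domain form), as:

```latex
\left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}-\nabla^{2}\right)p'
  = \frac{\partial}{\partial t}\left[\rho_{0}v_{n}\,\delta(f)\right]
  - \frac{\partial}{\partial x_{i}}\left[\ell_{i}\,\delta(f)\right]
  + \frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\left[T_{ij}\,H(f)\right]
```

where f = 0 defines the integration surface, v_n is the surface-normal velocity, l_i the loading exerted on the fluid, T_ij the Lighthill stress tensor, and delta and H the Dirac delta and Heaviside functions. The paper's assumptions of subsonic, constant-speed rectilinear motion simplify the Green's-function treatment of these three source terms.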
Koul, Atesh; Becchio, Cristina; Cavallo, Andrea
2017-12-12
Recent years have seen an increased interest in machine learning-based predictive methods for analyzing quantitative behavioral data in experimental psychology. While these methods can achieve relatively greater sensitivity compared to conventional univariate techniques, they still lack an established and accessible implementation. The aim of current work was to build an open-source R toolbox - "PredPsych" - that could make these methods readily available to all psychologists. PredPsych is a user-friendly, R toolbox based on machine-learning predictive algorithms. In this paper, we present the framework of PredPsych via the analysis of a recently published multiple-subject motion capture dataset. In addition, we discuss examples of possible research questions that can be addressed with the machine-learning algorithms implemented in PredPsych and cannot be easily addressed with univariate statistical analysis. We anticipate that PredPsych will be of use to researchers with limited programming experience not only in the field of psychology, but also in that of clinical neuroscience, enabling computational assessment of putative bio-behavioral markers for both prognosis and diagnosis.
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
1994-01-01
It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
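The combined single- and double-layer representation mentioned above takes, in our notation, the generic form used for the two-dimensional exterior Helmholtz problem:

```latex
u(\mathbf{x}) = \int_{\Gamma}\left[\frac{\partial G(\mathbf{x},\mathbf{y})}
    {\partial n(\mathbf{y})} - i\eta\, G(\mathbf{x},\mathbf{y})\right]
    \phi(\mathbf{y})\, ds(\mathbf{y}), \qquad
G(\mathbf{x},\mathbf{y}) = \frac{i}{4} H_{0}^{(1)}\bigl(k\,\lvert\mathbf{x}-\mathbf{y}\rvert\bigr),
```

where H_0^{(1)} is the Hankel function of the first kind and phi is the unknown density; a nonzero coupling parameter eta is what removes the spurious interior resonances and guarantees uniqueness at all frequencies.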
Description and use of LSODE, the Livermore Solver for Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Hindmarsh, Alan C.
1993-01-01
LSODE, the Livermore Solver for Ordinary Differential Equations, is a package of FORTRAN subroutines designed for the numerical solution of the initial value problem for a system of ordinary differential equations. It is particularly well suited for 'stiff' differential systems, for which the backward differentiation formula method of orders 1 to 5 is provided. The code includes the Adams-Moulton method of orders 1 to 12, so it can be used for nonstiff problems as well. In addition, the user can easily switch methods to increase computational efficiency for problems that change character. For both methods a variety of corrector iteration techniques is included in the code. Also, to minimize computational work, both the step size and method order are varied dynamically. This report presents complete descriptions of the code and integration methods, including their implementation. It also provides a detailed guide to the use of the code, as well as an illustrative example problem.
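As a quick illustration of the stiff/nonstiff method choice LSODE provides, here is a hedged sketch using SciPy, whose BDF and LSODA solvers descend from the same Hindmarsh ODEPACK family; LSODE itself is the Fortran package documented in the report.

```python
# The stiff/nonstiff trade-off LSODE addresses, shown with SciPy's
# ODEPACK-descended solvers on the Robertson chemical kinetics problem.
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
            0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
            3e7 * y2 ** 2]

y0 = [1.0, 0.0, 0.0]
t_span = (0.0, 1e5)

# Backward differentiation formulas (orders 1-5): suited to stiff systems.
stiff = solve_ivp(robertson, t_span, y0, method="BDF", rtol=1e-6, atol=1e-10)

# LSODA switches between Adams (nonstiff) and BDF (stiff) automatically,
# much as a user of LSODE would switch methods by hand.
auto = solve_ivp(robertson, t_span, y0, method="LSODA", rtol=1e-6, atol=1e-10)

print(stiff.y[:, -1])   # the two solvers should agree closely
print(auto.y[:, -1])
```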
Preliminary Tests For Development Of A Non-Pertechnetate Analysis Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diprete, D.; McCabe, D.
2016-09-28
The objective of this task was to develop a non-pertechnetate analysis method that the 222-S lab could easily implement. The initial scope involved working with 222-S laboratory personnel to adapt the existing Tc analytical method to fractionate the non-pertechnetate and pertechnetate. SRNL then developed and tested a method using commercial sorbents containing Aliquat® 336 to extract the pertechnetate (thereby separating it from non-pertechnetate), followed by oxidation, extraction, and stripping steps, and finally analysis by beta counting and mass spectrometry. Several additional items were partially investigated, including impacts of a 137Cs removal step. The method was initially tested on SRS tank waste samples to determine its viability. Although SRS tank waste does not contain non-pertechnetate, testing with it was useful to investigate the compatibility, separation efficiency, interference removal efficacy, and method sensitivity.
Comparative homology agreement search: An effective combination of homology-search methods
Alam, Intikhab; Dress, Andreas; Rehmsmeier, Marc; Fuellen, Georg
2004-01-01
Many methods have been developed to search for homologous members of a protein family in databases, and the reliability of results and conclusions may be compromised if only one method is used, neglecting the others. Here we introduce a general scheme for combining such methods. Based on this scheme, we implemented a tool called comparative homology agreement search (chase) that integrates different search strategies to obtain a combined “E value.” Our results show that a consensus method integrating distinct strategies easily outperforms any of its component algorithms. More specifically, an evaluation based on the Structural Classification of Proteins database reveals that, on average, a coverage of 47% can be obtained in searches for distantly related homologues (i.e., members of the same superfamily but not the same family, which is a very difficult task), accepting only 10 false positives, whereas the individual methods obtain a coverage of 28–38%. PMID:15367730
Andersen, Tonni Grube; Nintemann, Sebastian J.; Marek, Magdalena; Halkier, Barbara A.; Schulz, Alexander; Burow, Meike
2016-01-01
When investigating interactions between two proteins with complementary reporter tags in yeast two-hybrid or split GFP assays, it remains troublesome to discriminate true- from false-negative results and challenging to compare the level of interaction across experiments. This leads to decreased sensitivity and renders analysis of weak or transient interactions difficult to perform. In this work, we describe the development of reporters that can be chemically induced to dimerize independently of the investigated interactions and thus alleviate these issues. We incorporated our reporters into the widely used split ubiquitin-, bimolecular fluorescence complementation (BiFC)- and Förster resonance energy transfer (FRET)- based methods and investigated different protein-protein interactions in yeast and plants. We demonstrate the functionality of this concept by the analysis of weakly interacting proteins from specialized metabolism in the model plant Arabidopsis thaliana. Our results illustrate that chemically induced dimerization can function as a built-in control for split-based systems that is easily implemented and allows for direct evaluation of functionality. PMID:27282591
Parker, R David; Regier, Michael; Brown, Zachary; Davis, Stephen
2015-02-01
Homelessness is a primary concern for community health. The scientific literature on homelessness is wide-ranging and diverse. One opportunity to add to the existing literature is the development and testing of affordable, easily implemented methods for measuring the impact of homelessness on the healthcare system. Such methodological approaches rely on the strengths of a multidisciplinary team, including healthcare and homeless-service providers and applied clinical researchers. This paper is a proof of concept for a methodology that is easily adaptable nationwide, given the mandated implementation of homeless management information systems in the United States and other countries, the medical billing systems used by hospitals, and standard research methods. Adaptation is independent of geographic region, budget constraints, specific agency skill sets, and many other factors that impact the application of a consistent, science-based methodological approach to assessing and addressing homelessness. We conducted a secondary data analysis merging homeless-service utilization data with hospital case-based data. These data detailed care utilization among homeless persons in a small Appalachian city in the United States. In our sample of 269 persons who received at least one hospital-based service and one homeless service between July 1, 2012 and June 30, 2013, the total billed costs were $5,979,463, with 10 people accounting for more than one-third ($1,957,469) of the total. Those persons were primarily men, living in an emergency shelter, with pre-existing disabling conditions. We theorize that targeted services, including Housing First, would be an effective intervention; this is proposed for a future study.
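A minimal sketch of the record linkage and cost aggregation such a merge involves is given below; all file and column names are hypothetical stand-ins, not those of the study's actual datasets.

```python
# Link homeless-service records to hospital billing records and rank clients
# by total billed cost. File and column names here are invented placeholders.
import pandas as pd

hmis = pd.read_csv("hmis_services.csv")      # client_id, shelter_type, ...
bills = pd.read_csv("hospital_billing.csv")  # client_id, encounter_cost, ...

merged = bills.merge(hmis, on="client_id", how="inner")

costs = (merged.groupby("client_id")["encounter_cost"]
               .sum()
               .sort_values(ascending=False))

top10 = costs.head(10)
print(f"Top 10 clients account for {top10.sum() / costs.sum():.1%} of costs")
```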
A modular method for evaluating the performance of picture archiving and communication systems.
Sanders, W H; Kant, L A; Kudrimoti, A
1993-08-01
Modeling can be used to predict the performance of picture archiving and communication system (PACS) configurations under various load conditions at an early design stage. This is important because choices made early in the design of a system can have a significant impact on the performance of the resulting implementation. Because PACS consist of many types of components, it is important to do such evaluations in a modular manner, so that alternative configurations and designs can be easily investigated. Stochastic activity networks (SANs) and reduced base model construction methods can aid in doing this. SANs are a model type particularly suited to the evaluation of systems in which several activities may be in progress concurrently, and each activity may affect the others through the results of its completion. Together with SANs, reduced base model construction methods provide a means to build highly modular models, in which models of particular components can be easily reused. In this article, we investigate the use of SANs and reduced base model construction techniques in evaluating PACS. Construction and solution of the models is done using UltraSAN, a graphic-oriented software tool for model specification, analysis, and simulation. The method is illustrated via the evaluation of a realistically sized PACS for a typical United States hospital of 300 to 400 beds, and the derivation of system response times and component utilizations.
The Influence of Large-Scale Computing on Aircraft Structural Design.
1986-04-01
...the customer in the most cost-effective manner. Computer facility organizations became computer resource power brokers. A good data processing ... capabilities generated on other processors can be easily used. This approach is easily implementable and provides a good strategy for using existing ...
Microprocessor utilization in search and rescue missions
NASA Technical Reports Server (NTRS)
Schwartz, M.
1977-01-01
The feasibility of performing the same task in real time using microprocessor technology was determined. The least-squares algorithm was implemented on an Intel 8080 microprocessor. Results indicated that a microprocessor could easily match the IBM implementation in accuracy and operate within the imposed time limits.
A computing method for spatial accessibility based on grid partition
NASA Astrophysics Data System (ADS)
Ma, Linbing; Zhang, Xinchang
2007-06-01
An accessibility computing method based on grid partition is put forward in this paper. Two important factors affecting traffic, the density of the road network and the relative spatial resistance of different land uses, are integrated into the traffic cost computed for each grid cell. The A* algorithm is adopted to search for the optimum traffic-cost path across grids, and a detailed description of the search process and of the heuristic evaluation function is given in the paper. The method can therefore be implemented simply, and its source data are easily obtained. Moreover, by changing the heuristic search information, more reasonable results can be computed. To confirm our research, a software package was developed in the C# language under the ArcEngine9 environment. Applying the computing method, a case study on the accessibility of business districts in Guangzhou city was carried out.
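A compact sketch of A* over a per-cell cost grid is shown below. The paper's heuristic carries richer traffic information; the Manhattan-distance stand-in used here is merely the simplest admissible choice when every cell costs at least one unit.

```python
# A* search over a grid of per-cell traffic costs. The paper derives each
# cell's cost from road-network density and land-use resistance; here the
# cost array is given, and the heuristic is plain Manhattan distance.
import heapq
import numpy as np

def a_star(cost, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible if cost >= 1
    best = {start: 0.0}                  # best known cost-to-reach
    frontier = [(h(start), start)]
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            return best[cur]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < cost.shape[0] and 0 <= nxt[1] < cost.shape[1]:
                g = best[cur] + cost[nxt]
                if g < best.get(nxt, np.inf):
                    best[nxt] = g
                    heapq.heappush(frontier, (g + h(nxt), nxt))
    return np.inf

grid = np.random.default_rng(1).integers(1, 10, size=(50, 50)).astype(float)
print(a_star(grid, (0, 0), (49, 49)))    # optimum traffic cost between corners
```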
A Walking Method for Non-Decomposition Intersection and Union of Arbitrary Polygons and Polyhedrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, M.; Yao, J.
We present a method for computing the intersection and union of non-convex polyhedrons without decomposition in O(n log n) time, where n is the total number of faces of both polyhedrons. We include an accompanying Python package which addresses many of the practical issues associated with implementation and serves as a proof of concept. The key to the method is that by considering the edges of the original objects and the intersections between faces as walking routes, we can efficiently find the boundary of the intersection of arbitrary objects using directional walks, thus handling the concave case in a natural manner. The method also easily extends to plane slicing and non-convex polyhedron unions, and both the polyhedron and its constituent faces may be non-convex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sader, John E., E-mail: jsader@unimelb.edu.au; Friend, James R.; Department of Mechanical and Aerospace Engineering, University of California-San Diego, La Jolla, California 92122
2014-11-15
A simplified method for calibrating atomic force microscope cantilevers was recently proposed by Sader et al. [Rev. Sci. Instrum. 83, 103705 (2012); Sec. III D] that relies solely on the resonant frequency and quality factor of the cantilever in fluid (typically air). This method eliminates the need to measure the hydrodynamic function of the cantilever, which can be time consuming given the wide range of cantilevers now available. Using laser Doppler vibrometry, we rigorously assess the accuracy of this method for a series of commercially available cantilevers and explore its performance under non-ideal conditions. This shows that the simplified method is highly accurate and can be easily implemented to perform fast, robust, and non-invasive spring constant calibration.
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon
1990-01-01
A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
An algorithm for solving a large class of two- and three-dimensional nonseparable elliptic partial differential equations (PDE's) is developed and tested. It uses a modified D'yakonov-Gunn iterative procedure in which the relaxation factor is grid-point dependent. It is easy to implement and applicable to a variety of boundary conditions. It is also computationally efficient, as indicated by the results of numerical comparisons with other established methods. Furthermore, the current algorithm has the advantage of possessing two important properties which the traditional iterative methods lack: (1) the convergence rate is relatively insensitive to grid-cell size and aspect ratio, and (2) the convergence rate can be easily estimated by using the coefficient of the PDE being solved.
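The modified D'yakonov-Gunn procedure itself is not reproduced here, but the idea of a grid-point-dependent relaxation factor can be illustrated with an ordinary SOR sweep for the 2-D Poisson equation in which omega varies across the grid; this is only a generic stand-in, not the paper's algorithm.

```python
# Generic illustration of a grid-point-dependent relaxation factor: SOR for
# the 2-D Poisson equation -lap(u) = f with a spatially varying omega(x, y).
# NOT the paper's modified D'yakonov-Gunn procedure.
import numpy as np

n = 64
h = 1.0 / (n + 1)
u = np.zeros((n + 2, n + 2))     # homogeneous Dirichlet boundary
f = np.ones((n + 2, n + 2))      # right-hand side

x = np.linspace(0.0, 1.0, n + 2)
omega = 1.5 + 0.4 * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # in (1.5, 1.9)

for sweep in range(500):
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            gs = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                         u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
            u[i, j] += omega[i, j] * (gs - u[i, j])   # over-relaxed update

print(u[n // 2, n // 2])  # center value of the converged solution (~0.0737)
```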
Cvetkovic, Dean
2013-01-01
The Cooperative Learning in Engineering Design curriculum can be enhanced with structured and timely self and peer assessment teaching methodologies which can easily be applied to any Biomedical Engineering curriculum. A study was designed and implemented to evaluate the effectiveness of this structured and timely self and peer assessment on student team-based projects. In comparing the 'peer-blind' and 'face-to-face' Fair Contribution Scoring (FCS) methods, both had advantages and disadvantages. The 'peer-blind' self and peer assessment method caused high discrepancy between self and team ratings. The 'face-to-face' method, on the other hand, did not have this discrepancy issue and proved to be more accurate and effective, indicating team cohesiveness and good cooperative learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torella, JP; Lienert, F; Boehm, CR
2014-08-07
Recombination-based DNA construction methods, such as Gibson assembly, have made it possible to easily and simultaneously assemble multiple DNA parts, and they hold promise for the development and optimization of metabolic pathways and functional genetic circuits. Over time, however, these pathways and circuits have become more complex, and the increasing need for standardization and insulation of genetic parts has resulted in sequence redundancies-for example, repeated terminator and insulator sequences-that complicate recombination-based assembly. We and others have recently developed DNA assembly methods, which we refer to collectively as unique nucleotide sequence (UNS)-guided assembly, in which individual DNA parts are flanked with UNSs to facilitate the ordered, recombination-based assembly of repetitive sequences. Here we present a detailed protocol for UNS-guided assembly that enables researchers to convert multiple DNA parts into sequenced, correctly assembled constructs, or into high-quality combinatorial libraries in only 2-3 d. If the DNA parts must be generated from scratch, an additional 2-5 d are necessary. This protocol requires no specialized equipment and can easily be implemented by a student with experience in basic cloning techniques.
From master slave interferometry to complex master slave interferometry: theoretical work
NASA Astrophysics Data System (ADS)
Rivet, Sylvain; Bradu, Adrian; Maria, Michael; Feuchter, Thomas; Leick, Lasse; Podoleanu, Adrian
2018-03-01
A general theoretical framework is described to obtain the advantages and the drawbacks of two novel Fourier Domain Optical Coherence Tomography (OCT) methods denoted as Master/Slave Interferometry (MSI) and its extension denoted as Complex Master/Slave Interferometry (CMSI). Instead of linearizing the digital data representing the channeled spectrum before a Fourier transform is applied to it (as in standard OCT methods), the channeled spectrum is decomposed on a basis of local oscillations. This replaces the need for linearization, which is generally time-consuming, before any calculation of the depth profile in the range of interest. In this model two functions, g and h, are introduced. The function g describes the modulation chirp of the channeled spectrum signal due to nonlinearities in the decoding process from wavenumber to time. The function h describes the dispersion in the interferometer. The utilization of these two functions brings two major improvements to previous implementations of the MSI method. The paper details the steps to obtain the functions g and h, and represents the CMSI in a matrix formulation that makes the method easy to implement in LabVIEW using multi-core parallel programming.
GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids
NASA Astrophysics Data System (ADS)
Hubber, D. A.; Rosotti, G. P.; Booth, R. A.
2018-01-01
GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OpenMP and MPI and contains a Python library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open source project on the code-hosting website GitHub at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.
Fung, Russell; Hyde, Jensen Hart; Davis, Mike
2018-01-01
The process of admitting patients from the emergency department (ED) to an academic internal medicine (AIM) service in a community teaching hospital is one fraught with variability and disorder. This results in an inconsistent volume of patients admitted to academic versus private hospitalist services and results in frustration of both ED and AIM clinicians. We postulated that implementation of a mobile application (app) would improve provider satisfaction and increase admissions to the academic service. The app was designed and implemented to be easily accessible to ED physicians, regularly updated by academic residents on call, and a real-time source of the number of open AIM admission spots. We found a significant improvement in ED and AIM provider satisfaction with the admission process. There was also a significant increase in admissions to the AIM service after implementation of the app. We submit that the implementation of a mobile app is a viable, cost-efficient, and effective method to streamline the admission process from the ED to AIM services at community-based hospitals.
NASA Astrophysics Data System (ADS)
Hapsari, T.; Darhim; Dahlan, J. A.
2018-05-01
This research discusses differentiated instruction: the kind of mathematics learning students expect in connection with differentiated instruction, its implementation, and the students' responses. The research employs a survey method involving 62 students as respondents. The types of mathematics learning required by the students and their responses to differentiated instruction were examined through a questionnaire and interviews. In order of frequency, the students asked for instruction that is easily understood, unhurried, fun, uncomplicated, interspersed with humour, rich in varied practice questions, not too serious, and delivered in a conducive classroom atmosphere. Implementing differentiated instruction is not easy: the teacher should be able to constantly assess the students, should have good knowledge of the relevant materials and instructions, and should prepare the instruction properly, although this is time-consuming. Differentiated instruction was implemented for numerical pattern materials. The strategies implemented were flexible grouping, tiered assignment, and compacting. The students responded positively to differentiated instruction, becoming more motivated and involved in the lessons.
Polski, J M; Kimzey, S; Percival, R W; Grosso, L E
1998-01-01
AIM: To provide a more efficient method for isolating DNA from peripheral blood for use in diagnostic DNA mutation analysis. METHODS: The use of blood impregnated filter paper and Chelex-100 in DNA isolation was evaluated and compared with standard DNA isolation techniques. RESULTS: In polymerase chain reaction (PCR) based assays of five point mutations, identical results were obtained with DNA isolated routinely from peripheral blood and isolated using the filter paper and Chelex-100 method. CONCLUSION: In the clinical setting, this method provides a useful alternative to conventional DNA isolation. It is easily implemented and inexpensive, and provides sufficient, stable DNA for multiple assays. The potential for specimen contamination is reduced because most of the steps are performed in a single microcentrifuge tube. In addition, this method provides for easy storage and transport of samples from the point of acquisition. PMID:9893748
NASA Astrophysics Data System (ADS)
Szafranko, Elżbieta
2017-10-01
Assessment of variant solutions developed for a building investment project needs to be made at the planning stage. While considering alternative solutions, the investor defines various criteria, but directly evaluating the degree to which the developed variants fulfil them can be very difficult. In practice, there are different methods which enable the user to include a large number of parameters in an analysis, but their implementation can be challenging. Some methods require advanced mathematical computations, preceded by complicated input-data processing, and the generated results may not lend themselves easily to interpretation. Hence, during her research, the author has developed a systemic approach, which involves several methods and whose goal is to compare their outcomes. The final stage of the proposed method consists of a graphic interpretation of the results. The method has been tested on a variety of building and development projects.
Quality management in in vivo proton MRS.
Pedrosa de Barros, Nuno; Slotboom, Johannes
2017-07-15
The quality of MR-Spectroscopy data can easily be affected in in vivo applications. Several factors may produce signal artefacts, and often these are not easily detected, not even by experienced spectroscopists. Reliable and reproducible in vivo MRS-data requires the definition of quality requirements and goals, implementation of measures to guarantee quality standards, regular control of data quality, and a continuous search for quality improvement. The first part of this review includes a general introduction to different aspects of quality management in MRS. It is followed by the description of a series of tests and phantoms that can be used to assure the quality of the MR system. In the third part, several methods and strategies used for quality control of the spectroscopy data are presented. This review concludes with a reference to a few interesting techniques and aspects that may help to further improve the quality of in vivo MR-spectra. Copyright © 2017 Elsevier Inc. All rights reserved.
Towards a balanced performance measurement system in a public health care organization.
Yuen, Peter P; Ng, Artie W
2012-01-01
This article attempts to devise an integrated performance measurement framework to assess the Hong Kong Hospital Authority (HA) management system by harnessing previous performance measurement systems. An integrated evaluative framework based on the balanced scorecard (BSC) was developed and applied using the case study method and longitudinal data to evaluate the HA's performance management system. The authors unveil evolving HA performance indicators (PIs). Despite the HA staff's explicit quality emphasis, cost control remains the primary focus in their performance measurements. Research limitations/implications: Data used in this study are from secondary sources, disclosed mostly by HA staff. This study shows public sector staff often attach too much importance to cost control and easily measurable activities at the expense of quality and other less easily measurable attributes. A balanced performance measurement system, linked to health targets, with a complementary budgeting process that supports pertinent resource allocation is yet to be implemented in Hong Kong's public hospitals.
A Markovian Entropy Measure for the Analysis of Calcium Activity Time Series
Rahman, Atiqur; Odorizzi, Laura; LeFew, Michael C.; Golino, Caroline A.; Kemper, Peter; Saha, Margaret S.
2016-01-01
Methods to analyze the dynamics of calcium activity often rely on visually distinguishable features in time series data such as spikes, waves, or oscillations. However, systems such as the developing nervous system display a complex, irregular type of calcium activity which makes the use of such methods less appropriate. Instead, for such systems there exists a class of methods (including information theoretic, power spectral, and fractal analysis approaches) which use more fundamental properties of the time series to analyze the observed calcium dynamics. We present a new analysis method in this class, the Markovian Entropy measure, an easily implemented method that represents the observed calcium activity as a realization of a Markov Process and describes its dynamics in terms of the level of predictability underlying the transitions between the states of the process. We applied our measure and other commonly used calcium analysis methods to a dataset from Xenopus laevis neural progenitors which displays irregular calcium activity and a dataset from murine synaptic neurons which displays activity time series that are well-described by visually distinguishable features. We find that the Markovian Entropy measure is able to distinguish between biologically distinct populations in both datasets, and that it can separate biologically distinct populations to a greater extent than other methods in the dataset exhibiting irregular calcium activity. These results support the benefit of using the Markovian Entropy measure to analyze calcium dynamics, particularly for studies using time series data which do not exhibit easily distinguishable features. PMID:27977764
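A minimal sketch of the measure's core computation, as we read it, appears below: discretize the trace into states, estimate the first-order transition matrix, and compute the entropy rate. The paper's exact binning and normalization may differ from this stand-in.

```python
# Core of a Markovian entropy measure: discretize a calcium trace into
# states, estimate the transition matrix, and compute the entropy rate.
import numpy as np

def markovian_entropy(trace, n_states=8):
    # Discretize fluorescence values into equal-width amplitude bins.
    edges = np.linspace(trace.min(), trace.max(), n_states + 1)
    states = np.clip(np.digitize(trace, edges) - 1, 0, n_states - 1)

    # Count state-to-state transitions.
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1

    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    pi = counts.sum(axis=1) / counts.sum()   # empirical state frequencies

    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(pi[:, None] * P * logP)   # bits per transition

rng = np.random.default_rng(2)
predictable = np.sin(np.linspace(0, 20 * np.pi, 2000))   # smooth oscillation
irregular = rng.normal(size=2000)                        # white-noise trace
print(markovian_entropy(predictable), markovian_entropy(irregular))
```

Lower values indicate more predictable transitions, so the smooth oscillation should score well below the white-noise trace.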
NASA Astrophysics Data System (ADS)
Drabik, Timothy J.; Lee, Sing H.
1986-11-01
The intrinsic parallelism characteristics of easily realizable optical SIMD arrays prompt their present consideration in the implementation of highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed or floating-point arithmetic may be used. A performance projection of more than 1 billion floating point operations/sec using SLMs with 1000 x 1000-resolution and operating at 1-MHz frame rates is made.
NASA Technical Reports Server (NTRS)
Martin, Gary L.; Baugher, Charles R.; Delombard, Richard
1990-01-01
In order to define the acceleration requirements for future Shuttle and Space Station Freedom payloads, methods and hardware characterizing accelerations on microgravity experiment carriers are discussed. The different aspects of the acceleration environment and the acceptable disturbance levels are identified. The space acceleration measurement system features an adjustable bandwidth, wide dynamic range, data storage, and ability to be easily reconfigured and is expected to fly on the Spacelab Life Sciences-1. The acceleration characterization and analysis project describes the Shuttle acceleration environment and disturbance mechanisms, and facilitates the implementation of the microgravity research program.
Annotare—a tool for annotating high-throughput biomedical investigations and resulting data
Shankar, Ravi; Parkinson, Helen; Burdett, Tony; Hastings, Emma; Liu, Junmin; Miller, Michael; Srinivasa, Rashmi; White, Joseph; Brazma, Alvis; Sherlock, Gavin; Stoeckert, Christian J.; Ball, Catherine A.
2010-01-01
Summary: Computational methods in molecular biology will increasingly depend on standards-based annotations that describe biological experiments in an unambiguous manner. Annotare is a software tool that enables biologists to easily annotate their high-throughput experiments, biomaterials and data in a standards-compliant way that facilitates meaningful search and analysis. Availability and Implementation: Annotare is available from http://code.google.com/p/annotare/ under the terms of the open-source MIT License (http://www.opensource.org/licenses/mit-license.php). It has been tested on both Mac and Windows. Contact: rshankar@stanford.edu PMID:20733062
Report on Non-invasive acoustic monitoring of D2O concentration Oct 31 2017
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pantea, Cristian; Sinha, Dipen N.; Lakis, Rollin Evan
There is an urgent need for real-time monitoring of the hydrogen/deuterium (H/D) ratio in heavy water production. Based upon published literature, sound speed is sensitive to the deuterium content of heavy water and can be measured using existing acoustic methods to determine the deuterium concentration in heavy water solutions. We plan to adapt existing non-invasive acoustic techniques (Swept-Frequency Acoustic Interferometry and the Gaussian-pulse acoustic technique) for the purpose of quantifying H/D ratios in solution. A successful demonstration will provide an easily implemented, low cost, and non-invasive method for remote and unattended H/D ratio measurements with a resolution of less than 0.2% vol.
Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference
NASA Astrophysics Data System (ADS)
Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun
2018-06-01
Multi-image iterative phase retrieval methods have been successfully applied in plenty of research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or a priori knowledge. It eliminates the need to measure the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.
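The reference-aided algorithm itself is specific to the paper, but the family it extends is easy to demonstrate; below is the classic two-plane Gerchberg-Saxton loop, offered only as a baseline sketch of iterative amplitude-phase retrieval.

```python
# Baseline Gerchberg-Saxton iteration between an object plane and its
# Fourier plane -- the textbook ancestor of multi-image amplitude-phase
# retrieval, not the paper's reference-aided algorithm.
import numpy as np

rng = np.random.default_rng(3)
shape = (128, 128)
obj_amp = rng.random(shape)                        # "measured" object amplitude
hidden = np.exp(1j * 2 * np.pi * rng.random(shape))
four_amp = np.abs(np.fft.fft2(obj_amp * hidden))   # "measured" Fourier amplitude

field = obj_amp.astype(complex)                    # start with flat phase
for _ in range(200):
    F = np.fft.fft2(field)
    F = four_amp * np.exp(1j * np.angle(F))        # enforce Fourier amplitude
    field = np.fft.ifft2(F)
    field = obj_amp * np.exp(1j * np.angle(field)) # enforce object amplitude

err = np.linalg.norm(np.abs(np.fft.fft2(field)) - four_amp) / np.linalg.norm(four_amp)
print(f"relative Fourier-amplitude error after 200 iterations: {err:.3e}")
```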
Towards Photo Watercolorization with Artistic Verisimilitude.
Wang, Miaoyi; Wang, Bin; Fei, Yun; Qian, Kanglai; Wang, Wenping; Chen, Jiating; Yong, Jun-Hai
2014-10-01
We present a novel artistic-verisimilitude driven system for watercolor rendering of images and photos. Our system achieves realistic simulation of a set of important characteristics of watercolor paintings that have not been well implemented before. Specifically, we designed several image filters to achieve: 1) watercolor-specified color transferring; 2) saliency-based level-of-detail drawing; 3) hand tremor effect due to human neural noise; and 4) an artistically controlled wet-in-wet effect in the border regions of different wet pigments. A user study indicates that our method can produce watercolor results of artistic verisimilitude better than previous filter-based or physical-based methods. Furthermore, our algorithm is efficient and can easily be parallelized, making it suitable for interactive image watercolorization.
NASA Technical Reports Server (NTRS)
Keymeulen, Didier; Ferguson, Michael I.; Fink, Wolfgang; Oks, Boris; Peay, Chris; Terrile, Richard; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David
2005-01-01
We propose a tuning method for MEMS gyroscopes based on evolutionary computation to efficiently increase the sensitivity of MEMS gyroscopes through tuning. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation. We also report on the development of a hardware platform for integrated tuning and closed loop operation of MEMS gyroscopes. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). The hardware platform easily transitions to an embedded solution that allows for the miniaturization of the system to a single chip.
Automatic generation of computable implementation guides from clinical information models.
Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat
2015-06-01
Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides are typically oriented toward human readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the large gap between the two representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand them easily and, at the same time, computers can process them. In this paper, we propose and describe a novel methodology that uses archetypes as the basis for the generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides, such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Luk, B. L.; Liu, K. P.; Tong, F.; Man, K. F.
2010-05-01
The impact-acoustics method utilizes the different information contained in the acoustic signals generated by tapping a structure with a small metal object. It offers a convenient and cost-efficient way to inspect tile-wall bonding integrity. However, surface irregularities cause abnormal multiple bounces in practical inspections, and the spectral characteristics of those bounces can easily be confused with the signals obtained from different bonding qualities. As a result, classic frequency-domain, feature-based classification methods deteriorate. Another crucial difficulty posed by the implementation is the additive noise present in practical environments, which may also cause feature mismatch and false judgments. To solve this problem, the work described in this paper aims to develop a robust inspection method that applies a model-based strategy, utilizing wavelet-domain features with hidden Markov modeling. It derives a bonding integrity recognition approach with enhanced immunity to surface roughness as well as to environmental noise. With the help of specially designed artificial sample slabs, experiments were carried out with impact acoustic signals contaminated by real environmental noise acquired under practical inspection conditions. The results are compared with those of the classic method to demonstrate the effectiveness of the proposed approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, K; Huang, T; Buttler, D
We present the C-Cat Wordnet package, an open source library for using and modifying Wordnet. The package includes four key features: an API for modifying Synsets; implementations of standard similarity metrics; implementations of well-known Word Sense Disambiguation algorithms; and an implementation of the Castanet algorithm. The library is easily extendible and usable in many runtime environments. We demonstrate its use on two standard Word Sense Disambiguation tasks and apply the Castanet algorithm to a corpus.
...performance on a low cost, low size, weight, and power (SWAP) computer: a Raspberry Pi Model B. For a comparison of performance, a baseline implementation ... improvement factor of 2-3 compared to filtered backprojection. Execution on a single Raspberry Pi is too slow for real-time imaging. However, factorized ... backprojection is easily parallelized, and we include a discussion of parallel implementation across multiple Pis.
Applying reconfigurable hardware to the analysis of multispectral and hyperspectral imagery
NASA Astrophysics Data System (ADS)
Leeser, Miriam E.; Belanovic, Pavle; Estlick, Michael; Gokhale, Maya; Szymanski, John J.; Theiler, James P.
2002-01-01
Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10 channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is the hardware implementation worked for one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with different number of spectral channels, bits per spectral channel, or number of pixels; or to change the number of clusters. These changes required knowledge of the hardware design process and could take several days of a designer's time. Since multispectral data sets come in many shapes and sizes, being able to easily change the k-means implementation for these different data sets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation. It has the added advantage that the parameterized design compiles approximately three times faster than the original.
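In software the same parameterization is trivial, which is what makes the hardware result notable; the sketch below is a plain numpy k-means whose pixel, channel, and cluster counts are ordinary parameters (bits per channel, the fourth hardware parameter, has no direct software analogue here).

```python
# Software analogue of the parameterized design: k-means clustering of
# multispectral pixels with the pixel, channel, and cluster counts as
# ordinary parameters.
import numpy as np

def kmeans(pixels, n_clusters, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers, keeping the old center if a cluster empties.
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels, centers

n_pixels, n_channels = 10_000, 10    # e.g. a 10-channel multispectral image
img = np.random.default_rng(4).random((n_pixels, n_channels))
labels, centers = kmeans(img, n_clusters=8)
print(np.bincount(labels))           # pixels assigned to each cluster
```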
NASA Astrophysics Data System (ADS)
Gassmöller, Rene; Bangerth, Wolfgang
2016-04-01
Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a modern advection-field approach, and demonstrate under which conditions which method is more efficient. We implemented the presented methods in ASPECT (aspect.dealii.org), a freely available open-source community code for geodynamic simulations. The structure of the particle code is highly modular, and segregated from the PDE solver, and can thus be easily transferred to other programs, or adapted for various application cases.
Preservice Teachers' Observations of Children's Learning during Family Math Night
ERIC Educational Resources Information Center
Kurz, Terri L.; Kokic, Ivana Batarelo
2011-01-01
Family math night can easily be implemented into mathematics methodology courses providing an opportunity for field-based learning. Preservice teachers were asked to develop and implement an inquiry-based activity at a family math night event held at a local school with personnel, elementary children and their parents in attendance. This action…
A Laplacian based image filtering using switching noise detector.
Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar
2015-01-01
This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, well known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to adjusting each pixel's value by its Laplacian weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise, and with the state-of-the-art BM3D method on several images. The algorithm is easy, fast and comparable with many classic denoising algorithms for Gaussian noise.
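A minimal numpy rendering of the iteration, as the abstract describes it, is given below; the crude weight used here is only a stand-in for the paper's local noise estimator.

```python
# Sketch of the filtering loop: repeatedly adjust each pixel by its Laplacian,
# weighted by a crude local-noise estimate. The paper's estimator and
# weighting are more elaborate than this stand-in.
import numpy as np

def laplacian(u):
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def denoise(img, n_iter=30, strength=0.2):
    out = img.astype(float)
    for _ in range(n_iter):
        lap = laplacian(out)
        w = np.abs(lap) / (np.abs(lap).max() + 1e-12)  # crude noise weight
        out += strength * w * lap     # move noisy pixels toward neighbour mean
    return out

rng = np.random.default_rng(5)
row = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(64) / 64)
clean = np.tile(row, (64, 1))                             # smooth test image
noisy = clean + rng.normal(scale=0.1, size=clean.shape)   # add Gaussian noise
print(np.std(noisy - clean), np.std(denoise(noisy) - clean))
```

The error after filtering should drop below the 0.1 noise level, with smoothness controlled, as the abstract says, only by the iteration count.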
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
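The identity behind the method is easy to show in a toy setting; below, the "PQI" is just f(X) = exp(X) with control variate g(X) = X whose mean is known exactly, whereas the report applies the same identity to PDE-based transport simulations.

```python
# Generic control-variates sketch: estimate E[f(X)] using a correlated
# quantity g(X) with known mean. Toy scalar example, not the report's
# porous-media model.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=100_000)      # uncertain parameter ~ N(0, 1)
f = np.exp(x)                     # performance quantity of interest
g = x                             # control variate, E[g] = 0 known exactly

beta = np.cov(f, g)[0, 1] / np.var(g)        # (near-)optimal coefficient
cv_estimate = f.mean() - beta * (g.mean() - 0.0)

print(f"plain MC:        {f.mean():.5f}")
print(f"control variate: {cv_estimate:.5f} (exact E[exp(X)] = {np.exp(0.5):.5f})")
print(f"variance ratio:  {np.var(f - beta * g) / np.var(f):.3f}")
```

The variance ratio printed at the end is the factor by which the number of simulations could be reduced for the same accuracy, which is exactly the saving claimed above.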
Payne, Hannah L
2017-01-01
Eye movements provide insights about a wide range of brain functions, from sensorimotor integration to cognition; hence, the measurement of eye movements is an important tool in neuroscience research. We describe a method, based on magnetic sensing, for measuring eye movements in head-fixed and freely moving mice. A small magnet was surgically implanted on the eye, and changes in the magnet angle as the eye rotated were detected by a magnetic field sensor. Systematic testing demonstrated high resolution measurements of eye position of <0.1°. Magnetic eye tracking offers several advantages over the well-established eye coil and video-oculography methods. Most notably, it provides the first method for reliable, high-resolution measurement of eye movements in freely moving mice, revealing increased eye movements and altered binocular coordination compared to head-fixed mice. Overall, magnetic eye tracking provides a lightweight, inexpensive, easily implemented, and high-resolution method suitable for a wide range of applications. PMID:28872455
An Algorithm Using Twelve Properties of Antibiotics to Find the Recommended Antibiotics, as in CPGs
Tsopra, R.; Venot, A.; Duclos, C.
2014-01-01
Background: Clinical Decision Support Systems (CDSS) incorporating justifications, updating and adjustable recommendations can considerably improve the quality of healthcare. We propose a new approach to the design of CDSS for empiric antibiotic prescription, based on implementation of the deeper medical reasoning used by experts in the development of clinical practice guidelines (CPGs), to deduce the recommended antibiotics. Methods: We investigated two methods (“exclusion” versus “scoring”) for reproducing this reasoning based on antibiotic properties. Results: The “exclusion” method reproduced expert reasoning more accurately, retrieving the full list of recommended antibiotics for almost all clinical situations. Discussion: This approach has several advantages: (i) it provides convincing explanations for physicians; (ii) updating could easily be incorporated into the CDSS; (iii) it can provide recommendations for clinical situations missing from CPGs. PMID:25954422
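A toy rendering of the "exclusion" style of reasoning is sketched below: strike every antibiotic whose properties violate the clinical situation's requirements and recommend whatever survives. The real system uses twelve CPG-derived properties; the few shown here, and their values, are invented.

```python
# Toy "exclusion" method: filter a candidate antibiotic list by required
# properties. Property names and values are invented for illustration.
antibiotics = [
    {"name": "drug_A", "oral": True,  "pregnancy_safe": True},
    {"name": "drug_B", "oral": True,  "pregnancy_safe": False},
    {"name": "drug_C", "oral": False, "pregnancy_safe": True},
]

def recommend(candidates, situation):
    kept = []
    for ab in candidates:
        if situation.get("pregnant") and not ab["pregnancy_safe"]:
            continue              # excluded: unsafe in pregnancy
        if situation.get("outpatient") and not ab["oral"]:
            continue              # excluded: no oral form available
        kept.append(ab["name"])
    return kept

print(recommend(antibiotics, {"pregnant": True, "outpatient": True}))
# -> ['drug_A'], the only candidate surviving every exclusion rule
```

Because each exclusion carries its own reason, the surviving list comes with exactly the kind of convincing, updatable explanation the abstract credits to this method.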
NASA Astrophysics Data System (ADS)
Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.
2005-02-01
A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases: the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels, and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and is relatively easy to implement.
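As a rough illustration of the fitting step only, the sketch below trains a small feed-forward network with early stopping and L2 regularization on synthetic "energies"; it stands in for, and is far simpler than, the NN/trajectory machinery of the paper. scikit-learn's MLPRegressor is used purely for convenience, and the data are fabricated:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 6))                    # sampled configurations
    y = np.sum(X**2, axis=1) + 0.01 * rng.normal(size=2000)   # stand-in PES values

    # early_stopping holds out a validation fraction and halts training when
    # the validation score stops improving; alpha adds L2 regularization,
    # mirroring the early-stopping/regularization procedures described above.
    net = MLPRegressor(hidden_layer_sizes=(40, 40), alpha=1e-4,
                       early_stopping=True, validation_fraction=0.1,
                       max_iter=2000, random_state=0).fit(X, y)
    print(net.score(X, y))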
Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.
Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang
2016-11-01
Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently, and with easily accessible tools, prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the explicit treatment of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
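The discrete multilocation, multitype search can be caricatured with a tiny elitist genetic loop over designs, where a design is a set of (location, type) pairs. The utility function below is a crude stand-in; in the paper it would be the data worth of the design obtained from linear uncertainty quantification:

    import numpy as np

    rng = np.random.default_rng(1)
    N_LOCATIONS, N_TYPES, N_SENSORS = 30, 3, 5

    def utility(design):
        """Stand-in score rewarding distinct locations (placeholder only)."""
        return len({loc for loc, _ in design}) + 0.1 * sum(t for _, t in design)

    def random_design():
        return [(rng.integers(N_LOCATIONS), rng.integers(N_TYPES))
                for _ in range(N_SENSORS)]

    def mutate(design):
        d = list(design)
        i = rng.integers(N_SENSORS)                      # re-draw one sensor
        d[i] = (rng.integers(N_LOCATIONS), rng.integers(N_TYPES))
        return d

    pop = [random_design() for _ in range(40)]
    for _ in range(100):
        pop.sort(key=utility, reverse=True)
        pop = pop[:20] + [mutate(p) for p in pop[:20]]   # elitism + mutation
    print(pop[0], utility(pop[0]))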
A data transmission method for particle physics experiments based on Ethernet physical layer
NASA Astrophysics Data System (ADS)
Huang, Xi-Ru; Cao, Ping; Zheng, Jia-Jun
2015-11-01
Due to its advantages of universality, flexibility and high performance, fast Ethernet is widely used in readout system design for modern particle physics experiments. However, Ethernet is usually used together with the TCP/IP protocol stack, which makes readout systems difficult to implement because designers have to use an operating system to process the protocol. Furthermore, TCP/IP degrades the transmission efficiency and real-time performance. To maximize the performance of Ethernet in physics experiment applications, a data readout method based on the physical layer (PHY) is proposed. In this method, TCP/IP is replaced with a customized and simple protocol, which makes the system easier to implement. On each readout module, data from the front-end electronics is first fed into an FPGA for protocol processing and then sent out to a PHY chip controlled by this FPGA for transmission. This data path is implemented entirely in hardware. On the side of the data acquisition system (DAQ), however, the absence of a standard protocol causes problems for network-related applications. To solve this problem, in the operating system kernel space, data received by the network interface card is redirected from the traditional flow to a specified memory space by a customized program. This memory space can easily be accessed by applications in user space. For the purpose of verification, a prototype system has been designed and implemented. Preliminary test results show that this method can meet the requirements of data transmission from the readout module to the DAQ in an efficient and simple manner. Supported by National Natural Science Foundation of China (11005107) and Independent Projects of State Key Laboratory of Particle Detection and Electronics (201301)
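On the DAQ side, reading frames below the TCP/IP stack can be sketched with a raw packet socket. The fragment below is a Linux-only illustration (AF_PACKET), not the kernel-space program of the paper; the interface name "eth0" and the ETH_P_ALL value 0x0003 are assumptions, and receiving all frames this way normally requires root privileges:

    import socket

    # Open a raw socket that delivers whole Ethernet frames, bypassing TCP/IP.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
    s.bind(("eth0", 0))

    frame = s.recv(2048)                       # one raw frame from the NIC
    dst, src, ethertype = frame[0:6], frame[6:12], frame[12:14]
    payload = frame[14:]                       # the custom readout protocol
    print(dst.hex(), src.hex(), ethertype.hex(), len(payload))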
The focus on sample quality: Influence of colon tissue collection on reliability of qPCR data
Korenkova, Vlasta; Slyskova, Jana; Novosadova, Vendula; Pizzamiglio, Sara; Langerova, Lucie; Bjorkman, Jens; Vycital, Ondrej; Liska, Vaclav; Levy, Miroslav; Veskrna, Karel; Vodicka, Pavel; Vodickova, Ludmila; Kubista, Mikael; Verderio, Paolo
2016-01-01
Successful molecular analyses of human solid tissues require intact biological material with well-preserved nucleic acids, proteins, and other cell structures. Pre-analytical handling, comprising the collection of material at the operating theatre, is among the first critical steps that influence sample quality. The aim of this study was to compare the experimental outcomes obtained from samples collected and stored by the conventional means of snap freezing and by the PAXgene Tissue System (Qiagen). These approaches were evaluated by measuring the rRNA and mRNA integrity of the samples (RNA Quality Indicator and Differential Amplification Method) and by gene expression profiling. The collection procedures were implemented in two hospitals during colon cancer surgery in order to identify the impact of the collection method on the experimental outcome. Our study shows that pre-analytical sample handling has a significant effect on the quality of RNA and on the variability of qPCR data. The PAXgene collection mode proved to be more easily implemented in the operating room, and moreover the quality of RNA obtained from human colon tissues by this method is superior to that obtained by snap freezing. PMID:27383461
AACSD: An atomistic analyzer for crystal structure and defects
NASA Astrophysics Data System (ADS)
Liu, Z. R.; Zhang, R. F.
2018-01-01
We have developed an efficient command-line program named AACSD (Atomistic Analyzer for Crystal Structure and Defects) for the post-analysis of atomic configurations generated by various atomistic simulation codes. The program implements not only the traditional filter methods, such as the excess potential energy (EPE), the centrosymmetry parameter (CSP), the common neighbor analysis (CNA), the common neighborhood parameter (CNP), the bond angle analysis (BAA), and the neighbor distance analysis (NDA), but also newly developed ones, including the modified centrosymmetry parameter (m-CSP), the orientation imaging map (OIM) and the local crystallographic orientation (LCO). The newly proposed OIM and LCO methods have been extended to all three common crystal structures: face centered cubic, body centered cubic and hexagonal close packed. More specifically, AACSD can easily be used for the atomistic analysis of metallic nanocomposites, with each phase analyzed independently, which provides a unique pathway to capture the dynamic evolution of various defects on the fly. In this paper, we provide not only a thorough overview of the various theoretical methods and their implementation in the AACSD program, but also critical evaluations, specific tests and applications demonstrating the capability of each functionality of the program.
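For one of the traditional filters listed above, the centrosymmetry parameter, a minimal sketch follows: CSP sums |r_i + r_j|^2 over (approximately) opposite pairs of neighbor vectors, vanishing in a perfect centrosymmetric lattice and growing near defects. The greedy pairing below is a simplification of the minimum-weight matching used in production analyzers:

    import numpy as np

    def centrosymmetry(neighbor_vectors):
        """CSP from the N vectors to an atom's nearest neighbors (N even,
        e.g. 12 for fcc). Greedily pairs each vector with the remaining
        vector that best cancels it."""
        vecs = [np.asarray(v, dtype=float) for v in neighbor_vectors]
        csp = 0.0
        while vecs:
            v = vecs.pop(0)
            j = min(range(len(vecs)), key=lambda k: np.sum((v + vecs[k])**2))
            csp += np.sum((v + vecs.pop(j))**2)
        return csp

    # Two exactly opposite neighbors cancel, so CSP == 0.
    print(centrosymmetry([[1, 0, 0], [-1, 0, 0]]))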
A Wigner-based ray-tracing method for imaging simulations
NASA Astrophysics Data System (ADS)
Mout, B. M.; Wick, M.; Bociort, F.; Urbach, H. P.
2015-09-01
The Wigner Distribution Function (WDF) forms an alternative representation of the optical field. It can be a valuable tool for understanding and classifying optical systems. Furthermore, it possesses properties that make it suitable for optical simulations: both the intensity and the angular spectrum can be easily obtained from the WDF and the WDF remains constant along the paths of paraxial geometrical rays. In this study we use these properties by implementing a numerical Wigner-Based Ray-Tracing method (WBRT) to simulate diffraction effects at apertures in free-space and in imaging systems. Both paraxial and non-paraxial systems are considered and the results are compared with numerical implementations of the Rayleigh-Sommerfeld and Fresnel diffraction integrals to investigate the limits of the applicability of this approach. The results of the different methods are in good agreement when simulating free-space diffraction or calculating point spread functions (PSFs) for aberration-free imaging systems, even at numerical apertures exceeding the paraxial regime. For imaging systems with aberrations, the PSFs of WBRT diverge from the results using diffraction integrals. For larger aberrations WBRT predicts negative intensities, suggesting that this model is unable to deal with aberrations.
NASA Astrophysics Data System (ADS)
Large, Nicolas; Cao, Yang; Manjavacas, Alejandro; Nordlander, Peter
2015-03-01
Electron energy-loss spectroscopy (EELS) is a unique tool that has been extensively used to investigate the plasmonic response of metallic nanostructures since the early works of the 1950s. To interpret and theoretically investigate EELS results, a myriad of numerical techniques has been developed for EELS simulations (BEM, DDA, FEM, GDTD, Green dyadic functions). Although these techniques are able to predict and reproduce experimental results, they possess significant drawbacks and are often limited to highly symmetrical geometries, non-penetrating trajectories, small nanostructures, and free-standing nanostructures. We present here a novel approach for EELS calculations using the finite-difference time-domain (FDTD) method: EELS-FDTD. We benchmark our approach by direct comparison with results from the well-established boundary element method (BEM) and published experimental results. In particular, we compute EELS spectra for spherical nanoparticles, nanoparticle dimers, nanodisks supported by various substrates, and gold bowtie antennas on a silicon nitride substrate. Our EELS-FDTD implementation can be easily extended to more complex geometries and configurations and can be directly implemented within other numerical methods. Work funded by the Welch Foundation (C-1222, L-C-004), and the NSF (CNS-0821727, OCI-0959097).
Frozen-Orbital and Downfolding Calculations with Auxiliary-Field Quantum Monte Carlo.
Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry
2013-11-12
We describe the implementation of the frozen-orbital and downfolding approximations in the auxiliary-field quantum Monte Carlo (AFQMC) method. These approaches can provide significant computational savings, compared to fully correlating all of the electrons. While the many-body wave function is never explicit in AFQMC, its random walkers are Slater determinants, whose orbitals may be expressed in terms of any one-particle orbital basis. It is therefore straightforward to partition the full N-particle Hilbert space into active and inactive parts to implement the frozen-orbital method. In the frozen-core approximation, for example, the core electrons can be eliminated in the correlated part of the calculations, greatly increasing the computational efficiency, especially for heavy atoms. Scalar relativistic effects are easily included using the Douglas-Kroll-Hess theory. Using this method, we obtain a way to effectively eliminate the error due to single-projector, norm-conserving pseudopotentials in AFQMC. We also illustrate a generalization of the frozen-orbital approach that downfolds high-energy basis states to a physically relevant low-energy sector, which allows a systematic approach to produce realistic model Hamiltonians to further increase efficiency for extended systems.
A novel Python program for implementation of quality control in the ELISA.
Wetzel, Hanna N; Cohen, Cinder; Norman, Andrew B; Webster, Rose P
2017-09-01
The use of semi-quantitative assays such as the enzyme-linked immunosorbent assay (ELISA) requires stringent quality control of the data. However, such quality control is often lacking in academic settings due to unavailability of software and knowledge. Therefore, our aim was to develop methods to easily implement Levey-Jennings quality control methods. For this purpose, we created a program written in Python (a programming language with an open-source license) and tested it using a training set of ELISA standard curves quantifying the Fab fragment of an anti-cocaine monoclonal antibody in mouse blood. A colorimetric ELISA was developed using a goat anti-human anti-Fab capture method. Mouse blood samples spiked with the Fab fragment were tested against a standard curve of known concentrations of Fab fragment in buffer over a period of 133 days stored at 4°C, to assess the stability of the Fab fragment and to generate a test dataset for assessing the program. All standard curves were analyzed using our program to batch process the data and to generate Levey-Jennings control charts and statistics for the datasets. The program was able to identify values outside of two standard deviations, and this identification of outliers was consistent with the results of a two-way ANOVA. This program is freely available, which will help laboratories implement quality control methods, thus improving reproducibility within and between labs. We report here successful testing of the program with our training set and development of a method for quantification of the Fab fragment in mouse blood. Copyright © 2017 Elsevier B.V. All rights reserved.
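Since the paper's tool is itself written in Python, the core Levey-Jennings check is natural to sketch in the same language: flag control values beyond the +/-2 SD warning and +/-3 SD action limits. The function and argument names are illustrative, not the paper's actual API:

    import numpy as np

    def levey_jennings_flags(values, mean=None, sd=None):
        """Return (index, value, level) for points beyond 2 or 3 SD.
        mean/sd default to the sample's own statistics; in routine QC they
        would come from an established reference period."""
        values = np.asarray(values, dtype=float)
        mean = values.mean() if mean is None else mean
        sd = values.std(ddof=1) if sd is None else sd
        z = (values - mean) / sd
        return [(i, v, "3SD" if abs(zi) > 3 else "2SD")
                for i, (v, zi) in enumerate(zip(values, z)) if abs(zi) > 2]

    print(levey_jennings_flags([10.1, 9.9, 10.0, 10.2, 9.8, 13.5]))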
NASA Astrophysics Data System (ADS)
Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo
This paper deals with a digital control scheme for multiple paralleled high-frequency switching current amplifiers with four-quadrant choppers for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track high-precision current patterns in the gradient coils (GC), the proposed current amplifier cancels the switching current ripples in the GC against each other, and optimal switching gate pulse patterns are designed that are not affected by the large filter current ripple amplitude. The optimal control implementation and linear control theory have a natural affinity in GC current amplifiers and yield excellent characteristics. The digital control system can easily be realized with DSPs or microprocessors. Multiple microprocessors operating in parallel realize a GC current-pattern tracking amplifier with two or more paralleled stages under an optimal control design, and excellent results are presented for improving the image quality of MRI systems.
NASA Astrophysics Data System (ADS)
Powell, Gavin; Markham, Keith C.; Marshall, David
2000-06-01
This paper presents the results of an investigation leading into an implementation of FLIR and LADAR data simulation for use in a multi sensor data fusion automated target recognition system. At present the main areas of application are in military environments but systems can easily be adapted to other areas such as security applications, robotics and autonomous cars. Recent developments have been away from traditional sensor modeling and toward modeling of features that are external to the system, such as atmosphere and part occlusion, to create a more realistic and rounded system. We have implemented such techniques and introduced a means of inserting these models into a highly detailed scene model to provide a rich data set for later processing. From our study and implementation we are able to embed sensor model components into a commercial graphics and animation package, along with object and terrain models, which can be easily used to create a more realistic sequence of images.
Quasi‐steady centrifuge method for unsaturated hydraulic properties
Caputo, Maria C.; Nimmo, John R.
2005-01-01
We have developed the quasi‐steady centrifuge (QSC) method as a variation of the steady state centrifuge method that can be implemented simply and inexpensively with greater versatility in terms of sample size and other features. It achieves these advantages by somewhat relaxing the criterion for steadiness of flow through the sample. This compromise entails an increase in measurement uncertainty but to a degree that is tolerable in most applications. We have tested this new approach with an easily constructed apparatus to establish a quasi‐steady flow of water in unsaturated porous rock samples spinning in a centrifuge, obtaining measurements of unsaturated hydraulic conductivity and water retention that agree with results of other methods. The QSC method is adaptable to essentially any centrifuge suitable for hydrogeologic applications, over a wide range of sizes and operating speeds. The simplified apparatus and greater adaptability of this method expands the potential for exploring situations that are common in nature but have been the subject of few laboratory investigations.
High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster
NASA Astrophysics Data System (ADS)
Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku
2015-01-01
High performance computing of the Meshless Time Domain Method (MTDM) on multiple GPUs using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba is investigated. Generally, the finite difference time domain (FDTD) method is adopted for the numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to apply the method to problems posed on complex domains. On the other hand, MTDM can easily be adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance. To reduce the computation time, the communication time between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation procedure. The results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.
FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)
NASA Astrophysics Data System (ADS)
Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.
2011-04-01
A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with a high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Thus, discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU to central processing unit speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements, on an inexpensive desktop computer.
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improving the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed according to the migration time of the analyte. Owing to the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
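A minimal sketch of the adaptive-window idea in Python: each sample is averaged over its own window, and the window length grows with elapsed (migration) time, so early high-frequency peaks are smoothed gently and late low-frequency peaks more strongly. The linear window schedule below is an assumption; the published method derives it from the analyte migration velocity:

    import numpy as np

    def adaptive_moving_average(signal, window_sizes):
        """Average each sample over its own centered window (odd lengths)."""
        signal = np.asarray(signal, dtype=float)
        out = np.empty_like(signal)
        n = signal.size
        for i, w in enumerate(window_sizes):
            h = int(w) // 2
            out[i] = signal[max(0, i - h):min(n, i + h + 1)].mean()
        return out

    t = np.arange(1000)
    noisy = np.sin(t / 40.0) + 0.3 * np.random.default_rng(0).normal(size=1000)
    windows = 3 + (t // 100) * 2          # window grows with "migration time"
    smoothed = adaptive_moving_average(noisy, windows)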
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peloquin, R.A.; McKenzie, D.H.
1994-10-01
A compartmental model has been implemented on a microcomputer as an aid in the analysis of alternative solutions to a problem. The model, entitled Smolt Survival Simulator, simulates the survival of juvenile salmon during their downstream migration and passage of hydroelectric dams on the Columbia River. The model is designed to function in a workshop environment where resource managers and fisheries biologists can study alternative measures that may potentially increase juvenile anadromous fish survival during downriver migration. The potential application of the model has placed several requirements on the implementing software. It must be available for use in workshop settings. The software must be easy to use with minimal computer knowledge. Scenarios must be created and executed quickly and efficiently. Results must be immediately available. Software design emphasis was placed on the user interface because of these requirements. The discussion focuses on methods used in the development of the SSS software user interface. These methods should reduce user stress and allow thorough and easy parameter modification.
Fast and precise thermoregulation system in physiological brain slice experiment
NASA Astrophysics Data System (ADS)
Sheu, Y. H.; Young, M. S.
1995-12-01
We have developed a fast and precise thermoregulation system incorporated within a physiological experiment on a brain slice. The thermoregulation system is used to control the temperature of a recording chamber in which the brain slice is placed. It consists of a single-chip microcomputer, a set command module, a display module, and an FLC (fuzzy logic controller) module. A fuzzy control algorithm was developed and a fuzzy logic controller then designed to achieve fast, smooth thermostatic performance and provide precise temperature control with an accuracy of 0.1 °C, from room temperature through 42 °C (the experimental temperature range). The fuzzy logic controller is implemented by microcomputer software and related peripheral hardware circuits. Six operating modes of thermoregulation are offered with the system, and this can be further extended according to experimental needs. The test results of this study demonstrate that the fuzzy control method is easily implemented on a microcomputer and verify that this method provides a simple way to achieve fast and precise high-performance control of a nonlinear thermoregulation system in a physiological brain slice experiment.
Multipath analysis diffraction calculations
NASA Technical Reports Server (NTRS)
Statham, Richard B.
1996-01-01
This report describes extensions of the Kirchhoff diffraction equation to higher edge terms and discusses their suitability to model diffraction multipath effects of a small satellite structure. When receiving signals, at a satellite, from the Global Positioning System (GPS), reflected signals from the satellite structure result in multipath errors in the determination of the satellite position. Multipath error can be caused by diffraction of the reflected signals and a method of calculating this diffraction is required when using a facet model of the satellite. Several aspects of the Kirchhoff equation are discussed and numerical examples, in the near and far fields, are shown. The vector form of the extended Kirchhoff equation, by adding the Larmor-Tedone and Kottler edge terms, is given as a mathematical model in an appendix. The Kirchhoff equation was investigated as being easily implemented and of good accuracy in the basic form, especially in phase determination. The basic Kirchhoff can be extended for higher accuracy if desired. A brief discussion of the method of moments and the geometric theory of diffraction is included, but seems to offer no clear advantage in implementation over the Kirchhoff for facet models.
A low-complexity add-on score for protein remote homology search with COMER.
Margelevicius, Mindaugas
2018-06-15
Protein sequence alignment forms the basis for comparative modeling, the most reliable approach to protein structure prediction, among many other applications. Alignment between sequence families, or profile-profile alignment, represents one of the most, if not the most, sensitive means for homology detection but still necessitates improvement. We aim at improving the quality of profile-profile alignments and the sensitivity induced by them by refining profile-profile substitution scores. We have developed a new score that represents an additional component of profile-profile substitution scores. A comprehensive evaluation shows that the new add-on score statistically significantly improves both the sensitivity and the alignment quality of the COMER method. We discuss why the score leads to the improvement and its almost optimal computational complexity that makes it easily implementable in any profile-profile alignment method. An implementation of the add-on score in the open-source COMER software and data are available at https://sourceforge.net/projects/comer. The COMER software is also available on Github at https://github.com/minmarg/comer and as a Docker image (minmar/comer). Supplementary data are available at Bioinformatics online.
An Adaptive 6-DOF Tracking Method by Hybrid Sensing for Ultrasonic Endoscopes
Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin
2014-01-01
In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is also developed. We implement 6-DOF localization by sensing integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer and a magnetic angular rate and gravity (MARG) sensor array are attached at the end of the endoscope, and three symmetric cylindrical coils are placed around the patient's abdomen. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine the orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to obtain an adaptive convergence output. A simplified electro-magnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. The average position error is under 0.3 cm with reasonable settings, and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be kept below 3.5°. PMID:24915179
Hastedt, Martin; Krumbiegel, Franziska; Gapert, René; Tsokos, Michael; Hartwig, Sven
2013-09-01
Alcohol consumption during pregnancy is a widespread problem and can cause severe fetal damage. As the diagnosis of fetal alcohol syndrome is difficult, the implementation of a reliable marker for alcohol consumption during pregnancy into meconium drug screening programs would be invaluable. A previously published gas chromatography mass spectrometry method for the detection of fatty acid ethyl esters (FAEEs) as alcohol markers in meconium was optimized and newly validated for a sample size of 50 mg. This method was applied to 122 cases from a drug-using population. The meconium samples were also tested for common drugs of abuse. In 73 % of the cases, one or more drugs were found. Twenty percent of the samples tested positive for FAEEs at levels indicating significant alcohol exposure. Consequently, alcohol was found to be the third most frequently abused substance within the study group. This re-validated method provides an increase in testing sensitivity, is reliable and easily applicable as part of a drug screening program. It can be used as a non-invasive tool to detect high alcohol consumption in the last trimester of pregnancy. The introduction of FAEEs testing in meconium screening was found to be of particular use in a drug-using population.
Rodríguez, J; Premier, G C; Dinsdale, R; Guwy, A J
2009-01-01
Mathematical modelling in environmental biotechnology has been a traditionally difficult resource to access for researchers and students without programming expertise. The great degree of flexibility required from model implementation platforms to be suitable for research applications restricts their use to programming expert users. More user friendly software packages however do not normally incorporate the necessary flexibility for most research applications. This work presents a methodology based on Excel and Matlab-Simulink for both flexible and accessible implementation of mathematical models by researchers with and without programming expertise. The models are almost fully defined in an Excel file in which the names and values of the state variables and parameters are easily created. This information is automatically processed in Matlab to create the model structure and almost immediate model simulation, after only a minimum Matlab code definition, is possible. The framework proposed also provides programming expert researchers with a highly flexible and modifiable platform on which to base more complex model implementations. The method takes advantage of structural generalities in most mathematical models of environmental bioprocesses while enabling the integration of advanced elements (e.g. heuristic functions, correlations). The methodology has already been successfully used in a number of research studies.
SLAR image interpretation keys for geographic analysis
NASA Technical Reports Server (NTRS)
Coiner, J. C.
1972-01-01
A means for side-looking airborne radar (SLAR) imagery to become a more widely used data source in geoscience and agriculture is suggested by providing interpretation keys as an easily implemented interpretation model. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associate dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.
Mercante, Beniamina; Rangon, Claire-Marie
2018-01-01
Neuromodulation, thanks to intrinsic and extrinsic brain feedback loops, seems to be the best way to exploit brain plasticity for therapeutic purposes. In the past years, there has been tremendous advances in the field of non-pharmacological modulation of brain activity. This review of different neurostimulation techniques will focus on sites and mechanisms of both transcutaneous vagus and trigeminal nerve stimulation. These methods are scientifically validated non-invasive bottom-up brain modulation techniques, easily implemented from the outer ear. In the light of this, auricles could transpire to be the most affordable target for non-invasive manipulation of central nervous system functions. PMID:29361732
Sampling in the light of Wigner distribution.
Stern, Adrian; Javidi, Bahram
2004-03-01
We propose a new method for analysis of the sampling and reconstruction conditions of real and complex signals by use of the Wigner domain. It is shown that the Wigner domain may provide a better understanding of the sampling process than the traditional Fourier domain. For example, it explains how certain non-bandlimited complex functions can be sampled and perfectly reconstructed. On the basis of observations in the Wigner domain, we derive a generalization of the Nyquist sampling criterion. Using this criterion, we demonstrate simple preprocessing operations that can adapt a signal that does not fulfill the Nyquist sampling criterion. The preprocessing operations demonstrated can be easily implemented by optical means.
Contact planarization of ensemble nanowires
NASA Astrophysics Data System (ADS)
Chia, A. C. E.; LaPierre, R. R.
2011-06-01
The viability of four organic polymers (S1808, SC200, SU8 and Cyclotene) as filling materials to achieve planarization of ensemble nanowire arrays is reported. Analysis of the porosity, surface roughness and thermal stability of each filling material was performed. Sonication was used as an effective method to remove the tops of the nanowires (NWs) to achieve complete planarization. Ensemble nanowire devices were fully fabricated and I-V measurements confirmed that Cyclotene effectively planarizes the NWs while still serving the role as an insulating layer between the top and bottom contacts. These processes and analysis can be easily implemented into future characterization and fabrication of ensemble NWs for optoelectronic device applications.
Paxman, Rosemary; Stinson, Jake; Dejardin, Anna; McKendry, Rachel A.; Hoogenboom, Bart W.
2012-01-01
Micromechanical resonators provide a small-volume and potentially high-throughput method to determine the rheological properties of fluids. Here we explore the accuracy of measuring the mass density and viscosity of ethanol-water and glycerol-water model solutions, using a simple and easily implemented model to deduce the hydrodynamic effects on resonating cantilevers of various length-to-width aspect ratios. We then show that these measurements can be extended to determine the alcohol percentage of both model solutions and commercial beverages such as beer, wine and liquor. This demonstrates how micromechanical resonators can be used for quality control of everyday drinks. PMID:22778654
Using Spatial Correlations of SPDC Sources for Increasing the Signal to Noise Ratio in Images
NASA Astrophysics Data System (ADS)
Ruíz, A. I.; Caudillo, R.; Velázquez, V. M.; Barrios, E.
2017-05-01
We experimentally show that, by using the spatial correlations of photon pairs produced by Spontaneous Parametric Down-Conversion, it is possible to increase the signal-to-noise ratio in images of objects illuminated with those photons; in comparison, objects illuminated with light from a laser exhibit a lower ratio. Our simple experimental set-up was capable of producing an average improvement in signal-to-noise ratio of 11 dB for parametric down-converted light over laser light. This simple method can be easily implemented for obtaining high-contrast images of faint objects and for transmitting information with low noise.
W -Boson Production in Association with a Jet at Next-to-Next-to-Leading Order in Perturbative QCD
NASA Astrophysics Data System (ADS)
Boughezal, Radja; Focke, Christfried; Liu, Xiaohui; Petriello, Frank
2015-08-01
We present the complete calculation of W -boson production in association with a jet in hadronic collisions through next-to-next-to-leading order (NNLO) in perturbative QCD. To cancel infrared divergences, we discuss a new subtraction method that exploits the fact that the N -jettiness event-shape variable fully captures the singularity structure of QCD amplitudes with final-state partons. This method holds for processes with an arbitrary number of jets and is easily implemented into existing frameworks for higher-order calculations. We present initial phenomenological results for W +jet production at the LHC. The NNLO corrections are small and lead to a significantly reduced theoretical error, opening the door to precision measurements in the W +jet channel at the LHC.
Understanding valence-shell electron-pair repulsion (VSEPR) theory using origami molecular models
NASA Astrophysics Data System (ADS)
Endah Saraswati, Teguh; Saputro, Sulistyo; Ramli, Murni; Praseptiangga, Danar; Khasanah, Nurul; Marwati, Sri
2017-01-01
Valence-shell electron-pair repulsion (VSEPR) theory is conventionally used to predict molecular geometry. However, it is difficult to explore the full implications of this theory by simply drawing chemical structures. Here, we introduce origami modelling as a more accessible approach for exploration of the VSEPR theory. Our technique is simple, readily accessible and inexpensive compared with other sophisticated methods such as computer simulation or commercial three-dimensional modelling kits. This method can be implemented in chemistry education at both the high school and university levels. We discuss the example of a simple molecular structure prediction for ammonia (NH3). Using the origami model, both molecular shape and the scientific justification can be visualized easily. This ‘hands-on’ approach to building molecules will help promote understanding of VSEPR theory.
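The prediction step itself is a small lookup from (bonding pairs, lone pairs) to shape, which can be sketched in a few lines of Python; only a handful of common cases are included here:

    VSEPR = {
        (2, 0): "linear",             (3, 0): "trigonal planar",
        (2, 1): "bent",               (4, 0): "tetrahedral",
        (3, 1): "trigonal pyramidal", (2, 2): "bent",
    }

    # NH3: three N-H bonding pairs plus one lone pair on nitrogen.
    print(VSEPR[(3, 1)])   # -> trigonal pyramidal

The origami models make the same mapping tangible: each fold pattern realizes one entry of this table in three dimensions.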
Experimental method for testing diffraction properties of reflection waveguide holograms.
Xie, Yi; Kang, Ming-Wu; Wang, Bao-Ping
2014-07-01
Waveguide holograms' diffraction properties include peak wavelength and diffraction efficiency, which play an important role in determining their display performance. Based on the record and reconstruction theory of reflection waveguide holograms, a novel experimental method for testing diffraction properties is introduced and analyzed in this paper, which uses a plano-convex lens optically contacted to the surface of the substrate plate of the waveguide hologram, so that the diffracted light beam can be easily detected. Then an experiment is implemented. The designed reconstruction wavelength of the test sample is 530 nm, and its diffraction efficiency is 100%. The experimental results are a peak wavelength of 527.7 nm and a diffraction efficiency of 94.1%. It is shown that the tested value corresponds well with the designed value.
Mosely, Jackie A; Stokes, Peter; Parker, David; Dyer, Philip W; Messinis, Antonis M
2018-02-01
A novel method has been developed that enables chemical compounds to be transferred from an inert-atmosphere glove box into the atmospheric pressure ion source of a mass spectrometer whilst retaining a controlled chemical environment. This innovative method is simple and cheap to implement on some commercially available mass spectrometers. We have termed this approach inert atmospheric pressure solids analysis probe (iASAP) and demonstrate the benefit of this methodology for two air-/moisture-sensitive chemical compounds whose characterisation by mass spectrometry is now possible and easily achieved. The simplicity of the design means that moving between iASAP and standard ASAP is straightforward and quick, providing a highly flexible platform with rapid sample turnaround.
Self-interaction correction in multiple scattering theory: application to transition metal oxides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daene, Markus W; Lueders, Martin; Ernst, Arthur
2009-01-01
We apply the self-interaction corrected (SIC) local spin density (LSD) approximation to transition metal monoxides, implemented locally in multiple scattering theory within the Korringa-Kohn-Rostoker (KKR) band structure method. The calculated electronic structure, and in particular the magnetic moments and energy gaps, are discussed with reference to earlier SIC results obtained within the LMTO-ASA band structure method, which involves transformations between Bloch and Wannier representations to solve the eigenvalue problem and calculate the SIC charge and potential. Since the KKR method can easily be extended to treat disordered alloys by invoking the coherent potential approximation (CPA), in this paper we compare the CPA approach and supercell calculations to study the electronic structure of NiO with cation vacancies.
Molpher: a software framework for systematic chemical space exploration
2014-01-01
Background Chemical space is virtual space occupied by all chemically meaningful organic compounds. It is an important concept in contemporary chemoinformatics research, and its systematic exploration is vital to the discovery of either novel drugs or new tools for chemical biology. Results In this paper, we describe Molpher, an open-source framework for the systematic exploration of chemical space. Through a process we term ‘molecular morphing’, Molpher produces a path of structurally-related compounds. This path is generated by the iterative application of so-called ‘morphing operators’ that represent simple structural changes, such as the addition or removal of an atom or a bond. Molpher incorporates an optimized parallel exploration algorithm, compound logging and a two-dimensional visualization of the exploration process. Its feature set can be easily extended by implementing additional morphing operators, chemical fingerprints, similarity measures and visualization methods. Molpher not only offers an intuitive graphical user interface, but also can be run in batch mode. This enables users to easily incorporate molecular morphing into their existing drug discovery pipelines. Conclusions Molpher is an open-source software framework for the design of virtual chemical libraries focused on a particular mechanistic class of compounds. These libraries, represented by a morphing path and its surroundings, provide valuable starting data for future in silico and in vitro experiments. Molpher is highly extensible and can be easily incorporated into any existing computational drug design pipeline. PMID:24655571
NASA Astrophysics Data System (ADS)
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual iteration basis, but it converges more slowly than the Newton method. The Newton method converges faster, but it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method there requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can easily be approximated by differencing the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM; however, it involves the additional cost of taking such an approximation at each Krylov iteration. In this paper, we evaluate the efficiency and robustness of the three iteration methods (Picard, Newton, and Newton-Krylov) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
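The trade-off between the two classical iterations can be seen on a toy scalar problem standing in for a single cell of the discretized equation: Picard is cheap per step but converges linearly, while Newton needs the derivative but converges quadratically. (For the Jacobian-free idea, scipy.optimize.newton_krylov approximates Jacobian-vector products by differencing the residual, exactly the point made above.) A minimal sketch, with a fabricated model equation:

    import numpy as np

    def picard(g, u0, tol=1e-10, maxit=200):
        """Fixed-point (Picard) iteration u <- g(u)."""
        u = u0
        for k in range(1, maxit + 1):
            u_new = g(u)
            if abs(u_new - u) < tol:
                return u_new, k
            u = u_new
        return u, maxit

    def newton(f, df, u0, tol=1e-10, maxit=50):
        """Newton iteration: requires the derivative (Jacobian)."""
        u = u0
        for k in range(1, maxit + 1):
            du = -f(u) / df(u)
            u += du
            if abs(du) < tol:
                return u, k
        return u, maxit

    # Toy nonlinear balance u = exp(-u), a stand-in for one nonlinear cell.
    print(picard(lambda u: np.exp(-u), 1.0))                              # slow, linear
    print(newton(lambda u: u - np.exp(-u), lambda u: 1 + np.exp(-u), 1.0))  # fast, quadratic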
A technique for measuring petal gloss, with examples from the Namaqualand flora.
Whitney, Heather M; Rands, Sean A; Elton, Nick J; Ellis, Allan G
2012-01-01
The degree of floral gloss varies between species. However, little is known about this distinctive floral trait, even though it could be a key feature of floral biotic and abiotic interactions. One reason for the absence of knowledge is the lack of a simple, repeatable method of gloss measurement that can be used in the field to study floral gloss. A protocol is described for measuring gloss in petal samples collected in the field, using a glossmeter. Repeatability of the technique is assessed. We demonstrate a simple yet highly accurate and repeatable method that can easily be implemented in the field. We also highlight the huge variety of glossiness found within flowers and between species in a sample of spring-blooming flowers collected in Namaqualand, South Africa. We discuss the potential uses of this method and its applications for furthering studies in plant-pollinator interactions. We also discuss the potential functions of gloss in flowers.
Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leimkuhler, Benedict, E-mail: b.leimkuhler@ed.ac.uk; Shang, Xiaocheng, E-mail: x.shang@brown.edu
2016-11-01
We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
Code analysis of the fringe image plays a vital role in data acquisition for structured light systems, affecting the precision, computational speed and reliability of the measurement processing. Exploiting the self-normalizing characteristic, a fringe image processing method based on structured light is proposed. In this method, a series of projected patterns is used when detecting the fringe order of the image pixels. The structured light system geometry is presented, which consists of a white light projector and a digital camera; the former projects sinusoidal fringe patterns upon the object, and the latter acquires the fringe patterns that are deformed by the object's shape. Binary images with distinct white and black stripes can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on special binary codes over a wide field.
NASA Astrophysics Data System (ADS)
Zhang, Chaosheng
2010-05-01
Outliers in urban soil geochemical databases may point to potentially contaminated land. Several methodologies that can be easily implemented for the identification of global and spatial outliers were applied to Pb concentrations in urban soils of Galway City, Ireland. Because of the strongly skewed distribution, a Box-Cox transformation was performed prior to further analyses. The graphical methods of the histogram and the box-and-whisker plot were effective in identifying global outliers at the original scale of the dataset. Spatial outliers could be identified by a local indicator of spatial association (local Moran's I), by cross-validation of kriging, and by geographically weighted regression. The spatial locations of outliers were visualised using a geographical information system. The different methods gave generally consistent results, but differences existed. It is suggested that outliers identified by statistical methods should be confirmed and justified using scientific knowledge before they are properly dealt with.
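The global-outlier part of this workflow is easy to sketch in Python: Box-Cox transform the skewed concentrations, then apply box-plot fences, mirroring the histogram/box-and-whisker step above. The data here are synthetic; spatial outliers additionally require coordinates, e.g. local Moran's I as provided by the PySAL/esda package:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    pb = rng.lognormal(mean=3.5, sigma=0.5, size=500)   # skewed "Pb" values
    pb[:5] = [900.0, 850.0, 700.0, 1200.0, 1000.0]      # planted outliers

    transformed, lam = stats.boxcox(pb)                 # reduce the skew
    q1, q3 = np.percentile(transformed, [25, 75])
    iqr = q3 - q1
    fences = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)           # box-and-whisker rule
    outliers = np.where((transformed < fences[0]) | (transformed > fences[1]))[0]
    print(lam, outliers)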
Turbomachinery aeroelasticity at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Kaza, Krishna Rao V.
1989-01-01
The turbomachinery aeroelastic effort is focused on unstalled and stalled flutter, forced response, and whirl flutter of both single rotation and counter rotation propfans. It also includes forced response of the Space Shuttle Main Engine (SSME) turbopump blades. Because of certain unique features of propfans and the SSME turbopump blades, it is not possible to directly use the existing aeroelastic technology of conventional propellers, turbofans or helicopters. Therefore, reliable aeroelastic stability and response analysis methods for these propulsion systems must be developed. The development of these methods for propfans requires specific basic technology disciplines, such as 2-D and 3-D steady and unsteady aerodynamic theories in subsonic, transonic and supersonic flow regimes; modeling of composite blades; geometric nonlinear effects; and passive and active control of flutter and response. These methods are incorporated in a computer program, ASTROP. The program has flexibility such that new and future models in basic disciplines can be easily implemented.
[Shock shape representation of sinus heart rate based on cloud model].
Yin, Wenfeng; Zhao, Jie; Chen, Tiantian; Zhang, Junjian; Zhang, Chunyou; Li, Dapeng; An, Baijing
2014-04-01
This paper analyzes the trend of the sinus heart rate RR-interval sequence after a single ventricular premature beat and compares it with the two standard parameters, turbulence onset (TO) and turbulence slope (TS). After acquiring sinus-rhythm shock samples, we use a piecewise linearization method to extract their linear characteristics, and then describe the shock form in natural language through a cloud model. During acquisition, we use exponential smoothing to forecast the position where the QRS wave may appear, assisting QRS wave detection, and use a template to judge whether the current beat is in sinus rhythm. Signals from the MIT-BIH Arrhythmia Database were used to test the effectiveness of the algorithm in Matlab. The results show that our method correctly detects the changing trend of the sinus heart rate. The proposed method achieves real-time detection of sinus-rhythm shocks and is simple and easily implemented, making it effective as a supplementary method.
Shera, Christopher A.
2014-01-01
Parent and Allen [(2007). J. Acoust. Soc. Am. 122, 918–931] introduced the “method of lumens” to compute the plane-wave reflectance in a duct terminated with a nonuniform impedance. The method involves splitting the duct into multiple, fictitious subducts (lumens), solving for the reflectance in each subduct, and then combining the results. The method of lumens has considerable intuitive appeal and is easily implemented in the time domain. Previously applied only in a complex acoustical setting where proper evaluation is difficult (i.e., in a model of the ear canal and tympanic membrane), the method is tested here by using it to compute the reflectance from an area constriction in an infinite lossless duct considered in the long-wavelength limit. Neither the original formulation of the method—shown here to violate energy conservation except when the termination impedance is uniform—nor a reformulation consistent with basic physical constraints yields the correct solution to this textbook problem in acoustics. The results are generalized and the nature of the errors illuminated. PMID:25480060
Free-form surface design method for a collimator TIR lens.
Tsai, Chung-Yu
2016-04-01
A free-form (FF) surface design method is proposed for a general axial-symmetrical collimator system consisting of a light source and a total internal reflection lens with two coupled FF boundary surfaces. The profiles of the boundary surfaces are designed using a FF surface construction method such that each incident ray is directed (refracted and reflected) in such a way as to form a specified image pattern on the target plane. The light ray paths within the system are analyzed using an exact analytical model and a skew-ray tracing approach. In addition, the validity of the proposed FF design method is demonstrated by means of ZEMAX simulations. It is shown that the illumination distribution formed on the target plane is in good agreement with that specified by the user. The proposed surface construction method is mathematically straightforward and easily implemented in computer code. As such, it provides a useful tool for the design and analysis of general axial-symmetrical optical systems.
Analysis and compensation of synchronous measurement error for multi-channel laser interferometer
NASA Astrophysics Data System (ADS)
Du, Shengwu; Hu, Jinchun; Zhu, Yu; Hu, Chuxiong
2017-05-01
Dual-frequency laser interferometers have been widely used in precision motion systems as displacement sensors to achieve nanoscale positioning or synchronization accuracy. In a multi-channel laser interferometer synchronous measurement system, signal delays differ between channels, which causes asynchronous measurement and in turn leads to a measurement error known as synchronous measurement error (SME). Based on a signal delay analysis of the measurement system, this paper presents a multi-channel SME framework for synchronous measurement and establishes a model relating SME to motion velocity. Further, a real-time compensation method for SME is proposed. This method has been verified in a self-developed laser interferometer signal processing board (SPB). Experimental results show that, using this compensation method, the maximum SME between two measurement channels in the SPB is 1.1 nm at a motion velocity of 0.89 m/s. The method is more easily implemented and applied in engineering than approaches that directly measure ever smaller signal delays.
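The velocity-delay error model lends itself to a one-line compensation. A minimal sketch follows, assuming a hypothetical inter-channel delay chosen so the numbers echo the reported 1.1 nm at 0.89 m/s; it is not the SPB implementation.

```python
v = 0.89          # stage velocity, m/s
dt = 1.25e-9      # assumed inter-channel signal delay, s (illustrative)

sme = v * dt      # delay-induced position error between channels
print(f"SME ~ {sme * 1e9:.2f} nm")   # ~1.1 nm, matching the reported scale

def compensate(raw_position, velocity, delay):
    """Subtract the velocity-dependent error from a raw channel reading (metres)."""
    return raw_position - velocity * delay
```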
Efficient exploration of cosmology dependence in the EFT of LSS
Cataneo, Matteo; Foreman, Simon; Senatore, Leonardo
2017-04-18
The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. Finally, the ideas and codes we present may easily be extended for other applications or higher-precision results.
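Both ideas can be mimicked with a toy observable whose evaluation cost controls its precision. Everything below is a stand-in under stated assumptions, not the released EFTofLSS codes: the "observable" is param squared plus noise that shrinks with sample count.

```python
import numpy as np

def observable(param, n_samples, seed=1):
    """Stand-in for an expensive loop integral: error shrinks with sample count."""
    rng = np.random.default_rng(seed)
    return param**2 + 0.01 * rng.standard_normal() / np.sqrt(n_samples)

ref, new = 0.30, 0.31                       # reference and nearby "cosmology" parameter

expensive_ref = observable(ref, 10**8)      # computed once, high precision
cheap_diff = observable(new, 10**2, seed=2) - observable(ref, 10**2, seed=3)
idea1 = expensive_ref + cheap_diff          # idea 1: add a low-precision difference

idea2 = expensive_ref + 2 * ref * (new - ref)   # idea 2: first-order Taylor around ref
print(f"idea 1: {idea1:.4f}  idea 2: {idea2:.4f}  exact: {new**2:.4f}")
```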
SuBSENSE: a universal change detection method with local adaptive sensitivity.
St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert
2015-01-01
Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online.
Schwartz, Mathew; Dixon, Philippe C
2018-01-01
The conventional gait model (CGM) is a widely used biomechanical model which has been validated over many years. The CGM relies on retro-reflective markers placed along anatomical landmarks, a static calibration pose, and subject measurements as inputs for joint angle calculations. While past literature has shown the possible errors caused by improper marker placement, studies on the effects of inaccurate subject measurements are lacking. Moreover, as many laboratories rely on the commercial version of the CGM, released as the Plug-in Gait (Vicon Motion Systems Ltd, Oxford, UK), integrating improvements into the CGM code is not easily accomplished. This paper introduces a Python implementation for the CGM, referred to as pyCGM, which is an open-source, easily modifiable, cross-platform, and high-performance computational implementation. The aims of pyCGM are to (1) reproduce joint kinematic outputs from the Vicon CGM and (2) be implemented in a parallel approach to allow integration on a high performance computer. The aims of this paper are to (1) demonstrate that pyCGM can systematically and efficiently examine the effect of subject measurements on joint angles and (2) be updated to include new calculation methods suggested in the literature. The results show that the calculated joint angles from pyCGM agree with Vicon CGM outputs, with a maximum lower body joint angle difference of less than 10^-5 degrees. Through the hierarchical system, the ankle joint is the most vulnerable to subject measurement error. Leg length has the greatest effect on all joints as a percentage of measurement error. When compared to the errors previously found through inter-laboratory measurements, the impact of subject measurements is minimal, and researchers should rather focus on marker placement. Finally, we showed that code modifications can be performed to include improved hip, knee, and ankle joint centre estimations suggested in the existing literature. The pyCGM code is provided in open source format and available at https://github.com/cadop/pyCGM.
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
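Once the localized weights are assembled into a sparse matrix, the wavefront estimate is a single matrix-vector product. In the sketch below the reconstructor is a random sparse stand-in rather than an actual minimum-variance solution; only the computational pattern is illustrated.

```python
import numpy as np
import scipy.sparse as sp

n_phase, n_slopes = 1024, 2048
# each row would hold the weights of slope measurements in one point's neighborhood
R = sp.random(n_phase, n_slopes, density=0.01, random_state=1, format="csr")

slopes = np.random.default_rng(2).standard_normal(n_slopes)  # sensor measurements
wavefront = R @ slopes                     # one sparse matvec per frame
print(wavefront.shape)                     # (1024,)
```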
Refractive index measurements of single, spherical cells using digital holographic microscopy.
Schürmann, Mirjam; Scholze, Jana; Müller, Paul; Chan, Chii J; Ekpenyong, Andrew E; Chalut, Kevin J; Guck, Jochen
2015-01-01
In this chapter, we introduce digital holographic microscopy (DHM) as a marker-free method to determine the refractive index of single, spherical cells in suspension. The refractive index is a conclusive measure in a biological context. Cell conditions, such as differentiation or infection, are known to yield significant changes in the refractive index. Furthermore, the refractive index of biological tissue determines the way it interacts with light. Besides the biological relevance of this interaction in the retina, a lot of methods used in biology, including microscopy, rely on light-tissue or light-cell interactions. Hence, determining the refractive index of cells using DHM is valuable in many biological applications. This chapter covers the main topics that are important for the implementation of DHM: setup, sample preparation, and analysis. First, the optical setup is described in detail including notes and suggestions for the implementation. Following that, a protocol for the sample and measurement preparation is explained. In the analysis section, an algorithm for the determination of quantitative phase maps is described. Subsequently, all intermediate steps for the calculation of the refractive index of suspended cells are presented, exploiting their spherical shape. In the last section, a discussion of possible extensions to the setup, further measurement configurations, and additional analysis methods are given. Throughout this chapter, we describe a simple, robust, and thus easily reproducible implementation of DHM. The different possibilities for extensions show the diverse fields of application for this technique. Copyright © 2015 Elsevier Inc. All rights reserved.
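For a spherical cell the last analysis step reduces to a single relation: the unwrapped phase at the cell centre, the known radius, and the medium index give the cell's refractive index. A minimal sketch, with wavelength, medium index, radius, and phase value all assumed for illustration.

```python
import numpy as np

wavelength = 633e-9    # m, illumination wavelength (assumed)
n_medium = 1.337       # refractive index of the suspension medium (assumed)
radius = 8e-6          # m, cell radius from the fitted circular contour (assumed)
phi_center = 9.5       # rad, unwrapped phase at the cell centre (example value)

thickness = 2 * radius                 # sphere: optical path length t(0,0) = 2r
# phase delay phi = (2*pi/lambda) * (n_cell - n_medium) * t, solved for n_cell
n_cell = n_medium + phi_center * wavelength / (2 * np.pi * thickness)
print(f"n_cell = {n_cell:.4f}")
```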
Research and recommendations for a statewide sign retroreflectivity maintenance program.
DOT National Transportation Integrated Search
2012-04-01
This study evaluated TxDOT's current sign retroreflectivity maintenance practices, assessed their effectiveness, and recommended statewide sign retroreflectivity maintenance practices that could be easily and effectively implemented to ensure tha...
A New Equivalence Theory Method for Treating Doubly Heterogeneous Fuel - I. Theory
Williams, Mark L.; Lee, Deokjung; Choi, Sooyoung
2015-03-04
A new methodology has been developed to treat resonance self-shielding in doubly heterogeneous very high temperature gas-cooled reactor systems in which the fuel compact region of a reactor lattice consists of small fuel grains dispersed in a graphite matrix. This new method first homogenizes the fuel grain and matrix materials using an analytically derived disadvantage factor from a two-region problem with equivalence theory and intermediate resonance method. This disadvantage factor accounts for spatial self-shielding effects inside each grain within the framework of an infinite array of grains. Then the homogenized fuel compact is self-shielded using a Bondarenko method to account for interactions between the fuel compact regions in the fuel lattice. In the final form of the equations for actual implementations, the double-heterogeneity effects are accounted for by simply using a modified definition of a background cross section, which includes geometry parameters and cross sections for both the grain and fuel compact regions. With the new method, the doubly heterogeneous resonance self-shielding effect can be treated easily even with legacy codes programmed only for a singly heterogeneous system by simple modifications in the background cross section for resonance integral interpolations. This paper presents a detailed derivation of the new method and a sensitivity study of double-heterogeneity parameters introduced during the derivation. The implementation of the method and verification results for various test cases are presented in the companion paper.
NASA Astrophysics Data System (ADS)
Gabellani, S.; Silvestro, F.; Rudari, R.; Boni, G.
2008-12-01
Flood forecasting undergoes constant evolution, placing ever greater demands on the models used for hydrologic simulations. The advantages of developing distributed or semi-distributed models are now clear, and the importance of using continuous distributed modeling is emerging. A proper schematization of the infiltration process is vital to these types of models. Many popular infiltration schemes, reliable and easy to implement, are too simplistic for the development of continuous hydrologic models. On the other hand, the unavailability of detailed and descriptive information on soil properties often limits the implementation of complete infiltration schemes. In this work, a combination of the Soil Conservation Service Curve Number method (SCS-CN) and a method derived from the Horton equation is proposed in order to overcome the inherent limits of the two schemes. The SCS-CN method is easily applicable over large areas but has structural limitations. Horton-like methods have parameters that, though measurable at a point, are difficult to estimate reliably at catchment scale. The objective of this work is to overcome these limits with a calibration procedure that maintains the wide applicability of the SCS-CN method as well as the continuous description of the infiltration process given by a suitably modified Horton equation. The parameters of the modified Horton method are estimated using a formal analogy with the SCS-CN method under specific conditions. Some applications at catchment scale within a distributed model are presented.
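A Horton-type infiltration curve of the kind being calibrated is easy to state. The parameter values below are placeholders, not the calibrated SCS-CN-derived ones.

```python
import numpy as np

f0, fc, k = 60.0, 10.0, 2.0        # initial/final capacity (mm/h) and decay (1/h), assumed

def horton(t):
    """Infiltration capacity f(t) = fc + (f0 - fc) * exp(-k t)."""
    return fc + (f0 - fc) * np.exp(-k * t)

t = np.linspace(0.0, 3.0, 7)       # hours
print(np.round(horton(t), 2))      # decays from f0 toward fc
```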
BCILAB: a platform for brain-computer interface development
NASA Astrophysics Data System (ADS)
Kothe, Christian Andreas; Makeig, Scott
2013-10-01
Objective. The past two decades have seen dramatic progress in our ability to model brain signals recorded by electroencephalography, functional near-infrared spectroscopy, etc., and to derive real-time estimates of user cognitive state, response, or intent for a variety of purposes: to restore communication by the severely disabled, to effect brain-actuated control and, more recently, to augment human-computer interaction. Continuing these advances, largely achieved through increases in computational power and methods, requires software tools to streamline the creation, testing, evaluation and deployment of new data analysis methods. Approach. Here we present BCILAB, an open-source MATLAB-based toolbox built to address the need for the development and testing of brain-computer interface (BCI) methods by providing an organized collection of over 100 pre-implemented methods and method variants, an easily extensible framework for the rapid prototyping of new methods, and a highly automated framework for systematic testing and evaluation of new implementations. Main results. To validate and illustrate the use of the framework, we present two sample analyses of publicly available data sets from recent BCI competitions and from a rapid serial visual presentation task. We demonstrate the straightforward use of BCILAB to obtain results compatible with the current BCI literature. Significance. The aim of the BCILAB toolbox is to provide the BCI community a powerful toolkit for methods research and evaluation, thereby helping to accelerate the pace of innovation in the field, while complementing the existing spectrum of tools for real-time BCI experimentation, deployment and use.
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter
Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao
2015-01-01
As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and can even introduce errors when modeling the dynamic Allan variance. To solve these problems, a new nonlinear state-space model that directly models the stochastic errors was first established for inertial sensors. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable for estimating the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor. PMID:25625903
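For reference, here is the batch quantity being estimated online: a minimal (non-overlapping) Allan variance, shown for a synthetic white-noise-only gyro signal. The noise level and cluster sizes are illustrative.

```python
import numpy as np

def allan_variance(y, m):
    """Allan variance at cluster size m for rate samples y: 0.5 * <(ybar_k+1 - ybar_k)^2>."""
    n = y.size // m
    means = y[: n * m].reshape(n, m).mean(axis=1)    # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
gyro = 0.05 * rng.standard_normal(100_000)           # white-noise "gyro" output, deg/s
for m in (1, 10, 100, 1000):
    sigma = np.sqrt(allan_variance(gyro, m))
    print(f"tau = {m:5d} samples   sigma = {sigma:.5f}")  # slope -1/2 on a log-log plot
```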
Role of Gist and PHOG Features in Computer-Aided Diagnosis of Tuberculosis without Segmentation
Chauhan, Arun; Chauhan, Devesh; Rout, Chittaranjan
2014-01-01
Purpose: Effective diagnosis of tuberculosis (TB) relies on accurate interpretation of radiological patterns found in a chest radiograph (CXR). Lack of skilled radiologists and other resources, especially in developing countries, hinders its efficient diagnosis. Computer-aided diagnosis (CAD) methods provide a second opinion to radiologists on their findings and thereby assist in better diagnosis of cancer and other diseases including TB. However, existing CAD methods for TB are based on the extraction of textural features from manually or semi-automatically segmented CXRs. These methods are prone to errors and cannot be implemented in X-ray machines for automated classification.
Methods: Gabor, Gist, histogram of oriented gradients (HOG), and pyramid histogram of oriented gradients (PHOG) features extracted from the whole image can be implemented in existing X-ray machines to discriminate between TB and non-TB CXRs in an automated manner. Localized features were extracted for the above methods using various parameters, such as frequency range, blocks, and region of interest. The performance of these features was evaluated against textural features. Two digital CXR image datasets (8-bit DA and 14-bit DB) were used for evaluating the performance of these features.
Results: Gist (accuracy 94.2% for DA, 86.0% for DB) and PHOG (accuracy 92.3% for DA, 92.0% for DB) features provided better results for both datasets. These features were implemented to develop a MATLAB toolbox, TB-Xpredict, which is freely available for academic use at http://sourceforge.net/projects/tbxpredict/. This toolbox provides both automated training and prediction modules and does not require expertise in image processing for operation.
Conclusion: Since the features used in TB-Xpredict do not require segmentation, the toolbox can easily be implemented in X-ray machines. This toolbox can effectively be used for the mass screening of TB in high-burden areas with improved efficiency. PMID:25390291
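Whole-image gradient-histogram features of this family are readily computed. The sketch below (Python with scikit-image, rather than the paper's MATLAB toolbox) runs HOG on a random stand-in image; PHOG adds a spatial pyramid on top of such descriptors, and all parameters here are assumptions.

```python
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
cxr = rng.random((256, 256))            # stand-in for a normalized chest X-ray

features = hog(cxr, orientations=9, pixels_per_cell=(32, 32),
               cells_per_block=(2, 2), block_norm="L2-Hys")
print(features.shape)                   # one fixed-length vector per whole image
```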
Crowley, D Max; Coffman, Donna L; Feinberg, Mark E; Greenberg, Mark T; Spoth, Richard L
2014-04-01
Despite growing recognition of the important role implementation plays in successful prevention efforts, relatively little work has sought to demonstrate a causal relationship between implementation factors and participant outcomes. In turn, failure to explore the implementation-to-outcome link limits our understanding of the mechanisms essential to successful programming. This gap is partially due to the inability of current methodological procedures within prevention science to account for the multitude of confounders responsible for variation in implementation factors (i.e., selection bias). The current paper illustrates how propensity and marginal structural models can be used to improve causal inferences involving implementation factors not easily randomized (e.g., participant attendance). We first present analytic steps for simultaneously evaluating the impact of multiple implementation factors on prevention program outcome. Then, we demonstrate this approach for evaluating the impact of enrollment and attendance in a family program, over and above the impact of a school-based program, within PROSPER, a large-scale real-world prevention trial. Findings illustrate the capacity of this approach to successfully account for confounders that influence enrollment and attendance, thereby more accurately representing true causal relations. For instance, after accounting for selection bias, we observed a 5% reduction in the prevalence of 11th grade underage drinking for those who chose to receive a family program and school program compared to those who received only the school program. Further, we detected a 7% reduction in underage drinking for those with high attendance in the family program.
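A minimal propensity-score sketch on synthetic data: model the non-randomized factor (attendance) from confounders, then weight outcomes by inverse probability of the received condition. This is not the PROSPER analysis, and marginal structural models add time-varying machinery omitted here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))                       # observed confounders
attend = (X @ [0.8, -0.5, 0.3] + rng.standard_normal(2000) > 0).astype(int)
drink = (0.4 * X[:, 0] - 0.3 * attend + rng.standard_normal(2000) > 0.5).astype(int)

ps = LogisticRegression().fit(X, attend).predict_proba(X)[:, 1]  # propensity scores
w = np.where(attend == 1, 1 / ps, 1 / (1 - ps))                  # IPW weights

effect = (np.average(drink[attend == 1], weights=w[attend == 1])
          - np.average(drink[attend == 0], weights=w[attend == 0]))
print(f"weighted prevalence difference: {effect:.3f}")
```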
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espinosa, J. R.; Vega, C.; Sanz, E.
2014-10-07
The interfacial free energy between a crystal and a fluid, γ_cf, is a highly relevant parameter in phenomena such as wetting or crystal nucleation and growth. Due to the difficulty of measuring γ_cf experimentally, computer simulations are often used to study the crystal-fluid interface. Here, we present a novel simulation methodology for the calculation of γ_cf. The methodology consists in using a mold composed of potential energy wells to induce the formation of a crystal slab in the fluid at coexistence conditions. This induction is done along a reversible pathway along which the free energy difference between the initial and the final states is obtained by means of thermodynamic integration. The structure of the mold is given by that of the crystal lattice planes, which allows one to easily obtain the free energy for different crystal orientations. The method is validated by calculating γ_cf for previously studied systems, namely, the hard-sphere and the Lennard-Jones systems. Our results for the latter show that the method is accurate enough to deal with the anisotropy of γ_cf with respect to the crystal orientation. We also calculate γ_cf for a recently proposed continuous version of the hard-sphere potential and obtain the same γ_cf as for the pure hard-sphere system. The method can be implemented both in Monte Carlo and Molecular Dynamics. In fact, we show that it can be easily used in combination with the popular Molecular Dynamics package GROMACS.
ERIC Educational Resources Information Center
Tay, Lee Yong; Lim, Cher Ping; Lye, Sze Yee; Ng, Kay Joo; Lim, Siew Khiaw
2011-01-01
This paper analyses how an elementary-level future school in Singapore implements and uses various open-source online platforms, which are easily available online and could be implemented with minimal software cost, for the purpose of teaching and learning. Online platforms have the potential to facilitate students' engagement for independent and…
Simulation system architecture design for generic communications link
NASA Technical Reports Server (NTRS)
Tsang, Chit-Sang; Ratliff, Jim
1986-01-01
This paper presents a computer simulation system architecture design for generic digital communications systems. It considers the issues of an overall system architecture in order to achieve a user-friendly, efficient, and yet easily implementable simulation system. The system block diagram and its individual functional components are described in detail. Software implementation is discussed with the VAX/VMS operating system used as a target environment.
Modules based on the geochemical model PHREEQC for use in scripting and programming languages
Charlton, Scott R.; Parkhurst, David L.
2011-01-01
The geochemical model PHREEQC is capable of simulating a wide range of equilibrium reactions between water and minerals, ion exchangers, surface complexes, solid solutions, and gases. It also has a general kinetic formulation that allows modeling of nonequilibrium mineral dissolution and precipitation, microbial reactions, decomposition of organic compounds, and other kinetic reactions. To facilitate use of these reaction capabilities in scripting languages and other models, PHREEQC has been implemented in modules that easily interface with other software. A Microsoft COM (component object model) has been implemented, which allows PHREEQC to be used by any software that can interface with a COM server—for example, Excel®, Visual Basic®, Python, or MATLAB®. PHREEQC has been converted to a C++ class, which can be included in programs written in C++. The class also has been compiled in libraries for Linux and Windows that allow PHREEQC to be called from C++, C, and Fortran. A limited set of methods implements the full reaction capabilities of PHREEQC for each module. Input methods use strings or files to define reaction calculations in exactly the same formats used by PHREEQC. Output methods provide a table of user-selected model results, such as concentrations, activities, saturation indices, and densities. The PHREEQC module can add geochemical reaction capabilities to surface-water, groundwater, and watershed transport models. It is possible to store and manipulate solution compositions and reaction information for many cells within the module. In addition, the object-oriented nature of the PHREEQC modules simplifies implementation of parallel processing for reactive-transport models. The PHREEQC COM module may be used in scripting languages to fit parameters; to plot PHREEQC results for field, laboratory, or theoretical investigations; or to develop new models that include simple or complex geochemical calculations.
FERN - a Java framework for stochastic simulation and evaluation of reaction networks.
Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf
2008-08-29
Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications, or c) do not allow the user to monitor and intervene in the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real-time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
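FERN itself is Java; purely to illustrate the class of exact algorithms it implements, here is a minimal Gillespie direct-method simulation of a single first-order reaction A → B in Python.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 0.1                       # rate constant, 1/s
a, t, t_end = 1000, 0.0, 20.0 # initial A count, clock, horizon

while a > 0 and t < t_end:
    propensity = k * a
    t += rng.exponential(1.0 / propensity)   # exponential waiting time to next event
    a -= 1                                   # fire A -> B
print(f"t = {t:.2f} s, remaining A = {a}")
```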
Photorefractive optical fuzzy-logic processor based on grating degeneracy
NASA Astrophysics Data System (ADS)
Wu, Weishu; Yang, Changxi; Campbell, Scott; Yeh, Pochi
1995-04-01
A novel optical fuzzy-logic processor using light-induced gratings in photorefractive crystals is proposed and demonstrated. By exploiting grating degeneracy, one can easily implement parallel fuzzy-logic functions in disjunctive normal form.
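The logic computed optically is the standard min/max fuzzy algebra; a scalar Python sketch of a disjunctive-normal-form expression, with example membership values:

```python
def fuzzy_and(a, b):  return min(a, b)      # t-norm
def fuzzy_or(a, b):   return max(a, b)      # s-norm
def fuzzy_not(a):     return 1.0 - a        # complement

# disjunctive normal form: (a AND NOT b) OR (c AND d)
a, b, c, d = 0.7, 0.2, 0.4, 0.9
print(fuzzy_or(fuzzy_and(a, fuzzy_not(b)), fuzzy_and(c, d)))   # 0.7
```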
Skin friction drag reduction in turbulent flow using spanwise traveling surface waves
NASA Astrophysics Data System (ADS)
Musgrave, Patrick F.; Tarazaga, Pablo A.
2017-04-01
A major technological driver in current aircraft and other vehicles is the improvement of fuel efficiency. One way to increase the efficiency is to reduce the skin friction drag on these vehicles. This experimental study presents an active drag reduction technique which decreases the skin friction using spanwise traveling waves. A novel method is introduced for generating traveling waves which is low-profile, non-intrusive, and operates under various flow conditions. This wave generation method is discussed and the resulting traveling waves are presented. These waves are then tested in a low-speed wind tunnel to determine their drag reduction potential. To calculate the drag reduction, the momentum integral method is applied to turbulent boundary layer data collected using a pitot tube and traversing system. The skin friction coefficients are then calculated and the drag reduction determined. Preliminary results yielded a drag reduction of ≈5% for 244 Hz traveling waves. Thus, this novel wave generation method possesses the potential to yield an easily implementable, non-invasive drag reduction technology.
Complexity-Entropy Causality Plane as a Complexity Measure for Two-Dimensional Patterns
Ribeiro, Haroldo V.; Zunino, Luciano; Lenzi, Ervin K.; Santoro, Perseu A.; Mendes, Renio S.
2012-01-01
Complexity measures are essential to understand complex systems and there are numerous definitions to analyze one-dimensional data. However, extensions of these approaches to two or higher-dimensional data, such as images, are much less common. Here, we reduce this gap by applying the ideas of the permutation entropy combined with a relative entropic index. We build up a numerical procedure that can be easily implemented to evaluate the complexity of two or higher-dimensional patterns. We work out this method in different scenarios where numerical experiments and empirical data were taken into account. Specifically, we have applied the method to fractal landscapes generated numerically where we compare our measures with the Hurst exponent; liquid crystal textures where nematic-isotropic-nematic phase transitions were properly identified; 12 characteristic textures of liquid crystals where the different values show that the method can distinguish different phases; and Ising surfaces where our method identified the critical temperature and also proved to be stable. PMID:22916097
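A minimal version of the two-dimensional procedure, assuming 2×2 sliding patches: map each patch to the ordinal pattern of its four values and take the normalized Shannon entropy of the pattern histogram. (The paper pairs this entropy with a relative entropic index; only the entropy half is sketched.)

```python
import numpy as np
from math import factorial

def permutation_entropy_2d(img):
    """Normalized permutation entropy of 2x2 ordinal patterns in a 2-D array."""
    patterns = {}
    for i in range(img.shape[0] - 1):
        for j in range(img.shape[1] - 1):
            key = tuple(np.argsort(img[i:i+2, j:j+2].ravel(), kind="stable"))
            patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(4))   # in [0, 1]

rng = np.random.default_rng(0)
print(permutation_entropy_2d(rng.random((64, 64))))        # ~1 for white noise
```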
Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis. - Calif. Univ.
NASA Technical Reports Server (NTRS)
Thornton, C. L.
1976-01-01
An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive realtime filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
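The factorization itself is easy to state. Below is a plain dense sketch of P = UDU^T (just the decomposition, not the filter's measurement-update recursions).

```python
import numpy as np

def udu(P):
    """Factor a symmetric positive-definite P as U @ diag(d) @ U.T, U unit upper triangular."""
    n = P.shape[0]
    U, d = np.eye(n), np.zeros(n)
    for j in range(n - 1, -1, -1):                 # process columns last to first
        d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
    return U, d

A = np.random.default_rng(0).standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)                        # symmetric positive definite
U, d = udu(P)
print(np.allclose(U @ np.diag(d) @ U.T, P))        # True
```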
Case Mis-Conceptualization in Psychological Treatment: An Enduring Clinical Problem.
Ridley, Charles R; Jeffrey, Christina E; Roberson, Richard B
2017-04-01
Case conceptualization, an integral component of mental health treatment, aims to facilitate therapeutic gains by formulating a clear picture of a client's psychological presentation. However, despite numerous attempts to improve this clinical activity, it remains unclear how well existing methods achieve their purported purpose. Case formulation is inconsistently defined in the literature and implemented in practice, with many methods varying in complexity, theoretical grounding, and empirical support. In addition, many of the methods demand a precise clinical acumen that is easily influenced by judgmental and inferential errors. These errors occur regardless of clinicians' level of training or amount of clinical experience. Overall, the lack of a consensus definition, a diversity of methods, and susceptibility of clinicians to errors are manifestations of the state of crisis in case conceptualization. This article, the 2nd in a series of 5 on thematic mapping, argues the need for more reliable and valid models of case conceptualization. © 2017 Wiley Periodicals, Inc.
Multi-Agent Methods for the Configuration of Random Nanocomputers
NASA Technical Reports Server (NTRS)
Lawson, John W.
2004-01-01
As computational devices continue to shrink, the cost of manufacturing such devices is expected to grow exponentially. One alternative to the costly, detailed design and assembly of conventional computers is to place the nano-electronic components randomly on a chip. The price for such a trivial assembly process is that the resulting chip would not be programmable by conventional means. In this work, we show that such random nanocomputers can be adaptively programmed using multi-agent methods. This is accomplished through the optimization of an associated high dimensional error function. By representing each of the independent variables as a reinforcement learning agent, we are able to achieve convergence much faster than with other methods, including simulated annealing. Standard combinational logic circuits such as adders and multipliers are implemented in a straightforward manner. In addition, we show that the intrinsic flexibility of these adaptive methods allows the random computers to be reconfigured easily, making them reusable. Recovery from faults is also demonstrated.
Investigations of the pushability behavior of cardiovascular angiographic catheters.
Bloss, Peter; Rothe, Wolfgang; Wünsche, Peter; Werner, Christian; Rothe, Alexander; Kneissl, Georg Dieter; Burger, Wolfram; Rehberg, Elisabeth
2003-01-01
The placement of angiographic catheters into the vascular system is a routine procedure in modern clinical practice. Objective evaluation protocols, based on measurable physical quantities correlated with empirical clinical findings, are not yet available; their definition is of utmost importance to catheter manufacturers for in-house product screening and optimization. In this context, we present an assessment of multiple mechanical and surface catheter properties, such as static and kinetic friction, bending stiffness, microscopic surface topology, surface roughness, and surface free energy, and their interrelation. The theoretical framework, a description of the experimental methods, and extensive data measured on several different catheters are provided, and in conclusion a testing procedure is defined. Although this procedure is based on the measurement of several physical quantities, it can be easily implemented by commercial laboratories testing catheters as it relies on relatively low-cost standard methods.
Differential equation models for sharp threshold dynamics.
Schramm, Harrison C; Dimitrov, Nedialko B
2014-01-01
We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
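A deterministic toy of the malware example: integrate susceptible/infected dynamics until a detection threshold fires, then hand the state to a post-detection system (omitted here). The threshold below is sharp and deterministic, whereas the paper treats it probabilistically; all parameters are assumed.

```python
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1

def pre_detection(t, y):
    """Plain susceptible/infected dynamics before the detection event."""
    s, i = y
    return [-beta * s * i / (s + i), beta * s * i / (s + i) - gamma * i]

detect = lambda t, y: y[1] - 50.0          # fires when infected count crosses 50
detect.terminal, detect.direction = True, 1

sol = solve_ivp(pre_detection, (0, 200), [1000.0, 1.0], events=detect, max_step=0.5)
print(f"detection at t = {sol.t[-1]:.1f}, state = {sol.y[:, -1].round(1)}")
# post-detection, one would integrate a second system with a competing
# 'patched' class seeded from this detection-time state
```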
Adaptation of Decoy Fusion Strategy for Existing Multi-Stage Search Workflows
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Levitsky, Lev I.; Gorshkov, Mikhail V.
2016-09-01
A number of proteomic database search engines implement multi-stage strategies aiming at increasing the sensitivity of proteome analysis. These approaches often employ a subset of the original database for the secondary stage of analysis. However, if the target-decoy approach (TDA) is used for false discovery rate (FDR) estimation, multi-stage strategies may violate the underlying assumption of TDA that false matches are distributed uniformly across the target and decoy databases. This violation occurs if the numbers of target and decoy proteins selected for the second search are not equal. Here, we propose a method of decoy database generation based on the previously reported decoy fusion strategy. This method allows unbiased TDA-based FDR estimation in multi-stage searches and can be easily integrated into existing workflows utilizing popular search engines and post-search algorithms.
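The TDA estimate whose assumption the method protects is simple to state: with an equal-size decoy database, the FDR at a score threshold is approximately the decoy-to-target hit ratio. A sketch on synthetic scores:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.concatenate([rng.normal(3, 1, 800),    # true matches (high scores)
                         rng.normal(0, 1, 2000)])  # false matches
decoy = rng.normal(0, 1, 2000)                     # false matches only

def fdr(threshold):
    t, d = (target >= threshold).sum(), (decoy >= threshold).sum()
    return d / max(t, 1)           # valid only if false matches split evenly

for thr in (1.0, 2.0, 3.0):
    print(f"score >= {thr}: FDR ~ {fdr(thr):.3f}")
```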
Performance Assessment Method for a Forged Fingerprint Detection Algorithm
NASA Astrophysics Data System (ADS)
Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang
The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.
Ultra-high-resolution X-ray structure of proteins.
Lecomte, C; Guillot, B; Muzet, N; Pichon-Pesme, V; Jelsch, C
2004-04-01
The constant advances in synchrotron radiation sources and crystallogenesis methods and the impulse of structural genomics projects have brought biocrystallography to a context favorable to subatomic resolution protein and nucleic acid structures. Thus, as soon as such precision can be frequently obtained, the amount of information available in the precise electron density should also be easily and naturally exploited, similarly to the field of small molecule charge density studies. Indeed, the use of a nonspherical model for the atomic electron density in the refinement of subatomic resolution protein structures allows the experimental description of their electrostatic properties. Some methods we have developed and implemented in our multipolar refinement program MoPro for this purpose are presented. Examples of successful applications to several subatomic resolution protein structures, including the 0.66 angstrom resolution human aldose reductase, are described.
Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.
2014-01-01
Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations.
Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/
Contact: guy.hagen@lf1.cuni.cz
Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516
Decentralized stabilization of semi-active vibrating structures
NASA Astrophysics Data System (ADS)
Pisarski, Dominik
2018-02-01
A novel method of decentralized structural vibration control is presented. The control is assumed to be realized by a semi-active device. The objective is to stabilize a vibrating system with the optimal rates of decrease of the energy. The controller relies on an easily implemented decentralized switched state-feedback control law. It uses a set of communication channels to exchange the state information between the neighboring subcontrollers. The performance of the designed method is validated by means of numerical experiments performed for a double cantilever system equipped with a set of elastomers with controlled viscoelastic properties. In terms of the assumed objectives, the proposed control strategy significantly outperforms the passive damping cases and is competitive with a standard centralized control. The presented methodology can be applied to a class of bilinear control systems concerned with smart structural elements.
NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media
NASA Astrophysics Data System (ADS)
Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique
2017-08-01
NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
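The kernel that dominates the cost is the structure-factor sum. A NumPy sketch of S(q) = |Σ_j exp(iq·r_j)|²/N for a few wavevectors follows (the code in the paper evaluates the same sum in CUDA); box size and wavevectors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 20.0
r = rng.random((5000, 3)) * L                     # particle positions in an L^3 box
q = 2 * np.pi / L * np.array([[1, 0, 0], [2, 0, 0], [0, 3, 0]], float)

phases = np.exp(1j * (q @ r.T))                   # (n_q, N) complex phases
S = np.abs(phases.sum(axis=1)) ** 2 / r.shape[0]  # structure factor per wavevector
print(S)
```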
Monajjemzadeh, Farnaz; Shokri, Javad; Mohajel Nayebi, Ali Reza; Nemati, Mahboob; Azarmi, Yadollah; Charkhpour, Mohammad; Najafi, Moslem
2014-01-01
Purpose: This study was aimed to design Objective Structured Field Examination (OSFE) and also standardize the course plan of community pharmacy clerkship at Pharmacy Faculty of Tabriz University of Medical Sciences (Iran). Methods: The study was composed of several stages including; evaluation of the old program, standardization and implementation of the new course plan, design and implementation of OSFE, and finally results evaluation. Results: Lack of a fair final assessment protocol and proper organized educating system in various fields of community pharmacy clerkship skills were assigned as the main weaknesses of the old program. Educational priorities were determined and student’s feedback was assessed to design the new curriculum consisting of sessions to fulfill a 60-hour training course. More than 70% of the students were satisfied and successfulness and efficiency of the new clerkship program was significantly greater than the old program (P<0.05). In addition, they believed that OSFE was a suitable testing method. Conclusion: The defined course plan was successfully improved different skills of the students and OSFE was concluded as a proper performance based assessment method. This is easily adoptable by pharmacy faculties to improve the educational outcomes of the clerkship course. PMID:24511477
A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant
NASA Astrophysics Data System (ADS)
Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.
2018-04-01
A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
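The paper's interior-point formulation is not reproduced here, but its computational core, repeatedly solving sparse linear systems in the graph Laplacian, is easy to illustrate. A minimal sketch assuming SciPy and a toy chain graph (the per-iteration edge reweighting of the interior-point method is omitted):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def graph_laplacian(n, edges, weights):
    """Sparse Laplacian L = D - W of an undirected weighted graph."""
    i, j = np.asarray(edges).T
    w = np.asarray(weights, dtype=float)
    W = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    return sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W.tocsr()

n = 4
L = graph_laplacian(n, edges=[(0, 1), (1, 2), (2, 3)], weights=[1.0, 2.0, 1.0])
b = np.array([1.0, 0.0, 0.0, -1.0])        # source/sink-style right-hand side
# Conjugate gradients needs only sparse matrix-vector products, which is
# what makes this inner solve amenable to parallel implementations.
x, info = cg(L + 1e-8 * sp.eye(n), b)      # tiny shift: L alone is singular
```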
Packing Boxes into Multiple Containers Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Menghani, Deepak; Guha, Anirban
2016-07-01
Container loading problems have been studied extensively in the literature and various analytical, heuristic and metaheuristic methods have been proposed. This paper presents two different variants of a genetic algorithm framework for the three-dimensional container loading problem for optimally loading boxes into multiple containers with constraints. The algorithms are designed so that it is easy to incorporate various constraints found in real life problems. The algorithms are tested on data of standard test cases from literature and are found to compare well with the benchmark algorithms in terms of utilization of containers. This, along with the ability to easily incorporate a wide range of practical constraints, makes them attractive for implementation in real life scenarios.
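The abstract does not give the paper's exact encoding and operators, so the following is only a generic skeleton of the kind of permutation-based genetic algorithm commonly used for container loading: a chromosome is a packing order, and a user-supplied `fitness` callable decodes it with a placement heuristic, returning container utilization while enforcing the practical constraints. All names are illustrative.

```python
import random

def make_individual(n_boxes):
    # chromosome: a packing order of the boxes (decoded by a placement heuristic)
    order = list(range(n_boxes))
    random.shuffle(order)
    return order

def crossover(p1, p2):
    # order crossover (OX): keep a slice of p1, fill the rest in p2's order
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def mutate(ind, rate=0.1):
    if random.random() < rate:
        i, j = random.sample(range(len(ind)), 2)
        ind[i], ind[j] = ind[j], ind[i]

def evolve(fitness, n_boxes, pop_size=50, generations=200):
    pop = [make_individual(n_boxes) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # higher utilization is better
        elite = pop[: pop_size // 5]
        children = []
        while len(elite) + len(children) < pop_size:
            child = crossover(*random.sample(elite, 2))
            mutate(child)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Keeping constraint handling inside the decoder is what makes such frameworks easy to extend with the real-life constraints the paper emphasizes.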
NASA Astrophysics Data System (ADS)
Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi
2014-10-01
The ballistic missile hyperspectral data of an imaging spectrometer on a near-space platform are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The transverse counting algorithm has lower complexity and can be implemented more easily than the quantization coding algorithm. It also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.
PSYCHOACOUSTICS: a comprehensive MATLAB toolbox for auditory testing
Soranzo, Alessandro; Grassi, Massimo
2014-01-01
PSYCHOACOUSTICS is a new MATLAB toolbox which implements three classic adaptive procedures for auditory threshold estimation. The first includes those of the Staircase family (method of limits, simple up-down and transformed up-down); the second is the Parameter Estimation by Sequential Testing (PEST); and the third is the Maximum Likelihood Procedure (MLP). The toolbox comes with more than twenty built-in experiments each provided with the recommended (default) parameters. However, if desired, these parameters can be modified through an intuitive and user friendly graphical interface and stored for future use (no programming skills are required). Finally, PSYCHOACOUSTICS is very flexible as it comes with several signal generators and can be easily extended for any experiment. PMID:25101013
Hensman, James; Lawrence, Neil D; Rattray, Magnus
2013-08-20
Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
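A minimal sketch of the hierarchical construction described above, assuming squared-exponential kernels: each observation is modelled as a shared gene-level function plus a replicate-level departure, so the joint covariance adds the replicate kernel only between points from the same replicate. Names, data and hyperparameters are illustrative, not the authors' code.

```python
import numpy as np

def rbf(t1, t2, variance, lengthscale):
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def hierarchical_cov(times, replicate_ids, theta_g, theta_h):
    """Cov[y_i, y_j] = k_g(t_i, t_j) + [r_i == r_j] * k_h(t_i, t_j).

    k_g models the shared gene profile; k_h models per-replicate
    departures, so replicates need not share sampling times.
    """
    same = replicate_ids[:, None] == replicate_ids[None, :]
    return rbf(times, times, *theta_g) + same * rbf(times, times, *theta_h)

# two replicates observed on irregular, non-matching time grids
t = np.array([0.0, 1.0, 2.5, 0.5, 1.7])
r = np.array([0, 0, 0, 1, 1])
y = np.array([0.1, 0.9, 0.3, 0.2, 1.1])
K = hierarchical_cov(t, r, theta_g=(1.0, 1.0), theta_h=(0.2, 0.5))
# Gaussian log marginal likelihood under the hierarchical model
L = np.linalg.cholesky(K + 0.05 * np.eye(len(t)))
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
loglik = -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(t) * np.log(2 * np.pi)
```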
Note: High temperature pulsed solenoid valve.
Shen, Wei; Sulkes, Mark
2010-01-01
We have developed a high temperature pulsed solenoid valve with reliable long term operation to at least 400 degrees C. As in earlier published designs, a needle extension sealing a heated orifice is lifted via solenoid actuation; the solenoid is thermally isolated from the heated orifice region. In this new implementation, superior sealing and reliability were attained by choosing a solenoid that produces considerably larger lifting forces on the magnetically actuated plunger. It is this property that facilitates easily attainable sealing and reliability, albeit with some tradeoff in attainable gas pulse durations. The cost of the solenoid valve employed is quite low and the necessary machining quite simple. Our ultimate level of sealing was attained by making a simple modification to the polished seal at the needle tip. The same sealing tip modification could easily be applied to one of the earlier high T valve designs, which could improve the attainability and tightness of sealing for these implementations.
BGFit: management and automated fitting of biological growth curves.
Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana
2013-09-25
Existing tools to model cell growth curves do not offer a flexible integrative approach to manage large datasets and automatically estimate parameters. Given the increase in experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform, where a rich set of dynamic models can be fitted to experimental time-series data, further allowing users to efficiently manage the results in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data, and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and users can easily add new models, expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing users to compare and evaluate different data modeling techniques. The application is described in the context of bacterial and tumor cell growth data fitting, but it is also applicable to any type of two-dimensional data, e.g., physical chemistry and macroeconomic time series, being fully scalable to a high number of projects, data and model complexity.
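BGFit itself is a web platform, but the kind of fit it automates is easy to sketch. A minimal example assuming the Zwietering parameterization of the Gompertz model and synthetic data (this is not BGFit's API, only an illustration of the underlying estimation step):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Zwietering parameterization: A = asymptote, mu = maximum specific
    growth rate, lam = lag time."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

t = np.array([0, 1, 2, 3, 4, 5, 6, 8, 10, 12], dtype=float)
y = gompertz(t, A=1.8, mu=0.6, lam=2.0) + np.random.normal(0, 0.02, t.size)

params, cov = curve_fit(gompertz, t, y, p0=[1.5, 0.5, 1.0])
A_hat, mu_hat, lam_hat = params   # extracted growth parameters
```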
Building Blocks for Peer Success: Lessons Learned from a Train-the-Trainer Program
Downes, Alicia; Eddens, Shalini; Ruiz, John
2012-01-01
Abstract The National HIV/AIDS Strategy (NHAS) calls for a reduction in health disparities, a reduction in new HIV infections, and improved retention in HIV care and treatment. It acknowledges that HIV-positive peers can play an important role in supporting these aims. However, peer training must be comprehensive enough to equip peers with the knowledge and skills needed for this work. This article describes the development of a national train the trainer (TTT) model for HIV peer educators, and the results of its implementation and replication. A mixed methods evaluation identified who was trained locally as a result of TTT implementation, what aspects of the TTT were most useful to trainers in implementing local training sessions, and areas for improvement. Over the course of 1 year, 91 individuals were trained at 1 of 6 TTT sessions. These individuals then conducted 26 local training sessions for 272 peers. Factors that facilitated local replication training included the teach-back/feedback model, faculty modeling of facilitation styles, financial support for training logistics, and faculty support in designing and implementing the training. The model could be improved by providing instruction on how to incorporate peers as part of the training team. TTT programs that are easily replicable in the community will be an important asset in developing a peer workforce that can help implement the National AIDS Strategy. PMID:22103430
Three-dimensional geometry of coronal loops inferred by the Principal Component Analysis
NASA Astrophysics Data System (ADS)
Nisticò, Giuseppe; Nakariakov, Valery
We propose a new method for the determination of the three-dimensional (3D) shape of coronal loops from stereoscopy. The common approach requires finding a 1D geometric curve, such as a circle or an ellipse, that best fits the 3D tie-points which sample the loop shape in a given coordinate system. This can be easily achieved by Principal Component (PC) analysis. It consists mainly of calculating the eigenvalues and eigenvectors of the covariance matrix of the 3D tie-points: the eigenvalues give a measure of the variability of the distribution of the tie-points, and the corresponding eigenvectors define a new Cartesian reference frame directly related to the loop. The eigenvector associated with the smallest eigenvalue defines the normal to the loop plane, while the other two determine the directions of the loop axes: the major axis is related to the largest eigenvalue, and the minor axis to the second one. The magnitude of each axis is directly proportional to the square root of the corresponding eigenvalue. The technique is fast and easily implemented in several examples, returning best-fit estimates of the loop parameters and a 3D reconstruction with a reasonably small number of tie-points. The method is suitable for serial reconstruction of coronal loops in active regions, providing a useful tool for comparison between observations and theoretical magnetic field extrapolations from potential or force-free fields.
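The procedure translates almost line-for-line into code. A minimal NumPy sketch of the eigen-decomposition described above (illustrative, not the authors' implementation):

```python
import numpy as np

def loop_geometry(tie_points):
    """PCA of 3D tie-points sampling a coronal loop.

    Returns the loop-plane normal and the axis directions/lengths,
    following the eigenvalue ordering described in the abstract.
    """
    centered = tie_points - tie_points.mean(axis=0)
    cov = np.cov(centered.T)                 # 3x3 covariance matrix
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    normal = evecs[:, 0]                     # smallest eigenvalue -> plane normal
    minor_axis, major_axis = evecs[:, 1], evecs[:, 2]
    minor_len, major_len = np.sqrt(evals[1]), np.sqrt(evals[2])
    return normal, major_axis, major_len, minor_axis, minor_len
```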
Linear homotopy solution of nonlinear systems of equations in geodesy
NASA Astrophysics Data System (ADS)
Paláncz, Béla; Awange, Joseph L.; Zaletnyik, Piroska; Lewis, Robert H.
2010-01-01
A fundamental task in geodesy is solving systems of equations. Many geodetic problems are represented as systems of multivariate polynomials. A common problem in solving such systems is improper initial starting values for iterative methods, leading to convergence to solutions with no physical meaning, or to convergence that requires global methods. Though symbolic methods such as Groebner bases or resultants have been shown to be very efficient, i.e., providing solutions for determined systems such as the 3-point problem of 3D affine transformation, the symbolic algebra can be very time consuming, even with special Computer Algebra Systems (CAS). This study proposes the Linear Homotopy method, which can be implemented easily in high-level computer languages like C++ and Fortran that are faster than CAS by at least two orders of magnitude. Using Mathematica, the power of Homotopy is demonstrated in solving three nonlinear geodetic problems: resection, GPS positioning, and affine transformation. The method, which enlarges the domain of convergence, is found to be efficient, less sensitive to rounding of numbers, and of lower complexity compared to other local methods like Newton-Raphson.
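The abstract does not spell out the homotopy construction, so the sketch below uses the common Newton-style linear homotopy H(x, lam) = F(x) - (1 - lam) F(x0), which connects a trivially solved start system at lam = 0 to the target system at lam = 1; a toy two-equation system stands in for the geodetic ones.

```python
import numpy as np

def homotopy_solve(F, J, x0, steps=100, newton_iters=5):
    """Track H(x, lam) = F(x) - (1 - lam) * F(x0) from the trivial root x0
    at lam = 0 to a root of F at lam = 1. J is the Jacobian of F."""
    x = np.asarray(x0, dtype=float)
    f0 = F(x)
    for lam in np.linspace(0.0, 1.0, steps)[1:]:
        for _ in range(newton_iters):                 # corrector steps
            residual = F(x) - (1.0 - lam) * f0
            x = x - np.linalg.solve(J(x), residual)
    return x

# toy example: intersection of a circle and a line
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = homotopy_solve(F, J, x0=[1.0, 0.5])   # tracks to a root near (1.414, 1.414)
```

Because each step needs only function and Jacobian evaluations plus a small linear solve, the scheme ports directly to C++ or Fortran, which is the speed argument the abstract makes.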
Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi
2016-03-01
Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increases, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions in a more efficient manner than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods based on trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.
Pulse-coupled neural network implementation in FPGA
NASA Astrophysics Data System (ADS)
Waldemark, Joakim T. A.; Lindblad, Thomas; Lindsey, Clark S.; Waldemark, Karina E.; Oberg, Johnny; Millberg, Mikael
1998-03-01
Pulse Coupled Neural Networks (PCNN) are biologically inspired neural networks, mainly based on studies of the visual cortex of small mammals. The PCNN is very well suited as a pre-processor for image processing, particularly in connection with object isolation, edge detection and segmentation. Several implementations of PCNN on von Neumann computers, as well as on special parallel processing hardware devices (e.g. SIMD), exist. However, these implementations are not as flexible as required for many applications. Here we present an implementation in Field Programmable Gate Arrays (FPGA) together with a performance analysis. The FPGA hardware implementation may be considered a platform for further, extended implementations and easily expanded into various applications. The latter may include advanced on-line image analysis with close to real-time performance.
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second refers to the huge amount of operations involved in these time-consuming processes. This work proposes an algorithm to estimate the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it performs the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution. Owing to the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
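A minimal dense NumPy sketch of this joint refinement, using simple clipping and renormalization as stand-ins for the abstract's "mathematical restrictions" (nonnegativity and sum-to-one); the per-pixel abundance updates are independent of each other, which is the source of the parallelism. The learning rate and initialization are illustrative only.

```python
import numpy as np

def unmix(Y, p, iters=500, lr=1e-3):
    """Jointly refine endmembers E and abundances A by gradient descent on
    the linear-mixing residual ||Y - E A||_F^2.

    Y: (bands, pixels) image matrix; p: number of endmembers.
    """
    bands, pixels = Y.shape
    rng = np.random.default_rng(0)
    E = Y[:, rng.choice(pixels, p, replace=False)]     # init endmembers from pixels
    A = np.full((p, pixels), 1.0 / p)
    for _ in range(iters):
        R = E @ A - Y                                  # residual
        A = np.clip(A - lr * (E.T @ R), 0.0, None)     # abundance step (per pixel)
        A /= A.sum(axis=0, keepdims=True) + 1e-12      # sum-to-one constraint
        R = E @ A - Y
        E = np.clip(E - lr * (R @ A.T), 0.0, None)     # endmember (virtual pixel) step
    return E, A
```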
NASA Astrophysics Data System (ADS)
Xu, Boyi; Xu, Li Da; Fei, Xiang; Jiang, Lihong; Cai, Hongming; Wang, Shuai
2017-08-01
Facing rapidly changing business environments, implementation of flexible business processes is crucial, but difficult, especially in data-intensive application areas. This study aims to provide scalable and easily accessible information resources to leverage business process management. In this article, with a resource-oriented approach, enterprise data resources are represented as data-centric Web services, grouped on demand according to business requirements and configured dynamically to adapt to changing business processes. First, a configurable architecture, CIRPA, involving an information resource pool is proposed to act as a scalable and dynamic platform for virtualizing enterprise information resources as data-centric Web services. By exposing data-centric resources as REST services at larger granularities, tenant-isolated information resources can be accessed during business process execution. Second, a dynamic information resource pool is designed to fulfil configurable and on-demand data access in business process execution. CIRPA also isolates transaction data from business processes while supporting the composition of diverse business processes. Finally, a case study applying our method to a logistics application shows that CIRPA provides enhanced performance in both static service encapsulation and dynamic service execution in a cloud computing environment.
Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lin; Shao, Sihong; E, Weinan
2012-11-06
We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt₂, Au₂, TlF, and Bi₂Se₃ indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.
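The DKS operator and the filtering step F are beyond a short example, but the LOBPCG framework the method plugs into is available in SciPy. A toy sketch with a stand-in sparse Hermitian operator and a simple Jacobi preconditioner playing the structural role that the filtering step plays in LOBPCG-F (none of this is the authors' code):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, LinearOperator

# a stand-in sparse symmetric operator (NOT the DKS operator itself)
n = 200
H = sp.diags([np.arange(1, n + 1, dtype=float)], [0]) \
    + sp.diags([np.full(n - 1, 0.1)], [1]) + sp.diags([np.full(n - 1, 0.1)], [-1])

# a diagonal (Jacobi) preconditioner; in LOBPCG-F this slot is taken by
# the filtering step that steers the iteration toward the wanted band
M = LinearOperator((n, n), matvec=lambda x: x / H.diagonal())

X0 = np.random.default_rng(1).standard_normal((n, 4))   # 4 starting vectors
eigenvalues, eigenvectors = lobpcg(H, X0, M=M, largest=False, tol=1e-8, maxiter=500)
```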
2012-01-01
Background: To investigate organisational factors influencing the implementation challenges of redesigning services for people with long term conditions in three locations in England, using remote care (telehealth and telecare). Methods: Case studies of three sites forming the UK Department of Health's Whole Systems Demonstrator (WSD) Programme. Qualitative research techniques were used to obtain data from various sources, including semi-structured interviews, observation of meetings over the course of the programme and prior to its launch, and document review. Participants were managers and practitioners involved in the implementation of remote care services. Results: The implementation of remote care was nested within a large pragmatic cluster randomised controlled trial (RCT), which formed a core element of the WSD programme. To produce robust benefits evidence, many aspects of the trial design could not be easily adapted to local circumstances. While remote care was successfully rolled out, wider implementation lessons and levels of organisational learning across the sites were hindered by the requirements of the RCT. Conclusions: The implementation of a complex innovation such as remote care requires it to organically evolve, and to be responsive and adaptable to the local health and social care system, driven by support from front-line staff and management. This need for evolution was not always aligned with the imperative to gather robust benefits evidence. This tension needs to be resolved if government ambitions for the evidence-based scaling-up of remote care are to be realised. PMID:23153014
NoSQL Based 3D City Model Management System
NASA Astrophysics Data System (ADS)
Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.
2014-04-01
To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
A new diode laser acupuncture therapy apparatus
NASA Astrophysics Data System (ADS)
Li, Chengwei; Huang, Zhen; Li, Dongyu; Zhang, Xiaoyuan
2006-06-01
Since the first laser-needle acupuncture apparatus was introduced into therapy, this kind of apparatus has been widely used in laser biomedicine as a non-invasive, pain-free, aseptic, and safe tool. The laser acupuncture apparatus in this paper is based on a single-chip microcomputer combined with semiconductor laser technology. Functions like those of traditional moxibustion, including reinforcing and reducing, are implemented by applying a chaos method to control the duty cycle of the moxibustion signal, and the traditional lifting and thrusting of acupuncture is implemented by changing the power output of the diode laser. The radiator element of the diode laser is built and the drive circuit is designed. A chaotic mathematical model is used to produce a deterministic, class-stochastic signal to avoid bodily adaptation. This overcomes the shortcomings of continuous irradiation or of a simple regular stimulus signal, which can be generated by a simple electronic circuit but is easily adapted to by the human body. The realization of reinforcing and reducing in moxibustion is a technological innovation that brings traditional acupuncture into engineering practice.
Abascal, Ana J; Sanchez, Jorge; Chiri, Helios; Ferrer, María I; Cárdenas, Mar; Gallego, Alejandro; Castanedo, Sonia; Medina, Raúl; Alonso-Martirena, Andrés; Berx, Barbara; Turrell, William R; Hughes, Sarah L
2017-06-15
This paper presents a novel operational oil spill modelling system based on HF radar currents, implemented in a northwest European shelf sea. The system integrates Open Modal Analysis (OMA), Short Term Prediction algorithms (STPS) and an oil spill model to simulate oil spill trajectories. A set of 18 buoys was used to assess the accuracy of the system for trajectory forecasting and to evaluate the benefits of HF radar data compared to the use of currents from a hydrodynamic model (HDM). The results showed that simulated trajectories using OMA currents were more accurate than those obtained using a HDM. After 48 h the mean error was reduced by 40%. The forecast skill of the STPS method was valid up to 6 h ahead. The analysis performed shows the benefits of HF radar data for operational oil spill modelling, which could be easily implemented in other regions with HF radar coverage. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Improved chemical identification from sensor arrays using intelligent algorithms
NASA Astrophysics Data System (ADS)
Roppel, Thaddeus A.; Wilson, Denise M.
2001-02-01
Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily-implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve the identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a priori knowledge about the health of the sensors in the array improved the chemical identification rate by an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
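The paper's exact best-match rule is not given in the abstract; the sketch below shows one plausible nearest-neighbor variant: match the current reading against a library of clean response patterns using only the channels flagged healthy, then substitute the faulted channels from the winning pattern. All names are hypothetical.

```python
import numpy as np

def best_match_fill(sample, healthy_mask, library):
    """Replace faulted channels in `sample` using the library pattern that
    best matches on the healthy channels (a nearest-neighbor rule).

    sample: (n_sensors,) current reading, faulted entries unreliable
    healthy_mask: boolean (n_sensors,), True where the sensor is good
    library: (n_patterns, n_sensors) previously recorded clean responses
    """
    d = np.linalg.norm(library[:, healthy_mask] - sample[healthy_mask], axis=1)
    best = library[np.argmin(d)]
    filled = sample.copy()
    filled[~healthy_mask] = best[~healthy_mask]   # substitute faulted channels
    return filled
```

The repaired vector can then be passed to the downstream classifier (here, the neural network) unchanged, which is what makes the approach easy to retrofit.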
nSTAT: Open-Source Neural Spike Train Analysis Toolbox for Matlab
Cajigas, I.; Malik, W.Q.; Brown, E.N.
2012-01-01
Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point-process generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limit wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT - an open source neural spike train analysis toolbox for Matlab®. By adopting an Object-Oriented Programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
NASA Astrophysics Data System (ADS)
Leamy, Michael J.; Springer, Adam C.
In this research we report a parallel implementation of a Cellular Automata-based simulation tool for computing elastodynamic response on complex, two-dimensional domains. Elastodynamic simulation using Cellular Automata (CA) has recently been presented as an alternative, inherently object-oriented technique for accurately and efficiently computing linear and nonlinear wave propagation in arbitrarily-shaped geometries. The local, autonomous nature of the method should lead to straightforward and efficient parallelization. We address this notion on symmetric multiprocessor (SMP) hardware using a Java-based object-oriented CA code implementing triangular state machines (i.e., automata) and the MPI bindings written in Java (MPJ Express). We use MPJ Express to reconfigure our existing CA code to distribute a domain's automata to cores present on a dual quad-core shared-memory system (eight total processors). We note that this message passing parallelization strategy is directly applicable to clustered computing, which will be the focus of follow-on research. Results on the shared memory platform indicate nearly-ideal, linear speed-up. We conclude that the CA-based elastodynamic simulator is easily configured to run in parallel, and yields excellent speed-up on SMP hardware.
Livet, Melanie; Fixsen, Amanda
2018-01-01
With mental health services shifting to community-based settings, community mental health (CMH) organizations are under increasing pressure to deliver effective services. Despite availability of evidence-based interventions, there is a gap between effective mental health practices and the care that is routinely delivered. Bridging this gap requires availability of easily tailorable implementation support tools to assist providers in implementing evidence-based intervention with quality, thereby increasing the likelihood of achieving the desired client outcomes. This study documents the process and lessons learned from exploring the feasibility of adapting such a technology-based tool, Centervention, as the example innovation, for use in CMH settings. Mixed-methods data on core features, innovation-provider fit, and organizational capacity were collected from 44 CMH providers. Lessons learned included the need to augment delivery through technology with more personal interactions, the importance of customizing and integrating the tool with existing technologies, and the need to incorporate a number of strategies to assist with adoption and use of Centervention-like tools in CMH contexts. This study adds to the current body of literature on the adaptation process for technology-based tools and provides information that can guide additional innovations for CMH settings.
Ultrafast, 2 min synthesis of monolayer-protected gold nanoclusters (d < 2 nm)
NASA Astrophysics Data System (ADS)
Martin, Matthew N.; Li, Dawei; Dass, Amala; Eah, Sang-Kee
2012-06-01
An ultrafast synthesis method is presented for hexanethiolate-coated gold nanoclusters (d < 2 nm, <250 atoms per nanocluster), which takes only 2 min and can be easily reproduced. With two immiscible solvents, gold nanoclusters are separated from the reaction byproducts quickly and easily without any need for post-synthesis cleaning. Electronic supplementary information (ESI) available: Experimental details of gold nanocluster synthesis and mass-spectrometry. See DOI: 10.1039/c2nr30890h
Crowley, D. Max; Coffman, Donna L.; Feinberg, Mark; Greenberg, Mark; Spoth, Richard
2013-01-01
Despite growing recognition of the important role implementation plays in successful prevention efforts, relatively little work has sought to demonstrate a causal relationship between implementation factors and participant outcomes. In turn, failure to explore the implementation-to-outcome link limits our understanding of the mechanisms essential to successful programming. This gap is partially due to the inability of current methodological procedures within prevention science to account for the multitude of confounders responsible for variation in implementation factors (i.e., selection bias). The current paper illustrates how propensity and marginal structural models can be used to improve causal inferences involving implementation factors not easily randomized (e.g., participant attendance). We first present analytic steps for simultaneously evaluating the impact of multiple implementation factors on prevention program outcome. Then we demonstrate this approach for evaluating the impact of enrollment and attendance in a family program, over and above the impact of a school-based program, within PROSPER, a large-scale, real-world prevention trial. Findings illustrate the capacity of this approach to successfully account for confounders that influence enrollment and attendance, thereby more accurately representing true causal relations. For instance, after accounting for selection bias, we observed a 5% reduction in the prevalence of 11th grade underage drinking for those who chose to receive a family program and school program compared to those who received only the school program. Further, we detected a 7% reduction in underage drinking for those with high attendance in the family program. PMID:23430578
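A compressed sketch of the weighting idea for a single binary implementation factor (attendance), assuming scikit-learn for the propensity model; the study itself uses richer propensity and marginal structural models over multiple factors, so this is only the core mechanism, with all names illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_effect(X, attended, outcome):
    """Estimate the effect of (non-randomized) program attendance on a
    binary outcome via inverse-probability-of-treatment weighting.

    X: (n, p) baseline confounders; attended, outcome: (n,) 0/1 arrays.
    """
    ps = LogisticRegression(max_iter=1000).fit(X, attended).predict_proba(X)[:, 1]
    w = attended / ps + (1 - attended) / (1 - ps)      # IPT weights
    treated = np.average(outcome[attended == 1], weights=w[attended == 1])
    control = np.average(outcome[attended == 0], weights=w[attended == 0])
    return treated - control                            # weighted risk difference
```

Weighting makes attenders and non-attenders comparable on the measured confounders, which is what licenses the causal reading of the difference.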
Vamos, Cheryl A; Cantor, Allison; Thompson, Erika L; Detman, Linda A; Bronson, Emily A; Phelps, Annette; Louis, Judette M; Gregg, Anthony R; Curran, John S; Sappenfield, William M
2016-10-01
Objectives: Obstetric hemorrhage is one of the leading causes of maternal mortality. The Florida Perinatal Quality Collaborative coordinates a state-wide Obstetric Hemorrhage Initiative (OHI) to assist hospitals in implementing best practices related to this preventable condition. This study examined intervention characteristics that influenced the OHI implementation experiences among Florida hospitals. Methods: Purposive sampling was employed to recruit diverse hospitals and multidisciplinary staff members. A semi-structured interview guide was developed based on the following constructs from the intervention characteristics domain of the Consolidated Framework for Implementation Research: evidence strength; complexity; adaptability; and packaging. Interviews were audio-recorded, transcribed and analyzed using Atlas.ti. Results: Participants (n = 50) across 12 hospitals agreed that the OHI is evidence-based and supported by various information sources (scientific literature, experience, and other epidemiologic or quality improvement data). Participants believed the OHI was 'average' in complexity, with variation depending on the participant's role and the intervention component. Participants discussed how the OHI is flexible and can be easily adapted and integrated into different hospital settings, policies and resources. The packaging was also found to be valuable in providing materials and supports (e.g., toolkit; webinars; forms; technical assistance) that assisted implementation across activities. Conclusions for Practice: Participants reflected positively with regard to the evidence strength, adaptability, and packaging of the OHI. However, the complexity of the initiative adversely affected implementation experiences and required additional efforts to maximize the initiative's effectiveness. Findings will inform future efforts to facilitate implementation experiences of evidence-based practices for hemorrhage prevention, ultimately decreasing maternal morbidity and mortality.
In-Memory Business Intelligence: Concepts and Performance
NASA Astrophysics Data System (ADS)
Rantung, V. P.; Kembuan, O.; Rompas, P. T. D.; Mewengkang, A.; Liando, O. E. S.; Sumayku, J.
2018-02-01
This research aims to discuss in-memory Business Intelligence (BI) and to model business analysis questions in order to assess the performance of in-memory BI. Using the Qlickview application, BI dashboards were found to be easily accessed and modified. The dashboards are developed using an agile development approach comprising pre-study, planning, iterative execution, implementation, and evaluation. Ultimately, this research helps analysts choose the right implementation for a BI solution.
NASA software specification and evaluation system: Software verification/validation techniques
NASA Technical Reports Server (NTRS)
1977-01-01
NASA software requirement specifications were used in the development of a system for validating and verifying computer programs. The software specification and evaluation system (SSES) provides for the effective and efficient specification, implementation, and testing of computer software programs. The system as implemented will produce structured FORTRAN or ANSI FORTRAN programs, but the principles upon which SSES is designed allow it to be easily adapted to other high order languages.
ZebraBeat: a flexible platform for the analysis of the cardiac rate in zebrafish embryos
NASA Astrophysics Data System (ADS)
de Luca, Elisa; Zaccaria, Gian Maria; Hadhoud, Marwa; Rizzo, Giovanna; Ponzini, Raffaele; Morbiducci, Umberto; Santoro, Massimo Mattia
2014-05-01
Heartbeat measurement is important in assessing cardiac function because variations in heart rhythm can be the cause as well as an effect of hidden pathological heart conditions. Zebrafish (Danio rerio) has emerged as one of the most useful model organisms for cardiac research. Indeed, the zebrafish heart is easily accessible for optical analyses without conducting invasive procedures and shows anatomical similarity to the human heart. In this study, we present a non-invasive, simple, cost-effective process to quantify the heartbeat in embryonic zebrafish. To achieve reproducibility, high throughput and flexibility (i.e., adaptability to any existing confocal microscope system, with a user-friendly interface that can be easily used by researchers), we implemented this method within a software program. We show here that this platform, called ZebraBeat, can successfully detect heart rate variations in embryonic zebrafish at various developmental stages, and it can record cardiac rate fluctuations induced by factors such as temperature and genetic- and chemical-induced alterations. Applications of this methodology may include the screening of chemical libraries affecting heart rhythm and the identification of heart rhythm variations in mutants from large-scale forward genetic screens.
Guo, Lili; Qi, Junwei; Xue, Wei
2018-01-01
This article proposes a novel active localization method based on the mixed polarization multiple signal classification (MP-MUSIC) algorithm for positioning a metal target or an insulator target in the underwater environment by using a uniform circular antenna (UCA). The boundary element method (BEM) is introduced to analyze the boundary of the target by use of a matrix equation. In this method, an electric dipole source, as a part of the locating system, is set perpendicular to the plane of the UCA. As a result, the UCA receives only the induction field of the target. The potential of each electrode of the UCA is used as spatial-temporal localization data, and there is no need to obtain the field component in each direction, in contrast to the conventional fields-based localization method; this can be easily implemented in practical engineering applications. A simulation model and a physical experiment are constructed. The simulation and experiment results demonstrate accurate positioning performance, verifying the effectiveness of the proposed localization method in underwater target locating. PMID:29439495
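The mixed-polarization and BEM details are specific to the paper, but the MUSIC machinery underneath is standard and compact. A generic sketch, where the steering matrix would in practice come from the BEM forward model of the electrode potentials (all names illustrative, not the paper's code):

```python
import numpy as np

def music_spectrum(snapshots, steering, n_sources):
    """Generic MUSIC pseudospectrum.

    snapshots: (n_electrodes, n_samples) measured potentials
    steering:  (n_electrodes, n_candidates) modeled response for each
               candidate target position (here it would come from the BEM)
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    evals, evecs = np.linalg.eigh(R)                          # ascending order
    En = evecs[:, : R.shape[0] - n_sources]                   # noise subspace
    proj = np.linalg.norm(En.conj().T @ steering, axis=0) ** 2
    return 1.0 / proj        # peaks where a candidate position matches a source
```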
A level set method for cupping artifact correction in cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Shipeng; Li, Haibo; Ge, Qi
2015-08-15
Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.
Assessing and minimizing contamination in time of flight based validation data
NASA Astrophysics Data System (ADS)
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
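A toy sketch of the underlying trade-off, assuming (purely for illustration) Gaussian travel-time models for the two particle species: shrinking the neutron acceptance window cuts gamma contamination at the cost of neutron efficiency. The distributions, prior fraction, and window bounds below are made-up numbers, not values from the Cf-252 data.

```python
import numpy as np
from scipy.stats import norm

def window_contamination(lo, hi, gamma_model, neutron_model, gamma_frac):
    """Fraction of events inside [lo, hi] that are gammas (the contaminant
    when the window is meant to select neutrons), plus neutron efficiency.

    gamma_model / neutron_model: frozen scipy travel-time distributions.
    gamma_frac: prior fraction of gamma events reaching the detector.
    """
    g = gamma_frac * (gamma_model.cdf(hi) - gamma_model.cdf(lo))
    n = (1 - gamma_frac) * (neutron_model.cdf(hi) - neutron_model.cdf(lo))
    return g / (g + n), n / (1 - gamma_frac)

# illustrative numbers only: gammas arrive early, neutrons much later
gammas = norm(loc=3.0, scale=1.0)
neutrons = norm(loc=40.0, scale=8.0)
for hi in (70.0, 55.0, 45.0):   # shrinking the window trades efficiency for purity
    c, eff = window_contamination(25.0, hi, gammas, neutrons, gamma_frac=0.5)
    print(f"window [25, {hi:.0f}] ns: contamination={c:.2e}, efficiency={eff:.2f}")
```

Scanning window bounds against such models and picking the interval that balances size against residual contamination mirrors the optimization the paper describes.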
CP-CHARM: segmentation-free image classification made accessible.
Uhlmann, Virginie; Singh, Shantanu; Carpenter, Anne E
2016-01-27
Automated classification using machine learning often relies on features derived from segmenting individual objects, which can be difficult to automate. WND-CHARM is a previously developed classification algorithm in which features are computed on the whole image, thereby avoiding the need for segmentation. The algorithm obtained encouraging results but requires considerable computational expertise to execute. Furthermore, some benchmark sets have been shown to be subject to confounding artifacts that overestimate classification accuracy. We developed CP-CHARM, a user-friendly image-based classification algorithm inspired by WND-CHARM in (i) its ability to capture a wide variety of morphological aspects of the image, and (ii) the absence of requirement for segmentation. In order to make such an image-based classification method easily accessible to the biological research community, CP-CHARM relies on the widely-used open-source image analysis software CellProfiler for feature extraction. To validate our method, we reproduced WND-CHARM's results and ensured that CP-CHARM obtained comparable performance. We then successfully applied our approach on cell-based assay data and on tissue images. We designed these new training and test sets to reduce the effect of batch-related artifacts. The proposed method preserves the strengths of WND-CHARM - it extracts a wide variety of morphological features directly on whole images thereby avoiding the need for cell segmentation, but additionally, it makes the methods easily accessible for researchers without computational expertise by implementing them as a CellProfiler pipeline. It has been demonstrated to perform well on a wide range of bioimage classification problems, including on new datasets that have been carefully selected and annotated to minimize batch effects. This provides for the first time a realistic and reliable assessment of the whole image classification strategy.
Prototyping a bedside documentation system.
Bachand, P; Bobis, K
1993-01-01
The implementation of a comprehensive bedside documentation system is a major project that demands careful analysis and planning. Since the cost of a typical bedside system can easily exceed $3 million, a design oversight could have disastrous effects on the benefits of the system.
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
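The within-cluster resampling idea is generic enough to sketch independently of the transformation model: draw one observation per cluster, so cluster size cannot be informative, apply any standard estimator, and average over resamples. A minimal sketch with a user-supplied `fit` callable (illustrative, not the authors' software):

```python
import numpy as np

def within_cluster_resampling(fit, cluster_ids, n_resamples=500, seed=0):
    """Within-cluster resampling: repeatedly sample one observation per
    cluster, apply a standard (independence-based) estimator, and average.

    fit: callable taking an index array into the data set and returning a
         parameter estimate (e.g., a fit to right-censored failure times).
    """
    rng = np.random.default_rng(seed)
    clusters = [np.flatnonzero(cluster_ids == c) for c in np.unique(cluster_ids)]
    estimates = []
    for _ in range(n_resamples):
        idx = np.array([rng.choice(members) for members in clusters])
        estimates.append(fit(idx))
    return np.mean(estimates, axis=0)
```

Because each resample is an ordinary one-observation-per-cluster data set, existing software for right-censored failure time data can be reused unchanged, which is the practical advantage the abstract notes.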
Rapid Assessment of Contrast Sensitivity with Mobile Touch-screens
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
2013-01-01
The availability of low-cost high-quality touch-screen displays in modern mobile devices has created opportunities for new approaches to routine visual measurements. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating which is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. We will demonstrate a prototype for Apple Computer's iPad-iPod-iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing,
Assessing the Robustness of Complete Bacterial Genome Segmentations
NASA Astrophysics Data System (ADS)
Devillers, Hugo; Chiapello, Hélène; Schbath, Sophie; El Karoui, Meriem
Comparison of closely related bacterial genomes has revealed the presence of highly conserved sequences forming a "backbone" that is interrupted by numerous, less conserved, DNA fragments. Segmentation of bacterial genomes into backbone and variable regions is particularly useful to investigate bacterial genome evolution. Several software tools have been designed to compare complete bacterial chromosomes, and a few online databases store pre-computed genome comparisons. However, very few statistical methods are available to evaluate the reliability of these software tools and to compare the results obtained with them. To fill this gap, we have developed two local scores to measure the robustness of bacterial genome segmentations. Our method uses a simulation procedure based on random perturbations of the compared genomes. The scores presented in this paper are simple to implement, and our results show that they allow easy discrimination between robust and non-robust bacterial genome segmentations when using aligners such as MAUVE and MGA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spanner, Michael; Batista, Victor S.; Brumer, Paul
2005-02-22
The utility of the Filinov integral conditioning technique, as implemented in semiclassical initial value representation (SC-IVR) methods, is analyzed for a number of regular and chaotic systems. For nonchaotic systems of low dimensionality, the Filinov technique is found to be quite ineffective at accelerating convergence of semiclassical calculations since, contrary to the conventional wisdom, the semiclassical integrands usually do not exhibit significant phase oscillations in regions of large integrand amplitude. In the case of chaotic dynamics, it is found that the regular component is accurately represented by the SC-IVR, even when using the Filinov integral conditioning technique, but that quantum manifestations of chaotic behavior were easily overdamped by the filtering technique. Finally, it is shown that the level of approximation introduced by the Filinov filter is, in general, comparable to the simpler ad hoc truncation procedure introduced by Kay [J. Chem. Phys. 101, 2250 (1994)].
Visualization and imaging methods for flames in microgravity
NASA Technical Reports Server (NTRS)
Weiland, Karen J.
1993-01-01
The visualization and imaging of flames has long been acknowledged as the starting point for learning about and understanding combustion phenomena. It provides an essential overall picture of the time and length scales of processes and guides the application of other diagnostics. It is perhaps even more important in microgravity combustion studies, where it is often the only non-intrusive diagnostic measurement easily implemented. Imaging also aids in the interpretation of single-point measurements, such as temperature, provided by thermocouples, and velocity, by hot-wire anemometers. This paper outlines the efforts of the Microgravity Combustion Diagnostics staff at NASA Lewis Research Center in the area of visualization and imaging of flames, concentrating on methods applicable for reduced-gravity experimentation. Several techniques are under development: intensified array camera imaging, and two-dimensional temperature and species concentration measurements. A brief summary of results in these areas is presented and future plans are mentioned.
Learning gestures for customizable human-computer interaction in the operating room.
Schwarz, Loren Arthur; Bigdelou, Ali; Navab, Nassir
2011-01-01
Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach is able to robustly recognize learned gestures and to distinguish these from other movements.
Analysis of cold worked holes for structural life extension
NASA Technical Reports Server (NTRS)
Wieland, David H.; Cutshall, Jon T.; Burnside, O. Hal; Cardinal, Joseph W.
1994-01-01
Cold working of fastener holes for improved fatigue life is widely used on aircraft. This paper presents methods used by the authors to determine the percent of cold working to be applied and to analyze fatigue crack growth of cold worked fastener holes. An elastic, perfectly-plastic analysis of a thick-walled tube is used to determine the stress field during the cold working process and the residual stress field after the process is completed. The results of the elastic/plastic analysis are used to determine the amount of cold working to apply to a hole. The residual stress field is then used to perform damage tolerance analysis of a crack growing out of a cold worked fastener hole. This analysis method is easily implemented in existing crack growth computer codes, so that cold worked holes can be used to extend the structural life of aircraft. Analytical results are compared to test data where appropriate.
NASA Astrophysics Data System (ADS)
Barouchas, Pantelis; Koulos, Vasilios; Melfos, Vasilios
2017-04-01
For the determination of total carbonates in soil archaeometry, a new technique was applied using a multi-sensor philosophy which combines simultaneous measurement of pressure and temperature. This technology is innovative and complies with EN ISO 10693:2013, ASTM D4373-02(2007) and Soil Science Society of America standard test methods for calcium carbonate content in soils and sediments. The total carbonates analysis is based on a pressure method that utilizes the FOGII Digital Soil CalcimeterTM, a portable apparatus. The total carbonate content is determined by treating 1.000 g (+/- 0.001 g) dried sample specimens with 6N hydrochloric acid (HCl), reagent grade, in an enclosed reaction vessel. Carbon dioxide gas evolved during the reaction between the acid and the carbonate fraction of the specimen is measured by the resulting pressure generated, taking into account the temperature conditions during the reaction. Prior to analysis, the procedure was validated with sand/soil mixtures from the BIPEA proficiency testing program with soils of different origins. To apply this new method in archaeometry, a total of ten samples were taken from various rocks which are related to cultural constructions and implements in Greece. They represent a large range of periods since Neolithic times, and were selected because there was uncertainty about their accurate mineralogical composition, especially regarding the presence of carbonate minerals. The results were compared to those from an ELTRA CS580 inorganic carbon analyzer using an infrared cell. The determination of total carbonates for the 10 samples from different ancient sites indicated a very good correlation (R2 > 0.97) between the pressure method with temperature compensation and the infrared method. The proposed method is quick and accurate in archaeometry and can easily replace other techniques for total carbonate testing. The FOGII Digital Soil CalcimeterTM is portable and can easily be carried for fieldwork in archaeology.
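The pressure-to-carbonate conversion underlying such calcimeters follows from the reaction stoichiometry and the ideal gas law; a back-of-the-envelope sketch follows (the headspace volume and readings below are illustrative assumptions, not instrument specifications):

```python
R = 8.314          # gas constant, J / (mol K)
M_CACO3 = 100.09   # molar mass of CaCO3, g / mol

def carbonate_percent(delta_p_pa, headspace_m3, temp_k, sample_g):
    """CaCO3 + 2 HCl -> CaCl2 + H2O + CO2: one mole of CO2 per mole of
    carbonate, so the pressure rise in a closed vessel gives the carbonate
    mass via the ideal gas law, with temperature compensation."""
    n_co2 = delta_p_pa * headspace_m3 / (R * temp_k)   # moles of CO2 evolved
    return 100.0 * n_co2 * M_CACO3 / sample_g          # % CaCO3 (calcite equivalent)

# e.g. a 12 kPa rise in a 100 mL headspace at 296 K for a 1.000 g sample
print(carbonate_percent(12_000.0, 100e-6, 296.0, 1.000))   # about 4.9 %
```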
Statistical Approaches to Type Determination of the Ejector Marks on Cartridge Cases.
Warren, Eric M; Sheets, H David
2018-03-01
While type determination on bullets has been performed for over a century, type determination on cartridge cases is often overlooked. Presented here is an example of type determination of ejector marks on cartridge cases from Glock and Smith & Wesson Sigma series pistols using Naïve Bayes and Random Forest classification methods. The shapes of ejector marks were captured from images of test-fired cartridge cases and subjected to multivariate analysis. Naïve Bayes and Random Forest methods were used to assign the ejector shapes to the correct class of firearm with success rates as high as 98%. This method is easily implemented with equipment already available in crime laboratories and can serve as an investigative lead in the form of a list of firearms that could have fired the evidence. Paired with the FBI's General Rifling Characteristics (GRC) database, this could be an invaluable resource for firearm evidence at crime scenes. © 2017 American Academy of Forensic Sciences.
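A hedged sketch of the classification step using scikit-learn, with placeholder shape features standing in for the multivariate descriptors extracted from the cartridge-case images:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Hypothetical data: rows are multivariate shape descriptors of ejector marks,
# labels are firearm classes (0 = Glock, 1 = S&W Sigma, purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # placeholder shape features
y = rng.integers(0, 2, size=200)  # placeholder class labels

for clf in (GaussianNB(), RandomForestClassifier(n_estimators=200, random_state=0)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```

With real shape features, the per-class probabilities from either classifier can be ranked to produce the investigative lead list the abstract describes.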
Varying coefficient subdistribution regression for left-truncated semi-competing risks data.
Li, Ruosha; Peng, Limin
2014-10-01
Semi-competing risks data frequently arise in biomedical studies when time to a disease landmark event is subject to dependent censoring by death, the observation of which, however, is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the special truncation and censoring features of the data and, moreover, has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented, and the resulting estimators are shown to have nice asymptotic properties. We also present inference, such as Kolmogorov-Smirnov-type and Cramér-von Mises-type hypothesis testing procedures, for the covariate effects. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and practical utility of the proposed method.
Pulsation-based method for reduction of nitrogen oxides content in torch combustion products
NASA Astrophysics Data System (ADS)
Berg, I. A.; Porshnev, S. V.; Oshchepkova, V. Y.; Kit, M.
2018-01-01
Among all ways of burning fuel, torch combustion systems are used most often. Even though the processes in the steam boiler are stochastic, the system can be controlled rather easily by changing the flow rate of the air pumped into it and - in the case of balanced flue units - the exhauster load. Advantages offered by torch-based combustion systems are offset by a disadvantage: oxidation of the nitrogen contained in the air. This paper provides the rationale for an NOx content reduction method that employs a pulsation mode of fuel combustion; it also describes the combustion control and monitoring system employed for implementation of this method. The described methodology can be used not only for pulsation combustion studies but also for studies of torches formed by conventional burning systems. The outcome of the experimental study supports the assumption that it is possible to create conditions for NOx content reduction in flue gases by cycling the fuel supply on/off valve at a rate of 6 Hz.
Liu, Xiang; Peng, Yingwei; Tu, Dongsheng; Liang, Hua
2012-10-30
Survival data with a sizable cure fraction are commonly encountered in cancer research. The semiparametric proportional hazards cure model has been recently used to analyze such data. As seen in the analysis of data from a breast cancer study, a variable selection approach is needed to identify important factors in predicting the cure status and risk of breast cancer recurrence. However, no specific variable selection method for the cure model is available. In this paper, we present a variable selection approach with penalized likelihood for the cure model. The estimation can be implemented easily by combining the computational methods for penalized logistic regression and the penalized Cox proportional hazards models with the expectation-maximization algorithm. We illustrate the proposed approach on data from a breast cancer study. We conducted Monte Carlo simulations to evaluate the performance of the proposed method. We used and compared different penalty functions in the simulation studies. Copyright © 2012 John Wiley & Sons, Ltd.
Zhang, Chaosheng; Tang, Ya; Luo, Lin; Xu, Weilin
2009-11-01
Outliers in urban soil geochemical databases may indicate potentially contaminated land. Different methodologies that can be easily implemented for the identification of global and spatial outliers were applied to Pb concentrations in urban soils of Galway City in Ireland. Because of the strongly skewed distribution of the data, a Box-Cox transformation was performed prior to further analyses. The graphic methods of the histogram and box-and-whisker plot were effective in the identification of global outliers at the original scale of the dataset. Spatial outliers could be identified by a local indicator of spatial association (local Moran's I), cross-validation of kriging, and geographically weighted regression. The spatial locations of outliers were visualised using a geographical information system. The different methods showed generally consistent results, but differences existed. It is suggested that outliers identified by statistical methods should be confirmed and justified using scientific knowledge before they are properly dealt with.
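A short sketch of two of these detectors, assuming strictly positive concentration values and a user-supplied row-standardized spatial weights matrix; the study's GIS workflow is not reproduced:

```python
import numpy as np
from scipy import stats

def global_outliers_boxplot(x):
    """Box-and-whisker rule on Box-Cox-transformed concentrations."""
    xt, _ = stats.boxcox(x)              # requires strictly positive data
    q1, q3 = np.percentile(xt, [25, 75])
    iqr = q3 - q1
    return (xt < q1 - 1.5 * iqr) | (xt > q3 + 1.5 * iqr)

def local_moran(x, w):
    """Local Moran's I_i = z_i * sum_j w_ij z_j on standardized values.

    w: row-standardized spatial weights matrix with zero diagonal. A strongly
    negative I_i at a site with an extreme z_i flags a potential spatial outlier
    (a value unlike its neighbours).
    """
    z = (x - x.mean()) / x.std()
    return z * (w @ z)
```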
Tracking and Quantifying Developmental Processes in C. elegans Using Open-source Tools.
Dutta, Priyanka; Lehmann, Christina; Odedra, Devang; Singh, Deepika; Pohl, Christian
2015-12-16
Quantitatively capturing developmental processes is crucial to derive mechanistic models and key to identify and describe mutant phenotypes. Here protocols are presented for preparing embryos and adult C. elegans animals for short- and long-term time-lapse microscopy and methods for tracking and quantification of developmental processes. The methods presented are all based on C. elegans strains available from the Caenorhabditis Genetics Center and on open-source software that can be easily implemented in any laboratory independently of the microscopy system used. A reconstruction of a 3D cell-shape model using the modelling software IMOD, manual tracking of fluorescently-labeled subcellular structures using the multi-purpose image analysis program Endrov, and an analysis of cortical contractile flow using PIVlab (Time-Resolved Digital Particle Image Velocimetry Tool for MATLAB) are shown. It is discussed how these methods can also be deployed to quantitatively capture other developmental processes in different models, e.g., cell tracking and lineage tracing, tracking of vesicle flow.
Filtration Isolation of Nucleic Acids: A Simple and Rapid DNA Extraction Method.
McFall, Sally M; Neto, Mário F; Reed, Jennifer L; Wagner, Robin L
2016-08-06
FINA, filtration isolation of nucleic acids, is a novel extraction method which utilizes vertical filtration via a separation membrane and absorbent pad to extract cellular DNA from whole blood in less than 2 min. The blood specimen is treated with detergent, mixed briefly and applied by pipet to the separation membrane. The lysate wicks into the blotting pad due to capillary action, capturing the genomic DNA on the surface of the separation membrane. The extracted DNA is retained on the membrane during a simple wash step wherein PCR inhibitors are wicked into the absorbent blotting pad. The membrane containing the entrapped DNA is then added to the PCR reaction without further purification. This simple method does not require laboratory equipment and can be easily implemented with inexpensive laboratory supplies. Here we describe a protocol for highly sensitive detection and quantitation of HIV-1 proviral DNA from 100 µl whole blood as a model for early infant diagnosis of HIV that could readily be adapted to other genetic targets.
Experimental Methods for Trapping Ions Using Microfabricated Surface Ion Traps
Hong, Seokjun; Lee, Minjae; Kwon, Yeong-Dae; Cho, Dong-il "Dan"; Kim, Taehyun
2017-01-01
Ions trapped in a quadrupole Paul trap have been considered one of the strong physical candidates to implement quantum information processing. This is due to their long coherence time and their capability to manipulate and detect individual quantum bits (qubits). In more recent years, microfabricated surface ion traps have received more attention for large-scale integrated qubit platforms. This paper presents a microfabrication methodology for ion traps using micro-electro-mechanical system (MEMS) technology, including the fabrication method for a 14 µm-thick dielectric layer and metal overhang structures atop the dielectric layer. In addition, an experimental procedure for trapping ytterbium (Yb) ions of isotope 174 (174Yb+) using 369.5 nm, 399 nm, and 935 nm diode lasers is described. These methodologies and procedures involve many scientific and engineering disciplines, and this paper first presents the detailed experimental procedures. The methods discussed in this paper can easily be extended to the trapping of Yb ions of isotope 171 (171Yb+) and to the manipulation of qubits. PMID:28872137
Can we recognize horses by their ocular biometric traits using deep convolutional neural networks?
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Szadkowski, Mateusz
2017-08-01
This paper aims at determining the viability of horse recognition by means of ocular biometrics and deep convolutional neural networks (deep CNNs). Fast and accurate identification of race horses before racing is crucial for ensuring that exactly the horses that were declared are participating, using methods that are non-invasive and friendly to these delicate animals. As typical iris recognition methods require a lot of fine-tuning of the method parameters and high-quality data, CNNs seem like a natural candidate for this recognition task thanks to their potentially excellent ability to describe texture, combined with ease of implementation in an end-to-end manner. Also, with such an approach we can easily utilize both iris and periocular features without constructing complicated algorithms for each. We thus present a simple CNN classifier, able to correctly identify almost 80% of the samples in an identification scenario, and giving an equal error rate (EER) of less than 10% in a verification scenario.
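A minimal PyTorch sketch of a small CNN classifier of the kind described; the architecture, input size, and number of identities are illustrative assumptions, not the paper's network:

```python
import torch
import torch.nn as nn

class EyeNet(nn.Module):
    """Deliberately small CNN over cropped periocular images; one logit per
    horse identity for the identification scenario."""
    def __init__(self, n_horses):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_horses)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A forward pass on a batch of four 128x128 RGB eye crops
logits = EyeNet(n_horses=20)(torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 20])
```

For the verification scenario, pairwise scores between embedding vectors (e.g., the 64-dimensional pooled features) would be thresholded to compute the EER.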
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy.
Boswell, Sarah A; Jeraj, Robert; Ruchala, Kenneth J; Olivera, Gustavo H; Jaradat, Hazim A; James, Joshua A; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T Rock
2005-06-01
An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.
NASA Astrophysics Data System (ADS)
Maass, Bolko
2016-12-01
This paper describes an efficient and easily implemented algorithmic approach to extracting an approximation to an image's dominant projected illumination direction, based on intermediary results from a segmentation-based crater detection algorithm (CDA), at a computational cost that is negligible in comparison to that of the prior stages of the CDA. Most contemporary CDAs built for spacecraft navigation use this illumination direction as a means of improving performance or even require it to function at all. Deducing the illumination vector from the image alone reduces the reliance on external information such as the accurate knowledge of the spacecraft inertial state, accurate time base and solar system ephemerides. Therefore, a method such as the one described in this paper is a prerequisite for true "Lost in Space" operation of a purely segmentation-based crater detecting and matching method for spacecraft navigation. The proposed method is verified using ray-traced lunar elevation model data, asteroid image data, and in a laboratory setting with a camera in the loop.
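As a rough illustration of the idea (explicitly not the paper's crater-segmentation-based estimator), a projected illumination azimuth can be approximated from magnitude-weighted image gradients, since lit-to-shadow transitions tend to align with the light direction:

```python
import numpy as np

def dominant_light_direction(img):
    """Crude shading-based proxy for the projected illumination azimuth.

    Assumption: over many shaded features, the magnitude-weighted mean
    intensity gradient carries a net component along the light direction.
    Returns an angle in radians in the image plane.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                 # weight flat areas weakly
    return np.arctan2(-(gy * mag).sum(), -(gx * mag).sum())
```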
A simple node and conductor data generator for SINDA
NASA Technical Reports Server (NTRS)
Gottula, Ronald R.
1992-01-01
This paper presents a simple, automated method to generate NODE and CONDUCTOR DATA for thermal math models. The method uses personal computer spreadsheets to create SINDA inputs. It was developed in order to make SINDA modeling less time-consuming and serves as an alternative to graphical methods. Anyone having some experience using a personal computer can easily implement this process. The user develops spreadsheets to automatically calculate capacitances and conductances based on material properties and dimensional data. The necessary node and conductor information is then taken from the spreadsheets and automatically arranged into the proper format, ready for insertion directly into the SINDA model. This technique provides a number of benefits to the SINDA user, such as a reduction in the number of hand calculations and the ability to very quickly generate a parametric set of NODE and CONDUCTOR DATA blocks. It also provides advantages over graphical thermal modeling systems by retaining the analyst's complete visibility into the thermal network, and by permitting user comments anywhere within the DATA blocks.
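The same spreadsheet logic is easy to sketch as a script: compute C = ρ·cp·V per node and G = kA/L per conductor, then emit formatted data cards. The header and card formats below are illustrative only, since SINDA input syntax varies by version:

```python
# Spreadsheet logic as a script; card formats are illustrative, not a
# particular SINDA dialect.
nodes = [  # (node id, initial temperature, density, specific heat, volume)
    (1, 70.0, 2700.0, 900.0, 1.0e-5),
    (2, 70.0, 2700.0, 900.0, 2.0e-5),
]
conductors = [  # (conductor id, node i, node j, conductivity, area, length)
    (10, 1, 2, 167.0, 1.0e-4, 0.05),
]

print("HEADER NODE DATA")
for nid, t0, rho, cp, vol in nodes:
    print(f"{nid:5d}, {t0:8.2f}, {rho * cp * vol:12.5e}")   # C = rho*cp*V

print("HEADER CONDUCTOR DATA")
for gid, ni, nj, k, area, length in conductors:
    print(f"{gid:5d}, {ni:4d}, {nj:4d}, {k * area / length:12.5e}")  # G = kA/L
```

Sweeping the dimensional inputs in a loop reproduces the parametric-generation benefit described above.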
Does genomic selection have a future in plant breeding?
Jonas, Elisabeth; de Koning, Dirk-Jan
2013-09-01
Plant breeding largely depends on phenotypic selection in plots and only for some, often disease-resistance-related traits, uses genetic markers. The more recently developed concept of genomic selection, using a black box approach with no need of prior knowledge about the effect or function of individual markers, has also been proposed as a great opportunity for plant breeding. Several empirical and theoretical studies have focused on the possibility to implement this as a novel molecular method across various species. Although we do not question the potential of genomic selection in general, in this Opinion, we emphasize that genomic selection approaches from dairy cattle breeding cannot be easily applied to complex plant breeding. Copyright © 2013 Elsevier Ltd. All rights reserved.
Towards a detailed anthropometric body characterization using the Microsoft Kinect.
Domingues, Ana; Barbosa, Filipa; Pereira, Eduardo M; Santos, Márcio Borgonovo; Seixas, Adérito; Vilas-Boas, João; Gabriel, Joaquim; Vardasca, Ricardo
2016-01-01
Anthropometry has been widely used in different fields, providing relevant information for medicine, ergonomics and biometric applications. However, the existent solutions present marked disadvantages, reducing the employment of this type of evaluation. Studies have been conducted in order to easily determine anthropometric measures considering data provided by low-cost sensors, such as the Microsoft Kinect. In this work, a methodology is proposed and implemented for estimating anthropometric measures considering the information acquired with this sensor. The measures obtained with this method were compared with the ones from a validation system, Qualisys. Comparing the relative errors determined with state-of-art references, for some of the estimated measures, lower errors were verified and a more complete characterization of the whole body structure was achieved.
NASA Astrophysics Data System (ADS)
Zhang, Xianxia; Wang, Jian; Qin, Tinggao
2003-09-01
Intelligent control algorithms are introduced into the temperature and humidity control system. A multi-mode PI-single-neuron control algorithm is proposed for single-loop control of temperature and humidity. In order to remove the coupling between temperature and humidity, a new decoupling method is presented, called fuzzy decoupling. The decoupling is achieved by using a fuzzy controller that dynamically modifies the static decoupling coefficient. Taking the PI-single-neuron algorithm as the single-loop control of temperature and humidity, the paper provides the simulated output response curves with no decoupling control, static decoupling control, and fuzzy decoupling control. These control algorithms are easily implemented in single-chip-based hardware systems.
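A toy sketch of the scheme, assuming a plain PI loop (the single-neuron adaptation is omitted) and a crude fuzzy rule for scaling the static decoupling coefficient; membership shapes and limits are illustrative:

```python
class PI:
    """Discrete PI controller (the single-neuron adaptation is omitted)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0

    def step(self, err):
        self.acc += err * self.dt
        return self.kp * err + self.ki * self.acc

def fuzzy_gain(temp_err, hum_err, k_static=0.3):
    """Scale the static decoupling coefficient with a two-rule fuzzy idea:
    weaken compensation when both loops are far from setpoint, strengthen
    it near steady state. Shapes and limits are illustrative only."""
    closeness = max(0.0, 1.0 - (abs(temp_err) / 5.0 + abs(hum_err) / 10.0))
    return k_static * (0.5 + closeness)

# One control step: the humidity actuation is corrected for the temperature
# loop's output through the fuzzily adjusted decoupling coefficient.
t_ctrl, h_ctrl = PI(2.0, 0.1, 1.0), PI(1.5, 0.05, 1.0)
e_t, e_h = 1.2, -3.0                     # setpoint errors (deg C, %RH)
u_t = t_ctrl.step(e_t)
u_h = h_ctrl.step(e_h) - fuzzy_gain(e_t, e_h) * u_t
print(u_t, u_h)
```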
Global-Context Based Salient Region Detection in Nature Images
NASA Astrophysics Data System (ADS)
Bao, Hong; Xu, De; Tang, Yingjun
Visual saliency detection provides an alternative methodology for image description in many applications such as adaptive content delivery and image retrieval. One of the main aims of visual attention in computer vision is to detect and segment the salient regions in an image. In this paper, we employ matrix decomposition to detect salient objects in natural images. To efficiently eliminate high-contrast noise regions in the background, we integrate global context information into saliency detection. Therefore, the most salient region can be easily selected as the one which is globally most isolated. The proposed approach intrinsically provides an alternative methodology to model attention with low implementation complexity. Experiments show that our approach achieves much better performance than the existing state-of-the-art methods.
Integer-ambiguity resolution in astronomy and geodesy
NASA Astrophysics Data System (ADS)
Lannes, A.; Prieur, J.-L.
2014-02-01
Recent theoretical developments in astronomical aperture synthesis have revealed the existence of integer-ambiguity problems. Those problems, which appear in the self-calibration procedures of radio imaging, have been shown to be similar to the nearest-lattice point (NLP) problems encountered in high-precision geodetic positioning and in global navigation satellite systems. In this paper we analyse the theoretical aspects of the matter and propose new methods for solving those NLP problems. The related optimization aspects concern both the preconditioning stage, and the discrete-search stage in which the integer ambiguities are finally fixed. Our algorithms, which are described in an explicit manner, can easily be implemented. They lead to substantial gains in the processing time of both stages. Their efficiency was shown via intensive numerical tests.
Girlanda, Francesca; Fiedler, Ines; Becker, Thomas; Barbui, Corrado; Koesters, Markus
2017-01-01
Clinical practice guidelines are not easily implemented, leading to a gap between research synthesis and their use in routine care. To summarise the evidence relating to the impact of guideline implementation on provider performance and patient outcomes in mental healthcare settings, and to explore the performance of different strategies for guideline implementation. A systematic review of randomised controlled trials, controlled clinical trials and before-and-after studies comparing guideline implementation strategies v. usual care, and different guideline implementation strategies, in patients with severe mental illness. In total, 19 studies met our inclusion criteria. The studies did not show a consistent positive effect of guideline implementation on provider performance, but a more consistent small to modest positive effect on patient outcomes. Guideline implementation does not seem to have an impact on provider performance, nonetheless it may influence patient outcomes positively. © The Royal College of Psychiatrists 2017.
An add-in implementation of the RESAMPLING syntax under Microsoft EXCEL.
Meineke, I
2000-10-01
The RESAMPLING syntax defines a set of powerful commands, which allow the programming of probabilistic statistical models with few, easily memorized statements. This paper presents an implementation of the RESAMPLING syntax using Microsoft EXCEL with Microsoft WINDOWS(R) as a platform. Two examples are given to demonstrate typical applications of RESAMPLING in biomedicine. Details of the implementation with special emphasis on the programming environment are discussed at length. The add-in is available electronically to interested readers upon request. The use of the add-in facilitates numerical statistical analyses of data from within EXCEL in a comfortable way.
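The flavor of the RESAMPLING syntax is easy to convey in plain Python (command names paraphrased in comments, not the actual syntax): a REPEAT/SAMPLE/SCORE loop estimating the chance of at least 8 heads in 10 fair coin flips:

```python
import random

trials, hits = 10_000, 0
for _ in range(trials):                                  # REPEAT 10000
    flips = [random.choice("HT") for _ in range(10)]     # SAMPLE 10 from {H,T}
    if flips.count("H") >= 8:                            # COUNT ... SCORE
        hits += 1
print(hits / trials)   # analytically 56/1024 ~ 0.0547
```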
A distributed Clips implementation: dClips
NASA Technical Reports Server (NTRS)
Li, Y. Philip
1993-01-01
A distributed version of the Clips language, dClips, was implemented on top of two existing generic distributed messaging systems to show that: (1) it is easy to create a coarse-grained parallel programming environment out of an existing language if a high level messaging system is used; and (2) the computing model of a parallel programming environment can be changed easily if we change the underlying messaging system. dClips processes were first connected with a simple master-slave model. A client-server model with intercommunicating agents was later implemented. The concept of service broker is being investigated.
Revitalizing Space Operations through Total Quality Management
NASA Technical Reports Server (NTRS)
Baylis, William T.
1995-01-01
The purpose of this paper is to show the reader what total quality management (TQM) is and how to apply TQM in the space systems and management arena. TQM is easily understood, can be implemented in any type of business organization, and works.
Chemical Instrumentation for the Visually Handicapped.
ERIC Educational Resources Information Center
Anderson, James L.
1982-01-01
Describes a simple, relatively inexpensive, and easily implemented approach for introducing visually handicapped students to chemical instrumentation via experiments on operational amplifiers as examples of some of the electronic building blocks of chemical instrumentation. The approach is applicable to other chemical instruments having electrical…
Ada as an implementation language for knowledge based systems
NASA Technical Reports Server (NTRS)
Rochowiak, Daniel
1990-01-01
Debates about the selection of programming languages often produce cultural collisions that are not easily resolved. This is especially true in the case of Ada and knowledge based programming. The construction of programming tools provides a desirable alternative for resolving the conflict.
Collaboration Services: Enabling Chat in Disadvantaged Grids
2014-06-01
grids in the tactical domain" [2]. The main focus of this group is to identify what we call tactical SOA foundation services. By this we mean which...Here, only IPv4 is supported, as differences relating to IPv4 and IPv6 addressing meant that this functionality was not easily extended to use IPv6 ...multicast groups. Our IPv4 implementation is fully compliant with the specification, whereas the IPv6 implementation uses our own interpretation of
Mullins, Darragh; Coburn, Derek; Hannon, Louise; Jones, Edward; Clifford, Eoghan; Glavin, Martin
2018-03-01
Wastewater treatment facilities are continually challenged to meet both environmental regulations and reduce running costs (particularly energy and staffing costs). Improving the efficiency of operational monitoring at wastewater treatment plants (WWTPs) requires the development and implementation of appropriate performance metrics; particularly those that are easily measured, strongly correlate to WWTP performance, and can be easily automated, with a minimal amount of maintenance or intervention by human operators. Turbidity is the measure of the relative clarity of a fluid. It is an expression of the optical property that causes light to be scattered and absorbed by fine particles in suspension (rather than transmitted with no change in direction or flux level through a fluid sample). In wastewater treatment, turbidity is often used as an indicator of effluent quality, rather than an absolute performance metric, although correlations have been found between turbidity and suspended solids. Existing laboratory-based methods to measure turbidity for WWTPs, while relatively simple, require human intervention and are labour intensive. Automated systems for on-site measuring of wastewater effluent turbidity are not commonly used, while those present are largely based on submerged sensors that require regular cleaning and calibration due to fouling from particulate matter in fluids. This paper presents a novel, automated system for estimating fluid turbidity. Effluent samples are imaged such that the light absorption characteristic is highlighted as a function of fluid depth, and computer vision processing techniques are used to quantify this characteristic. Results from the proposed system were compared with results from established laboratory-based methods and were found to be comparable. Tests were conducted using both synthetic dairy wastewater and effluent from multiple WWTPs, both municipal and industrial. This system has an advantage over current methods as it provides a multipoint analysis that can be easily repeated for large volumes of wastewater effluent. Although the system was specifically designed and tested for wastewater treatment applications, it could have applications such as in drinking water treatment, and in other areas where fluid turbidity is an important measurement.
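A sketch of the measurement idea, assuming the camera images a uniformly lit sample column so that row-averaged intensity decays with depth; the fitted attenuation slope serves as a turbidity proxy pending calibration against reference standards:

```python
import numpy as np

def attenuation_coefficient(img, depth_per_row_mm):
    """Estimate an attenuation coefficient from an image of a lit sample column.

    Multipoint idea: average each image row to get intensity as a function of
    fluid depth, then fit log(I) = log(I0) - mu * depth. The fitted mu is only
    a turbidity proxy; absolute readings require calibration.
    """
    profile = img.astype(float).mean(axis=1)   # mean intensity per row
    depth = np.arange(profile.size) * depth_per_row_mm
    valid = profile > 1.0                      # avoid log of dark noise
    slope, _ = np.polyfit(depth[valid], np.log(profile[valid]), 1)
    return -slope                              # mu in 1/mm
```

Because the whole depth profile is captured in one frame, the analysis can be repeated cheaply over many samples, which is the multipoint advantage claimed above.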
Verhoeven, Joost Theo Petra; Canuti, Marta; Munro, Hannah J; Dufour, Suzanne C; Lang, Andrew S
2018-04-19
High-throughput sequencing (HTS) technologies are becoming increasingly important within microbiology research, but aspects of library preparation, such as high cost per sample or strict input requirements, make HTS difficult to implement in some niche applications and for research groups on a budget. To address these needs, we developed ViDiT, a customizable, PCR-based, extremely low-cost (<5 US dollars per sample) and versatile library preparation method, and CACTUS, an analysis pipeline designed to rely on cloud computing power to generate high-quality data from ViDiT-based experiments without the need for expensive servers. We demonstrate here the versatility and utility of these methods within three fields of microbiology: virus discovery, amplicon-based viral genome sequencing and microbiome profiling. ViDiT-CACTUS allowed the identification of viral fragments from 25 different viral families from 36 oropharyngeal-cloacal swabs collected from wild birds, the sequencing of three almost complete genomes of avian influenza A viruses (>90% coverage), and the characterization and functional profiling of the complete microbial diversity (bacteria, archaea, viruses) within a deep-sea carnivorous sponge. ViDiT-CACTUS demonstrated its validity in a wide range of microbiology applications, and its simplicity and modularity make it easily implementable in any molecular biology laboratory, towards various research goals.
Systems analysis - a new paradigm and decision support tools for the water framework directive
NASA Astrophysics Data System (ADS)
Bruen, M.
2007-06-01
In the early days of Systems Analysis the focus was on providing tools for optimisation, modelling and simulation for use by experts. Now there is a recognition of the need to develop and disseminate tools to assist in making decisions, negotiating compromises and communicating preferences that can easily be used by stakeholders without the need for specialist training. The Water Framework Directive (WFD) requires public participation and thus provides a strong incentive for progress in this direction. This paper places the new paradigm in the context of the classical one and discusses some of the new approaches which can be used in the implementation of the WFD. These include multi-criteria decision support methods suitable for environmental problems, adaptive management, cognitive mapping, social learning and cooperative design and group decision-making. Concordance methods (such as ELECTRE) and the Analytical Hierarchy Process (AHP) are identified as multi-criteria methods that can be readily integrated into Decision Support Systems (DSS) that deal with complex environmental issues with very many criteria, some of which are qualitative. The expanding use of the new paradigm provides an opportunity to observe and learn from the interaction of stakeholders with the new technology and to assess its effectiveness. This is best done by trained sociologists fully integrated into the processes. The WINCOMS research project is an example applied to the implementation of the WFD in Ireland.
Henning, Jill D; DeGroote, Lucas; Dahlin, Christine R
2015-09-15
In 1999, West Nile virus (WNV) first appeared in the United States and has subsequently infected more than a million people and untold numbers of wildlife. Though primarily an avian virus, WNV can also infect humans and horses. The current status of WNV and its effects on wildlife in Pennsylvania (PA) is sparsely monitored through sporadic testing of dead birds. In order to acquire a more comprehensive understanding of the status of WNV in wild birds, a study was designed and implemented to sample populations of migratory and local birds at Powdermill Nature Reserve near Rector, PA. Resident and migratory bird species totaling 276 individuals were sampled cloacally and orally to compare the effectiveness of sampling methods. The presence of WNV was tested for using RT-PCR. Two positive samples were found, one from a migrating Tennessee warbler and another from an American robin. The low infection rates indicate that WNV may not be a critical conservation concern in the Westmoreland County region of PA. There was also agreement between oral and cloacal swabs, which provides support for both methods. This study describes a surveillance method that is easily incorporated into any banding operation and which determines the risks of WNV to various bird populations. Copyright © 2015 Elsevier B.V. All rights reserved.
Increasing Resident Wellness Through a Novel Retreat Curriculum
Cornelius, Brian G; Edens, Mary Ann
2017-01-01
Background: Because of their arduous schedules, residents are susceptible to burnout, fatigue, and depression. In 2015, the Accreditation Council for Graduate Medical Education (ACGME) launched a campaign to foster physician wellness, in response to the suicides of three residents during the previous year. The campaign calls for strategies to develop resiliency, identify problems, and promote well-being. One of the suggested methods to promote well-being was a residency retreat. Objective: To implement a novel retreat curriculum that emphasizes team building between residents and faculty, with which residents expressed high satisfaction. Methods: We created an "Amazing Race" style retreat involving five activity stations set up in a neighborhood park, in which 25 of our 34 residents participated. These stations implemented team building, faculty-resident bonding, and resident-resident bonding. An anonymous survey was administered to the 25 participating emergency medicine (EM) residents after the retreat, of whom 21 returned the survey. The survey consisted of questions to assess the residents' perception of the team-building activities, their satisfaction with each of the five activity stations, and overall retreat satisfaction. Results: Of the 25 residents who participated in the retreat, 21 (84%) returned the post-retreat survey (one participant returned a survey leaving the ranking questions incomplete). This low-cost event received high satisfaction ratings in regard to team building, resident bonding, and faculty-resident bonding. Conclusions: This novel retreat proved to be a low-cost and easily implemented activity with which the residents expressed high levels of satisfaction. PMID:28966896
Development and validation of an open source quantification tool for DSC-MRI studies.
Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J
2015-03-01
This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms that allow external developers to implement their own quantification methods easily and without the need to pay for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that the addition of new methods can be done without breaking any of the existing functionalities. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package. The resulting perfusion parameters were then compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold standard was obtained (R² > 0.8, and values are within 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated using a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license. Copyright © 2015 Elsevier Ltd. All rights reserved.
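The gamma-fitting step mentioned above is commonly a gamma-variate bolus model; a hedged Python sketch on synthetic data (the plugin itself is Java/ImageJ, and its internal fitting routine may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, k, t0, alpha, beta):
    """Gamma-variate bolus model often used in DSC-MRI quantification."""
    dt = np.clip(t - t0, 0.0, None)
    return k * dt**alpha * np.exp(-dt / beta)

# Fit a (synthetic) concentration-time curve derived from the signal drop;
# perfusion parameters are then computed from the fitted first pass.
t = np.linspace(0, 60, 120)
rng = np.random.default_rng(1)
conc = gamma_variate(t, 1.0, 8.0, 2.5, 3.0) + 0.02 * rng.normal(size=t.size)
popt, _ = curve_fit(gamma_variate, t, conc, p0=(1.0, 5.0, 2.0, 2.0))
print(popt)
```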
Extending key sharing: how to generate a key tightly coupled to a network security policy
NASA Astrophysics Data System (ADS)
Kazantzidis, Matheos
2006-04-01
Current state-of-the-art security policy technologies, besides their small-scale limitations and the largely manual nature of the accompanying management methods, are lacking (a) in real-timeliness of policy implementation, and (b) in robustness and flexibility, owing to centralized policy decision making; even if, for example, a policy description or access control database is distributed, the actual decision is often a centralized action and forms a single point of failure for the system. In this paper we present a new fundamental concept that allows a security policy to be implemented by a systematic and efficient key distribution procedure. Specifically, we extend Shamir's polynomial key splitting. According to this, a global key is split into n parts, any k of which can reconstruct the original key. In this paper we present a method in which, instead of having "any k parts" able to reconstruct the original key, the latter can only be reconstructed if keys are combined as the access control policy describes. This leads to an easily deployable key generation procedure that results in a single key per entity that "knows" its role in the specific access control policy from which it was derived. The system is considered efficient as it may be used to avoid expensive PKI operations or pairwise key distributions, and it provides superior security due to its distributed nature, the fact that the key is tightly coupled to the policy, and the fact that policy changes may be implemented more easily and quickly.
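For context, the underlying Shamir scheme that the paper extends can be sketched in a few lines; the policy-coupled share construction that is the paper's contribution is not reproduced here:

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for demo secrets

def split(secret, n, k):
    """Standard Shamir split: any k of n shares reconstruct the secret.
    (The paper constrains *which* combinations are valid; this sketch
    shows only the underlying polynomial scheme.)"""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
print(reconstruct(shares[:3]))  # 123456789
```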
Spatial event cluster detection using an approximate normal distribution.
Torabi, Mahmoud; Rosychuk, Rhonda J
2008-12-12
In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters, and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically, cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events is large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies twelve out of thirteen of the same clusters as the compound Poisson approach, noting that the compound Poisson method detects twelve significant clusters in total. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation, and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
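A minimal sketch of the normal approach, assuming the mean and variance of an area's event total have already been derived from the compound Poisson model (cases per person and events per case):

```python
from math import sqrt
from scipy.stats import norm

def cluster_p_value(observed, mu, sigma2):
    """One-sided p-value for an area's event total under the normal
    approximation: the Gaussian tail replaces the compound Poisson
    recursion entirely."""
    z = (observed - mu) / sqrt(sigma2)
    return norm.sf(z)

# Example: 260 events observed where 200 are expected with variance 900
print(cluster_p_value(260, 200.0, 900.0))  # z = 2.0 -> p ~ 0.023
```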
Zolnikov, Tara R
2012-03-01
Current solutions continue to be inadequate in addressing the longstanding, worldwide problem of mercury emissions from small artisanal gold mining. Mercury, an inexpensive and easily accessible heavy metal, is used in the process of extracting gold from ore. Mercury emissions disperse, affecting human populations by causing adverse health effects and environmental and social ramifications. Many developing nations have sizable gold ore deposits, making small artisanal gold mining a major source of employment in the world. Poverty drives vulnerable, rural populations into gold mining because of social and economic instabilities. Educational programs responding to this environmental hazard have been implemented in the past, but have had low positive results due to lack of governmental support and little economic incentive. Educational and enforced intervention programs must be developed in conjunction with governmental agencies in order to successfully eliminate this ongoing problem. Industry leaders offered hopeful suggestions, but revealed limitations when trying to develop encompassing solutions to halt mercury emissions. This research highlights potential options that have been attempted in the past and suggests alternative solutions to improve upon these methods. Some methods include buyer impact recognition, risk assessment proposals exposing a cost-benefit analysis and toxicokinetic modeling, public health awareness campaigns, and the education of miners, healthcare workers, and locals within hazardous areas of mercury exposure. These methods, paired with the implementation of alternative mining techniques, propose a substantial reduction of mercury emissions. Copyright © 2011 Elsevier B.V. All rights reserved.
LibKiSAO: a Java library for Querying KiSAO.
Zhukova, Anna; Adams, Richard; Laibe, Camille; Le Novère, Nicolas
2012-09-24
The Kinetic Simulation Algorithm Ontology (KiSAO) supplies information about existing algorithms available for the simulation of Systems Biology models, their characteristics, parameters and inter-relationships. KiSAO enables the unambiguous identification of algorithms from simulation descriptions. Information about analogous methods having similar characteristics and about algorithm parameters incorporated into KiSAO is desirable for simulation tools. To retrieve this information programmatically, an application programming interface (API) for KiSAO is needed. We developed libKiSAO, a Java library that enables querying of KiSAO. It implements methods to retrieve information about simulation algorithms stored in KiSAO, their characteristics and parameters, and methods to query the algorithm hierarchy and search for similar algorithms providing comparable results for the same simulation set-up. Using libKiSAO, simulation tools can make logical inferences based on this knowledge and choose the most appropriate algorithm to perform a simulation. LibKiSAO also enables simulation tools to handle a wider range of simulation descriptions by determining which of the available methods are similar and can be used instead of the one indicated in the simulation description if that one is not implemented. LibKiSAO enables Java applications to easily access information about simulation algorithms, their characteristics and parameters stored in the OWL-encoded Kinetic Simulation Algorithm Ontology. LibKiSAO can be used by simulation description editors and simulation tools to improve reproducibility of computational simulation tasks and facilitate model re-use.
Grelewska-Nowotko, Katarzyna; Żurawska-Zajfert, Magdalena; Żmijewska, Ewelina; Sowa, Sławomir
2018-05-01
In recent years, digital polymerase chain reaction (dPCR), a new molecular biology technique, has been gaining in popularity. Among many other applications, this technique can also be used for the detection and quantification of genetically modified organisms (GMOs) in food and feed. It might replace the currently widely used real-time PCR method (qPCR) by overcoming problems related to PCR inhibition and the requirement for certified reference materials to be used as calibrants. In theory, validated qPCR methods can be easily transferred to the dPCR platform. However, optimization of the PCR conditions might be necessary. In this study, we report the transfer of two validated qPCR methods for quantification of maize DAS1507 and NK603 events to the droplet dPCR (ddPCR) platform. After some optimization, both methods were verified according to the guidance of the European Network of GMO Laboratories (ENGL) on analytical method verification (ENGL working group on "Method Verification." (2011) Verification of Analytical Methods for GMO Testing When Implementing Interlaboratory Validated Methods). The digital PCR methods performed as well as or better than the qPCR methods. The optimized ddPCR methods confirm their suitability for GMO determination in food and feed.
SNPversity: A web-based tool for visualizing diversity
USDA-ARS?s Scientific Manuscript database
Background: Many stand-alone desktop software suites exist to visualize single nucleotide polymorphisms (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualizat...
Fukuda, Ikuo
2013-11-07
The zero-multipole summation method has been developed to efficiently evaluate the electrostatic Coulombic interactions of a point charge system. This summation prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large amounts of energetic noise and significant artifacts. The resulting energy function is represented by a constant term plus a simple pairwise summation, using a damped or undamped Coulombic pair potential function along with a polynomial of the distance between each particle pair. Thus, the implementation is straightforward and enables facile applications to high-performance computations. Any higher-order multipole moment can be taken into account in the neutrality principle, and it only affects the degree and coefficients of the polynomial and the constant term. The lowest and second moments correspond respectively to the Wolf zero-charge scheme and the zero-dipole summation scheme, which was previously proposed. Relationships with other non-Ewald methods are discussed, to validate the current method in their contexts. Good numerical efficiencies were easily obtained in the evaluation of Madelung constants of sodium chloride and cesium chloride crystals.
Origami by frontal photopolymerization.
Zhao, Zeang; Wu, Jiangtao; Mu, Xiaoming; Chen, Haosen; Qi, H Jerry; Fang, Daining
2017-04-01
Origami structures are of great interest in microelectronics, soft actuators, mechanical metamaterials, and biomedical devices. Current methods of fabricating origami structures still have several limitations, such as complex material systems or tedious processing steps. We present a simple approach for creating three-dimensional (3D) origami structures by the frontal photopolymerization method, which can be easily implemented by using a commercial projector. The concept of our method is based on the volume shrinkage during photopolymerization. By adding photoabsorbers into the polymer resin, an attenuated light field is created and leads to a nonuniform curing along the thickness direction. The layer directly exposed to light cures faster than the next layer; this nonuniform curing degree leads to nonuniform curing-induced volume shrinkage. This further introduces a nonuniform stress field, which drives the film to bend toward the newly formed side. The degree of bending can be controlled by adjusting the gray scale and the irradiation time, an easy approach for creating origami structures. The behavior is examined both experimentally and theoretically. Two methods are also proposed to create different types of 3D origami structures.
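The driving mechanism can be sketched numerically: Beer-Lambert attenuation sets a dose, hence a cure and shrinkage gradient, through the thickness. All numbers below are illustrative, not fitted to the paper's resin:

```python
import numpy as np

I0, mu, h = 10.0, 2.0, 1.0       # surface irradiance, absorption (1/mm), thickness (mm)
z = np.linspace(0.0, h, 50)      # depth from the exposed face
dose = I0 * np.exp(-mu * z)      # attenuated exposure (Beer-Lambert)
cure = np.clip(dose / dose.max(), 0.2, 1.0)  # crude dose-to-conversion map
shrink = 0.05 * cure             # shrinkage strain follows the cure degree

# The linear component of strain vs depth is the gradient that drives bending;
# gray scale and irradiation time change mu*t_exposure and hence this slope.
gradient = np.polyfit(z, shrink, 1)[0]
print(f"strain gradient {gradient:.4f} per mm across the thickness")
```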
A Distributed Signature Detection Method for Detecting Intrusions in Sensor Systems
Kim, Ilkyu; Oh, Doohwan; Yoon, Myung Kuk; Yi, Kyueun; Ro, Won Woo
2013-01-01
Sensor nodes in wireless sensor networks are easily exposed to open and unprotected regions. A security solution is strongly recommended to prevent networks against malicious attacks. Although many intrusion detection systems have been developed, most systems are difficult to implement for the sensor nodes owing to limited computation resources. To address this problem, we develop a novel distributed network intrusion detection system based on the Wu–Manber algorithm. In the proposed system, the algorithm is divided into two steps; the first step is dedicated to a sensor node, and the second step is assigned to a base station. In addition, the first step is modified to achieve efficient performance under limited computation resources. We conduct evaluations with random string sets and actual intrusion signatures to show the performance improvement of the proposed method. The proposed method achieves a speedup factor of 25.96 and reduces 43.94% of packet transmissions to the base station compared with the previously proposed method. The system achieves efficient utilization of the sensor nodes and provides a structural basis of cooperative systems among the sensors. PMID:23529146
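A toy sketch of the two-step division, with the node's Wu-Manber-style SHIFT/HASH filtering reduced to a simple 2-byte prefix test and the base station doing exact confirmation; signatures here are placeholders:

```python
# Step division in miniature: the sensor node runs only a cheap prefix filter
# and forwards the few suspicious packets; the base station confirms exactly.
SIGNATURES = [b"/bin/sh", b"0wned", b"\x90\x90\x90\x90"]
PREFIXES = {sig[:2] for sig in SIGNATURES}

def sensor_node_filter(packet: bytes) -> bool:
    """Step 1 (on the node): does any signature prefix occur at all?"""
    return any(packet[i:i + 2] in PREFIXES for i in range(len(packet) - 1))

def base_station_match(packet: bytes):
    """Step 2 (at the base station): exact multi-pattern confirmation."""
    return [sig for sig in SIGNATURES if sig in packet]

pkt = b"GET /bin/sh HTTP/1.0"
if sensor_node_filter(pkt):          # only then is the packet transmitted
    print(base_station_match(pkt))   # [b'/bin/sh']
```

Packets that fail the cheap first step are never transmitted, which is the source of the reported reduction in traffic to the base station.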
NASA Astrophysics Data System (ADS)
Krapez, J.-C.
2016-09-01
The Darboux transformation is a differential transformation which, like other related methods (supersymmetry quantum mechanics-SUSYQM, factorization method), allows generating sequences of solvable potentials for the stationary 1D Schrödinger equation. It was recently shown that the heat equation in graded heterogeneous media, after a Liouville transformation, reduces to a pair of Schrödinger equations sharing the same potential function, one for the transformed temperature and one for the square root of effusivity. Repeated joint PROperty and Field Darboux Transformations (PROFIDT method) then yield two sequences of solutions: one of new solvable effusivity profiles and one of the corresponding temperature fields. In this paper we present and discuss the outcome in the case of a graded half-space domain. The interest in this methodology is that it provides closed-form solutions based on elementary functions. They are thus easily amenable to an implementation in an inversion process aimed, for example, at retrieving a subsurface effusivity profile from a modulated or transient surface temperature measurement (photothermal characterization).
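A minimal sketch of the kind of reduction involved, under simplifying assumptions (1D, source-free, separable solutions); the paper's repeated Darboux construction builds on this form:

```latex
% Separating T(x,t) = y(x) e^{-st} in  c(x) T_t = (\lambda(x) T_x)_x  gives a
% Sturm-Liouville problem; the Liouville change of variables
%   z = \int_0^x \sqrt{c/\lambda}\, dx', \qquad u = (\lambda c)^{1/4} y,
% turns it into Schrodinger form, with the same potential governing u and the
% square root of the effusivity b = \sqrt{\lambda c}:
u''(z) + \bigl(s - V(z)\bigr)\, u(z) = 0,
\qquad
V(z) = \frac{\bigl(\sqrt{b}\bigr)''(z)}{\sqrt{b}(z)}.
```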
Variable selection in subdistribution hazard frailty models with competing risks data
Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo
2014-01-01
The proportional subdistribution hazards model (i.e. the Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872
WWWinda Orchestrator: a mechanism for coordinating distributed flocks of Java Applets
NASA Astrophysics Data System (ADS)
Gutfreund, Yechezkal-Shimon; Nicol, John R.
1997-01-01
The WWWinda Orchestrator is a simple but powerful tool for coordinating distributed Java applets. Loosely derived from the Linda programming language developed by David Gelernter and Nicholas Carriero of Yale, WWWinda implements a distributed shared object space called TupleSpace, where applets can post, read, or permanently store arbitrary Java objects. In this manner, applets can easily share information without being aware of the underlying communication mechanisms. WWWinda is very useful for orchestrating flocks of distributed Java applets. Coordination events can be posted to the WWWinda TupleSpace and used to orchestrate the actions of remote applets. Applets can easily share information via the TupleSpace. The technology combines several functions in one simple metaphor: distributed web objects, remote messaging between applets, distributed synchronization mechanisms, an object-oriented database, and a distributed event signaling mechanism. WWWinda can be used as a platform for implementing shared VRML environments, shared groupware environments, controlling remote devices such as cameras, distributed karaoke, distributed gaming, and shared audio and video experiences.
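A toy in-process TupleSpace conveying the post/read/take metaphor; WWWinda's actual Java API and network transport are not reproduced here:

```python
import threading

class TupleSpace:
    """Toy in-process tuple space: post, non-destructive read, destructive take.
    Patterns use None as a wildcard field."""
    def __init__(self):
        self._tuples, self._cond = [], threading.Condition()

    def post(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(p is None or p == v
                                              for p, v in zip(pattern, t)):
                return t
        return None

    def read(self, pattern):          # non-destructive, blocks until a match
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            return t

    def take(self, pattern):          # destructive variant
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

space = TupleSpace()
space.post(("camera", "pan", 30))
print(space.take(("camera", None, None)))   # ('camera', 'pan', 30)
```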
Müller, Klaus; Smielik, Ievgen; Hütwohl, Jan-Marco; Gierszewski, Stefanie; Witte, Klaudia; Kuhnert, Klaus-Dieter
2017-02-01
Animal behavior researchers often face problems regarding standardization and reproducibility of their experiments. This has led to the partial substitution of live animals with artificial virtual stimuli. In addition to standardization and reproducibility, virtual stimuli open new options for researchers since they are easily changeable in morphology and appearance, and their behavior can be defined. In this article, a novel toolchain to conduct behavior experiments with fish is presented by a case study in sailfin mollies Poecilia latipinna. As the toolchain holds many different and novel features, it offers new possibilities for studies in behavioral animal research and promotes the standardization of experiments. The presented method includes options to design, animate, and present virtual stimuli to live fish. The designing tool offers an easy and user-friendly way to define size, coloration, and morphology of stimuli and moreover it is able to configure virtual stimuli randomly without any user influence. Furthermore, the toolchain brings a novel method to animate stimuli in a semiautomatic way with the help of a game controller. These created swimming paths can be applied to different stimuli in real time. A presentation tool combines models and swimming paths regarding formerly defined playlists, and presents the stimuli onto 2 screens. Experiments with live sailfin mollies validated the usage of the created virtual 3D fish models in mate-choice experiments.
NASA Technical Reports Server (NTRS)
Liu, Kuojuey Ray
1990-01-01
Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.
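The core QRD-RLS operation mapped onto such arrays is the Givens-rotation annihilation of each new data row into the triangular factor; a plain NumPy sketch (no forgetting factor, and the fault-tolerance checks are omitted):

```python
import numpy as np

def qrd_rls_update(R, d, x, y):
    """Rotate a new data row (x, y) into the triangular factor R and the
    rotated right-hand side d using Givens rotations.

    R: (n, n) upper triangular, d: (n,), x: (n,) regressor row, y: observation.
    """
    R, d, x = R.astype(float).copy(), d.astype(float).copy(), x.astype(float).copy()
    for i in range(len(x)):
        r = np.hypot(R[i, i], x[i])
        if r == 0.0:
            continue
        c, s = R[i, i] / r, x[i] / r
        Ri, xi = R[i, i:].copy(), x[i:].copy()
        R[i, i:], x[i:] = c * Ri + s * xi, -s * Ri + c * xi
        d[i], y = c * d[i] + s * y, -s * d[i] + c * y
    return R, d   # the LS weights solve R w = d by back-substitution

# Feed rows of a random system and recover the weights
rng = np.random.default_rng(0)
A, w_true = rng.normal(size=(50, 3)), np.array([1.0, -2.0, 0.5])
R, d = np.zeros((3, 3)), np.zeros(3)
for row, obs in zip(A, A @ w_true):
    R, d = qrd_rls_update(R, d, row, obs)
print(np.linalg.solve(R, d))   # ~ [1.0, -2.0, 0.5]
```

On the systolic array, each rotation is computed by a boundary cell and propagated across a row of internal cells, which is what makes the update pipelineable.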
Implementation Of Quality Management System For Irradiation Processing Services
NASA Astrophysics Data System (ADS)
Lungu, Ion-Bogdan; Manea, Maria-Mihaela
2015-07-01
In today's market, due to increasing competitiveness, quality management has established itself as an indispensable tool and a reference point for every business. It is ultimately focused on customer satisfaction, which is a critical factor for every business. Implementing and maintaining a QMS is a rather difficult, time-consuming, and expensive process which must be done with respect to many factors. The aim of this paper is to present a case study of implementing the QMS ISO 9001 in a gamma irradiation treatment service provider. The research goals are the identification of key benefits, reasons, advantages, disadvantages, and drawbacks of a successful QMS implementation and use. Finally, the expected results focus on creating a general framework for implementing an efficient QMS plan that can be easily adapted to other kinds of services and markets.
Efficient IDUA Gene Mutation Detection with Combined Use of dHPLC and Dried Blood Samples
Duarte, Ana Joana; Vieira, Luis
2013-01-01
Objectives. To develop a simple mutation-directed method that lowers the cost of mutation testing using an easily obtainable biological material. The feasibility of the method was assessed using a GC-rich amplicon. Design and Methods. A denaturing high-performance liquid chromatography (dHPLC) method was improved and implemented as a technique for the detection of variants in exon 9 of the IDUA gene. The optimized method was tested in 500 genomic DNA samples obtained from dried blood spots (DBS). Results. With this dHPLC approach it was possible to detect different variants, including the common p.Trp402Ter mutation in the IDUA gene. The high GC content did not interfere with the resolution and reliability of the technique, and discrimination of G-C transversions was also achieved. Conclusion. This PCR-based dHPLC method proved to be a rapid, sensitive, and excellent option for screening numerous samples obtained from DBS. Furthermore, it resulted in the consistent detection of clearly distinguishable profiles of the common p.Trp402Ter IDUA mutation with an advantageous balance of cost and technical requirements. PMID:27335677
Quantile Regression for Recurrent Gap Time Data
Luo, Xianghua; Huang, Chiung-Yu; Wang, Lan
2014-01-01
Evaluating covariate effects on gap times between successive recurrent events is of interest in many medical and public health studies. While most existing methods for recurrent gap time analysis focus on modeling the hazard function of gap times, a direct interpretation of the covariate effects on the gap times is not available through these methods. In this article, we consider quantile regression that can provide direct assessment of covariate effects on the quantiles of the gap time distribution. Following the spirit of the weighted risk-set method by Luo and Huang (2011, Statistics in Medicine 30, 301–311), we extend the martingale-based estimating equation method considered by Peng and Huang (2008, Journal of the American Statistical Association 103, 637–649) for univariate survival data to analyze recurrent gap time data. The proposed estimation procedure can be easily implemented in existing software for univariate censored quantile regression. Uniform consistency and weak convergence of the proposed estimators are established. Monte Carlo studies demonstrate the effectiveness of the proposed method. An application to data from the Danish Psychiatric Central Register is presented to illustrate the methods developed in this article. PMID:23489055
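For readers curious how the quantile fitting step might look in code, here is a minimal Python sketch that minimizes a weighted quantile (pinball) loss. The risk-set weights `w` are assumed to be supplied by the weighted risk-set construction described in the abstract, which the sketch does not reproduce, and censoring is ignored in the toy data.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile check (pinball) loss rho_tau(u)."""
    return u * (tau - (u < 0))

def weighted_quantile_fit(X, y, w, tau=0.5):
    """Minimize sum_i w_i * rho_tau(y_i - X_i beta).

    X: (n, p) covariates including an intercept column,
    y: (n,) log gap times, w: (n,) weights (assumed given).
    """
    def objective(beta):
        return np.sum(w * check_loss(y - X @ beta, tau))
    beta0 = np.zeros(X.shape[1])
    return minimize(objective, beta0, method="Nelder-Mead").x

# Toy usage with simulated, uncensored data
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
w = np.ones(n)   # placeholder; the paper derives these from risk sets
print(weighted_quantile_fit(X, y, w, tau=0.5))
```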
Improving the local wavenumber method by automatic DEXP transformation
NASA Astrophysics Data System (ADS)
Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni
2014-12-01
In this paper we present a new method for source parameter estimation, based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power-law of the altitude. Thus the method results particularly suited to deal with local wavenumber of high-order, as it is able to overcome its known instability caused by the use of high-order derivatives. The DEXP transformation enjoys a relevant feature when applied to the local wavenumber function: the scaling-law is in fact independent of the structural index. So, differently from the DEXP transformation applied directly to potential fields, the Local Wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. Also the simultaneous presence of sources with different homogeneity degree can be easily and correctly treated. The method was applied to synthetic and real examples from Bulgaria and Italy and the results agree well with known information about the causative sources.
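A rough numerical sketch of the idea (not the authors' code): compute the first-order local wavenumber of a field observed at several altitudes, scale each level with a power law of the altitude, and take the position of the extreme point of the scaled field as the source estimate. The exponent `alpha` below is a placeholder; the paper shows the correct scaling law is independent of the structural index.

```python
import numpy as np

def local_wavenumber(f, dx, dz):
    """First-order local wavenumber of a field f(z, x):
    k = | d/dx arctan( (df/dz) / (df/dx) ) |."""
    dfdx = np.gradient(f, dx, axis=1)
    dfdz = np.gradient(f, dz, axis=0)
    tilt = np.arctan2(dfdz, dfdx)
    return np.abs(np.gradient(np.unwrap(tilt, axis=1), dx, axis=1))

def dexp_depth(field, x, z, alpha=0.5):
    """DEXP-style imaging of the local wavenumber: scale each altitude
    level by z**alpha and return the (x, z) location of the maximum of
    the scaled field. alpha is an illustrative placeholder exponent."""
    k = local_wavenumber(field, x[1] - x[0], z[1] - z[0])
    w = (z[:, None] ** alpha) * k
    iz, ix = np.unravel_index(np.argmax(w), w.shape)
    return x[ix], z[iz]

# Synthetic anomaly observed at a range of altitudes above a buried source
x = np.linspace(-50.0, 50.0, 401)
z = np.linspace(1.0, 20.0, 40)            # continuation altitudes
f = 1.0 / (x[None, :] ** 2 + (z[:, None] + 5.0) ** 2)
print(dexp_depth(f, x, z))
```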
Analysis of Environmental Contamination resulting from ...
Catastrophic incidents can generate a large number of samples with analytically diverse types including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to safe levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illu
A Fixed-Point Phase Lock Loop in a Software Defined Radio
2002-09-01
code from a simulation model. This feature will allow easy implementation on an FPGA, as C can be easily converted to VHDL, the language required… this is equivalent to the MATLAB code implementation in Appendix A. The PD takes the input signal and multiplies it by the in-phase and…
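From the recoverable fragments, the report's phase detector (PD) multiplies the input by in-phase and quadrature references. A minimal floating-point Python sketch of such a loop, with illustrative gains rather than the report's fixed-point MATLAB/C implementation:

```python
import numpy as np

def pll_track(x, fs, f0, kp=0.1, ki=0.01):
    """Track the phase of a real sinusoid x with a PI-controlled PLL.

    The phase detector multiplies the input by the NCO quadrature
    output; kp/ki are illustrative loop-filter gains."""
    phase = 0.0
    freq = 2 * np.pi * f0 / fs
    integ = 0.0
    est = np.empty(len(x))
    for n, sample in enumerate(x):
        q_ref = -np.sin(phase)
        err = sample * q_ref                 # phase detector (mixer) output
        integ += ki * err                    # integral branch of loop filter
        phase += freq + kp * err + integ     # NCO phase update
        est[n] = phase
    return est

# Lock onto a 100 Hz tone sampled at 8 kHz, starting from a 95 Hz guess
fs, f = 8000, 100.0
t = np.arange(4000) / fs
x = np.cos(2 * np.pi * f * t + 0.7)
phases = pll_track(x, fs, f0=95.0)
```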
Metadata-driven Delphi rating on the Internet.
Deshpande, Aniruddha M; Shiffman, Richard N; Nadkarni, Prakash M
2005-01-01
Paper-based data collection and analysis for consensus development is inefficient and error-prone. Computerized techniques that could improve efficiency, however, have been criticized as costly, inconvenient and difficult to use. We designed and implemented a metadata-driven Web-based Delphi rating and analysis tool, employing the flexible entity-attribute-value schema to create generic, reusable software. The software can be applied to various domains by altering the metadata; the programming code remains intact. This approach greatly reduces the marginal cost of re-using the software. We implemented our software to prepare for the Conference on Guidelines Standardization. Twenty-three invited experts completed the first round of the Delphi rating on the Web. For each participant, the software generated individualized reports that described the median rating and the disagreement index (calculated from the Interpercentile Range Adjusted for Symmetry) as defined by the RAND/UCLA Appropriateness Method. We evaluated the software with a satisfaction survey using a five-level Likert scale. The panelists felt that Web data entry was convenient (median 4, interquartile range [IQR] 4.0-5.0), acceptable (median 4.5, IQR 4.0-5.0) and easily accessible (median 5, IQR 4.0-5.0). We conclude that Web-based Delphi rating for consensus development is a convenient and acceptable alternative to the traditional paper-based method.
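The entity-attribute-value idea can be illustrated with a generic three-column table; the schema and names below are hypothetical, not the authors' actual design.

```python
import sqlite3

# Generic EAV table: a new rating domain only adds rows of metadata,
# never new columns, so the application code stays unchanged.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE eav (
    entity    TEXT,   -- e.g. a guideline item being rated
    attribute TEXT,   -- e.g. 'clarity_rating', defined in metadata
    value     TEXT    -- the panelist's response
)""")
con.executemany(
    "INSERT INTO eav VALUES (?, ?, ?)",
    [("item_17", "clarity_rating", "8"),
     ("item_17", "validity_rating", "6"),
     ("item_18", "clarity_rating", "9")],
)
# Pivot back out at query time for a per-item report
for row in con.execute(
    "SELECT entity, attribute, value FROM eav WHERE entity = 'item_17'"
):
    print(row)
```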
SimPhospho: a software tool enabling confident phosphosite assignment.
Suni, Veronika; Suomi, Tomi; Tsubosaka, Tomoya; Imanishi, Susumu Y; Elo, Laura L; Corthals, Garry L
2018-03-27
Mass spectrometry combined with enrichment strategies for phosphorylated peptides has been successfully employed for two decades to identify sites of phosphorylation. However, unambiguous phosphosite assignment is considered challenging. Given that site-specific phosphorylation events function as different molecular switches, validation of phosphorylation sites is of utmost importance. In our earlier study we developed a method based on simulated phosphopeptide spectral libraries, which enables highly sensitive and accurate phosphosite assignments. To promote more widespread use of this method, we here introduce a software implementation with improved usability and performance. We present SimPhospho, a fast and user-friendly tool for accurate simulation of phosphopeptide tandem mass spectra. Simulated phosphopeptide spectral libraries are used to validate and supplement database search results, with a goal to improve reliable phosphoproteome identification and reporting. The presented program can be easily used together with the Trans-Proteomic Pipeline and integrated in a phosphoproteomics data analysis workflow. SimPhospho is available for Windows, Linux and Mac operating systems at https://sourceforge.net/projects/simphospho/. It is open source and implemented in C++. A user's manual with a detailed description of data analysis using SimPhospho, as well as test data, can be found as supplementary material of this article. Supplementary data are available at https://www.btk.fi/research/computational-biomedicine/software/.
Green, Julie M.; Wilcke, Jeffrey R.; Abbott, Jonathon; Rees, Loren P.
2006-01-01
Objective: This study evaluated an existing SNOMED-CT® model for structured recording of heart murmur findings and compared it to a concept-dependent attributes model using content from SNOMED-CT. Methods: The authors developed a model for recording heart murmur findings as an alternative to SNOMED-CT's use of Interprets and Has interpretation. A micro-nomenclature was then created to support each model using subset and extension mechanisms described for SNOMED-CT. Each micro-nomenclature included a partonomy of cardiac cycle timing values. A mechanism for handling ranges of values was also devised. One hundred clinical heart murmurs were recorded using purpose-built recording software based on both models. Results: Each micro-nomenclature was extended through the addition of the same list of concepts. SNOMED role grouping was required in both models. All 100 clinical murmurs were described using each model. The only major differences between the two models were the number of relationship rows required for storage and the hierarchical assignments of concepts within the micro-nomenclatures. Conclusion: The authors were able to capture 100 clinical heart murmurs with both models. Requirements for implementing the two models were virtually identical. In fact, data stored using these models could be easily interconverted. There is no apparent penalty for implementing either approach. PMID:16501179
Kokalj, Meta; Prikeržnik, Marcel; Kreft, Samo
2017-05-01
Rocket is a popular salad vegetable used all over the world and has many health benefits. However, as with all plant material, there is a danger of contamination with toxic substances. In the case of rocket, contamination with groundsel has occurred. Groundsel is a common weed in rocket crops, and it contains very toxic pyrrolizidine alkaloids. In our study, infrared spectroscopy was used to distinguish groundsel samples from rocket leaves. Infrared spectroscopy is a very simple analytical technique; however, some specific conditions are more easily implemented in an industrial environment than others. Several of these conditions and parameters of infrared spectroscopy were explored in detail. We tested the influence of different parameters of the attenuated total reflectance and transmission infrared methods. Our results show that 100% correct classification can be obtained under the conditions most suitable for industry: using fresh samples and parameters that enable fast spectral measurement. Infrared spectroscopy is a fast and easy-to-use method that has been shown to differentiate between rocket and groundsel leaves. Therefore, it could be further studied for implementation in the safety control of rocket salads. © 2016 Society of Chemical Industry.
A novel water quality data analysis framework based on time-series data mining.
Deng, Weihui; Wang, Guoyin
2017-07-01
The rapid development of time-series data mining provides an emerging method for water resource management research. In this paper, based on the time-series data mining methodology, we propose a novel and general analysis framework for water quality time-series data. It consists of two parts: implementation components and common tasks of time-series data mining in water quality data. In the first part, we propose to granulate the time series into several two-dimensional normal clouds and calculate the similarities at the granulated level. On the basis of the similarity matrix, the similarity search, anomaly detection, and pattern discovery tasks on the water quality time-series dataset can be easily implemented in the second part. We present a case study of this analysis framework on weekly Dissolved Oxygen (DO) time-series data collected from five monitoring stations on the upper reaches of the Yangtze River, China. The case study revealed the relationship between water quality in the mainstream and its tributaries, as well as the main changing patterns of DO. The experimental results show that the proposed analysis framework is a feasible and efficient method to mine hidden and valuable knowledge from historical water quality time-series data. Copyright © 2017 Elsevier Ltd. All rights reserved.
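One standard way to granulate a window of values into a normal cloud is the backward cloud generator, which summarizes it by an expectation Ex, entropy En, and hyper-entropy He. The sketch below uses one common form of that estimator and a simple distance-based similarity; both are stand-ins, as the paper's exact similarity measure is not reproduced here.

```python
import numpy as np

def backward_cloud(x):
    """Estimate (Ex, En, He) of a normal cloud from samples x
    (one standard backward cloud generator)."""
    ex = np.mean(x)
    en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - ex))
    he = np.sqrt(max(np.var(x, ddof=1) - en**2, 0.0))
    return np.array([ex, en, he])

def cloud_similarity(c1, c2):
    """Illustrative similarity: inverse distance between cloud vectors."""
    return 1.0 / (1.0 + np.linalg.norm(c1 - c2))

# Granulate a weekly series into 4-week windows and compare them
rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 520)) + 0.1 * rng.normal(size=520)
windows = series.reshape(-1, 4)
clouds = np.array([backward_cloud(w) for w in windows])
sim = np.array([[cloud_similarity(a, b) for b in clouds] for a in clouds])
# Anomaly detection: granules with a low best similarity to any other
anomaly_score = 1.0 - np.sort(sim, axis=1)[:, -2]   # skip self-similarity
```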
Integration and application of optical chemical sensors in microbioreactors.
Gruber, Pia; Marques, Marco P C; Szita, Nicolas; Mayr, Torsten
2017-08-08
The quantification of key variables such as oxygen, pH, carbon dioxide, glucose, and temperature provides essential information for biological and biotechnological applications and their development. Microfluidic devices offer an opportunity to accelerate research and development in these areas due to their small scale, and the fine control over the microenvironment, provided that these key variables can be measured. Optical sensors are well-suited for this task. They offer non-invasive and non-destructive monitoring of the mentioned variables, and the establishment of time-course profiles without the need for sampling from the microfluidic devices. They can also be implemented in larger systems, facilitating cross-scale comparison of analytical data. This tutorial review presents an overview of the optical sensors and their technology, with a view to support current and potential new users in microfluidics and biotechnology in the implementation of such sensors. It introduces the benefits and challenges of sensor integration, including, their application for microbioreactors. Sensor formats, integration methods, device bonding options, and monitoring options are explained. Luminescent sensors for oxygen, pH, carbon dioxide, glucose and temperature are showcased. Areas where further development is needed are highlighted with the intent to guide future development efforts towards analytes for which reliable, stable, or easily integrated detection methods are not yet available.
On the improvement of blood sample collection at clinical laboratories
2014-01-01
Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). After implementing the algorithm in C, it runs and, in a few seconds, obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters change substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
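A compact sketch of the genetic-algorithm idea for a single vehicle, with capacity handled as a penalty (the two-hour time windows would enter the cost the same way). Distances, demands, and parameters are made up, and the real system solves a multi-vehicle variant in C.

```python
import random

def route_cost(route, dist, demand, cap, penalty=1000.0):
    """Tour length from the depot (index 0) and back, plus a penalty
    for exceeding the vehicle capacity."""
    cost, load, prev = 0.0, 0.0, 0
    for point in route:
        cost += dist[prev][point]
        load += demand[point]
        prev = point
    cost += dist[prev][0]
    if load > cap:
        cost += penalty * (load - cap)
    return cost

def genetic_route(dist, demand, cap, pop=60, gens=300, seed=42):
    """Permutation-encoded GA: order crossover plus swap mutation."""
    rng = random.Random(seed)
    n = len(dist) - 1
    population = [rng.sample(range(1, n + 1), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda r: route_cost(r, dist, demand, cap))
        parents = population[: pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + [p for p in b if p not in a[:cut]]
            if rng.random() < 0.2:
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = parents + children
    return min(population, key=lambda r: route_cost(r, dist, demand, cap))

# Toy instance: a depot (index 0) and four collection points
dist = [[0, 4, 6, 3, 7],
        [4, 0, 2, 5, 6],
        [6, 2, 0, 4, 3],
        [3, 5, 4, 0, 2],
        [7, 6, 3, 2, 0]]
demand = [0, 1, 2, 1, 2]      # thermal containers per point
print(genetic_route(dist, demand, cap=8))
```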
NASA Astrophysics Data System (ADS)
Ghosh, Pratik
1992-01-01
The investigations focused on in vivo NMR imaging studies of magnetic particles with and within neural cells. NMR imaging methods, both Fourier transform and projection reconstruction, were implemented and new protocols were developed to perform "Neuronal Tracing with Magnetic Labels" on small animal brains. Having performed the preliminary experiments with neuronal tracing, new optimized coils and experimental set-ups were devised. A novel gradient coil technology along with new rf-coils was implemented and optimized for future use with small animals. A new magnetic labelling procedure was developed that allowed labelling of billions of cells with ultra-small magnetite particles in a short time. The relationships among the viability of such cells, the amount of label, and the contrast in the images were studied as quantitatively as possible. Intracerebral grafting of magnetite-labelled fetal rat brain cells made it possible for the first time to attempt monitoring in vivo the survival, differentiation, and possible migration of both host and grafted cells in the host rat brain. This constituted the early steps toward future experiments that may lead to the monitoring of human brain grafts of fetal brain cells. Preliminary experiments with direct injection of horseradish peroxidase-conjugated magnetite particles into neurons, followed by NMR imaging, revealed a possible non-invasive alternative, allowing serial study of the dynamic transport pattern of tracers in single living animals. New gradient coils were built by using parallel solid-conductor ribbon cables that could be wrapped easily and quickly. Rapid rise times provided by these coils allowed implementation of fast imaging methods. Optimized rf-coil circuit development made it possible to better understand the sample-coil properties and the associated trade-offs in the case of small but conducting samples.
Relationship between cotton yield and soil electrical conductivity, topography, and landsat imagery
USDA-ARS?s Scientific Manuscript database
Understanding spatial and temporal variability in crop yield is a prerequisite to implementing site-specific management of crop inputs. Apparent soil electrical conductivity (ECa), soil brightness, and topography are easily obtained data that can explain yield variability. The objectives of this stu...
The National Lakes Assessment (NLA) and other lake survey and monitoring efforts increasingly rely upon biological assemblage data to define lake condition. Information concerning the multiple dimensions of physical and chemical habitat is necessary to interpret this biological ...
A Nurse-Led Innovation in Education: Implementing a Collaborative Multidisciplinary Grand Rounds.
Matamoros, Lisa; Cook, Michelle
2017-08-01
Multidisciplinary grand rounds provides an opportunity to promote excellence in patient care through scholarly presentations and interdisciplinary collaboration with an innovative approach. In addition, multidisciplinary grand rounds serves to recognize the expertise of staff, to mentor and support professional development, and to provide a collaborative environment across all clinical disciplines and support services. This article describes a process model developed by nurse educators for implementing a multidisciplinary grand rounds program. The components of the process model include topic submissions, coaching presenters, presentations, evaluations, and spreading the work. This model can be easily implemented at any organization. J Contin Educ Nurs. 2017;48(8):353-357. Copyright 2017, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Schauberger, Bernhard; Rolinski, Susanne; Müller, Christoph
2016-12-01
Variability of crop yields is detrimental to food security. Under climate change its amplitude is likely to increase; it is thus essential to understand the underlying causes and mechanisms. Crop models are the primary tool to project future changes in crop yields under climate change. A systematic overview of drivers and mechanisms of crop yield variability (YV) can thus inform crop model development and facilitate improved understanding of climate change impacts on crop yields. Yet there is a vast body of literature on crop physiology and YV, which makes a prioritization of mechanisms for implementation in models challenging. This paper therefore takes a novel approach to systematically mine and organize existing knowledge from the literature. The aim is to identify important mechanisms lacking in models, which can help to set priorities in model improvement. We structure knowledge from the literature in a semi-quantitative network. This network consists of complex interactions between growing conditions, plant physiology and crop yield. We utilize the resulting network structure to assign relative importance to causes of YV and related plant physiological processes. As expected, our findings confirm existing knowledge, in particular on the dominant role of temperature and precipitation, but also highlight other important drivers of YV. More importantly, our method allows for identifying the relevant physiological processes that transmit variability in growing conditions to variability in yield, and we can identify explicit targets for the improvement of crop models. The network can additionally guide model development by outlining complex interactions between processes and by easily retrieving quantitative information for each of the 350 interactions. We show the validity of our network method as a structured, consistent and scalable dictionary of the literature. The method can easily be applied to many other research fields.
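The ranking step might be sketched with networkx: encode drivers and processes as a weighted directed graph and score importance with, for example, PageRank on the reversed graph. The nodes and weights below are placeholders, not the paper's 350 curated interactions.

```python
import networkx as nx

# Toy semi-quantitative network: edge weights encode how strongly one
# factor influences another (placeholder values, not the paper's data)
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("temperature", "photosynthesis", 0.9),
    ("precipitation", "soil_moisture", 0.8),
    ("soil_moisture", "photosynthesis", 0.6),
    ("photosynthesis", "biomass", 0.9),
    ("temperature", "phenology", 0.7),
    ("phenology", "yield", 0.5),
    ("biomass", "yield", 0.9),
])
# Rank nodes by their influence on the rest of the network
# (PageRank on the reversed graph is one possible proxy)
scores = nx.pagerank(G.reverse(), weight="weight")
for node, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node:15s} {s:.3f}")
```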
The transient divided bar method for laboratory measurements of thermal properties
NASA Astrophysics Data System (ADS)
Bording, Thue S.; Nielsen, Søren B.; Balling, Niels
2016-12-01
Accurate information on the thermal conductivity and thermal diffusivity of materials is of central importance in relation to geoscience and engineering problems involving the transfer of heat. Several methods, including the classical divided bar technique, are available for laboratory measurements of thermal conductivity, but far fewer exist for thermal diffusivity. We have generalized the divided bar technique to the transient case, in which thermal conductivity, volumetric heat capacity, and thereby also thermal diffusivity are measured simultaneously. As the density of samples is easily determined independently, specific heat capacity can also be obtained. The finite element formulation provides a flexible forward solution for heat transfer across the bar, and thermal properties are estimated by inverse Monte Carlo modelling. This methodology enables a proper quantification of experimental uncertainties on measured thermal properties and information on their origin. The developed methodology was applied to various materials, including a standard ceramic material and different rock samples, and the results were compared with those from the traditional steady-state divided bar and an independent line-source method. All measurements show highly consistent results, with excellent reproducibility and high accuracy. For conductivity the obtained uncertainty is typically 1-3 per cent, and for diffusivity the uncertainty may be reduced to about 3-5 per cent. The main uncertainty originates from the thermal contact resistance associated with the internal interfaces in the bar. These are not resolved during inversion and it is imperative that they are minimized. The proposed procedure is simple and may quite easily be implemented in the many steady-state divided bar systems in operation. A thermally controlled bath, as applied here, may not be needed; simpler systems, such as applying temperature-controlled water directly from a tap, may also be used.
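A much-reduced sketch of the forward/inverse idea in 1D: march an explicit finite-difference model of heat flow across the sample and fit the recorded temperature history, with least squares standing in for the paper's finite-element forward model and Monte Carlo inversion. With purely temperature (Dirichlet) boundary data the solution depends only on the diffusivity, so the sketch recovers that alone; separating conductivity and heat capacity requires additional information (e.g., heat flux), as in the actual instrument.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_1d(alpha, T0=20.0, T_top=30.0, L=0.02, nx=40, dt=0.01, nt=2000):
    """Explicit finite-difference solution of dT/dt = alpha * d2T/dx2
    across a sample of thickness L with a stepped top temperature and a
    thermostated base; returns the history at a mid-sample 'sensor'."""
    dx = L / (nx - 1)
    r = alpha * dt / dx**2          # must stay below 0.5 for stability
    T = np.full(nx, T0)
    history = np.empty(nt)
    for step in range(nt):
        T[0] = T_top                # heated end
        T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[-1] = T0                  # thermostated end
        history[step] = T[nx // 2]
    return history

# Synthetic "measured" record from a known diffusivity, then recover it
rng = np.random.default_rng(3)
measured = forward_1d(1.1e-6) + 0.02 * rng.normal(size=2000)
fit = least_squares(lambda a: forward_1d(a[0]) - measured,
                    x0=[5e-7], bounds=([1e-7], [5e-6]))
print(f"estimated diffusivity: {fit.x[0]:.2e} m^2/s")
```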
Physically active academic lessons: acceptance, barriers and facilitators for implementation.
Dyrstad, Sindre M; Kvalø, Silje E; Alstveit, Marianne; Skage, Ingrid
2018-03-06
To improve health and academic learning in schoolchildren, the Active School programme in Stavanger, Norway, has introduced physically active academic lessons, a teaching method combining physical activity with academic content. The purpose of this paper was to evaluate the response to the physically active lessons and identify facilitators and barriers for implementation of such an intervention. Five school leaders (principals or vice-principals), 13 teachers and 30 children from the five intervention schools were interviewed about their experiences with the 10-month intervention, which consisted of a weekly minimum of 2 × 45 minutes of physically active academic lessons, and about the factors affecting its implementation. All interviews were transcribed and analysed using the qualitative data analysis program NVivo 10 (QSR International, London, UK). In addition, weekly teacher intervention delivery logs were collected and analysed. On average, physically active academic lessons were reported in the teacher logs in 18 of the 34 weeks (53%). The number of delivered physically active academic lessons covered 73% of the schools' planned activity. Physically active lessons were well received among school leaders, teachers and children. The main facilitators for implementation of the physically active lessons were active leadership and teacher support, high self-efficacy regarding mastering the intervention, ease of organizing physically active lessons, inclusion of physically active lessons into the lesson curricula, and children's positive reception of the intervention. The main barriers were unclear expectations, lack of knowledge and time to plan the physically active lessons, and the length of the physically active lessons (15-20 min lessons were preferred over the 45 min lessons). Physically active academic lessons were considered an appropriate pedagogical method for creating positive variation, and were highly appreciated among both teachers and children. Both the principal and the teachers should be actively involved in the implementation, which could be strengthened by including physical activity in the school's strategy. Barriers to implementing physically active lessons in schools could be lowered by increasing implementation clarity and introducing teachers to high-quality and easily organized lessons. ClinicalTrials.gov identifier: NCT03436355. Retrospectively registered: 16th of Feb, 2018.
Learning Probabilistic Features for Robotic Navigation Using Laser Sensors
Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José
2014-01-01
SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system usable in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377
Improving microstructural quantification in FIB/SEM nanotomography.
Taillon, Joshua A; Pellegrinelli, Christopher; Huang, Yi-Lin; Wachsman, Eric D; Salamanca-Riba, Lourdes G
2018-01-01
FIB/SEM nanotomography (FIB-nt) is a powerful technique for the determination and quantification of three-dimensional microstructure in subsurface features. Often, the microstructure of a sample is the ultimate determiner of the overall performance of a system, and a detailed understanding of its properties is crucial in advancing the materials engineering of a resulting device. While the FIB-nt technique has developed significantly in the 15 years since its introduction, advanced nanotomographic analysis is still far from routine, and a number of challenges remain in data acquisition and post-processing. In this work, we present a number of techniques to improve the quality of the acquired data, together with easy-to-implement methods to obtain "advanced" microstructural quantifications. The techniques are applied to a solid oxide fuel cell cathode of interest to the electrochemistry community, but the methodologies are easily adaptable to a wide range of material systems. Finally, results from an analyzed sample are presented as a practical example of how these techniques can be implemented. Copyright © 2017 Elsevier B.V. All rights reserved.
Performance of low-rank QR approximation of the finite element Biot-Savart law
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D; Fasenfest, B
2006-10-16
In this paper we present a low-rank QR method for evaluating the discrete Biot-Savart law. Our goal is to develop an algorithm that is easily implemented on parallel computers. It is assumed that the known current density and the unknown magnetic field are both expressed in a finite element expansion, and we wish to compute the degrees-of-freedom (DOF) in the basis function expansion of the magnetic field. The matrix that maps the current DOF to the field DOF is full, but if the spatial domain is properly partitioned the matrix can be written as a block matrix, with blocks representing distant interactions being low rank and having a compressed QR representation. While an octree partitioning of the matrix may be ideal, for ease of parallel implementation we employ a partitioning based on the number of processors. The rank of each block (i.e., the compression) is determined by the specific geometry and is computed dynamically. In this paper we provide the algorithmic details and present computational results for large-scale computations.
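The block-compression step can be sketched with SciPy's column-pivoted QR: a far-field block is truncated at a dynamically chosen rank, and matrix-vector products then use the compressed factors. Sizes and tolerance below are illustrative.

```python
import numpy as np
from scipy.linalg import qr

def compress_block(A, tol=1e-8):
    """Compress a far-field interaction block with column-pivoted QR.

    Returns (Qk, Rk) with A ≈ Qk @ Rk, where the rank k is chosen
    dynamically from the decay of the R diagonal."""
    Q, R, piv = qr(A, pivoting=True, mode="economic")
    d = np.abs(np.diag(R))
    k = max(1, int(np.sum(d > tol * d[0])))
    Rk = np.empty_like(R[:k])
    Rk[:, piv] = R[:k]            # undo the column pivoting
    return Q[:, :k], Rk

# A smooth (distant-interaction-like) kernel block compresses strongly
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(10.0, 11.0, 200)            # well-separated clusters
A = 1.0 / np.abs(x[:, None] - y[None, :])   # 1/r-type kernel
Qk, Rk = compress_block(A)
v = np.random.default_rng(0).normal(size=200)
err = np.linalg.norm(A @ v - Qk @ (Rk @ v)) / np.linalg.norm(A @ v)
print(Qk.shape[1], err)                      # rank and relative error
```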
PrismTech Data Distribution Service Java API Evaluation
NASA Technical Reports Server (NTRS)
Riggs, Cortney
2008-01-01
My internship duties with Launch Control Systems required me to start performance testing of PrismTech Limited's implementation of the Object Management Group's (OMG) Data Distribution Service (DDS) specification through its Java programming language application programming interface (API). DDS is a networking middleware for real-time data distribution. The performance testing involves latency, redundant publishers, extended duration, redundant failover, and read performance. Time constraints allowed only for a data throughput test. I designed the testing applications to perform all performance tests when time allows. Performance evaluation data such as megabits per second and central processing unit (CPU) time consumption were not easily attainable through the Java programming language; they required new methods and classes created in the test applications. Evaluation of this product showed the rate at which data can be sent across the network. Performance rates are better on Linux platforms than on AIX and Sun platforms. Compared to the previous C++ API, the performance evaluation also shows the language differences for the implementation. The Java API of the DDS has lower throughput performance than the C++ API.
System for Automated Geoscientific Analyses (SAGA) v. 2.1.4
NASA Astrophysics Data System (ADS)
Conrad, O.; Bechtel, B.; Bock, M.; Dietrich, H.; Fischer, E.; Gerlitz, L.; Wehberg, J.; Wichmann, V.; Böhner, J.
2015-02-01
The System for Automated Geoscientific Analyses (SAGA) is an open-source Geographic Information System (GIS), mainly licensed under the GNU General Public License. Since its first release in 2004, SAGA has rapidly developed from a specialized tool for digital terrain analysis into a comprehensive and globally established GIS platform for scientific analysis and modeling. SAGA is coded in C++ in an object-oriented design and runs under several operating systems including Windows and Linux. Key functional features of the modularly organized software architecture comprise an application programming interface for the development and implementation of new geoscientific methods, an easily approachable graphical user interface with many visualization options, a command line interpreter, and interfaces to scripting and low-level programming languages like R and Python. The current version 2.1.4 offers more than 700 tools, which are implemented in dynamically loadable libraries or shared objects and represent the broad scope of SAGA in numerous fields of geoscientific endeavor and beyond. In this paper, we describe the system's architecture, functionality, and its current state of development and implementation. Further, we highlight the wide spectrum of scientific applications of SAGA in a review of published studies, with special emphasis on the core application areas of digital terrain analysis, geomorphology, soil science, climatology and meteorology, as well as remote sensing.
Disk mass determination through CO isotopologues
NASA Astrophysics Data System (ADS)
Miotello, Anna; Kama, Mihkel; van Dishoeck, Ewine
2015-08-01
One of the key properties for understanding how disks evolve to planetary systems is their overall mass, combined with their surface density distribution. So far, virtually all disk mass determinations are based on observations of the millimeter continuum dust emission. Deriving the total gas + dust disk mass from these data, however, involves several strong assumptions. The alternative method is to derive the gas mass directly through the detection of carbon monoxide (CO) and its less abundant isotopologues. CO chemistry is well studied and easily implemented in chemical models, provided that isotope-selective processes are properly accounted for. CO isotope-selective photodissociation was implemented for the first time in a full physical-chemical code in Miotello et al. (2014). The main result is that if isotope-selective effects are not considered in the data analysis, disk masses can be underestimated by an order of magnitude or more. For example, the mass discrepancy found for the renowned TW Hya disk may be explained or at least mitigated by this implementation. In this poster, we present new results for a large grid of disk models. We derive mass correction factors for different disk, stellar, and grain properties in order to account for isotope-selective effects when analyzing ALMA data of CO isotopologues (Miotello et al., in prep.).
Template Authoring Environment for the Automatic Generation of Narrative Content
ERIC Educational Resources Information Center
Caropreso, Maria Fernanda; Inkpen, Diana; Keshtkar, Fazel; Khan, Shahzad
2012-01-01
Natural Language Generation (NLG) systems can make data accessible in an easily digestible textual form; but using such systems requires sophisticated linguistic and sometimes even programming knowledge. We have designed and implemented an environment for creating and modifying NLG templates that requires no programming knowledge, and can operate…
Interactive Digital Image Manipulation System (IDIMS)
NASA Technical Reports Server (NTRS)
Fleming, M. D.
1981-01-01
The implementation of an interactive digital image manipulation system (IDIMS) is described. The system is run on an HP-3000 Series 3 minicomputer. The IDIMS system provides a complete image geoprocessing capability for raster formatted data in a self-contained system. It is easily installed, documentation is provided, and vendor support is available.
ERIC Educational Resources Information Center
Nelson, Barbara J., Comp.; Wallner, Barbara K., Comp.; Powers, Myra L. Ed.; Hartley, Nancy K., Ed.
This publication is a compilation of examples of practical, easily implemented activities to help mathematics, science, and education faculty duplicate efforts by the Rocky Mountain Teacher Education Collaborative (RMTEC) to reform and revise curriculum for preservice educators. Activities are organized by content areas: mathematics; geology,…
ERIC Educational Resources Information Center
Schlenker, Richard M.; And Others
1995-01-01
Describes the use of constructivism in teaching human anatomy. Provides directions for constructing arm-hand and leg-foot models that include extensor and flexor muscles and that are easily and cheaply constructed. Lists resources that provide ideas for using such models depending upon the curriculum implemented in a school or the course that is…
Directly executable formal models of middleware for MANET and Cloud Networking and Computing
NASA Astrophysics Data System (ADS)
Pashchenko, D. V.; Sadeq Jaafar, Mustafa; Zinkin, S. A.; Trokoz, D. A.; Pashchenko, T. U.; Sinev, M. P.
2016-04-01
The article considers some “directly executable” formal models that are suitable for the specification of computing and networking in the cloud environment and in other networks similar to wireless MANET networks. These models can be easily programmed and implemented on computer networks.
The Importance of Attendance in an Introductory Textile Science Course
ERIC Educational Resources Information Center
Marcketti, Sara B.; Wang, Xinxin; Greder, Kate
2013-01-01
At Iowa State University, the introductory textile science course is a required 4-credit class for all undergraduate students enrolled in the Apparel, Merchandising, and Design Program. Frustrated by a perceived gap between students who easily comprehended course material and those who complained and struggled, the instructor implemented an…
Some recent developments in headspace gas chromatography
J.Y. Zhu; X.-S. Chai
2005-01-01
In this study, recent developments in headspace gas chromatography (HSGC) are briefly reviewed. Several novel HSGC techniques developed recently are presented in detail. These techniques were developed using the unique characteristics of the headspace sampling process implemented in commercial HSGC systems and therefore can be easily applied in laboratory and...
77 FR 7041 - Changes to Implement Inter Partes Review Proceedings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-10
... comments with the public is more easily accomplished. Electronic comments are preferred to be submitted in... public inspection at the Board of Patent Appeals and Interferences, currently located in Madison East..., on, or after the effective date. DATES: The Office solicits comments from the public on this proposed...
Parent Book Talk to Accelerate Spanish Content Vocabulary Knowledge
ERIC Educational Resources Information Center
Pollard-Durodola, Sharolyn D.; Gonzalez, Jorge E.; Satterfield, Teresa; Benki, José R.; Vaquero, Juana; Ungco, Camille
2017-01-01
This article bridges research to practice by summarizing an interactive content-enriched shared book reading approach that Spanish-speaking parents of preschool-age children can easily use in the home to accelerate content vocabulary knowledge in Spanish. The approach was implemented in preschool classrooms using a transitional bilingual education…
Improving Learning Object Quality: Moodle HEODAR Implementation
ERIC Educational Resources Information Center
Munoz, Carlos; Garcia-Penalvo, Francisco J.; Morales, Erla Mariela; Conde, Miguel Angel; Seoane, Antonio M.
2012-01-01
Automation toward efficiency is the aim of most intelligent systems in an educational context, where automated calculation of results allows experts to spend most of their time on important tasks rather than on retrieving, ordering, and interpreting information. In this paper, the authors provide a tool that easily evaluates Learning Objects quality…
Researching the Complexity of Classrooms
ERIC Educational Resources Information Center
Turvey, Anne
2012-01-01
In recent years, it has become fashionable to demand of research that it produces "evidence" that can be turned into easily generalisable findings. Ever more elaborate sets of managerial standards and pre-defined learning outcomes have been imposed, and English teachers are encouraged to see their practice as merely an implementation of…
Design Assessment: "Consumer Reports" Style
ERIC Educational Resources Information Center
Kelley, Todd R.
2010-01-01
Novices to the design process often struggle at first to understand the various stages of design. Learning to design is a process not easily mastered, and therefore requires multiple levels of exposure to the design process. It is helpful if teachers are able to implement various entry-level design assignments such as reverse-engineering…
NASA Technical Reports Server (NTRS)
Song, Y. T.
1998-01-01
A Jacobian formulation of the pressure gradient force for use in models with topography-following coordinates is proposed. It can be used in conjunction with any vertical coordinate system and is easily implemented.
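For context, the standard coordinate-transformation identity behind such formulations (a generic sketch, not necessarily the paper's exact form): in a terrain-following coordinate s, the horizontal pressure gradient at constant height splits into two large, partially cancelling terms, which can be written compactly as a Jacobian of p and z with respect to (x, s):

```latex
% Pressure gradient at constant height z, rewritten in a
% terrain-following coordinate s (standard identity):
\left(\frac{\partial p}{\partial x}\right)_{z}
  = \left(\frac{\partial p}{\partial x}\right)_{s}
  - \frac{\partial p}{\partial s}\,
    \frac{(\partial z/\partial x)_{s}}{\partial z/\partial s}
  = \frac{1}{\partial z/\partial s}
    \left[
      \frac{\partial p}{\partial x}\frac{\partial z}{\partial s}
      - \frac{\partial p}{\partial s}\frac{\partial z}{\partial x}
    \right]
  \equiv \frac{J_{(x,s)}(p,z)}{\partial z/\partial s}
```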
Design of Knowledge Bases for Plant Gene Regulatory Networks.
Mukundi, Eric; Gomez-Cano, Fabio; Ouma, Wilberforce Zachary; Grotewold, Erich
2017-01-01
Developing a knowledge base that contains all the information necessary for the researcher studying gene regulation in a particular organism can be accomplished in four stages. The first stage is defining the data scope; we describe here the necessary information and resources and outline the methods for obtaining data. The second stage is designing the schema, which involves defining the entire arrangement of the database in a systematic plan. The third stage is implementation, in which the database is built in software according to the predefined schema. The final stage is deployment, where the database is made available to users in a web-accessible system. The result is a knowledge base that integrates all the information pertaining to gene regulation, and which is easily expandable and transferable.
Moisture Forecast Bias Correction in GEOS DAS
NASA Technical Reports Server (NTRS)
Dee, D.
1999-01-01
Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
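A minimal sketch of a sequential bias estimator of this general flavor (constant-gain form; the gain value and the toy data are placeholders, not the GEOS DAS implementation): the bias estimate is updated from each observed-minus-forecast innovation and subtracted from the forecast before analysis.

```python
import numpy as np

def debiased_innovations(forecasts, observations, gain=0.1):
    """Sequentially estimate forecast bias from obs-minus-forecast
    innovations and return the bias-corrected innovations.

    gain controls how quickly the bias estimate adapts (placeholder)."""
    bias = 0.0
    corrected = []
    for f, o in zip(forecasts, observations):
        innovation = o - (f - bias)       # innovation vs corrected forecast
        bias = bias - gain * innovation   # update the bias estimate
        corrected.append(innovation)
    return np.array(corrected), bias

# Toy example: forecasts that are systematically 0.5 units too moist
rng = np.random.default_rng(7)
truth = rng.normal(size=500)
fcst = truth + 0.5 + 0.2 * rng.normal(size=500)
obs = truth + 0.1 * rng.normal(size=500)
innov, final_bias = debiased_innovations(fcst, obs)
print(f"estimated bias: {final_bias:.2f}")   # should approach +0.5
```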
Interaction Models for Functional Regression.
Usset, Joseph; Staicu, Ana-Maria; Maity, Arnab
2016-02-01
A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data.
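A toy numpy/SciPy sketch of the design construction: functional predictors observed on a dense grid are projected onto spline main-effect bases and a tensor-product interaction basis, then fit by ridge-penalized least squares, which stands in for the paper's penalized-spline estimation. All sizes and the penalty are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(grid, n_basis=8, degree=3):
    """Evaluate a clamped cubic B-spline basis on a grid (n_grid x n_basis)."""
    n_inner = n_basis - degree - 1
    inner = np.linspace(0, 1, n_inner + 2)[1:-1]
    t = np.r_[[0.0] * (degree + 1), inner, [1.0] * (degree + 1)]
    return BSpline.design_matrix(grid, t, degree).toarray()

rng = np.random.default_rng(5)
n, m = 150, 50
grid = np.linspace(0, 1, m)
B = bspline_basis(grid)                                      # (m, 8)
X1 = rng.normal(size=(n, m)).cumsum(axis=1) / np.sqrt(m)     # functional predictors
X2 = rng.normal(size=(n, m)).cumsum(axis=1) / np.sqrt(m)

dt = grid[1] - grid[0]
Z1 = X1 @ B * dt          # main-effect design: int X1(t) B_k(t) dt
Z2 = X2 @ B * dt
# Tensor-product design for the interaction surface gamma(t, s)
Z12 = np.einsum("ik,il->ikl", Z1, Z2).reshape(n, -1)
Z = np.hstack([np.ones((n, 1)), Z1, Z2, Z12])

y = Z1[:, 0] - 0.5 * Z12[:, 3] + 0.1 * rng.normal(size=n)    # toy response
lam = 1.0                 # ridge penalty, standing in for a spline penalty
coef = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
```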
Graded Multiple Choice Questions: Rewarding Understanding and Preventing Plagiarism
NASA Astrophysics Data System (ADS)
Denyer, G. S.; Hancock, D.
2002-08-01
This paper describes an easily implemented method that allows the generation and analysis of graded multiple-choice examinations. The technique, which uses standard functions in user-end software (Microsoft Excel 5+), can also produce several different versions of an examination, which can be employed to prevent rewarding plagiarism. The manuscript also discusses the advantages of a graded marking system for the elimination of ambiguities, for use in multi-step calculation questions, and for questions that require extrapolation or reasoning. The advantages of the scrambling strategy, which maintains the same question order, are discussed with reference to student equity. The system provides a non-confrontational mechanism for dealing with cheating in large-class multiple-choice examinations, as well as providing a reward for problem solving over surface learning.
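The scrambling-and-grading idea is easy to sketch outside Excel as well: keep the question order fixed, permute the answer options per version, record a per-version key, and award graded credit. The questions and credit weights below are made up.

```python
import random

def make_versions(questions, n_versions, seed=1):
    """Each version keeps the question order but shuffles the options.

    questions: list of (option_texts, credit_per_option) pairs, where
    credit 1.0 marks the best answer and partial credits reward
    partially correct reasoning (weights here are illustrative)."""
    rng = random.Random(seed)
    versions = []
    for _ in range(n_versions):
        paper = []
        for options, credits in questions:
            order = list(range(len(options)))
            rng.shuffle(order)
            paper.append(([options[i] for i in order],
                          [credits[i] for i in order]))
        versions.append(paper)
    return versions

def grade(paper, answers):
    """Sum the graded credit for a student's answer indices."""
    return sum(credits[a] for (_, credits), a in zip(paper, answers))

questions = [
    (["42 kJ", "42 J", "4.2 kJ", "none"], [1.0, 0.5, 0.25, 0.0]),
    (["glycolysis", "Krebs cycle", "beta-oxidation", "none"], [0.0, 1.0, 0.25, 0.0]),
]
versions = make_versions(questions, n_versions=3)
print(grade(versions[0], [0, 1]))
```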
Wang, Zhuo; Camino, Acner; Hagag, Ahmed M; Wang, Jie; Weleber, Richard G; Yang, Paul; Pennesi, Mark E; Huang, David; Li, Dengwang; Jia, Yali
2018-05-01
Optical coherence tomography (OCT) can demonstrate early deterioration of the photoreceptor integrity caused by inherited retinal degeneration diseases (IRDs). A machine learning method based on random forests was developed to automatically detect continuous areas of preserved ellipsoid zone structure (an easily recognizable part of the photoreceptors on OCT) in 16 eyes of patients with choroideremia (a type of IRD). Pseudopodial extensions protruding from the preserved ellipsoid zone areas are detected separately by a local active contour routine. The algorithm is implemented on en face images with minimum segmentation requirements, only needing delineation of the Bruch's membrane, thus evading the inaccuracies and technical challenges associated with automatic segmentation of the ellipsoid zone in eyes with severe retinal degeneration. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1986-01-01
A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is solved by means of novel one-step (D'Yakanov-Gunn) and two-step (accelerated one-step) iterative procedures, using a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are found to be computationally efficient on the basis of numerical comparison with other established methods, which lack the present method's (1) insensitivity to grid cell size and aspect ratio, and (2) ease of convergence-rate estimation from the coefficients of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure in the case of PDEs with variable coefficients.
GAMBIT: the global and modular beyond-the-standard-model inference tool
NASA Astrophysics Data System (ADS)
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-11-01
We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily-extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org.
Sanz, J M; Saiz, J M; González, F; Moreno, F
2011-07-20
In this research, the polar decomposition (PD) method is applied to experimental Mueller matrices (MMs) measured on two-dimensional microstructured surfaces. Polarization information is expressed through a set of parameters of easier physical interpretation. It is shown that evaluating the first derivative of the retardation parameter, δ, gives a clear indication of the presence of defects either built on or dug into the scattering flat surface (a silicon wafer in our case). Although the rule of thumb thus obtained is established through PD, it can be easily implemented in conventional surface polarimetry. These results constitute an example of the capabilities of the PD approach to MM analysis, and show a direct application in surface characterization. © 2011 Optical Society of America
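The rule of thumb can be sketched directly: given the retardation parameter δ along a scan of the surface, flag positions where its first derivative is anomalously large. The robust threshold below is an illustrative stand-in for the paper's criterion.

```python
import numpy as np

def flag_defects(delta, x, n_sigma=3.0):
    """Flag scan positions where |d(delta)/dx| exceeds n_sigma times its
    robust (MAD-based) spread -- a simple stand-in criterion."""
    ddelta = np.gradient(delta, x)
    mad = np.median(np.abs(ddelta - np.median(ddelta)))
    return np.abs(ddelta) > n_sigma * 1.4826 * mad

# Toy retardation profile: smooth background plus a step at a defect
x = np.linspace(-1, 1, 400)
delta = 0.05 * x + 0.3 / (1 + np.exp(-(x - 0.2) / 0.01))
print(x[flag_defects(delta, x)])
```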
Enhancing the Remote Variable Operations in NPSS/CCDK
NASA Technical Reports Server (NTRS)
Sang, Janche; Follen, Gregory; Kim, Chan; Lopez, Isaac; Townsend, Scott
2001-01-01
Many scientific applications in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase code reusability. The remote variable scheme provided in NPSS/CCDK helps programmers easily migrate Fortran codes toward a client-server platform. This scheme gives the client the capability of accessing variables at the server site. In this paper, we review and enhance the remote variable scheme by using the operator overloading features of C++. The enhancement enables NPSS programmers to use remote variables in much the same way as traditional variables. The remote variable scheme adopts a lazy update approach and a prefetch method. The design strategies and implementation techniques are described in detail. Preliminary performance evaluation shows that communication overhead can be greatly reduced.
Review on Metallic and Plastic Flexible Dye Sensitized Solar Cell
NASA Astrophysics Data System (ADS)
Yugis, A. R.; Mansa, R. F.; Sipaut, C. S.
2015-04-01
Dye sensitized solar cells (DSSCs) are a promising alternative for the development of a new generation of photovoltaic devices. DSSCs have prompted intense research due to their low cost and eco-friendly advantage over conventional silicon-based crystalline solar cells. In recent years, lightweight flexible types of DSSCs have attracted much attention because of the drastic reduction in production cost and the more extensive applications they allow. The substrate used as the electrode of a DSSC has a dominant impact on the methods and materials that can be applied to the cell and consequently on the resulting performance of the DSSC. Furthermore, the substrate significantly influences the stability of the device. Although their power conversion efficiency is still low compared to traditional glass-based DSSCs, flexible DSSCs still have the potential to be the most efficient and easily implemented technology.
A tunable optical Kerr switch based on a nanomechanical resonator coupled to a quantum dot.
Li, Jin-Jin; Zhu, Ka-Di
2010-05-21
We have theoretically demonstrated a large enhancement of the optical Kerr effect in a scheme of a nanomechanical resonator coupled to a quantum dot and shown that this phenomenon can be used to realize a fast optical Kerr switch by turning the control field on or off. Due to the vibration of the nanoresonator, as we pump with the strong control beam, the optical spectrum shows that the magnitude of this optical Kerr effect is proportional to the intensity of the control field. In this case, a fast and tunable optical Kerr switch can easily be implemented with an intensity-adjustable laser. Based on this tunable optical Kerr switch, we also provide a detection method to measure the frequency of the nanomechanical resonator in this coupled system.
External validation of risk prediction models for incident colorectal cancer using UK Biobank
Usher-Smith, J A; Harshfield, A; Saunders, C L; Sharp, S J; Emery, J; Walter, F M; Muir, K; Griffin, S J
2018-01-01
Background: This study aimed to compare and externally validate risk scores developed to predict incident colorectal cancer (CRC) that include variables routinely available or easily obtainable via self-completed questionnaire. Methods: External validation of fourteen risk models from a previous systematic review in 373 112 men and women within the UK Biobank cohort with 5-year follow-up, no prior history of CRC and data for incidence of CRC through linkage to national cancer registries. Results: There were 1719 (0.46%) cases of incident CRC. The performance of the risk models varied substantially. In men, the QCancer10 model and models by Tao, Driver and Ma all had an area under the receiver operating characteristic curve (AUC) between 0.67 and 0.70. Discrimination was lower in women: the QCancer10, Wells, Tao, Guesmi and Ma models were the best performing with AUCs between 0.63 and 0.66. Assessment of calibration was possible for six models in men and women. All would require country-specific recalibration if estimates of absolute risks were to be given to individuals. Conclusions: Several risk models based on easily obtainable data have relatively good discrimination in a UK population. Modelling studies are now required to estimate the potential health benefits and cost-effectiveness of implementing stratified risk-based CRC screening. PMID:29381683
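The discrimination and calibration assessments in such a validation reduce to a few lines with scikit-learn; the risk scores and outcomes below are simulated placeholders, not UK Biobank data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(11)
n = 50_000
risk_score = rng.beta(1, 200, size=n)                        # toy predicted 5-year risks
incident = rng.random(n) < np.clip(risk_score * 1.3, 0, 1)   # toy outcomes

# Discrimination: AUC, as reported per model and sex in the study
print("AUC:", roc_auc_score(incident, risk_score))

# Calibration: observed vs predicted risk in quantile bins
obs, pred = calibration_curve(incident, risk_score, n_bins=10,
                              strategy="quantile")
for p, o in zip(pred, obs):
    print(f"predicted {p:.4f}  observed {o:.4f}")
```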
Sparse principal component analysis in medical shape modeling
NASA Astrophysics Data System (ADS)
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
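A minimal sketch of the mode-ordering idea, using scikit-learn's SparsePCA as a stand-in for the paper's Matlab implementation (synthetic data; all names are illustrative):

```python
# Sparse PCA with modes re-ordered by decreasing variance of their scores.
# Synthetic stand-in for aligned shape data; not the paper's Matlab code.
import numpy as np
from sklearn.decomposition import SparsePCA

X = np.random.default_rng(0).normal(size=(60, 30))   # 60 shapes x 30 coordinates
X -= X.mean(axis=0)                                  # center, as for ordinary PCA

spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(X)
scores = spca.transform(X)

order = np.argsort(scores.var(axis=0))[::-1]         # decreasing score variance
sparse_modes = spca.components_[order]               # re-ordered sparse loadings
```

Unlike ordinary PCA, sparse components are not orthogonal, so an explicit ordering step of this kind is needed before modes can be compared or truncated sensibly.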
User-friendly solutions for microarray quality control and pre-processing on ArrayAnalysis.org
Eijssen, Lars M. T.; Jaillard, Magali; Adriaens, Michiel E.; Gaj, Stan; de Groot, Philip J.; Müller, Michael; Evelo, Chris T.
2013-01-01
Quality control (QC) is crucial for any scientific method producing data. Applying adequate QC introduces new challenges in the genomics field where large amounts of data are produced with complex technologies. For DNA microarrays, specific algorithms for QC and pre-processing including normalization have been developed by the scientific community, especially for expression chips of the Affymetrix platform. Many of these have been implemented in the statistical scripting language R and are available from the Bioconductor repository. However, application is hampered by lack of integrative tools that can be used by users of any experience level. To fill this gap, we developed a freely available tool for QC and pre-processing of Affymetrix gene expression results, extending, integrating and harmonizing functionality of Bioconductor packages. The tool can be easily accessed through a wizard-like web portal at http://www.arrayanalysis.org or downloaded for local use in R. The portal provides extensive documentation, including user guides, interpretation help with real output illustrations and detailed technical documentation. It assists newcomers to the field in performing state-of-the-art QC and pre-processing while offering data analysts an integral open-source package. Providing the scientific community with this easily accessible tool will allow improving data quality and reuse and adoption of standards. PMID:23620278
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. This method shows great potential for application in practice, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
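A hedged sketch of the pipeline described above, with hypothetical numbers (Gaussian smoothing of the IC curve, locating a feature of interest, then a linear capacity regression):

```python
# Sketch of IC-based SoH estimation; all values are hypothetical, not the
# paper's measurements or calibration.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import linregress

def foi_position(voltage, dq_dv, sigma=5.0):
    """Voltage of the main IC peak after Gaussian smoothing of dQ/dV."""
    smooth = gaussian_filter1d(dq_dv, sigma)    # suppress measurement noise
    return voltage[np.argmax(smooth)]

# Feature-of-interest positions tracked over ageing vs. measured capacities
positions = np.array([3.52, 3.54, 3.57, 3.60, 3.63])   # V (hypothetical)
capacity = np.array([20.0, 19.4, 18.7, 18.1, 17.3])    # Ah (hypothetical)
fit = linregress(positions, capacity)           # linear SoH estimation function
print(f"capacity = {fit.slope:.1f} * position + {fit.intercept:.1f}")
```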
Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method
Chen, Meizhu; Li, Jichun; Zhang, Encai
2017-01-01
As a nonintrusive method, retina imaging provides a better way to diagnose ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed in this paper to segment the fundus image automatically. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty of low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is easily implemented compared to supervised vessel extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), achieving an average accuracy of 0.9390 with 0.7358 sensitivity and 0.9680 specificity on the DRIVE dataset, and an average accuracy of 0.9409 with 0.7449 sensitivity and 0.9690 specificity on the STARE dataset. The experimental results show that our method is effective and is also robust to some kinds of pathological images compared with traditional level set methods. PMID:28840122
TBDQ: A Pragmatic Task-Based Method to Data Quality Assessment and Improvement
Vaziri, Reza; Mohsenzadeh, Mehran; Habibi, Jafar
2016-01-01
Organizations are increasingly accepting data quality (DQ) as a major key to their success. In order to assess and improve DQ, various methods have been devised. Many of these methods attempt to raise DQ by directly manipulating low-quality data. Such methods operate reactively and are suitable for organizations with highly developed integrated systems. However, there is a lack of a proactive DQ method for businesses with weak IT infrastructure, where data quality is largely affected by tasks performed by human agents. This study aims to develop and evaluate a new method for structured data that is simple and practical, so that it can easily be applied to real-world situations. The new method detects the potentially risky tasks within a process and adds new improving tasks to counter them. To achieve continuous improvement, an award system is also developed to help with the better selection of the proposed improving tasks. The task-based DQ method (TBDQ) is most appropriate for small and medium organizations, and simplicity of implementation is one of its most prominent features. TBDQ is applied in a case study of an international trade company, which shows that TBDQ is effective in selecting optimal activities for DQ improvement in terms of cost and improvement. PMID:27192547
Stol, Daphne M; Hollander, Monika; Nielen, Markus M J; Badenbroek, Ilse F; Schellevis, François G; de Wit, Niek J
2018-03-01
Current guidelines acknowledge the need for cardiometabolic disease (CMD) prevention and recommend five-yearly screening of a targeted population. In recent years programs for selective CMD prevention have been developed, but implementation is challenging. The question arises whether general practices are adequately prepared. The aim of this study is therefore to assess the organizational preparedness of Dutch general practices, and the facilitators of and barriers to performing CMD prevention in practices currently implementing selective CMD prevention. Design: observational study. Setting: Dutch primary care. Subjects: general practices. Main outcome measures: organizational characteristics. General practices implementing selective CMD prevention are more often organized as a group practice (49% vs. 19%, p < .001) and are better organized regarding chronic disease management than reference practices. They are motivated to perform CMD prevention and can be considered 'frontrunners' among Dutch general practices with respect to their practice organization. The most important reported barriers are limited availability of staff (59%) and inadequate funding (41%). The organizational infrastructure of Dutch general practices is considered adequate for performing most steps of selective CMD prevention. Implementation of prevention programs that include easily accessible lifestyle interventions needs attention. All stakeholders involved share the responsibility to realize structural funding for programmed CMD prevention. These conditions should be taken into account in future implementation of selective CMD prevention. Key Points: There is a need for adequate CMD prevention, yet little is known about the organization of selective CMD prevention in general practices. • The organizational infrastructure of Dutch general practices is adequate for performing most steps of selective CMD prevention. • Implementation of selective CMD prevention programs including easily accessible services for lifestyle support should be the focus of attention. • Policy makers, health insurance companies and healthcare professionals share the responsibility to realize structural funding for selective CMD prevention.
Nishi, Mineo; Makishima, Hideo
1996-01-01
A composition for forming an anti-reflection film on a resist surface, which comprises an aqueous solution of a water-soluble fluorine compound, and a pattern formation method, which comprises the steps of coating a photoresist composition on a substrate, coating the above-mentioned composition for forming an anti-reflection film, exposing the coated film to form a specific pattern, and developing the photoresist, are provided. Since the composition for forming the anti-reflection film can be coated on the photoresist in the form of an aqueous solution, not only can the anti-reflection film be formed easily, but it can also be removed easily by rinsing with water or by alkali development. Therefore, with the pattern formation method according to the present invention, it is possible to form a pattern easily and with high dimensional accuracy.
Design and Implementation of an Environmental Mercury Database for Northeastern North America
NASA Astrophysics Data System (ADS)
Clair, T. A.; Evers, D.; Smith, T.; Goodale, W.; Bernier, M.
2002-12-01
An important issue faced when attempting to interpret geochemical variability studies across large regions is the accumulation, access and consistent display of data from a large number of sources. We were given the opportunity to provide a regional assessment of mercury distribution in surface waters, sediments, invertebrates, fish, and birds in a region extending from New York State to the Island of Newfoundland. We received over 20 individual databases from State, Provincial, and Federal governments, as well as from university researchers in both Canada and the United States. These databases came in a variety of formats and sizes. Our challenge was to find a way of accumulating and presenting the large amounts of acquired data in a consistent, easily accessible fashion that could then be more easily interpreted. Moreover, the database had to be portable and easily distributable to the large number of study participants. We developed a static database structure using a web-based approach, which we were able to mount on a server accessible to all project participants. The site also contained all the necessary documentation related to the data and its acquisition, as well as the methods used in its analysis and interpretation. We then copied the complete web site onto CD-ROMs, which we distributed to all project participants, funding agencies, and other interested parties. The CD-ROM formed a permanent record of the project and was issued ISSN and ISBN numbers so that the information remains accessible to researchers in perpetuity. Here we present an overview of the CD-ROM and data structures, of the information accumulated over the first year of the study, and an initial interpretation of the results.
NASA Astrophysics Data System (ADS)
Pasquini, Lorena; Twyman, Chasca; Wainwright, John
2010-11-01
There has been increasing recognition within systematic conservation planning of the need to include social data alongside biophysical assessments. However, in the approaches to identify potential conservation sites, there remains much room for improvement in the treatment of social data. In particular, few rigorous methods to account for the diversity of less-easily quantifiable social attributes that influence the implementation success of conservation sites (such as willingness to conserve) have been developed. We use a case-study analysis of private conservation areas within the Little Karoo, South Africa, as a practical example of the importance of incorporating social data into the process of selecting potential conservation sites to improve their implementation likelihood. We draw on extensive data on the social attributes of our case study obtained from a combination of survey questionnaires and semi-structured interviews. We discuss the need to determine the social attributes that are important for achieving the chosen implementation strategy by offering four tested examples of important social attributes in the Little Karoo: the willingness of landowners to take part in a stewardship arrangement, their willingness to conserve, their capacity to conserve, and the social capital among private conservation area owners. We then discuss the process of using an implementation likelihood ratio (derived from a combined measure of the social attributes) to assist the choice of potential conservation sites. We conclude by summarizing our discussion into a simple conceptual framework for identifying biophysically-valuable sites which possess a high likelihood that the desired implementation strategy will be realized on them.
RNAblueprint: flexible multiple target nucleic acid sequence design.
Hammer, Stefan; Tschiatschek, Birgit; Flamm, Christoph; Hofacker, Ivo L; Findeiß, Sven
2017-09-15
Realizing the value of synthetic biology in biotechnology and medicine requires the design of molecules with specialized functions. Due to its close structure-function relationship, and the availability of good structure prediction methods and energy models, RNA is perfectly suited to be synthetically engineered with predefined properties. However, currently available RNA design tools cannot be easily adapted to accommodate new design specifications. Furthermore, complicated sampling and optimization methods are often developed to suit a specific RNA design goal, adding to their inflexibility. We developed a C++ library implementing a graph coloring approach to stochastically sample sequences compatible with structural and sequence constraints from the typically very large solution space. The approach allows one to specify and explore the solution space in a well-defined way. Our library also guarantees uniform sampling, which makes optimization runs performant not only by avoiding re-evaluation of already found solutions, but also by raising the probability of finding better solutions in long optimization runs. We show that our software can be combined with any other software package to allow diverse RNA design applications. Scripting interfaces allow the easy adaptation of existing code to accommodate new scenarios, making the whole design process very flexible. We implemented example design approaches written in Python to demonstrate these advantages. RNAblueprint, Python implementations, and benchmark datasets are available on GitHub: https://github.com/ViennaRNA. s.hammer@univie.ac.at, ivo@tbi.univie.ac.at or sven@tbi.univie.ac.at. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
van der Ploeg, Eva S; Mbakile, Tapiwa; Genovesi, Sandra; O'Connor, Daniel W
2012-11-01
Advanced dementia may be accompanied by behavioral and psychological symptoms of dementia (BPSD). BPSD stemming from pain, depression, or psychosis benefit from treatment with drugs, but in other cases, medications have limited efficacy and may elicit adverse effects. Therefore, more attention has been paid to non-pharmacological interventions, which have fewer risks and can be successful in reducing agitation and negative mood. However, these interventions are frequently not implemented in nursing homes due to staffing constraints. This study explores the potential of volunteers to further assist staff. We interviewed 18 staff members and 39 volunteers in 17 aged care facilities in southeast Melbourne, Australia. Three-quarters of the facilities in this region worked with at least one regular volunteer. Both self-interest and altruistic reasons were identified as motives for volunteering. Volunteers were perceived by facility representatives as helpful to residents through provision of stimulation and company. However, they were discouraged from engaging with individuals with prominent BPSD. A majority of facility representatives and volunteers had experienced some difficulties in negotiating working relationships but most were easily resolved. A large majority of volunteers expressed an interest in learning new methods of interacting with residents. Despite their beneficial effects for agitated residents, non-pharmacological interventions are often not implemented in aged care facilities. Staff members often lack time but current volunteers in the sector are available, experienced, and interested in learning new methods of interacting. Volunteers therefore potentially are a valuable resource to assist with the application of new treatments.
Maity, Arnab; Carroll, Raymond J; Mammen, Enno; Chatterjee, Nilanjan
2009-01-01
Motivated by the problem of testing for genetic effects on complex traits in the presence of gene-environment interaction, we develop score tests for general semiparametric regression problems that involve a Tukey-style 1-degree-of-freedom form of interaction between parametrically and non-parametrically modelled covariates. We find that the score test in this type of model, as recently developed by Chatterjee and co-workers in the fully parametric setting, is biased and requires undersmoothing to be valid in the presence of non-parametric components. Moreover, in the presence of repeated outcomes, the asymptotic distribution of the score test depends on the estimation of functions that are defined as solutions of integral equations, making implementation difficult and computationally taxing. We develop profiled score statistics that are unbiased and asymptotically efficient and can be computed using standard bandwidth selection methods. In addition, to overcome the difficulty of solving functional equations, we give easy interpretations of the target functions, which in turn allow us to develop estimation procedures that can be easily implemented using standard computational methods. We present simulation studies to evaluate the type I error and power of the proposed method compared with a naive test that does not consider interaction. Finally, we illustrate our methodology by analysing data from a case-control study of colorectal adenoma that was designed to investigate the association between colorectal adenoma and the candidate gene NAT2 in relation to smoking history.
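For orientation, one hedged reading of the model class (notation ours, not taken from the paper): with a parametrically modelled covariate G, an unspecified smooth function θ(·) of the remaining covariates X, and link g, the Tukey-style form introduces a single interaction parameter γ, so that testing H0: γ = 0 costs one degree of freedom regardless of the dimension of X:

```latex
g\{E(Y \mid G, X)\} \;=\; \beta\, G \;+\; \theta(X) \;+\; \gamma\, G\, \theta(X)
```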
Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves.
Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek
2015-07-21
There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design.
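A hedged sketch of the two-moving-average idea (the window lengths and offset β below are illustrative assumptions, not the published clinically calibrated values):

```python
# Event/cycle moving averages with a dynamic threshold marking candidate
# T wave blocks. Windows and beta are illustrative, not the paper's
# calibration; QRS complexes are assumed already removed or suppressed.
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def t_wave_blocks(ecg, fs, beta=0.08):
    rectified = np.abs(ecg)
    ma_event = moving_average(rectified, max(1, int(0.070 * fs)))  # ~peak scale
    ma_cycle = moving_average(rectified, max(1, int(0.140 * fs)))  # ~wave scale
    threshold = ma_cycle + beta * rectified.mean()                 # dynamic offset
    return ma_event > threshold           # True inside candidate T wave blocks
```

Because the detector reduces to two moving averages and a comparison, it maps directly onto a digital filter design, which is what makes it attractive for battery-driven devices.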
The enhancement of friction ridge detail on brass ammunition casings using cold patination fluid.
James, Richard Michael; Altamimi, Mohamad Jamal
2015-12-01
Brass ammunition is commonly found at firearms-related crime scenes. For this reason, many studies have focused on evidence that can be obtained from brass ammunition, such as DNA, gunshot residue and fingerprints. Latent fingerprints on ammunition can provide good forensic evidence; however, fingerprint development on ammunition casings has proven to be difficult. A method using cold patination fluid is described as a potential tool to enhance friction ridge detail on brass ammunition casings. Current latent fingerprint development methods for brass ammunition have either failed to provide the necessary quality of friction ridge detail or can be very time consuming and require expensive equipment. In this study, the enhancement of fingerprints on live ammunition has been achieved with a good level of detail, whilst development on spent casings has also been possible to an extent. Development with cold patination fluid has proven to be a quick, simple and cost-effective method for fingerprint development on brass ammunition that can be easily implemented for routine police work. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.
Separation in Logistic Regression: Causes, Consequences, and Control.
Mansournia, Mohammad Ali; Geroldinger, Angelika; Greenland, Sander; Heinze, Georg
2018-04-01
Separation is encountered in regression models with a discrete outcome (such as logistic regression) where the covariates perfectly predict the outcome. It is most frequent under the same conditions that lead to small-sample and sparse-data bias, such as presence of a rare outcome, rare exposures, highly correlated covariates, or covariates with strong effects. In theory, separation will produce infinite estimates for some coefficients. In practice, however, separation may be unnoticed or mishandled because of software limits in recognizing and handling the problem and in notifying the user. We discuss causes of separation in logistic regression and describe how common software packages deal with it. We then describe methods that remove separation, focusing on the same penalized-likelihood techniques used to address more general sparse-data problems. These methods improve accuracy, avoid software problems, and allow interpretation as Bayesian analyses with weakly informative priors. We discuss likelihood penalties, including some that can be implemented easily with any software package, and their relative advantages and disadvantages. We provide an illustration of ideas and methods using data from a case-control study of contraceptive practices and urinary tract infection.
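A minimal sketch of the phenomenon and of one easily implemented penalty (an L2/ridge penalty, which acts like a weakly informative Gaussian prior; this stands in for the penalties surveyed, not the authors' own code):

```python
# Perfect separation: the covariate's sign predicts the outcome exactly,
# so the unpenalized coefficient diverges; an L2 penalty keeps it finite.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(40, 1))
y = (x[:, 0] > 0).astype(int)                     # covariate separates the outcome

near_mle = LogisticRegression(C=1e10).fit(x, y)   # essentially unpenalized
ridge = LogisticRegression(C=1.0).fit(x, y)       # weakly informative prior

print("near-MLE coef:", near_mle.coef_.ravel())   # very large magnitude
print("penalized coef:", ridge.coef_.ravel())     # finite, stable
```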
Point-Mass Aircraft Trajectory Prediction Using a Hierarchical, Highly-Adaptable Software Design
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Woods, Sharon E.; Wing, David J.
2017-01-01
A highly adaptable and extensible method for predicting four-dimensional trajectories of civil aircraft has been developed. This method, Behavior-Based Trajectory Prediction, is based on taxonomic concepts developed for the description and comparison of trajectory prediction software. A hierarchical approach to the "behavioral" layer of a point-mass model of aircraft flight, a clear separation between the "behavioral" and "mathematical" layers of the model, and an abstraction of the methods of integrating differential equations in the "mathematical" layer have been demonstrated to support aircraft models of different types (in particular, turbojet vs. turboprop aircraft) using performance models at different levels of detail and in different formats, and promise to be easily extensible to other aircraft types and sources of data. The resulting trajectories predict location, altitude, lateral and vertical speeds, and fuel consumption along the flight path of the subject aircraft accurately and quickly, accounting for local conditions of wind and outside air temperature. The Behavior-Based Trajectory Prediction concept was implemented in NASA's Traffic Aware Planner (TAP) flight-optimizing cockpit software application.
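An illustrative sketch of the layering described above (this is an assumption-laden toy, not NASA's TAP implementation): a "behavioral" layer chooses targets, and a separate "mathematical" layer integrates the point-mass state:

```python
# Toy point-mass climb with the behavioral layer choosing targets and the
# mathematical layer advancing state with explicit Euler steps.
from dataclasses import dataclass

@dataclass
class State:
    t_s: float
    altitude_ft: float
    speed_kt: float

def climb_behavior(state: State) -> dict:
    """Behavioral layer: command a constant-speed climb at 2000 ft/min."""
    return {"vs_fpm": 2000.0, "speed_kt": 280.0}

def step(state: State, cmd: dict, dt_s: float) -> State:
    """Mathematical layer: one explicit Euler step of the point-mass model."""
    return State(
        t_s=state.t_s + dt_s,
        altitude_ft=state.altitude_ft + cmd["vs_fpm"] * dt_s / 60.0,
        speed_kt=cmd["speed_kt"],
    )

s = State(0.0, 10000.0, 280.0)
while s.altitude_ft < 30000.0:          # predict until top of climb
    s = step(s, climb_behavior(s), dt_s=5.0)
print(f"top of climb reached at t = {s.t_s:.0f} s")
```

Swapping the behavior function (say, for a turboprop descent profile) leaves the integrator untouched, which is the extensibility the taxonomy-based design aims for.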
Lai, Keke; Kelley, Ken
2011-06-01
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
Scaling Laws Applied to a Modal Formulation of the Aeroservoelastic Equations
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.
2002-01-01
A method of scaling is described that easily converts the aeroelastic equations of motion of a full-sized aircraft into ones of a wind-tunnel model. To implement the method, a set of rules is provided for the conversion process involving matrix operations with scale factors. In addition, a technique for analytically incorporating a spring mounting system into the aeroelastic equations is also presented. As an example problem, a finite element model of a full-sized aircraft is introduced from the High Speed Research (HSR) program to exercise the scaling method. With a set of scale factor values, a brief outline is given of a procedure to generate the first-order aeroservoelastic analytical model representing the wind-tunnel model. To verify the scaling process as applied to the example problem, the root-locus patterns from the full-sized vehicle and the wind-tunnel model are compared to see if the root magnitudes scale with the frequency scale factor value. Selected time-history results are given from a numerical simulation of an active-controlled wind-tunnel model to demonstrate the utility of the scaling process.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
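A hedged sketch of a fixed-knot regression spline design matrix via a truncated power basis (the paper's reparameterization and varying-order construction are more general; the names here are illustrative):

```python
# Truncated power basis for a degree-d piecewise polynomial with continuity
# of derivatives up to d-1 at each knot. Illustrative, not the paper's code.
import numpy as np

def truncated_power_basis(x, knots, degree):
    cols = [x**p for p in range(degree + 1)]                  # global polynomial
    cols += [np.maximum(x - k, 0.0)**degree for k in knots]   # one column per knot
    return np.column_stack(cols)

x = np.linspace(0.0, 10.0, 200)
X = truncated_power_basis(x, knots=[3.0, 7.0], degree=2)   # quadratic spline
print(X.shape)   # (200, 5): 3 polynomial columns + 2 knot columns
```

A design matrix of this form can be passed directly to standard mixed-model fitting routines, which is what makes the approach easy to program in SAS or S-plus.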
Drawing Sensors with Ball-Milled Blends of Metal-Organic Frameworks and Graphite
Ko, Michael; Aykanat, Aylin; Smith, Merry K.
2017-01-01
The synthetically tunable properties and intrinsic porosity of conductive metal-organic frameworks (MOFs) make them promising materials for transducing selective interactions with gaseous analytes in an electrically addressable platform. Consequently, conductive MOFs are valuable functional materials with high potential utility in chemical detection. The implementation of these materials, however, is limited by the available methods for device incorporation due to their poor solubility and moderate electrical conductivity. This manuscript describes a straightforward method for the integration of moderately conductive MOFs into chemiresistive sensors by mechanical abrasion. To improve electrical contacts, blends of MOFs with graphite were generated using a solvent-free ball-milling procedure. While most bulk powders of pure conductive MOFs were difficult to integrate into devices directly via mechanical abrasion, the compressed solid-state MOF/graphite blends were easily abraded onto the surface of paper substrates equipped with gold electrodes to generate functional sensors. This method was used to prepare an array of chemiresistors, from four conductive MOFs, capable of detecting and differentiating NH3, H2S and NO at parts-per-million concentrations. PMID:28946624
Improved spring model-based collaborative indoor visible light positioning
NASA Astrophysics Data System (ADS)
Luo, Zhijie; Zhang, WeiNan; Zhou, GuoFu
2016-06-01
Gaining accuracy in the indoor positioning of individuals is important, as many location-based services rely on the user's current position to provide useful services. Many researchers have studied indoor positioning techniques based on WiFi and Bluetooth. However, these have disadvantages such as low accuracy or high cost. In this paper, we propose an indoor positioning system in which visible light radiated from light-emitting diodes is used to locate the position of receivers. Compared with existing methods using light-emitting diode light, we present a high-precision, simple-to-implement collaborative indoor visible light positioning system based on an improved spring model. We first estimate coordinate position information using the visible light positioning system, and then use the spring model to correct positioning errors. The system can be deployed easily because it does not require additional sensors, and the occlusion problem of visible light is alleviated. We also describe simulation experiments, which confirm the feasibility of the proposed method.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
Diagonal dominance for the multivariable Nyquist array using function minimization
NASA Technical Reports Server (NTRS)
Leininger, G. G.
1977-01-01
A new technique for the design of multivariable control systems using the multivariable Nyquist array method was developed. A conjugate direction function minimization algorithm is utilized to achieve a diagonal dominant condition over the extended frequency range of the control system. The minimization is performed on the ratio of the moduli of the off-diagonal terms to the moduli of the diagonal terms of either the inverse or direct open loop transfer function matrix. Several new feedback design concepts were also developed, including: (1) dominance control parameters for each control loop; (2) compensator normalization to evaluate open loop conditions for alternative design configurations; and (3) an interaction index to determine the degree and type of system interaction when all feedback loops are closed simultaneously. This new design capability was implemented on an IBM 360/75 in a batch mode but can be easily adapted to an interactive computer facility. The method was applied to the Pratt and Whitney F100 turbofan engine.
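To make the minimized objective concrete, a hedged sketch of the row-dominance ratio (a generic computation consistent with the description above, not the original IBM 360/75 code):

```python
# Row diagonal-dominance ratios of a (complex) transfer-function matrix
# evaluated at one frequency: sum of off-diagonal moduli divided by the
# diagonal modulus. Dominance requires every ratio to be below 1.
import numpy as np

def dominance_ratios(G):
    A = np.abs(np.asarray(G, dtype=complex))
    return (A.sum(axis=1) - np.diag(A)) / np.diag(A)

G = np.array([[2.0 + 1.0j, 0.3],
              [0.4, 1.5 - 0.5j]])          # example 2x2 open-loop plant
print(dominance_ratios(G))                 # both entries < 1: diagonally dominant
```

In the design procedure, a compensator is adjusted by the conjugate direction algorithm so that these ratios stay below unity across the frequency range of interest.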
A reconfigurable visual-programming library for real-time closed-loop cellular electrophysiology
Biró, István; Giugliano, Michele
2015-01-01
Most software platforms for cellular electrophysiology are limited in terms of flexibility, hardware support, ease of use, or re-configuration and adaptation for non-expert users. Moreover, advanced experimental protocols requiring real-time closed-loop operation to investigate excitability, plasticity, and dynamics are largely inaccessible to users without moderate to substantial computer proficiency. Here we present an approach based on MATLAB/Simulink, exploiting the benefits of LEGO-like visual programming and configuration combined with a small but easily extendible library of functional software components. We provide and validate several examples implementing conventional and more sophisticated experimental protocols, such as dynamic clamp or the combined use of intracellular and extracellular methods involving closed-loop real-time control. The functionality of each of these examples is demonstrated with relevant experiments. These can be used as a starting point to create and support a larger variety of electrophysiological tools and methods, hopefully extending the range of default techniques and protocols currently employed in experimental labs across the world. PMID:26157385
Nonlinear analysis of switched semi-active controlled systems
NASA Astrophysics Data System (ADS)
Eslaminasab, Nima; Vahid A., Orang; Golnaraghi, Farid
2011-02-01
Semi-active systems improve the suspension performance of vehicles more effectively than conventional passive systems by simultaneously improving ride comfort and road handling. Also, because of size, weight, price and performance advantages, they have attracted more interest than both active and passive systems. Probably the most neglected aspect of semi-active on-off control systems and strategies is the effect of the added nonlinearities of those systems, which are introduced and analysed in this paper. To do so, numerical techniques, the analytical method of averaging, and experimental analysis are deployed. In this paper, a new method to analyse, calculate and compare the performance of semi-active controlled systems is proposed; further, a new controller based on observations of actual test data is proposed to eliminate the adverse effects of the added nonlinearities. The significance of the proposed new system is the simplicity of the algorithm and its ease of implementation. In fact, this new semi-active control strategy could easily be adopted and used with most existing semi-active control systems.
Infinitely dilute partial molar properties of proteins from computer simulation.
Ploetz, Elizabeth A; Smith, Paul E
2014-11-13
A detailed understanding of temperature and pressure effects on an infinitely dilute protein's conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method's feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages.
NASA Astrophysics Data System (ADS)
Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu
2018-04-01
A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of gradient data locally, and it functions as a guideline for the integration path. The presented method can be employed for wavefront estimation from its slopes over generally shaped surfaces, with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by irregular shapes of the surface under test. The performance of QMPI is discussed through simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be implemented easily, with no iteration, compared to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision demands.
Call progress time measurement in IP telephony
NASA Astrophysics Data System (ADS)
Khasnabish, Bhumip
1999-11-01
Usually a voice call is established through multiple stages in IP telephony. In the first stage, a phone number is dialed to reach a near-end or call-originating IP telephony gateway. The next stages involve user identification, through delivering an m-digit user ID to the authentication and/or billing server, and then user authentication using an n-digit PIN. After that, provided that authentication is successful, the caller is given a last-stage dial tone and allowed to dial a destination phone number. In this paper, we present a very flexible method for measuring call progress time in IP telephony. The proposed technique can be used to measure the system response time at every stage. It is flexible in that it can easily be modified to include a new 'tone' or set of tones, or to use 'voice begin' detection at every stage to detect the system's response. The proposed method has been implemented using scripts written in the Hammer visual basic language for testing with a few commercially available IP telephony gateways.
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bates, Cameron Russell; Mckigney, Edward Allen
The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms, currently implemented in C++ and Python, to easily perform flat-prior Markov Chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
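A minimal sketch of pure Metropolis sampling with a flat prior (an illustration of the algorithm itself, not the Metis library interface):

```python
# Pure Metropolis with a flat prior: the acceptance ratio reduces to a
# likelihood ratio. Illustrative sketch, not the Metis API.
import numpy as np

def metropolis(log_likelihood, x0, n_steps, step_size=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logl = log_likelihood(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        proposal = x + step_size * rng.normal(size=x.shape)  # symmetric proposal
        logl_prop = log_likelihood(proposal)
        if np.log(rng.random()) < logl_prop - logl:          # Metropolis accept
            x, logl = proposal, logl_prop
        chain[i] = x
    return chain

# Example: sample a standard normal likelihood
samples = metropolis(lambda t: -0.5 * np.sum(t**2), x0=[0.0], n_steps=5000)
print(samples.mean(), samples.std())
```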
Zeng, Qiang; Li, Tao; Song, Xinbing; Zhang, Xiangdong
2016-04-18
We propose and experimentally demonstrate an optimized setup to implement the quantum controlled-NOT operation using polarization and orbital angular momentum qubits. This device is more adaptive to inputs with various polarizations and can work in both the classical and quantum single-photon regimes. The logic operations performed by such a setup not only possess high stability and a polarization-free character, but can also be easily extended to deal with multi-qubit input states. As an example, the experimental implementation of the generalized three-qubit Toffoli gate has been presented.
Design of Dual-Mode Local Oscillators Using CMOS Technology for Motion Detection Sensors.
Ha, Keum-Won; Lee, Jeong-Yun; Kim, Jeong-Geun; Baek, Donghyun
2018-04-01
Recently, studies have been actively carried out to implement motion-detecting sensors by applying radar techniques. Doppler radar and frequency-modulated continuous wave (FMCW) radar are mainly used, but each type has drawbacks. In Doppler radar, no signal is detected when the movement stops. Also, FMCW radar cannot function when the detected object is near the sensor. Therefore, by implementing a single continuous wave (CW) radar that operates in dual mode, the disadvantages of each mode can be compensated for. In this paper, a dual-mode local oscillator (LO) is proposed that makes a CW radar operate as either a Doppler or an FMCW radar. To make the dual-mode LO, a method that controls the division ratio of the phase-locked loop (PLL) is used. To support both radar modes easily, the proposed LO is implemented by adding a frequency sweep generator (FSG) block to a fractional-N PLL. The operation mode of the LO is determined by whether this block is operating. Since most radar sensors are used in conjunction with microcontroller units (MCUs), the proposed architecture is capable of dual-mode operation by changing only the input control code. In addition, all components such as the VCO, LDO, and loop filter are integrated into the chip, so complexity and interface issues can be solved when implementing radar sensors. Thus, the proposed dual-mode LO is suitable as a radar sensor.
Tan, Xin; Tahini, Hassan A; Smith, Sean C
2016-12-07
Electrocatalytic, switchable hydrogen storage promises both tunable kinetics and facile reversibility without the need for specific catalysts. The feasibility of this approach relies on having materials that are easy to synthesize and possess good electrical conductivities. Graphitic carbon nitride (g-C4N3) has been predicted to display charge-responsive binding with molecular hydrogen, the only such conductive sorbent material discovered to date. As yet, however, this conductive variant of graphitic carbon nitride is not readily synthesized by scalable methods. Here, we examine the possibility of conductive and easily synthesized boron-doped graphene nanosheets (B-doped graphene) as sorbent materials for practical applications of electrocatalytically switchable hydrogen storage. Using first-principles calculations, we find that the adsorption energy of H2 molecules on B-doped graphene can be dramatically enhanced by removing electrons from, and thereby positively charging, the adsorbent. Thus, by controlling the charge injected into or depleted from the adsorbent, one can effectively tune the storage/release processes, which occur spontaneously without any energy barriers. At full hydrogen coverage, positively charged BC5 achieves storage capacities of up to 5.3 wt %. Importantly, B-doped graphenes such as BC49, BC7, and BC5 have good electrical conductivity and can be easily synthesized by scalable methods, which positions this class of material as a very good candidate for charge injection/release. These predictions pave the route towards practical implementation of electrocatalytic systems with switchable storage/release capacities that offer high capacity for hydrogen storage.
Peters, James F.; Ramanna, Sheela; Tozzi, Arturo; İnan, Ebubekir
2017-01-01
We introduce a novel method for the measurement of information level in fMRI (functional Magnetic Resonance Imaging) neural data sets, based on image subdivision in small polygons equipped with different entropic content. We show how this method, called maximal nucleus clustering (MNC), is a novel, fast and inexpensive image-analysis technique, independent from the standard blood-oxygen-level dependent signals. MNC facilitates the objective detection of hidden temporal patterns of entropy/information in zones of fMRI images generally not taken into account by the subjective standpoint of the observer. This approach befits the geometric character of fMRIs. The main purpose of this study is to provide a computable framework for fMRI that not only facilitates analyses, but also provides an easily decipherable visualization of structures. This framework commands attention because it is easily implemented using conventional software systems. In order to evaluate the potential applications of MNC, we looked for the presence of a fourth dimension's distinctive hallmarks in a temporal sequence of 2D images taken during spontaneous brain activity. Indeed, recent findings suggest that several brain activities, such as mind-wandering and memory retrieval, might take place in the functional space of a four dimensional hypersphere, which is a double donut-like structure undetectable in the usual three dimensions. We found that the Rényi entropy is higher in MNC areas than in the surrounding ones, and that these temporal patterns closely resemble the trajectories predicted by the possible presence of a hypersphere in the brain. PMID:28203153
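A minimal sketch of the Rényi entropy computation over an image patch histogram (illustrative only; the MNC pipeline, patch geometry, and parameters are not reproduced here):

```python
# Rényi entropy of order alpha for a discrete distribution, applied to the
# grey-level histogram of an image patch. Illustrative, not the MNC code.
import numpy as np

def renyi_entropy(values, alpha=2.0, bins=64):
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))          # Shannon limit
    return float(np.log(np.sum(p**alpha)) / (1.0 - alpha))

patch = np.random.default_rng(0).normal(size=(16, 16))   # stand-in fMRI patch
print(renyi_entropy(patch.ravel(), alpha=2.0))
```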
A high resolution spatial population database of Somalia for disease risk mapping.
Linard, Catherine; Alegana, Victor A; Noor, Abdisalan M; Snow, Robert W; Tatem, Andrew J
2010-09-14
Millions of Somali have been deprived of basic health services due to the unstable political situation of their country. Attempts are being made to reconstruct the health sector, in particular to estimate the extent of infectious disease burden. However, any approach that requires the use of modelled disease rates requires reasonable information on population distribution. In a low-income country such as Somalia, population data are lacking, are of poor quality, or become outdated rapidly. Modelling methods are therefore needed for the production of contemporary and spatially detailed population data. Here land cover information derived from satellite imagery and existing settlement point datasets were used for the spatial reallocation of populations within census units. We used simple and semi-automated methods that can be implemented with free image processing software to produce an easily updatable gridded population dataset at 100 × 100 meters spatial resolution. The 2010 population dataset was matched to administrative population totals projected by the UN. Comparison tests between the new dataset and existing population datasets revealed important differences in population size distributions, and in population at risk of malaria estimates. These differences are particularly important in more densely populated areas and strongly depend on the settlement data used in the modelling approach. The results show that it is possible to produce detailed, contemporary and easily updatable settlement and population distribution datasets of Somalia using existing data. The 2010 population dataset produced is freely available as a product of the AfriPop Project and can be downloaded from: http://www.afripop.org.
Technology Integration in Science Classrooms: Framework, Principles, and Examples
ERIC Educational Resources Information Center
Kim, Minchi C.; Freemyer, Sarah
2011-01-01
A great number of technologies and tools have been developed to support science learning and teaching. However, science teachers and researchers point out numerous challenges to implementing such tools in science classrooms. For instance, guidelines, lesson plans, Web links, and tools teachers can easily find through Web-based search engines often…
Intensifying Innovation Adoption in Educational eHealth
ERIC Educational Resources Information Center
Rissanen, M. K.
2014-01-01
In demanding innovation areas such as eHealth, the primary emphasis is easily placed on the product and process quality aspects in the design phase. Customer quality may receive adequate attention when the target audience is well-defined. But if the multidimensional evaluative focus does not get enough space until the implementation phase, this…
ERIC Educational Resources Information Center
DePasquale, Nicole; Hill-Briggs, Felicia; Darrell, Linda; Boyer, LaPricia Lewis; Ephraim, Patti; Boulware, L. Ebony
2012-01-01
Live kidney transplantation (LKT) is underused by patients with end-stage renal disease. Easily implementable and effective interventions to improve patients' early consideration of LKT are needed. The Talking About Live Kidney Donation (TALK) social worker intervention (SWI) improved consideration and pursuit of LKT among patients with…
Implementing the 40 Gallon Challenge to Increase Water Conservation
ERIC Educational Resources Information Center
Sheffield, Mary Carol; Bauske, Ellen; Pugliese, Paul; Kolich, Heather; Boellstorff, Diane
2016-01-01
The 40 Gallon Challenge is an easy-to-use, comprehensive indoor and outdoor water conservation educational tool. It can be used nationwide and easily incorporated into existing educational programs. Promotional materials and pledge cards are available on the 40 Gallon Challenge website and can be modified by educators. The website displays data…
Design and Calibration of an Inexpensive Digital Anemometer
ERIC Educational Resources Information Center
Hernandez-Walls, R.; Rojas-Mayoral, E.; Baez-Castillo, L.; Rojas-Mayoral, B.
2008-01-01
An inexpensive and easily implemented device to measure wind velocity is proposed. This prototype has the advantage of being able to measure both the speed and the direction of the wind in two dimensions. The device utilizes a computational interface commonly referred to as a "mouse." The mouse proposed for this prototype contains an…
Running into Trouble with the Time-Dependent Propagation of a Wavepacket
ERIC Educational Resources Information Center
Garriz, Abel E.; Sztrajman, Alejandro; Mitnik, Dario
2010-01-01
The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations.…
Developing Interactive Educational Engineering Software for the World Wide Web with Java.
ERIC Educational Resources Information Center
Reed, John A.; Afjeh, Abdollah A.
1998-01-01
Illustrates the design and implementation of a Java applet for use in educational propulsion engineering curricula. The Java Gas Turbine Simulator applet provides an interactive graphical environment which allows the rapid, efficient construction and analysis of arbitrary gas turbine systems. The simulator can be easily accessed from the World…
Stevens, Simon
2005-02-17
Control of the tariff system should be handed over to an arm's-length technical agency similar to the US's Medicare Payment Advisory Commission, according to HSJ columnist and former prime ministerial adviser Simon Stevens. He also warns that the 'understandable decision' to slow implementation of the new system means that the elective tariff 'will be easily gamed'.
ERIC Educational Resources Information Center
Kanawha County Board of Education, Charleston, WV.
This booklet introduces secondary grade students to the criminal laws of West Virginia. It can easily be adapted and used by educators in other states. The authors believe that young people must recognize and understand these laws and the mechanisms which society uses to implement and enforce them if they are to function as an integral, important,…
Sensitive Technique For Detecting Alignment Of Seed Laser
NASA Technical Reports Server (NTRS)
Barnes, Norman P.
1994-01-01
Frequency response near resonance measured. Improved technique for detection and quantification of alignment of injection-seeding laser with associated power-oscillator laser proposed. Particularly useful in indicating alignment at spectral purity greater than 98 percent because it becomes more sensitive as perfect alignment approached. In addition, implemented relatively easily, without turning on power-oscillator laser.
Modified signed-digit trinary arithmetic by using optical symbolic substitution.
Awwal, A A; Islam, M N; Karim, M A
1992-04-10
Carry-free addition and borrow-free subtraction of modified signed-digit trinary numbers with optical symbolic substitution are presented. The proposed two-step and three-step algorithms can be easily implemented by using phase-only holograms, optical content-addressable memories, a multichannel correlator, or a polarization-encoded optical shadow-casting system.
Modified signed-digit trinary arithmetic by using optical symbolic substitution
NASA Astrophysics Data System (ADS)
Awwal, A. A. S.; Islam, M. N.; Karim, M. A.
1992-04-01
Carry-free addition and borrow-free subtraction of modified signed-digit trinary numbers with optical symbolic substitution are presented. The proposed two-step and three-step algorithms can be easily implemented by using phase-only holograms, optical content-addressable memories, a multichannel correlator, or a polarization-encoded optical shadow-casting system.
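To make the carry-free idea concrete, here is a minimal software sketch of signed-digit addition with the digit set {-1, 0, 1}; it follows a generic radix-2 transfer/interim-sum scheme rather than the optical symbolic-substitution rules of the two records above, but it shows why the result needs no carry propagation:

```python
def sd_add(x, y):
    """Carry-free addition of binary signed-digit numbers (digits -1, 0, 1,
    least-significant digit first). Step 1 picks a transfer t and interim sum
    w with x_i + y_i = 2t + w, using the sign of the next-lower digit pair so
    that step 2 can add w and t digit-wise with no carries."""
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x)); y = y + [0] * (n - len(y))
    p = [x[i] + y[i] for i in range(n)]          # position sums in -2..2
    t = [0] * (n + 1); w = [0] * (n + 1)
    for i in range(n):
        low = p[i - 1] if i > 0 else 0           # sign of incoming transfer
        if p[i] >= 2 or (p[i] == 1 and low >= 0):
            t[i + 1], w[i] = 1, p[i] - 2
        elif p[i] <= -2 or (p[i] == -1 and low < 0):
            t[i + 1], w[i] = -1, p[i] + 2
        else:
            w[i] = p[i]
    return [w[i] + t[i] for i in range(n + 1)]   # step 2: digit-wise, no carry

def sd_value(d):                                 # evaluate a signed-digit list
    return sum(di * 2 ** i for i, di in enumerate(d))

a, b = [1, 0, -1, 1], [1, 1, 0, -1]              # values 5 and -5
print(sd_value(sd_add(a, b)) == sd_value(a) + sd_value(b))  # True
```

The look-down at the next-lower position guarantees that the interim sum and incoming transfer never have the same nonzero sign, so the final addition is strictly digit-parallel, which is what makes such schemes attractive for optical hardware.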
Improving the Work Performance of Mentally Retarded Clients in a Sheltered Workshop.
ERIC Educational Resources Information Center
Starke, Mary C.; Wright, Jaice
1986-01-01
Twenty-three mentally retarded workshop clients were assigned to experimental or control groups in a 10-day intervention. The introduction of music as a reinforcer and elimination of distractions were among easily implemented, cost-effective program changes which resulted in significantly higher productivity rates for the experimental groups.…
Another Intuitive Approach to Stirling's Formula. Classroom Notes
ERIC Educational Resources Information Center
Osler, Thomas J.
2004-01-01
An intuitive derivation of Stirling's formula is presented, together with a modification that greatly improves its accuracy. The derivation is based on the closed form evaluation of the gamma function at an integer plus one-half. The modification is easily implemented on a hand-held calculator and often triples the number of significant digits…
Accountability in Education: An Imperative for Service Delivery in Nigerian School Systems
ERIC Educational Resources Information Center
Usman, Yunusa Dangara
2016-01-01
Schools and other educational institutions are established, maintained and sustained essentially to achieve certain assured objectives. The goals of such establishment cannot be easily achieved without putting in place certain mechanisms towards ensuring the success of implementation of its policies and programmes. In the education system, one of…
Building Higher-Order Markov Chain Models with EXCEL
ERIC Educational Resources Information Center
Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.
2004-01-01
Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…
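A toy version of the mixture model is straightforward to sketch (in Python rather than EXCEL; the lag weights are fixed for illustration, whereas the paper fits them with an efficient linear-programming algorithm):

```python
import numpy as np

def lag_transition(seq, n_states, lag):
    """Column-stochastic transition matrix estimated from counts of
    (x[t-lag] -> x[t]) pairs in a categorical sequence."""
    Q = np.zeros((n_states, n_states))
    for t in range(lag, len(seq)):
        Q[seq[t], seq[t - lag]] += 1.0
    col = Q.sum(axis=0)
    col[col == 0] = 1.0
    return Q / col

def predict_next(seq, Qs, lam):
    """Mixture prediction: P(next) = sum_i lam[i] * Q_i[:, x[t+1-i]]."""
    p = sum(l * Q[:, seq[-i]] for i, (l, Q) in enumerate(zip(lam, Qs), start=1))
    return p / p.sum()

seq = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1]           # toy categorical sequence
Qs = [lag_transition(seq, 3, i) for i in (1, 2)]  # second-order model
lam = [0.7, 0.3]  # illustrative weights; the paper fits these by linear programming
print(predict_next(seq, Qs, lam).round(3))        # state 2 should dominate
```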
ERIC Educational Resources Information Center
Witt, Autumn Song
2010-01-01
This dissertation follows an oral language assessment tool from initial design and implementation to validity analysis. The specialized variables of this study are the population: international teaching assistants and the purpose: spoken assessment as a hiring prerequisite. However, the process can easily be applied to other populations and…
Benefits of Using Online Student Response Systems in Japanese EFL Classrooms
ERIC Educational Resources Information Center
Mork, Cathrine-Mette
2014-01-01
Online student response systems (OSRSs) are fast replacing classroom response systems (CRSs), also known as personal or audience response systems or "clickers". OSRSs can more easily be implemented in the classroom because they are web-based and allow students to use any browser and device to do the "clicking" required to…
2015-16 Guide to Calculating School and District Grades
ERIC Educational Resources Information Center
Florida Department of Education, 2016
2016-01-01
School grades provide an easily understandable metric to measure the performance of a school. Parents and the general public can use the school grade and its associated components to understand how well each school is serving its students. The school grades calculation was revised substantially for the 2014-15 school year to implement statutory…
1985-06-01
...competitive commercial items such as automobiles and aircraft. 1.3 Implementation Considerations. 1.3.1 Technical Considerations. The major technical... and easily reprogrammable discs; and integrated portable videocomputer devices will become available.
Dye Degradation by Fungi: An Exercise in Applied Science for Biology Students
ERIC Educational Resources Information Center
Lefebvre, Daniel D.; Chenaux, Peter; Edwards, Maureen
2005-01-01
An easily implemented practical exercise in applied science for biology students is presented that uses fungi to degrade an azo-dye. This is an example of bioremediation, the employment of living organisms to detoxify or contain pollutants. Its interdisciplinary nature widens students' perspectives of biology by exposing them to a chemical…
Strategies for Minimizing Variability in Progress Monitoring of Oral Reading Fluency
ERIC Educational Resources Information Center
Bundock, Kaitlin; O'Keeffe, Breda V.; Stokes, Kristen; Kladis, Kristin
2018-01-01
Research has shown that: (1) Curriculum-based monitoring (CBM) can be easily implemented and interpreted by teachers (e.g., Fuchs, Deno, & Mirkin, 1984); (2) student outcomes have improved when teachers use CBM to inform instructional decision making (e.g., Fuchs, Fuchs, Hamlett, & Stecker, 1991); (3) reliable and valid measures have been…
Burnishing Systems: a Short Survey of the State-of-the-art
NASA Astrophysics Data System (ADS)
Bobrovskij, I. N.
2018-01-01
Modern technological solutions that implement a new surface plastic deformation technology are considered. A technological device implementing hyper-productive surface plastic deformation, or wide burnishing (machining time of only 2-3 workpiece revolutions), is presented. The device provides constant tool force regardless of runout, out-of-roundness and other surface shape defects; convenient and easily controlled force adjustment; precise positioning of tools and holders along the workpiece axis; and automated tool feed and retraction. A device implementing nanostructuring burnishing is also presented; its design eliminates the effect of auto-oscillations.
Meeting the DHCP Challenge: A Model for Implementing a Decentralized Hospital Computer Program
Catellier, Julie; Benway, Paula K.; Perez, Kathleen
1987-01-01
The James A. Haley Veterans' Hospital in Tampa has been a consistent leader in the implementation of automated systems within the VA. Our approach has been essentially to focus on obtaining maximum user involvement and contribution to the automation program within the Medical Center. Since clinical acceptance is vital to a viable program, a great deal of our effort has been aimed at maximizing the training and participation of physicians, nurses and other clinical staff. The following is a description of our organizational structure relative to this topic. We believe it to be a highly workable approach which can be easily implemented structurally at any hospital, public or private.
Method for auto-alignment of digital optical phase conjugation systems based on digital propagation
Jang, Mooseok; Ruan, Haowen; Zhou, Haojiang; Judkewitz, Benjamin; Yang, Changhuei
2014-01-01
Optical phase conjugation (OPC) has enabled many optical applications such as aberration correction and image transmission through fiber. In recent years, implementation of digital optical phase conjugation (DOPC) has opened up the possibility of its use in biomedical optics (e.g. deep-tissue optical focusing) due to its ability to provide greater-than-unity OPC reflectivity (the power ratio of the phase conjugated beam and input beam to the OPC system) and its flexibility to accommodate additional wavefront manipulations. However, the requirement for precise (pixel-to-pixel matching) alignment of the wavefront sensor and the spatial light modulator (SLM) limits the practical usability of DOPC systems. Here, we report a method for auto-alignment of a DOPC system by which the misalignment between the sensor and the SLM is auto-corrected through digital light propagation. With this method, we were able to accomplish OPC playback with a DOPC system with gross sensor-SLM misalignment by an axial displacement of up to ~1.5 cm, rotation and tip/tilt of ~5°, and in-plane displacement of ~5 mm (dependent on the physical size of the sensor and the SLM). Our auto-alignment method robustly achieved a DOPC playback peak-to-background ratio (PBR) corresponding to more than ~30% of the theoretical maximum. As an additional advantage, the auto-alignment procedure can be easily performed at will and, as such, allows us to correct for small mechanical drifts within the DOPC systems, thus overcoming a previously major DOPC system vulnerability. We believe that this reported method for implementing robust DOPC systems will broaden the practical utility of DOPC systems. PMID:24977504
Method for auto-alignment of digital optical phase conjugation systems based on digital propagation.
Jang, Mooseok; Ruan, Haowen; Zhou, Haojiang; Judkewitz, Benjamin; Yang, Changhuei
2014-06-16
Optical phase conjugation (OPC) has enabled many optical applications such as aberration correction and image transmission through fiber. In recent years, implementation of digital optical phase conjugation (DOPC) has opened up the possibility of its use in biomedical optics (e.g. deep-tissue optical focusing) due to its ability to provide greater-than-unity OPC reflectivity (the power ratio of the phase conjugated beam and input beam to the OPC system) and its flexibility to accommodate additional wavefront manipulations. However, the requirement for precise (pixel-to-pixel matching) alignment of the wavefront sensor and the spatial light modulator (SLM) limits the practical usability of DOPC systems. Here, we report a method for auto-alignment of a DOPC system by which the misalignment between the sensor and the SLM is auto-corrected through digital light propagation. With this method, we were able to accomplish OPC playback with a DOPC system with gross sensor-SLM misalignment by an axial displacement of up to ~1.5 cm, rotation and tip/tilt of ~5°, and in-plane displacement of ~5 mm (dependent on the physical size of the sensor and the SLM). Our auto-alignment method robustly achieved a DOPC playback peak-to-background ratio (PBR) corresponding to more than ~30% of the theoretical maximum. As an additional advantage, the auto-alignment procedure can be easily performed at will and, as such, allows us to correct for small mechanical drifts within the DOPC systems, thus overcoming a previously major DOPC system vulnerability. We believe that this reported method for implementing robust DOPC systems will broaden the practical utility of DOPC systems.
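The digital propagation at the heart of both records above can be sketched with a standard angular-spectrum propagator (numpy; the wavelength, pixel pitch and distance are illustrative, and the full auto-alignment also estimates rotation and tilt):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (meters) with the
    angular-spectrum method; dx is the pixel pitch. Digital propagation of
    this kind is what lets sensor-SLM misalignment be compensated in software."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)                     # spatial frequencies, cols
    fy = np.fft.fftfreq(n, d=dx)                     # spatial frequencies, rows
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)              # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative numbers only: 532 nm light, 8 um pixels, 1.5 cm gap
field = np.ones((256, 256), dtype=complex)
field[96:160, 96:160] = np.exp(1j * 0.5)             # a phase patch
out = angular_spectrum_propagate(field, 532e-9, 8e-6, 0.015)
print(abs(out).mean())
```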
Menéndez González, Manuel; Suárez-Sanmartin, Esther; García, Ciara; Martínez-Camblor, Pablo; Westman, Eric; Simmons, Andy
2016-03-26
Though a disproportionate rate of medial temporal lobe atrophy (MTA) represents a reliable marker of Alzheimer's disease (AD) pathology, measurement of MTA is not currently widely used in daily clinical practice. This is mainly because the methods available to date are sophisticated and difficult to implement in clinical practice (volumetric methods), are poorly explored (linear and planimetric methods), or lack objectivity (visual rating). Here, we aimed to compare the results of a manual planimetric measure (the yearly rate of absolute atrophy of the medial temporal lobe, 2D-yrA-MTL) with the results of an automated volumetric measure (the yearly rate of atrophy of the hippocampus, 3D-yrA-H). A series of 1.5T MRI studies on 290 subjects in the age range of 65-85 years, including patients with AD (n = 100), mild cognitive impairment (MCI) (n = 100), and matched controls (n = 90) from the AddNeuroMed study, were examined by two independent subgroups of researchers: one in charge of volumetric measures and the other in charge of planimetric measures. The means of both methods were significantly different between AD and the other two diagnostic groups. In the differential diagnosis of AD against controls, 3D-yrA-H performed significantly better than 2D-yrA-MTL, while differences were not statistically significant in the differential diagnosis of AD against MCI. Automated volumetry of the hippocampus is superior to manual planimetry of the MTL in the diagnosis of AD. Nevertheless, the 2D-yrA-MTL is a simpler method that could be easily implemented in clinical practice when volumetry is not available.
An Immersed-Boundary Method for Fluid-Structure Interaction in the Human Larynx
NASA Astrophysics Data System (ADS)
Luo, Haoxiang; Zheng, Xudong; Mittal, Rajat; Bielamowicz, Steven
2006-11-01
We describe a novel and accurate computational methodology for modeling the airflow and vocal fold dynamics in the human larynx. The model is useful in helping us gain deeper insight into the complicated bio-physics of phonation, and may have potential clinical application in the design and placement of synthetic implants in vocal fold surgery. The numerical solution of the airflow employs a previously developed immersed-boundary solver. However, in order to incorporate the vocal fold into the model, we have developed a new immersed-boundary method that can simulate the dynamics of multi-layered, viscoelastic solids. In this method, a finite-difference scheme is used to approximate the derivatives and ghost cells are defined near the boundary. To impose the traction boundary condition, a third-order polynomial is obtained using weighted least-squares fitting to approximate the function locally. Like its analogue for the flow solver, this immersed-boundary method for the solids has the advantage of simple grid generation, and may be easily implemented on parallel computers. In the talk, we will present simulation results on both specified vocal fold motion and flow-induced vocal fold vibration. Supported by NIDCD Grant R01 DC007125-01A1.
SlideSort: all pairs similarity search for short reads
Shimizu, Kana; Tsuda, Koji
2011-01-01
Motivation: Recent progress in DNA sequencing technologies calls for fast and accurate algorithms that can evaluate sequence similarity for a huge number of short reads. Searching similar pairs from a string pool is a fundamental process of de novo genome assembly, genome-wide alignment and other important analyses. Results: In this study, we designed and implemented an exact algorithm SlideSort that finds all similar pairs from a string pool in terms of edit distance. Using an efficient pattern growth algorithm, SlideSort discovers chains of common k-mers to narrow down the search. Compared to existing methods based on single k-mers, our method is more effective in reducing the number of edit distance calculations. In comparison to backtracking methods such as BWA, our method is much faster in finding remote matches, scaling easily to tens of millions of sequences. Our software has an additional function of single link clustering, which is useful in summarizing short reads for further processing. Availability: Executable binary files and C++ libraries are available at http://www.cbrc.jp/~shimizu/slidesort/ for Linux and Windows. Contact: slidesort@m.aist.go.jp; shimizu-kana@aist.go.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21148542
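The narrow-then-verify structure can be miniaturized as follows (a toy single-k-mer prefilter with exact edit-distance verification; SlideSort's chained-k-mer pattern growth is considerably more refined):

```python
from collections import defaultdict
from itertools import combinations

def edit_distance(a, b):
    """Standard dynamic-programming edit distance (the verification step)."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

def similar_pairs(reads, k=4, max_dist=2):
    """Candidate pairs must share at least one k-mer; candidates are then
    verified exactly. A toy stand-in for SlideSort's chained-k-mer narrowing."""
    buckets = defaultdict(set)
    for idx, r in enumerate(reads):
        for p in range(len(r) - k + 1):
            buckets[r[p:p + k]].add(idx)
    candidates = set()
    for ids in buckets.values():
        candidates.update(combinations(sorted(ids), 2))
    return [(i, j) for i, j in sorted(candidates)
            if edit_distance(reads[i], reads[j]) <= max_dist]

reads = ["ACGTACGT", "ACGTACGA", "TTTTCCCC", "ACGAACGT"]
print(similar_pairs(reads))   # pairs within edit distance 2
```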
NASA Technical Reports Server (NTRS)
Gurgiolo, Chris; Vinas, Adolfo F.
2009-01-01
This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained, from which the moments (up to the heat flux), anisotropies, and asymmetries of the velocity distribution function were calculated. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
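A least-squares version of the nine-coefficient fit can be sketched as follows (real spherical harmonics up to l = 2 stand in for the expansion; PEACE-specific geometry and weighting are omitted):

```python
import numpy as np

def real_sph_basis(theta, phi):
    """Real spherical harmonics up to l = 2 (nine functions, matching the
    abstract's minimum set of nine coefficients); theta = azimuth,
    phi = colatitude."""
    x = np.sin(phi) * np.cos(theta)
    y = np.sin(phi) * np.sin(theta)
    z = np.cos(phi)
    return np.stack([
        0.282095 * np.ones_like(z),                  # Y(0,0)
        0.488603 * y, 0.488603 * z, 0.488603 * x,    # l = 1
        1.092548 * x * y, 1.092548 * y * z,          # l = 2
        0.315392 * (3 * z**2 - 1),
        1.092548 * x * z, 0.546274 * (x**2 - y**2),
    ], axis=1)

def fit_coeffs(theta, phi, f):
    """Least-squares spectral coefficients of sampled values f."""
    design = real_sph_basis(theta, phi)
    coeffs, *_ = np.linalg.lstsq(design, f, rcond=None)
    return coeffs

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 500)        # sampled look directions
phi = np.arccos(rng.uniform(-1, 1, 500))      # uniform on the sphere
f = 1.0 + 0.3 * np.cos(phi) ** 2              # toy anisotropic distribution
c = fit_coeffs(theta, phi, f)
print(c.round(3))                             # monopole and Y(2,0) dominate
```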
Modelling and analysis of the sugar cataract development process using stochastic hybrid systems.
Riley, D; Koutsoukos, X; Riley, K
2009-05-01
Modelling and analysis of biochemical systems such as sugar cataract development (SCD) are critical because they can provide new insights into systems that cannot be easily tested with experiments; however, they are challenging problems due to the highly coupled chemical reactions that are involved. The authors present a stochastic hybrid system (SHS) framework for modelling biochemical systems and demonstrate the approach for the SCD process. A novel feature of the framework is that it allows modelling the effect of drug treatment on the system dynamics. The authors validate the three sugar cataract models by comparing trajectories computed by two simulation algorithms. Further, the authors present a probabilistic verification method for computing the probability of sugar cataract formation for different chemical concentrations using safety and reachability analysis methods for SHSs. The verification method employs dynamic programming based on a discretisation of the state space and therefore suffers from the curse of dimensionality. To analyse the SCD process, a parallel dynamic programming implementation that can handle large, realistic systems was developed. Although scalability is a limiting factor, this work demonstrates that the proposed method is feasible for realistic biochemical systems.
Mi, Si; Lim, David W; Turner, Justine M; Wales, Paul W; Curtis, Jonathan M
2016-03-01
An LC/MS/MS-based method was developed for the determination of individual bile acids (BA) and their conjugates in porcine bile samples. The C18-based solid-phase extraction (SPE) procedure was optimized so that all 19 target BA and their glycine and taurine conjugates were collected with high recoveries for standards (89.1-100.2%). Following this, all 19 compounds were separated and quantified in a single 12 min chromatographic run. The method was validated in terms of linearity, sensitivity, accuracy, precision, and recovery. An LOD in the low ppb range with measured precisions in the range of 0.5-9.3% was achieved. Recoveries for all 19 analytes in bile samples were >80%. The validated method was successfully applied to the profiling of BA and their conjugates in the bile from piglets treated with exogenous glucagon-like peptide-2 (GLP-2) in a preclinical model of neonatal parenteral nutrition-associated liver disease (PNALD). The method developed is rapid and could be easily implemented for routine analysis of BA and their conjugates in other biofluids or tissues.
A Method for Rapid Measurement of Contrast Sensitivity on Mobile Touch-Screens
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
2016-01-01
Touch-screen displays in cell phones and tablet computers are now pervasive, making them an attractive option for vision testing outside of the laboratory or clinic. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating which is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. A prototype has been developed for Apple Computer's iPad/iPod/iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing, http://hsi.arc.nasa.gov/groups/scanpath/research.php). Preliminary data show good agreement with estimates obtained from traditional psychophysical methods as well as newer rapid estimation techniques. Issues relating to device calibration are also discussed.
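The sweep stimulus itself is simple to generate. A minimal sketch (illustrative frequency and contrast ranges; the actual app also handles display calibration and response capture):

```python
import numpy as np

def sweep_grating(width=512, height=512, f_lo=1.0, f_hi=32.0,
                  c_lo=0.005, c_hi=1.0):
    """Image whose spatial frequency rises log-linearly left to right and
    whose contrast falls log-linearly top to bottom, so the visibility
    boundary a subject traces follows a contrast sensitivity curve."""
    x = np.linspace(0.0, 1.0, width)
    y = np.linspace(0.0, 1.0, height)
    freq = f_lo * (f_hi / f_lo) ** x                 # cycles per image width
    # integrate frequency over x so the phase stays continuous as f varies
    phase = 2 * np.pi * np.cumsum(freq) / width
    contrast = c_hi * (c_lo / c_hi) ** y
    img = 0.5 + 0.5 * contrast[:, None] * np.sin(phase)[None, :]
    return img                                       # values in [0, 1]

img = sweep_grating()
print(img.shape, img.min().round(3), img.max().round(3))
```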
Reduction of patulin in apple cider by UV radiation.
Dong, Qingfang; Manns, David C; Feng, Guoping; Yue, Tianli; Churey, John J; Worobo, Randy W
2010-01-01
The presence of the mycotoxin patulin in processed apple juice and cider presents a continual challenge to the food industry, raising both consumer health and product quality concerns. Although several methods for control and/or elimination of patulin have been proposed, no unifying method has been commercially successful for reducing patulin burdens while maintaining product quality. In the present study, exposure to germicidal UV radiation was evaluated as a possible commercially viable alternative for the reduction and possible elimination of the patulin mycotoxin in fresh apple cider. UV exposure of 14.2 to 99.4 mJ/cm(2) resulted in a significant and nearly linear decrease in patulin levels while producing no quantifiable changes in the chemical composition (i.e., pH, Brix, and total acids) or organoleptic properties of the cider. For the range of UV doses tested, patulin levels decreased by 9.4 to 43.4%; the greatest reduction was achieved after less than 15 s of UV exposure. UV radiation (here, the CiderSure 3500 system) is an easily implemented, high-throughput, and cost-effective method that offers simultaneous UV pasteurization of cider and juice products and reduction and/or elimination of patulin without unwanted alterations in the final product.
Aryee, Martin J.; Jaffe, Andrew E.; Corrada-Bravo, Hector; Ladd-Acosta, Christine; Feinberg, Andrew P.; Hansen, Kasper D.; Irizarry, Rafael A.
2014-01-01
Motivation: The recently released Infinium HumanMethylation450 array (the ‘450k’ array) provides a high-throughput assay to quantify DNA methylation (DNAm) at ∼450 000 loci across a range of genomic features. Although less comprehensive than high-throughput sequencing-based techniques, this product is more cost-effective and promises to be the most widely used DNAm high-throughput measurement technology over the next several years. Results: Here we describe a suite of computational tools that incorporate state-of-the-art statistical techniques for the analysis of DNAm data. The software is structured to easily adapt to future versions of the technology. We include methods for preprocessing, quality assessment and detection of differentially methylated regions from the kilobase to the megabase scale. We show how our software provides a powerful and flexible development platform for future methods. We also illustrate how our methods empower the technology to make discoveries previously thought to be possible only with sequencing-based methods. Availability and implementation: http://bioconductor.org/packages/release/bioc/html/minfi.html. Contact: khansen@jhsph.edu; rafa@jimmy.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24478339
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithms, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems.
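The escalation logic can be sketched with SciPy, with Levenberg-Marquardt (curve_fit) standing in for the NCG stage and SciPy's differential evolution for the nature-inspired stage:

```python
import numpy as np
from scipy.optimize import curve_fit, differential_evolution

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

def fit_wire_scan(x, y):
    """Try a fast local fit first; escalate to a globally convergent search
    only if the local fit fails, mirroring the hybrid strategy."""
    guess = [y.max() - y.min(), x[np.argmax(y)], (x[-1] - x[0]) / 10, y.min()]
    try:
        popt, _ = curve_fit(gaussian, x, y, p0=guess, maxfev=2000)
        return popt
    except RuntimeError:                        # local method did not converge
        chi2 = lambda p: ((gaussian(x, *p) - y) ** 2).sum()
        span = x[-1] - x[0]
        bounds = [(0, 2 * (y.max() - y.min() + 1e-9)), (x[0], x[-1]),
                  (span / 100, span), (y.min() - 1, y.max() + 1)]
        return differential_evolution(chi2, bounds, seed=0).x

x = np.linspace(-5, 5, 200)
y = gaussian(x, 2.0, 0.5, 0.8, 0.1) + 0.05 * np.random.default_rng(0).normal(size=200)
print(fit_wire_scan(x, y).round(2))             # ~[2.0, 0.5, 0.8, 0.1]
```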
Rapid detection of AAC(6')-Ib-cr production using a MALDI-TOF MS strategy.
Pardo, C-A; Tan, R N; Hennequin, C; Beyrouthy, R; Bonnet, R; Robin, F
2016-12-01
Plasmid-mediated quinolone resistance mechanisms have become increasingly prevalent among Enterobacteriaceae strains since the 1990s. Among these mechanisms, AAC(6')-Ib-cr is the most difficult to detect. Different detection methods have been developed, but they require expensive procedures such as Sanger sequencing, pyrosequencing, polymerase chain reaction (PCR) restriction, or the time-consuming phenotypic method of Wachino. In this study, we describe a simple matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) method which can be easily implemented in clinical laboratories that use the MALDI-TOF technique for bacterial identification. We tested 113 strains of Enterobacteriaceae, of which 64 harbored the aac(6')-Ib-cr gene. We compared two MALDI-TOF strategies, which differed in their norfloxacin concentration (0.03 vs. 0.5 g/L), and the method of Wachino, with PCR and sequencing used as the reference. The MALDI-TOF strategy performed with 0.03 g/L norfloxacin and the method of Wachino yielded the same high performances (Se = 98%, Sp = 100%), but the MALDI-TOF strategy had a faster turnaround time (<5 h) and was simpler and less expensive (<1 Euro). Our study shows that the MALDI-TOF strategy has the potential to become a major method for the detection of many different enzymatic resistance mechanisms.
GneimoSim: A Modular Internal Coordinates Molecular Dynamics Simulation Package
Larsen, Adrien B.; Wagner, Jeffrey R.; Kandel, Saugat; Salomon-Ferrer, Romelia; Vaidehi, Nagarajan; Jain, Abhinandan
2014-01-01
The Generalized Newton Euler Inverse Mass Operator (GNEIMO) method is an advanced method for internal coordinates molecular dynamics (ICMD). GNEIMO includes several theoretical and algorithmic advancements that address longstanding challenges with ICMD simulations. In this paper we describe the GneimoSim ICMD software package that implements the GNEIMO method. We believe that GneimoSim is the first software package to include advanced features such as the equipartition principle derived for internal coordinates, and a method for including the Fixman potential to eliminate systematic statistical biases introduced by the use of hard constraints. Moreover, by design, GneimoSim is extensible and can be easily interfaced with third party force field packages for ICMD simulations. Currently, GneimoSim includes interfaces to the LAMMPS, OpenMM, and Rosetta force field calculation packages. The availability of a comprehensive Python interface to the underlying C++ classes and their methods provides a powerful and versatile mechanism for users to develop simulation scripts to configure the simulation and control the simulation flow. GneimoSim has been used extensively for studying the dynamics of protein structures, refinement of protein homology models, and for simulating large scale protein conformational changes with enhanced sampling methods. GneimoSim is not limited to proteins and can also be used for the simulation of polymeric materials. PMID:25263538
GneimoSim: a modular internal coordinates molecular dynamics simulation package.
Larsen, Adrien B; Wagner, Jeffrey R; Kandel, Saugat; Salomon-Ferrer, Romelia; Vaidehi, Nagarajan; Jain, Abhinandan
2014-12-05
The generalized Newton-Euler inverse mass operator (GNEIMO) method is an advanced method for internal coordinates molecular dynamics (ICMD). GNEIMO includes several theoretical and algorithmic advancements that address longstanding challenges with ICMD simulations. In this article, we describe the GneimoSim ICMD software package that implements the GNEIMO method. We believe that GneimoSim is the first software package to include advanced features such as the equipartition principle derived for internal coordinates, and a method for including the Fixman potential to eliminate systematic statistical biases introduced by the use of hard constraints. Moreover, by design, GneimoSim is extensible and can be easily interfaced with third party force field packages for ICMD simulations. Currently, GneimoSim includes interfaces to LAMMPS, OpenMM, and Rosetta force field calculation packages. The availability of a comprehensive Python interface to the underlying C++ classes and their methods provides a powerful and versatile mechanism for users to develop simulation scripts to configure the simulation and control the simulation flow. GneimoSim has been used extensively for studying the dynamics of protein structures, refinement of protein homology models, and for simulating large scale protein conformational changes with enhanced sampling methods. GneimoSim is not limited to proteins and can also be used for the simulation of polymeric materials. © 2014 Wiley Periodicals, Inc.
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.
2011-04-20
While considerable advances have been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
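The principal-component summary of calibration samples reduces to a few lines of numpy (the Chandra-specific effective-area machinery is not reproduced; the ensemble here is synthetic):

```python
import numpy as np

def calibration_pcs(samples, n_comp=3):
    """Summarize an ensemble of plausible calibration curves (rows) by its
    mean and leading principal components via SVD."""
    mean = samples.mean(axis=0)
    _, s, vt = np.linalg.svd(samples - mean, full_matrices=False)
    scale = s[:n_comp] / np.sqrt(len(samples) - 1)   # component std devs
    return mean, scale[:, None] * vt[:n_comp]

def draw_curve(mean, comps, rng):
    """Draw one plausible calibration curve: mean + random mix of components."""
    return mean + rng.normal(size=len(comps)) @ comps

rng = np.random.default_rng(1)
true = np.linspace(1.0, 0.5, 100)                    # stand-in effective area
ensemble = true + 0.02 * rng.normal(size=(500, 100)).cumsum(axis=1) / 10
mean, comps = calibration_pcs(ensemble)
print(draw_curve(mean, comps, rng)[:5].round(3))
```

Storing only the mean and a handful of components makes it cheap to draw fresh calibration realizations inside a fitting loop, which is what makes the imputation and MCMC schemes above practical.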
On-axis programmable microscope using liquid crystal spatial light modulator
NASA Astrophysics Data System (ADS)
García-Martínez, Pascuala; Martínez, José Luís; Moreno, Ignacio
2017-06-01
Spatial light modulators (SLMs) are currently used in many applications in optical microscopy and imaging. One of the most promising methods is the use of liquid crystal displays (LCDs) as programmable phase diffractive optical elements (DOEs) placed in the Fourier plane, giving access to the spatial frequencies, which can be phase-shifted individually, allowing emulation of a wealth of contrast-enhancing methods for both amplitude and phase samples. We use the phase and polarization modulation of the LCD to implement an on-axis microscope optical system. The LCD used is a flicker-free Hamamatsu liquid crystal on silicon (LCOS) SLM, which permits full use of the SLM space bandwidth, as opposed to optical systems in the literature forced to work off-axis due to the strong zero-order component. Taking advantage of the phase modulation of the LCOS, we have implemented different microscopic imaging operations, such as high-pass and low-pass filtering in parallel using programmable blazed gratings. Moreover, we are able to control polarization modulation to display two images with orthogonal linear states of polarization that can be subtracted or added by changing the period of the blazed grating. In that sense, differential interference contrast (DIC) microscopy can easily be performed by generating two images exploiting the polarization-splitting properties when a blazed grating is displayed on the SLM. Biological microscopy samples are also used.
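A programmable blazed grating of the kind displayed on the LCOS is just a wrapped linear phase ramp. A minimal sketch (illustrative periods and sizes; device lookup tables and calibration omitted):

```python
import numpy as np

def blazed_grating(shape, period_px, angle_rad=0.0):
    """Phase pattern (radians, in [0, 2*pi)) of a blazed grating for an SLM;
    the period sets the first-order deflection, the angle its direction."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    ramp = (xx * np.cos(angle_rad) + yy * np.sin(angle_rad)) / period_px
    return 2 * np.pi * np.mod(ramp, 1.0)

def with_lens(shape, period_px, focal_px):
    """Blazed grating plus a quadratic (Fresnel lens) phase, e.g. to shift
    focus while steering; the focal parameter here is illustrative."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r2 = (xx - shape[1] / 2) ** 2 + (yy - shape[0] / 2) ** 2
    return np.mod(blazed_grating(shape, period_px) + np.pi * r2 / focal_px,
                  2 * np.pi)

phase = blazed_grating((600, 800), period_px=16)
print(phase.shape, phase.min().round(2), phase.max().round(2))
```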
Community medicine teaching and evaluation: scope of betterment.
Gopalakrishnan, S; Kumar, P Ganesh
2015-01-01
There have been rapid and extensive changes in the way assessment is conducted in medical education. Assessment brings about standardization of the manner in which the syllabus is to be implemented and also gives guidelines regarding the teaching pattern, especially when the student is going to rotate through various departments in a medical college. Community Medicine is an important branch of medicine concerned with the health of populations. Existing forms of assessment of community medicine education mainly consist of internal (formative) assessment and a final (summative) examination. An advantage of the present system is the similarity of the methods used for internal assessments and final examinations; it is also relatively easy to administer, since only the knowledge application and recall ability of the student in theory and practicals are assessed. A disadvantage of the current evaluation system is that it neglects the assessment of psychomotor, affective and communication skills. Evaluation systems can be improved by implementing techniques to assess psychomotor skills, presentation and communication skills, organizational skills and the student's ability to work in a team. Regular feedback from students should be taken periodically for the betterment of Community Medicine education. This article is meant to sensitise academic experts in medical education to plan better need-based methods of assessment in the subject of Community Medicine, in relation to the new MCI 2012 Regulations, in order to make it a better learning experience for the students.
Community Medicine Teaching and Evaluation: Scope of Betterment
Kumar, P. Ganesh
2015-01-01
There have been rapid and extensive changes in the way assessment is conducted in medical education. Assessment brings about standardization of the manner in which the syllabus is to be implemented and also gives guidelines regarding the teaching pattern, especially when the student is going to rotate through various departments in a medical college. Community Medicine is an important branch of medicine concerned with the health of populations. Existing forms of assessment of community medicine education mainly consist of internal (formative) assessment and a final (summative) examination. An advantage of the present system is the similarity of the methods used for internal assessments and final examinations; it is also relatively easy to administer, since only the knowledge application and recall ability of the student in theory and practicals are assessed. A disadvantage of the current evaluation system is that it neglects the assessment of psychomotor, affective and communication skills. Evaluation systems can be improved by implementing techniques to assess psychomotor skills, presentation and communication skills, organizational skills and the student's ability to work in a team. Regular feedback from students should be taken periodically for the betterment of Community Medicine education. This article is meant to sensitise academic experts in medical education to plan better need-based methods of assessment in the subject of Community Medicine, in relation to the new MCI 2012 Regulations, in order to make it a better learning experience for the students. PMID:25738009
A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects
NASA Astrophysics Data System (ADS)
Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.
2016-05-01
Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maɪlʌv]. We do not support the use of the code for military purposes.
Asynchronous machine rotor speed estimation using a tabulated numerical approach
NASA Astrophysics Data System (ADS)
Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane
2017-12-01
This paper proposes a new method to estimate the rotor speed of the asynchronous machine by treating the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map, using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected so as to satisfy the dynamic fitting problem between the measured plant output and the predicted output, leading the estimate to follow the system's dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm is completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of a real induction motor. Simulation results show the performance and robustness of the proposed estimator.
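The tabulated selection step can be illustrated generically (toy plant map and speed grid; the real estimator tabulates a one-step discretization of the induction-machine model, which is what bounds its execution time):

```python
import numpy as np

class TabulatedSpeedEstimator:
    """Offline: tabulate the one-step output prediction of a plant model on a
    grid of candidate rotor speeds. Online: pick the speed whose predicted
    output best matches the measurement. A generic sketch of the idea; the
    real observer predicts from the full machine model and its inputs."""
    def __init__(self, speeds, predict):
        self.speeds = np.asarray(speeds)
        self.table = np.array([predict(w) for w in self.speeds])  # offline map

    def estimate(self, measured):
        errs = np.linalg.norm(self.table - measured, axis=-1)
        return self.speeds[np.argmin(errs)]        # best dynamic fit

# Toy plant: output is a 2-vector depending nonlinearly on speed w
model = lambda w: np.array([np.sin(0.1 * w), 0.01 * w])
est = TabulatedSpeedEstimator(np.linspace(0, 150, 301), model)
true_w = 87.3
print(est.estimate(model(true_w) + 1e-3))          # close to 87.3
```

Because the online step is a fixed-size table lookup and comparison, its worst-case execution time is known in advance, which is the property that makes the scheme attractive for embedded real-time use.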
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
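The underlying idea, local linear fits ranked over candidate windows, can be sketched independently of the R package (a Python stand-in that ranks windows by R-squared, where LoLinR uses more careful diagnostics):

```python
import numpy as np

def best_local_rate(t, y, min_pts=10):
    """Fit ordinary least-squares lines to every contiguous window of the
    series and return the slope of the best-behaved window (highest R^2 here)."""
    best = (-np.inf, None)
    n = len(t)
    for i in range(n - min_pts + 1):
        for j in range(i + min_pts, n + 1):
            tt, yy = t[i:j], y[i:j]
            slope, icept = np.polyfit(tt, yy, 1)
            resid = yy - (slope * tt + icept)
            r2 = 1 - resid.var() / yy.var()
            if r2 > best[0]:
                best = (r2, slope)
    return best[1]

# Toy respirometry-like trace: linear decline that then flattens out
t = np.linspace(0, 10, 60)
y = np.where(t < 6, -0.8 * t, -4.8) + 0.05 * np.random.default_rng(2).normal(size=60)
print(round(best_local_rate(t, y), 2))   # close to the true initial rate -0.8
```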
Micro-balance sensor integrated with atomic layer deposition chamber
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinson, Alex B. F.; Libera, Joseph A.; Elam, Jeffrey W.
The invention is directed to QCM measurements for monitoring ALD processes. Previously, significant barriers remained to accurate execution of such measurements. To turn this exclusively dedicated in situ technique into a routine characterization method, an integral QCM fixture was developed. This new design is easily implemented on a variety of ALD tools, allows rapid sample exchange, prevents backside deposition, and minimizes both the footprint and flow disturbance. Unlike previous QCM designs, the fast thermal equilibration enables tasks such as temperature-dependent studies and ex situ sample exchange, further highlighting the feasibility of this QCM design for day-to-day use. Finally, the in situ mapping of thin film growth rates across the ALD reactor was demonstrated in a popular commercial tool operating in both continuous and quasi-static ALD modes.
A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.
Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin
2014-01-01
This paper presents new simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, thereby saving unnecessary calculations. These algorithms are highly classroom oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the development of fractions for the most part. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
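The two ingredients of the determinant approach, free pivot choice and order reduction at each iteration, can be sketched as follows (exact fractions are used for clarity; choosing unit pivots where available is what lets fraction growth be sidestepped):

```python
from fractions import Fraction

def determinant(A):
    """Determinant by elimination with a freely chosen pivot; each iteration
    eliminates one row and column, shrinking the matrix (order reduction).
    Pivot policy here: smallest nonzero magnitude, but any nonzero entry works."""
    A = [[Fraction(v) for v in row] for row in A]
    det = Fraction(1)
    while A:
        pivots = [(i, j) for i, r in enumerate(A) for j, v in enumerate(r) if v]
        if not pivots:
            return 0                              # a zero matrix remains
        i, j = min(pivots, key=lambda ij: abs(A[ij[0]][ij[1]]))
        p = A[i][j]
        det *= p * (-1) ** (i + j)                # cofactor sign of the pivot
        A = [[A[r][c] - A[r][j] * A[i][c] / p     # eliminate, drop row i, col j
              for c in range(len(A)) if c != j]
             for r in range(len(A)) if r != i]
    return det

print(determinant([[2, 1, 0], [1, 3, 1], [0, 1, 2]]))   # expect 8
```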
A WebGL Tool for Visualizing the Topology of the Sun's Coronal Magnetic Field
NASA Astrophysics Data System (ADS)
Duffy, A.; Cheung, C.; DeRosa, M. L.
2012-12-01
We present a web-based topology-viewing tool that allows users to visualize the geometry and topology of the Sun's 3D coronal magnetic field in an interactive manner. The tool is implemented using open-source, mature, modern web technologies including WebGL, jQuery, HTML 5, and CSS 3, which are compatible with nearly all modern web browsers. As opposed to the traditional method of visualization, which involves downloading and setting up various software packages (proprietary and otherwise), the tool presents a clean interface that allows the user to easily load and manipulate the model, while also offering great power to choose which topological features are displayed. The tool accepts data encoded in the JSON open format, which has libraries available for nearly every major programming language, making it simple to generate the data.
NASA Astrophysics Data System (ADS)
Castanier, Eric; Paterne, Loic; Louis, Céline
2017-09-01
In nuclear engineering, you have to manage both time and precision. In shielding design especially, you have to be accurate and efficient to reduce cost (shielding thickness optimization), and for this you use 3D codes. In this paper, we examine whether the CADIS method can easily be applied to the shielding design of small pipes that pass through large concrete walls. We assess the impact of the weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative, manual process). The comparison is based on the quality of the convergence (estimated relative error (σ), variance of variance (VOV) and figure of merit (FOM)), on time (computer time plus modelling time) and on the effort required of the engineer.
An electrostatic autoresonant ion trap mass spectrometer.
Ermakov, A V; Hinch, B J
2010-01-01
A new method for ion extraction from an anharmonic electrostatic trap is introduced. Anharmonicity is a common feature of electrostatic traps which can be used for small-scale spatial confinement of ions, and this feature is also necessary for autoresonant ion extraction. With the aid of ion trajectory simulations, novel autoresonant trap mass spectrometers (ART-MSs) have been designed based on these very simple principles. A mass resolution of approximately 60 is demonstrated for the prototypes discussed here. We also report on the pressure dependence and the (mV-scale) rf field strength dependence of the ART-MS sensitivity. Importantly, the new MS designs do not require heavy magnets, tight manufacturing tolerances, introduction of buffer gases, high-power rf sources, or complicated electronics. The designs described here are very inexpensive to implement relative to other instruments, and can be easily miniaturized. Possible applications are discussed.
Engineering empty space between Si nanoparticles for lithium-ion battery anodes.
Wu, Hui; Zheng, Guangyuan; Liu, Nian; Carney, Thomas J; Yang, Yuan; Cui, Yi
2012-02-08
Silicon is a promising high-capacity anode material for lithium-ion batteries, yet attaining long cycle life remains a significant challenge due to pulverization of the silicon and unstable solid-electrolyte interphase (SEI) formation during electrochemical cycling. Despite significant advances in nanostructured Si electrodes, challenges including short cycle life and scalability hinder widespread implementation. To address these challenges, we engineered an empty space between Si nanoparticles by encapsulating them in hollow carbon tubes. The synthesis process used low-cost Si nanoparticles and electrospinning methods, both of which can be easily scaled. The empty space around the Si nanoparticles allowed the electrode to successfully overcome these problems. Our anode demonstrated a high gravimetric capacity (~1000 mAh/g based on the total mass) and long cycle life (200 cycles with 90% capacity retention). © 2012 American Chemical Society
An innovative approach to synthesize highly-ordered TiO2 nanotubes.
Isimjan, Tayirjan T; Yang, D Q; Rohani, Sohrab; Ray, Ajay K
2011-02-01
An innovative route to prepare highly-ordered and dimensionally controlled TiO2 nanotubes has been proposed using a mild sonication method. The nanotube arrays were prepared by the anodization of titanium in an electrolyte containing 3% NH4F and 5% H2O in glycerol. It is demonstrated that the TiO2 nanostructure has two layers: the top layer consists of TiO2 nanowires and underneath are well-ordered TiO2 nanotubes. The top layer can easily fall off and form nanowire bundles when a mild sonication is applied after a short annealing time. We found that the dimensions of the TiO2 nanotubes depended only on the anodizing conditions. The proposed technique may be extended to fabricate reproducible, well-ordered, large-area TiO2 nanotubes on other metals.
GRACKLE: a chemistry and cooling library for astrophysics
NASA Astrophysics Data System (ADS)
Smith, Britton D.; Bryan, Greg L.; Glover, Simon C. O.; Goldbaum, Nathan J.; Turk, Matthew J.; Regan, John; Wise, John H.; Schive, Hsi-Yu; Abel, Tom; Emerick, Andrew; O'Shea, Brian W.; Anninos, Peter; Hummels, Cameron B.; Khochfar, Sadegh
2017-04-01
We present the GRACKLE chemistry and cooling library for astrophysical simulations and models. GRACKLE provides a treatment of non-equilibrium primordial chemistry and cooling for H, D and He species, including H2 formation on dust grains; tabulated primordial and metal cooling; multiple ultraviolet background models; and support for radiation transfer and arbitrary heat sources. The library has an easily implementable interface for simulation codes written in C, C++ and FORTRAN as well as a PYTHON interface with added convenience functions for semi-analytical models. As an open-source project, GRACKLE provides a community resource for accessing and disseminating astrochemical data and numerical methods. We present the full details of the core functionality, the simulation and PYTHON interfaces, testing infrastructure, performance and range of applicability. GRACKLE is a fully open-source project and new contributions are welcome.
E-Portfolio Web-based for Students’ Internship Program Activities
NASA Astrophysics Data System (ADS)
Juhana, A.; Abdullah, A. G.; Somantri, M.; Aryadi, S.; Zakaria, D.; Amelia, N.; Arasid, W.
2018-02-01
Internship programs are an important part of the vocational education process, improving the quality and competence of graduates. A complete work documentation process on an electronic portfolio (e-Portfolio) platform will facilitate students in reporting the results of their work to both university and industry supervisors. The purpose of this research is to create a more easily accessed e-Portfolio that is appropriate to students' and supervisors' needs in documenting work and monitoring progress. The method used in this research is fundamental research. This research is focused on the implementation of internship e-Portfolio features by demonstrating them to students who have completed an internship program. The result of this research is a web-based e-Portfolio that can be used to facilitate students in documenting the results of their work and to aid supervisors in monitoring during the internship.
NASA Astrophysics Data System (ADS)
Gittinger, Jaxon M.; Jimenez, Edward S.; Holswade, Erica A.; Nunna, Rahul S.
2017-02-01
This work demonstrates traditional and non-traditional visualizations of x-ray images for aviation security applications that are feasible with open system architecture initiatives such as the Open Threat Assessment Platform (OTAP). Anomalies of interest to aviation security are fluid; their characteristic signals can evolve rapidly. OTAP is a limited-scope open architecture baggage screening prototype that intends to allow 3rd-party vendors to develop and easily implement, integrate, and deploy detection algorithms and specialized hardware on a field-deployable screening technology [13]. In this study, stereoscopic images were created using an unmodified, field-deployed system and rendered on the Oculus Rift, a commercial virtual reality video gaming headset. The example described in this work is not dependent on the Oculus Rift and is possible using any comparable hardware configuration capable of rendering stereoscopic images. The depth information provided by viewing the images will aid in the detection of characteristic signals from anomalies of interest. If successful, OTAP has the potential to allow aviation security to adapt more fluidly to evolving anomalies of interest. This work demonstrates one example that is easily implemented using the OTAP platform and that could inform the next generation of ATR algorithms and data visualization approaches.
Breimaier, Helga E; Heckemann, Birgit; Halfens, Ruud J G; Lohrmann, Christa
2015-01-01
Implementing clinical practice guidelines (CPGs) in healthcare settings is a complex intervention involving both independent and interdependent components. Although the Consolidated Framework for Implementation Research (CFIR) has never been evaluated in a practical context, it appeared to be a suitable theoretical framework to guide an implementation process. The aim of this study was to evaluate the comprehensiveness, applicability and usefulness of the CFIR in the implementation of a fall-prevention CPG in nursing practice to improve patient care in an Austrian university teaching hospital setting. The evaluation of the CFIR was based on (1) team-meeting minutes, (2) the main investigator's research diary, containing a record of a before-and-after, mixed-methods study design embedded in a participatory action research (PAR) approach for guideline implementation, and (3) an analysis of qualitative and quantitative data collected from graduate and assistant nurses in two Austrian university teaching hospital departments. The CFIR was used to organise data at and across time points and to assess their influence on the implementation process, resulting in implementation and service outcomes. Overall, the CFIR proved to be a comprehensive framework for the implementation of a guideline into hospital-based nursing practice. However, the CFIR did not account for some crucial factors during the planning phase of an implementation process, such as consideration of stakeholder aims and wishes/needs when implementing an innovation, pre-established measures related to the intended innovation and pre-established strategies for implementing an innovation. For the CFIR constructs reflecting & evaluating and engaging, a more specific definition is recommended. The framework and its supplements could easily be used by researchers, and their scope was appropriate for the complexity of a prospective CPG-implementation project. The CFIR facilitated qualitative data analysis and provided a structure that allowed project results to be organised and viewed in a broader context to explain the main findings. The CFIR was a valuable and helpful framework for (1) the assessment of the baseline, process and final state of the implementation process and influential factors, (2) the content analysis of qualitative data collected throughout the implementation process, and (3) explaining the main findings.
Synthesis method for ultrananocrystalline diamond in powder employing a coaxial arc plasma gun
NASA Astrophysics Data System (ADS)
Naragino, Hiroshi; Tominaga, Aki; Hanada, Kenji; Yoshitake, Tsuyoshi
2015-07-01
A new method for synthesizing ultrananocrystalline diamond (UNCD) powder is proposed. Highly energetic carbon species ejected from the graphite cathode of a coaxial arc plasma gun were deposited on a quartz plate at high density by repeated arc discharge in a compact vacuum chamber, and the resultant films, which spontaneously peeled from the plate, were aggregated into powder. The grain size was easily controlled from 2.4 to 15.0 nm by changing the arc discharge energy. These experiments demonstrate that the proposed approach is a new and promising route to synthesizing UNCD powder easily and controllably.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... more easily identify the bacteria and "toxins" that are controlled under this CCL entry, this rule alphabetizes and renumbers the lists of bacteria and "toxins" in the entry. DATES: This rule is effective... renumbers the lists of bacteria and toxins contained in ECCN 1C351.c and .d, respectively. Consistent with...
Image-based information, communication, and retrieval
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A. L.
1980-01-01
IBIS/VICAR system combines video image processing and information management. Flexible programs require user to supply only parameters specific to particular application. Special-purpose input/output routines transfer image data with reduced memory requirements. New application programs are easily incorporated. Program is written in FORTRAN IV, Assembler, and OS JCL for batch execution and has been implemented on IBM 360.
Quantitative Analysis of Sulfate in Water by Indirect EDTA Titration
ERIC Educational Resources Information Center
Belle-Oudry, Deirdre
2008-01-01
The determination of sulfate concentration in water by indirect EDTA titration is an instructive experiment that is easily implemented in an analytical chemistry laboratory course. A water sample is treated with excess barium chloride to precipitate sulfate ions as BaSO4(s). The unprecipitated barium ions are then titrated with EDTA.…
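The stoichiometry behind the back-titration reduces to simple bookkeeping: Ba2+ reacts 1:1 with both sulfate and EDTA, so moles of sulfate equal moles of barium added minus moles of EDTA consumed. A short Python sketch with illustrative (not experimental) numbers:

    def sulfate_molarity(v_ba_mL, c_ba_M, v_edta_mL, c_edta_M, v_sample_mL):
        """Indirect EDTA titration: Ba2+ reacts 1:1 with SO4(2-) and with EDTA."""
        mol_ba = v_ba_mL / 1000 * c_ba_M        # total Ba2+ delivered
        mol_edta = v_edta_mL / 1000 * c_edta_M  # Ba2+ left unprecipitated
        mol_so4 = mol_ba - mol_edta             # Ba2+ removed as BaSO4(s)
        return mol_so4 / (v_sample_mL / 1000)

    # Illustrative numbers: 25.00 mL of 0.0200 M BaCl2 added to a 50.00 mL
    # sample; the back-titration consumes 12.50 mL of 0.0100 M EDTA.
    print(sulfate_molarity(25.00, 0.0200, 12.50, 0.0100, 50.00))  # 0.0075 M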
ATEE Interactive Co-ordination and Educational Monitoring of Socrates Comenius Action 3 Projects.
ERIC Educational Resources Information Center
Libotton, Arno; Van Braak, Johan; Garofalo, Mara
2002-01-01
Asserts that although the Comenius Action 3 courses were well accepted and of high quality, there is a need for a structure to easily monitor and evaluate these projects. This article presents a pilot project designed for this purpose, which may be useful in implementing a system of coordination and communication among the different projects…
ERIC Educational Resources Information Center
Yeo, Shelley; Chien, Robyn
2007-01-01
Procedures for responding consistently to plagiarism incidents are neither clear-cut nor easily implemented, and yet inequitable treatment is intrinsically unfair. Classifying the seriousness of a plagiarism incident is problematic, and the penalties recommended for a given incident can vary greatly. This paper describes the development and testing of a…
Creating a Lean, Green, Library Machine: Easy Eco-Friendly Habits for Your Library
ERIC Educational Resources Information Center
Blaine, Amy S.
2010-01-01
For some library media specialists, implementing the three Rs of recycling, reducing, and reusing comes easily; they were environmentally conscious well before the concept of going green made its way into the vernacular. Yet for other library media specialists, the thought of greening their library, let alone the entire school, can seem…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boumaaraf, Abdelâali (University of Farhat Abbas Setif1, Sétif, 19000; E-mail: aboumaaraf@yahoo.fr); Mohamadi, Tayeb
In this paper, we present an FPGA implementation of multiple pulse width modulation (MPWM) signal generation with repetition of data segments, applied to variable-frequency, variable-voltage systems and especially to the photovoltaic water-pumping system, in order to generate a command signal very easily between 10 Hz and 60 Hz with a small frequency step and to reduce the cost of the control system.
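The abstract does not spell out the segment-repetition scheme, but the underlying waveform is standard. The sketch below is written in Python rather than HDL and uses generic sine-triangle comparison instead of the paper's segment-repetition method; it shows how an MPWM gate signal for a chosen output frequency in the 10-60 Hz range can be produced.

    import numpy as np

    def mpwm_gate(f_out, f_carrier, m_a, t):
        """Sine-triangle PWM: compare a sinusoidal reference at f_out
        against a triangular carrier at f_carrier; m_a is the
        amplitude-modulation index (0 < m_a <= 1)."""
        ref = m_a * np.sin(2 * np.pi * f_out * t)
        carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1  # in [-1, 1]
        return (ref > carrier).astype(int)

    t = np.arange(0, 0.1, 1e-6)    # 100 ms sampled on a 1 MHz timebase
    gate = mpwm_gate(f_out=50.0, f_carrier=1500.0, m_a=0.8, t=t)

On an FPGA the same comparison is typically realized with a phase accumulator and a lookup table, where changing the accumulator increment steps the output frequency in fine increments; storing and repeating waveform segments, as the paper proposes, is one way to economize that memory.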
Finding P-Values for F Tests of Hypothesis on a Spreadsheet.
ERIC Educational Resources Information Center
Rochowicz, John A., Jr.
The calculation of the F statistic for a one-factor analysis of variance (ANOVA) and the construction of an ANOVA table are easily implemented on a spreadsheet. This paper describes how to compute the p-value (observed significance level) for a particular F statistic on a spreadsheet. Decision making on a spreadsheet and applications to the…
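For readers outside the spreadsheet setting, the p-value in question is the upper-tail area of the F distribution with the between- and within-group degrees of freedom (legacy Excel exposes this as FDIST). A minimal one-factor ANOVA sketch in Python, with illustrative data:

    import numpy as np
    from scipy.stats import f

    # Illustrative one-factor data: three treatment groups.
    groups = [np.array([4.1, 3.9, 4.5, 4.3]),
              np.array([5.0, 5.2, 4.8, 5.1]),
              np.array([3.8, 4.0, 3.7, 4.1])]

    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)  # between-group SS
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)       # within-group SS
    F = (ssb / (k - 1)) / (ssw / (n - k))
    p = f.sf(F, k - 1, n - k)   # observed significance level, as FDIST would give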
ERIC Educational Resources Information Center
Walters, Kirk; Smith, Toni; Leinwand, Steve; Ford, Jennifer; Scheopner Torres, Aubrey
2015-01-01
This study was designed in response to a request from rural educators in the Northeast for support in identifying high-quality online resources to implement the Common Core State Standards for Mathematics (CCSSM). The process for identifying online resources included selecting resources that had an easily navigable CCSSM organizational structure…
Science of Security Lablet - Scalability and Usability
2014-12-16
...mobile computing [19]. However, the high-level infrastructure design and our own implementation (both described throughout this paper) can easily...
...critical and infrastructural systems demands high levels of sophistication in the technical aspects of cybersecurity, software and hardware design...
...Forget, S. Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang. "Security Behavior Observatory: Infrastructure for Long-term...
Truancy Programs: Are the Effects Too Easily Washed Away?
ERIC Educational Resources Information Center
Huck, Jennifer L.
2011-01-01
Truancy has been identified as a risk factor for criminal behavior, but results are mixed as to the best means of reducing this school-based concern. The Truancy Prevention Initiative has been implemented in New Orleans post-Hurricane Katrina under the direction of the Recovery School District to reduce levels of truancy, increase graduation rates,…
The Grass Isn't Always Greener: Perceptions of and Performance on Open-Note Exams
ERIC Educational Resources Information Center
Sato, Brian K.; He, Wenliang; Warschauer, Mark; Kadandale, Pavan
2015-01-01
Undergraduate biology education is often viewed as being focused on memorization rather than development of students' critical-thinking abilities. We speculated that open-note testing would be an easily implemented change that would emphasize higher-order thinking. As open-note testing is not commonly used in the biological sciences and the…
Life Journey through Autism: An Educator's Guide to Asperger Syndrome
ERIC Educational Resources Information Center
Myles, Brenda Smith; Hagen, Kristen; Holverstott, Jeanne; Hubbard, Anastasia; Adreon, Diane; Trautman, Melissa
2005-01-01
The purpose of this guide is to help educators understand and be able to respond effectively to the needs of children with Asperger Syndrome in an inclusive classroom setting. This guide is meant to orient educators to the challenges and skills of students with Asperger Syndrome and outline strategies that can be easily implemented to meet their…
ERIC Educational Resources Information Center
Huang, Chenn-Jung; Chu, San-Shine; Guan, Chih-Tai
2007-01-01
In recent years, designing useful learning diagnosis systems has become a hot research topic in the literature. In order to help teachers easily analyze students' profiles in an intelligent tutoring system, it is essential that students' portfolios can be transformed into useful information reflecting the extent of students' participation in the…