General Nature of Multicollinearity in Multiple Regression Analysis.
ERIC Educational Resources Information Center
Liu, Richard
1981-01-01
Discusses multiple regression, a very popular statistical technique in the field of education. One of the basic assumptions in regression analysis requires that independent variables in the equation should not be highly correlated. The problem of multicollinearity and some of the solutions to it are discussed. (Author)
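A common diagnostic for the multicollinearity problem discussed above is the variance inflation factor (VIF). The sketch below is a generic illustration, not code from the article: it regresses each predictor on the remaining ones and reports VIF_j = 1/(1 - R_j^2); the synthetic data and the informal "VIF above 10 is troubling" rule of thumb are assumptions for demonstration only.

```python
# Minimal VIF computation with NumPy; data are synthetic.
import numpy as np

def vif(X):
    """VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing
    column j on all remaining columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - ((y - Z @ beta).var() / y.var())
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)      # nearly collinear with x1
x3 = rng.normal(size=200)
print(vif(np.column_stack([x1, x2, x3])))  # first two VIFs far above 10
```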
Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.
Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J
2018-06-01
Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.
Kwon, Tae-Ho; Kim, Jai-Eun; Kim, Ki-Doo
2018-05-14
In the field of communication, synchronization is always an important issue. The communication between a light-emitting diode (LED) array (LEA) and a camera is known as visual multiple-input multiple-output (MIMO), for which the data transmitter and receiver must be synchronized for seamless communication. In visual MIMO, LEDs generally have a faster data rate than the camera. Hence, we propose an effective time-sharing-based synchronization technique whose color-independent characteristics are the key to overcoming this synchronization problem in visual-MIMO communication. We also evaluated the performance of our synchronization technique by varying the distance between the LEA and the camera. A graphical analysis is also presented to compare the symbol error rate (SER) at different distances.
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation, with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, which make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count-type data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It examines whether a difference persists after "controlling" for the effect of a covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract, from a larger number of metric variables, a smaller number of composite factors or components that are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
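As a concrete illustration of one interdependence technique named above, the following sketch performs principal component analysis via the singular value decomposition. The six observed variables and two latent factors are assumptions chosen for the demonstration, not data from the module.

```python
# PCA via SVD: extract a few components from correlated metric variables.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 2))             # two underlying factors
loadings = rng.normal(size=(2, 6))             # six observed variables
X = latent @ loadings + 0.1 * rng.normal(size=(100, 6))

Xc = X - X.mean(axis=0)                        # centre each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                # variance explained
print(explained.round(3))                      # ~2 components dominate
scores = Xc @ Vt[:2].T                         # component scores
```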
Modal control of an oblique wing aircraft
NASA Technical Reports Server (NTRS)
Phillips, James D.
1989-01-01
A linear modal control algorithm is applied to the NASA Oblique Wing Research Aircraft (OWRA). The control law is evaluated using a detailed nonlinear flight simulation. It is shown that the modal control law attenuates the coupling and nonlinear aerodynamics of the oblique wing and remains stable during control saturation caused by large command inputs or large external disturbances. The technique controls each natural mode independently allowing single-input/single-output techniques to be applied to multiple-input/multiple-output systems.
The Impact of Multiple Endpoint Dependency on "Q" and "I"[superscript 2] in Meta-Analysis
ERIC Educational Resources Information Center
Thompson, Christopher Glen; Becker, Betsy Jane
2014-01-01
A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures "Q" and…
NASA IKONOS Radiometric Characterization
NASA Technical Reports Server (NTRS)
Pagnutti, Mary; Frisbee, Troy; Zanoni, Vicki; Blonski, Slawek; Daehler, Erik; Grant, Brennan; Holekamp, Kara; Ryan, Robert; Sellers, Richard; Smith, Charles
2002-01-01
The objective of this program was to perform radiometric vicarious calibrations of IKONOS imagery and compare them with Space Imaging calibration coefficients. The approach taken was to utilize multiple well-characterized sites that are widely used by the NASA science community for radiometric characterization of airborne and spaceborne sensors, and to perform independent characterizations with independent teams. Each team has slightly different measurement techniques and data processing methods.
Mahmood, Zahid; Al Benna, Sammy; Nkere, Udim; Murday, Andrew
2006-01-01
Objectives: The objective of this study was to compare the morbidity associated with long saphenous vein harvesting using the traditional open technique (A) against a minimally invasive technique using the Mayo vein stripper (B) that involves multiple short incisions. Design: We conducted a prospective randomized controlled study in 80 patients undergoing first-time coronary artery bypass grafting. Pain and healing were assessed on each postoperative day. Rings of long saphenous vein were subjected to organ-bath evaluation of endothelium-dependent and endothelium-independent relaxation. Results: Three patients were excluded from the study, leaving 38 patients in Group A and 39 in Group B. With respect to the operative procedure, Group A had a greater length of vein harvested than Group B. There was no statistical difference in pain scores or in endothelium-dependent or endothelium-independent relaxation between the two groups. However, there were significantly more infections in Group A than in Group B. Conclusion: Harvesting vein through multiple incisions using the Mayo vein stripper is quicker, results in fewer infections and has no deleterious effect on endothelial function compared with the open technique. PMID:16759395
ERIC Educational Resources Information Center
Kilgo, Cindy A.; Pascarella, Ernest T.
2016-01-01
This study examines the effects of undergraduate students participating in independent research with faculty members on four-year graduation and graduate/professional degree aspirations. We analyzed four-year longitudinal data from the Wabash National Study of Liberal Arts Education using multiple analytic techniques. The findings support the…
QuickBird and OrbView-3 Geopositional Accuracy Assessment
NASA Technical Reports Server (NTRS)
Helder, Dennis; Ross, Kenton
2006-01-01
Objective: Compare vendor-provided image coordinates with known references visible in the imagery. Approach: Use multiple, well-characterized sites with >40 ground control points (GCPs) that are (a) well distributed, (b) accurately surveyed, and (c) easily found in imagery; perform independent assessments with independent teams, each with slightly different measurement techniques and data processing methods. Participating teams: NASA Stennis Space Center and South Dakota State University.
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated, so that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed, for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
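The report's original programs are not reproduced here, but the structure of the fit is easy to sketch: build a design matrix containing powers of each independent variable up to degree P, optionally append the pairwise linear cross products, and solve by least squares. Everything below (function names, toy data) is an illustrative reconstruction, not the original code.

```python
# Degree-P polynomial regression in N variables, with optional
# linear cross-product terms; solved by ordinary least squares.
import itertools
import numpy as np

def design_matrix(X, degree, cross_products=False):
    """Columns: 1, x_j, x_j^2, ..., x_j^degree (+ x_i*x_j if requested)."""
    n, m = X.shape
    cols = [np.ones(n)]
    for j in range(m):
        for p in range(1, degree + 1):
            cols.append(X[:, j] ** p)
    if cross_products:
        for i, j in itertools.combinations(range(m), 2):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(50, 2))
y = 1 + X[:, 0]**2 - 2*X[:, 0]*X[:, 1] + 0.01*rng.normal(size=50)
A = design_matrix(X, degree=2, cross_products=True)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # fitted constants
```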
Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.
2009-01-01
Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings. PMID:19834575
Technique for estimating the magnitude and frequency of floods in Texas.
DOT National Transportation Integrated Search
1977-01-01
Drainage area, slope, and mean annual precipitation were the only factors that were statistically significant at the 95-percent confidence level when the characteristics of the drainage basins were used as independent variables in a multiple-regr...
NASA Astrophysics Data System (ADS)
Wang, Andong; Jiang, Lan; Li, Xiaowei; Wang, Zhi; Du, Kun; Lu, Yongfeng
2018-05-01
Ultrafast laser pulse temporal shaping has been widely applied in various important applications such as laser materials processing, coherent control of chemical reactions, and ultrafast imaging. However, temporal pulse shaping has so far remained an in-lab-only technique due to high cost, low damage threshold, and polarization dependence. Here we propose a novel design for an ultrafast laser pulse train generation device, which consists of multiple polarization-independent parallel-aligned thin films. Various pulse trains with controllable temporal profiles can be generated flexibly by multiple reflections within the splitting films. Compared with other pulse train generation techniques, this method has the advantages of compact structure, low cost, high damage threshold and polarization independence. These advantages endow it with high potential for broad use in ultrafast applications.
NASA Astrophysics Data System (ADS)
Gopinath, T.; Veglia, Gianluigi
2013-05-01
We propose a general method that enables the acquisition of multiple 2D and 3D solid-state NMR spectra for U-13C, 15N-labeled proteins. This method, called MEIOSIS (Multiple ExperIments via Orphan SpIn operatorS), makes it possible to detect four coherence transfer pathways simultaneously, utilizing orphan (i.e., neglected) spin operators of nuclear spin polarization generated during 15N-13C cross polarization (CP). In the MEIOSIS experiments, two phase-encoded free-induction decays are decoded into independent nuclear polarization pathways using Hadamard transformations. As a proof of principle, we show the acquisition of multiple 2D and 3D spectra of U-13C, 15N-labeled microcrystalline ubiquitin. Hadamard decoding of CP coherences into multiple independent spin operators is a new concept in solid-state NMR and is extendable to many other multidimensional experiments. The MEIOSIS method will increase the throughput of solid-state NMR techniques for microcrystalline proteins, membrane proteins, and protein fibrils.
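The Hadamard decoding step at the heart of MEIOSIS can be shown in miniature: two phase-encoded acquisitions carry the sum and the difference of two pathway signals, and a 2x2 Hadamard transform separates them. The vectors below are arbitrary stand-ins for NMR coherences, not real spectra.

```python
# 2x2 Hadamard encoding/decoding of two signal pathways.
import numpy as np

s1 = np.array([1.0, 2.0, 3.0])        # pathway 1 (toy signal)
s2 = np.array([4.0, 5.0, 6.0])        # pathway 2 (toy signal)
fid_plus = s1 + s2                    # scan acquired with phase +
fid_minus = s1 - s2                   # scan acquired with phase -
rec1 = 0.5 * (fid_plus + fid_minus)   # recovers s1
rec2 = 0.5 * (fid_plus - fid_minus)   # recovers s2
assert np.allclose(rec1, s1) and np.allclose(rec2, s2)
```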
MCAID--A Generalized Text Driver.
ERIC Educational Resources Information Center
Ahmed, K.; Dickinson, C. J.
MCAID is a relatively machine-independent technique for writing computer-aided instructional material consisting of descriptive text, multiple choice questions, and the ability to call compiled subroutines to perform extensive calculations. It was specially developed to incorporate test-authoring around complex mathematical models to explore a…
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
An Approach to Economic Dispatch with Multiple Fuels Based on Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Sriyanyong, Pichet
2011-06-01
Particle Swarm Optimization (PSO), a stochastic optimization technique, shows superiority to other evolutionary computation techniques in terms of lower computation time, easy implementation with high-quality solutions, stable convergence characteristics, and independence from initialization. For this reason, this paper proposes the application of PSO to the Economic Dispatch (ED) problem, which occurs in the operational planning of power systems. In this study, the ED problem is categorized according to the characteristics of its cost function: the ED problem with a smooth cost function and the ED problem with multiple fuels. Taking multiple fuels into account makes the problem more realistic. The experimental results show that the proposed PSO algorithm is more efficient than the previous approaches under consideration, as well as highly promising in real-world applications.
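A minimal PSO loop illustrates the technique the paper applies; the inertia and acceleration coefficients, the two-unit quadratic fuel costs, and the penalty used to enforce the demand constraint below are all illustrative assumptions, not the paper's settings.

```python
# Bare-bones particle swarm optimizer on a toy two-unit dispatch problem.
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(3)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                    # respect unit limits
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, cost(g)

demand = 300.0  # MW; quadratic fuel costs, penalty enforces the balance
def cost(p):
    fuel = 0.004*p[0]**2 + 2.0*p[0] + 0.006*p[1]**2 + 1.5*p[1]
    return fuel + 1e3 * abs(p.sum() - demand)

best, f = pso(cost, (np.array([50., 50.]), np.array([250., 250.])))
```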
SIRE: a MIMO radar for landmine/IED detection
NASA Astrophysics Data System (ADS)
Ojowu, Ode; Wu, Yue; Li, Jian; Nguyen, Lam
2013-05-01
Multiple-input multiple-output (MIMO) radar systems have been shown to have significant performance improvements over their single-input multiple-output (SIMO) counterparts. For transmit and receive elements that are collocated, the waveform diversity afforded by this radar is exploited for performance improvements. These improvements include, but are not limited to, improved target detection, improved parameter identifiability and better resolvability. In this paper, we present the Synchronous Impulse Reconstruction (SIRE) ultra-wideband (UWB) radar designed by the Army Research Lab (ARL) for landmine and improvised explosive device (IED) detection as a 2-by-16 MIMO radar (with collocated antennas). Its improvement over its SIMO counterpart in terms of beampattern/cross-range resolution is discussed and demonstrated using simulated data. The limitations of this radar for radio frequency interference (RFI) suppression are also discussed. A relaxation method (RELAX) combined with averaging of multiple realizations of the measured data is presented for RFI suppression; results show no noticeable target signature distortion after suppression. The back-projection (delay-and-sum) data-independent method is used for generating SAR images. A side-lobe minimization technique called recursive side-lobe minimization (RSM) is also discussed for reducing side-lobes in this data-independent approach. We introduce a data-dependent sparsity-based spectral estimation technique called Sparse Learning via Iterative Minimization (SLIM), as well as a data-dependent CLEAN approach, for generating SAR images for the SIRE radar. These data-adaptive techniques show improved side-lobe reduction and resolution for simulated data for the SIRE radar.
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research involves the relationship between 20 topsoil variates that are analyzed prior to planting of paddy at standard fertilizer rates. The data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
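The hybrid idea can be sketched compactly: cluster the observations first, then fit a separate multiple linear regression within each cluster. For a dependency-free illustration, plain k-means stands in for fuzzy c-means, and the "soil" data are synthetic.

```python
# Cluster-then-regress sketch: k-means partition, per-cluster OLS fit.
import numpy as np

def kmeans(X, k=2, iters=50, seed=4):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers)**2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 3))       # stand-in for soil covariates
y = np.where(X[:, 0] > 0, 2.0, -1.0) * X[:, 1] + rng.normal(size=120)

labels = kmeans(X, k=2)
models = {}
for j in np.unique(labels):         # separate regression per cluster
    A = np.column_stack([np.ones((labels == j).sum()), X[labels == j]])
    models[j], *_ = np.linalg.lstsq(A, y[labels == j], rcond=None)
```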
Kashiwayanagi, M; Shimano, K; Kurihara, K
1996-11-04
The responses of single bullfrog olfactory neurons to various odorants were measured with the whole-cell patch clamp, which offers direct information on cellular events, and with the ciliary recording technique, to obtain stable quantitative data from many neurons. A large portion of single olfactory neurons (about 64% and 79% in the whole-cell recording and in the ciliary recording, respectively) responded to many odorants with quite diverse molecular structures, including both odorants previously indicated to be cAMP-dependent (cAMP-increasing) and cAMP-independent odorants. A single odorant elicited a response in many cells; e.g., hedione and citralva elicited responses in 100% and 92%, respectively, of the total neurons examined with the ciliary recording technique. To confirm that a single neuron carries different receptors or transduction pathways, the cross-adaptation technique was applied to single neurons. Application of hedione to a single neuron after desensitization of the current in response to lyral or citralva induced an inward current similar in magnitude to that elicited when hedione was applied alone. It is suggested that most single olfactory neurons carry multiple receptors and at least dual transduction pathways.
Errorless-based techniques can improve route finding in early Alzheimer's disease: a case study.
Provencher, Véronique; Bier, Nathalie; Audet, Thérèse; Gagnon, Lise
2008-01-01
Topographical disorientation is a common and early manifestation of dementia of Alzheimer type, which threatens independence in activities of daily living. Errorless-based techniques appear to be effective in helping patients with amnesia to learn routes, but little is known about their effectiveness in early dementia of Alzheimer type. A 77-year-old woman with dementia of Alzheimer type had difficulty in finding her way around her seniors residence, which reduced her social activities. This study used an ABA design (A is the baseline and B is the intervention) with multiple baselines across routes for going to the rosary (target), laundry, and game rooms (controls). The errorless-based technique intervention was applied to 2 of the 3 routes. Analyses showed significant improvement only for the routes learned with errorless-based techniques. Following the study, the participant increased her topographical knowledge of her surroundings. Route learning interventions based on errorless-based techniques appear to be a promising approach for improving the independence in early dementia of Alzheimer type.
Rapid Vision Correction by Special Operations Forces.
Reynolds, Mark E
This report describes a rapid method of vision correction used by Special Operations Medics in multiple operational engagements. Between 2011 and 2015, Special Operations Medics used an algorithm-driven refraction technique. A standard block of instruction was provided to the medics, along with a packaged kit. The technique was used in multiple operational engagements with host nation military and civilians. Data collected for program evaluation were later analyzed to assess the utility of the technique. Glasses were distributed to 230 patients with complaints of either decreased distance or near (reading) vision. Most patients (84%) with distance complaints achieved corrected binocular vision of 20/40 or better, and 97% of patients with near-vision complaints achieved corrected near-binocular vision of 20/40 or better. There was no statistically significant difference between the percentages of patients achieving 20/40 when medics used the technique under direct supervision versus independent use. A basic refraction technique using a designed kit allows for meaningful improvement in distance and/or near vision at austere locations. Special Operations Medics can leverage this approach after specific training with minimal time commitment. It can serve as a rapid, effective intervention with multiple applications in diverse operational environments.
Multiple signal classification algorithm for super-resolution fluorescence microscopy
Agarwal, Krishna; Macháň, Radek
2016-01-01
Single-molecule localization techniques are restricted by long acquisition and computational times, or the need for special fluorophores or biologically toxic photochemical environments. Here we propose a statistical super-resolution technique of wide-field fluorescence microscopy we call the multiple signal classification algorithm, which has several advantages. It provides resolution down to at least 50 nm, requires fewer frames and lower excitation power, and works even at high fluorophore concentrations. Further, it works with any fluorophore that exhibits blinking on the timescale of the recording. The multiple signal classification algorithm shows comparable or better performance in comparison with single-molecule localization techniques and four contemporary statistical super-resolution methods for experiments on in vitro actin filaments and other independently acquired experimental data sets. We also demonstrate super-resolution at timescales of 245 ms (using 49 frames acquired at 200 frames per second) in samples of live-cell microtubules and live-cell actin filaments imaged without imaging buffers. PMID:27934858
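The method takes its name from the classic MUSIC estimator, whose signal/noise-subspace idea is easy to demonstrate on a 1D frequency-estimation problem. The sketch below is textbook MUSIC on synthetic sinusoids, not the authors' imaging code.

```python
# Classic MUSIC: eigendecompose a snapshot covariance, scan steering
# vectors against the noise subspace, read peaks off the pseudo-spectrum.
import numpy as np

rng = np.random.default_rng(6)
n, m, K = 200, 20, 2                    # samples, window length, sources
f_true = np.array([0.12, 0.17])
t = np.arange(n)
x = sum(np.exp(2j*np.pi*f*t) for f in f_true)
x = x + 0.3*(rng.normal(size=n) + 1j*rng.normal(size=n))

snaps = np.array([x[i:i+m] for i in range(n - m)])   # overlapping windows
R = snaps.conj().T @ snaps / len(snaps)              # sample covariance
w, V = np.linalg.eigh(R)
En = V[:, :m-K]                          # noise subspace (smallest eigs)

freqs = np.linspace(0.0, 0.5, 1000)
a = np.exp(2j*np.pi*np.outer(np.arange(m), freqs))   # steering vectors
pseudo = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0)**2
# pseudo-spectrum shows sharp peaks near f_true = 0.12 and 0.17
```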
NASA Astrophysics Data System (ADS)
Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Zi, Yanyang; Yan, Ruqiang
2017-09-01
The gearbox of a wind turbine (WT) has a dominant failure rate and the highest downtime loss among all WT subsystems. Thus, gearbox health assessment for maintenance cost reduction is of paramount importance. The concurrence of multiple faults in gearbox components is a common phenomenon due to fault induction mechanisms. This problem should be considered before planning to replace the components of the WT gearbox. Therefore, the key fault patterns should be reliably identified from noisy observation data for the development of an effective maintenance strategy. However, most of the existing studies focusing on multiple fault diagnosis suffer from inappropriate division of fault information in order to satisfy various rigorous decomposition principles or statistical assumptions, such as the smooth envelope principle of ensemble empirical mode decomposition and the mutual independence assumption of independent component analysis. Thus, this paper presents a joint subspace learning-based multiple fault detection (JSL-MFD) technique to construct different subspaces adaptively for different fault patterns. Its main advantage is its capability to learn multiple fault subspaces directly from the observation signal itself. It can also sparsely concentrate the feature information into a few dominant subspace coefficients. Furthermore, it can eliminate noise by simply performing coefficient shrinkage operations. Consequently, multiple fault patterns are reliably identified by utilizing the maximum fault information criterion. The superiority of JSL-MFD in multiple fault separation and detection is comprehensively investigated and verified by the analysis of a data set from a 750 kW WT gearbox. Results show that JSL-MFD is superior to a state-of-the-art technique in detecting hidden fault patterns and enhancing detection accuracy.
Real-time optical holographic tracking of multiple objects
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Liu, Hua-Kuang
1989-01-01
A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.
Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei
2017-09-11
Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
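The stability-based ranking idea can be sketched as follows: run FastICA several times with different seeds, then score each component of a reference run by its best absolute correlation with the components of the other runs. This uses scikit-learn's FastICA on synthetic mixtures; the full MSTD procedure involves more machinery (scanning over effective dimensions) than shown here.

```python
# Stability scores for ICA components across repeated runs.
import numpy as np
from sklearn.decomposition import FastICA

def ica_runs(X, n_comp, n_runs=5):
    runs = []
    for seed in range(n_runs):
        ica = FastICA(n_components=n_comp, random_state=seed, max_iter=1000)
        runs.append(ica.fit_transform(X))          # (samples, n_comp)
    return runs

def stability(runs):
    ref = runs[0]
    scores = np.zeros(ref.shape[1])
    for other in runs[1:]:
        c = np.abs(np.corrcoef(ref.T, other.T)[:ref.shape[1], ref.shape[1]:])
        scores += c.max(axis=1)                    # best match per component
    return scores / (len(runs) - 1)

rng = np.random.default_rng(7)
S = rng.laplace(size=(500, 4))                     # non-Gaussian sources
X = S @ rng.normal(size=(4, 30))                   # mixed "expression" data
print(np.sort(stability(ica_runs(X, n_comp=4)))[::-1])
```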
Chantler, C T; Islam, M T; Rae, N A; Tran, C Q; Glover, J L; Barnea, Z
2012-03-01
An extension of the X-ray extended-range technique is described for measuring X-ray mass attenuation coefficients by introducing absolute measurement of a number of foils - the multiple independent foil technique. Illustrating the technique with the results of measurements for gold in the 38-50 keV energy range, it is shown that its use enables selection of the most uniform and well defined of available foils, leading to more accurate measurements; it allows one to test the consistency of independently measured absolute values of the mass attenuation coefficient with those obtained by the thickness transfer method; and it tests the linearity of the response of the counter and counting chain throughout the range of X-ray intensities encountered in a given experiment. In light of the results for gold, the strategy to be ideally employed in measuring absolute X-ray mass attenuation coefficients, X-ray absorption fine structure and related quantities is discussed.
An operator calculus for surface and volume modeling
NASA Technical Reports Server (NTRS)
Gordon, W. J.
1984-01-01
The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
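A miniature of the two-step scheme, under strong simplifying assumptions: empirical quantile mapping of equal-length series, scikit-learn's FastICA in place of the authors' implementation, and synthetic data standing in for GCM output and observations.

```python
# Step 1: bias correct ICA components; step 2: bias correct per grid cell.
import numpy as np
from sklearn.decomposition import FastICA

def quantile_map(model, obs):
    """Map 'model' values onto the empirical distribution of 'obs'."""
    ranks = np.argsort(np.argsort(model))
    return np.sort(obs)[ranks]

rng = np.random.default_rng(8)
obs = rng.normal(size=(1000, 5))                       # 5 "grid cells"
gcm = 1.5 * obs @ rng.normal(0.8, 0.2, (5, 5)) + 0.5   # biased model run

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
S_gcm = ica.fit_transform(gcm)                 # independent components
S_obs = ica.transform(obs)
S_corr = np.column_stack([quantile_map(S_gcm[:, j], S_obs[:, j])
                          for j in range(5)])  # correct each component
spatial = ica.inverse_transform(S_corr)        # back-transform
corrected = np.column_stack([quantile_map(spatial[:, j], obs[:, j])
                             for j in range(5)])  # grid-scale correction
```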
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion rests on second-order statistics, and it cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. In order to solve the above problems, we combine the feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, in order to match the predicted multiples to the real multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filtering method to obtain a more accurate matching result. Finally, we apply an improved fast ICA algorithm, based on the maximum non-Gaussianity criterion of the output signal, to the matched multiples and obtain better separation of the primaries and the multiples. The advantage of our method is that we do not need any prior information to predict the multiples, and we can achieve a better separation result. The method has been applied to several synthetic data sets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain accurate multiple predictions. Using our matching method and fast-ICA adaptive multiple subtraction, we can not only effectively preserve the primary (effective) energy in the seismic records but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
GMTI Direction of Arrival Measurements from Multiple Phase Centers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.; Bickel, Douglas L.
2015-03-01
Ground Moving Target Indicator (GMTI) radar attempts to detect and locate targets with unknown motion. Very slow-moving targets are difficult to locate in the presence of surrounding clutter. This necessitates multiple antenna phase centers (or equivalent) to offer independent Direction of Arrival (DOA) measurements. DOA accuracy and precision generally remain dependent on target Signal-to-Noise Ratio (SNR), Clutter-to-Noise Ratio (CNR), scene topography, interfering signals, and a number of antenna parameters. This is true even for adaptive techniques like Space-Time Adaptive Processing (STAP) algorithms.
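For two phase centers, the basic phase-comparison DOA relation is Δφ = 2πd sin(θ)/λ. The sketch below inverts this for a half-wavelength baseline, where the measurement is unambiguous; all numbers are illustrative, not parameters from the report.

```python
# Phase-comparison (interferometric) DOA from two phase centers.
import numpy as np

c = 3e8
lam = c / 10e9                  # X-band wavelength ~3 cm (illustrative)
d = lam / 2                     # half-wavelength spacing: no phase wraps
theta_true = np.deg2rad(5.0)

rng = np.random.default_rng(9)
dphi = 2*np.pi*d*np.sin(theta_true)/lam + 0.02*rng.normal()  # "measured"
theta_hat = np.arcsin(dphi * lam / (2*np.pi*d))
print(np.rad2deg(theta_hat))    # ~5 degrees; error shrinks with SNR
```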
NASA Technical Reports Server (NTRS)
Vazirani, P.
1995-01-01
The process of combining telemetry signals received at multiple antennas, commonly referred to as arraying, can be used to improve communication link performance in the Deep Space Network (DSN). By coherently adding telemetry from multiple receiving sites, arraying produces an enhancement in signal-to-noise ratio (SNR) over that achievable with any single antenna in the array. A number of different techniques for arraying have been proposed and their performances analyzed in past literature. These analyses have compared different arraying schemes under the assumption that the signals contain additive white Gaussian noise (AWGN) and that the noise observed at distinct antennas is independent. In situations where an unwanted background body is visible to multiple antennas in the array, however, the assumption of independent noises is no longer applicable. A planet with significant radiation emissions in the frequency band of interest can be one such source of correlated noise. For example, during much of Galileo's tour of Jupiter, the planet will contribute significantly to the total system noise at various ground stations. This article analyzes the effects of correlated noise on two arraying schemes currently being considered for DSN applications: full-spectrum combining (FSC) and complex-symbol combining (CSC). A framework is presented for characterizing the correlated noise based on physical parameters, and the impact of the noise correlation on the array performance is assessed for each scheme.
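The effect analyzed here can be illustrated with the standard SNR-optimal combiner: with per-antenna signal gains g and noise covariance R, the weights w = R⁻¹g maximize output SNR, and ignoring the off-diagonal (correlated) part of R costs performance. The two-antenna numbers below are illustrative, not DSN parameters.

```python
# Optimal combining under correlated noise vs. correlation-blind weights.
import numpy as np

g = np.array([1.0, 0.8])                 # relative signal amplitudes
R = np.array([[1.0, 0.9],                # strong noise correlation from a
              [0.9, 1.0]])               # shared background source
w = np.linalg.solve(R, g)                # w = R^{-1} g (optimal)

def out_snr(w, g, R):
    return (w @ g)**2 / (w @ R @ w)

print(out_snr(w, g, R))                  # optimal output SNR
print(out_snr(g / np.diag(R), g, R))     # lower: weights ignore correlation
```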
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
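The least-squares line described above has a closed form: slope = cov(x, y)/var(x) and intercept = ȳ − slope·x̄. A minimal sketch on synthetic data, standing in for the article's clinical examples:

```python
# Method of least squares for simple linear regression, closed form.
import numpy as np

rng = np.random.default_rng(10)
x = rng.uniform(0, 10, 50)
y = 2.0 + 0.7*x + rng.normal(scale=0.5, size=50)

slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
intercept = y.mean() - slope * x.mean()
resid = y - (intercept + slope*x)
r2 = 1 - resid.var() / y.var()        # fraction of variance explained
```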
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
NASA Astrophysics Data System (ADS)
Oishi, Masaki; Shinozaki, Tomohisa; Hara, Hikaru; Yamamoto, Kazunuki; Matsusue, Toshio; Bando, Hiroyuki
2018-05-01
The elliptical polarization dependence of the two-photon absorption coefficient β in InP has been measured by the extended Z-scan technique for thick materials in the wavelength range from 1640 to 1800 nm. The analytical formula of the Z-scan technique has been extended with consideration of multiple reflections. The Z-scan results have been fitted very well by the formula and β has been evaluated accurately. The three independent elements of the third-order nonlinear susceptibility tensor in InP have also been determined accurately from the elliptical polarization dependence of β.
Hsu, Guo-Liang; Tang, Jung-Chang; Hwang, Wu-Yuin
2014-08-01
The one-more-than technique is an effective strategy for individuals with intellectual disabilities (ID) to use when making purchases. However, the heavy cognitive demands of money counting skills potentially limit how individuals with ID shop. This study employed a multiple-probe design across participants and settings, via the assistance of a mobile purchasing assistance system (MPAS), to assess the effectiveness of the one-more-than technique on independent purchases for items with prices beyond the participants' money counting skills. Results indicated that the techniques with the MPAS could effectively convert participants' initial money counting problems into useful advantages for successfully promoting the independent purchasing skills of three secondary school students with ID. Also noteworthy is the fact that mobile technologies could be a permanent prompt for those with ID to make purchases in their daily lives. The treatment effects could be maintained for eight weeks and generalized across three community settings. Implications for practice and future studies are provided.
Simplified Phased-Mission System Analysis for Systems with Independent Component Repairs
NASA Technical Reports Server (NTRS)
Somani, Arun K.
1996-01-01
Accurate reliability analysis of a system requires accounting for all major variations in the system's operation. Most reliability analyses assume that the system configuration, success criteria, and component behavior remain the same throughout a mission. However, multiple phases are natural. We present a new computationally efficient technique for the analysis of phased-mission systems in which the operational states of a system can be described by combinations of component states (such as fault trees or assertions). Moreover, individual components may be repaired, if failed, as part of system operation, but repairs are independent of the system state. For repairable systems, Markov analysis techniques are used, but they suffer from state-space explosion, which limits the size of system that can be analyzed and makes the computation expensive. We avoid the state-space explosion. The phase algebra is used to account for the effects of variable configurations, repairs, and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. We demonstrate our technique by means of several examples and present numerical results to show the effects of phases and repairs on system reliability/availability.
Surface albedo from bidirectional reflectance
NASA Technical Reports Server (NTRS)
Ranson, K. J.; Irons, J. R.; Daughtry, C. S. T.
1991-01-01
The validity of integrating over discrete wavelength bands is examined to estimate total shortwave bidirectional reflectance of vegetated and bare soil surfaces. Methods for estimating albedo from multiple angle, discrete wavelength band radiometer measurements are studied. These methods include a numerical integration technique and the integration of an empirically derived equation for bidirectional reflectance. It is concluded that shortwave albedos estimated through both techniques agree favorably with the independent pyranometer measurements. Absolute rms errors are found to be 0.5 percent or less for both grass sod and bare soil surfaces.
NASA Technical Reports Server (NTRS)
DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.
2013-01-01
A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
Determining the metallicity of the solar envelope using seismic inversion techniques
NASA Astrophysics Data System (ADS)
Buldgen, G.; Salmon, S. J. A. J.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.
2017-11-01
The solar metallicity issue is a long-standing problem of astrophysics, impacting multiple fields and still subject to debate and uncertainties. While spectroscopy has mostly been used to determine the solar heavy element abundance, helioseismologists have attempted to provide a seismic determination of the metallicity in the solar convective envelope. However, the puzzle remains, since two independent groups provided two radically different values for this crucial astrophysical parameter. We aim at providing an independent seismic measurement of the solar metallicity in the convective envelope. Our main goal is to help provide new information to break the current stalemate amongst seismic determinations of the solar heavy element abundance. We start by presenting the kernels, the inversion technique and the target function of the inversion we have developed. We then test our approach in multiple hare-and-hounds exercises to assess its reliability and accuracy. We then apply our technique to solar data using calibrated solar models and determine an interval of seismic measurements for the solar metallicity. We show that our inversion can indeed be used to estimate the solar metallicity thanks to our hare-and-hounds exercises. However, we also show that further dependencies in the physical ingredients of solar models lead to a low accuracy. Nevertheless, using various physical ingredients for our solar models, we determine metallicity values between 0.008 and 0.014.
Improving Systematic Constraint-driven Analysis Using Incremental and Parallel Techniques
2012-05-01
Chin, Ki Jinn; Alakkad, Husni; Cubillos, Javier E
2013-08-08
Regional anaesthesia comprising axillary block of the brachial plexus is a common anaesthetic technique for distal upper limb surgery. This is an update of a review first published in 2006 and updated in 2011. To compare the relative effects (benefits and harms) of three injection techniques (single, double and multiple) of axillary block of the brachial plexus for distal upper extremity surgery. We considered these effects primarily in terms of anaesthetic effectiveness; the complication rate (neurological and vascular); and pain and discomfort caused by performance of the block. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library), MEDLINE, EMBASE and reference lists of trials. We contacted trial authors. The date of the last search was March 2013 (updated from March 2011). We included randomized controlled trials that compared double with single-injection techniques, multiple with single-injection techniques, or multiple with double-injection techniques for axillary block in adults undergoing surgery of the distal upper limb. We excluded trials using ultrasound-guided techniques. Independent study selection, risk of bias assessment and data extraction were performed by at least two investigators. We undertook meta-analysis. The 21 included trials involved a total of 2148 participants who received regional anaesthesia for hand, wrist, forearm or elbow surgery. Risk of bias assessment indicated that trial design and conduct were generally adequate; the most common areas of weakness were in blinding and allocation concealment. Eight trials comparing double versus single injections showed a statistically significant decrease in primary anaesthesia failure (risk ratio (RR) 0.51, 95% confidence interval (CI) 0.30 to 0.85). Subgroup analysis by method of nerve location showed that the effect size was greater when neurostimulation was used rather than the transarterial technique. Eight trials comparing multiple with single injections showed a statistically significant decrease in primary anaesthesia failure (RR 0.25, 95% CI 0.14 to 0.44) and of incomplete motor block (RR 0.61, 95% CI 0.39 to 0.96) in the multiple injection group. Eleven trials comparing multiple with double injections showed a statistically significant decrease in primary anaesthesia failure (RR 0.28, 95% CI 0.20 to 0.40) and of incomplete motor block (RR 0.55, 95% CI 0.36 to 0.85) in the multiple injection group. Tourniquet pain was significantly reduced with multiple injections compared with double injections (RR 0.53, 95% CI 0.33 to 0.84). Otherwise there were no statistically significant differences between groups in any of the three comparisons on secondary analgesia failure, complications and patient discomfort. The time for block performance was significantly shorter for single and double injections compared with multiple injections. This review provides evidence that multiple-injection techniques using nerve stimulation for axillary plexus block produce more effective anaesthesia than either double or single-injection techniques. However, there was insufficient evidence for a significant difference in other outcomes, including safety.
The use of XFEM to assess the influence of intra-cortical porosity on crack propagation.
Rodriguez-Florez, Naiara; Carriero, Alessandra; Shefelbine, Sandra J
2017-03-01
This study aimed at using eXtended finite element method (XFEM) to characterize crack growth through bone's intra-cortical pores. Two techniques were compared using Abaqus: (1) void material properties were assigned to pores; (2) multiple enrichment regions with independent crack-growth possibilities were employed. Both were applied to 2D models of transverse images of mouse bone with differing porous structures. Results revealed that assigning multiple enrichment regions allows for multiple cracks to be initiated progressively, which cannot be captured when the voids are filled. Therefore, filling pores with one enrichment region in the model will not create realistic fracture patterns in Abaqus-XFEM.
Joint Channel and Phase Noise Estimation in MIMO-OFDM Systems
NASA Astrophysics Data System (ADS)
Ngebani, I. M.; Chuma, J. M.; Zibani, I.; Matlotse, E.; Tsamaase, K.
2017-05-01
The combination of multiple-input multiple-output (MIMO) techniques with orthogonal frequency division multiplexing (OFDM), MIMO-OFDM, is a promising way of achieving high spectral efficiency in wireless communication systems. However, the performance of MIMO-OFDM systems is highly degraded by radio frequency (RF) impairments such as phase noise. Similar to the single-input single-output (SISO) case, phase noise in MIMO-OFDM systems results in a common phase error (CPE) and inter-carrier interference (ICI). In this paper, the problem of joint channel and phase noise estimation is tackled in a system with multiple transmit and receive antennas where each antenna is equipped with its own independent oscillator. The technique employed makes use of a novel placement of pilot carriers in the preamble and data portion of the MIMO-OFDM frame. Numerical results using 16- and 64-point quadrature amplitude modulation (QAM) schemes are provided to illustrate the effectiveness of the proposed scheme for MIMO-OFDM systems.
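The CPE part of the problem has a simple single-antenna illustration: with known pilots and an equalized channel, the maximum-likelihood estimate of the common phase error is the angle of the pilot correlation. The sketch below shows this toy case, not the paper's joint MIMO estimator.

```python
# CPE estimation from pilot subcarriers in one OFDM symbol.
import numpy as np

rng = np.random.default_rng(11)
pilots_tx = 1.0 - 2.0*rng.integers(0, 2, size=8)   # known BPSK pilots
cpe_true = 0.3                                     # common phase error (rad)
noise = 0.05*(rng.normal(size=8) + 1j*rng.normal(size=8))
pilots_rx = pilots_tx * np.exp(1j*cpe_true) + noise

# ML estimate: angle of the correlation between received and sent pilots.
cpe_hat = np.angle(np.sum(pilots_rx * np.conj(pilots_tx)))
symbols_corrected = pilots_rx * np.exp(-1j*cpe_hat)   # de-rotate symbol
```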
QCD tests with SLD and polarized beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, M.G.
1994-12-01
The author presents a measurement of the strong coupling α_s derived from multijet rates using data collected by the SLD experiment at SLAC, and finds that α_s(M_Z²) = 0.118 ± 0.002 (stat.) ± 0.003 (syst.) ± 0.010 (theory). He presents tests of the flavor independence of strong interactions via preliminary measurements of the ratios α_s(b)/α_s(udsc) and α_s(uds)/α_s(bc). In addition, the group has measured the difference in charged particle multiplicity between Z⁰ → bb̄ and Z⁰ → uū, dd̄, ss̄ events, and finds that it supports the prediction of perturbative QCD that the multiplicity difference be independent of center-of-mass energy. Finally, the group has made a preliminary study of jet polarization using the jet handedness technique.
Automated Track Recognition and Event Reconstruction in Nuclear Emulsion
NASA Technical Reports Server (NTRS)
Deines-Jones, P.; Cherry, M. L.; Dabrowska, A.; Holynski, R.; Jones, W. V.; Kolganova, E. D.; Kudzia, D.; Nilsen, B. S.; Olszewski, A.; Pozharova, E. A.;
1998-01-01
The major advantages of nuclear emulsion for detecting charged particles are its submicron position resolution and sensitivity to minimum ionizing particles. These must be balanced, however, against the difficult manual microscope measurement by skilled observers required for the analysis. We have developed an automated system to acquire and analyze the microscope images from emulsion chambers. Each emulsion plate is analyzed independently, allowing coincidence techniques to be used in order to reject background and estimate error rates. The system has been used to analyze a sample of high-multiplicity Pb-Pb interactions (charged particle multiplicities approx. 1100) produced by the 158 GeV/c per nucleon Pb-208 beam at CERN. Automatically reconstructed track lists agree with our best manual measurements to 3%. We describe the image analysis and track reconstruction techniques, and discuss the measurement and reconstruction uncertainties.
Tactics, Techniques, and Procedures (TTP) for Migrant Camp Operations
1995-04-15
interchangeable and, therefore, usually do not degrade a combat unit if tasked to deploy independently. Also, the Air Force frequently tasks composite...Prime BEEF teams from multiple bases rather than degrade a combat unit's capabilities. (2) Horizontal construction capabilities, usually airfield or...special understanding and sympathy. They should receive all necessary assistance, and they should not be subject to cruel, inhumane, or degrading
Fast matrix multiplication and its algebraic neighbourhood
NASA Astrophysics Data System (ADS)
Pan, V. Ya.
2017-11-01
Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent has still not passed through the barrier of 2.373, but most disturbing was the curse of recursion — even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
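Strassen's 1969 algorithm, the starting point of the story above, is compact enough to state in full: seven recursive half-size products instead of eight give the exponent log2(7) ≈ 2.807. A sketch for power-of-two sizes, falling back to the classical product on small blocks:

```python
# Strassen's seven-multiplication recursion for 2^k x 2^k matrices.
import numpy as np

def strassen(A, B, leaf=64):
    n = A.shape[0]
    if n <= leaf:                       # classical product on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.default_rng(12).normal(size=(256, 256))
B = np.random.default_rng(13).normal(size=(256, 256))
assert np.allclose(strassen(A, B), A @ B)
```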
An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Watts, Stephen R.; Garg, Sanjay
1995-01-01
This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H∞-based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.
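As a generic illustration of the windup phenomenon this synthesis protects against (not the authors' optimization-based method; the PI structure, gains, and limits below are arbitrary), a single loop with conditional-integration anti-windup can be sketched as:

```python
def pi_step(err, integ, kp=2.0, ki=0.5, dt=0.01, u_min=-1.0, u_max=1.0):
    """One step of a PI controller with conditional-integration anti-windup.

    `integ` is the integrator state. Integration is frozen whenever the
    actuator is saturated and the error would drive the command further
    into saturation -- the windup scenario integrator protection avoids.
    """
    u_unsat = kp * err + ki * integ
    u = min(max(u_unsat, u_min), u_max)
    saturated = u != u_unsat
    if not (saturated and err * u_unsat > 0):
        integ += err * dt  # integrate only when it cannot wind up
    return u, integ
```

Without the conditional test, the integrator keeps accumulating during saturation and the loop overshoots badly once the limit releases, which is the transient behavior that windup protection schemes are designed to suppress.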
Hardcastle, Sarah J; Fortier, Michelle; Blake, Nicola; Hagger, Martin S
2017-03-01
Motivational interviewing (MI) is a complex intervention comprising multiple techniques aimed at changing health-related motivation and behaviour. However, MI techniques have not been systematically isolated and classified. This study aimed to identify the techniques unique to MI, classify them as content-related or relational, and evaluate the extent to which they overlap with techniques from the behaviour change technique taxonomy version 1 [BCTTv1; Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., … Wood, C. E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46, 81-95]. Behaviour change experts (n = 3) content-analysed MI techniques based on Miller and Rollnick's [(2013). Motivational interviewing: Preparing people for change (3rd ed.). New York: Guilford Press] conceptualisation. Each technique was then coded for independence and uniqueness by independent experts (n = 10). The experts also compared each MI technique to those from the BCTTv1. Experts identified 38 distinct MI techniques with high agreement on clarity, uniqueness, preciseness, and distinctiveness ratings. Of the identified techniques, 16 were classified as relational techniques. The remaining 22 techniques were classified as content based. Sixteen of the MI techniques were identified as having substantial overlap with techniques from the BCTTv1. The isolation and classification of MI techniques will provide researchers with the necessary tools to clearly specify MI interventions and test the main and interactive effects of the techniques on health behaviour. The distinction between relational and content-based techniques within MI is also an important advance, recognising that changes in motivation and behaviour in MI are a function of both intervention content and the interpersonal style in which the content is delivered.
Willner, Alan E; Ren, Yongxiong; Xie, Guodong; Yan, Yan; Li, Long; Zhao, Zhe; Wang, Jian; Tur, Moshe; Molisch, Andreas F; Ashrafi, Solyman
2017-02-28
There is a continuing growth in the demand for data bandwidth, and the multiplexing of multiple independent data streams has the potential to provide the needed data capacity. One technique uses the spatial domain of an electromagnetic (EM) wave, and space division multiplexing (SDM) has become increasingly important for increased transmission capacity and spectral efficiency of a communication system. A subset of SDM is mode division multiplexing (MDM), in which multiple orthogonal beams each on a different mode can be multiplexed. A potential modal basis set to achieve MDM is to use orbital angular momentum (OAM) of EM waves. In such a system, multiple OAM beams each carrying an independent data stream are multiplexed at the transmitter, propagate through a common medium and are demultiplexed at the receiver. As a result, the total capacity and spectral efficiency of the communication system can be multiplied by a factor equal to the number of transmitted OAM modes. Over the past few years, progress has been made in understanding the advantages and limitations of using multiplexed OAM beams for communication systems. In this review paper, we highlight recent advances in the use of OAM multiplexing for high-capacity free-space optical and millimetre-wave communications. We discuss different technical challenges (e.g. atmospheric turbulence and crosstalk) as well as potential techniques to mitigate such degrading effects. This article is part of the themed issue 'Optical orbital angular momentum'.
Dumas, Anne Marie; Girard, Raphaële; Ayzac, Louis; Caillat-Vallet, Emmanuelle; Tissot-Guerraz, Françoise; Vincent-Bouletreau, Agnès; Berland, Michel
2009-12-01
Our purpose was to evaluate maternal nosocomial infection rates according to the incision technique used for caesarean delivery, in a routine surveillance study. This was a prospective study of 5123 cesarean deliveries (43.2% Joel-Cohen, 56.8% Pfannenstiel incisions) in 35 maternity units (Mater Sud Est network). Data on routine surveillance variables, operative duration, and three additional variables (manual removal of the placenta, uterine exteriorization, and/or cleaning of the parieto-colic gutter) were collected. Multiple logistic regression analysis was used to identify independent risk factors for infection. The overall nosocomial infection and endometritis rates were higher for the Joel-Cohen than the Pfannenstiel incision (4.5% vs. 3.3% and 0.8% vs. 0.3%, respectively). The higher rate of nosocomial infections with the Joel-Cohen incision was due to a greater proportion of patients presenting risk factors (i.e., emergency delivery, primary cesarean, blood loss ≥800 mL, no manual removal of the placenta, and no uterine exteriorization). However, the Joel-Cohen technique was an independent risk factor for endometritis. The Joel-Cohen technique is faster than the Pfannenstiel technique but is associated with a higher incidence of endometritis.
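A multiple logistic regression of the kind used here can be sketched with statsmodels; everything below (file name, column names) is illustrative rather than the study's actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical columns standing in for the surveillance variables.
df = pd.read_csv("cesarean_surveillance.csv")
X = sm.add_constant(df[["joel_cohen", "emergency_delivery",
                        "primary_cesarean", "blood_loss_ge_800ml"]])
y = df["endometritis"]  # binary outcome

model = sm.Logit(y, X).fit()
# Adjusted odds ratios (with 95% CIs) flag independent risk factors.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```

A covariate whose adjusted odds ratio confidence interval excludes 1 is retained as an independent risk factor, which is how an incision technique can remain associated with endometritis after adjustment for the other variables.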
NASA Astrophysics Data System (ADS)
Thornton, P. E.; Nacp Site Synthesis Participants
2010-12-01
The North American Carbon Program (NACP) synthesis effort includes an extensive intercomparison of modeled and observed ecosystem states and fluxes performed with multiple models across multiple sites. The participating models span a range of complexity and intended application, while the participating sites cover a broad range of natural and managed ecosystems in North America, from the subtropics to arctic tundra, and coastal to interior climates. A unique characteristic of this collaborative effort is that multiple independent observations are available at all sites: fluxes are measured with the eddy covariance technique, and standard biometric and field sampling methods provide estimates of standing stock and annual production in multiple categories. In addition, multiple modeling approaches are employed to make predictions at each site, varying, for example, in the use of diagnostic vs. prognostic leaf area index. Given multiple independent observational constraints and multiple classes of model, we evaluate the internal consistency of observations at each site, and use this information to extend previously derived estimates of uncertainty in the flux observations. Model results are then compared with all available observations, and models are ranked according to their consistency with each type of observation (high frequency flux measurement, carbon stock, annual production). We demonstrate a range of internal consistency across the sites, and show that some models which perform well against one observational metric perform poorly against others. We use this analysis to construct a hypothesis for combining eddy covariance, biometrics, and other standard physiological and ecological measurements which, as data collection proceeds over several years, would present an increasingly challenging target for next generation models.
Tomographic phase microscopy and its biological applications
NASA Astrophysics Data System (ADS)
Choi, Wonshik
2012-12-01
Conventional interferometric microscopy techniques such as digital holographic microscopy and quantitative phase microscopy are often classified as 3D imaging techniques because a recorded complex field image can be numerically propagated to a different depth. In a strict sense, however, a single complex field image contains only 2D information on a specimen. The measured 2D image is only a subset of the 3D structure. For the 3D mapping of an object, multiple independent 2D images must be taken, for example at multiple incident angles or wavelengths, and then combined by so-called optical diffraction tomography (ODT). This Letter reviews tomographic phase microscopy (TPM), which experimentally realizes the concept of ODT for the 3D mapping of biological cells in their native state, and introduces some of its interesting biological and biomedical applications.
NASA Astrophysics Data System (ADS)
Riffle, Michael; Merrihew, Gennifer E.; Jaschob, Daniel; Sharma, Vagisha; Davis, Trisha N.; Noble, William S.; MacCoss, Michael J.
2015-11-01
Regulation of protein abundance is a critical aspect of cellular function, organism development, and aging. Alternative splicing may give rise to multiple possible proteoforms of gene products where the abundance of each proteoform is independently regulated. Understanding how the abundances of these distinct gene products change is essential to understanding the underlying mechanisms of many biological processes. Bottom-up proteomics mass spectrometry techniques may be used to estimate protein abundance indirectly by sequencing and quantifying peptides that are later mapped to proteins based on sequence. However, quantifying the abundance of distinct gene products is routinely confounded by peptides that map to multiple possible proteoforms. In this work, we describe a technique that may be used to help mitigate the effects of confounding ambiguous peptides and multiple proteoforms when quantifying proteins. We have applied this technique to visualize the distribution of distinct gene products for the whole proteome across 11 developmental stages of the model organism Caenorhabditis elegans. The result is a large multidimensional dataset for which web-based tools were developed for visualizing how translated gene products change during development and identifying possible proteoforms. The underlying instrument raw files and tandem mass spectra may also be downloaded. The data resource is freely available on the web at http://www.yeastrc.org/wormpes/.
The meaning of "independence" for older people in different residential settings.
Hillcoat-Nallétamby, Sarah
2014-05-01
Drawing on older people's understandings of "independence" and Collopy's work on autonomy, the article elaborates an interpretive framework of the concept in relation to 3 residential settings: the private dwelling-home, the extra-care setting, and the residential-care setting. Data include 91 qualitative interviews with frail older people living in each setting, collected as part of a larger Welsh study. Thematic analysis techniques were employed to identify patterns in meanings of independence across settings, which were then interpreted using Collopy's conceptualizations of autonomy as well as notions of space and interdependencies. Independence has multiple meanings for older people, but certain meanings are common to all settings: accepting help at hand; doing things alone; having family, friends, and money as resources; and preserving physical and mental capacities. Concepts of delegated, executional, authentic, decisional, and consumer autonomy, as well as social interdependencies and spatial and social independence, do provide appropriate higher order interpretive constructs of these meanings across settings. A broader interpretive framework of "independence" should encompass concepts of relative independence, autonomy(ies), as well as spatial and social independence, and can provide more nuanced interpretations of structured dependency and institutionalization theories when applied to different residential settings.
Multilateral Telecoordinated Control of Multiple Robots With Uncertain Kinematics.
Zhai, Di-Hua; Xia, Yuanqing
2017-06-06
This paper addresses the telecoordinated control of multiple robots in the simultaneous presence of asymmetric time-varying delays, nonpassive external forces, and uncertain kinematics/dynamics. To achieve the control objective, a neuroadaptive controller utilizing prescribed performance control and switching control techniques is developed, where the basic idea is to employ the concept of motion synchronization in each pair of master-slave robots and among all slave robots. Using the multiple Lyapunov-Krasovskii functionals method, the state-independent input-to-output practical stability of the closed-loop system is established. Compared with previous approaches, the new design is straightforward, easier to implement, and applicable to a wider range of scenarios. Simulation results on three pairs of three degrees-of-freedom robots confirm the theoretical findings.
NASA Astrophysics Data System (ADS)
Şahiner, Eren; Meriç, Niyazi; Polymeris, George S.
2017-02-01
Equivalent dose (De) estimation constitutes the most important part of both trap-charge dating techniques and dosimetry applications. In the present work, multiple independent equivalent dose estimation approaches were adopted, using both luminescence and ESR techniques; two different minerals were studied, namely quartz as well as feldspathic polymineral samples. The work is divided into three independent parts, depending on the type of signal employed. Firstly, different De estimation approaches were carried out on both polymineral and contaminated quartz, using single aliquot regenerative dose protocols employing conventional OSL and IRSL signals acquired at different temperatures. Secondly, ESR equivalent dose estimations using the additive dose procedure both at room temperature and at 90 K are discussed. Lastly, for the first time in the literature, a single aliquot regenerative protocol employing a thermally assisted OSL signal originating from Very Deep Traps was applied to natural minerals. Rejection criteria such as recycling and recovery ratios are also presented. The SAR protocol, whenever applied, provided compatible De estimations with great accuracy, independent of either the type of mineral or the stimulation temperature. Low temperature ESR signals resulting from Al and Ti centers indicate very large De values, due to bleaching inability, associated with large uncertainty values. Additionally, dose saturation of the different approaches was investigated. For the signal arising from Very Deep Traps in quartz, saturation is extended by almost one order of magnitude. It is interesting that most of the De values yielded using different luminescence signals agree with each other and that the ESR Ge center has very large D0 values. The results presented above strongly support the argument that the stability and the initial ESR signal of the Ge center are highly sample-dependent, without any instability problems for quartz originating from fault gouge.
LEA Detection and Tracking Method for Color-Independent Visual-MIMO.
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-07-02
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
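A minimal OpenCV sketch of the detect-then-track stages described above (the color-space ROI masking and perspective correction are omitted, and the detector and filter constants are illustrative):

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state (x, y, vx, vy); measurement (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

def track_lea(gray_roi):
    """Predict the LEA position, then correct it with Harris-corner evidence."""
    pred = kf.predict()  # most probable location in this frame
    response = cv2.cornerHarris(np.float32(gray_roi), 2, 3, 0.04)
    ys, xs = np.nonzero(response > 0.01 * response.max())
    if xs.size:  # correct with the centroid of the detected corners
        kf.correct(np.array([[xs.mean()], [ys.mean()]], dtype=np.float32))
    return float(pred[0, 0]), float(pred[1, 0])
```

The prediction step keeps the track alive through frames where detection fails, which is what makes the tracking robust to relative motion between the camera and the LEA.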
Multimodal Neuroimaging: Basic Concepts and Classification of Neuropsychiatric Diseases.
Tulay, Emine Elif; Metin, Barış; Tarhan, Nevzat; Arıkan, Mehmet Kemal
2018-06-01
Neuroimaging techniques are widely used in neuroscience to visualize neural activity, to improve our understanding of brain mechanisms, and to identify biomarkers, especially for psychiatric diseases; however, each neuroimaging technique has several limitations. These limitations led to the development of multimodal neuroimaging (MN), which combines data obtained from multiple neuroimaging techniques, such as electroencephalography and functional magnetic resonance imaging, and yields more detailed information about brain dynamics. There are several types of MN, including visual inspection, data integration, and data fusion. This literature review aimed to provide a brief summary and basic information about MN techniques (data fusion approaches in particular) and classification approaches. Data fusion approaches are generally categorized as asymmetric and symmetric. The present review focused exclusively on studies based on symmetric data fusion methods (data-driven methods), such as independent component analysis and principal component analysis. Machine learning techniques have recently been introduced for use in identifying diseases and biomarkers of disease. The machine learning technique most widely used by neuroscientists is classification, especially support vector machine (SVM) classification. Several studies differentiated between patients with psychiatric diseases and healthy controls using combined datasets. The common conclusion among these studies is that the prediction of diseases improves when combining data via MN techniques; however, there remain a few challenges associated with MN, such as sample size. Perhaps in the future N-way fusion can be used to combine multiple neuroimaging techniques or nonimaging predictors (eg, cognitive ability) to overcome the limitations of MN.
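A minimal sketch of the fuse-then-classify pattern described above, using synthetic stand-ins for EEG and fMRI feature matrices and simple feature concatenation as the fusion step; real studies would substitute extracted features and a proper symmetric fusion method such as joint ICA:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
eeg = rng.normal(size=(60, 200))   # synthetic EEG features, 60 subjects
fmri = rng.normal(size=(60, 500))  # synthetic fMRI features
labels = rng.integers(0, 2, 60)    # patient vs. healthy control

fused = np.hstack([eeg, fmri])     # naive fusion by concatenation
clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
print(cross_val_score(clf, fused, labels, cv=5).mean())  # ~chance on noise
```

Cross-validated accuracy is the usual figure of merit; with real multimodal features, the comparison of interest is fused versus single-modality input.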
Progress in The Semantic Analysis of Scientific Code
NASA Technical Reports Server (NTRS)
Stewart, Mark
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
Quasi-regenerative mode locking in a compact all-polarisation-maintaining-fibre laser
NASA Astrophysics Data System (ADS)
Nyushkov, B. N.; Ivanenko, A. V.; Kobtsev, S. M.; Pivtsov, V. S.; Farnosov, S. A.; Pokasov, P. V.; Korel, I. I.
2017-12-01
A novel technique of mode locking in an erbium-doped all-polarisation-maintaining-fibre laser has been developed and preliminarily investigated. The proposed quasi-regenerative technique combines the advantages of conventional active mode locking (where an intracavity modulator is driven by an independent RF oscillator) and regenerative mode locking (where the modulator is driven by an intermode beat signal from the laser itself). The scheme is based on intracavity intensity modulation driven by an RF oscillator phase-locked to the actual intermode frequency of the laser. It also offers the possibility of operation at multiple frequencies and of harmonic mode locking.
Spin-analyzed SANS for soft matter applications
NASA Astrophysics Data System (ADS)
Chen, W. C.; Barker, J. G.; Jones, R.; Krycka, K. L.; Watson, S. M.; Gagnon, C.; Perevozchivoka, T.; Butler, P.; Gentile, T. R.
2017-06-01
The nearly Q-independent nuclear spin-incoherent scattering from hydrogen, present in most soft matter and biological samples, can complicate structure determination by small angle neutron scattering (SANS) in certain soft matter applications. This is true at high wave vector transfer Q, where coherent scattering is much weaker than the nearly Q-independent spin-incoherent scattering background. Polarization analysis is capable of separating coherent scattering from spin-incoherent scattering, hence potentially removing the nearly Q-independent background. Here we demonstrate SANS polarization analysis in conjunction with the time-of-flight technique for separation of coherent and nuclear spin-incoherent scattering for a sample of silver behenate back-filled with light water. We describe a complete procedure for SANS polarization analysis for separating coherent from incoherent scattering for soft matter samples that show inelastic scattering. Polarization efficiency correction and subsequent separation of the coherent and incoherent scattering have been done with and without a time-of-flight technique for direct comparisons. In addition, we have accounted for the effect of multiple scattering from light water to determine the contribution of nuclear spin-incoherent scattering in both the spin flip channel and the non-spin flip channel when performing SANS polarization analysis. We discuss the possible gain in the signal-to-noise ratio for the measured coherent scattering signal using polarization analysis with the time-of-flight technique compared with routine unpolarized SANS measurements.
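In the idealized limit of perfect polarization efficiency and no multiple scattering, the separation rests on the textbook fact that nuclear spin-incoherent scattering flips the neutron spin with probability 2/3; a minimal sketch (the real analysis adds the efficiency and multiple-scattering corrections described above):

```python
import numpy as np

def separate(i_nsf, i_sf):
    """Separate coherent and spin-incoherent SANS intensities.

    Ideal polarization analysis: spin-incoherent scattering appears 2/3
    in the spin-flip (SF) channel and 1/3 in the non-spin-flip (NSF)
    channel, while coherent scattering is entirely NSF, so
        I_inc = (3/2) * I_SF
        I_coh = I_NSF - (1/2) * I_SF
    """
    i_sf = np.asarray(i_sf, dtype=float)
    incoherent = 1.5 * i_sf
    coherent = np.asarray(i_nsf, dtype=float) - 0.5 * i_sf
    return coherent, incoherent
```

Applied per Q-bin, this removes the flat incoherent background that otherwise swamps the weak coherent signal at high Q.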
Anandhakumar, Jayamani; Moustafa, Yara W; Chowdhary, Surabhi; Kainth, Amoldeep S; Gross, David S
2016-07-15
Mediator is an evolutionarily conserved coactivator complex essential for RNA polymerase II transcription. Although it has been generally assumed that in Saccharomyces cerevisiae, Mediator is a stable trimodular complex, its structural state in vivo remains unclear. Using the "anchor away" (AA) technique to conditionally deplete select subunits within Mediator and its reversibly associated Cdk8 kinase module (CKM), we provide evidence that Mediator's tail module is highly dynamic and that a subcomplex consisting of Med2, Med3, and Med15 can be independently recruited to the regulatory regions of heat shock factor 1 (Hsf1)-activated genes. Fluorescence microscopy of a scaffold subunit (Med14)-anchored strain confirmed parallel cytoplasmic sequestration of core subunits located outside the tail triad. In addition, and contrary to current models, we provide evidence that Hsf1 can recruit the CKM independently of core Mediator and that core Mediator has a role in regulating postinitiation events. Collectively, our results suggest that yeast Mediator is not monolithic but potentially has a dynamic complexity heretofore unappreciated. Multiple species, including CKM-Mediator, the 21-subunit core complex, the Med2-Med3-Med15 tail triad, and the four-subunit CKM, can be independently recruited by activated Hsf1 to its target genes in AA strains.
Multiple-Beam Detection of Fast Transient Radio Sources
NASA Technical Reports Server (NTRS)
Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.
2011-01-01
A method has been designed for using multiple independent stations to discriminate fast transient radio sources from local anomalies, such as antenna noise or radio frequency interference (RFI). This can improve the sensitivity of incoherent detection for geographically separated stations such as the Very Long Baseline Array (VLBA), the future Square Kilometre Array (SKA), or any other coincident observations by multiple separated receivers. The transients are short, broadband pulses of radio energy, often just a few milliseconds long, emitted by a variety of exotic astronomical phenomena. They generally represent rare, high-energy events, making them of great scientific value. For RFI-robust adaptive detection of transients using multiple stations, a family of algorithms has been developed. The technique exploits the fact that the separated stations constitute statistically independent samples of the target. This can be used to adaptively ignore RFI events for superior sensitivity. If the antenna signals are independent and identically distributed (IID), then RFI events are simply outlier data points that can be removed through robust estimation such as a trimmed or Winsorized estimator. One alternative, the "trimmed" estimator, excises the strongest n signals from the list of short-beamed intensities. Because local RFI is independent at each antenna, this interference is unlikely to occur at many antennas in the same time step. Trimming the strongest signals provides robustness to RFI that can theoretically outperform even the detection performance of the same number of antennas at a single site. This algorithm requires sorting the signals at each time step and dispersion measure, an operation that is computationally tractable for existing array sizes. An alternative uses the various stations to form an ensemble estimate of the conditional density function (CDF) evaluated at each time step. Both methods outperform standard detection strategies on a test sequence of VLBA data, and both are efficient enough for deployment in real-time, online transient detection applications.
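A minimal numpy sketch of the trimmed multi-station detector just described (the trim count and threshold are illustrative parameters):

```python
import numpy as np

def trimmed_detect(power, n_trim=2, threshold=50.0):
    """Incoherent multi-station transient detection robust to local RFI.

    power: array of shape (n_stations, n_steps), one de-dispersed power
    series per station. At each time step the n_trim strongest station
    values are excised before summing, so RFI that is independent at
    each antenna cannot trigger a detection on its own.
    """
    ranked = np.sort(power, axis=0)              # sort stations per step
    kept = ranked[:-n_trim] if n_trim else ranked
    stat = kept.sum(axis=0)                      # trimmed incoherent sum
    return np.flatnonzero(stat > threshold)      # candidate event steps
```

A genuine astronomical pulse appears at all stations and survives the trim, while an RFI spike confined to one or two antennas is discarded before the sum.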
Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K
2018-03-21
Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations in all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically rid localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated through five datasets of in vitro and fixed cell samples.
Systematic procedure for designing processes with multiple environmental objectives.
Kim, Ki-Joo; Smith, Raymond L
2005-04-01
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
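The Pareto-optimal designs returned by such a multiobjective genetic algorithm are exactly the non-dominated points; a small illustrative helper (not the paper's parallel GA itself) makes the notion concrete:

```python
import numpy as np

def pareto_mask(costs):
    """Boolean mask of non-dominated designs, all objectives minimized.

    costs: array of shape (n_designs, n_objectives). Profit-type
    objectives should be negated first. Design j is dominated when some
    design i is no worse in every objective and strictly better in one.
    """
    n = len(costs)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            worse = (np.all(costs >= costs[i], axis=1)
                     & np.any(costs > costs[i], axis=1))
            keep &= ~worse
    return keep

# Example: two objectives (impact, -profit) for five candidate designs.
costs = np.array([[3.0, -5.0], [2.0, -4.0], [4.0, -6.0],
                  [2.0, -6.0], [5.0, -3.0]])
print(pareto_mask(costs))  # -> [False False False  True False]
```

Like the procedure described above, this check is independent of the number of objectives and design parameters; only the cost array's shape changes.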
Taking Halo-Independent Dark Matter Methods Out of the Bin
Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew
2014-10-30
We develop a new halo-independent strategy for analyzing emerging DM hints, utilizing the method of extended maximum likelihood. This approach does not require the binning of events, making it uniquely suited to the analysis of emerging DM direct detection hints. It determines a preferred envelope, at a given confidence level, for the DM velocity integral which best fits the data using all available information and can be used even in the case of a single anomalous scattering event. All of the halo-independent information from a direct detection result may then be presented in a single plot, allowing simple comparisons between multiple experiments. This results in the halo-independent analogue of the usual mass and cross-section plots found in typical direct detection analyses, where limit curves may be compared with best-fit regions in halo-space. The method is straightforward to implement, using already-established techniques, and its utility is demonstrated through the first unbinned halo-independent comparison of the three anomalous events observed in the CDMS-Si detector with recent limits from the LUX experiment.
The Characterization of Biosignatures in Caves Using an Instrument Suite
NASA Astrophysics Data System (ADS)
Uckert, Kyle; Chanover, Nancy J.; Getty, Stephanie; Voelz, David G.; Brinckerhoff, William B.; McMillan, Nancy; Xiao, Xifeng; Boston, Penelope J.; Li, Xiang; McAdam, Amy; Glenar, David A.; Chavez, Arriana
2017-12-01
The search for life and habitable environments on other Solar System bodies is a major motivator for planetary exploration. Due to the difficulty and significance of detecting extant or extinct extraterrestrial life in situ, several independent measurements from multiple instrument techniques will bolster the community's confidence in making any such claim. We demonstrate the detection of subsurface biosignatures using a suite of instrument techniques including IR reflectance spectroscopy, laser-induced breakdown spectroscopy, and scanning electron microscopy/energy dispersive X-ray spectroscopy. We focus our measurements on subterranean calcium carbonate field samples, whose biosignatures are analogous to those that might be expected on some high-interest astrobiology targets. In this work, we discuss the feasibility and advantages of using each of the aforementioned instrument techniques for the in situ search for biosignatures and present results on the autonomous characterization of biosignatures using multivariate statistical analysis techniques.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
Optimization is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimization in extraction, namely, Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible range of independent variables, Monte Carlo simulation, and threshold criteria of response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective and depend on the target of the investigation; and provide a range of values with their distribution for the optimization.
Nguyen, Quynh C.; Osypuk, Theresa L.; Schmidt, Nicole M.; Glymour, M. Maria; Tchetgen Tchetgen, Eric J.
2015-01-01
Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship between the exposure (treatment) and multiple mediators, conditional on covariates, by regressing exposure on mediators and covariates. The inverse of the covariate-adjusted exposure-mediator odds ratio association is used to weight the primary analytical regression of the outcome on treatment. The treatment coefficient in such a weighted regression estimates the natural direct effect of treatment on the outcome, and indirect effects are identified by subtracting direct effects from total effects. Weighting renders treatment and mediators independent, thereby deactivating indirect pathways of the mediators. This new mediation technique accommodates multiple discrete or continuous mediators. IORW is easily implemented and is appropriate for any standard regression model, including quantile regression and survival analysis. An empirical example is given using data from the Moving to Opportunity (1994–2002) experiment, testing whether neighborhood context mediated the effects of a housing voucher program on obesity. Relevant Stata code (StataCorp LP, College Station, Texas) is provided.
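A schematic Python translation of the procedure, under simplifying assumptions (binary exposure, a single weighted linear outcome model, illustrative file and column names; the paper's own Stata code is the authoritative reference):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mto_analysis.csv")  # hypothetical analysis file

# 1. Regress exposure on mediators and covariates (column names invented).
exp_fit = smf.logit("voucher ~ nbhd_poverty + safety + age + sex", df).fit()
p = exp_fit.predict(df)  # P(exposed | mediators, covariates)

# 2. Exposed units are weighted by the inverse of their predicted
#    exposure odds; unexposed units get weight 1. The weighting renders
#    treatment and mediators independent, deactivating indirect paths.
df["w"] = np.where(df["voucher"] == 1, (1 - p) / p, 1.0)

# 3. Total effect from the unweighted model; natural direct effect from
#    the weighted model; indirect effect by subtraction.
total = smf.ols("bmi ~ voucher + age + sex", df).fit()
direct = smf.wls("bmi ~ voucher + age + sex", df, weights=df["w"]).fit()
print("indirect effect:", total.params["voucher"] - direct.params["voucher"])
```

In practice, standard errors for the subtraction-based indirect effect would come from bootstrapping, since the weights are themselves estimated.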
NASA Astrophysics Data System (ADS)
Cao, L.; Cheng, Q.
2004-12-01
The scale invariant generator technique (SIG) and the spectrum-area analysis technique (S-A) were developed independently in the context of generalized scale invariance (GSI). The former was developed for characterizing the parameters involved in GSI for characterizing and simulating multifractal measures, whereas the latter was for identifying scaling breaks for decomposition of superimposed multifractal measures caused by multiple geophysical processes. A natural integration of these two techniques may yield a new technique to serve two purposes: on the one hand, it can enrich the power of S-A by increasing the interpretability of decomposed patterns in some applications of S-A; on the other hand, it can provide a means to test the uniqueness of multifractality of measures, which is essential for application of the SIG technique in more complicated environments. The proposed technique has been implemented as a Dynamic Link Library (DLL) in Visual C++. The program can be readily used for method validation and application in different fields.
Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna
2014-03-01
This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM) that trades space for speed of processing to create an intragroup communication approach that is firing rate independent and offers more flexibility in connectivity than cross-bar architectures and 2) a wired multiple input multiple output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable a robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating large-scale spiking neural network architecture. Analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. Through combining STM with MIMO-OFDM techniques, the resulting system offers a flexible and scalable connectivity as well as a power and area efficient solution for the implementation of very large-scale spiking neural architectures in hardware.
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field programmable gate array (FPGA) device in a modular approach, which simplifies and eases hardware updates and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations, which indicate a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation.
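As a generic illustration of sidestepping the explicit matrix inverse in LS channel estimation (not necessarily the authors' exact factorization), the normal-equations form H = Y X^H (X X^H)^{-1} can be replaced by a stable least-squares solve:

```python
import numpy as np

def ls_channel_estimate(x_pilot, y):
    """Estimate the MIMO channel H from known pilots without inversion.

    Model: y = H @ x_pilot + noise, with x_pilot of shape (n_tx, n_sym)
    known at the receiver. Rather than forming and inverting
    x_pilot @ x_pilot^H, solve the equivalent least-squares problem
    x_pilot^T H^T = y^T with a numerically stable solver.
    """
    h_t, *_ = np.linalg.lstsq(x_pilot.T, y.T, rcond=None)
    return h_t.T
```

In hardware, an analogous factorization (such as QR) likewise removes the explicit inversion step, which is the motivation the abstract cites for reducing the full rank matrix to a simpler form.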
NASA Technical Reports Server (NTRS)
Buckey, J. C.; Beattie, J. M.; Gaffney, F. A.; Nixon, J. V.; Blomqvist, C. G.
1984-01-01
Accurate, reproducible, and non-invasive means for ventricular volume determination are needed for evaluating cardiovascular function in zero gravity. Current echocardiographic methods, particularly for the right ventricle, suffer from a large standard error. A new mathematical approach, recently described by Watanabe et al., was tested on normal formalin-fixed human hearts suspended in a mineral oil bath. Volumes are estimated from multiple two-dimensional echocardiographic views recorded from a single point at sequential angles. The product of sectional cavity area and center of mass for each view, summed over the range of angles (using a trapezoidal rule), gives volume. Multiple (8-14) short axis right ventricle and left ventricle views at 5.0 deg intervals were videotaped. The images were digitized by two independent observers (leading-edge to leading-edge technique) and analyzed using a graphics tablet and microcomputer. Actual volumes were determined by filling the chambers with water. These data were compared to the mean of the two echo measurements.
Leakey, Tatiana I; Zielinski, Jerzy; Siegfried, Rachel N; Siegel, Eric R; Fan, Chun-Yang; Cooney, Craig A
2008-06-01
DNA methylation at cytosines is a widely studied epigenetic modification. Methylation is commonly detected using bisulfite modification of DNA followed by PCR and additional techniques such as restriction digestion or sequencing. These additional techniques are either laborious, require specialized equipment, or are not quantitative. Here we describe a simple algorithm that yields quantitative results from analysis of conventional four-dye-trace sequencing. We call this method Mquant and we compare it with the established laboratory method of combined bisulfite restriction assay (COBRA). This analysis of sequencing electropherograms provides a simple, easily applied method to quantify DNA methylation at specific CpG sites.
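The quantification principle behind such peak-based methods can be sketched as follows, assuming the C and T peak heights at a CpG position have already been read from the four-dye electropherogram (after bisulfite conversion, unmethylated cytosines sequence as T while methylated cytosines remain C); this is a generic sketch of peak-ratio quantification rather than the published Mquant algorithm itself:

```python
def methylation_percent(c_peak, t_peak):
    """Per-CpG methylation estimate from sequencing peak heights.

    At a CpG site in bisulfite-converted DNA, the C/(C + T) peak-height
    ratio estimates the fraction of methylated molecules, since only
    methylated cytosines are protected from conversion.
    """
    return 100.0 * c_peak / (c_peak + t_peak)

print(methylation_percent(420.0, 180.0))  # -> 70.0 (% methylated)
```

The paper validates its quantification against COBRA, which provides an independent restriction-based estimate at the same sites.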
Model assessment using a multi-metric ranking technique
NASA Astrophysics Data System (ADS)
Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.
2017-12-01
Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and the identification of adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, and multiplicative gross error. Other metrics, such as root mean squared difference and rank correlation, were also explored but removed when the information was discovered to be generally duplicative of other metrics. While equal weights are applied here, weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context and will be briefly reported.
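A minimal pandas sketch of such a weighted rank tally (metric names, directions, and weights are illustrative):

```python
import pandas as pd

def rank_tally(scores, higher_is_better, weights=None):
    """Consolidate several validation metrics into one model ranking.

    scores:  DataFrame indexed by model, one column per metric.
    higher_is_better: dict mapping metric name -> bool.
    weights: optional dict of metric weights (equal weights by default).
    Each metric is converted to a rank (1 = best), weighted, and summed;
    the smallest tally identifies the most skilful model overall.
    """
    ranks = pd.DataFrame(index=scores.index)
    for m in scores.columns:
        ranks[m] = scores[m].rank(ascending=not higher_is_better[m])
    w = pd.Series(weights if weights is not None else 1.0,
                  index=scores.columns)
    return (ranks * w).sum(axis=1).sort_values()

scores = pd.DataFrame({"abs_error": [1.2, 0.9, 1.0],
                       "pearson_r": [0.71, 0.78, 0.69]},
                      index=["modelA", "modelB", "modelC"])
print(rank_tally(scores, {"abs_error": False, "pearson_r": True}))
```

Equal weighting reproduces the default described above, and the same tally accommodates any number of metrics, which keeps the scheme simple for management decisions.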
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, in order to have both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
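The "beamformer first, then SVD + ICA" pipeline can be sketched as follows, assuming the beamformer weight matrix has already been computed from the data covariance and lead fields (which is where the proposed weight-normalized LCMV variant would enter):

```python
import numpy as np
from sklearn.decomposition import FastICA

def source_space_ica(sensor_data, weights, n_components=10):
    """Schematic source-space ICA for MEG.

    sensor_data: (n_sensors, n_samples) recording.
    weights:     (n_voxels, n_sensors) precomputed beamformer weights.
    """
    src = weights @ sensor_data                  # voxel time series
    _, _, vt = np.linalg.svd(src, full_matrices=False)
    reduced = vt[:n_components]                  # dominant temporal modes
    ica = FastICA(n_components=n_components, random_state=0)
    courses = ica.fit_transform(reduced.T).T     # independent time courses
    maps = src @ np.linalg.pinv(courses)         # spatial map per component
    return courses, maps
```

The SVD keeps the problem tractable after projecting into the much larger source space, and the ICA unmixing then separates concurrent sources that a single beamformer scan would blur together.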
A guide to onboard checkout. Volume 7: RF communications
NASA Technical Reports Server (NTRS)
1971-01-01
The radio frequency communications subsystem for a space station is considered, with respect to onboard checkout requirements. The subsystem comprises all equipment necessary for transmitting and receiving, tracking and ranging, command, multiple voice and television information, and broadband experiment data. The communications subsystem provides a radio frequency interface between the space station and ground stations, either directly or indirectly, through a data relay satellite system, independent free-flying experiment modules, and logistics vehicles. Reliability, maintenance, and failure analyses are discussed, and computer programming techniques are presented.
Jarvi, S.I.; Farias, M.E.; Lapointe, D.A.; Belcaid, M.; Atkinson, C.T.
2013-01-01
Next-generation 454 sequencing techniques were used to re-examine diversity of mitochondrial cytochrome b lineages of avian malaria (Plasmodium relictum) in Hawaii. We document a minimum of 23 variant lineages of the parasite based on single nucleotide transitional changes, in addition to the previously reported single lineage (GRW4). A new, publicly available portal (Integroomer) was developed for initial parsing of 454 datasets. Mean variant prevalence and frequency was higher in low elevation Hawaii Amakihi (Hemignathus virens) with Avipoxvirus-like lesions (P = 0·001), suggesting that the variants may be biologically distinct. By contrast, variant prevalence and frequency did not differ significantly among mid-elevation Apapane (Himatione sanguinea) with or without lesions (P = 0·691). The low frequency and the lack of detection of variants independent of GRW4 suggest that multiple independent introductions of P. relictum to Hawaii are unlikely. Multiple variants may have been introduced in heteroplasmy with GRW4 or exist within the tandem repeat structure of the mitochondrial genome. The discovery of multiple mitochondrial lineages of P. relictum in Hawaii provides a measure of genetic diversity within a geographically isolated population of this parasite and suggests the origins and evolution of parasite diversity may be more complicated than previously recognized.
A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays
Hughes, Alec; Hynynen, Kullervo
2016-01-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.
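Under the usual linear forward-operator assumption, a generic Tikhonov-regularized solve for the element drive signals looks like the following sketch (illustrative, not the paper's exact scheme):

```python
import numpy as np

def tikhonov_drive(A, b, lam):
    """Tikhonov-regularized drive signals for a phased array.

    A:   (n_points, n_elements) complex operator mapping element drives
         to pressure at control points (assumed precomputed).
    b:   desired complex pressures at the control points, encoding the
         rotated focus or multiple foci.
    lam: regularization parameter; larger values favour efficient,
         low-amplitude drives at the cost of focal fidelity.

    Solves p = (A^H A + lam * I)^{-1} A^H b.
    """
    n = A.shape[1]
    lhs = A.conj().T @ A + lam * np.eye(n)
    return np.linalg.solve(lhs, A.conj().T @ b)
```

Sweeping lam traces out exactly the focusing-quality-versus-efficiency trade-off that the abstract identifies as the role of the regularization parameter.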
Resolvent-Techniques for Multiple Exercise Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no
2015-02-15
We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.
Implementation of collisions on GPU architecture in the Vorpal code
NASA Astrophysics Data System (ADS)
Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John
2017-10-01
The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
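
The null collision method lends itself to a compact sketch because each particle is processed independently, which is exactly what makes the algorithm single-instruction multiple-data friendly. The following Python sketch is illustrative only (it is not Vorpal code, and the collision-frequency model nu_real is invented): candidate collision times are drawn at a constant majorant rate, and each candidate is accepted as a real collision with probability nu_real/NU_MAX.

```python
import numpy as np

rng = np.random.default_rng(1)

def nu_real(v):
    """Hypothetical velocity-dependent collision frequency (1/s)."""
    return 1.0e6 / (1.0 + v / 1.0e5)

NU_MAX = 1.0e6          # majorant rate, >= nu_real(v) for all v
T_END = 1.0e-4          # simulated time span (s)
speeds = rng.uniform(1e4, 1e6, size=1_000)   # particle speeds (m/s)

# Each particle's collision times are sampled independently with the
# constant majorant rate; a candidate is a "real" collision with
# probability nu_real/NU_MAX, otherwise it is a null collision.
n_real = 0
for v in speeds:
    t = 0.0
    while True:
        t += rng.exponential(1.0 / NU_MAX)
        if t > T_END:
            break
        if rng.random() < nu_real(v) / NU_MAX:
            n_real += 1   # a production code would apply the scattering kernel here
print("real collisions:", n_real)
```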
Low, Diana H P; Motakis, Efthymios
2013-10-01
Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task by moving averages at certain decorrelation times, assuming that the system comes from a single conformation description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelets denoising and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package that performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into the molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.
Data registration for automated non-destructive inspection with multiple data sets
NASA Astrophysics Data System (ADS)
Tippetts, T.; Brierley, N.; Cawley, P.
2013-01-01
In many NDE applications, multiple sources of data are available covering the same region of a part under inspection. These overlapping data can come from intersecting scan patterns, sensors in an array configuration, or repeated inspections at different times. In many cases these data sets are analysed independently, with separate assessments for each channel or data file. It should be possible to improve the overall reliability of the inspection by combining multiple sources of information, simultaneously increasing the Probability of Detection (POD) and decreasing the Probability of False Alarm (PFA). Data registration, i.e. mapping the data to matching coordinates in space, is both an essential prerequisite and a challenging obstacle to this type of data fusion. This paper describes optimization techniques for matching and aligning features in NDE data. Examples from automated ultrasound inspection of aircraft engine discs illustrate the approach.
NASA Astrophysics Data System (ADS)
Lucifredi, A.; Mazzieri, C.; Rossi, M.
2000-05-01
Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by changes in operating conditions and those due to the onset and progression of failures and misoperations. The paper aims to identify the best technique to adopt for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, linear multiple regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified multiple linear regression that represents the monitored variable as a linear combination of the process variables chosen so as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by some problems common to the other two models: the requirement of a large amount of data for tuning, both for training the neural network and for defining the optimum plane for the multiple regression, not only in the system start-up phase but also after a trivial maintenance operation involving the substitution of machinery components with a direct impact on the observed variable; and the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data to be tuned and is immediately operational (given two points, the third can be estimated at once), and the model follows the system without adapting itself to it. The results of the experimentation performed indicate that a model based on a neural network or on linear multiple regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and elaborating, where appropriate, the available raw information. A mixed approach combining the kriging statistical technique with neural network techniques could optimise the result.
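
The kriging estimator the paper favors can be summarized as a minimum-variance linear combination of past observations. The sketch below, with an invented covariance model and synthetic process data, shows the basic mechanics: weights are obtained by solving the covariance system, and the residual between observed and expected values is what a monitoring system would track.

```python
import numpy as np

rng = np.random.default_rng(2)

# Historical records: process parameters X (e.g., load, head) and the
# monitored variable y (e.g., bearing vibration). All names are hypothetical.
X = rng.uniform(0, 1, size=(50, 2))
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.05 * rng.standard_normal(50)

def cov(a, b, length=0.3):
    """Stationary covariance model between process states (an assumption)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return np.exp(-(d / length) ** 2)

def kriging_estimate(X, y, x_new):
    """Minimum-error-variance linear combination of past observations."""
    C = cov(X, X) + 1e-6 * np.eye(len(X))   # jitter for numerical stability
    c = cov(X, x_new[None, :])[:, 0]
    w = np.linalg.solve(C, c)               # weights minimizing estimate variance
    mu = y.mean()
    return mu + w @ (y - mu)

x_now = np.array([0.4, 0.7])
expected = kriging_estimate(X, y, x_now)
observed = 2.9
print(f"expected={expected:.3f}, observed={observed:.3f}, "
      f"residual={observed - expected:.3f}")   # large residual -> possible fault
```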
Studying fission neutrons with 2E-2v and 2E
NASA Astrophysics Data System (ADS)
Al-Adili, Ali; Jansson, Kaj; Tarrío, Diego; Hambsch, Franz-Josef; Göök, Alf; Oberstedt, Stephan; Olivier Frégeau, Marc; Gustavsson, Cecilia; Lantz, Mattias; Mattera, Andrea; Prokofiev, Alexander V.; Rakopoulos, Vasileios; Solders, Andreas; Vidali, Marzio; Österlund, Michael; Pomp, Stephan
2018-03-01
This work aims at measuring prompt-fission neutrons at different excitation energies of the nucleus. Two independent techniques, the 2E-2v and the 2E techniques, are used to map the characteristics of the mass-dependent prompt fission neutron multiplicity,
Knowledge-based system for detailed blade design of turbines
NASA Astrophysics Data System (ADS)
Goel, Sanjay; Lamson, Scott
1994-03-01
A design optimization methodology that couples optimization techniques to CFD analysis for design of airfoils is presented. This technique optimizes 2D airfoil sections of a blade by minimizing the deviation of the actual Mach number distribution on the blade surface from a smooth fit of the distribution. The airfoil is not reverse engineered by specification of a precise distribution of the desired Mach number plot, only general desired characteristics of the distribution are specified for the design. Since the Mach number distribution is very complex, and cannot be conveniently represented by a single polynomial, it is partitioned into segments, each of which is characterized by a different order polynomial. The sum of the deviation of all the segments is minimized during optimization. To make intelligent changes to the airfoil geometry, it needs to be associated with features observed in the Mach number distribution. Associating the geometry parameters with independent features of the distribution is a fairly complex task. Also, for different optimization techniques to work efficiently the airfoil geometry needs to be parameterized into independent parameters, with enough degrees of freedom for adequate geometry manipulation. A high-pressure, low reaction steam turbine blade section was optimized using this methodology. The Mach number distribution was partitioned into pressure and suction surfaces and the suction surface distribution was further subdivided into leading edge, mid section and trailing edge sections. Two different airfoil representation schemes were used for defining the design variables of the optimization problem. The optimization was performed by using a combination of heuristic search and numerical optimization. The optimization results for the two schemes are discussed in the paper. The results are also compared to a manual design improvement study conducted independently by an experienced airfoil designer. The turbine blade optimization system (TBOS) is developed using the described methodology of coupling knowledge engineering with multiple search techniques for blade shape optimization. TBOS removes a major bottleneck in the design cycle by performing multiple design optimizations in parallel, and improves design quality at the same time. TBOS not only improves the design but also the designers' quality of work by taking the mundane repetitive task of design iterations away and leaving them more time for innovative design.
Zhao, B.; Wang, S. X.; Xing, J.; ...
2015-01-30
An innovative extended response surface modeling technique (ERSM v1.0) is developed to characterize the nonlinear response of fine particles (PM2.5) to large and simultaneous changes of multiple precursor emissions from multiple regions and sectors. The ERSM technique is developed based on the conventional response surface modeling (RSM) technique; it first quantifies the relationship between PM2.5 concentrations and the emissions of gaseous precursors from each single region using the conventional RSM technique, and then assesses the effects of inter-regional transport of PM2.5 and its gaseous precursors on PM2.5 concentrations in the target region. We apply this novel technique with a widely used regional chemical transport model (CTM) over the Yangtze River delta (YRD) region of China, and evaluate the response of PM2.5 and its inorganic components to the emissions of 36 pollutant–region–sector combinations. The predicted PM2.5 concentrations agree well with independent CTM simulations; the correlation coefficients are larger than 0.98 and 0.99, and the mean normalized errors (MNEs) are less than 1 and 2% for January and August, respectively. It is also demonstrated that the ERSM technique could reproduce fairly well the response of PM2.5 to continuous changes of precursor emission levels between zero and 150%. Employing this new technique, we identify the major sources contributing to PM2.5 and its inorganic components in the YRD region. The nonlinearity in the response of PM2.5 to emission changes is characterized and the underlying chemical processes are illustrated.
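
The core RSM step amounts to fitting a smooth surrogate of the CTM response to emission scaling factors. Here is a minimal sketch, with entirely fabricated training points and a simple quadratic-plus-interaction basis standing in for the paper's response surface machinery.

```python
import numpy as np

# Hypothetical training set: CTM-simulated PM2.5 (ug/m3) at a few
# combinations of NOx and SO2 emission ratios (1.0 = baseline emissions).
e_nox = np.array([0.0, 0.5, 1.0, 1.5, 0.5, 1.5, 1.0, 0.0, 1.5])
e_so2 = np.array([0.0, 0.5, 1.0, 1.5, 1.5, 0.5, 0.0, 1.0, 1.0])
pm25  = np.array([20., 38., 55., 70., 52., 58., 30., 44., 63.])

# Quadratic response surface with an interaction term, fitted by least
# squares -- a stand-in for the RSM step of ERSM.
A = np.column_stack([np.ones_like(e_nox), e_nox, e_so2,
                     e_nox**2, e_so2**2, e_nox * e_so2])
coef, *_ = np.linalg.lstsq(A, pm25, rcond=None)

def predict(nox, so2):
    """Evaluate the fitted surrogate at new emission ratios."""
    return np.dot(coef, [1.0, nox, so2, nox**2, so2**2, nox * so2])

print(f"PM2.5 at 30% NOx cut, 20% SO2 cut: {predict(0.7, 0.8):.1f} ug/m3")
```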
Integrated data visualisation: an approach to capture older adults’ wellness
Wilamowska, Katarzyna; Demiris, George; Thompson, Hilaire
2013-01-01
Informatics tools can help support the health and independence of older adults. In this paper, we present an approach towards integrating health-monitoring data and describe several techniques for the assessment and visualisation of integrated health and well-being of older adults. We present three different visualisation techniques to provide distinct alternatives towards display of the same information, focusing on reducing the cognitive load of data interpretation. We demonstrate the feasibility of integrating health-monitoring information into a comprehensive measure of wellness, while also highlighting the challenges of designing visual displays targeted at multiple user groups. These visual displays of wellness can be incorporated into personal health records and can be an effective support for informed decision-making.
NASA Astrophysics Data System (ADS)
Murshid, Syed H.; Muralikrishnan, Hari P.; Kozaitis, Samuel P.
2012-06-01
Bandwidth increase has always been an important area of research in communications. A novel multiplexing technique known as Spatial Domain Multiplexing (SDM) has been developed at the Optronics Laboratory of Florida Institute of Technology to increase the bandwidth to the Tbit/s range. In this technique, the space inside the fiber is used effectively to transmit up to four channels of the same wavelength at the same time. Experimental and theoretical analysis shows that these channels follow independent helical paths inside the fiber without interfering with each other. Multiple pigtail laser sources of exactly the same wavelength are used to launch light into a single carrier fiber in such a fashion that the resulting channels follow independent helical trajectories. These helically propagating light beams form optical vortices inside the fiber and carry their own Orbital Angular Momentum (OAM). The outputs of these beams appear as concentric donut-shaped rings when projected on a screen. This endeavor presents the experimental outputs and simulated results for a four-channel spatially multiplexed system, effectively increasing the system bandwidth by a factor of four.
Multiple independent identification decisions: a method of calibrating eyewitness identifications.
Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul
2004-02-01
Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. ((c) 2004 APA, all rights reserved)
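
Under the independence assumption, the diagnosticity of a set of identification decisions is the product of the per-lineup likelihoods under guilt divided by the product under innocence. A toy calculation with made-up selection probabilities (not data from the study):

```python
# Diagnosticity of combined identification decisions from independent
# lineups: P(selections | guilty) / P(selections | innocent).
p_pick_suspect_guilty = [0.60, 0.45, 0.35]    # face, body, voice lineups
p_pick_suspect_innocent = [1 / 6, 1 / 6, 1 / 6]  # chance for a 6-person lineup

likelihood_guilty = 1.0
likelihood_innocent = 1.0
for pg, pi in zip(p_pick_suspect_guilty, p_pick_suspect_innocent):
    likelihood_guilty *= pg       # witness picked the suspect in each lineup
    likelihood_innocent *= pi     # independence lets the probabilities multiply

print("diagnosticity ratio:", likelihood_guilty / likelihood_innocent)
```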
NASA Astrophysics Data System (ADS)
Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain
2009-09-01
In some recognition applications that require multiple images (facial identification or sign language), many images must be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). Several encryption and compression techniques can be found in the literature. However, when optical correlation is used, encryption and compression techniques cannot be deployed independently and in a cascaded manner, for two reasons: the techniques cannot simply be chained without considering the impact of one on the other, and a standard compression can affect the correlation decision, because correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach is to multiplex the spectra of different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys, and random phase keys; a random phase key is widely used in optical encryption approaches. Finally, many simulations have been conducted, and the results corroborate the good performance of our approach. The recording of the multiplexed and encrypted spectra is optimized using an adapted quantification technique to improve the overall compression rate.
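
The spectral multiplexing idea can be sketched with a discrete cosine transform: each image's energy-compacted DCT block is written into its own area of a shared spectral plane, and a key-dependent mask stands in for the encryption stage. This is a simplified illustration, not the authors' BPOF-based system; the block sizes and the sign-mask "key" are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)

# Two source images (stand-ins for face images), 64x64 each.
img1 = rng.random((64, 64))
img2 = rng.random((64, 64))

# DCT compacts each image's energy into low-frequency coefficients,
# so a small corner block retains most of the information.
keep = 32
plane = np.zeros((64, 64))
plane[:keep, :keep] = dctn(img1, norm="ortho")[:keep, :keep]   # area 1
plane[keep:, keep:] = dctn(img2, norm="ortho")[:keep, :keep]   # area 2

# Encryption stage (sketch): multiply the multiplexed plane by a
# random sign mask; the mask plays the role of the key.
key = rng.choice([-1.0, 1.0], size=plane.shape)
transmitted = plane * key

# Receiver: undo the mask, demultiplex, inverse-DCT each area.
recovered = transmitted * key
c1 = np.zeros((64, 64))
c1[:keep, :keep] = recovered[:keep, :keep]
rec1 = idctn(c1, norm="ortho")
print("image 1 reconstruction RMSE:", np.sqrt(np.mean((rec1 - img1) ** 2)))
```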
Martins-Costa, Marilia T C; Ruiz-López, Manuel F
2017-04-15
We report an enhanced sampling technique that allows to reach the multi-nanosecond timescale in quantum mechanics/molecular mechanics molecular dynamics simulations. The proposed technique, called horsetail sampling, is a specific type of multiple molecular dynamics approach exhibiting high parallel efficiency. It couples a main simulation with a large number of shorter trajectories launched on independent processors at periodic time intervals. The technique is applied to study hydrogen peroxide at the water liquid-vapor interface, a system of considerable atmospheric relevance. A total simulation time of a little more than 6 ns has been attained for a total CPU time of 5.1 years representing only about 20 days of wall-clock time. The discussion of the results highlights the strong influence of the solvation effects at the interface on the structure and the electronic properties of the solute. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions would simply be treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity to apply, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
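
Here is a minimal sketch of the LME approach using statsmodels, with synthetic data (the subject count, condition names, and effect sizes are all invented): the random intercept per subject absorbs the between-subject baseline differences that a pooled Pearson correlation across all rows would ignore.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Synthetic repeated-measures data: each subject contributes one
# behavioral score and one neural measure per listening condition.
n_subj, conditions = 20, ["quiet", "low_noise", "high_noise"]
rows = []
for s in range(n_subj):
    baseline = rng.normal(0, 1)          # between-subject baseline offset
    for c in conditions:
        neural = rng.normal(0, 1)
        speech = 0.5 * neural + baseline + rng.normal(0, 0.5)
        rows.append({"subject": s, "condition": c,
                     "neural": neural, "speech": speech})
df = pd.DataFrame(rows)

# Random intercept per subject accounts for the repeated measures.
model = smf.mixedlm("speech ~ neural + condition", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```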
Efficient use of bit planes in the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1988-01-01
The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo
2016-09-01
The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, by using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), an ICA-based MEMD method was proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing ICA of the EOG-related signals, the EOG-linked independent components were distinguished and rejected. Finally, the clean EEG signals were reconstructed by implementing the inverse transforms of ICA and MEMD. The results on simulated and real data suggested that the proposed method could successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. Compared with other existing techniques, the proposed method achieved much improvement in terms of the increase in signal-to-noise ratio and the decrease in mean square error after removing EOAs.
Modeling Quasi-Static and Fatigue-Driven Delamination Migration
NASA Technical Reports Server (NTRS)
De Carvalho, N. V.; Ratcliffe, J. G.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Tay, T. E.
2014-01-01
An approach was proposed and assessed for the high-fidelity modeling of progressive damage and failure in composite materials. It combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. Delamination, matrix cracking, and migration were captured using failure and migration criteria based on fracture mechanics. Quasi-static and fatigue loading were modeled within the same overall framework. The proposed methodology was illustrated by simulating the delamination migration test, showing good agreement with the available experimental data.
Efficient methods for joint estimation of multiple fundamental frequencies in music signals
NASA Astrophysics Data System (ADS)
Pertusa, Antonio; Iñesta, José M.
2012-12-01
This study presents efficient techniques for multiple fundamental frequency estimation in music signals. The proposed methodology can infer harmonic patterns from a mixture, considering interactions with other sources, and evaluate them in a joint estimation scheme. For this purpose, a set of fundamental frequency candidates is first selected at each frame, and several hypothetical combinations of them are generated. Combinations are independently evaluated, and the most likely one is selected taking into account the intensity and spectral smoothness of its inferred patterns. The method is extended by considering adjacent frames in order to smooth the detection in time, and a pitch-tracking stage is finally performed to increase the temporal coherence. The proposed algorithms were evaluated in MIREX contests, yielding state-of-the-art results with a very low computational burden.
A Novel Implementation of Massively Parallel Three Dimensional Monte Carlo Radiation Transport
NASA Astrophysics Data System (ADS)
Robinson, P. B.; Peterson, J. D. L.
2005-12-01
The goal of our summer project was to implement the difference formulation for radiation transport into Cosmos++, a multidimensional, massively parallel, magnetohydrodynamics code for astrophysical applications (Peter Anninos - AX). The difference formulation is a new method for Symbolic Implicit Monte Carlo thermal transport (Brooks and Szöke - PAT). Formerly, simultaneous implementation of fully implicit Monte Carlo radiation transport in multiple dimensions on multiple processors had not been convincingly demonstrated. We found that a combination of the difference formulation and the inherent structure of Cosmos++ makes such an implementation both accurate and straightforward. We developed a "nearly nearest neighbor physics" technique to allow each processor to work independently, even with a fully implicit code. This technique, coupled with the increased accuracy of an implicit Monte Carlo solution and the efficiency of parallel computing systems, allows us to demonstrate the possibility of massively parallel thermal transport. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
NASA Technical Reports Server (NTRS)
Kathong, Monchai; Tiwari, Surendra N.
1988-01-01
In the computation of flowfields about complex configurations, it is very difficult to construct a boundary-fitted coordinate system. An alternative approach is to use several grids at once, each of which is generated independently. This procedure is called the multiple grids or zonal grids approach; its applications are investigated. The method is conservative, providing conservation of fluxes at grid interfaces. The Euler equations are solved numerically on such grids for various configurations. The numerical scheme used is the finite-volume technique with a three-stage Runge-Kutta time integration. The code is vectorized and programmed to run on the CDC VPS-32 computer. Steady-state solutions of the Euler equations are presented and discussed. The solutions include: low-speed flow over a sphere, high-speed flow over a slender body, supersonic flow through a duct, and supersonic internal/external flow interaction for an aircraft configuration at various angles of attack. The results demonstrate that the multiple grids approach, along with the conservative interfacing, is capable of computing the flows about complex configurations where the use of a single grid system is not possible.
Liang, Zhiting; Guan, Yong; Liu, Gang; Chen, Xiangyu; Li, Fahu; Guo, Pengfei; Tian, Yangchao
2016-03-01
The 'missing wedge', which is due to a restricted rotation range, is a major challenge for quantitative analysis of an object using tomography. With prior knowledge of the grey levels, the discrete algebraic reconstruction technique (DART) is able to reconstruct objects accurately with projections in a limited angle range. However, the quality of the reconstructions declines as the number of grey levels increases. In this paper, a modified DART (MDART) was proposed, in which each independent region of homogeneous material was chosen as a research object, instead of the grey values. The grey values of each discrete region were estimated according to the solution of the linear projection equations. The iterative process of boundary pixels updating and correcting the grey values of each region was executed alternately. Simulation experiments of binary phantoms as well as multiple grey phantoms show that MDART is capable of achieving high-quality reconstructions with projections in a limited angle range. The interesting advancement of MDART is that neither prior knowledge of the grey values nor the number of grey levels is necessary.
Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.
Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J
2009-11-01
Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
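
The RBN recipe reduces to fitting a regression on a normative sample and converting a new examinee's observed score to a z-score against the predicted score. Here is a sketch with simulated normative data (the predictors are reduced to age, education, and estimated IQ for brevity; sex and race, which the study also uses, are omitted, and all coefficients are invented).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Synthetic normative sample: demographics, estimated IQ, and a test score.
n = 500
norm = pd.DataFrame({
    "age": rng.uniform(20, 80, n),
    "educ": rng.uniform(8, 20, n),
    "iq_est": rng.normal(100, 15, n),
})
norm["score"] = (60 - 0.2 * norm["age"] + 0.8 * norm["educ"]
                 + 0.3 * norm["iq_est"] + rng.normal(0, 5, n))

# Fit the regression-based norm on the healthy sample.
rbn = smf.ols("score ~ age + educ + iq_est", data=norm).fit()
resid_sd = np.sqrt(rbn.scale)   # residual SD benchmarks expected variability

# Apply to a new examinee: z = (observed - expected) / residual SD.
case = pd.DataFrame({"age": [68.0], "educ": [12.0], "iq_est": [108.0]})
expected = rbn.predict(case)[0]
observed = 55.0
print(f"expected={expected:.1f}, z={(observed - expected) / resid_sd:+.2f}")
```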
Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E
2017-09-01
Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Singe time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Improving wave forecasting by integrating ensemble modelling and machine learning
NASA Astrophysics Data System (ADS)
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
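
Learning-aggregation schemes of this kind assign each ensemble member a weight from its past skill. One simple variant, with purely illustrative numbers, weights members by their inverse mean squared error over a training window:

```python
import numpy as np

# Hindcast wave heights (m) from three ensemble members and the
# corresponding observations. Values are illustrative only.
forecasts = np.array([[1.2, 1.6, 2.1, 1.9],
                      [1.0, 1.4, 2.4, 2.0],
                      [1.5, 1.8, 2.0, 2.2]])
obs = np.array([1.3, 1.5, 2.2, 2.0])

# Weight each member by inverse mean squared error, then renormalize.
mse = ((forecasts - obs) ** 2).mean(axis=1)
w = (1.0 / mse) / (1.0 / mse).sum()

new_member_forecasts = np.array([2.4, 2.6, 2.3])   # next-step forecasts
print("weights:", np.round(w, 3))
print("aggregated forecast:", float(w @ new_member_forecasts))
```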
Smucker, Joseph D; Sasso, Rick C
2006-05-15
Independent computer-based literature review of articles pertaining to instrumentation and fusion of junctional injuries of the cervical spine. To review and discuss the evolution of instrumentation techniques and systems used in the treatment of cervical spine junctional injuries. Instrumentation of junctional injuries of the cervical spine has been limited historically by failure to achieve rigid internal fixation in multiple planes. The evolution of these techniques has required increased insight into the morphology and unique biomechanics of the structures to be instrumented. Computer-based literature search of Ovid and PubMed databases. Extensive literature search yielded insights into the evolution of systems initially based on onlay bone graft combined with wiring techniques. Such techniques have come to include systems incorporating rigid, longitudinal struts that accommodate multiplanar screws placed in the lateral masses, pedicles, transarticular regions, and occipital bone. Despite a rapid evolution of techniques and instrumentation technologies, it remains incumbent on the physician to provide the patient with a surgical procedure that balances the likelihood of a favorable outcome with the risk inherent in the implementation of the procedure.
NASA Astrophysics Data System (ADS)
Lane, Christine; Brauer, Achim; Ramsey Christopher, Bronk; Engels, Stefan; Haliuc, Aritina; Hoek, Wim; Hubay, Katalin; Jones, Gwydion; Sachse, Dirk; Staff, Richard; Turner, Falko; Wagner-Cremer, Frederike
2016-04-01
Exploring temporal and spatial variability of environmental response to climatic changes requires the comparison of widespread palaeoenvironmental sequences on their own, independently-derived, age models. High precision age-models can be constructed using statistical methods to combine absolute and relative age estimates measured using a range of techniques. Such an approach may help to highlight otherwise unrecognised uncertainties, where a single dating method has been applied in isolation. Radiocarbon dating, tephrochronology and varve counting have been combined within a Bayesian depositional model to build a chronology for a sediment sequence from Lake Haemelsee (Northern Germany) that continuously covers the entire Lateglacial and early Holocene. Each of the dating techniques used brought its own challenges. Radiocarbon dates provide the only absolute ages measured directly in the record, however a low macrofossil content led to small sample sizes and a limited number of low precision dates. A floating varved interval provided restricted but very precise relative dating for sediments covering the Allerød to Younger Dryas transition. Well-spaced visible and cryptotephra layers, including the widespread Laacher See, Vedde Ash, Askja-S and Saksunarvatn tephra layers, allow absolute ages for the tephra layers established in other locations to be imported into the Haemelsee sequence. These layers also provide multiple tie-lines that allow the Haemelsee sequences to be directly compared at particular moments in time, and within particular intervals, to other important Lateglacial archives. However, selecting the "best" published tephra ages to use in the Haemelsee age model is not simple and risks biasing comparison of the palaeoenvironmental record to fit one or another comparative archive. Here we investigate the use of multiple age models for the Haemelsee record, in order to retain an independent approach to investigating the environmental transitions of the Lateglacial to Early Holocene.
Rolison, John M.; Treinen, Kerri C.; McHugh, Kelly C.; ...
2017-11-06
Uranium certified reference materials (CRM) issued by New Brunswick Laboratory were subjected to dating using four independent uranium-series radiochronometers. In all cases, there was acceptable agreement between the model ages calculated using the 231Pa–235U, 230Th–234U, 227Ac–235U or 226Ra–234U radiochronometers and either the certified 230Th–234U model date (CRM 125-A and CRM U630), or the known purification date (CRM U050 and CRM U100). Finally, the agreement between the four independent radiochronometers establishes these uranium certified reference materials as ideal informal standards for validating dating techniques utilized in nuclear forensic investigations in the absence of standards with certified model ages for multiple radiochronometers.
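
A radiochronometer model age is obtained by inverting the Bateman ingrowth equation for a parent-daughter pair, assuming the daughter was completely removed at purification. Below is a sketch for the 230Th–234U pair; the half-lives are approximate literature values and the measured atom ratio is invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# With complete Th removal at purification, daughter ingrowth follows
#   N_Th/N_U(t) = lam_U/(lam_Th - lam_U) * (1 - exp(-(lam_Th - lam_U) * t)).
lam_u = np.log(2) / 245_500.0    # 234U decay constant (1/yr), approx. half-life
lam_th = np.log(2) / 75_584.0    # 230Th decay constant (1/yr), approx. half-life

def ratio(t):
    """Predicted 230Th/234U atom ratio t years after purification."""
    return lam_u / (lam_th - lam_u) * (1.0 - np.exp(-(lam_th - lam_u) * t))

measured = 1.9e-4   # hypothetical measured 230Th/234U atom ratio
age = brentq(lambda t: ratio(t) - measured, 1e-3, 1e5)
print(f"model age: {age:.1f} years before the analysis date")
```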
Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.
2017-01-09
We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated to a discretization, for instance in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than for independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61–123). The assumptions are verified for an example.
NASA Technical Reports Server (NTRS)
Hansen, Irving G.
1990-01-01
Electromechanical actuators developed to date have commonly utilized permanent magnet (PM) synchronous motors. More recently switched reluctance (SR) motors have been advocated due to their robust characteristics. Implications of work which utilizes induction motors and advanced control techniques are discussed. When induction motors are operated from an energy source capable of controlling voltages and frequencies independently, drive characteristics are obtained which are superior to either PM or SR motors. By synthesizing the machine frequency from a high frequency carrier (nominally 20 kHz), high efficiencies, low distortion, and rapid torque response are available. At this time multiple horsepower machine drives were demonstrated, and work is on-going to develop a 20 hp average, 40 hp peak class of aerospace actuators. This effort is based upon high frequency power distribution and management techniques developed by NASA for Space Station Freedom.
Targeted numerical simulations of binary black holes for GW170104
NASA Astrophysics Data System (ADS)
Healy, J.; Lange, J.; O'Shaughnessy, R.; Lousto, C. O.; Campanelli, M.; Williamson, A. R.; Zlochower, Y.; Calderón Bustillo, J.; Clark, J. A.; Evans, C.; Ferguson, D.; Ghonge, S.; Jani, K.; Khamesra, B.; Laguna, P.; Shoemaker, D. M.; Boyle, M.; García, A.; Hemberger, D. A.; Kidder, L. E.; Kumar, P.; Lovelace, G.; Pfeiffer, H. P.; Scheel, M. A.; Teukolsky, S. A.
2018-03-01
In response to LIGO's observation of GW170104, we performed a series of full numerical simulations of binary black holes, each designed to replicate likely realizations of its dynamics and radiation. These simulations have been performed at multiple resolutions and with two independent techniques to solve Einstein's equations. For the nonprecessing and precessing simulations, we demonstrate the two techniques agree mode by mode, at a precision substantially in excess of statistical uncertainties in current LIGO's observations. Conversely, we demonstrate our full numerical solutions contain information which is not accurately captured with the approximate phenomenological models commonly used to infer compact binary parameters. To quantify the impact of these differences on parameter inference for GW170104 specifically, we compare the predictions of our simulations and these approximate models to LIGO's observations of GW170104.
An Experiment in Scientific Program Understanding
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Owen, Karl (Technical Monitor)
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state-of-the-art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
Optical digital chaos cryptography
NASA Astrophysics Data System (ADS)
Arenas-Pingarrón, Álvaro; González-Marcos, Ana P.; Rivas-Moscoso, José M.; Martín-Pereda, José A.
2007-10-01
In this work we present a new way to mask the data in a one-user communication system using direct-sequence code-division multiple access (DS-CDMA) techniques. The code is generated by a digital chaotic generator, originally proposed by us and previously reported for a chaos cryptographic system. It is demonstrated that if the user's data signal is encoded with a bipolar phase-shift keying (BPSK) technique, usual in DS-CDMA, it can be easily recovered from a time-frequency domain representation. To avoid this situation, a new system is presented in which a previous dispersive stage is applied to the data signal. A time-frequency domain analysis is performed, and the devices required at the transmitter and receiver ends, both user-independent, are presented for the optical domain.
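
The basic spreading/despreading mechanics can be sketched with a logistic-map generator standing in for the authors' digital chaotic generator (the map, seed, and spreading factor here are arbitrary choices, not the reported design):

```python
import numpy as np

def logistic_chips(n, x0=0.37, r=3.99):
    """Bipolar spreading chips from a logistic-map chaotic generator."""
    x, chips = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        chips[i] = 1.0 if x >= 0.5 else -1.0
    return chips

SPREAD = 64                          # chips per data bit
bits = np.array([1, -1, -1, 1])      # BPSK data symbols
code = logistic_chips(SPREAD)

# Spread: each bit multiplies one period of the chaotic code.
tx = np.concatenate([b * code for b in bits])
rx = tx + 0.5 * np.random.default_rng(6).standard_normal(tx.size)  # noisy channel

# Despread: correlate each symbol interval with the same code; only a
# receiver knowing the generator's seed can regenerate the chips.
decoded = [np.sign(rx[i * SPREAD:(i + 1) * SPREAD] @ code)
           for i in range(len(bits))]
print("decoded bits:", decoded)
```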
NASA Astrophysics Data System (ADS)
Bhattacharyya, Sidhakam; Bandyopadhyay, Gautam
2010-10-01
The council of most Urban Local Bodies (ULBs) has limited scope for decision making in the absence of an appropriate financial control mechanism. Information about the expected amount of own fund during a particular period is of great importance for decision making. Therefore, in this paper, efforts are made to present a set of findings and to establish a model for estimating receipts from own sources and payments thereof using multiple regression analysis. Data for sixty months from a reputed ULB in West Bengal have been considered for ascertaining the regression models. This can be used as part of a financial management and control procedure by the council to estimate the effect on own fund. In our study we have considered two models using multiple regression analysis. "Model I" comprises the total adjusted receipts as the dependent variable and selected individual receipts as the independent variables. Similarly, "Model II" consists of the total adjusted payments as the dependent variable and selected individual payments as the independent variables. The resultant of Model I and Model II is the surplus or deficit affecting own fund. This may be applied for decision-making purposes by the council.
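
The two-model structure translates directly into a pair of least-squares fits. A sketch with synthetic monthly figures (the receipt and payment heads and all coefficients are invented): Model I predicts total adjusted receipts from individual receipts, Model II predicts total adjusted payments, and their difference estimates the effect on own fund.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly data: individual receipt heads (e.g., tax, fees,
# grants) and payment heads (e.g., salary, works, debt service).
months = 60
receipts = rng.uniform(10, 100, size=(months, 3))   # Model I predictors
payments = rng.uniform(10, 100, size=(months, 3))   # Model II predictors
total_receipts = receipts @ np.array([1.1, 0.9, 1.0]) + rng.normal(0, 5, months)
total_payments = payments @ np.array([1.0, 1.2, 0.8]) + rng.normal(0, 5, months)

def fit(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

b1 = fit(receipts, total_receipts)   # Model I
b2 = fit(payments, total_payments)   # Model II

next_receipts = np.array([50.0, 40.0, 60.0])
next_payments = np.array([55.0, 35.0, 45.0])
est_in = b1[0] + next_receipts @ b1[1:]
est_out = b2[0] + next_payments @ b2[1:]
print(f"estimated effect on own fund: {est_in - est_out:+.1f}")
```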
An algorithm for separation of mixed sparse and Gaussian sources.
Akkalkotkar, Ameya; Brown, Kevin Scott
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as a mixture of unknown composition.
Reichert, Michael S; Höbel, Gerlinde
2018-03-01
Animal signals are inherently complex phenotypes with many interacting parts combining to elicit responses from receivers. The pattern of interrelationships between signal components reflects the extent to which each component is expressed, and responds to selection, either in concert with or independently of others. Furthermore, many species have complex repertoires consisting of multiple signal types used in different contexts, and common morphological and physiological constraints may result in interrelationships extending across the multiple signals in species' repertoires. The evolutionary significance of interrelationships between signal traits can be explored within the framework of phenotypic integration, which offers a suite of quantitative techniques to characterize complex phenotypes. In particular, these techniques allow for the assessment of modularity and integration, which describe, respectively, the extent to which sets of traits covary either independently or jointly. Although signal and repertoire complexity are thought to be major drivers of diversification and social evolution, few studies have explicitly measured the phenotypic integration of signals to investigate the evolution of diverse communication systems. We applied methods from phenotypic integration studies to quantify integration in the two primary vocalization types (advertisement and aggressive calls) in the treefrogs Hyla versicolor, Hyla cinerea, and Dendropsophus ebraccatus. We recorded male calls and calculated standardized phenotypic variance-covariance (P) matrices for characteristics within and across call types. We found significant integration across call types, but the strength of integration varied by species and corresponded with the acoustic similarity of the call types within each species. H. versicolor had the most modular advertisement and aggressive calls and the least acoustically similar call types. Additionally, P was robust to changing social competition levels in H. versicolor. Our findings suggest new directions in animal communication research in which the complex relationships among the traits of multiple signals are a key consideration for understanding signal evolution.
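
A common way to quantify integration of a P (or correlation) matrix is the relative variance of its eigenvalues, which is 0 for fully modular (uncorrelated) traits and 1 for a fully integrated set. Here is a sketch with simulated call traits; the covariance structure below is invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical call measurements: rows = males, columns = standardized
# acoustic traits of the advertisement and aggressive calls.
calls = rng.multivariate_normal(
    mean=np.zeros(4),
    cov=[[1.0, 0.6, 0.3, 0.2],
         [0.6, 1.0, 0.2, 0.3],
         [0.3, 0.2, 1.0, 0.7],
         [0.2, 0.3, 0.7, 1.0]],
    size=200)

# Integration index: variance of the eigenvalues of the trait
# correlation matrix, scaled by its maximum possible value (n - 1).
R = np.corrcoef(calls, rowvar=False)
eig = np.linalg.eigvalsh(R)
n = len(eig)
integration = eig.var() / (n - 1)   # 0 = fully modular, 1 = fully integrated
print(f"relative eigenvalue variance: {integration:.3f}")
```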
Model Independence in Downscaled Climate Projections: a Case Study in the Southeast United States
NASA Astrophysics Data System (ADS)
Gray, G. M. E.; Boyles, R.
2016-12-01
Downscaled climate projections are used to deduce how the climate will change in future decades at local and regional scales. It is important to use multiple models to characterize part of the future uncertainty given the impact on adaptation decision making. This is traditionally employed through an equally-weighted ensemble of multiple GCMs downscaled using one technique. Newer practices include several downscaling techniques in an effort to increase the ensemble's representation of future uncertainty. However, this practice may be adding statistically dependent models to the ensemble. Previous research has shown a dependence problem in the GCM ensemble in multiple generations, but has not been shown in the downscaled ensemble. In this case study, seven downscaled climate projections on the daily time scale are considered: CLAREnCE10, SERAP, BCCA (CMIP5 and CMIP3 versions), Hostetler, CCR, and MACA-LIVNEH. These data represent 83 ensemble members, 44 GCMs, and two generations of GCMs. Baseline periods are compared against the University of Idaho's METDATA gridded observation dataset. Hierarchical agglomerative clustering is applied to the correlated errors to determine dependent clusters. Redundant GCMs across different downscaling techniques show the most dependence, while smaller dependence signals are detected within downscaling datasets and across generations of GCMs. These results indicate that using additional downscaled projections to increase the ensemble size must be done with care to avoid redundant GCMs and the process of downscaling may increase the dependence of those downscaled GCMs. Climate model generation does not appear dissimilar enough to be treated as two separate statistical populations for ensemble building at the local and regional scales.
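
Dependence screening of this kind can be sketched by hierarchically clustering the correlation structure of model errors. In the toy example below, three of five hypothetical ensemble members share an error component, mimicking a redundant parent GCM, and end up in the same cluster.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(9)

# Daily temperature errors (model minus observation) for five
# hypothetical downscaled projections; members 0-2 share a parent GCM.
shared = rng.standard_normal(365)
errors = np.vstack(
    [shared + 0.3 * rng.standard_normal(365) for _ in range(3)]
    + [rng.standard_normal(365) for _ in range(2)])

# Distance = 1 - correlation between error series; dependent members
# (those sharing a GCM) cluster together.
corr = np.corrcoef(errors)
dist = 1.0 - corr[np.triu_indices(5, k=1)]   # condensed distance vector
Z = linkage(dist, method="average")
print("cluster labels:", fcluster(Z, t=0.5, criterion="distance"))
```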
Nguyen, Quynh C; Osypuk, Theresa L; Schmidt, Nicole M; Glymour, M Maria; Tchetgen Tchetgen, Eric J
2015-03-01
Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship between the exposure (treatment) and multiple mediators, conditional on covariates, by regressing exposure on mediators and covariates. The inverse of the covariate-adjusted exposure-mediator odds ratio association is used to weight the primary analytical regression of the outcome on treatment. The treatment coefficient in such a weighted regression estimates the natural direct effect of treatment on the outcome, and indirect effects are identified by subtracting direct effects from total effects. Weighting renders treatment and mediators independent, thereby deactivating indirect pathways of the mediators. This new mediation technique accommodates multiple discrete or continuous mediators. IORW is easily implemented and is appropriate for any standard regression model, including quantile regression and survival analysis. An empirical example is given using data from the Moving to Opportunity (1994-2002) experiment, testing whether neighborhood context mediated the effects of a housing voucher program on obesity. Relevant Stata code (StataCorp LP, College Station, Texas) is provided. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
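
Here is a minimal sketch of one common IORW implementation on simulated data (the variable names, effect sizes, and data-generating model are all invented, and the report itself provides Stata rather than Python code): regress exposure on the mediator and covariates, weight exposed units by the inverse of the fitted exposure-mediator odds ratio, and rerun the outcome regression with those weights.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)

# Synthetic data: binary treatment A, continuous mediator M,
# covariate C, outcome Y. Structure is illustrative only.
n = 2000
C = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * C)))
M = 0.8 * A + 0.3 * C + rng.normal(size=n)
Y = 0.5 * A + 0.7 * M + 0.2 * C + rng.normal(size=n)
df = pd.DataFrame({"A": A, "M": M, "C": C, "Y": Y})

# Step 1: regress exposure on mediator and covariates; the mediator
# coefficient defines the exposure-mediator odds ratio.
exp_model = smf.logit("A ~ M + C", data=df).fit(disp=False)

# Step 2: weight exposed units by the inverse of that odds ratio
# (unexposed units get weight 1), deactivating the indirect pathway.
df["w"] = np.where(df["A"] == 1,
                   np.exp(-exp_model.params["M"] * df["M"]), 1.0)

# Step 3: weighted outcome regression; the A coefficient estimates the
# natural direct effect, and indirect = total - direct.
total = smf.ols("Y ~ A + C", data=df).fit().params["A"]
direct = smf.wls("Y ~ A + C", data=df, weights=df["w"]).fit().params["A"]
print(f"total={total:.2f}  direct={direct:.2f}  indirect={total - direct:.2f}")
```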
Super-resolution imaging of multiple cells by optimized flat-field epi-illumination
NASA Astrophysics Data System (ADS)
Douglass, Kyle M.; Sieben, Christian; Archetti, Anna; Lambert, Ambroise; Manley, Suliana
2016-11-01
Biological processes are inherently multi-scale, and supramolecular complexes at the nanoscale determine changes at the cellular scale and beyond. Single-molecule localization microscopy (SMLM) techniques have been established as important tools for studying cellular features with resolutions of the order of around 10 nm. However, in their current form these modalities are limited by a highly constrained field of view (FOV) and field-dependent image resolution. Here, we develop a low-cost microlens array (MLA)-based epi-illumination system—flat illumination for field-independent imaging (FIFI)—that can efficiently and homogeneously perform simultaneous imaging of multiple cells with nanoscale resolution. The optical principle of FIFI, which is an extension of the Köhler integrator, is further elucidated and modelled with a new, free simulation package. We demonstrate FIFI's capabilities by imaging multiple COS-7 and bacteria cells in 100 × 100 μm² SMLM images—more than quadrupling the size of a typical FOV and producing near-gigapixel-sized images of uniformly high quality.
Elia, Marinos; Betts, Peter; Jackson, Diane M; Mulligan, Jean
2007-09-01
Intrauterine programming of body composition [percentage body fat (%BF)] has been sparsely examined with multiple independent reference techniques in children. The effects on and consequences of body build (dimensions, mass, and length of body segments) are unclear. The study examined whether percentage fat and relation of percentage fat to body mass index (BMI; in kg/m2) in prepubertal children are programmed during intrauterine development and are dependent on body build. It also aimed to examine the extent to which height can be predicted by parental height and birth weight. Eighty-five white children (44 boys, 41 girls; aged 6.5-9.1 y) had body composition measured with a 4-component model (n = 58), dual-energy X-ray absorptiometry (n = 84), deuterium dilution (n = 81), densitometry (n = 62), and skinfold thicknesses (n = 85). An increase in birth weight of 1 SD was associated with a decrease of 1.95% fat as measured by the 4-component model (P = 0.012) and 0.82-2.75% by the other techniques. These associations were independent of age, sex, socioeconomic status, physical activity, BMI, and body build. Body build did not decrease the strength of the associations. Birth weight was a significantly better predictor of height than was self-reported midparental height, accounting for 19.4% of the variability at 5 y of age and 10.3% at 7.8 y of age (17.8% and 8.8% of which were independent of parental height at these ages, respectively). Consistent trends across body-composition measurement techniques add strength to the suggestion that percentage fat in prepubertal children is programmed in utero (independently of body build and BMI). It also suggests birth weight is a better predictor of prepubertal height than is self-reported midparental height.
Porting Ordinary Applications to Blue Gene/Q Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy
2015-08-31
Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
Frequency stabilization of multiple lasers on a single medium-finesse cavity
NASA Astrophysics Data System (ADS)
Han, Chengyin; Zhou, Min; Gao, Qi; Li, Shangyan; Zhang, Shuang; Qiao, Hao; Ai, Di; Zhang, Mengya; Lou, Ge; Luo, Limeng; Xu, Xinye
2018-04-01
We present a simple, compact, and robust frequency stabilization system for three lasers operating at 649, 759, and 770 nm, respectively. These lasers are used in experiments on ytterbium optical lattice clocks, for which each laser needs a linewidth of a few tens to hundreds of kilohertz while maintaining favorable long-term stability. Here, a single medium-finesse cavity is adopted as the frequency reference and the standard Pound-Drever-Hall technique is used to stabilize the laser frequencies. Based on independent phase modulation, multiple-laser locking is demonstrated without mutual intervention. The locked lasers are measured to have a linewidth of 100 kHz, and the residual frequency drift is about 78.5 Hz s⁻¹. This setup is much simpler in construction than those used in previous work.
NASA Astrophysics Data System (ADS)
Collins, Curtis Andrew
Ordinary and weighted least squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic-foot volume (outside bark) and green-weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest-type model strata. These 36 models were then fit and reported across 10 variable sets and variable-set combinations for volume and ton units. Along with large model counts, potential independent variables were created using power transforms and interactions. The basis of these variables was field-measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data. As part of the modeling process, single variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model application remains flexible, since not all independent variable data are needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent variable selection techniques. After variable selection, weighted least squares techniques were often employed, using weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving residual variance homogeneity. Finished model fits, as represented by the coefficient of determination (R²), surpassed 0.5 in numerous models, with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events recur.
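The weighting scheme described above is easy to mirror in any regression package. A hedged sketch follows (Python/statsmodels; the predictors, data, and effect sizes are invented, and the weights simply follow the abstract's one-over-square-root choice):

    # Illustrative weighted least squares with w = 1/sqrt(pre-storm volume).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    pre_volume = rng.uniform(100.0, 5000.0, 300)   # pre-storm cubic-foot volume
    wind = rng.uniform(20.0, 60.0, 300)            # HWIND-style wind predictor
    damage = 0.02 * pre_volume + 5.0 * wind + rng.normal(0.0, np.sqrt(pre_volume))

    X = sm.add_constant(np.column_stack([pre_volume, wind]))
    w = 1.0 / np.sqrt(pre_volume)                  # down-weight high-variance plots
    fit = sm.WLS(damage, X, weights=w).fit()
    print(fit.rsquared, fit.params)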
Scanning probe recognition microscopy investigation of tissue scaffold properties
Fan, Yuan; Chen, Qian; Ayres, Virginia M; Baczewski, Andrew D; Udpa, Lalita; Kumar, Shiva
2007-01-01
Scanning probe recognition microscopy is a new scanning probe microscopy technique which enables selective scanning along individual nanofibers within a tissue scaffold. Statistically significant data for multiple properties can be collected by repetitively fine-scanning an identical region of interest. The results of a scanning probe recognition microscopy investigation of the surface roughness and elasticity of a series of tissue scaffolds are presented. Deconvolution and statistical methods were developed and used for data accuracy along curved nanofiber surfaces. Nanofiber features were also independently analyzed using transmission electron microscopy, with results that supported the scanning probe recognition microscopy-based analysis. PMID:18203431
Implications of the Babinet Principle for Casimir interactions
NASA Astrophysics Data System (ADS)
Maghrebi, Mohammad F.; Jaffe, Robert L.; Abravanel, Ronen
2011-09-01
We formulate the Babinet Principle (BP) as a relation between scattering amplitudes and combine it with multiple scattering techniques to derive new properties of electromagnetic Casimir forces. We show that the Casimir force exerted by a planar conductor or dielectric on a self-complementary perforated planar mirror is approximately half that on a uniform mirror independent of the distance between them. Also, the BP suggests that Casimir edge effects are generically anomalously small. Furthermore, the BP can be used to relate any planar object to its complementary geometry, a relation we use to estimate Casimir forces between two screens with apertures.
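The halving claim follows from a one-line argument once the BP is granted. In hedged form (our notation; the paper's precise statement is in terms of scattering amplitudes): if complementary planar screens $S$ and $\tilde{S}$ satisfy

    \[ F[S] + F[\tilde{S}] \approx F[\text{mirror}] \]

for the Casimir force exerted on a distant planar body, then a self-complementary screen, for which $S \cong \tilde{S}$, obeys

    \[ F[S] \approx \tfrac{1}{2}\, F[\text{mirror}], \]

at every separation, matching the distance-independent factor of one half stated above.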
Koerner, Tess K.; Zhang, Yang
2017-01-01
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions would simply be treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity of applying, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
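The contrast between the two analyses can be sketched in a few lines. Below (Python/statsmodels on simulated repeated-measures data; the study design, effect sizes, and variable names are stand-ins), the Pearson test pools all rows as if independent, while the LME model adds a random intercept per subject and a fixed condition factor:

    # Pooled Pearson correlation vs. a linear mixed-effects model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import pearsonr

    rng = np.random.default_rng(3)
    subj = np.repeat(np.arange(30), 4)            # 30 subjects x 4 conditions
    cond = np.tile(np.arange(4), 30)
    neural = rng.standard_normal(120) + 0.5 * cond
    behavior = (0.8 * neural + rng.standard_normal(120)
                + rng.standard_normal(30)[subj])  # subject baseline shifts
    df = pd.DataFrame({"subj": subj, "cond": cond,
                       "neural": neural, "behavior": behavior})

    print(pearsonr(df["neural"], df["behavior"]))  # treats 120 rows as independent

    # Random intercept per subject plus a fixed condition factor:
    lme = smf.mixedlm("behavior ~ neural + C(cond)", df, groups=df["subj"]).fit()
    print(lme.params["neural"])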
Rupture Propagation Imaging of Fluid Induced Events at the Basel EGS Project
NASA Astrophysics Data System (ADS)
Folesky, Jonas; Kummerow, Jörn; Shapiro, Serge A.
2014-05-01
The analysis of rupture properties using rupture propagation imaging techniques is a fast-developing field of research in global seismology. Recent studies have usually focused on the rupture fronts of large to megathrust earthquakes, such as the 2004 Sumatra-Andaman and the 2011 Tohoku, Japan, earthquakes. The back projection technique is the most prominent technique in this field. Here the seismograms recorded at an array or at a seismic network are back-shifted to a grid of possible source locations via a stacking procedure. This can provide information on the energy release and energy distribution of the rupture, which can then be used to estimate event properties such as location, rupture direction, rupture speed, or rupture length. The procedure is fast and direct, and it relies only on a reasonable velocity model. Thus it is a good way to rapidly estimate rupture properties, and it can be used to confirm independently obtained event information. We adopted the back projection technique and placed it in a microseismic context, having demonstrated its use for multiple synthetic ruptures within a reservoir model of microseismic scale in earlier work. Our motivation is the occurrence of relatively large induced seismic events, with magnitudes ML ≥ 3.4 and rupture lengths of several hundred meters, at a number of stimulated geothermal reservoirs and waste disposal sites. We use the configuration of the seismic network and the reservoir properties of the Basel Geothermal Site to build a synthetic model of a rupture by modeling the wave field of multiple spatio-temporally separated single sources using finite-difference modeling. The focus of this work is the application of the back projection technique and the demonstration of its feasibility for retrieving the rupture properties of real fluid-induced events. We take four microseismic events with magnitudes from ML 3.1 to 3.4 and reconstruct source parameters such as location, orientation, and length. By comparison with our synthetic results as well as with independent localization and source mechanism studies in this area, we show that the obtained results are reasonable and that back projection imaging is not only possible for microseismic datasets of adequate quality but also provides important additional insight into the rupture process.
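A toy version of the shift-and-stack kernel conveys the idea. The 1-D Python sketch below (geometry, velocity, and sampling are invented; real applications use 3-D grids, full waveforms, and a proper velocity model) removes the travel time from each station to each trial source and stacks; the stack power peaks near the true source:

    # Toy 1-D shift-and-stack back projection (geometry and data invented).
    import numpy as np

    def back_project(traces, station_x, grid_x, v, dt):
        # Stack trace envelopes after removing the travel time from each
        # trial source position; the stack power peaks near the true source.
        nsta, nt = traces.shape
        env = np.abs(traces)
        power = np.zeros(len(grid_x))
        for gi, gx in enumerate(grid_x):
            stack = np.zeros(nt)
            for si in range(nsta):
                shift = int(round(abs(gx - station_x[si]) / v / dt))
                stack[: nt - shift] += env[si, shift:]
            power[gi] = stack.max()
        return power

    dt, v = 0.001, 2000.0                     # s, m/s
    station_x = np.array([0.0, 100.0, 200.0, 800.0, 1000.0])
    traces = np.zeros((5, 2000))
    for si, sx in enumerate(station_x):       # impulsive source at x = 300 m
        traces[si, int(abs(300.0 - sx) / v / dt)] = 1.0
    grid_x = np.linspace(0.0, 1000.0, 101)
    best = grid_x[np.argmax(back_project(traces, station_x, grid_x, v, dt))]
    print(best)                               # ~300.0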
PECAN: library-free peptide detection for data-independent acquisition tandem mass spectrometry data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ting, Ying S.; Egertson, Jarrett D.; Bollinger, James G.
Data-independent acquisition (DIA) is an emerging mass spectrometry (MS)-based technique for unbiased and reproducible measurement of protein mixtures. DIA tandem mass spectrometry spectra are often highly multiplexed, containing product ions from multiple cofragmenting precursors. Detecting peptides directly from DIA data is therefore challenging; most DIA data analyses require spectral libraries. Here we present PECAN (http://pecan.maccosslab.org), a library-free, peptide-centric tool that robustly and accurately detects peptides directly from DIA data. PECAN reports evidence of detection based on product ion scoring, which enables detection of low-abundance analytes with poor precursor ion signal. We demonstrate the chromatographic peak picking accuracy and peptide detection capability of PECAN, and we further validate its detections with data-dependent acquisition and targeted analyses. Lastly, we used PECAN to build a plasma proteome library from DIA data and to query known sequence variants.
Koch, Tobias; Schultze, Martin; Jeon, Minjeong; Nussbeck, Fridtjof W; Praetorius, Anna-Katharina; Eid, Michael
2016-01-01
Multirater (multimethod, multisource) studies are increasingly applied in psychology. Eid and colleagues (2008) proposed a multilevel confirmatory factor model for multitrait-multimethod (MTMM) data combining structurally different and multiple independent interchangeable methods (raters). In many studies, however, different interchangeable raters (e.g., peers, subordinates) are asked to rate different targets (students, supervisors), leading to violations of the independence assumption and to cross-classified data structures. In the present work, we extend the ML-CFA-MTMM model by Eid and colleagues (2008) to cross-classified multirater designs. The new C4 model (Cross-Classified CTC[M-1] Combination of Methods) accounts for nonindependent interchangeable raters and enables researchers to explicitly model the interaction between targets and raters as a latent variable. Using a real data application, it is shown how credibility intervals of model parameters and different variance components can be obtained using Bayesian estimation techniques.
Minimum envelope roughness pulse design for reduced amplifier distortion in parallel excitation.
Grissom, William A; Kerr, Adam B; Stang, Pascal; Scott, Greig C; Pauly, John M
2010-11-01
Parallel excitation uses multiple transmit channels and coils, each driven by independent waveforms, to afford the pulse designer an additional spatial encoding mechanism that complements gradient encoding. In contrast to parallel reception, parallel excitation requires individual power amplifiers for each transmit channel, which can be cost prohibitive. Several groups have explored the use of low-cost power amplifiers for parallel excitation; however, such amplifiers commonly exhibit nonlinear memory effects that distort radio frequency pulses. This is especially true for pulses with rapidly varying envelopes, which are common in parallel excitation. To overcome this problem, we introduce a technique for parallel excitation pulse design that yields pulses with smoother envelopes. We demonstrate experimentally that pulses designed with the new technique suffer less amplifier distortion than unregularized pulses and pulses designed with conventional regularization.
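The regularization idea, penalizing roughness in the pulse-design least-squares problem, can be sketched as a Tikhonov-type solve. In the snippet below (Python; the system matrix, target pattern, and the use of a plain first-difference penalty on the complex waveform rather than on its magnitude envelope are all simplifying assumptions), increasing the penalty weight yields smoother pulses:

    # Hedged sketch: Tikhonov-style pulse design trading excitation error
    # against waveform roughness (toy matrices, not a real MRI system model).
    import numpy as np

    rng = np.random.default_rng(4)
    nt, nx = 64, 128
    A = rng.standard_normal((nx, nt)) + 1j * rng.standard_normal((nx, nt))
    d = np.ones(nx, dtype=complex)                  # desired excitation pattern

    D = np.eye(nt, k=1)[:-1] - np.eye(nt)[:-1]      # first-difference operator
    for lam in (0.0, 10.0, 100.0):
        lhs = A.conj().T @ A + lam * (D.T @ D)      # normal equations of
        b = np.linalg.solve(lhs, A.conj().T @ d)    # min ||Ab - d||^2 + lam||Db||^2
        print(lam, np.abs(np.diff(b)).sum())        # roughness falls as lam grows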
NASA Astrophysics Data System (ADS)
Liu, Yuanyuan; Jiang, Weijian; Yang, Yang; Pu, Huayan; Peng, Yan; Xin, Liming; Zhang, Yi; Sun, Yu
2018-01-01
Constructing vascular scaffolds is important in tissue engineering. However, scaffolds with characteristics such as multiple layers and a certain degree of spatial morphology still cannot be readily constructed by current vascular scaffold fabrication techniques. This paper presents a three-layered bifurcated vascular scaffold with a curved structure. The technique combines 3D-printed molds with cast hydrogel and fugitive ink to create vessel-mimicking constructs with customizable structural parameters. Compared with other fabrication methods, the technique can create more native-like 3D geometries. The diameter and wall thickness of the fabricated constructs can be independently controlled, providing a feasible approach for vascular scaffold construction. Enzymatically crosslinked gelatin was used as the scaffold material. The morphology and mechanical properties were evaluated. Human umbilical vein endothelial cells (HUVECs) were seeded on the scaffolds and cultured for 72 h. Cell viability and morphology were assessed. The results showed that the proposed process has good application potential and may provide a feasible approach for constructing vascular scaffolds.
Chen, Hsiao-Wei; Wu, Chun-Feng; Chu, Lichieh Julie; Chiang, Wei-Fang; Wu, Chih-Ching; Yu, Jau-Song; Tsai, Cheng-Han; Liang, Kung-Hao; Chang, Yu-Sun; Wu, Maureen; Ou Yang, Wei-Ting
2017-01-01
Multiple (selected) reaction monitoring (MRM/SRM) of peptides is a growing technology for target protein quantification because it is more robust, precise, accurate, high-throughput, and multiplex-capable than antibody-based techniques. The technique has been applied clinically to the large-scale quantification of multiple target proteins in different types of fluids. However, previous MRM-based studies have placed less focus on sample-preparation workflow and analytical performance in the precise quantification of proteins in saliva, a noninvasively sampled body fluid. In this study, we evaluated the analytical performance of a simple and robust multiple reaction monitoring (MRM)-based targeted proteomics approach incorporating liquid chromatography with mass spectrometry detection (LC-MRM/MS). This platform was used to quantitatively assess the biomarker potential of a group of 56 salivary proteins that have previously been associated with human cancers. To further enhance the development of this technology for assay of salivary samples, we optimized the workflow for salivary protein digestion and evaluated quantification performance, robustness and technical limitations in analyzing clinical samples. Using a clinically well-characterized cohort of two independent clinical sample sets (total n = 119), we quantitatively characterized these protein biomarker candidates in saliva specimens from controls and oral squamous cell carcinoma (OSCC) patients. The results clearly showed a significant elevation of most targeted proteins in saliva samples from OSCC patients compared with controls. Overall, this platform was capable of assaying the most highly multiplexed panel of salivary protein biomarkers, highlighting the clinical utility of MRM in oral cancer biomarker research. PMID:28235782
Multi-Sectional Views Textural Based SVM for MS Lesion Segmentation in Multi-Channels MRIs
Abdullah, Bassem A; Younis, Akmal A; John, Nigel M
2012-01-01
In this paper, a new technique is proposed for automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI) data. The technique uses a trained support vector machine (SVM) to discriminate between blocks in MS lesion regions and blocks in non-lesion regions, based mainly on textural features with the aid of other features. The classification is done on each of the axial, sagittal, and coronal sectional brain views independently, and the resulting segmentations are aggregated to provide a more accurate output segmentation. The main contribution of the proposed technique is the use of textural features to detect MS lesions in a fully automated approach that does not rely on manually delineated MS lesions. In addition, the technique introduces the concept of multi-sectional view segmentation to produce verified segmentation. The proposed textural-based SVM technique was evaluated using three simulated datasets and more than fifty real MRI datasets. The results were compared with state-of-the-art methods. The obtained results indicate that the proposed method would be viable for use in clinical practice for the detection of MS lesions in MRI. PMID:22741026
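The aggregation step can be illustrated with a voting sketch. In the Python snippet below (scikit-learn on synthetic "textural" features; the feature dimensions, block counts, and the two-of-three vote are assumptions), one SVM is trained per sectional view and the per-view predictions are merged by majority vote:

    # One SVM per sectional view, aggregated by majority vote
    # (synthetic features; scikit-learn).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    n = 400
    y = rng.binomial(1, 0.3, n)                       # 1 = lesion block
    views = {v: rng.standard_normal((n, 10)) + y[:, None]
             for v in ("axial", "sagittal", "coronal")}

    train, test = np.arange(300), np.arange(300, 400)
    votes = np.zeros(100)
    for v, X in views.items():
        clf = SVC(kernel="rbf").fit(X[train], y[train])
        votes += clf.predict(X[test])
    final = (votes >= 2).astype(int)                  # agreement in >= 2 of 3 views
    print((final == y[test]).mean())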
Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built to cover more desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables from any type of hashing algorithm. Meanwhile, multiple-table search also lacks a generic, query-adaptive, fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve the above problems, in this paper we first regard table construction as a selection problem over a set of candidate hash functions. With a graph representation of the function set, we propose an efficient solution that sequentially applies the normalized dominant set to find the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme that enables fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate this scheme into multiple-table search using a fast, yet reciprocal, table lookup algorithm within the adaptive weighted Hamming radius. Both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both naive construction methods and state-of-the-art hashing algorithms.
Jung, So Lyung; Lee, Jeong Hyun; Shong, Young Kee; Sung, Jin Yong; Kim, Kyu Sun; Lee, Ducky; Kim, Ji-hoon; Baek, Seon Mi; Sim, Jung Suk; Na, Dong Gyu
2018-01-01
Objective: To assess the efficacy and safety of thyroid radiofrequency (RF) ablation for benign thyroid nodules performed by trained radiologists according to a unified protocol in a multi-center study. Materials and Methods: From 2010 to 2011, 345 nodules from 345 patients (M:F = 43:302; mean age ± SD = 46.0 ± 12.7 years, range = 15–79) who met the eligibility criteria were enrolled from five institutions. At pre-ablation, the mean volume was 14.2 ± 13.2 mL (1.1–80.8 mL). For 12 months or longer after treatment, 276 lesions, consisting of 248 solid and 28 predominantly cystic nodules, were followed. All operators performed RF ablation with a cool-tip RF system and two standard techniques (a transisthmic approach and the moving-shot technique). Volume reduction at 12 months after RF ablation (the primary outcome), therapeutic success, improvement of symptoms and of cosmetic problems, and complications were evaluated. Multiple linear regression analysis was applied to identify factors that were independently predictive of volume reduction. Results: The mean volume reduction was 80.3% at 12 months (n = 276), and 84.3% (n = 198), 89.2% (n = 128), 91.9% (n = 57), and 95.3% (n = 6) at the 24-, 36-, 48-, and 60-month follow-ups, respectively. The therapeutic success rate was 97.8%. Both mean symptom and cosmetic scores showed significant improvements (p < 0.001). The rate of major complications was 1.0% (3/276). Solidity and applied energy were independent factors predicting volume reduction. Conclusion: Radiofrequency ablation performed by trained radiologists from multiple institutions using a unified protocol and similar devices was effective and safe for treating benign thyroid nodules. PMID:29354014
Hope, Sarah A; Antonis, Paul; Adam, David; Cameron, James D; Meredith, Ian T
2007-10-01
The aim of this study was to test the hypothesis that coronary artery disease extent and severity are associated with central aortic pressure waveform characteristics. Although it is thought that central aortic pressure waveform characteristics, particularly augmentation index, may influence cardiovascular disease progression and predict cardiovascular risk, little is known of the relationship between central waveform characteristics and the severity and extent of coronary artery disease. Central aortic waveforms (2F Millar pressure transducer-tipped catheters) were acquired at the time of coronary angiography for suspected native coronary artery disease in 40 patients (24 male). The severity and extent of disease were assessed independently by two observers using two previously described scoring systems (modified Gensini's stenosis and Sullivan's extent scores). Relationships between disease scores, aortic waveform characteristics, aorto-radial pulse wave velocity and subject demographic features were assessed by regression techniques. Both extent and severity scores were associated with increasing age and male sex (P < 0.001), but no other risk factors. Both scores were independently associated with aorto-radial pulse wave velocity (P < 0.001), which entered a multiple regression model prior to age and sex. This association was not dependent upon blood pressure. Neither score was associated with central aortic augmentation index, by either simple or multiple linear regression techniques including heart rate, subject demographic features and cardiovascular risk factors. Aorto-radial pulse wave velocity, but not central aortic augmentation index, is associated with both the extent and severity of coronary artery disease. This has potentially important implications for applicability of a generalized arterial transfer function.
Use of Objective Metrics in Dynamic Facial Reanimation: A Systematic Review.
Revenaugh, Peter C; Smith, Ryan M; Plitt, Max A; Ishii, Lisa; Boahene, Kofi; Byrne, Patrick J
2018-06-21
Facial nerve deficits cause significant functional and social consequences for those affected. Existing techniques for dynamic restoration of facial nerve function are imperfect and result in a wide variety of outcomes. Currently, there is no standard objective instrument for facial movement as it relates to restorative techniques. To determine what objective instruments of midface movement are used in outcome measurements for patients treated with dynamic methods for facial paralysis. Database searches from January 1970 to June 2017 were performed in PubMed, Embase, Cochrane Library, Web of Science, and Scopus. Only English-language articles on studies performed in humans were considered. The search terms used were ("Surgical Flaps"[Mesh] OR "Nerve Transfer"[Mesh] OR "nerve graft" OR "nerve grafts") AND (face [mh] OR facial paralysis [mh]) AND (innervation [sh]) OR ("Face"[Mesh] OR facial paralysis [mh]) AND (reanimation [tiab]). Two independent reviewers evaluated the titles and abstracts of all articles and included those that reported objective outcomes of a surgical technique in at least 2 patients. The presence or absence of an objective instrument for evaluating outcomes of midface reanimation. Additional outcome measures were reproducibility of the test, reporting of symmetry, measurement of multiple variables, and test validity. Of 241 articles describing dynamic facial reanimation techniques, 49 (20.3%) reported objective outcome measures for 1898 patients. Of those articles reporting objective measures, there were 29 different instruments, only 3 of which reported all outcome measures. Although instruments are available to objectively measure facial movement after reanimation techniques, most studies do not report objective outcomes. Of objective facial reanimation instruments, few are reproducible and able to measure symmetry and multiple data points. To accurately compare objective outcomes in facial reanimation, a reproducible, objective, and universally applied instrument is needed.
Multiple seeding for the growth of bulk GdBCO-Ag superconductors with single grain behaviour
NASA Astrophysics Data System (ADS)
Shi, Y.; Durrell, J. H.; Dennis, A. R.; Huang, K.; Namburi, D. K.; Zhou, D.; Cardwell, D. A.
2017-01-01
Rare earth-barium-copper oxide bulk superconductors fabricated in large or complicated geometries are required for a variety of engineering applications. Initiating crystal growth from multiple seeds reduces the time taken to melt-process individual samples and can reduce the problem of poor crystal texture away from the seed. Grain boundaries between regions of independent crystal growth can significantly reduce the flow of current due to crystallographic misalignment and the agglomeration of impurity phases. Enhanced supercurrent flow at such boundaries has been achieved by minimising the depth of the boundary between a-growth sectors generated during the melt growth process, by reducing second-phase agglomerations, and by a new technique for initiating crystal growth that minimises the misalignment between different growth regions. The trapped magnetic fields measured for the resulting samples exhibit a single trapped-field peak, indicating that they are equivalent to conventional single grains.
Master of Puppets: Cooperative Multitasking for In Situ Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Lukic, Zarija
2016-01-01
Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
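The control flow that coroutines buy here can be mimicked with Python generators (a loose analogue only; Henson itself couples compiled, position-independent executables, which this toy does not attempt):

    # Toy analogue of cooperative multitasking between a simulation and an
    # analysis; control (and data) passes back and forth without either
    # side running to completion first.
    def simulation(steps):
        state = 0.0
        for t in range(steps):
            state += 1.0                      # stand-in for a time step
            yield t, state                    # hand control to the analysis

    def analysis(sim):
        for t, state in sim:                  # each iteration resumes the simulation
            print(f"step {t}: running mean = {state / (t + 1):.2f}")

    analysis(simulation(5))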
Steen, Paul J.; Passino-Reader, Dora R.; Wiley, Michael J.
2006-01-01
As a part of the Great Lakes Regional Aquatic Gap Analysis Project, we evaluated methodologies for modeling associations between fish species and habitat characteristics at a landscape scale. To do this, we created brook trout Salvelinus fontinalis presence and absence models based on four different techniques: multiple linear regression, logistic regression, neural networks, and classification trees. The models were tested in two ways: by application to an independent validation database and cross-validation using the training data, and by visual comparison of statewide distribution maps with historically recorded occurrences from the Michigan Fish Atlas. Although differences in the accuracy of our models were slight, the logistic regression model predicted with the least error, followed by multiple regression, then classification trees, then neural networks. These models will provide natural resource managers with a way to identify habitats requiring protection for the conservation of fish species.
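Such a four-way comparison is straightforward to reproduce in outline. A hedged sketch follows (Python/scikit-learn on synthetic presence/absence data; the habitat variables and model settings are invented):

    # Comparing the four model families on synthetic presence/absence data.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    X = rng.standard_normal((500, 6))                 # landscape habitat variables
    presence = (X[:, 0] - X[:, 1] + rng.standard_normal(500) > 0).astype(int)

    classifiers = {
        "logistic": LogisticRegression(max_iter=1000),
        "class. tree": DecisionTreeClassifier(max_depth=4),
        "neural net": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000),
    }
    for name, m in classifiers.items():
        print(name, cross_val_score(m, X, presence, cv=5).mean())

    # Multiple linear regression on the 0/1 response, thresholded at 0.5:
    pred = LinearRegression().fit(X, presence).predict(X) > 0.5
    print("linear regression", (pred == presence).mean())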
Multiple Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.
2010-01-01
A new multiple classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies, when compared to previously proposed classification techniques.
Simultaneous fits in ISIS on the example of GRO J1008-57
NASA Astrophysics Data System (ADS)
Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern
2015-04-01
Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly into the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy allow, in principle, fitting data from multiple datasets individually, the syntax used in these tools is often not well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters such as the folding energy E_fold with unprecedented precision.
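The core trick, tying one physical parameter across datasets inside a single objective, can be shown without ISIS. A minimal Python sketch (scipy; the exponential model and data are stand-ins for real spectra):

    # Two datasets share one physical parameter ('scale') but keep their own
    # normalizations; a single residual vector ties them together.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(7)
    x = np.linspace(1.0, 10.0, 50)
    y1 = 3.0 * np.exp(-x / 2.5) + 0.01 * rng.standard_normal(50)
    y2 = 7.0 * np.exp(-x / 2.5) + 0.01 * rng.standard_normal(50)

    def residuals(p):
        norm1, norm2, scale = p                 # 'scale' is tied across datasets
        r1 = y1 - norm1 * np.exp(-x / scale)
        r2 = y2 - norm2 * np.exp(-x / scale)
        return np.concatenate([r1, r2])         # one joint objective

    fit = least_squares(residuals, x0=[1.0, 1.0, 1.0])
    print(fit.x)                                # scale constrained by both datasets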
The M Word: Multicollinearity in Multiple Regression.
ERIC Educational Resources Information Center
Morrow-Howell, Nancy
1994-01-01
Notes that existence of substantial correlation between two or more independent variables creates problems of multicollinearity in multiple regression. Discusses multicollinearity problem in social work research in which independent variables are usually intercorrelated. Clarifies problems created by multicollinearity, explains detection of…
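A standard way to detect the problem is the variance inflation factor, shown here in a short hedged sketch (Python/statsmodels; the data are synthetic):

    # Variance inflation factors flag near-collinear predictors.
    import numpy as np
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(8)
    x1 = rng.standard_normal(200)
    x2 = x1 + 0.05 * rng.standard_normal(200)   # nearly collinear with x1
    x3 = rng.standard_normal(200)
    X = np.column_stack([np.ones(200), x1, x2, x3])

    for i, name in zip((1, 2, 3), ("x1", "x2", "x3")):
        print(name, variance_inflation_factor(X, i))   # x1 and x2 blow up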
Gunnery, Sarah D; Naumova, Elena N; Saint-Hilaire, Marie; Tickle-Degnen, Linda
2017-01-01
People with Parkinson's disease (PD) often experience a decrease in their facial expressivity, but little is known about how the coordinated movements across regions of the face are impaired in PD. The face has neurologically independent regions that coordinate to articulate distinct social meanings that others perceive as gestalt expressions, and so understanding how different regions of the face are affected is important. Using the Facial Action Coding System, this study comprehensively measured spontaneous facial expression across 600 frames for a multiple case study of people with PD who were rated as having varying degrees of facial expression deficits, and created correlation matrices for frequency and intensity of produced muscle activations across different areas of the face. Data visualization techniques were used to create temporal and correlational mappings of muscle action in the face at different degrees of facial expressivity. Results showed that as severity of facial expression deficit increased, there was a decrease in number, duration, intensity, and coactivation of facial muscle action. This understanding of how regions of the parkinsonian face move independently and in conjunction with other regions will provide a new focus for future research aiming to model how facial expression in PD relates to disease progression, stigma, and quality of life.
NASA Astrophysics Data System (ADS)
Miyazaki, K.; Eskes, H.; Sudo, K.
2012-04-01
Carbon monoxide (CO) and nitrogen oxides (NOx) play an important role in tropospheric chemistry through their influence on ozone and the hydroxyl radical (OH). The simultaneous optimization of various chemical components is expected to improve the emission inversion through a better description of the chemical feedbacks in NOx and CO chemistry. This study aims to reproduce chemical composition distributions in the troposphere by combining information obtained from multiple satellite data sets. The emissions of CO and NOx, together with the 3D concentration fields of all forecasted chemical species in the global CTM CHASER, have been simultaneously optimized using the ensemble Kalman filter (EnKF) data assimilation technique and NO2, O3, CO, and HNO3 data obtained from OMI, TES, MOPITT, and MLS satellite measurements. The performance is evaluated against independent data from ozone sondes, aircraft measurements, and GOME-2 and SCIAMACHY satellite retrievals. Observing System Experiments (OSEs) have been carried out to quantify the relative importance of each data set in constraining the emissions and concentrations. We confirmed that the simultaneous data assimilation improved the agreement with these independent data sets. The combined analysis of multiple data sets by means of an advanced data assimilation system can provide a useful framework for air quality research.
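The EnKF analysis step at the heart of such a system reduces, in its simplest stochastic form, to a few lines. The sketch below uses a single scalar "emission" state and an identity observation operator, unlike the full emission-plus-concentration state vector used in the study (Python; all numbers are illustrative):

    # Minimal stochastic EnKF analysis step for a scalar state.
    import numpy as np

    rng = np.random.default_rng(9)
    N = 50
    ens = rng.normal(2.0, 0.5, N)              # prior ensemble of an emission scalar
    y_obs, r = 2.6, 0.2 ** 2                   # observation and its error variance

    hx = ens                                   # identity observation operator
    p_hh = np.var(hx, ddof=1)
    p_xh = np.cov(ens, hx, ddof=1)[0, 1]
    gain = p_xh / (p_hh + r)                   # Kalman gain from ensemble statistics
    perturbed = y_obs + rng.normal(0.0, np.sqrt(r), N)
    analysis = ens + gain * (perturbed - hx)
    print(ens.mean(), analysis.mean())         # mean pulled toward the observation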
Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.
Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen
2018-01-19
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single-input multiple-output (SIMO) technology, which can sharply reduce the coding and sampling times. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave so that it is spatiotemporally independent, for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity and are then synthesized to obtain the complete high-resolution image. For each imaging cell, the multi-resolution imaging method helps reduce the computational burden of the large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging of 3D targets in much less time and has great potential in applications such as security screening, nondestructive detection, and medical diagnosis.
Stator for a rotating electrical machine having multiple control windings
Shah, Manoj R.; Lewandowski, Chad R.
2001-07-17
A rotating electric machine is provided which includes multiple independent control windings for compensating for rotor imbalances and for levitating/centering the rotor. The multiple independent control windings are placed at different axial locations along the rotor to oppose forces created by imbalances at different axial locations along the rotor. The multiple control windings can also be used to levitate/center the rotor with a relatively small magnetic field per unit area since the rotor and/or the main power winding provides the bias field.
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1995-01-01
Particle Image Velocimetry provides a means of measuring the instantaneous 2-component velocity field across a planar region of a seeded flowfield. In this work, only two-camera, single-exposure images are considered, where both cameras have the same view of the illumination plane. Two competing techniques that yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple-image data: cross-correlation and particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. The correlation technique requires identification of the peak on the correlation plane corresponding to the average displacement of particles across the subregion. Noise on the images and particle dropout contribute to spurious peaks on the correlation plane, leading to misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work, fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak on the correlation plane, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements, and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures, in order to determine particle displacements and thus velocity. The advantage of this technique is the improved spatial resolution available from the particle tracking operation. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two-stage approach offers a velocimetric technique capable of measuring particle velocities with high spatial resolution over a broad range of seeding densities.
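The correlation stage is compact to demonstrate. The Python sketch below (a synthetic 32 × 32 interrogation window with a known shift; real data add noise, dropout, and sub-pixel peak fitting, which is where the fuzzy peak selection above comes in) computes the circular cross-correlation by FFT and reads the mean displacement off the peak:

    # FFT cross-correlation of one interrogation window from two
    # single-exposure frames; the peak gives the mean displacement.
    import numpy as np

    rng = np.random.default_rng(10)
    frame1 = rng.random((32, 32))
    frame2 = np.roll(frame1, (3, 5), axis=(0, 1))      # known shift

    f1 = frame1 - frame1.mean()
    f2 = frame2 - frame2.mean()
    corr = np.fft.ifft2(np.fft.fft2(f1).conj() * np.fft.fft2(f2)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = (p - 32 if p > 16 else p for p in peak)   # unwrap circular shift
    print(dy, dx)                                      # recovers (3, 5)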
Cloning-Independent and Counterselectable Markerless Mutagenesis System in Streptococcus mutans▿
Xie, Zhoujie; Okinaga, Toshinori; Qi, Fengxia; Zhang, Zhijun; Merritt, Justin
2011-01-01
Insertion duplication mutagenesis and allelic replacement mutagenesis are among the most commonly utilized approaches for targeted mutagenesis in bacteria. However, both techniques are limited by a variety of factors that can complicate mutant phenotypic studies. To circumvent these limitations, multiple markerless mutagenesis techniques have been developed that utilize either temperature-sensitive plasmids or counterselectable suicide vectors containing both positive- and negative-selection markers. For many species, these techniques are not especially useful due to difficulties of cloning with Escherichia coli and/or a lack of functional negative-selection markers. In this study, we describe the development of a novel approach for the creation of markerless mutations. This system employs a cloning-independent methodology and should be easily adaptable to a wide array of Gram-positive and Gram-negative bacterial species. The entire process of creating both the counterselection cassette and mutation constructs can be completed using overlapping PCR protocols, which allows extremely quick assembly and eliminates the requirement for either temperature-sensitive replicons or suicide vectors. As a proof of principle, we used Streptococcus mutans reference strain UA159 to create markerless in-frame deletions of 3 separate bacteriocin genes as well as triple mutants containing all 3 deletions. Using a panel of 5 separate wild-type S. mutans strains, we further demonstrated that the procedure is nearly 100% efficient at generating clones with the desired markerless mutation, which is a considerable improvement in yield compared to existing approaches. PMID:21948849
Imaging mass spectrometry data reduction: automated feature identification and extraction.
McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M
2010-12-01
Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
Faught, Austin M; Davidson, Scott E; Popple, Richard; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core-Houston (IROC-H) Quality Assurance Center (formerly the Radiological Physics Center) has reported varying levels of compliance from their anthropomorphic phantom auditing program. IROC-H studies have suggested that one source of disagreement between institution-submitted calculated doses and measurement is the accuracy of the institution's treatment planning system dose calculations and the heterogeneity corrections used. In order to audit this step of the radiation therapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Varian flattening filter free (FFF) 6 MV and FFF 10 MV therapeutic x-ray beams were commissioned based on central axis depth dose data from a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open-field measurements in a water tank for field sizes ranging from 3 × 3 cm² to 40 × 40 cm². The models were then benchmarked against IROC-H's anthropomorphic head and neck phantom and lung phantom measurements. Validation results, assessed with a ±2%/2 mm gamma criterion, showed average agreement of 99.9% and 99.0% for central axis depth dose data for the FFF 6 MV and FFF 10 MV models, respectively. Dose profile agreement using the same evaluation technique averaged 97.8% and 97.9% for the respective models. Phantom benchmarking comparisons were evaluated with a ±3%/2 mm gamma criterion, and agreement averaged 90.1% and 90.8% for the respective models. Multiple source models for Varian FFF 6 MV and FFF 10 MV beams have been developed, validated, and benchmarked for inclusion in an independent dose calculation quality assurance tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
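The gamma criterion used for these comparisons is itself a small algorithm. A hedged 1-D sketch follows (Python; the clinical evaluation runs on full 3-D dose grids and interpolates finely, which this kernel omits):

    # Hedged 1-D gamma-index kernel (3%/2 mm style criterion).
    import numpy as np

    def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.03, dta=2.0):
        # Fraction of reference points with gamma <= 1.
        norm = dose_ref.max()
        g = np.empty(len(x))
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            dist2 = ((x - xi) / dta) ** 2
            dose2 = ((dose_eval - di) / (dd * norm)) ** 2
            g[i] = np.sqrt(np.min(dist2 + dose2))
        return (g <= 1.0).mean()

    x = np.linspace(0.0, 100.0, 201)               # positions in mm
    ref = np.exp(-((x - 50.0) / 20.0) ** 2)        # reference profile
    ev = 1.01 * np.exp(-((x - 51.0) / 20.0) ** 2)  # slightly shifted and scaled
    print(gamma_pass_rate(x, ref, ev))             # near 1.0 -> good agreement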
del Rayo Rivas-Ortiz, Yazmín; Hernández-Herrera, Ricardo Jorge
2010-06-01
Assisted reproduction techniques have recently become more common, which increases multiple pregnancies and adverse perinatal outcomes. Some authors report increased mortality among the products of multiple pregnancies obtained by assisted reproduction techniques vs. those conceived spontaneously, although other authors found no significant difference. The objective was to evaluate the mortality rate of multiple pregnancies, comparing those obtained by assisted reproduction vs. spontaneous conception. This was a retrospective, observational, and comparative study. We included pregnant women with 3 or more products who were treated at the Unidad Médica de Alta Especialidad No. 23, IMSS, in Monterrey, NL (Mexico), between 2002 and 2008. We compared the number of complicated pregnancies and dead products obtained by an assisted reproduction technique vs. spontaneous conception. 68 multiple pregnancies were included. On average, spontaneously conceived fetuses had more weeks of gestation and higher birth weight than those achieved by assisted reproduction techniques (p = ns). 20.5% (14/68) of multiple pregnancies had one or more fatal events: 10/40 (25%) by assisted reproduction techniques vs. 4/28 (14%) of spontaneous multiple pregnancies (p = 0.22). 21/134 (16%) of the products conceived by assisted reproduction techniques and 6/88 (7%) of those conceived spontaneously (p < 0.03) died. 60% of all multiple pregnancies were obtained by an assisted reproduction technique, and 21% of the cases had one or more fatal events (11% more in pregnancies achieved by assisted reproduction techniques). 12% of the products of multiple pregnancies died (9% more in those obtained by an assisted reproduction technique).
Lang, Dean H; Sharkey, Neil A; Lionikas, Arimantas; Mack, Holly A; Larsson, Lars; Vogler, George P; Vandenbergh, David J; Blizard, David A; Stout, Joseph T; Stitt, Joseph P; McClearn, Gerald E
2005-05-01
The aim of this study was to compare three methods of adjusting skeletal data for body size and examine their use in QTL analyses. It was found that dividing skeletal phenotypes by body mass index induced erroneous QTL results. The preferred method of body size adjustment was multiple regression. Many skeletal studies have reported strong correlations between phenotypes for muscle, bone, and body size, and these correlations add to the difficulty in identifying genetic influence on skeletal traits that are not mediated through overall body size. Quantitative trait loci (QTL) identified for skeletal phenotypes often map to the same chromosome regions as QTLs for body size. The actions of a QTL identified as influencing BMD could therefore be mediated through the generalized actions of growth on body size or muscle mass. Three methods of adjusting skeletal phenotypes to body size were performed on morphologic, structural, and compositional measurements of the femur and tibia in 200-day-old C57BL/6J x DBA/2 (BXD) second generation (F(2)) mice (n = 400). A common method of removing the size effect has been through the use of ratios. This technique and two alternative techniques using simple and multiple regression were performed on muscle and skeletal data before QTL analyses, and the differences in QTL results were examined. The use of ratios to remove the size effect was shown to increase the size effect by inducing spurious correlations, thereby leading to inaccurate QTL results. Adjustments for body size using multiple regression eliminated these problems. Multiple regression should be used to remove the variance of co-factors related to skeletal phenotypes to allow for the study of genetic influence independent of correlated phenotypes. However, to better understand the genetic influence, adjusted and unadjusted skeletal QTL results should be compared. Additional insight can be gained by observing the difference in LOD score between the adjusted and nonadjusted phenotypes. Identifying QTLs that exert their effects on skeletal phenotypes through body size-related pathways as well as those having a more direct and independent influence on bone are equally important in deciphering the complex physiologic pathways responsible for the maintenance of bone health.
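The ratio problem is easy to demonstrate by simulation. In the Python sketch below (illustrative data, not the BXD phenotypes), a trait with no true size effect acquires a strong artificial correlation with size once divided by it, whereas regression residuals stay clean:

    # Ratio adjustment vs. regression adjustment for body size.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n = 400
    size = rng.normal(25.0, 4.0, n)                   # BMI-like body size index
    bone = rng.normal(10.0, 2.0, n)                   # trait with no true size effect

    print(np.corrcoef(bone, size)[0, 1])              # ~0: no real association
    print(np.corrcoef(bone / size, size)[0, 1])       # strongly negative: induced
    resid = sm.OLS(bone, sm.add_constant(size)).fit().resid
    print(np.corrcoef(resid, size)[0, 1])             # ~0: regression is clean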
Resource Management for Distributed Parallel Systems
NASA Technical Reports Server (NTRS)
Neuman, B. Clifford; Rao, Santosh
1993-01-01
Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.
Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi
2017-10-05
We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized, with the atoms in the reaction-center part fixed, before displacements of the reaction-center atoms. In MSM, the surrounding part is described as the weighted sum of multiple surrounding structures that are independently optimized. Geometric displacements of the reaction-center atoms are then performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or in an enzyme. In all three cases, MSM gave lower reaction energy profiles than the QM/MM-ONIOM-microiteration method over the entire reaction paths, with comparable computational costs. © 2017 Wiley Periodicals, Inc.
A default Bayesian hypothesis test for mediation.
Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan
2015-03-01
In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
NASA Astrophysics Data System (ADS)
Marrale, Maurizio; Collura, Giorgio; Gallo, Salvatore; Nici, Stefania; Tranchina, Luigi; Abbate, Boris Federico; Marineo, Sandra; Caracappa, Santo; d'Errico, Francesco
2017-04-01
This work focused on the analysis of the temporal diffusion of ferric ions through PVA-GTA gel dosimeters. PVA-GTA gel samples, partly exposed to 6 MV X-rays in order to create an initial steep gradient, were mapped using magnetic resonance imaging on a 7 T small-animal MRI scanner. Multiple images of the gels were acquired over several hours after irradiation and were analyzed to quantitatively extract the signal profile. The spatial resolution achieved is 200 μm, which makes this technique particularly suitable for the analysis of steep gradients of ferric ion concentration. The results obtained with PVA-GTA gels were compared with those obtained with agarose gels, a standard dosimetric gel formulation. The analysis showed that the diffusion process is much slower (by more than a factor of five) for PVA-GTA gels than for agarose ones. Furthermore, it is noteworthy that the diffusion coefficient obtained through MRI analysis is consistent with that obtained in a separate study (Marini et al., submitted for publication) using a completely independent method, spectrophotometry. This is a valuable result, highlighting that the good dosimetric features of this gel matrix not only can be reproduced but also can be measured through independent experimental techniques based on different physical principles.
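For readers unfamiliar with how a diffusion coefficient can be extracted from such profiles, the sketch below fits the standard one-dimensional step-gradient solution, c(x, t) = (c0/2) erfc((x - x0)/(2√(Dt))), to a simulated profile; the positions, time point, and D value are hypothetical, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def step_diffusion(x, D, c0, x0, t=3600.0):
    """1D solution for an initial concentration step at x0 after diffusion time t (s)."""
    return 0.5 * c0 * erfc((x - x0) / (2.0 * np.sqrt(D * t)))

# x: positions (cm) across the gradient; profile: MRI-derived signal (simulated here)
x = np.linspace(-0.5, 0.5, 50)
true_D = 5e-7                                  # cm^2/s, a plausible order for gels
profile = (step_diffusion(x, true_D, 1.0, 0.0)
           + np.random.default_rng(2).normal(0, 0.01, x.size))

# Fit D, c0, x0 while t stays fixed at the acquisition time
popt, _ = curve_fit(step_diffusion, x, profile, p0=[1e-6, 1.0, 0.0])
print("fitted D =", popt[0], "cm^2/s")
```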
Developmental Trampoline Activities for Individuals with Multiple Handicapping Conditions.
ERIC Educational Resources Information Center
Thomas, Bill
1979-01-01
The use of trampoline activities with multiple handicapped students is discussed. Management considerations in safety are noted, and developmental trampoline skills are listed beginning with bouncing for stimulation. Progression to limited independence and finally independent jumping is described. The position statement of the American Alliance…
Chen, Yi-Ting; Chen, Hsiao-Wei; Wu, Chun-Feng; Chu, Lichieh Julie; Chiang, Wei-Fang; Wu, Chih-Ching; Yu, Jau-Song; Tsai, Cheng-Han; Liang, Kung-Hao; Chang, Yu-Sun; Wu, Maureen; Ou Yang, Wei-Ting
2017-05-01
Multiple (selected) reaction monitoring (MRM/SRM) of peptides is a growing technology for target protein quantification because it is more robust, precise, accurate, high-throughput, and multiplex-capable than antibody-based techniques. The technique has been applied clinically to the large-scale quantification of multiple target proteins in different types of fluids. However, previous MRM-based studies have placed less focus on sample-preparation workflow and analytical performance in the precise quantification of proteins in saliva, a noninvasively sampled body fluid. In this study, we evaluated the analytical performance of a simple and robust multiple reaction monitoring (MRM)-based targeted proteomics approach incorporating liquid chromatography with mass spectrometry detection (LC-MRM/MS). This platform was used to quantitatively assess the biomarker potential of a group of 56 salivary proteins that have previously been associated with human cancers. To further enhance the development of this technology for assay of salivary samples, we optimized the workflow for salivary protein digestion and evaluated quantification performance, robustness and technical limitations in analyzing clinical samples. Using a clinically well-characterized cohort of two independent clinical sample sets (total n = 119), we quantitatively characterized these protein biomarker candidates in saliva specimens from controls and oral squamous cell carcinoma (OSCC) patients. The results clearly showed a significant elevation of most targeted proteins in saliva samples from OSCC patients compared with controls. Overall, this platform was capable of assaying the most highly multiplexed panel of salivary protein biomarkers, highlighting the clinical utility of MRM in oral cancer biomarker research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
Dort, Jonathan M; Trickey, Amber W; Kallies, Kara J; Joshi, Amit R T; Sidwell, Richard A; Jarman, Benjamin T
2015-01-01
This study evaluated characteristics of applicants selected for interview and ranked by independent general surgery residency programs, and assessed independent program application volumes, interview selection, rank list formation, and match success. Demographic and academic information was analyzed for 2014-2015 applicants. Applicant characteristics were compared by ranking status using univariate and multivariable statistical techniques. Characteristics independently associated with whether or not an applicant was ranked were identified using multivariable logistic regression modeling with backward stepwise variable selection and cluster-correlated robust variance estimates to account for correlations among individuals who applied to multiple programs. The Electronic Residency Application Service was used to obtain applicant data and program match outcomes at 33 independent surgery programs. All applicants selected to interview at the 33 participating independent general surgery residency programs were included in the study. Applicants were 60% male with a median age of 26 years. Birthplace was well distributed. Most applicants (73%) had ≥1 academic publication. The median United States Medical Licensing Exam (USMLE) Step 1 score was 228 (interquartile range: 218-240), and the median USMLE Step 2 clinical knowledge score was 241 (interquartile range: 231-250). Residency programs in some regions more often ranked applicants who attended medical school within the same region. On multivariable analysis, significant predictors of ranking by an independent residency program were USMLE scores, medical school region, and birth region. Independent programs received an average of 764 applications (range: 307-1704). On average, programs offered interviews to 12% of applicants, and 81% of interviewed applicants were ranked. Most programs (84%) matched at least 1 applicant ranked in their top 10. Participating independent programs attract a large volume of applicants and have high standards in the selection process. This information can be used by surgery residency applicants to gauge their candidacy at independent programs. Independent programs offer a select number of interviews, rank most applicants that they interview, and successfully match competitive applicants. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
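A minimal sketch of the statistical approach described, logistic regression with cluster-correlated robust variance estimates, using simulated data and hypothetical variable names rather than the study's ERAS data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
step1 = rng.normal(228, 15, n)              # USMLE Step 1 scores (simulated)
applicant_id = rng.integers(0, 200, n)      # same applicant may appear at several programs
logit_p = 0.05 * (step1 - 228)
ranked = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(step1)
# Cluster-robust variance accounts for applicants who applied to multiple programs
fit = sm.Logit(ranked.astype(float), X).fit(
    cov_type="cluster", cov_kwds={"groups": applicant_id}, disp=0)
print(fit.summary())
```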
Postoperative pain management in the postanesthesia care unit: an update
Luo, Jie; Min, Su
2017-01-01
Acute postoperative pain remains a major problem, resulting in multiple undesirable outcomes if inadequately controlled. Most surgical patients spend their immediate postoperative period in the postanesthesia care unit (PACU), where pain management is often unsatisfactory, requires improvement, and affects further recovery. Recent studies on postoperative pain management in the PACU were reviewed for advances in assessment and treatment. More objective pain assessments that are independent of patients' participation may be appropriate in the PACU, including photoplethysmography-derived parameters, the analgesia nociception index, skin conductance, and pupillometry, although further studies are needed to confirm their utility. Multimodal analgesia with different analgesics and techniques has been widely used. On the theoretical basis of preventing central sensitization, preventive analgesia is increasingly common. New opioids are being developed to minimize the adverse effects of traditional opioids. More intravenous nonopioid analgesics and adjuncts (such as dexmedetomidine and dexamethasone) are being introduced for their opioid-sparing effects. Current evidence suggests that regional analgesic techniques are effective in reducing pain and length of stay in the PACU. As available alternatives to epidural analgesia, perineural techniques and infiltrative techniques (including wound infiltration, transversus abdominis plane block, local infiltration analgesia, and intraperitoneal administration) have come to play a more important role owing to their effectiveness and safety. PMID:29180895
Integrative sparse principal component analysis of gene expression data.
Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge
2017-12-01
In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular is perhaps PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. In contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis, other multi-dataset techniques, and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.
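The core single-dataset SPCA step that iSPCA builds on can be sketched as a rank-one power iteration with soft-thresholding of the loadings; the group and contrasted penalties of iSPCA are not reproduced here, and the data are simulated:

```python
import numpy as np

def soft(z, lam):
    """Soft-thresholding operator induced by an L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_pc1(X, lam=0.1, iters=100):
    """First sparse principal component via alternating soft-thresholded updates."""
    X = X - X.mean(axis=0)
    v = np.linalg.svd(X, full_matrices=False)[2][0]   # initialize with ordinary PC1
    for _ in range(iters):
        u = X @ v
        u /= np.linalg.norm(u)
        v = soft(X.T @ u, lam)                        # L1 penalty zeroes small loadings
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
    return v

X = np.random.default_rng(4).normal(size=(40, 500))   # "small n, large p" like expression data
loadings = sparse_pc1(X, lam=2.0)
print("nonzero loadings:", np.count_nonzero(loadings))
```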
Overview of Sparse Graph for Multiple Access in Future Mobile Networks
NASA Astrophysics Data System (ADS)
Lei, Jing; Li, Baoguo; Li, Erbao; Gong, Zhenghui
2017-10-01
Multiple access via sparse graph, such as low density signature (LDS) and sparse code multiple access (SCMA), is a promising technique for future wireless communications. This survey presents an overview of developments in this burgeoning field, including transmitter structures, extrinsic information transfer (EXIT) chart analysis, and comparisons with existing multiple access techniques. The technique enables multiple access under overloaded conditions while achieving satisfactory performance. A message-passing algorithm is utilized for multi-user detection in the receiver, and structures of the sparse graph are illustrated in detail. Outlooks and challenges of this technique are also presented.
Structural damage diagnostics via wave propagation-based filtering techniques
NASA Astrophysics Data System (ADS)
Ayers, James T., III
Structural health monitoring (SHM) of aerospace components is a rapidly emerging field, due in part to commercial and military transport vehicles remaining in operation beyond their designed life cycles. Damage detection strategies are sought that provide real-time information on the structure's integrity. One approach that has shown promise in accurately identifying and quantifying structural defects is based on guided ultrasonic wave (GUW) inspections, where low amplitude attenuation properties allow for long-range and large-specimen evaluation. One drawback of GUWs is that they exhibit a complex multi-modal response, such that each frequency corresponds to at least two excited modes, and thus intelligent signal processing is required for even the simplest of structures. In addition, GUWs are dispersive: the wave velocity is a function of frequency, and the shape of the wave packet changes over the spatial domain, requiring sophisticated detection algorithms. Moreover, existing damage quantification measures are typically formulated as a comparison of the damaged to the undamaged response, which has proven to be highly sensitive to changes in environment, and therefore often unreliable. In response to these challenges inherent to GUW inspections, this research develops techniques to locate and estimate the severity of the damage. Specifically, a phase gradient based localization algorithm is introduced to identify the defect position independent of excitation frequency and damage size. Mode separation through the filtering technique is central to isolating and extracting single-mode components, such as reflected, converted, and transmitted modes that may arise from the incident wave impacting a damage site. Spatially integrated single and multiple component mode coefficients are also formulated with the intent to better characterize wave reflections and conversions and to increase the signal-to-noise ratios. The techniques are applied to damaged isotropic finite element plate models and experimental data obtained from scanning laser Doppler vibrometry tests. Numerical and experimental parametric studies are conducted, and the current strengths and weaknesses of the proposed approaches are discussed. In particular, limitations to the damage profiling characterization are shown for low ultrasonic frequency regimes, whereas the multiple component mode conversion coefficients provide excellent noise mitigation. Multiple component estimation relies on an experimental technique developed for the estimation of Lamb wave polarization using a 1D laser vibrometer. Lastly, suggestions are made for applying the techniques to more structurally complex geometries.
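As a simplified, nondispersive analogue of the phase-gradient idea, the sketch below estimates a propagation delay from the slope of the cross-spectral phase between an excitation and a received signal; the signals and parameters are hypothetical, not the dissertation's data:

```python
import numpy as np

fs = 1e6                                     # sampling rate (Hz), hypothetical
t = np.arange(0, 2e-3, 1 / fs)
f0 = 50e3                                    # tone-burst center frequency (Hz)
excitation = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 2e-4) / 1e-4) ** 2)
delay_true = 1.2e-4                          # true propagation delay (s)
received = np.interp(t - delay_true, t, excitation, left=0.0)

# Phase of the cross-spectrum; its slope versus angular frequency gives the delay
E, R = np.fft.rfft(excitation), np.fft.rfft(received)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 30e3) & (freqs < 70e3)       # fit only where there is signal energy
phase = np.unwrap(np.angle(R[band] * np.conj(E[band])))
slope = np.polyfit(2 * np.pi * freqs[band], phase, 1)[0]
est_delay = -slope                           # phase = -omega * tau for a pure delay
print("estimated delay (s):", est_delay)     # distance = group velocity * delay
```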
Tsonev, Latchezar I; Hirsh, Allen G
2016-10-14
We have previously described a liquid chromatographic (LC) method for uncoupling controlled, wide-range pH gradients and simultaneous controlled gradients of a non-buffering solute on ion exchange resins (Hirsh and Tsonev, 2012) [1]. Here we report the application of this two-dimensional LC technique to the problem of resolving Human Transferrin (HT) isoforms. This important iron-transporting protein should theoretically occur in several thousand glycoforms, but only about a dozen have been reported. Using dual simultaneous independent gradients (DSIGs) of acetonitrile (ACN) and pH on a mixed-bed stationary phase (SP) consisting of a mixture of an anion exchange resin and a reversed phase (RP) resin, we partially resolve about 60 isoforms. These are likely to be partially refolded glycoforms generated by interaction of HT with the highly hydrophobic RP SP, as well as distinct folded glycoforms. This study should therefore have interesting implications both for glycoform separation and for the study of protein folding. Copyright © 2016 Elsevier B.V. All rights reserved.
Rouger, Vincent; Bordet, Guillaume; Couillault, Carole; Monneret, Serge; Mailfert, Sébastien; Ewbank, Jonathan J; Pujol, Nathalie; Marguet, Didier
2014-05-20
To investigate the early stages of cell-cell interactions occurring between living biological samples, imaging methods with appropriate spatiotemporal resolution are required. Among the techniques currently available, those based on optical trapping are promising. Methods to image trapped objects, however, in general suffer from a lack of three-dimensional resolution, due to technical constraints. Here, we have developed an original setup comprising two independent modules: holographic optical tweezers, which offer a versatile and precise way to move multiple objects simultaneously but independently, and a confocal microscope that provides fast three-dimensional image acquisition. The optical decoupling of these two modules through the same objective gives users the possibility to easily investigate very early steps in biological interactions. We illustrate the potential of this setup with an analysis of infection by the fungus Drechmeria coniospora of different developmental stages of Caenorhabditis elegans. This has allowed us to identify specific areas on the nematode's surface where fungal spores adhere preferentially. We also quantified this adhesion process for different mutant nematode strains, and thereby derive insights into the host factors that mediate fungal spore adhesion. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Proceedings of the Mobile Satellite System Architectures and Multiple Access Techniques Workshop
NASA Technical Reports Server (NTRS)
Dessouky, Khaled
1989-01-01
The Mobile Satellite System Architectures and Multiple Access Techniques Workshop served as a forum for the debate of system and network architecture issues. Particular emphasis was on those issues relating to the choice of multiple access technique(s) for the Mobile Satellite Service (MSS). These proceedings contain articles that expand upon the 12 presentations given in the workshop. Contrasting views on Frequency Division Multiple Access (FDMA), Code Division Multiple Access (CDMA), and Time Division Multiple Access (TDMA)-based architectures are presented, and system issues relating to signaling, spacecraft design, and network management constraints are addressed. An overview article that summarizes the issues raised in the numerous discussion periods of the workshop is also included.
Sherwood, J.M.
1986-01-01
Methods are presented for estimating peak discharges, flood volumes, and hydrograph shapes of small (less than 5 sq mi) urban streams in Ohio. Examples of how to use the various regression equations and estimating techniques are also presented. Multiple-regression equations were developed for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The significant independent variables affecting peak discharge are drainage area, main-channel slope, average basin-elevation index, and basin-development factor. Standard errors of regression and prediction for the peak discharge equations range from +/-37% to +/-41%. An equation was also developed to estimate the flood volume of a given peak discharge. Peak discharge, drainage area, main-channel slope, and basin-development factor were found to be the significant independent variables affecting flood volumes for given peak discharges. The standard error of regression for the volume equation is +/-52%. A technique is described for estimating the shape of a runoff hydrograph by applying a specific peak discharge and the estimated lagtime to a dimensionless hydrograph. An equation for estimating the lagtime of a basin was developed. Two variables, main-channel length divided by the square root of the main-channel slope, and basin-development factor, have a significant effect on basin lagtime. The standard error of regression for the lagtime equation is +/-48%. The data base for the study was established by collecting rainfall-runoff data at 30 basins distributed throughout several metropolitan areas of Ohio. Five to eight years of data were collected at a 5-min record interval. The USGS rainfall-runoff model A634 was calibrated for each site. The calibrated models were used in conjunction with long-term rainfall records to generate a long-term streamflow record for each site. Each annual peak-discharge record was fitted to a Log-Pearson Type III frequency curve. Multiple-regression techniques were then used to analyze the peak discharge data as a function of the basin characteristics of the 30 sites. (Author's abstract)
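Regional regression equations of this kind typically take a power-law form, Q_T = a · A^b · S^c · d^BDF, which becomes linear after a log transform; the following sketch with simulated basins illustrates the fit (coefficients and data are hypothetical, not the Ohio study's):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30                                           # 30 gaged basins, as in the study design
area = rng.uniform(0.5, 5.0, n)                  # drainage area (sq mi)
channel_slope = rng.uniform(10, 100, n)          # main-channel slope (ft/mi)
bdf = rng.integers(0, 13, n)                     # basin-development factor (0-12)
q100 = 80 * area**0.7 * channel_slope**0.3 * 1.05**bdf * rng.lognormal(0, 0.3, n)

# Power-law regression Q = a * A^b * S^c * d^BDF is linear in log space
X = np.column_stack([np.ones(n), np.log(area), np.log(channel_slope), bdf])
beta, *_ = np.linalg.lstsq(X, np.log(q100), rcond=None)
print("log-space coefficients:", beta)
residual_se = np.std(np.log(q100) - X @ beta, ddof=X.shape[1])
print("standard error (log units):", residual_se)  # analogous to the quoted +/-% errors
```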
Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong
2006-07-01
To compare a rapid enzyme substrate technique with the multiple-tube fermentation technique for detecting coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false-positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the false-positive rates of the two methods show no statistically significant difference. The enzyme substrate technique can therefore be used as a standard method for evaluating the microbiological safety of water.
Three-dimensional hybrid grid generation using advancing front techniques
NASA Technical Reports Server (NTRS)
Steinbrenner, John P.; Noack, Ralph W.
1995-01-01
A new 3-dimensional hybrid grid generation technique has been developed, based on ideas of advancing fronts for both structured and unstructured grids. In this approach, structured grids are first generated independently around individual components of the geometry. Fronts are initialized on these structured grids and advanced outward so that new cells are extracted directly from the structured grids. Employing typical advancing front techniques, cells are rejected if they intersect the existing front or fail other criteria. When no more viable structured cells exist, further cells are advanced in an unstructured manner to close off the overall domain, resulting in a grid of 'hybrid' form. There are two primary advantages to the hybrid formulation. First, generating blocks with limited regard to topology eliminates the bottleneck encountered when a multiple-block system is used to fully encapsulate a domain. Individual blocks may be generated free of external constraints, which significantly reduces the generation time. Secondly, grid points near the body (presumably with high aspect ratio) still maintain a structured (non-triangular or tetrahedral) character, thereby maximizing grid quality and solution accuracy near the surface.
Using foreground/background analysis to determine leaf and canopy chemistry
NASA Technical Reports Server (NTRS)
Pinzon, J. E.; Ustin, S. L.; Hart, Q. J.; Jacquemoud, S.; Smith, M. O.
1995-01-01
Spectral Mixture Analysis (SMA) has become a well-established procedure for analyzing imaging spectrometry data; however, the technique is relatively insensitive to minor sources of spectral variation (e.g., discriminating stressed from unstressed vegetation and variations in canopy chemistry). Other statistical approaches have been tried, e.g., stepwise multiple linear regression (SMLR) analysis to predict canopy chemistry. Grossman et al. reported that SMLR is sensitive to measurement error and that the prediction of minor chemical components is not independent of patterns observed in more dominant spectral components like water. Further, they observed that the relationships were strongly dependent on the mode of expressing reflectance (R, -log R) and whether chemistry was expressed on a weight (g/g) or area (g/sq m) basis. Thus, alternative multivariate techniques need to be examined. Smith et al. reported a revised SMA that they termed Foreground/Background Analysis (FBA), which permits directing the analysis along any axis of variance by identifying vectors through the n-dimensional spectral volume that are orthonormal to each other. Here, we report an application of the FBA technique for the detection of canopy chemistry using a modified form of the analysis.
Techniques for estimating flood-depth frequency relations for streams in West Virginia
Wiley, J.B.
1987-01-01
Multiple regression analyses are applied to data from 119 U.S. Geological Survey streamflow stations to develop equations that estimate baseline depth (depth of 50% flow duration) and 100-yr flood depth on unregulated streams in West Virginia. Drainage basin characteristics determined from the 100-yr flood depth analysis were used to develop 2-, 10-, 25-, 50-, and 500-yr regional flood depth equations. Two regions with distinct baseline depth equations and three regions with distinct flood depth equations are delineated. Drainage area is the most significant independent variable found in the central and northern areas of the state, where mean basin elevation is also significant. The equations are applicable to any unregulated site in West Virginia where values of the independent variables are within the range evaluated for the region. Examples of inapplicable sites include those in reaches below dams, within and directly upstream from bridge or culvert constrictions, within encroached reaches, in karst areas, and where streams flow through lakes or swamps. (Author's abstract)
PECAN: Library Free Peptide Detection for Data-Independent Acquisition Tandem Mass Spectrometry Data
Ting, Ying S.; Egertson, Jarrett D.; Bollinger, James G.; Searle, Brian C.; Payne, Samuel H.; Noble, William Stafford; MacCoss, Michael J.
2017-01-01
In mass spectrometry-based shotgun proteomics, data-independent acquisition (DIA) is an emerging technique for unbiased and reproducible measurement of protein mixtures. Without targeting a specific precursor ion, DIA MS/MS spectra are often highly multiplexed, containing product ions from multiple co-fragmenting precursors. Thus, detecting peptides directly from DIA data is challenging; most DIA data analyses require spectral libraries. Here we present a new library-free, peptide-centric tool, PECAN, that detects peptides directly from DIA data. PECAN reports evidence of detection based on product ion scoring, enabling detection of low-abundance analytes with poor precursor ion signal. We benchmarked PECAN for chromatographic peak picking accuracy and peptide detection capability. We further validated PECAN detection with data-dependent acquisition and targeted analyses. Last, we used PECAN to build a library from DIA data and to query sequence variants. Together, these results show that PECAN detects peptides robustly and accurately from DIA data without using a library. PMID:28783153
Theodorsson-Norheim, E
1986-08-01
Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons, or the adjustment of the p values according to Bonferroni, would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' that may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test, followed by multiple comparisons between the groups, on practically any computer programmable in BASIC.
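A modern equivalent of the program's workflow, sketched in Python with SciPy on hypothetical data: run the Kruskal-Wallis test, then follow up with pairwise comparisons at a Bonferroni-adjusted alpha rather than multiple t tests at a fixed p level:

```python
from itertools import combinations
from scipy import stats

# Three independent samples (hypothetical biomedical measurements)
groups = {
    "control":    [4.1, 5.2, 3.8, 4.9, 5.0],
    "treatmentA": [6.3, 5.9, 7.1, 6.8, 6.0],
    "treatmentB": [5.1, 4.8, 5.5, 6.1, 5.3],
}

H, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

# Follow-up pairwise comparisons with a Bonferroni-adjusted alpha
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    _, p_ab = stats.mannwhitneyu(groups[a], groups[b])
    verdict = "significant" if p_ab < alpha else "ns"
    print(f"{a} vs {b}: p = {p_ab:.4f} ({verdict} at Bonferroni alpha)")
```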
Analysis of multiple tank car releases in train accidents.
Liu, Xiang; Liu, Chang; Hong, Yili
2017-10-01
There are annually over two million carloads of hazardous materials transported by rail in the United States. American railroads use large blocks of tank cars to transport petroleum crude oil and other flammable liquids from production to consumption sites. Unlike roadway transport of hazardous materials, a train accident can potentially result in the derailment and release of multiple tank cars, which may have significant consequences. The prior literature predominantly assumes that the occurrence of multiple tank car releases in a train accident is a series of independent Bernoulli trials, and thus uses the binomial distribution to estimate the total number of tank car releases given the number of tank cars derailing or damaged. This paper shows that the traditional binomial model can misestimate multiple tank car release probability by orders of magnitude in certain circumstances, thereby significantly affecting railroad safety and risk analysis. To bridge this knowledge gap, this paper proposes a novel, alternative Correlated Binomial (CB) model that accounts for the possible correlations of multiple tank car releases in the same train. We test three distinct correlation structures in the CB model, and find that they all outperform the conventional binomial model based on empirical tank car accident data. The analysis shows that considering tank car release correlations results in a significantly improved fit to the empirical data. Consequently, it is prudent to consider alternative modeling techniques when analyzing the probability of multiple tank car releases in railroad accidents. Copyright © 2017 Elsevier Ltd. All rights reserved.
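The paper's three correlation structures are not specified in this abstract; as one illustration of how exchangeable correlation changes the picture relative to the independent-Bernoulli assumption, the sketch below compares a plain binomial with a beta-binomial (a shared per-train random effect):

```python
import numpy as np

rng = np.random.default_rng(6)
n_derailed, p, rho = 15, 0.3, 0.2           # derailed cars, release prob., correlation

def sample_releases(size, correlated):
    if not correlated:                       # classic binomial: independent releases
        return rng.binomial(n_derailed, p, size)
    # Beta-binomial: a shared per-train random effect induces exchangeable correlation
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    p_train = rng.beta(a, b, size)           # train-level release probability
    return rng.binomial(n_derailed, p_train)

for corr in (False, True):
    x = sample_releases(100_000, corr)
    print(f"correlated={corr}: mean={x.mean():.2f}, var={x.var():.2f}")
# Correlation leaves the mean unchanged but inflates the variance, so extreme
# multi-car release counts are far more likely than the binomial model implies.
```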
Cross, Russell; Olivieri, Laura; O'Brien, Kendall; Kellman, Peter; Xue, Hui; Hansen, Michael
2016-02-25
Traditional cine imaging for cardiac functional assessment requires breath-holding, which can be problematic in some situations. Free-breathing techniques have relied on multiple averages or real-time imaging, producing images that can be spatially and/or temporally blurred. To overcome this, methods have been developed to acquire real-time images over multiple cardiac cycles, which are subsequently motion corrected and reformatted to yield a single image series displaying one cardiac cycle with high temporal and spatial resolution. Application of these algorithms has required significant additional reconstruction time. The use of distributed computing was recently proposed as a way to improve clinical workflow with such algorithms. In this study, we have deployed a distributed computing version of motion corrected re-binning reconstruction for free-breathing evaluation of cardiac function. Twenty five patients and 25 volunteers underwent cardiovascular magnetic resonance (CMR) for evaluation of left ventricular end-systolic volume (ESV), end-diastolic volume (EDV), and end-diastolic mass. Measurements using motion corrected re-binning were compared to those using breath-held SSFP and to free-breathing SSFP with multiple averages, and were performed by two independent observers. Pearson correlation coefficients and Bland-Altman plots tested agreement across techniques. Concordance correlation coefficient and Bland-Altman analysis tested inter-observer variability. Total scan plus reconstruction times were tested for significant differences using paired t-test. Measured volumes and mass obtained by motion corrected re-binning and by averaged free-breathing SSFP compared favorably to those obtained by breath-held SSFP (r = 0.9863/0.9813 for EDV, 0.9550/0.9685 for ESV, 0.9952/0.9771 for mass). Inter-observer variability was good with concordance correlation coefficients between observers across all acquisition types suggesting substantial agreement. Both motion corrected re-binning and averaged free-breathing SSFP acquisition and reconstruction times were shorter than breath-held SSFP techniques (p < 0.0001). On average, motion corrected re-binning required 3 min less than breath-held SSFP imaging, a 37% reduction in acquisition and reconstruction time. The motion corrected re-binning image reconstruction technique provides robust cardiac imaging that can be used for quantification that compares favorably to breath-held SSFP as well as multiple average free-breathing SSFP, but can be obtained in a fraction of the time when using cloud-based distributed computing reconstruction.
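For readers unfamiliar with the agreement statistics used here, a minimal sketch of a Pearson correlation and Bland-Altman computation on simulated paired measurements (hypothetical values, not the study's):

```python
import numpy as np

# Paired EDV measurements (mL) by two techniques on the same subjects (simulated)
rng = np.random.default_rng(7)
breath_held = rng.normal(120, 25, 50)
motion_corrected = breath_held + rng.normal(1.0, 4.0, 50)

diff = motion_corrected - breath_held
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)               # 95% limits of agreement
r = np.corrcoef(breath_held, motion_corrected)[0, 1]
print(f"Pearson r = {r:.4f}")
print(f"Bland-Altman bias = {bias:.1f} mL, "
      f"limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}] mL")
```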
LEAP: An Innovative Direction Dependent Ionospheric Calibration Scheme for Low Frequency Arrays
NASA Astrophysics Data System (ADS)
Rioja, María J.; Dodson, Richard; Franzen, Thomas M. O.
2018-05-01
The ambitious scientific goals of the SKA require a matching capability for calibration of atmospheric propagation errors, which contaminate the observed signals. We demonstrate a scheme for correcting the direction-dependent ionospheric and instrumental phase effects at the low frequencies and with the wide fields of view planned for SKA-Low. It leverages bandwidth smearing to filter out signals from off-axis directions, allowing the measurement of the direction-dependent antenna-based gains in the visibility domain; by doing this towards multiple directions it is possible to calibrate across wide fields of view. This strategy removes the need for a global sky model, so all directions are independent. We use MWA results at 88 and 154 MHz under various weather conditions to characterise the performance and applicability of the technique. We conclude that this method is suitable for measuring and correcting temporal fluctuations and direction-dependent spatial ionospheric phase distortions on a wide range of scales: both larger and smaller than the array size. The latter are the most intractable and pose a major challenge for future instruments. Moreover, this scheme is an embarrassingly parallel process, as multiple directions can be processed independently and simultaneously. This is an important consideration for the SKA, where the current planned architecture is one of compute islands with limited interconnects. The current implementation of the algorithm and on-going developments are discussed.
Validating a biometric authentication system: sample size requirements.
Dass, Sarat C; Zhu, Yongfang; Jain, Anil K
2006-12-01
Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects and, for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analyses that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands, as well as the required number of biometric samples, using a fingerprint matching system applied to samples collected from a small population.
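The copula-based machinery is beyond a short example, but the sketch below shows a simpler subject-level bootstrap, which likewise respects correlated acquisitions by resampling whole subjects rather than individual samples when building ROC confidence bands; all data are simulated:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(8)
n_subjects, k = 100, 4                      # k correlated acquisitions per subject
subject_effect = rng.normal(0, 1, n_subjects)
scores, labels, subjects = [], [], []
for s in range(n_subjects):
    genuine = s % 2 == 0
    mu = 2.0 if genuine else 0.0
    scores += list(mu + subject_effect[s] + rng.normal(0, 1, k))  # within-subject correlation
    labels += [int(genuine)] * k
    subjects += [s] * k
scores, labels, subjects = map(np.array, (scores, labels, subjects))

# Bootstrap whole subjects (not individual acquisitions) to respect the correlation
fpr_grid = np.linspace(0, 1, 101)
tprs = []
for _ in range(500):
    pick = rng.choice(n_subjects, n_subjects, replace=True)
    idx = np.concatenate([np.where(subjects == s)[0] for s in pick])
    fpr, tpr, _ = roc_curve(labels[idx], scores[idx])
    tprs.append(np.interp(fpr_grid, fpr, tpr))
lo, hi = np.percentile(tprs, [2.5, 97.5], axis=0)
print("95% ROC band width at FPR = 0.1:", hi[10] - lo[10])
```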
Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.
ERIC Educational Resources Information Center
Kromrey, Jeffrey D.; Hines, Constance V.
1995-01-01
The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…
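To make the notion of shrinkage concrete, the sketch below contrasts the sample R², a formula-based adjusted R² (a Wherry-type estimate), and a split-sample cross-validation estimate on simulated data; the study's normalized jackknife and bootstrap estimators are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 60, 8                                  # sample size and number of predictors
X = rng.normal(size=(n, k))
y = X[:, 0] * 0.5 + rng.normal(size=n)        # only one predictor truly matters

Xc = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
r2 = 1 - np.sum((y - Xc @ beta) ** 2) / np.sum((y - y.mean()) ** 2)

# Wherry-type adjusted R^2: a classical formula-based shrinkage estimate
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Empirical estimate: fit on one half, evaluate predictions on the other half
half = n // 2
beta_tr, *_ = np.linalg.lstsq(Xc[:half], y[:half], rcond=None)
r2_cv = np.corrcoef(Xc[half:] @ beta_tr, y[half:])[0, 1] ** 2
print(f"sample R^2={r2:.3f}, adjusted={r2_adj:.3f}, cross-validated={r2_cv:.3f}")
```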
Inflight Radiometric Calibration of New Horizons' Multispectral Visible Imaging Camera (MVIC)
NASA Technical Reports Server (NTRS)
Howett, C. J. A.; Parker, A. H.; Olkin, C. B.; Reuter, D. C.; Ennico, K.; Grundy, W. M.; Graps, A. L.; Harrison, K. P.; Throop, H. B.; Buie, M. W.;
2016-01-01
We discuss two semi-independent calibration techniques used to determine the inflight radiometric calibration for the New Horizons Multispectral Visible Imaging Camera (MVIC). The first calibration technique compares the measured number of counts (DN) observed from a number of well-calibrated stars to those predicted using the component-level calibration. The ratio of these values provides a multiplicative factor that allows a conversion from the preflight calibration to the more accurate inflight one, for each detector. The second calibration technique is a channel-wise relative radiometric calibration for MVIC's blue, near-infrared, and methane color channels using Hubble and New Horizons observations of Charon and scaling from the red channel stellar calibration. Both calibration techniques produce very similar results (better than 7% agreement), providing strong validation for the techniques used. Since the stellar calibration described here can be performed without a color target in the field of view and covers all of MVIC's detectors, this calibration was used to provide the radiometric keyword values delivered by the New Horizons project to the Planetary Data System (PDS). These keyword values allow each observation to be converted from counts to physical units; a description of how these keyword values were generated is included. Finally, mitigation techniques adopted for the gain drift observed in the near-infrared detector and one of the panchromatic framing cameras are also discussed.
A Modal Approach to Compact MIMO Antenna Design
NASA Astrophysics Data System (ADS)
Yang, Binbin
MIMO (Multiple-Input Multiple-Output) technology offers new possibilities for wireless communication through transmission over multiple spatial channels, and enables linear increases in spectral efficiency as the number of transmitting and receiving antennas increases. However, the physical implementation of such systems in compact devices encounters many physical constraints, mainly from the design of the multi-antennas. First, an antenna's bandwidth decreases dramatically as its electrical size is reduced, a fact known as the antenna Q limit; secondly, multiple antennas closely spaced tend to couple with each other, undermining MIMO performance. Though different MIMO antenna designs have been proposed in the literature, there is still a lack of a systematic design methodology and knowledge of performance limits. In this dissertation, we employ characteristic mode theory (CMT) as a powerful tool for MIMO antenna analysis and design. CMT allows us to examine each physical mode of the antenna aperture and to access its many physical parameters without even exciting the antenna. For the first time, we propose efficient circuit models for MIMO antennas of arbitrary geometry using this modal decomposition technique. These circuit models demonstrate the powerful physical insight of CMT for MIMO antenna modeling, and simplify the MIMO antenna design problem to just the design of specific antenna structural modes and a modal feed network, making possible the separate design of antenna aperture and feeds. We therefore develop a feed-independent shape synthesis technique for optimization of broadband multi-mode apertures. Combining the shape synthesis and circuit modeling techniques for MIMO antennas, we propose a shape-first, feed-next design methodology for MIMO antennas, and design and fabricate two planar MIMO antennas, each occupying an aperture much smaller than the regular size of λ/2 × λ/2. Facilitated by the newly developed source formulation for antenna stored energy and recently reported work on antenna Q factor minimization, we extend the minimum Q limit to antennas of arbitrary geometry, and show that, given an antenna aperture, any antenna design based on its substructure will result in minimum Q factors larger than or equal to that of the complete structure. This limit is much tighter than Chu's limit based on spherical modes, and applies to antennas of arbitrary geometry. Finally, considering the almost inevitable presence of mutual coupling effects within compact multiport antennas, we develop new decoupling networks (DN) and decoupling network synthesis techniques. An information-theoretic metric, the information mismatch loss (Γinfo), is defined for DN characterization. Based on this metric, the optimization of decoupling networks for broadband system performance is conducted, which demonstrates the limitation of single-frequency decoupling techniques and room for improvement.
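The opening claim, that spectral efficiency grows roughly linearly with the number of antennas, can be checked with the standard ergodic-capacity formula C = log2 det(I + (SNR/Nt) H Hᴴ); the following sketch is a textbook illustration, not part of the dissertation:

```python
import numpy as np

rng = np.random.default_rng(10)

def capacity_bits(n_antennas, snr_db=10, trials=2000):
    """Ergodic capacity of an n x n i.i.d. Rayleigh MIMO channel, equal power per antenna."""
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        H = (rng.normal(size=(n_antennas, n_antennas))
             + 1j * rng.normal(size=(n_antennas, n_antennas))) / np.sqrt(2)
        M = np.eye(n_antennas) + (snr / n_antennas) * (H @ H.conj().T)
        caps.append(np.log2(np.linalg.det(M).real))  # det of a PD matrix is real positive
    return np.mean(caps)

for n in (1, 2, 4):
    print(f"{n}x{n}: {capacity_bits(n):.1f} bits/s/Hz")  # grows roughly linearly with n
```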
Long-term follow-up results of umbilical hernia repair.
Venclauskas, Linas; Jokubauskas, Mantas; Zilinskas, Justas; Zviniene, Kristina; Kiudelis, Mindaugas
2017-12-01
Multiple suture techniques and various mesh repairs are used in open or laparoscopic umbilical hernia (UH) surgery. To compare long-term follow-up results of UH repair in different hernia surgery groups and to identify risk factors for UH recurrence. A retrospective analysis of 216 patients who underwent elective surgery for UH during a 10-year period was performed. The patients were divided into three groups according to surgery technique (suture, mesh and laparoscopic repair). Early and long-term follow-up results including hospital stay, postoperative general and wound complications, recurrence rate and postoperative patient complaints were reviewed. Risk factors for recurrence were also analyzed. One hundred and forty-six patients were operated on using suture repair, 52 using open mesh and 18 using laparoscopic repair technique. 77.8% of patients underwent long-term follow-up. The postoperative wound complication rate and long-term postoperative complaints were significantly higher in the open mesh repair group. The overall hernia recurrence rate was 13.1%. Only 2 (1.7%) patients with small hernias (< 2 cm) had a recurrence in the suture repair group. Logistic regression analysis showed that body mass index (BMI) > 30 kg/m², diabetes and wound infection were independent risk factors for umbilical hernia recurrence. The overall umbilical hernia recurrence rate was 13.1%. Body mass index > 30 kg/m², diabetes and wound infection were independent risk factors for UH recurrence. According to our study results, laparoscopic medium and large umbilical hernia repair has slight advantages over open mesh repair concerning early postoperative complications, long-term postoperative pain and recurrence.
NASA Technical Reports Server (NTRS)
1977-01-01
Multiple access techniques (FDMA, CDMA, TDMA) for the mobile user are discussed, and an attempt is made to identify the current best technique. Traffic loading is considered, as well as voice and data modulation and spacecraft and system design. Emphasis is placed on developing mobile terminal cost estimates for the selected design. In addition, design examples are presented for the alternative multiple access techniques in order to compare them with the selected technique.
Recombinational Cloning Using Gateway and In-Fusion Cloning Schemes
Throop, Andrea L.; LaBaer, Joshua
2015-01-01
The comprehensive study of protein structure and function, or proteomics, depends on the availability of full-length cDNAs in species-specific expression vectors and subsequent functional analysis of the expressed protein. Recombinational cloning is a universal cloning technique based on site-specific recombination that is independent of the insert DNA sequence of interest, which differentiates this method from classical restriction enzyme-based cloning methods. Recombinational cloning enables rapid and efficient parallel transfer of DNA inserts into multiple expression systems. This unit summarizes strategies for generating expression-ready clones using the most popular recombinational cloning technologies, including the commercially available Gateway® (Life Technologies) and In-Fusion® (Clontech) cloning technologies. PMID:25827088
Collective cell migration: a physics perspective
NASA Astrophysics Data System (ADS)
Hakim, Vincent; Silberzan, Pascal
2017-07-01
Cells have traditionally been viewed either as independently moving entities or as somewhat static parts of tissues. However, it is now clear that in many cases, multiple cells coordinate their motions and move as collective entities. Well-studied examples include developmental events, as well as physiological and pathological situations. Different ex vivo model systems have also been investigated. Several recent advances have taken place at the interface between biology and physics, and have benefited from progress in imaging and microscopy, from the use of microfabrication techniques, and from the introduction of quantitative tools and models. We review these interesting developments in quantitative cell biology, which also provide rich examples of collective out-of-equilibrium motion.
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
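A toy stand-in for this kind of Monte Carlo simulation, with entirely hypothetical distributions and load model, shows the pattern of perturbing multiple independent parameters simultaneously and reading off the output distributions:

```python
import numpy as np

rng = np.random.default_rng(11)
n_trials = 100_000

# Simultaneously perturb multiple independent environmental and strength parameters
wave_height = rng.weibull(2.0, n_trials) * 1.5            # m, hypothetical distribution
wind_speed = rng.normal(8.0, 3.0, n_trials).clip(min=0)   # m/s
strength = rng.normal(100.0, 10.0, n_trials)              # component strength (kN)

impact_load = 20.0 + 6.0 * wave_height + 2.0 * wind_speed  # toy load model (kN)
p_damage = np.mean(impact_load > strength)                 # relative frequency of failure
print(f"estimated damage probability: {p_damage:.4f}")
print("load percentiles (kN):", np.percentile(impact_load, [50, 90, 99]).round(1))
```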
Use of Multiscale Entropy to Facilitate Artifact Detection in Electroencephalographic Signals
Mariani, Sara; Borges, Ana F. T.; Henriques, Teresa; Goldberger, Ary L.; Costa, Madalena D.
2016-01-01
Electroencephalographic (EEG) signals present a myriad of challenges to analysis, beginning with the detection of artifacts. Prior approaches to noise detection have utilized multiple techniques, including visual methods, independent component analysis, and wavelets. However, no single method is broadly accepted, inviting alternative ways to address this problem. Here, we introduce a novel approach based on a statistical physics method, multiscale entropy (MSE) analysis, which quantifies the complexity of a signal. We postulate that noise-corrupted EEG signals have lower information content and, therefore, reduced complexity compared with their noise-free counterparts. We test the new method on an open-access database of EEG signals with and without added artifacts due to electrode motion. PMID:26738116
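A compact sketch of the MSE computation, coarse-graining followed by sample entropy at each scale, is given below; this is a simplified illustration on synthetic signals, not the paper's implementation or its EEG data:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.15):
    """SampEn: negative log of the conditional probability that sequences matching
    for m points (within tolerance r) also match for m + 1 points."""
    r = r_frac * np.std(x)
    def match_pairs(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2   # unordered pairs, no self-matches
    B, A = match_pairs(m), match_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6)):
    # Coarse-grain by averaging non-overlapping windows, then compute SampEn per scale
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in scales]

rng = np.random.default_rng(12)
clean = np.cumsum(rng.normal(size=1000))       # correlated signal (crude EEG stand-in)
noisy = clean + 5 * rng.normal(size=1000)      # artifact-corrupted version
print("clean:", np.round(multiscale_entropy(clean), 2))
print("noisy:", np.round(multiscale_entropy(noisy), 2))  # entropy profiles differ
```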
Reduced exercise capacity in genetic haemochromatosis.
Davidsen, Einar Skulstad; Liseth, Knut; Omvik, Per; Hervig, Tor; Gerdts, Eva
2007-06-01
Many patients with genetic haemochromatosis complain of fatigue and reduced physical capacity. Exercise capacity, however, has not been evaluated in larger series of haemochromatosis patients treated with repeated phlebotomy. We performed exercise echocardiography in 152 treated haemochromatosis patients (48+/-13 years, 26% women) and 50 healthy blood donors (49+/-13 years, 30% women), who served as controls. Echocardiography was performed at rest and during exercise in a semi-upright position on a chair bicycle, starting from 20 W and increasing by 20 W/min. Transmitral early and atrial velocity and isovolumic relaxation time were measured at each step. Ventilatory gas exchange was measured by the breath-by-breath technique. Compared with healthy controls, haemochromatosis patients were more obese and less trained. More of them smoked, and 17% had a history of cardiovascular or pulmonary disease. Adjusted for training, left ventricular function and dimensions at rest did not differ between the groups. During exercise the haemochromatosis patients obtained a significantly lower peak oxygen (O2) uptake (28.1 vs. 34.4 ml/kg per min, P<0.001). In a multiple regression analysis, haemochromatosis predicted lower peak O2 uptake independently of significant contributions of sex, age, and height, as well as of systolic blood pressure and log-transformed isovolumic relaxation time at peak exercise, whereas no independent association was found with weight or physical activity (multiple R=0.74, P<0.001). Adding genotype, s-ferritin, prevalence of smoking, or history of cardiopulmonary disease among the covariates in subsequent models did not change the results. Genetic haemochromatosis, even when treated with regular phlebotomy, is associated with lower exercise capacity independently of other covariates of exercise capacity.
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael
2017-12-01
In recent decades, decomposition techniques have enabled increasingly more applications for dimension reduction, as well as extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and, more recently, the independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of time series, respectively. PCA and ICA can be classified as stationary signal decomposition techniques, since they are based on decomposing the autocovariance matrix and diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part and their Hilbert transformed series in its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally, (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework with known truth. Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016) and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from GRACE TWS with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that CICA is more effective than CEOF in separating non-stationary patterns.
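Step (a) of CICA, augmenting each centered series with its Hilbert transform to form a complex dataset, is easy to sketch; the fourth-order-cumulant complex ICA of step (b) is not implemented here, and the data are synthetic:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(13)
t = np.arange(0, 120, 1.0)                          # e.g., monthly samples (hypothetical)
grid_points = 50
# Toy "geophysical" series: a propagating oscillation plus noise at each grid point
phase_lags = np.linspace(0, np.pi, grid_points)
data = (np.sin(2 * np.pi * t[:, None] / 40 + phase_lags[None, :])
        + 0.3 * rng.normal(size=(t.size, grid_points)))

# Step (a): real part = centered series, imaginary part = Hilbert transform
centered = data - data.mean(axis=0)
analytic = hilbert(centered, axis=0)                # complex (analytic) dataset

amplitude = np.abs(analytic)                        # spatiotemporal amplitude
phase = np.angle(analytic)                          # phase propagation patterns
print(analytic.shape, analytic.dtype)               # this is what the ICA step decomposes
```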
Pre-injury psychosocial and demographic predictors of long-term functional outcomes post-TBI.
Seagly, Katharine S; O'Neil, Rochelle L; Hanks, Robin A
2018-01-01
To determine whether pre-injury psychosocial and demographic factors differentially influence long-term functional outcomes post-TBI. Urban rehabilitation hospital. 149 individuals, ages 16-75, who sustained a mild complicated, moderate, or severe TBI, were enrolled in a TBI Model System (TBIMS), and had functional outcome data 5 to 15 years post-injury. Archival data were analysed with SPSS-18 using multiple regression to determine the amount of variance accounted for in five functional domains. Predictors included age at injury, pre-injury education, Glasgow Coma Scale (GCS) score, pre-injury incarceration, and psychiatric history. Craig Handicap Assessment and Reporting Technique (CHART), including Cognitive Independence, Physical Independence, Mobility, Occupation, and Social Integration domains. Models were significant for Cognitive and Physical Independence, Mobility, and Occupation. Incarceration and psychiatric history accounted for the most variance in Cognitive and Physical Independence, over and above GCS and age at injury. Psychiatric history was also the strongest predictor of Occupation. Mobility was the only domain in which GCS accounted for the most variance. Pre-injury psychosocial and demographic factors may be more important than injury severity for predicting some long-term functional outcomes post-TBI. It would likely be beneficial to assess these factors in the inpatient setting, with input from a multidisciplinary team, as an early understanding of prognostic indicators can help guide treatment for optimal functional outcomes.
Kawamura, Ryuzo; Miyazaki, Minami; Shimizu, Keita; Matsumoto, Yuta; Silberberg, Yaron R; Sathuluri, Ramachandra Rao; Iijima, Masumi; Kuroda, Shun'ichi; Iwata, Futoshi; Kobayashi, Takeshi; Nakamura, Chikashi
2017-11-08
Focusing on intracellular targets, we propose a new cell separation technique based on a nanoneedle array (NNA) device, which allows simultaneous insertion of multiple needles into multiple cells. The device is designed to target and lift ("fish") individual cells from a mixed population of cells on a substrate using an antibody-functionalized NNA. The mechanics underlying this approach were validated by force analysis using an atomic force microscope. Accurate high-throughput separation was achieved using one-to-one contacts between the nanoneedles and the cells by preparing a single-cell array in which the positions of the cells were aligned with the 10,000 nanoneedles in the NNA. Cell-type-specific separation was realized by controlling the adhesion force so that the cells could be detached in a cell-type-independent manner. Separation of nestin-expressing neural stem cells (NSCs) derived from human induced pluripotent stem cells (hiPSCs) was demonstrated using the proposed technology, and successful differentiation to neuronal cells was confirmed.
Ambulatory rehabilitation in multiple sclerosis.
Kelleher, Kevin John; Spence, William; Solomonidis, Stephan; Apatsidis, Dimitrios
2009-01-01
Multiple sclerosis (MS) is an autoimmune disease involving demyelination within the central nervous system. Many of the typical impairments associated with MS can affect gait patterns. With walking ability being one of the most decisive factors when assessing quality of life and independent living, this review focuses on matters considered significant for maintaining and supporting ambulation. This article attempts to describe current research and available interventions that the caring healthcare professional can avail of, and to review present trends in research to further these available options. Evidence-based rehabilitation techniques are of interest in the care of patients with MS, given the various existing modalities of treatment. In this review, we summarise the primary factors affecting ambulation and highlight available treatment methods. We review studies that have attempted to characterise gait deficits within this patient population. Finally, as ambulatory rehabilitation requires multidisciplinary interventions, we examine approaches that may serve to support and maintain ambulation within this patient group for as long as possible.
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
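A minimal sketch of the comparison the paper describes, predicting one sensed quantity from time alone (simple regression) versus from other gathered inputs (multiple regression), on simulated sensor data with hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(14)
n = 500
temp = 20 + 5 * np.sin(np.arange(n) / 50) + rng.normal(0, 0.5, n)     # toy sensor input
light = 300 + 40 * np.sin(np.arange(n) / 50 + 0.3) + rng.normal(0, 10, n)
humidity = 80 - 1.2 * temp + 0.02 * light + rng.normal(0, 1, n)       # quantity to predict
time_idx = np.arange(n, dtype=float)

def fit_r2(features, y):
    X = np.column_stack([np.ones(n)] + features)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print("simple (time only)      R^2:", round(fit_r2([time_idx], humidity), 3))
print("multiple (temp + light) R^2:", round(fit_r2([temp, light], humidity), 3))
```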
Test of 3D CT reconstructions by EM + TV algorithm from undersampled data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da
2013-05-06
Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves ionizing radiation exposure of patients. Therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation Based Model for CT Reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections in comparison with the usual filtered back projection (FBP) technique. Thus, it could significantly reduce the overall dose of radiation in CT. This work reports the results of an independent numerical simulation for cone beam CT geometry with alternative virtual phantoms. As in the original report, 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of the slow code execution, even on a Core i7 CPU.
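The EM half of EM+TV is the classical MLEM update; the sketch below runs it on a tiny simulated 1D problem (the TV regularization step and the cone-beam geometry are omitted for brevity, and the system matrix is a toy stand-in):

```python
import numpy as np

rng = np.random.default_rng(15)
n_pix, n_proj = 64, 40                     # tiny 1D toy problem, not a 128^3 volume
A = rng.random((n_proj, n_pix))            # toy system (projection) matrix
x_true = np.zeros(n_pix)
x_true[20:40] = 1.0
y = rng.poisson(A @ x_true * 50) / 50.0    # noisy, undersampled projections

# MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x)).
# EM+TV additionally applies a total-variation regularization step between
# EM updates, e.g. gradient steps on sum(|x_{i+1} - x_i|), omitted here.
x = np.ones(n_pix)
sens = A.T @ np.ones(n_proj)
for _ in range(200):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```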
Kishan, Alysha; Walker, Taneidra; Sears, Nick; Wilems, Thomas; Cosgriff-Hernandez, Elizabeth
2018-05-01
To better mimic native tissue microenvironments, current efforts have moved beyond single growth factor delivery to more complex multiple growth factor delivery with distinct release profiles. Electrospun gelatin, a widely investigated drug delivery vehicle, requires postprocessing crosslinking techniques that generate a mesh with uniform crosslinking density, limiting the ability to deliver multiple factors at different rates. Herein, we describe a method to independently control release of multiple factors from a single electrospun gelatin mesh. Two in situ crosslinking modalities, photocrosslinking of methacrylated gelatin and reactive crosslinking of gelatin with a diisocyanate, are coelectrospun to generate distinct fiber populations with different crosslinking chemistry and density in a single mesh. The photocrosslinked gelatin-methacrylate resulted in a relatively rapid release of a model protein (48 ± 12% at day 1, 96 ± 3% at day 10) due to diffusion of embedded protein from the crosslinked fibers. The reactive crosslinking system displayed a more sustained release (7 ± 5% at day 1, 33 ± 2% at day 10) that was attributed to the conjugation of protein to gelatin with the diisocyanate, requiring degradation of gelatin prior to diffusion out of the fibers. Both modalities displayed tunable release profiles. Subsequent release studies of a cospun mesh with two different crosslinked fiber populations confirmed that the cospun mesh displayed multifactor release with independent release profiles. Overall, this bimodal, in situ crosslinking approach enables the delivery of multiple factors with distinct release kinetics from a single mesh and is expected to have broad utility in tissue engineering. © 2018 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 106A: 1155-1164, 2018.
Statistical and sampling issues when using multiple particle tracking
NASA Astrophysics Data System (ADS)
Savin, Thierry; Doyle, Patrick S.
2007-08-01
Video microscopy can be used to simultaneously track several microparticles embedded in a complex material. The trajectories are used to extract a sample of displacements at random locations in the material. From this sample, averaged quantities characterizing the dynamics of the probes are calculated to evaluate structural and/or mechanical properties of the assessed material. However, the sampling of measured displacements in heterogeneous systems is singular because the volume of observation with video microscopy is finite. By carefully characterizing the sampling design in the experimental output of the multiple particle tracking technique, we derive estimators for the mean and variance of the probes' dynamics that are insensitive to these peculiar sampling characteristics. We present stringent tests of these estimators using simulated and experimental complex systems with a known heterogeneous structure. Up to a fundamental limitation, which we characterize through the degree to which the embedded probes sample the material, these estimators can be applied to quantify the heterogeneity of a material, providing an original and intelligible kind of information on complex fluid properties. More generally, we show that precise assessment of the statistics in the multiple particle tracking output sample is essential in order to provide accurate, unbiased measurements.
Improving the Numerical Stability of Fast Matrix Multiplication
Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...
2016-10-04
Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than those of the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
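As a reference point for the discussion above, a textbook one-level Strassen multiplication (not the paper's implementations) shows where the seven-product structure, and hence the slightly weaker error bound, comes from:

```python
# Illustrative one-level Strassen multiplication for even-sized square matrices.
import numpy as np

def strassen_one_level(A, B):
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % 2 == 0
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven half-size products instead of the classical eight
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
err = np.linalg.norm(strassen_one_level(A, B) - A @ B) / np.linalg.norm(A @ B)
print(f"relative error vs. classical product: {err:.2e}")
```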
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal titration calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
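A greatly simplified illustration of the kind of non-linear regression involved (a single-site 1:1 isotherm rather than the paper's four-equilibria MATLAB model; all parameter values and the synthetic titration data are assumptions):

```python
# Hedged sketch: least-squares fit of a single-site binding isotherm.
import numpy as np
from scipy.optimize import curve_fit

def heat_per_injection(L_total, K, dH, M_total=1e-5):
    # Bound complex [ML] from the 1:1 mass-balance quadratic.
    b = M_total + L_total + 1.0 / K
    ML = (b - np.sqrt(b**2 - 4 * M_total * L_total)) / 2
    return dH * ML / M_total          # normalized heat signal

L = np.linspace(1e-6, 4e-5, 25)       # total ligand after each injection
rng = np.random.default_rng(1)
y = heat_per_injection(L, K=5e5, dH=-40.0) + rng.normal(0, 0.3, L.size)

(K_fit, dH_fit), _ = curve_fit(heat_per_injection, L, y, p0=[1e5, -10.0])
print(f"K = {K_fit:.3g} 1/M, dH = {dH_fit:.3g} (arbitrary heat units)")
```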
NASA Astrophysics Data System (ADS)
Nazrul Islam, Mohammed; Karim, Mohammad A.; Vijayan Asari, K.
2013-09-01
Protecting and processing confidential information, such as personal identification and biometrics, remains a challenging task for further research and development. A new methodology to ensure enhanced security of information in images through the use of encryption and multiplexing is proposed in this paper. We use an orthogonal encoding scheme to encode multiple pieces of information independently and then combine them to save storage space and transmission bandwidth. The encoded and multiplexed image is encrypted employing multiple reference-based joint transform correlation. The encryption key is fed into four channels which are relatively phase shifted by different amounts. The input image is introduced to all the channels and then Fourier transformed to obtain joint power spectra (JPS) signals. The resultant JPS signals are again phase-shifted and then combined to form a modified JPS signal, which yields the encrypted image after an inverse Fourier transformation. The proposed cryptographic system makes the confidential information inaccessible to any unauthorized intruder, while allowing the respective authorized recipient to retrieve the information without distortion. The proposed technique is investigated through computer simulations under different practical conditions in order to verify its overall robustness.
Feature generation and representations for protein-protein interaction classification.
Lan, Man; Tan, Chew Lim; Su, Jian
2009-10-01
Automatic detection of protein-protein interaction (PPI)-relevant articles is a crucial step for large-scale biological database curation. Previous work adopted POS tagging, shallow parsing and sentence splitting techniques, but achieved worse performance than the simple bag-of-words representation. In this paper, we generated and investigated multiple types of feature representations in order to further improve the performance of the PPI text classification task. Besides the traditional domain-independent bag-of-words approach and the term weighting methods, we also explored other domain-dependent features, i.e. protein-protein interaction trigger keywords, protein named entities and advanced ways of incorporating Natural Language Processing (NLP) output. The integration of these multiple features was evaluated on the BioCreAtIvE II corpus. The experimental results showed that both the advanced use of NLP output and the integration of bag-of-words and NLP output improved the performance of text classification. Specifically, in comparison with the best performance achieved in the BioCreAtIvE II IAS, the feature-level and classifier-level integration of multiple features improved classification performance by 2.71% and 3.95%, respectively.
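The bag-of-words baseline that the paper builds on can be sketched in a few lines (toy sentences assumed; the BioCreAtIvE corpus and the paper's domain-dependent features are not reproduced):

```python
# Illustrative bag-of-words baseline for PPI article triage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "Protein A interacts with protein B in vivo",
    "BRCA1 binds directly to RAD51",
    "The weather station recorded daily rainfall totals",
    "Expression of the gene varied across tissues",
]
labels = [1, 1, 0, 0]   # 1 = PPI-relevant, 0 = not relevant

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["Kinase X phosphorylates and binds substrate Y"]))
```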
Managing Headship Transitions in U.S. Independent Schools
ERIC Educational Resources Information Center
Kane, Pearl Rock; Barbaro, Justin
2016-01-01
Headship transitions in U.S. independent schools represent critical organizational events that affect multiple school constituencies, including faculty, staff, and students. With recent projections forecasting a high level of impending headship transitions in independent schools, this paper seeks to capture how second-year U.S. independent school…
NASA Astrophysics Data System (ADS)
Bruni, Sara; Rebischung, Paul; Zerbini, Susanna; Altamimi, Zuheir; Errico, Maddalena; Santi, Efisio
2018-04-01
The realization of the international terrestrial reference frame (ITRF) is currently based on the data provided by four space geodetic techniques. The accuracy of the different technique-dependent materializations of the frame physical parameters (origin and scale) varies according to the nature of the relevant observables and to the impact of technique-specific errors. A reliable computation of the ITRF requires combining the different inputs, so that the strengths of each technique can compensate for the weaknesses of the others. This combination, however, can only be performed if additional information is available to tie together the independent technique networks. At present, the links used for that purpose are topometric surveys (local/terrestrial ties) available at ITRF sites hosting instruments of different techniques. In principle, a possible alternative could be offered by spacecraft accommodating the positioning payloads of multiple geodetic techniques, realizing their co-location in orbit (space ties). In this paper, the GNSS-SLR space ties on-board GPS and GLONASS satellites are thoroughly examined in the framework of global reference frame computations. The investigation focuses on the quality of the realized physical frame parameters. According to the achieved results, the space ties on-board GNSS satellites cannot, at present, substitute for terrestrial ties in the computation of the ITRF. The study is completed by a series of synthetic simulations investigating the impact that substantial improvements in the volume and quality of SLR observations to GNSS satellites would have on the precision of the GNSS frame parameters.
Applications of Flow Cytometry to Clinical Microbiology
Álvarez-Barrientos, Alberto; Arroyo, Javier; Cantón, Rafael; Nombela, César; Sánchez-Pérez, Miguel
2000-01-01
Classical microbiology techniques are relatively slow in comparison to other analytical techniques, in many cases due to the need to culture the microorganisms. Furthermore, classical approaches are difficult with unculturable microorganisms. More recently, the emergence of molecular biology techniques, particularly those based on antibodies and on nucleic acid probes combined with amplification techniques, has brought speed and specificity to microbiological diagnosis. Flow cytometry (FCM) allows single- or multiple-microbe detection in clinical samples in an easy, reliable, and fast way. Microbes can be identified on the basis of their peculiar cytometric parameters or by means of certain fluorochromes that can be used either independently or bound to specific antibodies or oligonucleotides. FCM has permitted the development of quantitative procedures to assess antimicrobial susceptibility and drug cytotoxicity in a rapid, accurate, and highly reproducible way. Furthermore, this technique allows the monitoring of in vitro antimicrobial activity and of antimicrobial treatments ex vivo. The most outstanding contribution of FCM is the possibility of detecting the presence of heterogeneous populations with different responses to antimicrobial treatments. Despite these advantages, the application of FCM in clinical microbiology is not yet widespread, probably due to the lack of access to flow cytometers or the lack of knowledge about the potential of this technique. One of the goals of this review is to attempt to mitigate this latter circumstance. We are convinced that in the near future, the availability of commercial kits should increase the use of this technique in the clinical microbiology laboratory. PMID:10755996
Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.
ERIC Educational Resources Information Center
Smith, Kent W.; Sasaki, M. S.
1979-01-01
A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
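One classical remedy in this spirit is mean-centering the components before forming the product term; whether this matches the authors' exact method is an assumption, but the effect on collinearity is easy to demonstrate:

```python
# Sketch of mean-centering to reduce multicollinearity from a product term.
# Centering x and z before forming x*z sharply reduces the correlation
# between the multiplicative term and its components.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 1000)          # note the nonzero means
z = rng.normal(8, 1.5, 1000)

raw_corr = np.corrcoef(x, x * z)[0, 1]
xc, zc = x - x.mean(), z - z.mean()
centered_corr = np.corrcoef(xc, xc * zc)[0, 1]

print(f"corr(x, x*z), raw:        {raw_corr:.3f}")      # typically > 0.9
print(f"corr(xc, xc*zc), centered: {centered_corr:.3f}")  # near 0
```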
NASA Astrophysics Data System (ADS)
Anghileri, D.; Giuliani, M.; Castelletti, A.
2012-04-01
There is general agreement that one of the most challenging issues in water system management is the presence of many, often conflicting, interests, as well as of several independent decision makers. The traditional approach to multi-objective water systems management is centralized management, in which an ideal central regulator coordinates the operation of the whole system, exploiting all available information and balancing all operating objectives. Although this approach yields Pareto-optimal solutions representing the maximum achievable benefit, it is based on assumptions that strongly limit its application in real-world contexts: 1) top-down management, 2) existence of a central regulation institution, 3) complete information exchange within the system, 4) perfect economic efficiency. A bottom-up, decentralized approach therefore seems more suitable for real-world applications, since different reservoir operators may maintain their independence. In this work we tested the consequences of moving from a centralized toward a decentralized management approach. In particular, we compared three cases: the centralized management approach; the independent management approach, where each reservoir operator takes the daily release decision maximizing (or minimizing) his operating objective independently of the others; and an intermediate approach, leading to the Nash equilibrium of the associated game, where different reservoir operators try to model the behaviours of the other operators. The three approaches are demonstrated using a test case study composed of two reservoirs regulated for the minimization of flooding in different locations. The operating policies are computed by solving a single multi-objective optimal control problem in the centralized management approach; multiple single-objective optimization problems, one for each operator, in the independent case; and, in the last approach, by using game-theoretic techniques to describe the interaction between the two operators. Computational results show that the Pareto-optimal control policies obtained with the centralized approach dominate the control policies of both decentralized cases, and that the so-called price of anarchy increases when moving toward the independent management approach. However, the Nash equilibrium solution seems to be the most promising alternative because it represents a good compromise, maximizing management efficiency without limiting the behaviours of the reservoir operators.
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, together with CREDUC and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum-seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double-precision arithmetic.
Role of diversity in ICA and IVA: theory and applications
NASA Astrophysics Data System (ADS)
Adalı, Tülay
2016-05-01
Independent component analysis (ICA) has been the most popular approach for solving the blind source separation problem. Starting from a simple linear mixing model and the assumption of statistical independence, ICA can recover a set of linearly-mixed sources to within a scaling and permutation ambiguity. It has been successfully applied to numerous data analysis problems in areas as diverse as biomedicine, communications, finance, geophysics, and remote sensing. ICA can be achieved using different types of diversity—statistical property—and can be posed to simultaneously account for multiple types of diversity such as higher-order statistics, sample dependence, non-circularity, and nonstationarity. A recent generalization of ICA, independent vector analysis (IVA), generalizes ICA to multiple data sets and adds the use of one more type of diversity, statistical dependence across the data sets, for jointly achieving independent decomposition of multiple data sets. With the addition of each new diversity type, identification of a broader class of signals becomes possible, and in the case of IVA, this includes sources that are independent and identically distributed Gaussians. We review the fundamentals and properties of ICA and IVA when multiple types of diversity are taken into account, and then ask whether diversity plays an important role in practical applications as well. Examples from various domains are presented to demonstrate that in many scenarios it might be worthwhile to jointly account for multiple statistical properties. This paper is submitted in conjunction with the talk delivered for the "Unsupervised Learning and ICA Pioneer Award" at the 2016 SPIE Conference on Sensing and Analysis Technologies for Biomedical and Cognitive Applications.
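A minimal demonstration of higher-order-statistics-based ICA, one of the diversity types discussed above (illustrative only; the sources and mixing matrix are synthetic assumptions):

```python
# Recovering linearly mixed sources with scikit-learn's FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sin(2 * t),                 # source 1
                     np.sign(np.sin(3 * t)),        # source 2
                     rng.laplace(size=t.size)])     # source 3 (non-Gaussian)
A = np.array([[1.0, 0.5, 1.5],
              [0.7, 2.0, 1.0],
              [1.2, 1.0, 2.0]])                     # unknown mixing matrix
X = S @ A.T                                         # observed linear mixtures

S_hat = FastICA(n_components=3, random_state=0).fit_transform(X)
# S_hat recovers the sources up to the usual scaling and permutation ambiguity.
```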
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform feature-level or decision-level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, they are not necessarily optimal projections. In this paper, we present a divide-and-conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches, which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom-up band grouping. We also propose a confidence-measure-based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process for test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
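A hedged sketch of bottom-up band grouping driven by adjacent-band mutual information (our simplification; the discretization, threshold, and toy data are assumptions, and each resulting group would feed one classifier in the fusion stage):

```python
# Group contiguous spectral bands where adjacent-band mutual information is high.
import numpy as np
from sklearn.metrics import mutual_info_score

def adjacent_mi(bands, n_bins=16):
    """Mutual information between each pair of adjacent spectral bands."""
    digitized = [np.digitize(b, np.histogram_bin_edges(b, n_bins)) for b in bands]
    return np.array([mutual_info_score(digitized[i], digitized[i + 1])
                     for i in range(len(bands) - 1)])

def group_bands(bands, threshold=0.5):
    """Start a new contiguous subspace wherever adjacent MI dips low."""
    mi = adjacent_mi(bands)
    groups, current = [], [0]
    for i, m in enumerate(mi):
        if m < threshold:
            groups.append(current)
            current = []
        current.append(i + 1)
    groups.append(current)
    return groups

# Toy cube: 12 bands over 500 pixels; two spectrally coherent blocks.
rng = np.random.default_rng(0)
base1, base2 = rng.normal(size=500), rng.normal(size=500)
bands = [base1 + 0.1 * rng.normal(size=500) for _ in range(6)] + \
        [base2 + 0.1 * rng.normal(size=500) for _ in range(6)]
print(group_bands(bands))   # expect a split near band 6
```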
NASA Technical Reports Server (NTRS)
Westmeyer, Paul A. (Inventor); Wertenberg, Russell F. (Inventor); Krage, Frederick J. (Inventor); Riegel, Jack F. (Inventor)
2017-01-01
An authentication procedure utilizes multiple independent sources of data to determine whether usage of a device, such as a desktop computer, is authorized. When a comparison indicates an anomaly from the baseline usage data, the system provides a notice that access to the first device is not authorized.
Study of CT image texture using deep learning techniques
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Fan, Jiahua; Chevalier, David
2018-03-01
For CT imaging, reducing radiation dose while improving or maintaining image quality (IQ) is currently a very active research and development topic. Iterative reconstruction (IR) approaches have been suggested to offer a better IQ-to-dose ratio than the conventional filtered back projection (FBP) reconstruction. However, it has been widely reported that CT image texture from IR often differs from that of FBP. Researchers have proposed different figures of merit to quantitate the texture from different reconstruction methods, but the field still lacks a practical and robust method for texture description. This work applied deep learning methods to CT image texture study. Multiple dose scans of a 20 cm diameter cylindrical water phantom were performed on a Revolution CT scanner (GE Healthcare, Waukesha) and the images were reconstructed with FBP and four different IR reconstruction settings. The training images generated were randomly split (80:20) into training and validation sets. An independent test set of 256-512 images per class was collected with the same scan and reconstruction settings. Multiple deep learning (DL) networks with convolution, ReLU activation, max-pooling, fully connected, global average pooling, and softmax activation layers were investigated. The impact of different image patch sizes for training was investigated. Original pixel data as well as normalized image data were evaluated. DL models reliably classified CT image texture with accuracy up to 99%. These results suggest that deep learning can robustly characterize CT image texture and may thereby help IR techniques lower the radiation dose compared with FBP.
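An illustrative network with the layer types named above (convolution, ReLU, max-pooling, global average pooling, fully connected, and softmax via the loss); the patch size and class count are assumptions, not the study's settings:

```python
# Toy PyTorch texture classifier mirroring the layer types in the abstract.
import torch
import torch.nn as nn

class TextureNet(nn.Module):
    def __init__(self, n_classes=5):          # e.g. FBP + four IR settings
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),           # global average pooling
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

model = TextureNet()
patches = torch.randn(8, 1, 64, 64)            # batch of 64x64 CT patches
loss = nn.CrossEntropyLoss()(model(patches), torch.randint(0, 5, (8,)))
loss.backward()
```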
Cornish, B H; Ward, L C; Thomas, B J; Jebb, S A; Elia, M
1996-03-01
To assess the application of Cole-Cole analysis of multiple-frequency bioelectrical impedance analysis (MFBIA) measurements to predict total body water (TBW) and extracellular water (ECW) in humans; this technique has previously been shown to produce accurate and reliable estimates in both normal and abnormal animals. The whole-body impedance of 60 healthy humans was measured at 496 frequencies (ranging from 4 kHz to 1 MHz), and the impedance at zero frequency, Ro, and at the characteristic frequency, Zc, were determined from the impedance spectrum (Cole-Cole plot). TBW and ECW were independently determined using deuterium and bromide tracer dilution techniques. The study was conducted at the Dunn Clinical Nutrition Centre and the Department of Biochemistry, University of Queensland, with 60 healthy adult volunteers (27 men and 33 women, aged 18-45 years). The results suggest that the swept-frequency bioimpedance technique estimates total body water (SEE = 5.2%) and extracellular water (SEE = 10%) only slightly better in normal, healthy subjects than a method based on single-frequency bioimpedance or anthropometric estimates based on weight, height and gender. This study undertook the most extensive analysis to date of relationships between TBW (and ECW) and individual impedances obtained at different frequencies (> 400 frequencies), and showed marginal advantages of using one frequency over another, even when values predicted from theoretical bioimpedance models are used in the estimations. However, in situations where there are disturbances of fluid distribution, values predicted from the Cole-Cole analysis of swept-frequency bioimpedance measurements could prove more useful.
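For context, fitting the Cole-Cole model to swept-frequency impedance data to recover the zero-frequency resistance can be sketched as follows (parameter values and noise are illustrative assumptions, not the study's measurements):

```python
# Hedged sketch: least-squares fit of the Cole-Cole impedance model
# Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)^(1 - alpha)).
import numpy as np
from scipy.optimize import least_squares

def cole_cole(f, R0, Rinf, tau, alpha):
    w = 2 * np.pi * f
    return Rinf + (R0 - Rinf) / (1 + (1j * w * tau) ** (1 - alpha))

f = np.logspace(np.log10(4e3), 6, 100)          # 4 kHz to 1 MHz sweep
true = dict(R0=700.0, Rinf=450.0, tau=2e-6, alpha=0.1)
rng = np.random.default_rng(0)
Z = cole_cole(f, **true) + rng.normal(0, 1, f.size)

def residuals(p):
    model = cole_cole(f, *p)
    return np.concatenate([(Z - model).real, (Z - model).imag])

fit = least_squares(residuals, x0=[600, 400, 1e-6, 0.2])
print(dict(zip(["R0", "Rinf", "tau", "alpha"], fit.x)))
```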
Resolving z ~2 galaxy using adaptive coadded source plane reconstruction
NASA Astrophysics Data System (ADS)
Sharma, Soniya; Richard, Johan; Kewley, Lisa; Yuan, Tiantian
2018-06-01
Natural magnification provided by gravitational lensing, coupled with integral field spectrographic (IFS) observations and adaptive optics (AO) imaging techniques, has become the frontier of spatially resolved studies of high-redshift galaxies (z > 1). Mass models of gravitational lenses hold the key to understanding the spatially resolved source-plane (unlensed) physical properties of background lensed galaxies. Lensing mass models sensitively control the accuracy and precision of source-plane reconstructions of the observed lensed arcs. The effective source-plane resolution set by the image-plane (observed) point spread function (PSF) makes it challenging to recover the unlensed (source-plane) surface brightness distribution. We conduct a detailed study to recover the source-plane physical properties of a z = 2 lensed galaxy using spatially resolved observations from two different multiple images of the lensed target. To deal with the PSFs of two data sets on different multiple images of the galaxy, we employ a forward (source-to-image) approach to merge these independent observations. Using our novel technique, we present a detailed analysis of the source-plane dynamics at scales much finer than previously attainable through traditional image inversion methods. Moreover, our technique adapts to magnification, thus allowing us to achieve higher resolution in highly magnified regions of the source. We find that this lensed system shows strong evidence of a minor merger. In this talk, I present this case study of a z = 2 lensed galaxy and also discuss applications of our algorithm to the plethora of lensed systems that will become available through future telescopes such as JWST and GMT.
Waveform-Diverse Multiple-Input Multiple-Output Radar Imaging Measurements
NASA Astrophysics Data System (ADS)
Stewart, Kyle B.
Multiple-input multiple-output (MIMO) radar is an emerging set of technologies designed to extend the capabilities of multi-channel radar systems. While conventional radar architectures emphasize the use of antenna array beamforming to maximize real-time power on target, MIMO radar systems instead attempt to preserve some degree of independence between their received signals and to exploit this expanded matrix of target measurements in the signal-processing domain. Specifically, the use of sparse “virtual” antenna arrays may allow MIMO radars to achieve gains over traditional multi-channel systems by post-processing diverse received signals to implement both transmit and receive beamforming at all points of interest within a given scene. MIMO architectures have been widely examined for use in radar target detection, but these systems may yet be ideally suited to real and synthetic aperture radar imaging applications, where their proposed benefits include improved resolution, expanded area coverage, novel modes of operation, and a reduction in hardware size, weight, and cost. While MIMO radar's theoretical benefits have been well established in the literature, its practical limitations have not received much attention thus far. The effective use of MIMO radar techniques requires a diversity of signals, and to date almost all MIMO system demonstrations have made use of time-staggered transmission to satisfy this requirement. Doing so is reliable but can be prohibitively slow. Waveform-diverse systems have been proposed as an alternative in which multiple, independent waveforms are broadcast simultaneously over a common bandwidth and separated on receive using signal processing. Operating in this way is much faster than its time-diverse equivalent, but finding a set of suitable waveforms for this technique has proven to be a difficult problem. In light of this, many have questioned the practicality of MIMO radar imaging and whether or not its theoretical benefits may be extended to real systems. This work focuses specifically on the practical aspects of MIMO radar imaging systems and provides performance data sourced from experimental measurements made using a four-channel software-defined MIMO radar platform. Demonstrations of waveform-diverse imaging data products are provided and compared directly against time-diverse equivalents in order to assess the performance of prospective MIMO waveforms. These are sourced from the pseudo-noise, short-term shift-orthogonal, and orthogonal frequency multiplexing signal families, while experimental results demonstrate waveform-diverse measurements of polarimetric radar cross section, top-down stationary target images, and finally volumetric MIMO synthetic aperture radar imagery. The data presented represent some of the first available concerning the overall practicality of waveform-diverse MIMO radar architectures, and results suggest that such configurations may achieve a reasonable degree of performance even in the presence of significant practical limitations.
NASA Astrophysics Data System (ADS)
Schelkanova, Irina; Toronov, Vladislav
2011-07-01
Although near infrared spectroscopy (NIRS) is now widely used both in emerging clinical techniques and in cognitive neuroscience, the development of apparatuses and signal processing methods for these applications is still a hot research topic. The main unresolved problem in functional NIRS is the separation of functional signals from contamination by systemic and local physiological fluctuations. This problem has been approached using various signal processing methods, including blind signal separation techniques. In particular, principal component analysis (PCA) and independent component analysis (ICA) have been applied to data acquired at the same wavelength and at multiple sites on human or animal heads during functional activation. These signal processing procedures resulted in a number of principal or independent components that could be attributed to functional activity, but their physiological meaning remained unknown. On the other hand, the best physiological specificity is provided by broadband NIRS. Also, a comparison with functional magnetic resonance imaging (fMRI) allows determining the spatial origin of fNIRS signals. In this study we applied PCA and ICA to broadband NIRS data to distill the components correlating with a breath-hold activation paradigm and compared them with simultaneously acquired fMRI signals. Breath holding was used because it generates blood carbon dioxide (CO2), which increases the blood-oxygen-level-dependent (BOLD) signal, as CO2 acts as a cerebral vasodilator. Vasodilation causes increased cerebral blood flow, which washes deoxyhaemoglobin out of the cerebral capillary bed, thus increasing both cerebral blood volume and oxygenation. Although the original signals were quite diverse, we found very few distinct components, which corresponded to fMRI signals at different locations in the brain and to different physiological chromophores.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in rotational techniques.
Ultrascalable petaflop parallel supercomputer
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY
2010-07-20
A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.
Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo
2003-04-01
An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject-specific variations of metabolic activities under altered experimental conditions, utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by variations in the usage weights of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task, and we discuss the application of ICA to multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
Chemical analyses of fossil bone.
Zheng, Wenxia; Schweitzer, Mary Higby
2012-01-01
The preservation of microstructures consistent with soft tissues, cells, and other biological components in demineralized fragments of dinosaur bone tens of millions of years old was unexpected, and counter to current hypotheses of tissue, cellular, and molecular degradation. Although the morphological similarity of these tissues to extant counterparts was unmistakable, after at least 80 million years of exposure to geochemical influences, morphological similarity is insufficient to support an endogenous source. To test this hypothesis, and to characterize these materials at a molecular level, we applied multiple independent chemical, molecular, and microscopic analyses to identify the presence of original components produced by the extinct organisms. Microscopic techniques included field emission scanning electron microscopy, analytical transmission electron microscopy, transmitted light microscopy (LM), and fluorescence microscopy (FM). The chemical and molecular techniques included enzyme-linked immunosorbent assay, sodium dodecyl sulfate polyacrylamide gel electrophoresis, western blot (immunoblot), and attenuated total reflectance infrared spectroscopy. In situ analyses performed directly on tissues included immunohistochemistry and time-of-flight secondary ion mass spectrometry. Sample preparation and methodology are described in detail herein.
Wetzel, Kim L.; Bettandorff, J.M.
1986-01-01
Techniques are presented for estimating various streamflow characteristics, such as peak flows, mean monthly and annual flows, flow durations, and flow volumes, at ungaged sites on unregulated streams in the Eastern Coal region. Streamflow data and basin characteristics for 629 gaging stations were used to develop multiple-linear-regression equations. Separate equations were developed for the Eastern and Interior Coal Provinces. Drainage area is an independent variable common to all equations. Other variables needed, depending on the streamflow characteristic, are mean annual precipitation, mean basin elevation, main channel length, basin storage, main channel slope, and forest cover. A ratio of the observed 50- to 90-percent flow durations was used in the development of relations to estimate low-flow frequencies in the Eastern Coal Province. Relations to estimate low flows in the Interior Coal Province are not presented because the standard errors were greater than 0.7500 log units and were considered to be of poor reliability.
Fráter, Mark; Forster, András; Jantyik, Ádám; Braunitzer, Gábor; Nagy, Katalin
2015-12-01
The purpose of this in vitro investigation was to evaluate the reinforcing effect of different fibre-reinforced composite (FRC) posts and insertion techniques in premolar teeth when using minimally invasive post space preparation. Thirty-two extracted and endodontically treated premolar teeth were divided into four groups (n = 8) depending on the post used. Group 1: one single conventional post; group 2: one main conventional and one collateral post; group 3: one flexible post; group 4: one main flexible and one collateral post. After cementation and core build-up, the specimens were subjected to a static fracture toughness test. Fracture thresholds and fracture patterns were recorded and evaluated. The multi-post techniques (groups 2 and 4) showed statistically higher fracture resistance than group 1. Regarding fracture patterns, there was no statistically significant difference between the tested groups. The application of multiple posts appears beneficial for fracture resistance, independent of the FRC post used. Fracture pattern was not influenced by the elasticity of the post.
NASA Astrophysics Data System (ADS)
Chesley, J. T.; Leier, A. L.; White, S.; Torres, R.
2017-06-01
Recently developed data collection techniques allow for improved characterization of sedimentary outcrops. Here, we outline a workflow that utilizes unmanned aerial vehicles (UAVs) and structure-from-motion (SfM) photogrammetry to produce sub-meter-scale outcrop reconstructions in 3-D. SfM photogrammetry uses multiple overlapping images and an image-based terrain extraction algorithm to reconstruct the locations of individual points from the photographs in 3-D space. The results of this technique can be used to construct point clouds, orthomosaics, and digital surface models that can be imported into GIS and related software for further study. The accuracy of the reconstructed outcrops with respect to an absolute framework is improved with geotagged images or independently gathered ground control points, and the internal accuracy of 3-D reconstructions is sufficient for sub-meter-scale measurements. We demonstrate this approach with a case study from central Utah, USA, where UAV-SfM data can help delineate complex features within Jurassic fluvial sandstones.
A case study of data integration for aquatic resources using semantic web technologies
Gordon, Janice M.; Chkhenkeli, Nina; Govoni, David L.; Lightsom, Frances L.; Ostroff, Andrea C.; Schweitzer, Peter N.; Thongsavanh, Phethala; Varanka, Dalia E.; Zednik, Stephan
2015-01-01
Use cases, information modeling, and linked data techniques are Semantic Web technologies used to develop a prototype system that integrates scientific observations from four independent USGS and cooperator data systems. The techniques were tested with a use case goal of creating a data set for use in exploring potential relationships among freshwater fish populations and environmental factors. The resulting prototype extracts data from the BioData Retrieval System, the Multistate Aquatic Resource Information System, the National Geochemical Survey, and the National Hydrography Dataset. A prototype user interface allows a scientist to select observations from these data systems and combine them into a single data set in RDF format that includes explicitly defined relationships and data definitions. The project was funded by the USGS Community for Data Integration and undertaken by the Community for Data Integration Semantic Web Working Group in order to demonstrate use of Semantic Web technologies by scientists. This allows scientists to simultaneously explore data that are available in multiple, disparate systems beyond those they traditionally have used.
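Conceptually, the linked-data merge can be sketched with rdflib (the file names and vocabulary below are hypothetical placeholders, not the prototype's actual schema):

```python
# Merge RDF triples from several sources, then query across them with SPARQL.
from rdflib import Graph

g = Graph()
for source in ["biodata.ttl", "maris.ttl", "geochem.ttl", "nhd.ttl"]:
    g.parse(source, format="turtle")      # merge triples from each system

# Join fish observations with co-located chemistry via shared site URIs.
results = g.query("""
    PREFIX ex: <http://example.org/aquatic#>
    SELECT ?site ?species ?concentration WHERE {
        ?obs  ex:site ?site ; ex:species ?species .
        ?chem ex:site ?site ; ex:concentration ?concentration .
    }""")
for row in results:
    print(row.site, row.species, row.concentration)
```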
Detection and segmentation of multiple touching product inspection items
NASA Astrophysics Data System (ADS)
Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David
1996-12-01
X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier that determines the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation; we employ new rotation-invariant filters to locate each item independent of its orientation); produce separate image files for each item (a new blob coloring algorithm provides this for isolated, non-touching input items); segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this); and remove the shell by morphological processing to produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
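The watershed step for separating touching items follows a standard recipe (shown here with scikit-image on synthetic discs; the paper's x-ray pipeline is not reproduced):

```python
# Split touching blobs by flooding the negative distance transform.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching(binary):
    """Label touching blobs via markers placed at distance-map peaks."""
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, labels=binary.astype(int), min_distance=5)
    markers = np.zeros(binary.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary)

# Two overlapping discs become two separate labels.
yy, xx = np.mgrid[0:80, 0:80]
blob = ((yy - 40)**2 + (xx - 28)**2 < 225) | ((yy - 40)**2 + (xx - 52)**2 < 225)
labels = split_touching(blob)
print(np.unique(labels))   # background plus two object labels
```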
Nonlinear zero-sum differential game analysis by singular perturbation methods
NASA Technical Reports Server (NTRS)
Sinar, J.; Farber, N.
1982-01-01
A class of nonlinear, zero-sum differential games exhibiting time-scale separation properties can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach are proposed. Accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.
Sparchez, Zeno; Mocan, Tudor; Radu, Pompilia; Anton, Ofelia; Bolog, Nicolae
2016-03-01
Recent decades have seen continuous development of therapeutic strategies for hepatocellular carcinoma (HCC). Unfortunately, the disease is often not diagnosed until it has reached an intermediate or even advanced stage. In these circumstances, transarterial chemoembolization (TACE) is considered an effective treatment for HCC. The most important independent prognostic factor for both disease-free survival and overall survival is the presence of complete necrosis. Therefore, treatment outcomes are dictated by the proper use of radiological imaging. Current guidelines recommend contrast-enhanced computed tomography (CECT) as the standard imaging technique for evaluating the therapeutic response in patients with HCC after TACE. One of the most important disadvantages of CECT is the overestimation of tumor response. In an attempt to overcome this limitation, contrast-enhanced ultrasound (CEUS) has gained particular attention as an imaging modality in HCC patients after TACE. Of all available imaging modalities, CEUS performs best in the early and very early assessment of TACE, especially after lipiodol TACE. Like any other imaging technique, CEUS has disadvantages, especially in hypovascular tumors or in cases of tumor multiplicity. The current limitations of CEUS may soon be overcome by new CEUS techniques already being tested in clinical practice, such as dynamic CEUS with quantification, three-dimensional CEUS, or fusion techniques.
Multiple Independent File Parallel I/O with HDF5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M. C.
2016-07-13
The HDF5 library has supported the I/O requirements of HPC codes at Lawrence Livermore National Labs (LLNL) since the late 1990s. In particular, HDF5 used in the Multiple Independent File (MIF) parallel I/O paradigm has supported LLNL codes' scalable I/O requirements and has recently been gainfully used at scales as large as O(10^6) parallel tasks.
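A minimal sketch of the MIF pattern (an illustration under assumed conventions, not LLNL's production code): tasks are split into file groups, and within each group a baton is passed so that only one task writes to the group's file at a time, using ordinary serial HDF5:

```python
# Run with: mpiexec -n <P> python mif_sketch.py
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N_FILES = 4                                   # tunable concurrency knob
group = rank % N_FILES                        # which file this task uses
fcomm = comm.Split(color=group, key=rank)     # communicator per file group

if fcomm.Get_rank() > 0:                      # wait for the baton
    fcomm.recv(source=fcomm.Get_rank() - 1, tag=7)

mode = "w" if fcomm.Get_rank() == 0 else "a"  # first writer creates the file
with h5py.File(f"dump_{group:03d}.h5", mode) as f:
    f.create_dataset(f"rank_{rank:05d}/data", data=[rank] * 10)

if fcomm.Get_rank() < fcomm.Get_size() - 1:   # pass the baton onward
    fcomm.send(None, dest=fcomm.Get_rank() + 1, tag=7)
```

The single knob N_FILES trades file-count overhead against write concurrency, which is the essential idea behind MIF's scalability.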
Otowa, Y; Nakamura, T; Takiguchi, G; Yamamoto, M; Kanaji, S; Imanishi, T; Oshikiri, T; Suzuki, S; Tanaka, K; Kakeji, Y
2016-03-01
Enhancements in surgical techniques have led to improved outcomes for esophageal cancer. Recent findings have shown that esophageal cancer is frequently associated with multiple primary cancers, and surgical resection is usually complicated in such cases. The aim of this study was to clarify the clinical significance of surgery for patients with esophageal squamous cell cancer associated with multiple primary cancers. The clinical outcomes of surgical resection for esophageal cancer were compared between 79 patients with antecedent and/or synchronous cancers (Multiple cancer group) and 194 patients without antecedent and/or synchronous cancers (Single cancer group). The most common site of multiple primary cancers was the pharynx (36 patients; 29.7%), followed by the stomach (24 patients; 19.8%). The reconstruction method was more complicated in the Multiple cancer group, resulting in prolonged surgery time and increased blood loss. However, postoperative morbidity and overall survival (OS) did not differ between the two groups. After esophagectomy, metachronous cancers were observed in 26 patients, at 30 sites in total, and 93.1% were found to be curable. Sex was the only independent risk factor for developing metachronous cancer after esophagectomy. The presence of antecedent and synchronous cancers complicates the surgical resection of esophageal cancer; however, no differences were found in OS or postoperative morbidity between the two groups. Therefore, surgical intervention should be selected as a first-line treatment. Because second primary cancers are often observed in esophageal cancer, we recommend close follow-up using esophagogastroduodenoscopy and contrast-enhanced computed tomography. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kastberger, G; Kranner, G
2000-02-01
Viscovery SOMine is a software tool for advanced analysis and monitoring of numerical data sets. It was developed for professional use in business, industry, and science and to support dependency analysis, deviation detection, unsupervised clustering, nonlinear regression, data association, pattern recognition, and animated monitoring. Based on the concept of self-organizing maps (SOMs), it employs a robust variant of unsupervised neural networks--namely, Kohonen's Batch-SOM, which is further enhanced with a new scaling technique for speeding up the learning process. This tool provides a powerful means by which to analyze complex data sets without prior statistical knowledge. The data representation contained in the trained SOM is systematically converted to be used in a spectrum of visualization techniques, such as evaluating dependencies between components, investigating geometric properties of the data distribution, searching for clusters, or monitoring new data. We have used this software tool to analyze and visualize multiple influences of the ocellar system on free-flight behavior in giant honeybees. Occlusion of ocelli will affect orienting reactivities in relation to flight target, level of disturbance, and position of the bee in the flight chamber; it will induce phototaxis and make orienting imprecise and dependent on motivational settings. Ocelli permit the adjustment of orienting strategies to environmental demands by enforcing abilities such as centering or flight kinetics and by providing independent control of posture and flight course.
NASA Astrophysics Data System (ADS)
Adams, Hans-Peter; Wagner, Simone; Koziol, James A.
1998-06-01
Magnetic resonance imaging (MRI) is routinely used for the diagnosis of multiple sclerosis (MS) and for objective assessment of the extent of disease as a marker of treatment efficacy in MS clinical trials. The purpose of this study is to compare the evaluation of T2-weighted MRI scans in MS patients using a semi-automated quantitative technique with an independent assessment by a neurologist. Baseline, 6-month, and 12-month T2-weighted MRI scans from 41 chronic progressive MS patients were examined. The lesion volume ranged from 0.50 to 51.56 cm3 (mean: 8.08 cm3). Reproducibility of the quantitative technique was assessed by the re-evaluation of a random subset of 20 scans; the coefficient of variation of the replicate determinations was 8.2%. The reproducibility of the neurologist's evaluations was assessed by the re-evaluation of a random subset of 10 patients. The rank correlation between the results of the two methods was 0.097, which did not significantly differ from zero. Disease-related activity in T2-weighted MRI scans is a multi-dimensional construct and is not adequately summarized solely by determination of lesion volume. In this setting, image analysis software should not only support storage and retrieval of lesions as sets of pixels, but should also support links to an anatomical dictionary.
Memory-assisted quantum key distribution resilient against multiple-excitation effects
NASA Astrophysics Data System (ADS)
Lo Piparo, Nicolò; Sinclair, Neil; Razavi, Mohsen
2018-01-01
Memory-assisted measurement-device-independent quantum key distribution (MA-MDI-QKD) has recently been proposed as a technique to improve the rate-versus-distance behavior of QKD systems by using existing, or nearly achievable, quantum technologies. The promise is that MA-MDI-QKD would require less demanding quantum memories than the ones needed for probabilistic quantum repeaters. Nevertheless, early investigations suggest that, in order to beat the conventional memory-less QKD schemes, the quantum memories used in the MA-MDI-QKD protocols must have high bandwidth-storage products and short interaction times. Among different types of quantum memories, ensemble-based memories offer some of the required specifications, but they typically suffer from multiple-excitation effects. To avoid the latter issue, in this paper, we propose two new variants of MA-MDI-QKD, both relying on single-photon sources for entangling purposes. One is based on known techniques for entanglement distribution in quantum repeaters. This scheme turns out to offer no advantage even if one uses ideal single-photon sources. By finding the root cause of the problem, we then propose another setup, which can outperform single memory-less setups even if we allow for some imperfections in our single-photon sources. For such a scheme, we compare the key rate for different types of ensemble-based memories and show that certain classes of atomic ensembles can improve the rate-versus-distance behavior.
NASA Astrophysics Data System (ADS)
Michon, Frédéric; Aarts, Arno; Holzhammer, Tobias; Ruther, Patrick; Borghs, Gustaaf; McNaughton, Bruce; Kloosterman, Fabian
2016-08-01
Objective. Understanding how neuronal assemblies underlie cognitive function is a fundamental question in system neuroscience. It poses the technical challenge to monitor the activity of populations of neurons, potentially widely separated, in relation to behaviour. In this paper, we present a new system which aims at simultaneously recording from a large population of neurons from multiple separated brain regions in freely behaving animals. Approach. The concept of the new device is to combine the benefits of two existing electrophysiological techniques, i.e. the flexibility and modularity of micro-drive arrays and the high sampling ability of electrode-dense silicon probes. Main results. Newly engineered long bendable silicon probes were integrated into a micro-drive array. The resulting device can carry up to 16 independently movable silicon probes, each carrying 16 recording sites. Populations of neurons were recorded simultaneously in multiple cortical and/or hippocampal sites in two freely behaving implanted rats. Significance. Current approaches to monitor neuronal activity either allow to flexibly record from multiple widely separated brain regions (micro-drive arrays) but with a limited sampling density or to provide denser sampling at the expense of a flexible placement in multiple brain regions (neural probes). By combining these two approaches and their benefits, we present an alternative solution for flexible and simultaneous recordings from widely distributed populations of neurons in freely behaving rats.
Deng, Peng; Kavehrad, Mohsen; Liu, Zhiwen; Zhou, Zhou; Yuan, Xiuhua
2013-07-01
We study the average capacity performance of multiple-input multiple-output (MIMO) free-space optical (FSO) communication systems using multiple partially coherent beams propagating through non-Kolmogorov strong turbulence, assuming an equal gain combining diversity configuration and modeling the sum of multiple gamma-gamma random variables for multiple independent partially coherent beams. Closed-form expressions for scintillation and average capacity are derived and then used to analyze the dependence on the number of independent diversity branches, power law α, refractive-index structure parameter, propagation distance, and spatial coherence length of the source beams. The results show that the average capacity increases more significantly with the rank of the MIMO channel matrix than with the diversity order. The effect of the diversity order on the average capacity is independent of the power law, turbulence strength parameter, and spatial coherence length, whereas these effects on average capacity are gradually mitigated as the diversity order increases. The average capacity increases and saturates with decreasing spatial coherence length, at rates depending on the diversity order, power law, and turbulence strength. There exist optimal values of the spatial coherence length and diversity configuration for maximizing the average capacity of MIMO FSO links over a variety of atmospheric turbulence conditions.
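A Monte Carlo check of the qualitative behavior described above can be sketched as follows (simplified equal gain combining on intensities; the α, β, and SNR values are assumptions rather than the paper's closed-form results):

```python
# Ergodic capacity over gamma-gamma turbulence with N combined branches.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 4.0, 1.9        # strong-turbulence shape parameters (assumed)
snr_db, n_samples = 10.0, 200_000
snr = 10 ** (snr_db / 10)

def avg_capacity(n_branches):
    # Unit-mean gamma-gamma irradiance: product of two gamma variates.
    h = (rng.gamma(alpha, 1 / alpha, (n_samples, n_branches))
         * rng.gamma(beta, 1 / beta, (n_samples, n_branches)))
    h_egc = h.sum(axis=1) / n_branches          # simplified EGC combiner
    return np.log2(1 + snr * h_egc).mean()      # bits/s/Hz

for n in (1, 2, 4):
    print(f"N = {n}: average capacity ~ {avg_capacity(n):.3f} bit/s/Hz")
```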
Efficient Mutagenesis Independent of Ligation (EMILI).
Füzik, Tibor; Ulbrich, Pavel; Ruml, Tomáš
2014-11-01
Site-directed mutagenesis is one of the most widely used techniques in life sciences. Here we describe an improved and simplified method for introducing mutations at desired sites. It consists of an inverse PCR using a plasmid template and two partially complementary primers. The synthesis step is followed by annealing of the PCR product's sticky ends, which are generated by exonuclease digestion. This method is fast, extremely efficient and cost-effective. It can be used to introduce large insertions and deletions, but also for multiple point mutations in a single step. To show the principle and to prove the efficiency of the method, we present a series of basic mutations (insertions, deletions, point mutations) on pUC19 plasmid DNA.
LaBombard, B; Lyons, L
2007-07-01
A new method for the real-time evaluation of the conditions in a magnetized plasma is described. The technique employs an electronic "mirror Langmuir probe" (MLP), constructed from bipolar rf transistors and associated high-bandwidth electronics. Utilizing a three-state bias waveform and active feedback control, the mirror probe's I-V characteristic is continuously adjusted to be a scaled replica of the "actual" Langmuir electrode immersed in a plasma. Real-time high-bandwidth measurements of the plasma's electron temperature, ion saturation current, and floating potential can thereby be obtained using only a single electrode. Initial tests of a prototype MLP system are reported, proving the concept. Fast-switching metal-oxide-semiconductor field-effect transistors produce the required three-state voltage bias waveform, completing a full cycle in under 1 μs. Real-time outputs of electron temperature, ion saturation current, and floating potential are demonstrated, which accurately track an independent computation of these values from digitally stored I-V characteristics. The MLP technique represents a significant improvement over existing real-time methods, eliminating the need for multiple electrodes and sampling all three plasma parameters at a single spatial location.
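The core idea, that three bias states suffice to pin down the three unknowns of a Langmuir characteristic, can be illustrated offline. The sketch below solves for electron temperature, ion saturation current, and floating potential from three (voltage, current) samples of an assumed single-exponential probe characteristic; the bias values, the sign convention, and the "measured" currents are synthetic stand-ins, not the MLP electronics.

```python
import numpy as np
from scipy.optimize import fsolve

def probe_current(V, Isat, Te, Vf):
    # Assumed Langmuir characteristic I(V) = Isat*(exp((V-Vf)/Te) - 1);
    # V and Vf in volts, Te in eV-equivalent volts. Conventions vary.
    return Isat * (np.exp((V - Vf) / Te) - 1.0)

V_bias = np.array([-60.0, -10.0, 20.0])                      # three-state bias [V]
I_meas = probe_current(V_bias, Isat=0.5, Te=15.0, Vf=-5.0)   # synthetic "data"

def residuals(p):
    # Three equations in three unknowns (Isat, Te, Vf)
    return probe_current(V_bias, *p) - I_meas

Isat, Te, Vf = fsolve(residuals, x0=[0.1, 10.0, 0.0])
print(f"Isat = {Isat:.2f} A, Te = {Te:.1f} eV, Vf = {Vf:.1f} V")
```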
Composition Independent Thermometry in Gaseous Combustion Using Spectral Lineshape Information
NASA Astrophysics Data System (ADS)
Zelenak, Dominic
2016-11-01
Temperature is an important thermochemical property that holds the key to revealing several combustion phenomena such as pollutant formation, flame extinction, and heat release. In a practical combusting environment, the local composition is unknown, hindering the effectiveness of established non-intrusive thermometry techniques. This study aims to offset this limitation by developing laser thermometry techniques that do not require prior knowledge of the local composition. Multiple methods for obtaining temperature are demonstrated, which make use of the spectral line broadening of an absorbing species (Kr) seeded into the flow. These techniques involve extracting the Doppler broadening from the Voigt profile and utilizing compositional scaling of collisional broadening and shift to determine temperature. Doppler broadening-temperature scaling of two-photon Kr-PLIF is provided. Lean-premixed and diffusion jet flames of CH4 will serve as the test bed for experimentation, and validation of the two methods will be made using the corresponding temperature determined from Rayleigh scattering imaging with adiabatic mixing and unity Lewis number assumptions. A ratiometric dual-lineshape thermometry method for turbulent flames will also be introduced. AFOSR Grant FA9550-16-1-0190 with Dr. Chiping Li as Program Manager.
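For the Doppler-broadening leg of the approach, temperature follows directly from the Gaussian linewidth. The sketch below inverts the standard Doppler FWHM relation for krypton; the 214.7 nm two-photon excitation wavelength and the example linewidth are assumptions used only for illustration.

```python
import numpy as np

kB = 1.380649e-23                 # Boltzmann constant [J/K]
c = 2.99792458e8                  # speed of light [m/s]
m_kr = 83.798 * 1.66053907e-27    # krypton atomic mass [kg]

nu0 = c / 214.7e-9                # assumed Kr two-photon excitation frequency [Hz]

def doppler_temperature(fwhm_hz, nu0, m):
    # Invert the Gaussian Doppler width: FWHM = (nu0/c)*sqrt(8 ln2 kB T / m)
    return m * c**2 * (fwhm_hz / nu0) ** 2 / (8.0 * np.log(2.0) * kB)

print(f"T ~ {doppler_temperature(3.0e9, nu0, m_kr):.0f} K for a 3 GHz FWHM")
```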
Classification of independent components of EEG into multiple artifact classes.
Frølich, Laura; Andersen, Tobias S; Mørup, Morten
2015-01-01
In this study, we aim to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural and five nonneural types of components. Between subjects within studies, high classification performances were obtained. Between studies, however, classification was more difficult. For neural versus nonneural classifications, performance was on par with previous results obtained by others. We found that automatic separation of multiple artifact classes is possible with a small feature set. Our method can reduce manual workload and allow for the selective removal of artifact classes. Identifying artifacts during EEG recording may be used to instruct subjects to refrain from activity causing them.
Fixed Delay Interferometry for Doppler Extrasolar Planet Detection
NASA Astrophysics Data System (ADS)
Ge, Jian
2002-06-01
We present a new technique based on fixed delay interferometry for high-throughput, high-precision, and multiobject Doppler radial velocity (RV) surveys for extrasolar planets. The Doppler measurements are conducted by monitoring the stellar fringe phase shifts of the interferometer instead of absorption-line centroid shifts as in state-of-the-art echelle spectroscopy. High Doppler sensitivity is achieved through optimizing the optical delay in the interferometer and reducing photon noise by measuring multiple fringes over a broad band. This broadband operation is performed by coupling the interferometer with a low- to medium-resolution postdisperser. The resulting fringing spectra over the bandpass are recorded on a two-dimensional detector, with fringes sampled in the slit spatial direction and the spectrum sampled in the dispersion direction. The resulting total Doppler sensitivity is, in theory, independent of the dispersing power of the postdisperser, which allows for the development of new-generation RV machines with much reduced size, high stability, and low cost compared to echelles. This technique has the potential to improve RV survey efficiency by 2-3 orders of magnitude over the cross-dispersed echelle spectroscopy approach, which would allow a full-sky RV survey of hundreds of thousands of stars for planets, brown dwarfs, and stellar companions once the instrument is operated as a multiobject instrument and is optimized for high throughput. The simple interferometer response potentially allows this technique to be operated at other wavelengths independent of popular iodine reference sources, being actively used in most of the current echelles for Doppler planet searches, to search for planets around early-type stars, white dwarfs, and M, L, and T dwarfs for the first time. The high throughput of this instrument could also allow investigation of extragalactic objects for RV variations at high precision.
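The Doppler sensitivity of the fixed-delay approach can be sanity-checked with the fringe-phase relation Δφ = 2πdΔv/(λc) for optical delay d. A minimal sketch follows; the delay and wavelength are illustrative assumptions, not the instrument's design values.

```python
import numpy as np

c = 2.99792458e8   # speed of light [m/s]
d = 7e-3           # assumed fixed optical delay [m]
lam = 550e-9       # representative visible bandpass wavelength [m]

def fringe_phase_shift(dv):
    # Fringe phase shift for a Doppler velocity shift dv:
    # dphi = 2*pi*d*dv / (lambda*c)
    return 2.0 * np.pi * d * dv / (lam * c)

for dv in (1.0, 10.0, 100.0):   # radial-velocity shifts [m/s]
    print(f"dv = {dv:6.1f} m/s  ->  dphi = {fringe_phase_shift(dv):.2e} rad")
```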
Long-term follow-up results of umbilical hernia repair
Venclauskas, Linas; Zilinskas, Justas; Zviniene, Kristina; Kiudelis, Mindaugas
2017-01-01
Introduction Multiple suture techniques and various mesh repairs are used in open or laparoscopic umbilical hernia (UH) surgery. Aim To compare long-term follow-up results of UH repair in different hernia surgery groups and to identify risk factors for UH recurrence. Material and methods A retrospective analysis of 216 patients who underwent elective surgery for UH during a 10-year period was performed. The patients were divided into three groups according to surgery technique (suture, mesh and laparoscopic repair). Early and long-term follow-up results including hospital stay, postoperative general and wound complications, recurrence rate and postoperative patient complaints were reviewed. Risk factors for recurrence were also analyzed. Results One hundred and forty-six patients were operated on using suture repair, 52 using open mesh and 18 using laparoscopic repair technique. 77.8% of patients underwent long-term follow-up. The postoperative wound complication rate and long-term postoperative complaints were significantly higher in the open mesh repair group. The overall hernia recurrence rate was 13.1%. Only 2 (1.7%) patients with small hernias (< 2 cm) had a recurrence in the suture repair group. Logistic regression analysis showed that body mass index (BMI) > 30 kg/m2, diabetes and wound infection were independent risk factors for umbilical hernia recurrence. Conclusions The overall umbilical hernia recurrence rate was 13.1%. Body mass index > 30 kg/m2, diabetes and wound infection were independent risk factors for UH recurrence. According to our study results, laparoscopic medium and large umbilical hernia repair has slight advantages over open mesh repair concerning early postoperative complications, long-term postoperative pain and recurrence. PMID:29362649
Differential Binding Models for Direct and Reverse Isothermal Titration Calorimetry.
Herrera, Isaac; Winnik, Mitchell A
2016-03-10
Isothermal titration calorimetry (ITC) is a technique to measure the stoichiometry and thermodynamics from binding experiments. Identifying an appropriate mathematical model to evaluate titration curves of receptors with multiple sites is challenging, particularly when the stoichiometry or binding mechanism is not available. In a recent theoretical study, we presented a differential binding model (DBM) to study calorimetry titrations independently of the interaction among the binding sites (Herrera, I.; Winnik, M. A. J. Phys. Chem. B 2013, 117, 8659-8672). Here, we build upon our DBM and show its practical application to evaluate calorimetry titrations of receptors with multiple sites independently of the titration direction. Specifically, we present a set of ordinary differential equations (ODEs) with the general form d[S]/dV that can be integrated numerically to calculate the equilibrium concentrations of free and bound species S at every injection step and, subsequently, to evaluate the volume-normalized heat signal (δQ(V) = δq/dV) of direct and reverse calorimetry titrations. Additionally, we identify factors that influence the shape of the titration curve and can be used to optimize the initial concentrations of titrant and analyte. We demonstrate the flexibility of our updated DBM by applying these differentials and a global regression analysis to direct and reverse calorimetric titrations of gadolinium ions with multidentate ligands of increasing denticity, namely, diglycolic acid (DGA), citric acid (CIT), and nitrilotriacetic acid (NTA), and use statistical tests to validate the stoichiometries for the metal-ligand pairs studied.
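A minimal numerical sketch of a differential binding model helps make the d[S]/dV formulation concrete. The version below treats the simplest case, a 1:1 interaction, integrates the total concentrations against injected volume, and converts the change in bound complex to a per-injection heat. K, ΔH, the concentrations, and the omission of the perfusion-cell displacement correction are all simplifying assumptions, not the authors' full model.

```python
import numpy as np
from scipy.integrate import solve_ivp

K, dH = 1.0e6, -40e3          # association constant [1/M], enthalpy [J/mol]
V0 = 1.4e-3                   # cell volume [L]
M0, X_syr = 50e-6, 500e-6     # initial analyte and syringe titrant [M]

def bound(Mt, Xt):
    # Equilibrium complex concentration [ML] for 1:1 binding (usual quadratic)
    b = Mt + Xt + 1.0 / K
    return 0.5 * (b - np.sqrt(b * b - 4.0 * Mt * Xt))

def dSdV(V, S):
    # d[S]/dV for the total concentrations: dilution of analyte,
    # delivery plus dilution of titrant
    Mt, Xt = S
    return [-Mt / V0, (X_syr - Xt) / V0]

V_inj = np.linspace(0.0, 250e-6, 26)   # cumulative injected volume [L]
sol = solve_ivp(dSdV, (0.0, V_inj[-1]), [M0, 0.0], t_eval=V_inj, rtol=1e-8)
ML = bound(sol.y[0], sol.y[1])
dq = dH * V0 * np.diff(ML)             # heat evolved per injection step [J]
print(dq[:5])
```

Running the same integration with titrant and analyte swapped gives the reverse-titration curve, which is the direction-independence the paper exploits.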
DOE Office of Scientific and Technical Information (OSTI.GOV)
Battista, L.; Sciuto, S. A.; Scorza, A.
2013-03-15
In this work, a simple and low-cost air flow sensor, based on a novel fiber-optic sensing technique, has been developed for monitoring the air flow rates supplied by a neonatal ventilator to support infants in intensive care units. The sensing technique offers (a) immunity to light-intensity variations that are independent of the measurand and (b) a reduction of shortcomings typical of biomedical settings (electromagnetic interference and patient electrical safety). The sensing principle is the measurement of the transversal displacement of an emitting fiber-optic cantilever under the action of the air flow on it; the fiber-tip displacement is measured by means of a photodiode linear array placed in front of the end face of the emitting optical fiber to detect its light-intensity profile. Because the measurement is based on detection of the illumination pattern, and not on an intensity-modulation technique, it is less sensitive to measurand-independent light-intensity fluctuations than intensity-based sensors. The technique is adopted here to develop two configurations of an air flow sensor suitable for the air flow rates typically occurring during mechanical ventilation of newborns: a mono-directional and a bi-directional transducer are proposed. A mathematical model of the air flow sensor is proposed and a static calibration of the two arrangements has been performed: a measurement range up to 3.00 × 10⁻⁴ m³/s (18.0 l/min) for the mono-directional sensor and a measurement range of ±3.00 × 10⁻⁴ m³/s (±18.0 l/min) for the bi-directional sensor are experimentally evaluated, in accordance with the air flow rates normally encountered during tidal breathing of infants with a mass lower than 10 kg. The static calibration data agree with the proposed theoretical model: for the mono-directional configuration, the coefficient of determination r² is 0.997; for the bi-directional configuration, r² is 0.990 for positive flows (inspiration) and 0.988 for negative flows (expiration). The measurement uncertainty δQ of the air flow rate has been evaluated by means of the propagation of distributions; the percentage error for the bi-directional sensor ranges from a minimum of about 0.5% at -18.0 l/min to a maximum of about 9% at -12.0 l/min.
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Taking the separation of the fetal heart sound from the mixed signal acquired by an electronic stethoscope as the research background, this paper puts forward a single-channel mixed-signal blind source separation algorithm based on multiple ICA processing. First, empirical mode decomposition (EMD) decomposes the single-channel mixed signal into multiple orthogonal signal components, which are then processed by ICA; the resulting independent signal components are called independent sub-components of the mixed signal. Next, by combining the independent sub-components with the single-channel mixed signal, the single channel is expanded into multiple channels, which turns the under-determined blind source separation problem into a well-posed one. A further ICA step then yields an estimate of the source signal. Finally, if the separation is unsatisfactory, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. Simulation results show that the algorithm separates single-channel mixed physiological signals effectively.
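The EMD-then-ICA expansion described above is straightforward to prototype. The sketch below is a minimal version on a synthetic two-tone signal; the PyEMD package (pip name EMD-signal) is assumed available, and the stand-in waveforms and component count are illustrative, not fetal heart sound data.

```python
import numpy as np
from PyEMD import EMD                      # pip install EMD-signal (assumed)
from sklearn.decomposition import FastICA

fs = 1000
t = np.arange(0, 4, 1 / fs)
mixed = (np.sin(2 * np.pi * 1.2 * t) +             # stand-in "heart" tone
         0.6 * np.sin(2 * np.pi * 0.3 * t) +       # stand-in interference
         0.1 * np.random.default_rng(0).normal(size=t.size))

imfs = EMD().emd(mixed)                    # (n_imfs, n_samples) oscillatory modes
virtual = np.vstack([imfs, mixed])         # IMFs + original channel = "multipath"
est = FastICA(n_components=2, random_state=0).fit_transform(virtual.T)
# est[:, k] are the estimated sources (two synthetic sources here); re-run
# the ICA stage on poor separations, as the paper's iteration proposes.
print(est.shape)
```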
NASA Astrophysics Data System (ADS)
Hann, Swook; Kim, Dong-Hwan; Park, Chang-Soo
2006-04-01
A monitoring technique for multiple power-splitter passive optical networks (PS-PON) is presented. The technique is based on remote sensing of fiber Bragg gratings (FBGs) using a tunable OTDR. To monitor the multiple PS-PON, an FBG is used as a wavelength-dependent reflective reference on each branch end of the PS. The FBG helps discern individual events in the multiple PS-PON, in combination with information from the Rayleigh backscattered power. The multiple PS-PON can be analyzed by the monitoring method at the central office while 10-Gbit/s services remain in operation.
Multiverse dark matter: SUSY or axions
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio
2014-11-01
The observed values of the cosmological constant and the abundance of Dark Matter (DM) can be successfully understood, using certain measures, by imposing the anthropic requirement that density perturbations go non-linear and virialize to form halos. This requires a probability distribution favoring low amounts of DM, i.e. low values of the PQ scale f for the QCD axion and low values of the superpartner mass scale for LSP thermal relics. In theories with independent scanning of multiple DM components, there is a high probability for DM to be dominated by a single component. For example, with independent scanning of f and the superpartner mass scale, TeV-scale LSP DM and an axion solution to the strong CP problem are unlikely to coexist. With thermal LSP DM, the scheme allows an understanding of a Little SUSY Hierarchy with multi-TeV superpartners. Alternatively, with axion DM, PQ breaking either before or after inflation leads to f typically below the projected range of the current ADMX experiment of f = (3-30) × 10¹¹ GeV, providing strong motivation to develop experimental techniques for probing lower f.
NASA Astrophysics Data System (ADS)
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-12-01
This study investigated the multiple-choice test of understanding of vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fitted with the three-parameter logistic model of IRT, using the PARSCALE program. The latent TUV ability parameter was estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed using item response curves (IRC), a simplified form of IRT analysis. Data were gathered on 2392 science and engineering freshmen from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test, since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both the IRT and IRC approaches reveal test characteristics beyond those revealed by classical test analysis methods. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
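For readers unfamiliar with the model, the three-parameter logistic (3PL) item response function fitted to each TUV item has a simple closed form. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic IRT model: probability that an examinee of
    ability theta answers an item correctly, with discrimination a,
    difficulty b, and guessing (lower asymptote) c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3.0, 3.0, 7)   # ability scale
print(np.round(p_correct_3pl(theta, a=1.2, b=0.3, c=0.2), 3))
```

Empirical item response curves of the kind used for the distractor analysis are obtained by binning examinees on ability and plotting, per bin, the fraction selecting each answer choice.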
Zhang, Hainan; Tran, Hong Hanh; Chung, Bong Hyun; Lee, Nae Yoon
2013-03-21
In this paper, we demonstrate a simple technique for sequentially introducing multiple sample liquids into microchannels driven by centrifugal force combined with a hydrophobic barrier pressure and apply the technique for performing solid-phase based on-chip DNA purification. Three microchannels with varying widths, all equipped with independent sample reservoirs at the inlets, were fabricated on a hydrophobic elastomer, poly(dimethylsiloxane) (PDMS). First, glass beads were packed inside the reaction chamber, and a whole cell containing the DNA extract was introduced into the widest channel by applying centrifugal force for physical adsorption of the DNA onto the glass beads. Next, washing and elution solutions were sequentially introduced into the intermediate and narrowest microchannels, respectively, by gradually increasing the amount of centrifugal force. Through a precise manipulation of the centrifugal force, the DNA adsorbed onto the glass beads was successfully washed and eluted in a continuous manner without the need to introduce each solution manually. A stepwise injection of liquids was successfully demonstrated using multiple ink solutions, the results of which corresponded well with the theoretical analyses. As a practical application, the D1S80 locus of human genomic DNA, which is widely used for forensic purposes, was successfully purified using the microdevice introduced in this study, as demonstrated through successful target amplification. This will pave the way for the construction of a control-free valve system for realizing on-chip DNA purification, which is one of the most labor-intensive and hard-to-miniaturize components, on a greatly simplified and miniaturized platform employing hydrophobic PDMS.
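The sequencing principle, that narrower hydrophobic channels burst at higher spin speeds, follows from balancing the centrifugal pressure ρω²r̄Δr against the capillary barrier of a hydrophobic rectangular channel. The sketch below compares illustrative channel widths; all dimensions, the contact angle, and the simplified burst criterion are assumptions, not the paper's measured values.

```python
import numpy as np

rho, sigma = 1000.0, 0.072      # water density [kg/m^3], surface tension [N/m]
theta = np.deg2rad(110.0)       # assumed contact angle on PDMS (>90 deg)
depth = 50e-6                   # channel depth [m]
r_mean, dr = 25e-3, 10e-3       # mean radial position and liquid column [m]

for width in (300e-6, 150e-6, 75e-6):   # wide -> narrow channels
    # Hydrophobic capillary barrier for a rectangular cross-section
    p_barrier = -2.0 * sigma * np.cos(theta) * (1.0 / width + 1.0 / depth)
    # Burst when centrifugal pressure rho*w^2*r_mean*dr exceeds the barrier
    omega = np.sqrt(p_barrier / (rho * r_mean * dr))
    print(f"width {width*1e6:4.0f} um: burst at ~{omega/(2*np.pi)*60:5.0f} rpm")
```

Stepping the rotation speed past each channel's burst threshold in turn releases the loading, washing, and elution liquids sequentially, which is the valve-free behavior described above.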
NASA Astrophysics Data System (ADS)
Henzl, V.; Croft, S.; Richard, J.; Swinhoe, M. T.; Tobin, S. J.
2013-06-01
In this paper, we present a novel approach to estimating the total plutonium content in a spent fuel assembly (SFA) that is based on combining information from a passive measurement of the total neutron count rate (PN) of the assayed SFA and a measure of its multiplication. While PN can be measured essentially with any non-destructive assay (NDA) technique capable of neutron detection, the measure of multiplication is, in our approach, determined by means of active interrogation using an instrument based on the Differential Die-Away technique (DDA). The DDA is a NDA technique developed within the U.S. Department of Energy's Next Generation Safeguards Initiative (NGSI) project focused on the utilization of NDA techniques to determine the elemental plutonium content in commercial nuclear SFA's [1]. This approach was adopted since DDA also allows determination of other SFA characteristics, such as burnup, initial enrichment, and cooling time, and also allows for detection of certain types of diversion of nuclear material. The quantification of total plutonium is obtained using an analytical correlation function in terms of the observed PN and active multiplication. Although somewhat similar approaches relating Pu content with PN have been adopted in the past, we demonstrate by extensive simulation of the fuel irradiation and NDA process that our analytical method is independent of explicit knowledge of the initial enrichment, burnup, and an absolute value of the SFA's reactivity (i.e. multiplication factor). We show that when tested with MCNPX™ simulations comprising the 64 SFA NGSI Spent Fuel Library-1 we were able to determine elemental plutonium content, using just a few calibration parameters, with an average variation in the prediction of around 1-2% across the wide dynamic range of irradiation history parameters used, namely initial enrichment (IE=2-5%), burnup (BU=15-60 GWd/tU) and cooling time (CT=1-80 y). In this paper we describe the basic approach and the success obtained against synthetic data. We recognize that our synthetic data may not fully capture the rich behavior of actual irradiated fuel and the uncertainties of the practical measurements. However, this design study is based on a rather complete nuclide inventory and the correlations for Pu seem robust to variation of input. Thus it is concluded that the proposed method is sufficiently promising that further experimentally based work is desirable.
Multiple electron processes of He and Ne by proton impact
NASA Astrophysics Data System (ADS)
Terekhin, Pavel Nikolaevich; Montenegro, Pablo; Quinto, Michele; Monti, Juan; Fojon, Omar; Rivarola, Roberto
2016-05-01
A detailed investigation of multiple electron processes (single and multiple ionization, single capture, transfer-ionization) of He and Ne is presented for proton impact at intermediate and high collision energies. Exclusive absolute cross sections for these processes have been obtained by calculating transition probabilities in the independent electron and independent event models as a function of impact parameter in the framework of the continuum distorted wave-eikonal initial state theory. A binomial analysis is employed to calculate exclusive probabilities. The comparison with available theoretical and experimental results shows that exclusive probabilities are needed for a reliable description of the experimental data. The developed approach can be used for obtaining the input database for modeling multiple electron processes of charged particles passing through matter.
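The binomial analysis has a compact form in the independent electron model: if p(b) is the single-electron ionization probability at impact parameter b, the exclusive probability that exactly q of N equivalent electrons are ionized is the binomial term, and the exclusive cross section follows by impact-parameter integration. The stand-in p(b) below is an arbitrary assumption, not the CDW-EIS probability.

```python
import numpy as np
from scipy.special import comb
from scipy.integrate import trapezoid

def p_single(b, b0=1.0):
    # Illustrative stand-in for the single-electron transition probability
    return 0.8 * np.exp(-(b / b0) ** 2)

def p_exclusive(q, N, b):
    # Binomial analysis: exactly q of N equivalent electrons ionized
    p = p_single(b)
    return comb(N, q) * p**q * (1.0 - p) ** (N - q)

b = np.linspace(0.0, 10.0, 2001)
for q in (1, 2, 3):
    # sigma_q = 2*pi * integral of P_q(b) * b db   (arbitrary units here)
    sigma_q = 2.0 * np.pi * trapezoid(p_exclusive(q, N=8, b=b) * b, b)
    print(f"q = {q}: sigma_q = {sigma_q:.3f}")
```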
Contact-free heart rate measurement using multiple video data
NASA Astrophysics Data System (ADS)
Hung, Pang-Chan; Lee, Kual-Zheng; Tsai, Luo-Wei
2013-10-01
In this paper, we propose a contact-free heart rate measurement method by analyzing sequential images of multiple video data. In the proposed method, skin-like pixels are first detected in the multiple video streams to extract color features. These color features are synchronized and analyzed by independent component analysis. A representative component is finally selected among the independent component candidates to measure the heart rate, achieving under 2% deviation on average compared with a pulse oximeter in a controlled environment. The advantages of the proposed method are that: 1) it uses a low-cost and highly accessible camera device; 2) it avoids user discomfort through contact-free measurement; and 3) it achieves a low error rate and high stability by integrating multiple video data.
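A minimal sketch of the classic ICA-based remote heart rate pipeline (in the spirit of Poh et al.) follows: per-frame mean R, G, B traces of skin pixels are unmixed by ICA, and the component with the strongest cardiac-band spectral peak gives the heart rate. The RGB traces here are synthetic; in practice they would be computed from the detected skin pixels of each video.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 30.0                                       # camera frame rate [Hz]
rng = np.random.default_rng(1)
t = np.arange(0, 30, 1 / fs)
pulse = 0.02 * np.sin(2 * np.pi * 1.25 * t)     # 75 bpm ground truth
rgb = np.stack([pulse * g + rng.normal(0, 0.01, t.size) + 1.0
                for g in (0.3, 1.0, 0.6)], axis=1)   # synthetic skin means

x = (rgb - rgb.mean(0)) / rgb.std(0)            # normalize the three channels
src = FastICA(n_components=3, random_state=0).fit_transform(x)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.75) & (freqs < 4.0)           # plausible HR band, 45-240 bpm
power = np.abs(np.fft.rfft(src, axis=0)) ** 2
best = np.argmax(power[band].max(axis=0))       # most periodic component
hr = 60.0 * freqs[band][np.argmax(power[band, best])]
print(f"estimated heart rate ~ {hr:.0f} bpm")
```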
Brown, Philip J; Mannava, Sandeep; Seyler, Thorsten M; Plate, Johannes F; Van Sikes, Charles; Stitzel, Joel D; Lang, Jason E
2016-10-26
Femoral head core decompression is an efficacious joint-preserving procedure for treatment of early stage avascular necrosis. However, postoperative fractures have been described which may be related to the decompression technique used. Femoral head decompressions were performed on 12 matched human cadaveric femora comparing a large 8 mm single bore versus multiple 3 mm small drilling techniques. Ultimate failure strength of the femora was tested using a servo-hydraulic material testing system. Ultimate load to failure was compared between the different decompression techniques using two paired ANCOVA linear regression models. Prior to biomechanical testing and after the intervention, volumetric bone mineral density was determined using quantitative computed tomography to account for variation between cadaveric samples and to assess the amount of bone disruption by the core decompression. Core decompression, using the small diameter bore and multiple drilling technique, withstood significantly greater load prior to failure compared with the single large bore technique after adjustment for bone mineral density (p < 0.05). The 8 mm single bore technique removed a significantly larger volume of bone compared to the 3 mm multiple drilling technique (p < 0.001). However, total fracture energy was similar between the two core decompression techniques. When considering core decompression for the treatment of early stage avascular necrosis, the multiple small bore technique removed less bone volume, thereby potentially leading to higher load to failure.
NASA Astrophysics Data System (ADS)
Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.
2017-07-01
Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size, so parallelization is needed to speed up a computation that would otherwise take a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes square, symmetric matrices. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on its GPUs (graphics processing units).
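To make the parallelization target concrete, the toy sketch below computes a rectangular sparse matrix-vector product in independent row blocks and reports how many entries of x each block must fetch, which is the communication volume a hypergraph partitioner would minimize. The naive contiguous split and serial loop are stand-ins for a partitioned, GPU-executed version; the matrix is random.

```python
import numpy as np
from scipy.sparse import random as sprandom

rng = np.random.default_rng(0)
A = sprandom(1000, 700, density=0.01, format="csr", random_state=0)  # not square
x = rng.normal(size=700)

blocks = np.array_split(np.arange(A.shape[0]), 4)     # 4 "workers"
y = np.concatenate([A[b] @ x for b in blocks])        # independent block products
assert np.allclose(y, A @ x)

for b in blocks:
    cols = np.unique(A[b].indices)   # which x entries this block needs
    print(f"rows {b[0]:4d}-{b[-1]:4d}: fetches {cols.size} x-entries")
```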
Are Independent Probes Truly Independent?
ERIC Educational Resources Information Center
Camp, Gino; Pecher, Diane; Schmidt, Henk G.; Zeelenberg, Rene
2009-01-01
The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval…
Multiple and Inseparable: Conceptualizing the Development of Independence and Interdependence
ERIC Educational Resources Information Center
Raeff, Catherine
2006-01-01
Based on the position that cultural ideologies shape child development, many developmental analyses have focused on analyzing cultural conceptions of independence and interdependence. Less attention has been paid to charting the developmental sequences of children's independent and interdependent behavior that are ostensibly shaped by cultural…
Kartalis, Nikolaos; Loizou, Louiza; Edsborg, Nick; Segersvärd, Ralf; Albiin, Nils
2012-10-01
To compare respiratory-triggered, free-breathing, and breath-hold DWI techniques regarding (1) image quality, and (2) signal intensity (SI) and ADC measurements in pancreatic ductal adenocarcinoma (PDAC). Fifteen patients with histopathologically proven PDAC underwent DWI prospectively at 1.5 T (b = 0, 50, 300, 600 and 1,000 s/mm²) with the three techniques. Two radiologists, independently and blindly, assigned total image quality scores [sum of rating diffusion images (lesion detection, anatomy, presence of artefacts) and ADC maps (lesion characterisation, overall image quality)] per technique and ranked them. The lesion SI, signal-to-noise ratio, mean ADC and coefficient of variation (CV) were compared. Total image quality scores for respiratory-triggered, free-breathing and breath-hold techniques were 17.9, 16.5 and 17.1 respectively (respiratory-triggered was significantly higher than free-breathing but not breath-hold). The respiratory-triggered technique had a significantly higher ranking. Lesion SI on all b-values and signal-to-noise ratio on b300 and b600 were significantly higher for the respiratory-triggered technique. For respiratory-triggered, free-breathing and breath-hold techniques the mean ADCs were 1.201, 1.132 and 1.253 × 10⁻³ mm²/s, and mean CVs were 8.9, 10.8 and 14.1 % respectively (respiratory-triggered and free-breathing techniques had a significantly lower mean CV than the breath-hold technique). In both analyses, respiratory-triggered DWI showed superiority and seems the optimal DWI technique for demonstrating PDAC. • Diffusion-weighted magnetic resonance imaging is increasingly used to detect pancreatic cancer • Images are acquired using various breathing techniques and multiple b-values • Breathing techniques used: respiratory-triggering, free-breathing and breath-hold • Respiratory-triggering seems the optimal breathing technique for demonstrating pancreatic cancer.
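For context, each voxel's ADC in such maps comes from a mono-exponential fit across the b-values, S(b) = S0 · exp(-b · ADC), which reduces to a log-linear least-squares fit. A sketch on synthetic signal values (real data would be per-voxel intensities from the DWI series):

```python
import numpy as np

b = np.array([0, 50, 300, 600, 1000], float)   # b-values [s/mm^2], as in the study
adc_true, s0 = 1.2e-3, 100.0                   # synthetic ground truth [mm^2/s]
s = s0 * np.exp(-b * adc_true)                 # noiseless signal decay

slope, intercept = np.polyfit(b, np.log(s), 1) # ln S = ln S0 - b*ADC
print(f"ADC ~ {-slope:.2e} mm^2/s")            # recovers ~1.20e-03
```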
Multiple Query Evaluation Based on an Enhanced Genetic Algorithm.
ERIC Educational Resources Information Center
Tamine, Lynda; Chrisment, Claude; Boughanem, Mohand
2003-01-01
Explains the use of genetic algorithms to combine results from multiple query evaluations to improve relevance in information retrieval. Discusses niching techniques, relevance feedback techniques, and evolution heuristics, and compares retrieval results obtained by both genetic multiple query evaluation and classical single query evaluation…
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2001-01-01
The Independent Component Analysis is a recently developed technique for component extraction. This new method requires statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has been used recently for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e. an exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. Unlike other rotation techniques (RT), this rotation uses no localization criterion; only the global generalization of decorrelation to statistical independence is used. This rotation of the PCA solution appears able to overcome the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.
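The rotation interpretation is easy to verify numerically: whiten a two-source mixture with PCA, run ICA, and check that the map between PCA scores and ICA sources is (numerically) orthogonal. A minimal sketch with synthetic non-Gaussian sources and an arbitrary mixing matrix:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
s = np.stack([np.sign(rng.normal(size=5000)),        # two non-Gaussian sources
              rng.uniform(-1.0, 1.0, 5000)], axis=1)
x = s @ np.array([[1.0, 0.4], [0.3, 1.0]])           # linear mixture

z = PCA(whiten=True).fit_transform(x)                # decorrelated PCA scores
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0).fit(x)
y = ica.transform(x)                                 # estimated sources

# Least-squares map from PCA scores to ICA sources: close to a pure rotation
r, *_ = np.linalg.lstsq(z, y, rcond=None)
print(np.round(r @ r.T, 3))                          # ~ identity matrix
```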
NASA Astrophysics Data System (ADS)
Shokralla, Shaddy Samir Zaki
Multi-frequency eddy current measurements are employed in estimating pressure tube (PT) to calandria tube (CT) gap in CANDU fuel channels, a critical inspection activity required to ensure fitness for service of fuel channels. In this thesis, a comprehensive characterization of eddy current gap data is laid out, in order to extract further information on fuel channel condition, and to identify generalized applications for multi-frequency eddy current data. A surface profiling technique, generalizable to multiple probe and conductive material configurations has been developed. This technique has allowed for identification of various pressure tube artefacts, has been independently validated (using ultrasonic measurements), and has been deployed and commissioned at Ontario Power Generation. Dodd and Deeds solutions to the electromagnetic boundary value problem associated with the PT to CT gap probe configuration were experimentally validated for amplitude response to changes in gap. Using the validated Dodd and Deeds solutions, principal components analysis (PCA) has been employed to identify independence and redundancies in multi-frequency eddy current data. This has allowed for an enhanced visualization of factors affecting gap measurement. Results of the PCA of simulation data are consistent with the skin depth equation, and are validated against PCA of physical experiments. Finally, compressed data acquisition has been realized, allowing faster data acquisition for multi-frequency eddy current systems with hardware limitations, and is generalizable to other applications where real time acquisition of large data sets is prohibitive.
Gavelis, Gregory S; White, Richard A; Suttle, Curtis A; Keeling, Patrick J; Leander, Brian S
2015-07-17
Most microbial eukaryotes are uncultivated and thus poorly suited to standard genomic techniques. This is the case for Polykrikos lebouriae, a dinoflagellate with ultrastructurally aberrant plastids. It has been suggested that these plastids stem from a novel symbiosis with either a diatom or haptophyte, but this hypothesis has been difficult to test as P. lebouriae dwells in marine sand rife with potential genetic contaminants. We applied spliced-leader targeted PCR (SLPCR) to obtain dinoflagellate-specific transcriptomes on single-cell isolates of P. lebouriae from marine sediments. Polykrikos lebouriae expressed nuclear-encoded photosynthetic genes that were characteristic of the peridinin plastids of dinoflagellates, rather than those from a diatom or haptophyte. We confirmed these findings at the genomic level using multiple displacement amplification (MDA) to obtain a partial plastome of P. lebouriae. From these data, we infer that P. lebouriae has retained the peridinin plastids ancestral for dinoflagellates as a whole, while its closest relatives have lost photosynthesis multiple times independently. We discuss these losses with reference to mixotrophy in polykrikoid dinoflagellates. Our findings demonstrate new levels of variation associated with the peridinin plastids of dinoflagellates and the usefulness of SLPCR approaches on single-cell isolates. Unlike other transcriptomic methods, SLPCR has taxonomic specificity, and can in principle be adapted to different splice-leader bearing groups.
Multiple excitation nano-spot generation and confocal detection for far-field microscopy.
Mondal, Partha Pratim
2010-03-01
An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSF (DoF-PSF), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming the point-by-point based excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most of the bioimaging techniques and may find potential application in high resolution fluorescence microscopy and nanoscale imaging.
29 CFR 788.15 - Multiple crews.
Code of Federal Regulations, 2010 CFR
2010-07-01
... employees splits his employees into several allegedly “independent businesses” in order to take advantage of... are delivered or whether each such crew is a truly independently owned and operated business. If the number of employees in such a truly independently owned and operated business does not exceed eight, the...
Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru
2017-09-01
Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from mFIM effectiveness predicted by multiple regression analysis had a higher degree of predictive accuracy of mFIM at discharge than that directly predicted.
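Method A is a one-line transformation once mFIM effectiveness has been predicted. A minimal sketch (91 points is the motor FIM ceiling used in the formula above; the example inputs are arbitrary):

```python
def predicted_mfim_discharge(mfim_admission, predicted_effectiveness):
    """Method A from the study: convert a predicted mFIM effectiveness back
    into an mFIM score at discharge (motor FIM ceiling = 91 points)."""
    return predicted_effectiveness * (91 - mfim_admission) + mfim_admission

print(predicted_mfim_discharge(40, 0.6))   # -> 70.6 points
```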
The Ice Sheet Mass Balance Inter-comparison Exercise
NASA Astrophysics Data System (ADS)
Shepherd, A.; Ivins, E. R.
2015-12-01
Fluctuations in the mass of ice stored in Antarctica and Greenland are of considerable societal importance. The Ice Sheet Mass Balance Inter-Comparison Exercise (IMBIE) is a joint initiative of ESA and NASA aimed at producing a single estimate of the polar ice sheets' contribution to global sea level. Within IMBIE, estimates of ice sheet mass balance are developed from a variety of satellite geodetic techniques using a common spatial and temporal reference frame and a common appreciation of the contributions due to external signals. The project brings together the laboratories and space agencies that have been instrumental in developing independent estimates of ice sheet mass balance to date. In its first phase, IMBIE involved 27 science teams and delivered a first community assessment of ice sheet mass imbalance to replace 40 individual estimates. The project established that (i) there is good agreement between the three main satellite-based techniques for estimating ice sheet mass balance, (ii) combining satellite data sets leads to significant improvement in certainty, (iii) the polar ice sheets contributed 11 ± 4 mm to global sea levels between 1992 and 2012, and (iv) combined ice losses from Antarctica and Greenland have increased over time, rising from 10% of the global trend in the early 1990s to 30% in the late 2000s. Demand for an updated assessment has grown, and there are now new satellite missions, new geophysical corrections, new techniques, and new teams producing data. The period of overlap between independent satellite techniques has increased from 5 to 12 years, and the full period of satellite data over which an assessment can be performed has increased from 19 to 40 years. It is also clear that multiple satellite techniques are required to confidently separate mass changes associated with snowfall and ice dynamical imbalance - information that is of critical importance for climate modelling. This presentation outlines the approach for the second phase of IMBIE, including the project organisation, the work programme and schedule, the main science goals, and its current status, and reviews the recent and historical contributions that the Antarctic and Greenland ice sheets have made to global sea level rise.
On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels
NASA Astrophysics Data System (ADS)
Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit
2012-12-01
This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA) for power allocation. Finally, we derive a necessary and sufficient condition for which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound of ergodic sum capacity is tight and the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
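For fixed transmit directions, each BS's power allocation reduces to water-filling over its eigen-channels. Below is a generic bisection water-filling sketch, not the paper's exact IWFA, with illustrative channel gains; the same routine would be applied per BS inside the iterative loop.

```python
import numpy as np

def water_fill(g, P, iters=60):
    """Allocate power P over parallel channels with gains g so that
    p_i = max(0, mu - 1/g_i) and sum(p) = P (bisection on the level mu)."""
    lo, hi = 0.0, P + 1.0 / g.min()          # bracket for the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / g, 0.0)
        lo, hi = (mu, hi) if p.sum() < P else (lo, mu)
    return p

g = np.array([2.0, 1.0, 0.25])               # effective eigen-channel gains
p = water_fill(g, P=1.0)
print(p, np.log2(1.0 + g * p).sum())         # allocation and achieved sum rate
```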
Tumor cell migration in complex microenvironments
Polacheck, William J.; Zervantonakis, Ioannis K.; Kamm, Roger D.
2012-01-01
Tumor cell migration is essential for invasion and dissemination from primary solid tumors and for the establishment of lethal secondary metastases at distant organs. In vivo and in vitro models enabled identification of different factors in the tumor microenvironment that regulate tumor progression and metastasis. However, the mechanisms by which tumor cells integrate these chemical and mechanical signals from multiple sources to navigate the complex microenvironment remain poorly understood. In this review, we discuss the factors that influence tumor cell migration with a focus on the migration of transformed carcinoma cells. We provide an overview of the experimental and computational methods that allow the investigation of tumor cell migration, and we highlight the benefits and shortcomings of the various assays. We emphasize that the chemical and mechanical stimulus paradigms are not independent and that crosstalk between them motivates the development of new assays capable of applying multiple, simultaneous stimuli and imaging the cellular migratory response in real-time. These next-generation assays will more closely mimic the in vivo microenvironment to provide new insights into tumor progression, inform techniques to control tumor cell migration, and render cancer more treatable.
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
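The contrast between the two model classes is quickly demonstrated on a one-dimensional response with several local extrema. In the sketch below, a radial-basis-function interpolator stands in for kriging (both interpolate the samples exactly), and the test function is an arbitrary assumption:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator   # scipy >= 1.7

x = np.linspace(0.0, 1.0, 15)[:, None]           # sample sites
y = np.sin(6 * np.pi * x[:, 0]) + 0.5 * x[:, 0]  # response with many extrema

coeff = np.polyfit(x[:, 0], y, deg=2)            # global quadratic least squares
rbf = RBFInterpolator(x, y)                      # passes through every sample

xq = np.array([[0.37]])                          # query point
print(f"quadratic: {np.polyval(coeff, xq[0, 0]):+.3f}, "
      f"interpolator: {rbf(xq)[0]:+.3f}, "
      f"truth: {np.sin(6 * np.pi * 0.37) + 0.5 * 0.37:+.3f}")
```

The quadratic cannot follow the oscillations (one global curvature), while the interpolating model tracks them at the cost of more expensive fitting, which is the trade-off examined above.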
Spelman, Tim; Gray, Orla; Lucas, Robyn; Butzkueven, Helmut
2015-12-09
This report describes a novel Stata-based application of trigonometric regression modelling to 55 years of multiple sclerosis relapse data from 46 clinical centers across 20 countries located in both hemispheres. Central to the success of this method was the strategic use of plot analysis to guide and corroborate the statistical regression modelling. Initial plot analysis was necessary for establishing realistic hypotheses regarding the presence and structural form of seasonal and latitudinal influences on relapse probability and then testing the performance of the resultant models. Trigonometric regression was then necessary to quantify these relationships, adjust for important confounders and provide a measure of certainty as to how plausible these associations were. Synchronization of graphing techniques with regression modelling permitted a systematic refinement of models until best-fit convergence was achieved, enabling novel inferences to be made regarding the independent influence of both season and latitude in predicting relapse onset timing in MS. These methods have the potential for application across other complex disease and epidemiological phenomena suspected or known to vary systematically with season and/or geographic location.
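The study's models were fitted in Stata; the sketch below shows the core trigonometric-regression step in Python on synthetic monthly counts, fitting an annual sine/cosine pair whose coefficients give the seasonal amplitude and peak timing. The data and the single-harmonic form are illustrative assumptions.

```python
import numpy as np

months = np.arange(120)                               # 10 years of monthly bins
rng = np.random.default_rng(0)
counts = (100 + 15 * np.cos(2 * np.pi * (months - 2) / 12)
          + rng.normal(0, 5, months.size))            # synthetic, peak at month 2

# Design matrix: intercept + annual harmonic pair
X = np.column_stack([np.ones_like(months, dtype=float),
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)

amp = np.hypot(beta[1], beta[2])                      # seasonal amplitude
phase = np.arctan2(beta[1], beta[2])                  # phase of the harmonic
peak_month = (phase * 12 / (2 * np.pi)) % 12
print(f"amplitude {amp:.1f}, peak at month index ~{peak_month:.1f}")
```

Plotting `counts` against the fitted curve reproduces the plot-then-model loop the report describes: the graph suggests the harmonic structure, and the regression quantifies it.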
VLBI observations to the APOD satellite
NASA Astrophysics Data System (ADS)
Sun, Jing; Tang, Geshi; Shu, Fengchun; Li, Xie; Liu, Shushi; Cao, Jianfeng; Hellerschmied, Andreas; Böhm, Johannes; McCallum, Lucia; McCallum, Jamie; Lovell, Jim; Haas, Rüdiger; Neidhardt, Alexander; Lu, Weitao; Han, Songtao; Ren, Tianpeng; Chen, Lue; Wang, Mei; Ping, Jinsong
2018-02-01
The APOD (Atmospheric density detection and Precise Orbit Determination) is the first LEO (Low Earth Orbit) satellite in orbit co-located with a dual-frequency GNSS (GPS/BD) receiver, an SLR reflector, and a VLBI X/S dual band beacon. From the overlap statistics between consecutive solution arcs and the independent validation by SLR measurements, the orbit position deviation was below 10 cm before the on-board GNSS receiver got partially operational. In this paper, the focus is on the VLBI observations to the LEO satellite from multiple geodetic VLBI radio telescopes, since this is the first implementation of a dedicated VLBI transmitter in low Earth orbit. The practical problems of tracking a fast moving spacecraft with current VLBI ground infrastructure were solved and strong interferometric fringes were obtained by cross-correlation of APOD carrier and DOR (Differential One-way Ranging) signals. The precision in X-band time delay derived from 0.1 s integration time of the correlator output is on the level of 0.1 ns. The APOD observations demonstrate encouraging prospects of co-location of multiple space geodetic techniques in space, as a first prototype.
Covariate Selection for Multilevel Models with Missing Data
Marino, Miguel; Buxton, Orfeu M.; Li, Yi
2017-01-01
Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods, which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data are present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population.
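A stripped-down sketch of the stacking step follows: generate several imputed copies of an incomplete design matrix, stack them, and run one penalized regression over the stack. Plain lasso (LassoCV) stands in for the authors' group-lasso penalty, and a fixed-effects-only model ignores the multilevel structure; the data are synthetic.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p, M = 200, 8, 5
X = rng.normal(size=(n, p))
y = X[:, 0] - 2.0 * X[:, 3] + rng.normal(size=n)    # only features 0 and 3 matter
X[rng.random((n, p)) < 0.15] = np.nan               # inject 15% missingness

# M stochastic imputations (sample_posterior gives distinct draws)
stacks = [IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
          for m in range(M)]
X_stack, y_stack = np.vstack(stacks), np.tile(y, M)

model = LassoCV(cv=5).fit(X_stack, y_stack)
print(np.round(model.coef_, 2))   # nonzero coefficients -> selected predictors
```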
Performance analysis of multiple PRF technique for ambiguity resolution
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
For short-wavelength spaceborne synthetic aperture radar (SAR), ambiguity in Doppler centroid estimation occurs when the azimuth squint angle uncertainty is larger than the azimuth antenna beamwidth. Multiple pulse recurrence frequency (PRF) hopping is a technique developed to resolve the ambiguity by operating the radar at different PRFs in the pre-imaging sequence. Performance analysis results of the multiple PRF technique are presented, given the constraints of the attitude bound, the drift rate uncertainty, and the arbitrary numerical values of the PRFs. The algorithm performance is derived in terms of the probability of correct ambiguity resolution. Examples, using the Shuttle Imaging Radar-C (SIR-C) and X-SAR parameters, demonstrate that the probability of correct ambiguity resolution obtained by the multiple PRF technique is greater than 95 percent and 80 percent for the SIR-C and X-SAR applications, respectively. The success rate is significantly higher than that achieved by the range cross correlation technique.
Papiernik, E; Grangé, G; Zeitlin, J
1998-01-01
This article reviews the arguments for the use of multifetal pregnancy reduction (MFPR) for the prevention of preterm deliveries in triplet and higher order multiple pregnancies and evaluates its effectiveness based on data from published studies. The arguments in favour of pregnancy reduction are based on the substantial mortality and morbidity associated with these pregnancies. Triplets and higher order multiples have increased rates of preterm delivery and intrauterine growth retardation, both of which are independent risk factors for death and handicap. Even controlling for gestational age, rates of mortality and handicap are higher for multiples than for singletons. Moreover, the family's risk of losing a child or having a handicapped child is greater because there are more infants at risk. MFPR effectively lowers these risks by reducing the frequency of preterm delivery. However, its effectiveness may be limited. In some studies, the proportion of preterm deliveries in reduced pregnancies remains above levels found in spontaneous twin or singleton pregnancies, and MFPR does not appear to reduce the prevalence of low birth weight. Furthermore, the procedure itself has unwanted side effects: it increases the risk of miscarriage and premature rupture of the membranes and causes adverse psychological effects such as grief or depression for many patients. The authors note that a majority of the higher order multiple pregnancies result from a medical intervention in the first place, either through IVF techniques or the use of ovulation stimulation drugs. Although MFPR is an effective measure for reducing the substantial morbidity and mortality associated with higher order multiple pregnancies, preventive methods, such as limiting to 2 the number of embryos transferred for IVF and better control of the use of ovulation induction drugs, remain more effective and less intrusive.
Weinberg, W A; McLean, A; Snider, R L; Rintelmann, J W; Brumback, R A
1989-12-01
Eight groups of learning disabled children (N = 100), categorized by the clinical Lexical Paradigm as good readers or poor readers, were individually administered the Gilmore Oral Reading Test, Form D, by one of four input/retrieval methods: (1) the standardized method of administration in which the child reads each paragraph aloud and then answers five questions relating to the paragraph [read/recall method]; (2) the child reads each paragraph aloud and then for each question selects the correct answer from among three choices read by the examiner [read/choice method]; (3) the examiner reads each paragraph aloud and reads each of the five questions to the child to answer [listen/recall method]; and (4) the examiner reads each paragraph aloud and then for each question reads three multiple-choice answers from which the child selects the correct answer [listen/choice method]. The major difference in scores was between the groups tested by the recall versus the orally read multiple-choice methods. This study indicated that poor readers who listened to the material and were tested by orally read multiple-choice format could perform as well as good readers. The performance of good readers was not affected by listening or by the method of testing. The multiple-choice testing improved the performance of poor readers independent of the input method. This supports the arguments made previously that a "bypass approach" to education of poor readers in which testing is accomplished using an orally read multiple-choice format can enhance the child's school performance on reading-related tasks. Using a listening while reading input method may further enhance performance.
Resampling probability values for weighted kappa with multiple raters.
Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E
2008-04-01
A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
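In the same spirit, a resampling p-value for an agreement statistic can be built by permuting each rater's labels, which destroys agreement while preserving marginals. The sketch below uses the mean pairwise quadratic-weighted Cohen's kappa as a simplified stand-in for the authors' multi-rater weighted kappa; the ratings are synthetic.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def mean_weighted_kappa(R):
    # Average quadratic-weighted kappa over all rater pairs (rows of R)
    return np.mean([cohen_kappa_score(R[i], R[j], weights="quadratic")
                    for i, j in combinations(range(len(R)), 2)])

rng = np.random.default_rng(0)
base = rng.integers(0, 4, 60)                        # latent "true" categories
R = np.array([np.clip(base + rng.integers(-1, 2, 60), 0, 3)
              for _ in range(3)])                    # three noisy raters

obs = mean_weighted_kappa(R)
null = [mean_weighted_kappa(np.array([rng.permutation(r) for r in R]))
        for _ in range(2000)]                        # resampled null distribution
p = (1 + np.sum(np.array(null) >= obs)) / (1 + len(null))
print(f"kappa_w = {obs:.2f}, resampling p ~ {p:.4f}")
```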
NASA Astrophysics Data System (ADS)
Frasch, Jonathan Lemoine
Determining the electrical permittivity and magnetic permeability of materials is an important task in electromagnetics research. The method using reflection and transmission scattering parameters to determine these constants has been widely employed for many years, ever since the work of Nicolson, Ross, and Weir in the 1970s. For general materials that are homogeneous, linear, and isotropic, the method they developed (the NRW method) works very well and provides an analytical solution. For materials which possess a metal backing or are applied as a coating to a metal surface, it can be difficult or even impossible to obtain a transmission measurement, especially when the coating is thin. In such a circumstance, it is common to resort to a method which uses two reflection type measurements. There are several such methods for free-space measurements, using multiple angles or polarizations for example. For waveguide measurements, obtaining two independent sources of information from which to extract two complex parameters can be a challenge. This dissertation covers three different topics. Two of these involve different techniques to characterize conductor-backed materials, and the third proposes a method for designing synthetic validation standards for use with standard NRW measurements. All three of these topics utilize modal expansions of electric and magnetic fields to analyze propagation in stepped rectangular waveguides. Two of the projects utilize evolutionary algorithms (EA) to design waveguide structures. These algorithms were developed specifically for these projects and utilize fairly recent innovations within the optimization community. The first characterization technique uses two different versions of a single vertical step in the waveguide. Samples to be tested lie inside the steps with the conductor reflection plane behind them. If the two reflection measurements are truly independent it should be possible to recover the values of two complex parameters, but success of the technique ultimately depends upon how independent the measurements actually are. Next, a method is demonstrated for developing synthetic verification standards. These standards are created from combinations of vertical steps formed from a single piece of metal or metal-coated plastic. These fully insertable structures mimic some of the measurement characteristics of typical lab specimens and thus provide a useful tool for verifying the proper calibration and function of the experimental setup used for NRW characterization. These standards are designed with the use of an EA, which compares possible designs based on the quality of the match with target parameter values. Several examples have been fabricated and tested, and the design specifications and results are presented. Finally, a second characterization technique is considered. This method uses multiple vertical steps to construct an error reducing structure within the waveguide, which allows parameters to be reliably extracted using both reflection and transmission measurements. These structures are designed with an EA, measuring fitness by the reduction of error in the extracted parameters. An additional EA is used to assist in the extraction of the material parameters, supplying better initial guesses to a secant method solver. This hybrid approach greatly increases the stability of the solver and increases the speed of parameter extractions. Several designs have been identified and are analyzed.
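Since the validation standards are meant to be measured with a standard NRW reduction, a compact reminder of that algorithm is useful. The sketch below forward-models the S-parameters of a filled waveguide section and recovers (εr, μr) with the textbook NRW relations; the dimensions and material values are illustrative, and the logarithm-branch ambiguity for electrically thick samples is ignored.

```python
import numpy as np

c = 299.792458  # work in mm and GHz, so c has units of mm*GHz

def forward_sparams(eps_r, mu_r, f, L, lam_c):
    # S-parameters of a sample of length L in a waveguide with cutoff lam_c
    lam0 = c / f
    g0 = 2j * np.pi * np.sqrt(1 / lam0**2 - 1 / lam_c**2 + 0j)       # empty guide
    g = 2j * np.pi * np.sqrt(eps_r * mu_r / lam0**2 - 1 / lam_c**2 + 0j)
    z = g0 * mu_r / g                      # normalized wave impedance
    G, T = (z - 1) / (z + 1), np.exp(-g * L)
    den = 1 - G**2 * T**2
    return G * (1 - T**2) / den, T * (1 - G**2) / den                # S11, S21

def nrw_extract(S11, S21, f, L, lam_c):
    lam0 = c / f
    K = (S11**2 - S21**2 + 1) / (2 * S11)
    G = K + np.sqrt(K**2 - 1 + 0j)
    if abs(G) > 1:
        G = K - np.sqrt(K**2 - 1 + 0j)     # pick the physical root, |Gamma| <= 1
    T = (S11 + S21 - G) / (1 - (S11 + S21) * G)
    inv_lam = np.log(1 / T) / (2j * np.pi * L)     # 1/Lambda, principal branch
    mu = (1 + G) * inv_lam / ((1 - G) * np.sqrt(1 / lam0**2 - 1 / lam_c**2 + 0j))
    eps = lam0**2 * (1 / lam_c**2 + inv_lam**2) / mu
    return eps, mu

# WR-90-like guide at 10 GHz: lam_c = 2a = 45.72 mm; 3 mm thick sample
S11, S21 = forward_sparams(4 - 0.05j, 1.0, f=10.0, L=3.0, lam_c=45.72)
print(nrw_extract(S11, S21, f=10.0, L=3.0, lam_c=45.72))  # ~ (4-0.05j, 1.0)
```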
Subject order-independent group ICA (SOI-GICA) for functional MRI data analysis.
Zhang, Han; Zuo, Xi-Nian; Ma, Shuang-Ye; Zang, Yu-Feng; Milham, Michael P; Zhu, Chao-Zhe
2010-07-15
Independent component analysis (ICA) is a data-driven approach to studying functional magnetic resonance imaging (fMRI) data. In particular, for group analysis on multiple subjects, temporal-concatenation group ICA (TC-GICA) is widely used. However, because of typically limited computational capacity, data reduction with principal component analysis (PCA), a standard preprocessing step of ICA decomposition, is difficult to achieve for a large dataset. To overcome this, TC-GICA employs multiple-stage PCA data reduction. Such multiple-stage PCA data reduction, however, leads to variable outputs depending on the subject concatenation order. Consequently, the ICA algorithm operates on these variable multiple-stage PCA outputs and generates variable decompositions. In this study, a rigorous theoretical analysis was conducted to prove the existence of such variability. Simulated and real fMRI experiments were used to demonstrate the subject-order-induced variability of TC-GICA results under multiple-stage PCA data reduction. To solve this problem, we propose a new subject order-independent group ICA (SOI-GICA). Both simulated and real fMRI data experiments demonstrated the high robustness and accuracy of SOI-GICA results compared to those of traditional TC-GICA. Accordingly, we recommend SOI-GICA for group ICA-based fMRI studies, especially those with large datasets. Copyright 2010 Elsevier Inc. All rights reserved.
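A toy numerical sketch of the order dependence at issue (not the SOI-GICA algorithm itself; random matrices stand in for fMRI data): two different subject orders fed through the same blockwise two-stage PCA reduction yield measurably different group subspaces.

```python
import numpy as np

rng = np.random.default_rng(0)
subjects = [rng.normal(size=(120, 60)) for _ in range(6)]   # time x voxels

def multistage_pca(data, order, block=3, k=5):
    # Stage 1: concatenate subjects blockwise in the given order and
    # keep each block's top-k spatial PCs; stage 2: reduce the stacked
    # block outputs again. The block contents depend on the order.
    ordered = [data[i] for i in order]
    pieces = []
    for b in range(0, len(ordered), block):
        Y = np.concatenate(ordered[b:b + block], axis=0)
        _, _, Vt = np.linalg.svd(Y - Y.mean(0), full_matrices=False)
        pieces.append(Vt[:k])
    Z = np.concatenate(pieces, axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:k]                                   # group spatial subspace

A = multistage_pca(subjects, [0, 1, 2, 3, 4, 5])
B = multistage_pca(subjects, [5, 3, 1, 0, 4, 2])
sv = np.clip(np.linalg.svd(A @ B.T, compute_uv=False), -1, 1)
print(np.degrees(np.arccos(sv)))   # nonzero principal angles => order matters
```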
ERIC Educational Resources Information Center
Platte Technical Community Coll., Columbus, NE.
These Project TEAMS (Techniques and Education for Achieving Managerial Skills) instructional materials consist of five units for use in training independent business owner/managers. The first unit contains materials which deal with management skills relating to personal characteristics of successful business people, knowledge of self and chosen…
Multiple Beam Interferometry in Elementary Teaching
ERIC Educational Resources Information Center
Tolansky, S.
1970-01-01
Discusses a relatively simple technique for demonstrating multiple beam interferometry. The technique can be applied to measuring (1) radii of curvature of lenses, (2) surface finish of glass, and (3) differential phase change on reflection. Microtopographies, modulated fringe systems and opaque objects may also be observed by this technique.…
Nanowire humidity optical sensor system based on fast Fourier transform technique
NASA Astrophysics Data System (ADS)
Rota-Rodrigo, S.; Pérez-Herrera, R.; Lopez-Aldaba, A.; López Bautista, M. C.; Esteban, O.; López-Amo, M.
2015-09-01
In this paper, a new sensor system for relative humidity measurement, based on the interaction of humidity with the evanescent field of a nanowire, is presented. The sensing head is interrogated by monitoring the fast Fourier transform phase variations of one of the nanowire interference frequencies. This method is independent of the signal amplitude and also avoids the need to track the wavelength evolution in the spectrum, which can be a handicap when there are multiple interference frequency components with different sensitivities. The sensor operates over a wide humidity range (20%-70% relative humidity), with a maximum sensitivity of 0.14 rad/% relative humidity. Finally, because the system uses an optical interrogator as its only active element, it is cost-effective.
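The phase-tracking idea is easy to illustrate with a minimal sketch: take the FFT of the channelled spectrum and read the phase at the fringe bin. The bin index and fringe parameters below are invented for the demo; note that scaling the signal amplitude leaves the recovered phase untouched, which is the amplitude independence the sensing scheme relies on.

```python
import numpy as np

def interference_phase(spectrum, target_bin):
    # Phase of one interference frequency in the channelled spectrum
    F = np.fft.rfft(spectrum - spectrum.mean())
    return np.angle(F[target_bin])

# Toy usage: a 40-cycle fringe whose phase drifts with the measurand
x = np.linspace(0, 1, 1024, endpoint=False)
for amp, drift in [(1.0, 0.0), (0.4, 0.0), (1.0, 0.8)]:
    fringe = amp * (1 + 0.5 * np.cos(2 * np.pi * 40 * x + drift))
    print(amp, drift, interference_phase(fringe, 40))
# amplitude changes leave the phase at 0.0; only the drift shifts it
```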
A novel, easy and rapid method for constructing yeast two-hybrid vectors using In-Fusion technology.
Yu, Deshui; Liao, Libing; Zhang, Ju; Zhang, Yi; Xu, Kedong; Liu, Kun; Li, Xiaoli; Tan, Guangxuan; Chen, Ran; Wang, Yulu; Liu, Xia; Zhang, Xuan; Han, Xiaomeng; Wei, Zhangkun; Li, Chengwei
2018-05-01
Yeast two-hybrid systems are powerful tools for analyzing interactions between proteins. Vector construction is an essential step in yeast two-hybrid experiments, which require bait and prey plasmids. In this study, we modified the multiple cloning site sequence of the yeast plasmid pGADT7 by site-directed mutagenesis PCR to generate the pGADT7-In vector, which resulted in an easy and rapid method for constructing yeast two-hybrid vectors using the In-Fusion cloning technique. This method has three key advantages: only one pair of primers and one round of PCR are needed to generate bait and prey plasmids for each gene, it is restriction endonuclease- and ligase-independent, and it is fast and easily performed.
Cadmium zinc sulfide by solution growth
Chen, Wen S.
1992-05-12
A process for depositing thin layers of the II-VI compound cadmium zinc sulfide (CdZnS) by an aqueous solution growth technique, with quality suitable for high-efficiency photovoltaic or other devices which can benefit from the band-edge shift resulting from the inclusion of Zn in the sulfide. A first solution comprising CdCl₂·2.5H₂O, NH₄Cl, NH₄OH, and ZnCl₂, and a second solution comprising thiourea ((NH₂)₂CS), are combined and placed in a deposition cell, along with a substrate, to form a thin (i.e., 10 nm) film of CdZnS on the substrate. This process can be repeated sequentially to achieve deposition of independent multiple layers having different Zn concentrations.
A probabilistic approach to information retrieval in heterogeneous databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, A.; Segev, A.
During the past decade, organizations have increased their scope and operations beyond their traditional geographic boundaries. At the same time, they have adopted heterogeneous and incompatible information systems independent of each other, without careful consideration that one day they might need to be integrated. As a result of this diversity, many important business applications today require access to data stored in multiple autonomous databases. This paper examines a problem of inter-database information retrieval in a heterogeneous environment, where conventional techniques are no longer efficient. To solve the problem, broader definitions for the join, union, intersection, and selection operators are proposed. Also, a probabilistic method to specify the selectivity of these operators is discussed. An algorithm to compute these probabilities is provided in pseudocode.
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion-based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open-source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
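For orientation, here is a minimal sketch of plain intensity-weighted voting, the baseline the abstract contrasts with joint label fusion. Array names and the per-voxel similarity weight are illustrative choices, not the authors' implementation (which uses patch-based similarity and models correlated atlas errors).

```python
import numpy as np

def weighted_vote(target, atlas_imgs, atlas_segs, beta=2.0, eps=1e-6):
    # Each registered atlas votes per voxel for its transferred label,
    # weighted by local intensity similarity to the target image.
    # Sketch only: per-voxel squared difference, no patches.
    votes = {}
    for img, seg in zip(atlas_imgs, atlas_segs):
        w = (eps + (img - target) ** 2) ** (-beta)      # similarity weight
        for label in np.unique(seg):
            acc = votes.setdefault(label, np.zeros(target.shape))
            acc += w * (seg == label)
    labels = sorted(votes)
    stacked = np.stack([votes[l] for l in labels])
    return np.asarray(labels)[np.argmax(stacked, axis=0)]
```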
Increasing Independence in Children with Autism Spectrum Disorders Using Video Self Modeling
ERIC Educational Resources Information Center
Bucalos, Julie Iberer
2013-01-01
Independent task completion was examined using a multiple probe across participants research design for three students with autism spectrum disorders (ASD) functioning in an inclusive classroom. Results were positive and suggest that video self-modeling (VSM) is a viable solution to decrease prompt dependence and increase independence and task…
Numerical simulations of imaging satellites with optical interferometry
NASA Astrophysics Data System (ADS)
Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong
2015-08-01
An optical interferometry imaging system, composed of multiple sub-apertures, is a type of sensor that can break through the single-aperture limit and achieve high-resolution imaging. The technique can be used to precisely measure the shapes, sizes, and positions of astronomical objects and satellites, and it can also be applied to space exploration, space-debris tracking, and satellite monitoring and surveying. A Fizeau-type optical aperture synthesis telescope has the advantages of short baselines, a common mount, and multiple sub-apertures, so it is feasible for instantaneous direct imaging through focal-plane combination. Since 2002, researchers at Shanghai Astronomical Observatory have studied optical interferometry techniques. For array configurations, two optimal layouts have been proposed instead of the symmetrical circular distribution: an asymmetrical circular distribution and a Y-type distribution. On this basis, two kinds of structure were proposed based on the Fizeau interferometric telescope: a Y-type independent sub-aperture telescope, and a segmented-mirror telescope with a common secondary mirror. In this paper, we describe the interferometric telescope and image acquisition, and then concentrate on simulations of image restoration for the Y-type telescope and the segmented-mirror telescope. The Richardson-Lucy (RL) method, the Wiener method, and the Ordered Subsets Expectation Maximization (OS-EM) method are studied, and the influence of different stopping rules is analyzed. Finally, we present reconstruction results for images of several satellites.
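Of the restoration methods named, Richardson-Lucy is easily sketched. The version below is a minimal, unregularised 2D implementation with no stopping rule, offered only to make the iteration concrete.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50):
    # Plain Richardson-Lucy iteration: multiplicative update by the
    # back-projected ratio of the observed image to the re-blurred
    # estimate. Minimal sketch: no regularisation, no stopping rule.
    est = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(est, psf, mode='same')
        ratio = observed / np.maximum(reblurred, 1e-12)
        est = est * fftconvolve(ratio, psf_mirror, mode='same')
    return est
```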
NASA Technical Reports Server (NTRS)
2005-01-01
A new all-electronic Particle Image Velocimetry technique that can efficiently map high speed gas flows has been developed in-house at the NASA Lewis Research Center. Particle Image Velocimetry is an optical technique for measuring the instantaneous two-component velocity field across a planar region of a seeded flow field. A pulsed laser light sheet is used to illuminate the seed particles entrained in the flow field at two instances in time. One or more charge-coupled device (CCD) cameras can be used to record the instantaneous positions of particles. Using the time between light sheet pulses and determining either the individual particle displacements or the average displacement of particles over a small subregion of the recorded image enables the calculation of the fluid velocity. Fuzzy logic minimizes the required operator intervention in identifying particles and computing velocity. Using two cameras that have the same view of the illumination plane yields two single-exposure image frames. Two competing techniques that yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple image frame data: (1) cross-correlation and (2) particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. For the correlation technique, the correlation peak corresponding to the average displacement of particles across the subregion must be identified. Noise on the images and particle dropout result in misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work, fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements, and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle-tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus the velocity. Combining these two techniques makes use of the higher spatial resolution available from particle tracking. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two-stage velocimetric technique can measure particle velocities with high spatial resolution over a broad range of seeding densities.
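The correlation stage can be sketched as follows. This minimal version simply takes the maximum correlation peak, which is precisely the step the fuzzy-logic peak identification described above improves upon; it also omits any sub-pixel fit.

```python
import numpy as np

def subregion_displacement(win1, win2):
    # FFT cross-correlation of two same-size interrogation windows,
    # one from each exposure; the peak location gives the average
    # particle displacement (dy, dx) over the subregion.
    f1 = np.fft.fft2(win1 - win1.mean())
    f2 = np.fft.fft2(win2 - win2.mean())
    corr = np.fft.fftshift(np.real(np.fft.ifft2(np.conj(f1) * f2)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2
```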
Impact of nonzero boresight pointing error on ergodic capacity of MIMO FSO communication systems.
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen
2016-02-22
A thorough investigation of the impact of nonzero boresight pointing errors on the ergodic capacity of multiple-input/multiple-output (MIMO) free-space optical (FSO) systems with equal gain combining (EGC) reception under different turbulence models, which are modeled as statistically independent but not necessarily identically distributed (i.n.i.d.), is addressed in this paper. Novel closed-form asymptotic expressions at high signal-to-noise ratio (SNR) for the ergodic capacity of MIMO FSO systems are derived when different geometric arrangements of the receive apertures are considered in order to reduce the effect of the nonzero inherent boresight displacement, which is inevitably present when more than one receive aperture is used. As a result, the asymptotic ergodic capacity of MIMO FSO systems is evaluated over log-normal (LN), gamma-gamma (GG), and exponentiated Weibull (EW) atmospheric turbulence in order to study different turbulence conditions, different sizes of receive apertures, and different aperture-averaging conditions. It is concluded that the use of single-input/multiple-output (SIMO) and MIMO techniques can significantly increase the ergodic capacity with respect to the direct-path link when the inherent boresight displacement takes small values, i.e., when the spacing among receive apertures is not too large. The effect of nonzero additional boresight errors, which are due to the thermal expansion of the building, is evaluated in multiple-input/single-output (MISO) and single-input/single-output (SISO) FSO systems. Simulation results are further included to confirm the analytical results.
Chen, Xiwei; Yu, Jihnhee
2014-01-01
Many clinical and biomedical studies evaluate treatment effects based on multiple biomarkers that commonly consist of pre- and post-treatment measurements. Some biomarkers can show significant positive treatment effects, while other biomarkers can reflect no effects or even negative effects of the treatments, giving rise to the need for methodologies that correctly and efficiently evaluate the treatment effects based on multiple biomarkers as a whole. In the setting of pre- and post-treatment measurements of multiple biomarkers, we propose to apply a receiver operating characteristic (ROC) curve methodology based on the best combination of biomarkers, i.e., the one maximizing the area under the ROC curve (AUC)-type criterion among all possible linear combinations. In the particular case of independent pre- and post-treatment measurements, we show that the proposed method reduces to the well-known result of Su and Liu (1993). Further, proceeding from the derived best combinations of biomarker measurements, we propose an efficient technique via likelihood ratio tests to compare treatment effects. We present an extensive Monte Carlo study that confirms the superiority of the proposed test for comparing treatment effects based on multiple biomarkers in a paired-data setting. For practical applications, the proposed method is illustrated with a randomized trial of chlorhexidine gluconate on oral bacterial pathogens in mechanically ventilated patients, as well as a treatment study for children with attention deficit-hyperactivity disorder and severe mood dysregulation. PMID:25019920
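For reference, the Su and Liu (1993) best linear combination mentioned above has a closed form under binormality; a small sketch follows (the interface is ours, offered only to make the result concrete).

```python
import numpy as np
from scipy.stats import norm

def su_liu_combination(X0, X1):
    # Best linear combination of biomarkers under binormality
    # (Su and Liu, 1993): a = (S0 + S1)^(-1) (m1 - m0), with
    # AUC = Phi( sqrt( (m1 - m0)' (S0 + S1)^(-1) (m1 - m0) ) ).
    # Rows are subjects, columns are biomarker measurements.
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    a = np.linalg.solve(S, m1 - m0)
    auc = norm.cdf(np.sqrt((m1 - m0) @ a))
    return a, auc
```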
Accurate and fast multiple-testing correction in eQTL studies.
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-06-04
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. A permutation test is widely used for this multiple-testing correction. Because of the growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
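The core idea can be sketched with a brute-force Monte Carlo stand-in: sample null z-score vectors from a multivariate normal whose correlation matrix is the LD structure, and ask how often the most extreme statistic beats the observed lead variant. Names and the sampling approach here are illustrative; the paper's actual method avoids this sampling cost and adds a small-sample correction.

```python
import numpy as np
from scipy.stats import norm

def mvn_gene_pvalue(p_min, ld_corr, n_draws=100_000, seed=0):
    # Gene-level p-value for the minimum cis-variant p-value under
    # MVN(0, R) with R = LD correlation among the variants.
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(ld_corr)), ld_corr, n_draws)
    z_obs = norm.isf(p_min / 2)          # |z| of the lead variant
    return np.mean(np.abs(z).max(axis=1) >= z_obs)
```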
Watkins, Herschel M.; Simon, Anna J.; Sosnick, Tobin R.; Lipman, Everett A.; Hjelm, Rex P.; Plaxco, Kevin W.
2015-01-01
Small-angle scattering studies generally indicate that the dimensions of unfolded single-domain proteins are independent (to within experimental uncertainty of a few percent) of denaturant concentration. In contrast, single-molecule FRET (smFRET) studies invariably suggest that protein unfolded states contract significantly as the denaturant concentration falls from high (∼6 M) to low (∼1 M). Here, we explore this discrepancy by using PEG to perform a hitherto absent negative control. This uncharged, highly hydrophilic polymer has been shown by multiple independent techniques to behave as a random coil in water, suggesting that it is unlikely to expand further on the addition of denaturant. Consistent with this observation, small-angle neutron scattering indicates that the dimensions of PEG are not significantly altered by the presence of either guanidine hydrochloride or urea. smFRET measurements on a PEG construct modified with the most commonly used FRET dye pair, however, produce denaturant-dependent changes in transfer efficiency similar to those seen for a number of unfolded proteins. Given the vastly different chemistries of PEG and unfolded proteins and the significant evidence that dye-free PEG is well-described as a denaturant-independent random coil, this similarity raises questions regarding the interpretation of smFRET data in terms of the hydrogen bond- or hydrophobically driven contraction of the unfolded state at low denaturant. PMID:25964362
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(1/N). In this work, we consider how the true variance can be minimized, given a total amount of work available, as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source-convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
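A toy illustration of the underlying premise, with an AR(1) series standing in (crudely) for intergenerational tally correlation: the variance of the mean shrinks far more slowly than it would for independent generations. The correlation value is assumed for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.9   # assumed generation-to-generation correlation of the tally

def sample_mean(n_gens):
    # AR(1) tallies: a crude stand-in for the correlation of fission
    # sites between successive generations.
    x = np.empty(n_gens)
    x[0] = rng.normal()
    for g in range(1, n_gens):
        x[g] = rho * x[g - 1] + np.sqrt(1 - rho**2) * rng.normal()
    return x.mean()

for n in (100, 400, 1600):
    v = np.var([sample_mean(n) for _ in range(1000)])
    # for independent generations n*v would stay near 1; correlation
    # inflates it toward (1+rho)/(1-rho) = 19, so many more generations
    # are needed for the same precision
    print(n, v, n * v)
```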
Multiple Uses of a Word Study Technique
ERIC Educational Resources Information Center
Joseph, Laurice M.; Orlins, Andrew
2005-01-01
This paper presents two case studies that illustrate the multiple uses of word sorts, a word study phonics technique. Case study children were Sara, a second grader, who had difficulty with reading basic words and John, a third grader, who had difficulty with spelling basic words. Multiple baseline designs were employed to study the effects of…
Rahim, Ruzairi Abdul; Fazalul Rahiman, Mohd Hafiz; Leong, Lai Chen; Chan, Kok San; Pang, Jon Fea
2008-01-01
The main objective of this project is to implement the multiple fan beam projection technique using optical fibre sensors, with the aim of achieving a high data acquisition rate. The multiple fan beam projection technique is defined here as allowing more than one emitter to transmit light at the same time using the switch-mode fan beam method. For the thirty-two pairs of sensors used, the 2-projection and 4-projection techniques are investigated. Sixteen sets of projections complete one frame of light emission for the 2-projection technique, while eight sets of projections complete one frame for the 4-projection technique. To facilitate the data acquisition process, a PIC microcontroller and a sample-and-hold circuit are used. This paper summarizes the hardware configuration and design for this project. PMID:27879885
Zhou, Bing; Li, Ming-Hua; Wang, Wu; Xu, Hao-Wen; Cheng, Yong-De; Wang, Jue
2010-03-01
The authors conducted a study to evaluate the advantages of a 3D volume-rendering technique (VRT) in follow-up digital subtraction (DS) angiography of coil-embolized intracranial aneurysms. One hundred nine patients with 121 intracranial aneurysms underwent endovascular coil embolization and at least 1 follow-up DS angiography session at the authors' institution. Two neuroradiologists independently evaluated the conventional 2D DS angiograms, rotational angiograms, and 3D VRT images obtained at the interventional procedures and DS angiography follow-up. If multiple follow-up sessions were performed, the final follow-up was mainly considered. The authors compared the 3 techniques for their ability to detect aneurysm remnants (including aneurysm neck and sac remnants) and parent artery stenosis based on the angiographic follow-up. The Kruskal-Wallis test was used for group comparisons, and the kappa test was used to measure interobserver agreement. Statistical analyses were performed using commercially available software. There was a high statistical significance among 2D DS angiography, rotational angiography, and 3D VRT results (χ² = 9.9613, p = 0.0069) when detecting an aneurysm remnant. Further comparisons disclosed a statistical significance between 3D VRT and rotational angiography (χ² = 4.9754, p = 0.0257); a high statistical significance between 3D VRT and 2D DS angiography (χ² = 8.9169, p = 0.0028); and no significant difference between rotational angiography and 2D DS angiography (χ² = 0.5648, p = 0.4523). There was no statistical significance among the 3 techniques when detecting parent artery stenosis (χ² = 2.5164, p = 0.2842). One case, in which parent artery stenosis was diagnosed by 2D DS angiography and rotational angiography, was excluded by 3D VRT following observation of multiple views. The kappa test showed good agreement between the 2 observers. The 3D VRT is more sensitive in detecting aneurysm remnants than 2D DS angiography and rotational angiography and is helpful for identifying parent artery stenosis. The authors recommend this technique for the angiographic follow-up of patients with coil-embolized aneurysms.
Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.
Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita
2012-06-01
A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing, and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset including 270 images of knee and hip prostheses merged with real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent of the operator as well as of the specific geometrical characteristics of the prosthetic component, and able to compensate for varying amounts of blurring and illumination gradient. Importantly, the method allows a strong reduction in required user-interaction time when compared to traditional segmentation techniques. Its effectiveness and robustness in different image conditions, together with its simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications, including assessment of in vivo joint kinematics in a variety of cases.
Gas gun driven dynamic fracture and fragmentation of Ti-6Al-4V cylinders
NASA Astrophysics Data System (ADS)
Jones, D. R.; Chapman, D. J.; Eakins, D. E.
2014-05-01
The dynamic fracture and fragmentation of a material is a complex late-stage phenomenon occurring in many shock loading scenarios. Improving our predictive capability depends upon exercising our current failure models against new loading schemes and data. We present axially symmetric, high-strain-rate (10⁴ s⁻¹) expansion of Ti-6Al-4V cylinders using a single-stage light gas gun technique. A steel ogive insert was located inside the target cylinder, into which a polycarbonate rod was launched. Deformation of this rod around the insert drives the cylinder into rapid expansion. The technique we have developed facilitates repeatable loading, independent of the temperature of the sample cylinder, with straightforward adjustment of the radial strain rate. Expansion velocity was measured with multiple channels of photon Doppler velocimetry. High-speed imaging was used to track the overall expansion process and record strain to failure and crack growth. Results from a cylinder at a temperature of 150 K are compared with work at room temperature, examining the deformation, failure mechanisms, and differences in fragmentation.
NASA Astrophysics Data System (ADS)
Gazzarri, J. I.; Kesler, O.
In the first part of this two-paper series, we presented a numerical model of the impedance behaviour of a solid oxide fuel cell (SOFC) aimed at simulating the change in the impedance spectrum induced by contact degradation at the interconnect-electrode and electrode-electrolyte interfaces. The purpose of that investigation was to develop a non-invasive diagnostic technique to identify degradation modes in situ. In the present paper, we appraise the predictive capabilities of the proposed method in terms of its robustness to uncertainties in the input parameters, many of which are very difficult to measure independently. We applied this technique to the degradation modes simulated in Part I, in addition to anode sulfur poisoning. Electrode delamination showed the highest robustness to input parameter variations, followed by interconnect oxidation and interconnect detachment. The most sensitive degradation mode was sulfur poisoning, due to strong parameter interactions. In addition, we simulate several simultaneous two-degradation-mode scenarios, assessing the method's capabilities and limitations for predicting the electrochemical behaviour of SOFCs undergoing multiple simultaneous degradation modes.
Reduction of PAPR in coded OFDM using fast Reed-Solomon codes over prime Galois fields
NASA Astrophysics Data System (ADS)
Motazedi, Mohammad Reza; Dianat, Reza
2017-02-01
In this work, two new techniques using Reed-Solomon (RS) codes over GF(257) and GF(65537) are proposed for peak-to-average power ratio (PAPR) reduction in coded orthogonal frequency division multiplexing (OFDM) systems. The lengths of these codes are well matched to the length of OFDM frames. Over these prime fields, the block lengths of the codes are powers of two, so the radix-2 fast Fourier transform algorithms can be fully exploited, and multiplications and additions are simple modular operations. These codes provide the desirable randomness, through a small perturbation in the information symbols, that is essential for generating statistically independent candidates. Our simulations show that the PAPR-reduction ability of RS codes is the same as that of conventional selected mapping (SLM), but unlike SLM, we also gain error-correction capability. Moreover, for the second proposed technique, the transmission of side information is not needed. To the best of our knowledge, this is the first work using RS codes for PAPR reduction in single-input single-output systems.
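The candidate-selection step common to SLM-type schemes is easy to sketch. In the demo below, random QPSK phase vectors stand in for the RS codewords the paper uses, so this shows only the selection mechanics, not the proposed codes themselves.

```python
import numpy as np

def papr_db(freq_symbols):
    # Peak-to-average power ratio of the time-domain OFDM frame
    x = np.fft.ifft(freq_symbols)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
frame = rng.choice(qpsk, size=256)
# SLM: generate phase-rotated candidates, keep the lowest-PAPR one
candidates = [frame * 1j ** rng.integers(0, 4, size=256) for _ in range(16)]
best = min(candidates, key=papr_db)
print(round(papr_db(frame), 2), '->', round(papr_db(best), 2))
```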
Digging deeper for new physics in the LHC data
NASA Astrophysics Data System (ADS)
Asadi, Pouya; Buckley, Matthew R.; DiFranzo, Anthony; Monteux, Angelo; Shih, David
2017-11-01
In this paper, we describe a novel, model-independent technique of "rectangular aggregations" for mining the LHC data for hints of new physics. A typical (CMS) search now has hundreds of signal regions, which can obscure potentially interesting anomalies. Applying our technique to the two CMS jets+MET SUSY searches, we identify a set of previously overlooked ~3σ excesses. Among these, four excesses survive tests of inter- and intra-search compatibility, and two are especially interesting: they are largely overlapping between the jets+MET searches and are characterized by low jet multiplicity, zero b-jets, and low MET and H_T. We find that resonant color-triplet production decaying to a quark plus an invisible particle provides an excellent fit to these two excesses and all other data, including the ATLAS jets+MET search, which actually sees a correlated excess. We discuss the additional constraints coming from dijet resonance searches, monojet searches, and pair production. Based on these results, we believe the widespread view that the LHC data contain no interesting excesses is greatly exaggerated.
Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Wall, John W.
2011-01-01
In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
Tellez, Jason A; Schmidt, Jason D
2011-08-20
The propagation of a free-space optical communications signal through atmospheric turbulence experiences random fluctuations in intensity, including signal fades, which negatively impact the performance of the communications link. The gamma-gamma probability density function is commonly used to model the scintillation of a single beam. One proposed method to reduce the occurrence of scintillation-induced fades at the receiver plane involves the use of multiple beams propagating through independent paths, resulting in a sum of independent gamma-gamma random variables. Recently an analytical model for the probability distribution of irradiance from the sum of multiple independent beams was developed. Because truly independent beams are practically impossible to create, we present here a more general but approximate model for the distribution of beams traveling through partially correlated paths. This model compares favorably with wave-optics simulations and highlights the reduced scintillation as the number of transmitted beams is increased. Additionally, a pulse-position modulation scheme is used to reduce the impact of signal fades when they occur. Analytical and simulated results showed significantly improved performance when compared to fixed threshold on/off keying. © 2011 Optical Society of America
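For the independent-path case, the claimed fade reduction is easy to reproduce numerically: a gamma-gamma variate is the product of two independent unit-mean gamma variates, and averaging more beams thins the lower tail of the irradiance. The turbulence parameters and fade threshold below are assumed for the demo; the paper's contribution is the extension to partially correlated paths.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 4.0, 2.0, 200_000   # assumed turbulence parameters

def combined_irradiance(n_beams):
    # Gamma-gamma variate = product of two independent unit-mean
    # gamma variates; average n_beams independent beams.
    x = rng.gamma(alpha, 1 / alpha, size=(n, n_beams))
    y = rng.gamma(beta, 1 / beta, size=(n, n_beams))
    return (x * y).mean(axis=1)

fade = 0.1   # fade threshold: 10% of the mean irradiance
for m in (1, 2, 4, 8):
    print(m, (combined_irradiance(m) < fade).mean())   # fade prob drops
```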
Application of MIMO Techniques in sky-surface wave hybrid networking sea-state radar system
NASA Astrophysics Data System (ADS)
Zhang, L.; Wu, X.; Yue, X.; Liu, J.; Li, C.
2016-12-01
The sky-surface wave hybrid networking sea-state radar system consists of sky-wave transmission stations at different sites and several surface-wave radar stations. The work originates from the national 863 High-tech Project of China. The hybrid sky-surface wave system and the HF surface wave system work simultaneously, and the HF surface wave radar (HFSWR) can work in multi-static and surface-wave networking modes. Compared with a single-mode radar system, this system offers better detection performance at far ranges for ocean dynamics parameter inversion. We have applied multiple-input multiple-output (MIMO) techniques in this sea-state radar system. Based on multiple-channel and non-causal transmit beamforming techniques, the MIMO radar architecture can reduce the size of the receiving antennas and simplify antenna installation. Besides, by efficiently utilizing the system's available degrees of freedom, it provides a feasible approach for mitigating multipath effects and Doppler-spread clutter in over-the-horizon radar. In this radar, a slow-time phase-coded MIMO method is used: the transmitted waveforms are phase-coded in slow time so as to be orthogonal after Doppler processing at the receiver, so the MIMO method can be implemented without modifying the receiver hardware. After the radar system design, MIMO experiments with this system were completed by Wuhan University during 2015 and 2016. The experiments used the Wuhan multi-channel ionospheric sounding system (WMISS) as the sky-wave transmitting source and three dual-frequency HFSWRs developed by the Oceanography Laboratory of Wuhan University. The transmitter system is located at Chongyang, with a five-element equally spaced linear antenna array, and at Wuhan, with one log-periodic antenna. The RF signals are generated by synchronized but independent digital waveform generators, providing complete flexibility in element phase and amplitude control, and in waveform type and parameters. The field experimental results show that the presented method is effective: the echoes are obvious and distinguishable in both co-located MIMO mode and widely distributed MIMO mode. Key words: sky-surface wave hybrid networking; sea-state radar; MIMO; phase-coded
Sparsity-aware multiple relay selection in large multi-hop decode-and-forward relay networks
NASA Astrophysics Data System (ADS)
Gouissem, A.; Hamila, R.; Al-Dhahir, N.; Foufou, S.
2016-12-01
In this paper, we propose and investigate two novel techniques to perform multiple relay selection in large multi-hop decode-and-forward relay networks. The two proposed techniques exploit sparse signal recovery theory to select multiple relays using the orthogonal matching pursuit algorithm, and they outperform state-of-the-art techniques in terms of outage probability and computational complexity. To reduce the amount of collected channel state information (CSI), we propose a limited-feedback scheme where only a limited number of relays feed back their CSI. Furthermore, a detailed performance-complexity tradeoff investigation is conducted for the different studied techniques and verified by Monte Carlo simulations.
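For reference, a generic orthogonal matching pursuit sketch (not the paper's exact system model): the columns of A play the role of candidate relays' channel signatures, and k relays are selected greedily.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: add the column most correlated with
    # the current residual, then re-fit all selected columns by least
    # squares and update the residual.
    residual, support, coef = y.astype(float), [], None
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support, coef
```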
Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram
2014-04-01
Rapid presentation of stimuli in an evoked-response paradigm can lead to overlap of multiple responses and, consequently, to difficulties in interpreting waveform morphology. This paper presents a deconvolution method allowing multiple overlapping responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions; it controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori through the condition number of the matrix associated with the stimulus sequence used. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted, and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover overlapping evoked responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
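The least-squares idea can be sketched for a single response class, as below; the paper's method generalises this to several different overlapping responses at once and optimises the stimulus sequence so that the design matrix stays well conditioned. Names and the single-response simplification are ours.

```python
import numpy as np

def deconvolve(recording, onsets, resp_len):
    # Build the onset design matrix D (one shifted identity block per
    # stimulus) and solve min ||D r - recording||^2 for the response r.
    D = np.zeros((len(recording), resp_len))
    for t in onsets:
        n = min(resp_len, len(recording) - t)
        D[t:t + n, :n] += np.eye(n)
    r, *_ = np.linalg.lstsq(D, recording, rcond=None)
    return r, np.linalg.cond(D.T @ D)   # onset jitter controls this number
```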
Classification of air quality using fuzzy synthetic multiplication.
Abdullah, Lazim; Khalid, Noor Dalina
2012-11-01
Proper identification of an environment's air quality based on limited observations is an essential task in meeting the goals of environmental management. Various classification methods have been used to estimate changes in air quality status and health. However, discrepancies frequently arise from the lack of a clear distinction between air quality classes, uncertainty in the quality criteria employed, and the vagueness or fuzziness embedded in the decision-making output values. Owing to this inherent imprecision, conventional methodologies often have difficulty describing integrated air quality conditions with respect to various pollutants. Therefore, this paper presents two fuzzy synthetic multiplication techniques to establish a classification of air quality. The fuzzy multiplication technique employs max-min operations for "or" and "and" in executing the fuzzy arithmetic operations. A set of air pollutant data (carbon monoxide, sulfur dioxide, nitrogen dioxide, ozone, and particulate matter (PM10)) collected from a network of 51 stations in the Klang Valley and in East Malaysia (Sabah and Sarawak) was utilized in this evaluation. The two fuzzy multiplication techniques consistently classified Malaysia's air quality as "good." The findings indicate that the techniques may successfully harmonize inherent discrepancies and interpret complex conditions. It is demonstrated that fuzzy synthetic multiplication techniques are quite appropriate for air quality management.
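A minimal sketch of fuzzy synthetic evaluation by max-min composition; all weights and membership values below are invented for the demo, not the study's data.

```python
import numpy as np

# Max-min composition B = W o R, with b_j = max_i min(w_i, r_ij)
W = np.array([0.35, 0.20, 0.20, 0.15, 0.10])   # pollutant weights (assumed)
R = np.array([[0.7, 0.3, 0.0],                 # CO  vs {good, moderate, poor}
              [0.6, 0.4, 0.0],                 # SO2
              [0.5, 0.4, 0.1],                 # NO2
              [0.8, 0.2, 0.0],                 # O3
              [0.4, 0.4, 0.2]])                # PM10
B = np.minimum(W[:, None], R).max(axis=0)
print(B, ['good', 'moderate', 'poor'][int(B.argmax())])
```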
Peng, Shichun; Ma, Yilong; Spetsieris, Phoebe G; Mattis, Paul; Feigin, Andrew; Dhawan, Vijay; Eidelberg, David
2013-01-01
In order to generate imaging biomarkers from disease-specific brain networks, we have implemented a general toolbox to rapidly perform scaled subprofile modeling (SSM) based on principal component analysis (PCA) on brain images of patients and normals. This SSMPCA toolbox can define spatial covariance patterns whose expression in individual subjects can discriminate patients from controls or predict behavioral measures. The technique may depend on differences in spatial normalization algorithms and brain imaging systems. We have evaluated the reproducibility of characteristic metabolic patterns generated by SSMPCA in patients with Parkinson's disease (PD). We used [18F]fluorodeoxyglucose PET scans from PD patients and normal controls. Motor-related (PDRP) and cognition-related (PDCP) metabolic patterns were derived from images spatially normalized using four versions of SPM software (spm99, spm2, spm5 and spm8). Differences between these patterns and subject scores were compared across multiple independent groups of patients and control subjects. These patterns and subject scores were highly reproducible with different normalization programs in terms of disease discrimination and cognitive correlation. Subject scores were also comparable in PD patients imaged across multiple PET scanners. Our findings confirm a very high degree of consistency among brain networks and their clinical correlates in PD using images normalized in four different SPM platforms. SSMPCA toolbox can be used reliably for generating disease-specific imaging biomarkers despite the continued evolution of image preprocessing software in the neuroimaging community. Network expressions can be quantified in individual patients independent of different physical characteristics of PET cameras. PMID:23671030
Carmichael, Owen; Xie, Jing; Fletcher, Evan; Singh, Baljeet; DeCarli, Charles
2012-06-01
Hippocampal injury in the Alzheimer's disease (AD) pathological process is region-specific and magnetic resonance imaging (MRI)-based measures of localized hippocampus (HP) atrophy are known to detect region-specific changes associated with clinical AD, but it is unclear whether these measures provide information that is independent of that already provided by measures of total HP volume. Therefore, this study assessed the strength of association between localized HP atrophy measures and AD-related measures including cerebrospinal fluid (CSF) amyloid beta and tau concentrations, and cognitive performance, in statistical models that also included total HP volume as a covariate. A computational technique termed localized components analysis (LoCA) was used to identify 7 independent patterns of HP atrophy among 390 semiautomatically delineated HP from baseline magnetic resonance imaging of participants in the Alzheimer's Disease Neuroimaging Initiative (ADNI). Among cognitively normal participants, multiple measures of localized HP atrophy were significantly associated with CSF amyloid concentration, while total HP volume was not. In addition, among all participants, localized HP atrophy measures and total HP volume were both independently and additively associated with CSF tau concentration, performance on numerous neuropsychological tests, and discrimination between normal, mild cognitive impairment (MCI), and AD clinical diagnostic groups. Together, these results suggest that regional measures of hippocampal atrophy provided by localized components analysis may be more sensitive than total HP volume to the effects of AD pathology burden among cognitively normal individuals and may provide information about HP regions whose deficits may have especially profound cognitive consequences throughout the AD clinical course. Copyright © 2012 Elsevier Inc. All rights reserved.
Independent Research and Independent Exploratory Development Annual Report Fiscal Year 1975
1975-09-01
[OCR-garbled front-matter fragment. Recoverable entries appear to include a modulation and coding study, "Optical Covert Communications Using Laser Transceivers," and a publication by Wagner, N. K., "Analysis of Microelectronic Materials Using Auger Spectroscopy and Additional Advanced Analytical Techniques" (NELC Technical Note 2904).]
Working Memory Systems in the Rat.
Bratch, Alexander; Kann, Spencer; Cain, Joshua A; Wu, Jie-En; Rivera-Reyes, Nilda; Dalecki, Stefan; Arman, Diana; Dunn, Austin; Cooper, Shiloh; Corbin, Hannah E; Doyle, Amanda R; Pizzo, Matthew J; Smith, Alexandra E; Crystal, Jonathon D
2016-02-08
A fundamental feature of memory in humans is the ability to simultaneously work with multiple types of information using independent memory systems. Working memory is conceptualized as two independent memory systems under executive control [1, 2]. Although there is a long history of using the term "working memory" to describe short-term memory in animals, it is not known whether multiple, independent memory systems exist in nonhumans. Here, we used two established short-term memory approaches to test the hypothesis that spatial and olfactory memory operate as independent working memory resources in the rat. In the olfactory memory task, rats chose a novel odor from a gradually incrementing set of old odors [3]. In the spatial memory task, rats searched for a depleting food source at multiple locations [4]. We presented rats with information to hold in memory in one domain (e.g., olfactory) while adding a memory load in the other domain (e.g., spatial). Control conditions equated the retention interval delay without adding a second memory load. In a further experiment, we used proactive interference [5-7] in the spatial domain to compromise spatial memory and evaluated the impact of adding an olfactory memory load. Olfactory and spatial memory are resistant to interference from the addition of a memory load in the other domain. Our data suggest that olfactory and spatial memory draw on independent working memory systems in the rat. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cruz, Aristides I; Lakomkin, Nikita; Fabricant, Peter D; Lawrence, J Todd R
2016-06-01
Most studies examining the safety and efficacy of transphyseal anterior cruciate ligament (ACL) reconstruction for skeletally immature patients utilize transtibial drilling. Independent femoral tunnel drilling may impart a different pattern of distal femoral physeal involvement. To radiographically assess differences in distal femoral physeal disruption between transtibial and independent femoral tunnel drilling. We hypothesized that the more oblique tunnels associated with independent drilling involve a significantly larger area of physeal disruption compared with vertically oriented tunnels. Cross-sectional study; Level of evidence, 3. We analyzed skeletally immature patients aged between 10 and 15 years who underwent transphyseal ACL reconstruction utilizing an independent femoral tunnel drilling technique between January 1, 2008, and March 31, 2011. These patients were matched with a transtibial-technique cohort based on age and sex. Radiographic measurements were recorded from preoperative magnetic resonance imaging and postoperative radiographs. Ten patients in each group were analyzed. There were significant differences between the independent drilling and transtibial drilling cohorts in the estimated area of physeal disruption (1.64 vs 0.74 cm²; P < .001), femoral tunnel angles (32.1° vs 72.8°; P < .001), and medial/lateral location of the femoral tunnel (24.2 vs 36.1 mm from the lateral cortex; P = .001), respectively. There was a significant inverse correlation between femoral tunnel angle and estimated area of distal femoral physeal disruption (r = -0.8255, P = .003). Femoral tunnels created with an independent tunnel drilling technique disrupt a larger area of the distal femoral physis and create more eccentric tunnels compared with a transtibial technique. As most studies noting the safety of transphyseal ACL reconstruction have utilized a central, vertical femoral tunnel, surgeons should be aware that if an independent femoral tunnel technique is utilized during transphyseal ACL reconstruction, more physeal tissue is at risk and tunnels are more eccentrically placed across the physis when drilling at more horizontal angles. Prior studies have shown that greater physeal involvement and eccentric tunnels may increase the risk of growth disturbance.
Current Concepts and Ongoing Research in the Prevention and Treatment of Open Fracture Infections
Hannigan, Geoffrey D.; Pulos, Nicholas; Grice, Elizabeth A.; Mehta, Samir
2015-01-01
Significance: Open fractures are fractures in which the bone has violated the skin and soft tissue. Because of their severity, open fractures are associated with complications that can result in increased lengths of hospital stays, multiple operative interventions, and even amputation. One of the factors thought to influence the extent of these complications is exposure and contamination of the open fracture with environmental microorganisms, potentially those that are pathogenic in nature. Recent Advances: Current open fracture care aims to prevent infection by wound classification, prophylactic antibiotic administration, debridement and irrigation, and stable fracture fixation. Critical Issues: Despite these established treatment paradigms, infections and infection-related complications remain a significant clinical burden. To address this, improvements need to be made in our ability to detect bacterial infections, effectively remove wound contamination, eradicate infections, and treat and prevent biofilm formation associated with fracture fixation hardware. Future Directions: Current research is addressing these critical issues. While culture methods are of limited value, culture-independent molecular techniques are being developed to provide informative detection of bacterial contamination and infection. Other advanced contamination- and infection-detecting techniques are also being investigated. New hardware-coating methods are being developed to minimize the risk of biofilm formation in wounds, and immune stimulation techniques are being developed to prevent open fracture infections. PMID:25566415
Three-dimensional mosaicking of the South Korean radar network
NASA Astrophysics Data System (ADS)
Berenguer, Marc; Sempere-Torres, Daniel; Lee, GyuWon
2016-04-01
Dense radar networks offer the possibility of improved Quantitative Precipitation Estimation thanks to the additional information collected in the overlapping areas, which allows mitigating errors associated with the Vertical Profile of Reflectivity or path attenuation by intense rain. With this aim, Roca-Sancho et al. (2014) proposed a technique to generate 3-D reflectivity mosaics from the multiple radars of a network. The technique is based on an inverse method that simulates the radar sampling of the atmosphere considering the characteristics (location, frequency, and scanning protocol) of each individual radar. This technique has been applied to mosaic the observations of the radar network of South Korea (composed of 14 S-band radars) and to integrate the observations of the small X-band network that is to be installed near Seoul in the framework of a project funded by the Korea Agency for Infrastructure Technology Advancement (KAIA). The evaluation of the generated 3-D mosaics has been done by comparison with point measurements (i.e., rain gauges and disdrometers) and with the observations of independent radars. Reference: Roca-Sancho, J., M. Berenguer, and D. Sempere-Torres (2014), An inverse method to retrieve 3D radar reflectivity composites, Journal of Hydrology, 519, 947-965, doi:10.1016/j.jhydrol.2014.07.039.
A novel minimally invasive dual-modality fiber optic probe for prostate cancer detection
NASA Astrophysics Data System (ADS)
Sharma, Vikrant
Prostate cancer is the most common form of cancer in males and the second leading cause of cancer-related deaths in the United States. In prostate cancer diagnostics and therapy, there is a critical need for a minimally invasive tool for in vivo evaluation of prostate tissue. Such a tool finds its niche in improving the TRUS (trans-rectal ultrasound) guided biopsy procedure, surgical margin assessment during radical prostatectomy, and active surveillance of patients at certain risk levels. This work focuses on the development of a fiber-based dual-modality optical device (dMOD) to differentiate prostate cancer from benign tissue in vivo. dMOD utilizes two independent optical techniques, LRS (light reflectance spectroscopy) and AFLS (auto-fluorescence lifetime spectroscopy). LRS quantifies the scattering coefficient of the tissue, as well as the concentrations of major tissue chromophores such as hemoglobin derivatives, β-carotene, and melanin. AFLS was designed to target lifetime signatures of multiple endogenous fluorophores such as flavins, porphyrins, and lipo-pigments. Each method was developed independently, and the two modalities were integrated using a thin (1-mm outer diameter) fiber-optic probe. The resulting dMOD probe was implemented and evaluated on animal models of prostate cancer, as well as on human prostate tissue. Application of dMOD to human breast cancer (invasive ductal carcinoma) identification was also evaluated. The results reveal that both LRS and AFLS are excellent techniques for discriminating prostate cancer tissue from surrounding benign tissue in animal models. Each technique is independently capable of providing near-absolute (100%) accuracy for cancer detection, indicating that either could be used alone without implementing them together. Also, in the case of human breast cancer, LRS and AFLS provided accuracies comparable to dMOD, with LRS accuracy (96%) being the highest for the studied population. However, the dual-modality integration proved to be ideal for human prostate cancer detection, as dMOD provided much better accuracy, i.e., 82.7% for cancer detection in intra-capsular prostatic tissues (ICT) and 92.4% for cancer detection in extra-capsular prostatic tissues (ECT), when compared with either LRS (74.7% ICT, 86.6% ECT) or AFLS (67.1% ICT, 82.1% ECT) alone. A classification algorithm was also developed to identify different grades of prostate cancer based on Gleason scores (GS). When stratified by grade, each high-grade prostate cancer (GS 7, 8, and 9) was successfully identified using dMOD with excellent accuracy in ICT (88%, 90%, 85%) as well as ECT (91%, 92%, 94%).
Are independent probes truly independent?
Camp, Gino; Pecher, Diane; Schmidt, Henk G; Zeelenberg, René
2009-07-01
The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval process. Participants first studied a subset of cues (e.g., rope) that were subsequently studied together with a target in a 2nd study phase (e.g., rope-sailing, sunflower-yellow). In the test phase, an extralist category cue (e.g., sports, color) was presented, and participants were instructed to recall an item from the study list that was a member of the category (e.g., sailing, yellow). The experiments showed that previous study of the paired-associate word (e.g., rope) enhanced category cued recall even though this word was not presented at test. This experimental demonstration of covert cuing has important implications for the effectiveness of the independent cue technique.
Jaipuria, Jiten; Suryavanshi, Manav; Sen, Tridib K
2016-12-01
To assess the reliability of the Guy's Stone Score, the Seoul National University Renal Stone Complexity (S-ReSC) score and the S.T.O.N.E. scores in percutaneous nephrolithotomy (PCNL), and to assess their utility in discriminating outcomes [stone-free rate (SFR), complications, need for multiple PCNL sessions, and auxiliary procedures] across surgeon experience, surgical approach, and variations in institution-specific instrumentation. A prospectively maintained database of two tertiary institutions was analysed (606 cases). The institutes differed in instrumentation, while the overall surgical team comprised: two trainees (experience <100 cases), two junior consultants (experience 100-200 cases), and two senior surgeons (experience >1000 cases). Scores were assigned and re-assigned after 4 months by one trainee and an expert surgeon. Inter-rater and test-retest agreement were analysed by Cohen's κ and intraclass correlation coefficient. Multivariate logistic regression models were created adjusting outcomes for the institution, comorbidity, Amplatz size, access tract location, the number of punctures, the experience level of the surgeon, and individual scoring system, and receiver operating characteristic curves were analysed for comparison. Despite some areas of inconsistency, individually all scores had excellent inter-rater and test-retest concordance. On multivariable analyses, while the experience of the surgeon and surgical approach characteristics (such as access tract location, Amplatz size, and number of punctures) remained independently associated with different outcomes in varying combinations, calculus complexity scores were consistently found to be independently associated with all outcomes. The S-ReSC score had a superior association with SFR, the need for multiple PCNL sessions, and auxiliary procedures. Individually all scoring systems performed well. On cross comparison, the S-ReSC score consistently emerged as the score most strongly associated with all outcomes, signifying the importance of the distributional complexity of the calculus (which also indirectly amalgamates the influence of stone number, size, and anatomical location) in discriminating outcomes. Our study demonstrates the utility of scoring systems in prognosticating multiple outcomes and also clarifies important aspects of their practical application, including future roles such as benchmarking, audit, training, and objective assessment of surgical technique modifications. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
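A hedged sketch of this style of analysis, on synthetic data: a multivariable logistic regression for stone-free status with a complexity score and adjustment covariates, followed by an ROC comparison of the score alone. All variable names, coefficients, and values are illustrative assumptions, not the study's dataset.

```python
# Sketch of a multivariable logistic model plus ROC comparison (assumed data).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 606
sresc = rng.integers(1, 10, n)              # S-ReSC-like complexity score
surgeon_exp = rng.integers(0, 3, n)         # 0 trainee, 1 junior, 2 senior
amplatz = rng.choice([24, 30], n)           # Amplatz sheath size (Fr)
logit = 2.5 - 0.45 * sresc + 0.4 * surgeon_exp
stone_free = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([sresc, surgeon_exp, amplatz]))
model = sm.Logit(stone_free, X).fit(disp=0)
print(model.summary(xname=["const", "S-ReSC", "experience", "Amplatz"]))

# discrimination of the score alone, as in an ROC comparison of systems
auc = roc_auc_score(stone_free, -sresc)     # higher score -> lower SFR
print(f"S-ReSC AUC for stone-free status: {auc:.2f}")
```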
Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Boccasini, Adele; La Martire, Maria L; Lang, Russell
2014-08-01
Recent literature has shown the possibility of enabling individuals with multiple disabilities to make telephone calls independently via computer-aided telephone technology. These two case studies assessed a modified version of such technology and a commercial alternative to it for a woman and a man with multiple disabilities, respectively. The modified version used in Study 1 (a) presented the names of the persons available for a call and (b) reminded the participant of the response she needed to perform (i.e., pressing a microswitch) if she wanted to call any of those names/persons. The commercial device used in Study 2 was a Galaxy S3 (Samsung) equipped with the S-voice module, which allowed the participant to activate phone calls by uttering the word "Call" followed by the name of the persons he wanted to call. The results of the studies showed that the participants learned to make phone calls independently using the technology/device available. Implications of the results are discussed.
Sutherland, R J; Lehmann, H
2011-06-01
We discuss very recent experiments with rodents addressing the idea that long-term memories initially depending on the hippocampus become, over a prolonged period, independent of it. No unambiguous recent evidence exists to substantiate that this occurs. Most experiments find that recent and remote memories are equally affected by hippocampal damage. Nearly all experiments that report spared remote memories suffer from two problems: retrieval could be based upon substantial regions of spared hippocampus, and recent memory is tested at intervals that are of the same order of magnitude as cellular consolidation. Accordingly, we point the way beyond systems consolidation theories, both the Standard Model of Consolidation and the Multiple Trace Theory, and propose a simpler multiple storage site hypothesis. On this view, with event reiterations, different memory representations are independently established in multiple networks. Many detailed memories always depend on the hippocampus; the others may be established and maintained independently. Copyright © 2011 Elsevier Ltd. All rights reserved.
Chen, Yundai; Wang, Changhua; Yang, Xinchun; Wang, Lefeng; Sun, Zhijun; Liu, Hongbin; Chen, Lian
2012-05-01
Independent predictors of no-reflow need to be evaluated in female patients with ST-segment elevation acute myocardial infarction (STEMI) successfully treated with primary percutaneous coronary intervention (PPCI) using current interventional equipment and techniques, so that a no-reflow predicting model can be constructed. In this study, 320 female patients with STEMI were successfully treated with PPCI within 12 h after the onset of AMI from 2007 to 2010. All clinical, angiographic, and procedural data were collected. Multiple logistic regression analysis was used to identify independent no-reflow predictors. No-reflow was found in 81 (25.3%) of the 320 female patients. Univariate and multivariate stepwise logistic regression analysis identified low SBP on admission (<100 mmHg; OR 1.991, 95% CI 1.018-3.896; p = 0.004), target lesion length >20 mm (OR 1.948, 95% CI 1.908-1.990; p = 0.016), collateral circulation 0-1 (OR 1.952, 95% CI 1.914-1.992; p = 0.019), pre-PCI thrombus score ≥ 4 (OR 4.184, 95% CI 1.482-11.813; p = 0.007), and IABP use before PCI (OR 1.949, 95% CI 1.168-3.253; p = 0.011) as independent no-reflow predictors. The no-reflow incidence significantly increased as the number of independent predictors increased [0% (0/2), 10.8% (9/84), 14.5% (17/117), 37.7% (29/77), 56.7% (17/30), and 81.8% (9/11) in female patients with 0, 1, 2, 3, 4, and 5 independent predictors, respectively; p < 0.0001]. The five no-reflow predicting variables were admission SBP <100 mmHg, target lesion length >20 mm, collateral circulation 0-1, pre-PCI thrombus score ≥ 4, and IABP use before PCI in female patients with STEMI treated with PPCI.
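A minimal sketch of the count-of-predictors model described above, on simulated data; predictor prevalences and the risk increment per predictor are assumptions chosen only to mimic the rising incidence pattern.

```python
# Count-of-predictors risk model: one point per predictor present, then
# tabulate no-reflow incidence per point total (all data simulated).
import numpy as np

rng = np.random.default_rng(2)
n = 320
# five binary predictors; prevalences are illustrative assumptions
predictors = np.column_stack([
    rng.binomial(1, p, n)
    for p in (0.25, 0.40, 0.35, 0.20, 0.30)
])  # SBP<100, lesion>20mm, collateral 0-1, thrombus>=4, IABP use
score = predictors.sum(axis=1)

# simulate no-reflow risk rising with the number of predictors present
no_reflow = rng.binomial(1, np.clip(0.05 + 0.16 * score, 0, 1))

for k in range(6):
    mask = score == k
    if mask.any():
        print(f"{k} predictors: {no_reflow[mask].mean():.1%} "
              f"no-reflow ({no_reflow[mask].sum()}/{mask.sum()})")
```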
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be theoretically derived as the stationary probability distribution of the multiplicative stochastic differential equation with mutually independent multiplicative and additive noises. Using this stochastic differential equation, a method to evaluate a default probability under a given risk buffer is then proposed.
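The one-dimensional version of such an equation, dx = -γx dt + σ_m x dW₁ + σ_a dW₂ with independent noises, can be simulated as below; parameter values are assumptions, and the heavy power-law tails of the stationary q-Gaussian law show up as positive excess kurtosis.

```python
# Illustrative Euler-Maruyama simulation (assumed parameters) of
#   dx = -gamma * x dt + sigma_m * x dW1 + sigma_a * dW2,
# whose stationary density is a q-Gaussian (heavy-tailed).
import numpy as np

rng = np.random.default_rng(3)
gamma, sigma_m, sigma_a = 1.0, 0.5, 0.5
dt, n_steps, n_paths = 1e-3, 10000, 2000

x = np.zeros(n_paths)
sq = np.sqrt(dt)
for _ in range(n_steps):
    dw1 = rng.normal(0.0, sq, n_paths)   # multiplicative noise increment
    dw2 = rng.normal(0.0, sq, n_paths)   # independent additive increment
    x += -gamma * x * dt + sigma_m * x * dw1 + sigma_a * dw2

excess_kurtosis = np.mean(x**4) / np.mean(x**2) ** 2 - 3.0
print(f"excess kurtosis of stationary sample: {excess_kurtosis:.2f} (Gaussian: 0)")
```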
Marshall, Leon; Carvalheiro, Luísa G; Aguirre-Gutiérrez, Jesús; Bos, Merijn; de Groot, G Arjen; Kleijn, David; Potts, Simon G; Reemer, Menno; Roberts, Stuart; Scheper, Jeroen; Biesmeijer, Jacobus C
2015-10-01
Species distribution models (SDM) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bees. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. The occurrence of bee species specialized in habitat and diet was better predicted than that of generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that suffer regular alterations (arable), particularly for small, solitary bees. As a conservation tool, SDMs are better suited to modeling rarer, specialist species than more generalist ones, and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models and historical land use generally has low thematic resolution. To improve SDMs' usefulness, models require explanatory variables and collection data that include detailed landscape characteristics, for example, variability of crops and flower availability. Additionally, testing SDMs with field surveys should involve multiple collection techniques.
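A compact sketch of the validation step on assumed synthetic data: Maxent-style suitability predictions are scored against independent presence records with ROC AUC, separately for trait groups, mirroring the specialist-versus-generalist contrast reported above.

```python
# Validating SDM predictions against independent records (synthetic data):
# specialists are assumed to track habitat suitability more tightly.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_sites = 400
for group, signal in [("specialist bees", 2.0), ("generalist bees", 0.6)]:
    suitability = rng.uniform(0, 1, n_sites)          # Maxent-style output
    presence = rng.binomial(
        1, 1 / (1 + np.exp(-signal * (suitability - 0.5) * 6)))
    print(f"{group}: AUC = {roc_auc_score(presence, suitability):.2f}")
```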
A method for independent component graph analysis of resting-state fMRI.
Ribeiro de Paula, Demetrius; Ziegler, Erik; Abeyasinghe, Pubuditha M; Das, Tushar K; Cavaliere, Carlo; Aiello, Marco; Heine, Lizette; di Perri, Carol; Demertzi, Athena; Noirhomme, Quentin; Charland-Verville, Vanessa; Vanhaudenhuyse, Audrey; Stender, Johan; Gomez, Francisco; Tshibanda, Jean-Flory L; Laureys, Steven; Owen, Adrian M; Soddu, Andrea
2017-03-01
Independent component analysis (ICA) has been extensively used for reducing task-free BOLD fMRI recordings into spatial maps and their associated time-courses. The spatially identified independent components can be considered as intrinsic connectivity networks (ICNs) of non-contiguous regions. To date, the spatial patterns of the networks have been analyzed with techniques developed for volumetric data. Here, we detail a graph building technique that allows these ICNs to be analyzed with graph theory. First, ICA was performed at the single-subject level in 15 healthy volunteers using a 3T MRI scanner. The identification of nine networks was performed by a multiple-template matching procedure and a subsequent component classification based on the network "neuronal" properties. Second, for each of the identified networks, the nodes were defined as 1,015 anatomically parcellated regions. Third, between-node functional connectivity was established by building edge weights for each network. Group-level graph analysis was finally performed for each network and compared to the classical network. Network graph comparison between the classically constructed network and the nine networks showed significant differences in the auditory and visual medial networks with regard to the average degree and the number of edges, while the visual lateral network showed a significant difference in small-worldness. This novel approach permits us to take advantage of the well-recognized power of ICA in BOLD signal decomposition and, at the same time, to make use of well-established graph measures to evaluate connectivity differences. Moreover, by providing a graph for each separate network, it offers the possibility to extract graph measures in a specific way for each network. This increased specificity could be relevant for studying pathological brain activity or altered states of consciousness as induced by anesthesia or sleep, where specific networks are known to be altered to different degrees.
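A small sketch (not the authors' pipeline) of the final step: once nodes and weighted edges are defined per network, standard graph measures can be compared across networks. Node counts, edge densities, and connectivity values below are assumptions.

```python
# Per-network graph measures from thresholded connectivity (synthetic data).
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n_nodes = 50                       # stand-in for the 1,015 parcellated regions

def network_graph(density):
    """Threshold a random symmetric connectivity matrix into a graph."""
    w = rng.uniform(0, 1, (n_nodes, n_nodes))
    w = (w + w.T) / 2                           # symmetric connectivity
    g = nx.Graph()
    g.add_nodes_from(range(n_nodes))
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if w[i, j] > 1 - density:           # keep strongest edges
                g.add_edge(i, j, weight=w[i, j])
    return g

for name, density in [("auditory", 0.10), ("visual medial", 0.20)]:
    g = network_graph(density)
    degrees = [d for _, d in g.degree()]
    print(f"{name}: edges={g.number_of_edges()}, mean degree={np.mean(degrees):.1f}")
```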
McDonald, Sarah D; Vermeulen, Marian J; Ray, Joel G
2007-07-01
Substance use in pregnancy is associated with placental abruption, but the risk of fetal death independent of abruption remains undetermined. Our objective was to examine the effect of maternal drug dependence on placental abruption and on fetal death in association with abruption and independent of it. To examine placental abruption and fetal death, we performed a retrospective population-based study of 1 854 463 consecutive deliveries of liveborn and stillborn infants occurring between January 1, 1995 and March 31, 2001, using the Canadian Institute for Health Information Discharge Abstract Database. Maternal drug dependence was associated with a tripling of the risk of placental abruption in singleton pregnancies (adjusted odds ratio [OR] 3.1; 95% confidence intervals [CI] 2.6-3.7), but not in multiple gestations (adjusted OR 0.88; 95% CI 0.12-6.4). Maternal drug dependence was associated with an increased risk of fetal death independent of abruption (adjusted OR 1.6; 95% CI 1.1-2.2) in singleton pregnancies, but not in multiples. Risk of fetal death was increased with placental abruption in both singleton and multiple gestations, even after controlling for drug dependence (adjusted OR 11.4 in singleton pregnancy; 95% CI 10.6-12.2, and 3.4 in multiple pregnancy; 95% CI 2.4-4.9). Maternal drug use is associated with an increased risk of intrauterine fetal death independent of placental abruption. In singleton pregnancies, maternal drug dependence is associated with an increased risk of placental abruption.
Netscher, David T; Lewis, Eric V
2008-06-01
A combination of nonvascularized multiple toe phalangeal transfers, web space deepening, and distraction lengthening may provide excellent function in the child born with the oligodactylous type of symbrachydactyly. These techniques may reconstruct multiple digits, maintaining a wide and stable grip span with good prehension to the thumb. We detail the techniques of each of these 3 stages in reconstruction and describe appropriate patient selection. Potential complications are discussed. However, with strict attention to technical details, these complications can be minimized.
Indirect fabrication of multiple post-and-core patterns with a vinyl polysiloxane matrix.
Sabbak, Sahar Asaad
2002-11-01
In the described technique, a vinyl polysiloxane material is used as a matrix for the indirect fabrication of multiple custom-cast posts and cores. The matrix technique enables the clinician to fabricate multiple posts and cores in a short period of time. The form, harmony, and common axis of preparation for all cores are well controlled before the definitive crown/fixed partial denture restorations are fabricated. Oral tissues are not exposed to the heat of polymerization or the excess monomer of the resin material when this technique is used.
Hydra multiple head star sensor and its in-flight self-calibration of optical heads alignment
NASA Astrophysics Data System (ADS)
Majewski, L.; Blarre, L.; Perrimon, N.; Kocher, Y.; Martinez, P. E.; Dussy, S.
2017-11-01
HYDRA is EADS SODERN's new product line of APS-based autonomous star trackers. The baseline is a multiple-head sensor made of three separate optical heads and one electronic unit. The chosen concept offers more than three single-head star trackers working independently: since HYDRA merges all fields of view, the result is a more accurate, more robust and completely autonomous multiple-head sensor, releasing the AOCS from the need to manage the outputs of independent single-head star trackers. Specific to the multiple-head architecture and the underlying data fusion is the calibration of the relative alignments between the sensor optical heads; the performance of the sensor is tied to its estimation of these alignments. The HYDRA design is first reviewed in this paper, along with the simplifications it can bring at system level (AOCS). Then the self-calibration of optical-head alignment is highlighted through descriptions and simulation results, demonstrating the performance of a key part of the HYDRA multiple-head concept.
ERIC Educational Resources Information Center
Viens, Julie; Kallenbach, Silja
2001-01-01
Dr. Howard Gardner's introduction of multiple intelligences theory (MI theory) in 1983 generated considerable interest in the educational community. Multiple intelligences was a provocative new theory, claiming at least seven relatively independent intelligences. MI theory presented a conception of intelligence that was in marked contrast to the…
Monitoring and Management of a Sensitive Resource: A Landscape-level Approach with Amphibians
2001-03-01
Results show that each technique is effective for a portion of the amphibian community and that the use of multiple techniques is essential to any...combinations of species. These results show that multiple techniques are needed for a full assessment of amphibian populations and communities at...against which future assessments of amphibian populations and communities on each installation can be evaluated. The standardized techniques used in FY
Orthogonal Cas9 proteins for RNA-guided gene regulation and editing
Church, George M.; Esvelt, Kevin; Mali, Prashant
2017-03-07
Methods of modulating expression of a target nucleic acid in a cell are provided including use of multiple orthogonal Cas9 proteins to simultaneously and independently regulate corresponding genes or simultaneously and independently edit corresponding genes.
Rafati, Hasan; Talebpour, Zahra; Adlnasab, Laleh; Ebrahimi, Samad Nejad
2009-07-01
In this study, pH-responsive macroparticles incorporating peppermint oil (PO) were prepared using a simple emulsification/polymer precipitation technique. The formulations were examined for their properties and the desired quality was then achieved using a quality by design (QbD) approach. For this purpose, a Draper-Lin small composite design study was employed in order to investigate the effect of four independent variables, including the PO to water ratio, the concentration of pH-sensitive polymer (hydroxypropyl methylcellulose phthalate), acid and plasticizer concentrations, on the encapsulation efficiency and PO loading. The analysis of variance showed that the polymer concentration was the most important variable for encapsulation efficiency (p < 0.05). The multiple regression analysis of the results led to equations that adequately described the influence of the independent variables on the selected responses. Furthermore, the desirability function was employed as an effective tool for transforming each response separately and encompassing all of these responses in an overall desirability function for global optimization of the encapsulation process. The optimized macroparticles were predicted to yield 93.4% encapsulation efficiency and 72.8% PO loading, which were remarkably close to the experimental values of 89.2% and 69.5%, respectively.
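A minimal sketch of the overall-desirability computation; the target ranges and candidate values are assumptions, and a Derringer-type linear desirability stands in for whatever exact form the study used.

```python
# Overall desirability as a geometric mean of per-response desirabilities.
import numpy as np

def desirability(y, low, high):
    """Larger-is-better desirability: 0 below `low`, 1 above `high`."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

# candidate formulations: (encapsulation efficiency %, PO loading %)
candidates = {"A": (93.4, 72.8), "B": (85.0, 75.0), "C": (90.0, 60.0)}
for name, (ee, loading) in candidates.items():
    d_ee = desirability(ee, low=70, high=95)        # assumed target ranges
    d_load = desirability(loading, low=50, high=80)
    overall = np.sqrt(d_ee * d_load)                # geometric mean of the two
    print(f"formulation {name}: overall desirability = {overall:.2f}")
```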
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
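The hierarchy-building logic can be sketched as below. The paper fits a Bayesian finite mixture by Hamiltonian Monte Carlo sampling; here scikit-learn's maximum-likelihood Gaussian mixture stands in so the recursive two-way partitioning is runnable on synthetic data.

```python
# Recursive two-cluster partitioning with a two-component mixture model
# (sklearn's GMM as an illustrative stand-in for the Bayesian mixture).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# synthetic multivariate "geochemical" data: three latent source populations
X = np.vstack([
    rng.normal(loc, 0.5, (100, 4))
    for loc in ([0, 0, 0, 0], [3, 3, 0, 0], [3, 3, 4, 4])
])

def split(X, depth, label=""):
    """Recursively partition samples into two clusters per level."""
    if depth == 0 or len(X) < 20:
        return
    gm = GaussianMixture(n_components=2, random_state=0).fit(X)
    assign = gm.predict(X)
    for k in (0, 1):
        sub = X[assign == k]
        print(f"cluster {label}{k}: n={len(sub)}, mean={sub.mean(axis=0).round(1)}")
        split(sub, depth - 1, label=f"{label}{k}.")

split(X, depth=2)
```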
Method and apparatus to predict the remaining service life of an operating system
Greitzer, Frank L.; Kangas, Lars J.; Terrones, Kristine M.; Maynard, Melody A.; Pawlowski, Ronald A. , Ferryman; Thomas A.; Skorpik, James R.; Wilson, Bary W.
2008-11-25
A method and computer-based apparatus for monitoring the degradation of, predicting the remaining service life of, and/or planning maintenance for, an operating system are disclosed. Diagnostic information on degradation of the operating system is obtained through measurement of one or more performance characteristics by one or more sensors onboard and/or proximate the operating system. Though not required, it is preferred that the sensor data are validated to improve the accuracy and reliability of the service life predictions. The condition or degree of degradation of the operating system is presented to a user by way of one or more calculated, numeric degradation figures of merit that are trended against one or more independent variables using one or more mathematical techniques. Furthermore, more than one trendline and uncertainty interval may be generated for a given degradation figure of merit/independent variable data set. The trendline(s) and uncertainty interval(s) are subsequently compared to one or more degradation figure of merit thresholds to predict the remaining service life of the operating system. The present invention enables multiple mathematical approaches in determining which trendline(s) to use to provide the best estimate of the remaining service life.
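A minimal sketch of the trending step on synthetic data (the figure of merit, threshold, and noise level are assumptions): fit a trendline of a degradation figure of merit against an independent variable, then extrapolate to the threshold crossing to estimate remaining service life.

```python
# Trendline extrapolation to a degradation threshold (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(0, 500, 10.0)                       # independent variable
fom = 100 - 0.08 * hours + rng.normal(0, 1.5, hours.size)  # degradation FOM

coeffs = np.polyfit(hours, fom, deg=1)                # linear trendline
threshold = 40.0                                      # assumed failure level
hours_at_threshold = (threshold - coeffs[1]) / coeffs[0]
remaining = hours_at_threshold - hours[-1]
print(f"predicted remaining service life: {remaining:.0f} hours")

# a crude uncertainty interval from the residual scatter around the trend
resid_sd = np.std(fom - np.polyval(coeffs, hours))
spread = resid_sd / abs(coeffs[0])                    # hours per unit of noise
print(f"+/- roughly {spread:.0f} hours from trend scatter alone")
```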
Spontaneous abortion in multiple pregnancy: focus on fetal pathology.
Joó, József Gábor; Csaba, Ákos; Szigeti, Zsanett; Rigó, János
2012-08-15
Multiple pregnancy, with its wide array of medical consequences, is an important obstetric condition. We performed perinatal autopsy in 49 cases of spontaneous abortion resulting from multiple pregnancies during the study period. Twenty-seven of the 44 twin pregnancies ending in miscarriage were conceived naturally, whereas 17 were conceived through assisted reproductive techniques. Each of the 5 triplet pregnancies ending in miscarriage was conceived through assisted reproductive techniques. There was a positive history of miscarriage in 22.4% of the cases. Monochorial placentation occurred more commonly in multiple pregnancies terminating with miscarriage than in multiple pregnancies without miscarriage. A fetal congenital malformation was found in 8 cases. Three of these cases were conceived through assisted reproductive techniques, and 5 were conceived naturally. Miscarriage was due to intrauterine infection in 36% of the cases. Our study confirms that spontaneous abortion is more common in multiple than in singleton pregnancies. Monochorial placentation predicted a higher fetal morbidity and mortality. In pregnancies where all fetuses were of male gender, miscarriage was more common than in pregnancies where all fetuses were female. Assisted reproductive techniques do not predispose to the development of fetal malformations. Copyright © 2012 Elsevier GmbH. All rights reserved.
Prediction and outcomes of impossible mask ventilation: a review of 50,000 anesthetics.
Kheterpal, Sachin; Martin, Lizabeth; Shanks, Amy M; Tremper, Kevin K
2009-04-01
There are no existing data regarding risk factors for impossible mask ventilation and limited data regarding its incidence. The authors sought to determine the incidence, predictors, and outcomes associated with impossible mask ventilation. The authors performed an observational study over a 4-yr period. For each adult patient undergoing a general anesthetic, preoperative patient characteristics, detailed airway physical exam, and airway outcome data were collected. The primary outcome was impossible mask ventilation defined as the inability to exchange air during bag-mask ventilation attempts, despite multiple providers, airway adjuvants, or neuromuscular blockade. Secondary outcomes included the final, definitive airway management technique and direct laryngoscopy view. The incidence of impossible mask ventilation was calculated. Independent (P < 0.05) predictors of impossible mask ventilation were identified by performing a logistic regression full model fit. Over a 4-yr period from 2004 to 2008, 53,041 attempts at mask ventilation were recorded. A total of 77 cases of impossible mask ventilation (0.15%) were observed. Neck radiation changes, male sex, sleep apnea, Mallampati III or IV, and presence of beard were identified as independent predictors. The receiver-operating-characteristic area under the curve for this model was 0.80 ± 0.03. Nineteen impossible mask ventilation patients (25%) also demonstrated difficult intubation, with 15 being intubated successfully. Twelve patients required an alternative intubation technique, including two surgical airways and two patients who were awakened and underwent successful fiberoptic intubation. Impossible mask ventilation is an infrequent airway event that is associated with difficult intubation. Neck radiation changes represent the most significant clinical predictor of impossible mask ventilation in the patient dataset.
Deep Temporal Nerve Transfer for Facial Reanimation: Anatomic Dissections and Surgical Case Report.
Mahan, Mark A; Sivakumar, Walavan; Weingarten, David; Brown, Justin M
2017-09-08
Facial nerve palsy is a disabling condition that may arise from a variety of injuries or insults and may occur at any point along the nerve or its intracerebral origin. To examine the use of the deep temporal branches of the motor division of the trigeminal nerve for neural reconstruction of the temporal branches of the facial nerve for restoration of active blink and periorbital facial expression. Formalin-fixed human cadaver hemifaces were dissected to identify landmarks for the deep temporal branches and the tension-free coaptation lengths. This technique was then utilized in 1 patient with a history of facial palsy due to a brainstem cavernoma. Sixteen hemifaces were dissected. The middle deep temporal nerve could be consistently identified on the deep side of the temporalis, within 9 to 12 mm posterior to the jugal point of the zygoma. From a lateral approach through the temporalis, the middle deep temporal nerve could be directly coapted to facial temporal branches in all specimens. Our patient has recovered active and independent upper facial muscle contraction, providing the first case report of a distinct distal nerve transfer for upper facial function. The middle deep temporal branches can be readily identified and utilized for facial reanimation. This technique provided a successful reanimation of upper facial muscles with independent activation. By utilizing multiple sources for neurotization of the facial muscles, different portions of the face can be selectively reanimated to reduce the risk of synkinesis and improve control. Copyright © 2017 by the Congress of Neurological Surgeons
A new multicriteria risk mapping approach based on a multiattribute frontier concept.
Yemshanov, Denys; Koch, Frank H; Ben-Haim, Yakov; Downing, Marla; Sapio, Frank; Siltanen, Marty
2013-09-01
Invasive species risk maps provide broad guidance on where to allocate resources for pest monitoring and regulation, but they often present individual risk components (such as climatic suitability, host abundance, or introduction potential) as independent entities. These independent risk components are integrated using various multicriteria analysis techniques that typically require prior knowledge of the risk components' importance. Such information is often nonexistent for many invasive pests. This study proposes a new approach for building integrated risk maps using the principle of a multiattribute efficient frontier and analyzing the partial order of elements of a risk map as distributed in multidimensional criteria space. The integrated risks are estimated as subsequent multiattribute frontiers in dimensions of individual risk criteria. We demonstrate the approach with the example of Agrilus biguttatus Fabricius, a high-risk pest that may threaten North American oak forests in the near future. Drawing on U.S. and Canadian data, we compare the performance of the multiattribute ranking against a multicriteria linear weighted averaging technique in the presence of uncertainties, using the concept of robustness from info-gap decision theory. The results show major geographic hotspots where the consideration of tradeoffs between multiple risk components changes integrated risk rankings. Both methods delineate similar geographical regions of high and low risks. Overall, aggregation based on a delineation of multiattribute efficient frontiers can be a useful tool to prioritize risks for anticipated invasive pests, which usually have an extremely poor prior knowledge base. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
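The frontier-peeling idea can be sketched as follows, on synthetic risk criteria: repeatedly extract the non-dominated set (the current multiattribute frontier) and assign it the next integer rank. Criteria count and data are assumptions.

```python
# Ranking map cells by successive multiattribute (Pareto) frontiers.
import numpy as np

rng = np.random.default_rng(8)
# rows: map cells; columns: risk criteria (e.g. climate, hosts, introduction)
risk = rng.uniform(0, 1, (200, 3))

def non_dominated(points):
    """Boolean mask of points not dominated by any other (higher = riskier)."""
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        others = np.delete(points, i, axis=0)
        dominated = np.any(np.all(others >= p, axis=1) &
                           np.any(others > p, axis=1))
        mask[i] = not dominated
    return mask

ranks = np.zeros(len(risk), dtype=int)
remaining = np.arange(len(risk))
rank = 1
while remaining.size:
    front = non_dominated(risk[remaining])
    ranks[remaining[front]] = rank      # rank 1 = first frontier, and so on
    remaining = remaining[~front]
    rank += 1
print("cells per frontier:", np.bincount(ranks)[1:6], "...")
```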
Agent independent task planning
NASA Technical Reports Server (NTRS)
Davis, William S.
1990-01-01
Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.
Theory of Multiple Coulomb Scattering from Extended Nuclei
DOE R&D Accomplishments Database
Cooper, L. N.; Rainwater, J.
1954-08-01
Two independent methods are described for calculating the multiple scattering distribution for projected angle scattering resulting when very high energy charged particles traverse a thick scatterer. The results are compared with the theories of Moliere and Olbert.
Characterising and modelling regolith stratigraphy using multiple geophysical techniques
NASA Astrophysics Data System (ADS)
Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.
2013-12-01
Regolith is the weathered, typically mineral-rich layer from fresh bedrock to land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below is the weathered C horizon that retains at least some of the original rocky fabric and structure. At the base of this is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many 10's of metres deep. Comparatively little is known about regolith, and critical questions remain regarding composition and characteristics - especially deeper where the challenge of collecting reliable data increases with depth. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental scales. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally-based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine resolution (i.e. < 1 m) regolith stratigraphy. The geophysics included: ground penetrating radar collected at a number of frequencies; multiple frequency, multiple coil electromagnetic induction; and high resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m - or to core refusal. The intact cores were sub-sampled to standard depths and analysed for regolith properties to compile core datasets consisting of: water content; texture; electrical conductivity; and weathered state. After preprocessing (filtering, geo-registration, depth correction, etc.) each geophysical profile was evaluated by matching the core data. Applying traditional geophysical techniques, the best profiles were inverted using the core data creating two-dimensional (2-D) stratigraphic regolith models for each transect, and evaluated using independent validation. Next, in a test of an alternative method borrowed from digital soil mapping, the best preprocessed geophysical profiles were co-registered and stratigraphic models for each property created using multivariate environmental correlation. After independent validation, the qualities of the latest models were compared to the traditionally derived 2-D inverted models. Finally, the best overall stratigraphic models were used in conjunction with local environmental data (e.g. geology, geochemistry, terrain, soils) to create conceptual regolith hillslope models for each transect highlighting important features and processes, e.g. morphology, hydropedology and weathering characteristics. Results are presented with recommendations regarding the use of geophysics in modelling regolith stratigraphy at fine scales.
Effects of PECS Phase III Application Training on Independent Mands in Young Children with Autism
ERIC Educational Resources Information Center
Love, Jessica June
2013-01-01
The purpose of this study was to examine the effects of PECS phase III application training on independent mands in young children with autism. Participants were five children with autism ranging from ages 2 to 4 years old. A multiple baseline across participants was used to evaluate acquisition of independent correct mands across baseline and…
Ghafari, Somayeh; Ahmadi, Fazlolah; Nabavi, Masoud; Anoshirvan, Kazemnejad; Memarian, Robabe; Rafatbakhsh, Mohamad
2009-08-01
To identify the effects of applying the Progressive Muscle Relaxation Technique (PMRT) on the quality of life of patients with multiple sclerosis. In view of the growing care options in multiple sclerosis, improvement of quality of life has become increasingly relevant as a caring intervention. Complementary therapies are widely used by multiple sclerosis patients, and the Progressive Muscle Relaxation Technique is one such therapy. Quasi-experimental study. Multiple sclerosis patients (n = 66) were selected by non-probability sampling and then assigned to experimental and control groups (33 patients in each group). Means of data collection included an Individual Information Questionnaire, the SF-8 Health Survey, and a self-reported checklist. PMRT was performed for 63 sessions by the experimental group over two months, while no intervention was given to the control group. Statistical analysis was done with SPSS software. Student's t-test showed no significant difference between the two groups in mean scores of health-related quality of life before the study, but a significant difference between the two groups one and two months after the intervention (p < 0.05). Repeated-measures ANOVA showed a significant difference in the mean scores of overall health-related quality of life and its dimensions between the two groups across the three time points (p < 0.05). Although this study provides modest support for the effectiveness of the Progressive Muscle Relaxation Technique on the quality of life of multiple sclerosis patients, further research is required to determine better methods to promote the quality of life of patients suffering from multiple sclerosis and other chronic diseases. The Progressive Muscle Relaxation Technique is practically feasible and is associated with an increase in the quality of life of multiple sclerosis patients; health professionals therefore need to update their knowledge about complementary therapies.
Case study on complex sporadic E layers observed by GPS radio occultations
NASA Astrophysics Data System (ADS)
Yue, X.; Schreiner, W. S.; Zeng, Z.; Kuo, Y.-H.; Xue, X.
2015-01-01
The occurrence of sporadic E (Es) layers has been a hot scientific topic for a long time. The GNSS (global navigation satellite system)-based radio occultation (RO) has proven to be a powerful technique for detecting the global E
The impact of multiple endpoint dependency on Q and I(2) in meta-analysis.
Thompson, Christopher Glen; Becker, Betsy Jane
2014-09-01
A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on homogeneity measures Q and I(2) in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I(2) values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
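A hedged simulation sketch of the comparison (not the paper's code; the correlation, variances, and study counts are assumptions): under a true common effect, a univariate Q test that ignores within-study correlation rejects too often, while the GLS version weighted by the inverse variance-covariance matrix stays near the nominal level.

```python
# Type I error of the homogeneity Q test with correlated endpoints:
# univariate (ignores correlation) versus GLS (models it).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(9)
k, rho, v = 20, 0.8, 0.04              # studies, within-study corr., variance
cov = v * np.array([[1, rho], [rho, 1]])
V_inv = np.kron(np.eye(k), np.linalg.inv(cov))   # block-diagonal inverse
one = np.ones(2 * k)
crit = chi2.ppf(0.95, 2 * k - 1)

hits_univ = hits_gls = 0
for _ in range(2000):                  # equal true effects, so H0 holds
    d = rng.multivariate_normal([0.3, 0.3], cov, size=k).ravel()
    # (a) univariate Q treating all 2k effects as independent
    dbar = d.mean()                    # equal weights since variances equal
    q = np.sum((d - dbar) ** 2) / v
    hits_univ += q > crit
    # (b) GLS Q weighted by the inverse variance-covariance matrix
    mu = (one @ V_inv @ d) / (one @ V_inv @ one)
    hits_gls += (d - mu) @ V_inv @ (d - mu) > crit

print(f"Type I error, univariate Q: {hits_univ / 2000:.3f}")   # inflated
print(f"Type I error, GLS Q:        {hits_gls / 2000:.3f}")    # near 0.05
```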
Leslie, Daniel C; Melnikoff, Brett A; Marchiarullo, Daniel J; Cash, Devin R; Ferrance, Jerome P; Landers, James P
2010-08-07
Quality control of microdevices adds significant costs, in time and money, to any fabrication process. A simple, rapid quantitative method for the post-fabrication characterization of microchannel architecture using the measurement of flow with volumes relevant to microfluidics is presented. By measuring the mass of a dye solution passed through the device, it circumvents traditional gravimetric and interface-tracking methods that suffer from variable evaporation rates and the increased error associated with smaller volumes. The multiplexed fluidic resistance (MFR) measurement method measures flow via stable visible-wavelength dyes, a standard spectrophotometer and common laboratory glassware. Individual dyes are used as molecular markers of flow for individual channels, and in channel architectures where multiple channels terminate at a common reservoir, spectral deconvolution reveals the individual flow contributions. On-chip, this method was found to maintain accurate flow measurement at lower flow rates than the gravimetric approach. Multiple dyes are shown to allow for independent measurement of multiple flows on the same device simultaneously. We demonstrate that this technique is applicable for measuring the fluidic resistance, which is dependent on channel dimensions, in four fluidically connected channels simultaneously, ultimately determining that one chip was partially collapsed and, therefore, unusable for its intended purpose. This method is thus shown to be widely useful in troubleshooting microfluidic flow characteristics.
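A toy sketch of the spectral-deconvolution step (dye spectra, concentrations, and noise are assumptions): the mixed absorbance measured at a common reservoir is unmixed into per-dye contributions by non-negative least squares, following the Beer-Lambert linearity the method relies on.

```python
# Linear unmixing of a mixed dye spectrum into per-channel contributions.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(400, 700, 61)

def gaussian_band(center, width=40.0):
    """Idealised single-dye absorbance spectrum (unit peak)."""
    return np.exp(-((wavelengths - center) ** 2) / (2 * width**2))

# reference spectra of three dyes, one per channel (columns of the matrix)
E = np.column_stack([gaussian_band(c) for c in (450, 550, 630)])

true_conc = np.array([0.8, 0.3, 0.5])      # proportional to channel flows
mixed = E @ true_conc + np.random.default_rng(10).normal(0, 0.005, wavelengths.size)

est_conc, _ = nnls(E, mixed)               # non-negative least squares
print("recovered relative flows:", est_conc.round(2))
```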
Interpret with caution: multicollinearity in multiple regression of cognitive data.
Morrison, Catriona M
2003-08-01
Shibihara and Kondo in 2002 reported a reanalysis of the 1997 Kanji picture-naming data of Yamazaki, Ellis, Morrison, and Lambon-Ralph in which independent variables were highly correlated. Their addition of the variable visual familiarity altered the previously reported pattern of results, indicating that visual familiarity, but not age of acquisition, was important in predicting Kanji naming speed. The present paper argues that caution should be taken when drawing conclusions from multiple regression analyses in which the independent variables are so highly correlated, as such multicollinearity can lead to unreliable output.
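A standard diagnostic for this situation is the variance inflation factor; the sketch below, on synthetic data with deliberately collinear predictors, shows the kind of check this caution calls for. Variable names are illustrative, not the original dataset.

```python
# Variance inflation factors for collinear regression predictors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(11)
n = 200
age_of_acquisition = rng.normal(0, 1, n)
# visual familiarity made strongly correlated with age of acquisition
visual_familiarity = -0.9 * age_of_acquisition + rng.normal(0, 0.3, n)
word_frequency = rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack(
    [age_of_acquisition, visual_familiarity, word_frequency]))
names = ["const", "AoA", "visual familiarity", "frequency"]
for i, name in enumerate(names[1:], start=1):    # skip the intercept
    print(f"VIF {name}: {variance_inflation_factor(X, i):.1f}")
# VIFs well above ~5-10 flag coefficients whose signs and significance
# can flip when another collinear predictor enters the model
```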
Black Male Labor Force Participation.
ERIC Educational Resources Information Center
Baer, Roger K.
This study attempts to test (via multiple regression analysis) hypothesized relationships between designated independent variables and age specific incidences of labor force participation for black male subpopulations in 54 Standard Metropolitan Statistical Areas. Leading independent variables tested include net migration, earnings, unemployment,…
Ménez, T; Michot, A; Tamburino, S; Weigert, R; Pinsolle, V
2018-04-01
Breast reconstruction techniques are multiple, and they should be chosen to improve women's satisfaction and well-being, thus providing a personalized treatment. This report's major purpose was to study, through the Breast-Q questionnaire, how the functional and aesthetic outcomes, as well as the complications, of the main autologous breast reconstruction techniques can affect patients' long-term quality of life and well-being. The secondary purpose was to identify the independent factors characterizing the different reconstructive techniques that may affect patients' satisfaction. Women who underwent autologous breast reconstruction through a deep inferior epigastric artery perforator or Latissimus dorsi muscle flap from May 2006 to May 2013 were included. The assessment was based on the Breast-Q reconstruction questionnaire. All timings of post-mastectomy reconstruction were covered: immediate, delayed, after previous procedure failure, or conversion to another reconstructive technique due to the patient's dissatisfaction. A total of 98 patients were included. Concerning patients' satisfaction, the Breast-Q score was highest in patients who underwent immediate breast reconstruction, while scores after delayed breast reconstruction, previous surgery failure or conversion to another technique were generally equivalent. Higher scores were observed in patients who underwent reconstruction through autologous Latissimus dorsi compared with Latissimus dorsi with prosthetic implant reconstruction. The authors identified factors of higher patient satisfaction, such as the absence of major complications and advanced patient age, in order to personalize surgical planning according to the patient's priorities. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Li, Yankun; Shao, Xueguang; Cai, Wensheng
2007-04-15
Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on this principle, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; the consensus LS-SVR technique was then used to build the calibration model. With an optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.
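A hedged sketch of the consensus idea (not the paper's exact algorithm): several regression members trained on bootstrap resamples are averaged, with scikit-learn's SVR standing in for LS-SVR and synthetic data standing in for the preprocessed spectra.

```python
# Consensus regression: average predictions from bootstrap-trained members.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(12)
n_cal, n_wavelengths = 80, 50
X = rng.normal(0, 1, (n_cal, n_wavelengths))         # preprocessed spectra
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.2, n_cal) # reducing-sugar content

def consensus_fit(X, y, n_members=15):
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(y), len(y))        # bootstrap resample
        members.append(SVR(C=10.0, gamma="scale").fit(X[idx], y[idx]))
    return members

members = consensus_fit(X, y)
X_new = rng.normal(0, 1, (5, n_wavelengths))
consensus_pred = np.mean([m.predict(X_new) for m in members], axis=0)
print("consensus predictions:", consensus_pred.round(2))
```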
Projection Mapping User Interface for Disabled People.
Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis
2018-01-01
Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and person independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented reality information presentation method. The user interface combines a depth sensor and a projector to create camera-projector system. We provide a detailed description of camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such system can be adapted to the needs of people with various disabilities.
Using selective withdrawal to encapsulate pancreatic islets for immunoisolation
NASA Astrophysics Data System (ADS)
Wyman, Jason; Murphy, William; Mrksich, Milan
2005-11-01
We apply selective withdrawal for encapsulating insulin-producing pancreatic islets within thin poly(ethylene glycol) (PEG) coats. Islets placed in an aqueous PEG solution are drawn into the selective-withdrawal spout, which then breaks up, leaving the islets surrounded by a thin, 20 μm polymer coat. These coats, whose thickness is independent of the size of the encapsulated islet, are photo-crosslinked to form hydrogel capsules. We can apply multiple coats of varying chemical composition. These coats provide a semi-permeable membrane which allows the islets to respond to changes in glucose concentration by producing insulin in a manner similar to that of unencapsulated islets. Furthermore, the hydrogel capsules exclude large molecules the size of the smallest antibodies. Our results suggest that this microencapsulation technique may be useful for the transplantation of islets for treatment of Type I diabetes.
Lattice Boltzmann simulations of immiscible displacement process with large viscosity ratios
NASA Astrophysics Data System (ADS)
Rao, Parthib; Schaefer, Laura
2017-11-01
Immiscible displacement is a key physical mechanism involved in enhanced oil recovery and carbon sequestration processes. This multiphase flow phenomenon involves a complex interplay of viscous, capillary, inertial and wettability effects. The lattice Boltzmann (LB) method is an accurate and efficient technique for modeling and simulating multiphase/multicomponent flows, especially in complex flow configurations and media. In this presentation we show numerical simulation results of the displacement process in thin, long channels. The results are based on a new pseudo-potential multicomponent LB model with a multiple-relaxation-time (MRT) collision operator and an explicit forcing scheme. We demonstrate that the proposed model is capable of accurately simulating displacement processes involving fluids with a wide range of viscosity ratios (>100), and that it also yields viscosity-independent interfacial tension and a reduction of some important numerical artifacts.
A feature-based approach to combine functional MRI, structural MRI and EEG brain imaging data.
Calhoun, V; Adali, T; Liu, J
2006-01-01
The acquisition of multiple brain imaging types for a given study is a very common practice. However these data are typically examined in separate analyses, rather than in a combined model. We propose a novel methodology to perform joint independent component analysis across image modalities, including structural MRI data, functional MRI activation data and EEG data, and to visualize the results via a joint histogram visualization technique. Evaluation of which combination of fused data is most useful is determined by using the Kullback-Leibler divergence. We demonstrate our method on a data set composed of functional MRI data from two tasks, structural MRI data, and EEG data collected on patients with schizophrenia and healthy controls. We show that combining data types can improve our ability to distinguish differences between groups.
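One common way to realize joint ICA, sketched below on synthetic data: features from the modalities are concatenated per subject so that a single mixing matrix is shared, and FastICA then extracts joint components whose modality-specific parts can be inspected separately. Dimensions and sources are assumptions, not the study's data.

```python
# Joint ICA by feature concatenation across modalities (synthetic data).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(13)
n_subjects, n_fmri, n_eeg = 30, 500, 200

# two latent joint sources expressed in both modalities
sources = rng.laplace(size=(2, n_fmri + n_eeg))
mixing = rng.normal(size=(n_subjects, 2))            # shared across modalities
data = mixing @ sources + 0.05 * rng.normal(size=(n_subjects, n_fmri + n_eeg))

ica = FastICA(n_components=2, random_state=0)
joint_sources = ica.fit_transform(data.T).T          # (components, features)
fmri_part, eeg_part = joint_sources[:, :n_fmri], joint_sources[:, n_fmri:]
print("fMRI part shape:", fmri_part.shape, "| EEG part shape:", eeg_part.shape)
```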
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel
2016-10-01
A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free-carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
Depression in high voltage power line workers.
de Souza, Suerda Fortaleza; Carvalho, Fernando Martins; de Araújo, Tânia Maria; Koifman, Sergio; Porto, Lauro Antonio
2012-06-01
To investigate the association between effort-reward imbalance and depressive symptoms among workers in high voltage power lines. A cross-sectional study among 158 workers from an electric power company in Northeast Brazil. The main independent variables were the Effort-Reward Imbalance Model (ERI) dimensions and the main dependent variable was the prevalence of depression, as measured by the Center for Epidemiologic Studies Depression (CES-D) scale. Data were analyzed by multiple logistic regression techniques. The group of low reward workers presented a depression prevalence rate 6.2 times greater than those in the high reward group. The depression prevalence rate was 3.3 greater in workers in the situation of imbalanced effort-reward than in those in effort-reward equilibrium. The prevalence of depression was strongly associated with psychosocial factors present in the work of electricity workers.
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires N² complexity, multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase locked loop (PLL) system, consisting of a phase detector, voltage controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
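For reference, delta modulation itself, the 1-bit encoding the scheme builds on, can be sketched in a few lines; the step size and test waveform below are assumptions.

```python
# Delta modulation: a 1-bit staircase approximation of a waveform.
import numpy as np

t = np.linspace(0, 1, 400)
signal = np.sin(2 * np.pi * 3 * t)          # example input waveform
step = 0.05                                 # DM step size (assumption)

estimate = 0.0
bits, staircase = [], []
for sample in signal:
    bit = 1 if sample >= estimate else -1   # one bit per sample
    estimate += bit * step                  # integrator tracks the input
    bits.append(bit)
    staircase.append(estimate)

err = np.sqrt(np.mean((np.array(staircase) - signal) ** 2))
print(f"RMS quantization error of the DM staircase: {err:.3f}")
```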
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time, space and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
NASA Astrophysics Data System (ADS)
Go, Gwangjun; Choi, Hyunchul; Jeong, Semi; Ko, Seong Young; Park, Jong-Oh; Park, Sukho
2016-03-01
Microparticle manipulation using a microrobot in an enclosed environment, such as a lab-on-a-chip, has been actively studied because an electromagnetically actuated microrobot can have accurate motility and wireless controllability. In most studies on electromagnetically actuated microrobots, only a single microrobot has been used to manipulate cells or microparticles. However, the use of a single microrobot can pose several limitations when performing multiple roles in microparticle manipulation. To overcome the limitations associated with using a single microrobot, we propose a new method for the control of multiple microrobots. Multiple microrobots can be controlled independently by an electromagnetic actuation system and multiple microclampers combined with microheaters. To select a specific microrobot among multiple microrobots, we propose a microclamper composed of a clamper structure using thermally responsive hydrogel and a microheater for controlling the microclamper. A fundamental test of the proposed microparticle manipulation system is performed by selecting a specific microrobot among multiple microrobots. Through the independent locomotion of multiple microrobots with U- and V-shaped tips, heterogeneous microparticle manipulation is demonstrated in the creation of a two-dimensional structure. In the future, our proposed multiple-microrobot system can be applied to tasks that are difficult to perform using a single microrobot, such as cell manipulation, cargo delivery, tissue assembly, and cloning.
NASA Astrophysics Data System (ADS)
Fishkin, Joshua B.; So, Peter T. C.; Cerussi, Albert E.; Gratton, Enrico; Fantini, Sergio; Franceschini, Maria Angela
1995-03-01
We have measured the optical absorption and scattering coefficient spectra of a multiple-scattering medium (i.e., a biological tissue-simulating phantom comprising a lipid colloid) containing methemoglobin by using frequency-domain techniques. The methemoglobin absorption spectrum determined in the multiple-scattering medium is in excellent agreement with a corrected methemoglobin absorption spectrum obtained from a steady-state spectrophotometer measurement of the optical density of a minimally scattering medium. The determination of the corrected methemoglobin absorption spectrum takes into account the scattering from impurities in the methemoglobin solution containing no lipid colloid. Frequency-domain techniques allow for the separation of the absorbing from the scattering properties of multiple-scattering media, and these techniques thus provide an absolute
Özkan Tuncay, Fatma; Mollaoğlu, Mukadder
2017-12-01
To determine the effects of a cooling suit on fatigue and activities of daily living of individuals with multiple sclerosis. Fatigue is one of the most common symptoms in people with multiple sclerosis and adversely affects their activities of daily living. Studies evaluating fatigue associated with multiple sclerosis have reported that most fatigue cases are related to an increase in body temperature and that cooling therapy is effective in coping with fatigue. This study used a two-sample, control-group design. The study sample comprised 75 individuals who met the inclusion criteria. Data were collected with study forms. After the study data were collected, the cooling suit treatment was administered to the experimental group. During home visits at the fourth and eighth weeks after the intervention, the study scales were re-administered to the participants in the experimental and control groups. The analyses performed demonstrated that the severity of fatigue experienced by the participants in the experimental group wearing the cooling suit decreased. The experimental group also exhibited a significant improvement in the participants' levels of independence in activities of daily living. The cooling suit worn by individuals with multiple sclerosis was determined to significantly reduce fatigue and improve independence in activities of daily living. The cooling suit therapy was found to be an effective intervention for the debilitating fatigue suffered by many multiple sclerosis patients, thus significantly improving their level of independence in activities of daily living. © 2017 John Wiley & Sons Ltd.
Techniques for High-contrast Imaging in Multi-star Systems. II. Multi-star Wavefront Control
NASA Astrophysics Data System (ADS)
Sirbu, D.; Thomas, S.; Belikov, R.; Bendek, E.
2017-11-01
Direct imaging of exoplanets represents a challenge for astronomical instrumentation due to the high-contrast ratio and small angular separation between the host star and the faint planet. Multi-star systems pose additional challenges for coronagraphic instruments due to the diffraction and aberration leakage caused by companion stars. Consequently, many scientifically valuable multi-star systems are excluded from direct imaging target lists for exoplanet surveys and characterization missions. Multi-star Wavefront Control (MSWC) is a technique that uses a coronagraphic instrument's deformable mirror (DM) to create high-contrast regions in the focal plane in the presence of multiple stars. MSWC uses "non-redundant" modes on the DM to independently control speckles from each star in the dark zone. Our previous paper also introduced the Super-Nyquist wavefront control technique, which uses a diffraction grating to generate high-contrast regions beyond the Nyquist limit (the nominal region correctable by the DM). These two techniques can be combined as MSWC-s to generate high-contrast regions for multi-star systems at wide (Super-Nyquist) angular separations, while MSWC-0 refers to close (Sub-Nyquist) angular separations. As a case study, a high-contrast wavefront control simulation that applies these techniques shows that the habitable region of the Alpha Centauri system can be imaged with a small aperture at 8 × 10⁻⁹ mean raw contrast in 10% broadband light in one-sided dark holes from 1.6 to 5.5 λ/D. Another case study using a larger 2.4 m aperture telescope such as the Wide-Field Infrared Survey Telescope uses these techniques to image the habitable zone of Alpha Centauri at 3.2 × 10⁻⁹ mean raw contrast in monochromatic light.
Digital processing of array seismic recordings
Ryall, Alan; Birtill, John
1962-01-01
This technical letter contains a brief review of the operations which are involved in digital processing of array seismic recordings by the methods of velocity filtering, summation, cross-multiplication and integration, and by combinations of these operations (the "UK Method" and multiple correlation). Examples are presented of analyses by the several techniques on array recordings which were obtained by the U.S. Geological Survey during chemical and nuclear explosions in the western United States. Seismograms are synthesized using actual noise and Pn-signal recordings, such that the signal-to-noise ratio, onset time and velocity of the signal are predetermined for the synthetic record. These records are then analyzed by summation, cross-multiplication, multiple correlation and the UK technique, and the results are compared. For all of the examples presented, analyses by the non-linear techniques of multiple correlation and cross-multiplication of the traces on an array recording are preferred to analyses by the linear operations involved in summation and the UK Method.
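As a rough illustration of the linear versus non-linear operations compared above, this Python sketch (integer sample delays and names are illustrative, not from the letter) contrasts a summation beam with cross-multiplication of aligned traces:

```python
import numpy as np

def delay_and_sum(traces, delays):
    """Linear 'summation' beam: align each sensor trace by its integer
    sample delay and average, reinforcing the coherent signal."""
    n = min(len(tr) - d for tr, d in zip(traces, delays))
    return np.mean([np.asarray(tr)[d:d + n] for tr, d in zip(traces, delays)], axis=0)

def cross_multiply(traces, delays):
    """Non-linear 'cross-multiplication' beam: multiply the aligned traces,
    which suppresses noise that is incoherent across sensors more sharply
    than averaging does."""
    n = min(len(tr) - d for tr, d in zip(traces, delays))
    out = np.ones(n)
    for tr, d in zip(traces, delays):
        out = out * np.asarray(tr)[d:d + n]
    return out
```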
ERIC Educational Resources Information Center
Kallenbach, Silja, Ed.; Viens, Julie, Ed.
This document contains nine papers from a systematic, classroom-based study of multiple intelligences (MI) theory in different adult learning contexts during which adult educators from rural and urban areas throughout the United States conducted independent inquiries into the question of how MI theory can support instruction and assessment in…
Faster Double-Size Bipartite Multiplication out of Montgomery Multipliers
NASA Astrophysics Data System (ADS)
Yoshino, Masayuki; Okeya, Katsuyuki; Vuillaume, Camille
This paper proposes novel algorithms for computing double-size modular multiplications with few modulus-dependent precomputations. Low-end devices such as smartcards are usually equipped with hardware Montgomery multipliers. However, due to progress in mathematical attacks, security institutions such as NIST have steadily demanded longer bit-lengths for public-key cryptography, making the multipliers quickly obsolete. In an attempt to extend the lifespan of such multipliers, double-size techniques compute modular multiplications with twice the bit-length of the multipliers. Techniques are known for extending the bit-length of classical Euclidean multipliers, of Montgomery multipliers and the combination thereof, namely bipartite multipliers. However, unlike classical and bipartite multiplications, Montgomery multiplications involve modulus-dependent precomputations, which amount to a large part of an RSA encryption or signature verification. The proposed double-size technique simulates double-size multiplications based on single-size Montgomery multipliers, and yet precomputations are essentially free: in a 2048-bit RSA encryption or signature verification with public exponent e = 2^16 + 1, the proposal with a 1024-bit Montgomery multiplier is at least 1.5 times faster than previous double-size Montgomery multiplications.
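For context, here is a hedged Python sketch of single-size Montgomery multiplication (REDC), the primitive such hardware multipliers implement, including the modulus-dependent precomputation that the proposal makes essentially free; the word size and names are illustrative, and this is not the paper's double-size construction:

```python
def montgomery_setup(n, k=1024):
    """Modulus-dependent precomputation: R = 2^k and n' = -n^(-1) mod R.
    Requires n odd so gcd(n, R) = 1; this is the costly step discussed
    above. Needs Python 3.8+ for pow(n, -1, r)."""
    r = 1 << k
    n_prime = (-pow(n, -1, r)) % r
    return r, n_prime

def montgomery_mul(a_bar, b_bar, n, r, n_prime):
    """REDC: given a_bar = a*R mod n and b_bar = b*R mod n, return
    a*b*R mod n using only multiplications, shifts and one subtraction
    (no division by n)."""
    t = a_bar * b_bar
    m = ((t % r) * n_prime) % r     # chosen so t + m*n is divisible by R
    u = (t + m * n) // r            # exact division by R
    return u - n if u >= n else u

# Usage: a_bar = (a * r) % n maps into the Montgomery domain; multiplying
# by the constant 1, i.e. montgomery_mul(x_bar, 1, ...), maps back out.
```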
NASA Astrophysics Data System (ADS)
Wagner, Jenny; Liesenborgs, Jori; Tessore, Nicolas
2018-04-01
Context: Local gravitational lensing properties, such as convergence and shear, determined at the positions of multiply imaged background objects, yield valuable information on the smaller-scale lensing matter distribution in the central part of galaxy clusters. Highly distorted multiple images with resolved brightness features like the ones observed in CL0024 allow us to study these local lensing properties and to tighten the constraints on the properties of dark matter on sub-cluster scales. Aims: We investigate to what precision local magnification ratios, J, ratios of convergences, f, and reduced shears, g = (g1, g2), can be determined independently of a lens model for the five resolved multiple images of the source at z_s = 1.675 in CL0024. We also determine if a comparison to the respective results obtained by the parametric modelling tool Lenstool and by the non-parametric modelling tool Grale can detect biases in the models. For these lens models, we analyse the influence of the number and location of the constraints from multiple images on the lens properties at the positions of the five multiple images of the source at z_s = 1.675. Methods: Our model-independent approach uses a linear mapping between the five resolved multiple images to determine the magnification ratios, ratios of convergences, and reduced shears at their positions. With constraints from up to six multiple image systems, we generate Lenstool and Grale models using the same image positions, cosmological parameters, and number of generated convergence and shear maps to determine the local values of J, f, and g at the same positions across all methods. Results: All approaches show strong agreement on the local values of J, f, and g. We find that Lenstool obtains the tightest confidence bounds even for convergences around one using constraints from six multiple-image systems, while the best Grale model is generated only using constraints from all multiple images with resolved brightness features and adding limited small-scale mass corrections. Yet, confidence bounds as large as the values themselves can occur for convergences close to one in all approaches. Conclusions: Our results agree with previous findings, support the light-traces-mass assumption and the merger hypothesis for CL0024. Comparing the different approaches can detect model biases. The model-independent approach determines the local lens properties to a comparable precision in less than one second.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tate, John G.; Richardson, Bradley S.; Love, Lonnie J.
ORNL worked with the Schaeffler Group USA to explore additive manufacturing techniques that might be appropriate for prototyping of bearing cages. Multiple additive manufacturing techniques were investigated, including e-beam, binder jet and multiple laser-based processes. The binder jet process worked best for printing the thin, detailed cages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
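The following toy Python example (invented sensitivities and magnitudes, not the SCALE/Sampler implementation) shows how Monte Carlo sampling of a shared uncertain parameter induces a correlation coefficient between two experiments' computed k-eff values:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1000

shared = rng.normal(0.0, 1.0, n_samples)    # common-cause uncertainty, e.g. shared enrichment
indep_a = rng.normal(0.0, 1.0, n_samples)   # experiment-specific uncertainty, experiment A
indep_b = rng.normal(0.0, 1.0, n_samples)   # experiment-specific uncertainty, experiment B

# Toy linear response of each experiment's k-eff to the sampled parameters
keff_a = 1.0 + 0.002 * shared + 0.001 * indep_a
keff_b = 1.0 + 0.002 * shared + 0.001 * indep_b

# Nonzero purely because of the shared term (~0.8 with these toy values)
corr = np.corrcoef(keff_a, keff_b)[0, 1]
print(f"sampled correlation coefficient: {corr:.2f}")
```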
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
2011-05-01
As the nuclear weapon stockpile ages, there is increased concern about common degradation ultimately leading to common cause failure of multiple weapons that could significantly impact reliability or safety. Current acceptable limits for the reliability and safety of a weapon are based on upper limits on the probability of failure of an individual item, assuming that failures among items are independent. We expanded the current acceptable limits to apply to situations with common cause failure. Then, we developed a simple screening process to quickly assess the importance of observed common degradation for both reliability and safety to determine if further action is necessary. The screening process conservatively assumes that common degradation is common cause failure. For a population with between 100 and 5000 items we applied the screening process and conclude the following. In general, for a reliability requirement specified in the Military Characteristics (MCs) for a specific weapon system, common degradation is of concern if more than 100(1-x)% of the weapons are susceptible to common degradation, where x is the required reliability expressed as a fraction. Common degradation is of concern for the safety of a weapon subsystem if more than 0.1% of the population is susceptible to common degradation. Common degradation is of concern for the safety of a weapon component or overall weapon system if two or more components/weapons in the population are susceptible to degradation. Finally, we developed a technique for detailed evaluation of common degradation leading to common cause failure for situations that are determined to be of concern using the screening process. The detailed evaluation requires that best estimates of common cause and independent failure probabilities be produced. Using these techniques, observed common degradation can be evaluated for effects on reliability and safety.
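A minimal sketch of the screening thresholds stated above, for a population between 100 and 5000 items; the function and key names are illustrative, not from the report:

```python
def screen_common_degradation(population, susceptible, required_reliability):
    """Apply the three screening rules quoted above to a population of
    100-5000 items; required_reliability is x expressed as a fraction."""
    frac = susceptible / population
    return {
        # reliability concern: more than 100(1-x)% of items susceptible
        "reliability_concern": frac > (1.0 - required_reliability),
        # subsystem safety concern: more than 0.1% of population susceptible
        "subsystem_safety_concern": frac > 0.001,
        # component/weapon-system safety concern: two or more items susceptible
        "component_or_system_safety_concern": susceptible >= 2,
    }

# Example: 5 susceptible items out of 1000, with a required reliability of 0.995
print(screen_common_degradation(1000, 5, 0.995))
```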
A multiple technique approach to the analysis of urinary calculi.
Rodgers, A L; Nassimbeni, L R; Mulder, K J
1982-01-01
Ten urinary calculi were qualitatively and quantitatively analysed using X-ray diffraction, infra-red, scanning electron microscopy, X-ray fluorescence, atomic absorption and density gradient procedures. Constituents and compositional features which often go undetected due to limitations in the particular analytical procedure being used have been identified, and a detailed picture of each stone's composition and structure has been obtained. In all cases at least two components were detected, suggesting that the multiple technique approach might cast some doubt on the existence of "pure" stones. Evidence for a continuous, non-sequential deposition mechanism has been detected. In addition, the usefulness of each technique in the analysis of urinary stones has been assessed and the multiple technique approach has been evaluated as a whole.
Fission prompt gamma-ray multiplicity distribution measurements and simulations at DANCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chyzh, A; Wu, C Y; Ullmann, J
2010-08-24
The nearly energy-independent efficiency and multiplicity response of DANCE to γ rays makes it possible to measure the prompt γ-ray multiplicity distribution in fission. We demonstrate this unique capability of DANCE through the comparison of γ-ray energy and multiplicity distributions between measurement and numerical simulation for three radioactive sources: ²²Na, ⁶⁰Co, and ⁸⁸Y. The prospect for measuring the γ-ray multiplicity distribution for both spontaneous and neutron-induced fission is discussed.
Multiple emulsions: an overview.
Khan, Azhar Yaqoob; Talegaonkar, Sushama; Iqbal, Zeenat; Ahmed, Farhan Jalees; Khar, Roop Krishan
2006-10-01
Multiple emulsions are complex polydispersed systems in which both oil-in-water and water-in-oil emulsions exist simultaneously, stabilized by lipophilic and hydrophilic surfactants, respectively. The ratio of these surfactants is important in achieving stable multiple emulsions. Among water-in-oil-in-water (w/o/w) and oil-in-water-in-oil (o/w/o) type multiple emulsions, the former has wider areas of application and hence is studied in greater detail. Formulation, preparation techniques and in vitro characterization methods for multiple emulsions are reviewed. Various factors affecting the stability of multiple emulsions and the stabilization approaches, with specific reference to w/o/w type multiple emulsions, are discussed in detail. Favorable drug release mechanisms and/or rates, along with the in vivo fate of multiple emulsions, make them a versatile carrier. They find a wide range of applications in controlled or sustained drug delivery, targeted delivery, taste masking, bioavailability enhancement, enzyme immobilization, etc. Multiple emulsions have also been employed as an intermediate step in the microencapsulation process and are systems of increasing interest for the oral delivery of hydrophilic drugs, which are unstable in the gastrointestinal tract, like proteins and peptides. With advancements in techniques for preparation, stabilization and rheological characterization, multiple emulsions will be able to provide a novel carrier system for drugs, cosmetics and pharmaceutical agents. In this review, emphasis is laid on formulation, stabilization techniques and potential applications of multiple emulsion systems.
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
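The following scalar extended Kalman filter is a hedged illustration of recursive range estimation, not the paper's hybrid motion/stereo algorithm; it assumes a toy pinhole model in which the camera translates forward by known distances d and a feature's normalized image coordinate obeys u = u0 / (1 - d*rho), where rho is the inverse of the initial range:

```python
def ekf_range(u_meas, travel, u0, rho=0.01, P=1.0, R=1e-6):
    """Recursively estimate a static feature's initial range Z0 = 1/rho
    from image measurements u_meas taken after known forward travel
    distances (toy model, valid while each d is well below Z0)."""
    for u, d in zip(u_meas, travel):
        h = u0 / (1.0 - d * rho)             # predicted image coordinate
        H = u0 * d / (1.0 - d * rho) ** 2    # measurement Jacobian dh/drho
        S = H * P * H + R                    # innovation variance
        K = P * H / S                        # Kalman gain
        rho += K * (u - h)                   # state update
        P *= 1.0 - K * H                     # covariance update
    return 1.0 / rho, P                      # range estimate and variance
```

Each new image both refines the range estimate and lets the filter predict where the feature should appear next, mirroring the track-predict loop described above.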
NASA Technical Reports Server (NTRS)
Devismes, D.; Cohen, B. A.
2016-01-01
Geochronology is a fundamental measurement for planetary samples, providing the ability to establish an absolute chronology for geological events, including crystallization history, magmatic evolution, and alteration events, and providing global and solar system context for such events. The capability for in situ geochronology will open up the ability for geochronology to be accomplished as part of a lander or rover complement, on multiple samples rather than just those returned. An in situ geochronology package can also complement sample return missions by identifying the most interesting rocks to cache or return to Earth. The K-Ar radiometric dating approach to in situ dating has been validated by the Curiosity rover on Mars as well as several laboratories on Earth. Several independent projects developing in situ rock dating for planetary samples, based on the K-Ar method, are giving promising results. Among them, the Potassium (K)-Argon Laser Experiment (KArLE) at MSFC is based on techniques already in use in planetary exploration, specifically laser-induced breakdown spectroscopy (LIBS, used on the Curiosity ChemCam), mass spectrometry (used on multiple planetary missions, including Curiosity, ExoMars, and Rosetta), and optical imaging (used on most missions).
Adetoro, O O
1988-06-01
Multiple exposure photography (MEP), an objective technique, was used to determine the percentage of motile spermatozoa in semen samples from 41 males being investigated for infertility. This technique was compared with the conventional subjective ordinary-microscopy method of spermatozoal motility assessment. A satisfactory correlation was observed in percentage sperm motility assessment using the two methods, but the MEP estimation was more consistent and reliable. The value of this technique of sperm motility study in the developing world is discussed.
Local Fields in Human Subthalamic Nucleus Track the Lead-up to Impulsive Choices.
Pearson, John M; Hickey, Patrick T; Lad, Shivanand P; Platt, Michael L; Turner, Dennis A
2017-01-01
The ability to adaptively minimize not only motor but cognitive symptoms of neurological diseases, such as Parkinson's disease (PD) and obsessive-compulsive disorder (OCD), is a primary goal of next-generation deep brain stimulation (DBS) devices. On the basis of studies demonstrating a link between beta-band synchronization and severity of motor symptoms in PD, the minimization of beta band activity has been proposed as a potential training target for closed-loop DBS. At present, no comparable signal is known for the impulsive side effects of PD, though multiple studies have implicated theta band activity within the subthalamic nucleus (STN), the site of DBS treatment, in processes of conflict monitoring and countermanding. Here, we address this challenge by recording from multiple independent channels within the STN in a self-paced decision task to test whether these signals carry information sufficient to predict stopping behavior on a trial-by-trial basis. As in previous studies, we found that local field potentials (LFPs) exhibited modulations preceding self-initiated movements, with power ramping across multiple frequencies during the deliberation period. In addition, signals showed phasic changes in power around the time of decision. However, a prospective model that attempted to use these signals to predict decision times showed that effects of risk level did not improve with the addition of LFPs as regressors. These findings suggest that information tracking the lead-up to impulsive choices is distributed across multiple frequency scales in STN, though current techniques may not possess sufficient signal-to-noise ratios to predict, and thus curb, impulsive behavior on a moment-to-moment basis.
Feature diagnosticity and task context shape activity in human scene-selective cortex.
Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S
2016-01-15
Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
NASA Technical Reports Server (NTRS)
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method as well as the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls-Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low-pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
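As a hedged illustration of the response-surface step (a generic least-squares quadratic fit in two factors, not the dissertation's modified composite experiment), consider the following Python sketch:

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Least-squares fit of a full quadratic response surface in two factors:
    y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict_rs(c, x1, x2):
    """Evaluate the cheap surrogate in place of the expensive subsystem model."""
    return c[0] + c[1]*x1 + c[2]*x2 + c[3]*x1*x2 + c[4]*x1**2 + c[5]*x2**2
```

Once fitted from a designed set of subsystem runs, such surrogates can be queried thousands of times during system-level synthesis at negligible cost.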
Federated queries of clinical data repositories: the sum of the parts does not equal the whole
Weber, Griffin M
2013-01-01
Background and objective: In 2008 we developed a shared health research information network (SHRINE), which for the first time enabled research queries across the full patient populations of four Boston hospitals. It uses a federated architecture, where each hospital returns only the aggregate count of the number of patients who match a query. This allows hospitals to retain control over their local databases and comply with federal and state privacy laws. However, because patients may receive care from multiple hospitals, the result of a federated query might differ from what the result would be if the query were run against a single central repository. This paper describes the situations when this happens and presents a technique for correcting these errors. Methods: We use a one-time process of identifying which patients have data in multiple repositories by comparing one-way hash values of patient demographics. This enables us to partition the local databases such that all patients within a given partition have data at the same subset of hospitals. Federated queries are then run separately on each partition independently, and the combined results are presented to the user. Results: Using theoretical bounds and simulated hospital networks, we demonstrate that once the partitions are made, SHRINE can produce more precise estimates of the number of patients matching a query. Conclusions: Uncertainty in the overlap of patient populations across hospitals limits the effectiveness of SHRINE and other federated query tools. Our technique reduces this uncertainty while retaining an aggregate federated architecture. PMID:23349080
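A minimal sketch of the one-time overlap-detection step described above, assuming SHA-256 as the one-way hash and illustrative demographic fields (the paper does not prescribe these specifics):

```python
import hashlib
from collections import defaultdict

def demographic_hash(first, last, dob):
    """One-way hash of patient demographics; field choice is illustrative."""
    key = f"{first.lower()}|{last.lower()}|{dob}".encode("utf-8")
    return hashlib.sha256(key).hexdigest()

def partition_by_hospital_subset(hospital_to_hashes):
    """Group patient hashes by the exact subset of hospitals holding their
    data, so federated queries can be run on each partition independently."""
    seen_at = defaultdict(set)
    for hospital, hashes in hospital_to_hashes.items():
        for h in hashes:
            seen_at[h].add(hospital)
    partitions = defaultdict(set)
    for h, hospitals in seen_at.items():
        partitions[frozenset(hospitals)].add(h)
    return partitions
```

Because every patient in a partition is known at the same subset of hospitals, counts from the partitions can be combined without double-counting patients seen at multiple sites.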
Adaptive beamforming in a CDMA mobile satellite communications system
NASA Technical Reports Server (NTRS)
Munoz-Garcia, Samuel G.
1993-01-01
Code-Division Multiple-Access (CDMA) stands out as a strong contender for the choice of multiple access scheme in these future mobile communication systems. This is due to a variety of reasons such as the excellent performance in multipath environments, high scope for frequency reuse and graceful degradation near saturation. However, the capacity of CDMA is limited by the self-interference between the transmissions of the different users in the network. Moreover, the disparity between the received power levels gives rise to the near-far problem; that is, weak signals are severely degraded by the transmissions from other users. In this paper, the use of time-reference adaptive digital beamforming on board the satellite is proposed as a means to overcome the problems associated with CDMA. This technique enables a high number of independently steered beams to be generated from a single phased array antenna, which automatically track the desired user signal and null the unwanted interference sources. Since CDMA is interference limited, the interference protection provided by the antenna converts directly and linearly into an increase in capacity. Furthermore, the proposed concept allows the near-far effect to be mitigated without requiring a tight coordination of the users in terms of power control. A payload architecture will be presented that illustrates the practical implementation of this concept. This digital payload architecture shows that with the advent of high performance CMOS digital processing, the on-board implementation of complex DSP techniques, in particular digital beamforming, has become possible, being most attractive for Mobile Satellite Communications.
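As a standard stand-in for the adaptive beamforming described (the paper's time-reference algorithm is not reproduced here), a minimal MVDR (minimum-variance distortionless-response) sketch shows how array weights can hold unit gain toward a desired user while suppressing interference:

```python
import numpy as np

def steering_vector(n_elems, theta, d_over_lambda=0.5):
    """Uniform-linear-array steering vector for arrival angle theta (rad)."""
    k = np.arange(n_elems)
    return np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))

def mvdr_weights(R, a):
    """MVDR weights w = R^{-1} a / (a^H R^{-1} a): unit gain toward the
    steering vector a, minimum output power elsewhere, so strong co-channel
    interferers are automatically nulled."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)
```

Here R is the sample covariance of the array snapshots; since CDMA capacity is interference limited, the interference rejection such weights provide translates directly into added capacity, as the abstract notes.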
STUDIES ON THE PROPAGATION IN VITRO OF POLIOMYELITIS VIRUSES
Scherer, William F.; Syverton, Jerome T.
1952-01-01
The growth of poliomyelitis virus, Type 2, Yale-SK strain, in cultures of monkey testicular tissue was observed to occur in discrete cycles. Growth curves showed that each cycle was composed of (a) an initial lag phase when little or no virus was released from the cells, (b) a phase of viral production, and (c) a plateau which represented a decrement in the rate of viral production. This pattern of viral multiplication occurred in monkey testicular tissue cultures which have as the liquid phase either ox serum ultrafiltrate or monkey serum-chicken embryonic extract medium. The presence of a solid medium composed of chicken plasma, clotted either with chicken embryonic extract or bovine thrombin, did not alter the pattern of viral multiplication. The shape of the growth curve as established by any of four different techniques for tissue cultivation, was shown to be independent of the cultural technique employed. For cultures of monkey testicular tissue, the amount of virus in the tissue was as much as tenfold greater than that in the liquid of the same cultures. Moreover, viral production was evident earlier and was detectable for a longer period of time in the tissue than in the liquid phase. The rapidly incremental phase of the growth cycle, when large quantities of virus were released into the liquid phase, coincided in time with the destruction of the spindle-shaped cells, which extended from the explants. Although destruction of outgrowth cells was marked, there remained cells within the explants capable of supporting the growth of poliomyelitis virus. PMID:12981221
Collagenase Treatment in Dupuytren Contractures: A Review of the Current State Versus Future Needs.
Degreef, Ilse
2016-06-01
Dupuytren disease is highly prevalent and the finger contractures can be very extensile, compromising the patients' hand function. To restore full function, contractures have been addressed by cutting the causative strands for nearly 200 years, ever since Baron Guillaume Dupuytren demonstrated his technique at the beginning of the nineteenth century. Surgery can be minimal (fasciotomy) or quite invasive (fasciectomy and even skin replacement). However, in the last decade translational research has introduced the non-surgical technique of enzymatic fasciotomy with collagenase injections. Now, finger contractures can be released with single injections at monthly intervals, to address one joint contracture at a time. However, in hands affected with Dupuytren contractures to the extent that the patient calls for treatment, most often more than one joint is involved. Surgical treatment options address all contracted joints in a single procedure. Nevertheless, extensile surgery carries inherent risks of complications and intense rehabilitation. Today, the minimally invasive method of enzymatic fasciotomy by collagenase injection has demonstrated reliable outcomes with few morbidities and early recovery. However, single-site injection is today's standard procedure, and multiple joints are addressed in several sessions at monthly intervals. This entails a longer recovery and treatment burden in severely affected hands even though surgery is avoided. Therefore, further treatment modalities of collagenase use are explored. Adjustments in the treatment regimens' flexibility and collagenase injections addressing more than one joint contracture simultaneously would reduce the burden of multiple sessions, and enzymatic fasciotomy may therefore become the preferred method in more extensile Dupuytren contractures. In this independent review, the challenge of Dupuytren disease affecting a single versus multiple joints is presented. The pros and cons of collagenase use are weighed, founded on the available scientific background. The demands and options for collagenase in future treatment regimens for extensile Dupuytren contractures are discussed.
Seshul, Merritt; Pillsbury, Harold; Eby, Thomas
2006-09-01
The objective was to determine the agreement of the positive results from a multiple skin prick test (SPT) device with the ability to determine a definable endpoint through intradermal dilutional testing (IDT), to compare semiquantitatively the degree of positivity of SPT results with quantitative results from IDT, and to analyze the cost of immunotherapy based on SPT compared with IDT guided by SPT. Retrospective review of clinical data (random accrual). One hundred thirty-four patients underwent allergy screening using a multiple SPT device. Antigens testing positive by the skin prick device were tested using IDT on a separate day. Antigens testing negative by SPT were not evaluated by IDT. Regional allergy testing practice patterns were determined, and a cost analysis using Medicare rates was performed. There was good agreement between an antigen testing positive by SPT and the determination of a definable endpoint (93.33%, n = 1,334 antigens). The degree of positivity from the SPT correlated poorly with the final endpoint concentration (r = 0.40, P < .0001). Blended testing techniques were similar in cost when compared with several commonly used allergy testing protocols. Antigens which show reactivity to a multiple SPT device usually have a treatable endpoint that is independent of the degree of positivity of the SPT result. IDT is an important step in the determination of the strongest starting dose of immunotherapy that may be safely administered. Initiating immunotherapy in this manner may potentially create significant health care savings by shortening the time required for a patient to reach their individual maximally tolerated dose. The use of a relatively large screening panel is cost effective and does not increase the average number of antigens treated by immunotherapy. Blended allergy testing techniques that include IDT in their protocol are comparable in cost with commonly used allergy testing protocols.
NASA Astrophysics Data System (ADS)
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2008-02-01
Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that detected photons are detected at multiple detector locations and determined with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result of this is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of point spread function (PSF), producing a correlation coefficient (r²) of 0.99. The speed of MP-CFD is shown to be about 60 times faster than a regular forced detection MC program with similar results.
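A toy sketch of the MP-CFD bookkeeping, with invented 2D geometry and a uniform attenuation coefficient; the real method applies distance-dependent blurring kernels and full transport physics:

```python
import numpy as np

def mp_cfd_scatter_contribution(site, weight, det_angles, mu=0.1, radius=10.0):
    """At one scatter site, accumulate a forced contribution toward every
    projection angle at once (toy 2D geometry, uniform attenuation mu)."""
    contribs = {}
    for a in det_angles:
        det = radius * np.array([np.cos(a), np.sin(a)])   # detector position
        path = np.linalg.norm(det - site)                 # site-to-detector path
        contribs[a] = weight * np.exp(-mu * path)         # attenuated weight
    return contribs

# A single simulated scatter event feeds all projections in parallel,
# which is why transport through the object is only computed once:
angles = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
c = mp_cfd_scatter_contribution(np.array([1.0, -0.5]), 1.0, angles)
```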
Miyagi, S; Kawagishi, N; Kashiwadate, T; Fujio, A; Tokodai, K; Hara, Y; Nakanishi, C; Kamei, T; Ohuchi, N; Satomi, S
2016-05-01
In living donor liver transplantation (LDLT), the recipient bile duct is thin and short. Bile duct complications often occur in LDLT, with persistent long-term adverse effects. Recently, we began to perform microsurgical reconstruction of the bile duct. The purpose of this study was to investigate the relationship between bile duct reconstruction methods and complications in LDLT. From 1991 to 2014, we performed 161 LDLTs (pediatric:adult = 90:71; left lobe:right lobe = 95:66). In this study, we retrospectively investigated the initial bile duct complications in LDLT and performed univariate and multivariate analyses to identify the independent risk factors for complications. The most frequent complication was biliary stricture (9.9%), followed by biliary leakage (6.8%). On univariate and multiple logistic regression analysis, the independent risk factors for biliary stricture were bile leakage (P = .0103) and recurrent cholangitis (P = .0077). However, there were no risk factors for biliary leakage on univariate analysis in our study. The reconstruction methods (hepaticojejunostomy or duct-to-duct anastomosis) and reconstruction technique (with or without microsurgery) were not risk factors for biliary stricture and leakage. In this study, the most frequent complication of LDLT was biliary stricture. The independent risk factors for biliary stricture were biliary leakage and recurrent cholangitis. Duct-to-duct anastomosis and microsurgical reconstruction of the bile duct were not risk factors for biliary stricture and leakage. Copyright © 2016 Elsevier Inc. All rights reserved.
Lapierre, Mark; Howe, Piers D. L.; Cropper, Simon J.
2013-01-01
Many tasks involve tracking multiple moving objects, or stimuli. Some require that individuals adapt to changing or unfamiliar conditions to be able to track well. This study explores processes involved in such adaptation through an investigation of the interaction of attention and memory during tracking. Previous research has shown that during tracking, attention operates independently to some degree in the left and right visual hemifields, due to putative anatomical constraints. It has been suggested that the degree of independence is related to the relative dominance of processes of attention versus processes of memory. Here we show that when individuals are trained to track a unique pattern of movement in one hemifield, that learning can be transferred to the opposite hemifield, without any evidence of hemifield independence. However, learning is not influenced by an explicit strategy of memorisation of brief periods of recognisable movement. The findings lend support to a role for implicit memory in overcoming putative anatomical constraints on the dynamic, distributed spatial allocation of attention involved in tracking multiple objects. PMID:24349555
Independent Assessment of ITRF Site Velocities using GPS Imaging
NASA Astrophysics Data System (ADS)
Blewitt, G.; Hammond, W. C.; Kreemer, C.; Altamimi, Z.
2015-12-01
The long-term stability of ITRF is critical to the most challenging scientific applications such as the slow variation of sea level, and of ice sheet loading in Greenland and Antarctica. In 2010, the National Research Council recommended aiming for stability at the level of 1 mm/decade in the ITRF origin and scale. This requires that the ITRF include many globally-distributed sites with motions that are predictable to within a few mm/decade, with a significant number of sites having collocated stations of multiple techniques. Quantifying the stability of ITRF stations can be useful to understand the stability of ITRF parameters, and to help the selection and weighting of ITRF stations. Here we apply a new suite of techniques for an independent assessment of ITRF site velocities. Our "GPS Imaging" suite is founded on the principle that, for the case of large numbers of data, the trend can be estimated objectively, automatically, robustly, and accurately by applying non-parametric techniques, which use quantile statistics (e.g., the median). At the foundation of GPS Imaging is the estimator "MIDAS" (Median Interannual Difference Adjusted for Skewness). MIDAS estimates the velocity with a realistic error bar based on sub-sampling the coordinate time series. MIDAS is robust to step discontinuities, outliers, seasonality, and heteroscedasticity. Common-mode noise filters enhance regional- to continental-scale precision in MIDAS estimates, just as they do for standard estimation techniques. Secondly, in regions where there is sufficient spatial sampling, GPS Imaging uses MIDAS velocity estimates to generate a regionally-representative velocity map. For this we apply a median spatial filter to despeckle the maps. We use GPS Imaging to address two questions: (1) How well do the ITRF site velocities derived by parametric estimation agree with non-parametric techniques? (2) Are ITRF site velocities regionally representative? These questions aim to quantify (1) the accuracy of ITRF site velocities as a function of characteristics of contributing station data, such as the number of step parameters and total time span; and (2) evidence of local processes affecting site velocity, which may impact site stability. Such quantification can be used to rank stations in terms of the risk that they may pose to the stability of ITRF.
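As a hedged sketch of the MIDAS idea (a median of velocity estimates from coordinate pairs separated by about one year), the following Python fragment illustrates the core computation; the real estimator additionally adjusts the median for skewness and handles gaps more carefully, and all names here are illustrative:

```python
import numpy as np

def midas_like_velocity(t, x, tol=0.05):
    """Median of slopes over data pairs ~1 year apart (t in years, sorted
    ascending; x in mm). Robust to steps, outliers and seasonality because
    each pair spans a full year and the median discards aberrant slopes."""
    slopes = []
    for i in range(len(t)):
        j = np.searchsorted(t, t[i] + 1.0)      # partner ~one year later
        if j < len(t) and abs(t[j] - t[i] - 1.0) < tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    return np.median(slopes) if slopes else float("nan")
```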
Simple lock-in detection technique utilizing multiple harmonics for digital PGC demodulators.
Duan, Fajie; Huang, Tingting; Jiang, Jiajia; Fu, Xiao; Ma, Ling
2017-06-01
A simple lock-in detection technique especially suited for digital phase-generated carrier (PGC) demodulators is proposed in this paper. It mixes the interference signal with rectangular waves whose Fourier expansions contain multiple odd or multiple even harmonics of the carrier to recover the quadrature components needed for interference phase demodulation. In this way, the use of a multiplier is avoided and the efficiency of the algorithm is improved. Noise performance with regard to light intensity variation and circuit noise is analyzed theoretically for both the proposed technique and the traditional lock-in technique, and results show that the former provides a better signal-to-noise ratio than the latter with proper modulation depth and average interference phase. Detailed simulations were conducted and the theoretical analysis was verified. A fiber-optic Michelson interferometer was constructed and the feasibility of the proposed technique is demonstrated.
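A minimal sketch of the square-wave mixing idea, in which multiplication by ±1 sequences replaces true multipliers; the sampling parameters and names are illustrative, and the harmonic bookkeeping of the actual demodulator is more involved:

```python
import numpy as np

def square_wave_quadratures(sig, fs, fc):
    """Recover two quadrature components by mixing with +/-1 square waves;
    multiplying by a sign sequence needs no hardware multiplier. The wave
    at fc supplies odd carrier harmonics; the wave at 2*fc supplies even
    carrier harmonics through its own odd-harmonic Fourier series."""
    t = np.arange(len(sig)) / fs
    sq_fc = np.sign(np.cos(2 * np.pi * fc * t))    # odd harmonics of fc
    sq_2fc = np.sign(np.cos(4 * np.pi * fc * t))   # even harmonics of fc
    p = np.mean(sig * sq_fc)                       # sin(phase) channel
    q = np.mean(sig * sq_2fc)                      # cos(phase) channel
    return p, q
```

Because the mixing sequences take only the values ±1, the multiply stage reduces to conditional sign flips, which is what makes the digital algorithm cheap.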
SpaRibs Geometry Parameterization for Wings with Multiple Sections using Single Design
NASA Technical Reports Server (NTRS)
De, Shuvodeep; Jrad, Mohamed; Locatelli, Davide; Kapania, Rakesh K.; Baker, Myles; Pak, Chan-Gi
2017-01-01
The SpaRibs topology of an aircraft wing has a significant effect on its structural behavior and stability as well as its flutter performance. The development of additive manufacturing techniques like Electron Beam Free Form Fabrication (EBF3) has made it feasible to manufacture aircraft wings with curvilinear spars, ribs (SpaRibs) and stiffeners. In this article, a new global-local optimization framework for wings with multiple sections using curvilinear SpaRibs is described. A single design space is used to parameterize the SpaRibs geometry. This method has been implemented using MSC-PATRAN to create a broad range of SpaRibs topologies using a limited number of parameters. It ensures C0 and C1 continuity in the SpaRibs geometry at the junction of two wing sections with airfoil thickness gradient discontinuity, as well as mesh continuity between all structural components. This method is advantageous in complex multi-disciplinary optimization due to its potential to reduce the number of design variables. For the global-local optimization, the local panels are generated by an algorithm based entirely on set algebra on the connectivity matrix data. The great advantage of this method is that it is completely independent of the coordinates of the nodes of the finite element model. It is also independent of the order in which the elements are distributed in the FEM. The code is verified by optimizing the CRM baseline model at trim condition at Mach number 0.85 for five different angles of attack (-2 deg, 0 deg, 2 deg, 4 deg, and 6 deg). The final weight of the wing is 19,090.61 lb. This value is comparable to that obtained by Qiang et al. [6] (19,269 lb).
Keller, Andrew; Bader, Samuel L.; Shteynberg, David; Hood, Leroy; Moritz, Robert L.
2015-01-01
Proteomics by mass spectrometry technology is widely used for identifying and quantifying peptides and proteins. The breadth and sensitivity of peptide detection have been advanced by the advent of data-independent acquisition mass spectrometry. Analysis of such data, however, is challenging due to the complexity of fragment ion spectra that have contributions from multiple co-eluting precursor ions. We present SWATHProphet software that identifies and quantifies peptide fragment ion traces in data-independent acquisition data, provides accurate probabilities to ensure results are correct, and automatically detects and removes contributions to quantitation originating from interfering precursor ions. Integration in the widely used open source Trans-Proteomic Pipeline facilitates subsequent analyses such as combining results of multiple data sets together for improved discrimination using iProphet and inferring sample proteins using ProteinProphet. This novel development should greatly help make data-independent acquisition mass spectrometry accessible to large numbers of users. PMID:25713123
Overcoming multicollinearity in multiple regression using correlation coefficient
NASA Astrophysics Data System (ADS)
Zainodin, H. J.; Yap, S. J.
2013-09-01
Multicollinearity happens when there are high correlations among independent variables. In this case, it is difficult to distinguish between the contributions of these independent variables to the dependent variable, as they may compete to explain much of the same variance. Besides, multicollinearity violates an assumption of multiple regression: that there is no collinearity among the possible independent variables. Thus, an alternative approach is introduced to overcome the multicollinearity problem and eventually achieve a well-represented model. This approach removes the multicollinearity-source variables on the basis of their correlation coefficient values in the full correlation matrix. Using the full correlation matrix facilitates the use of Excel functions in removing the multicollinearity-source variables. This procedure is found to be easier and time-saving, especially when dealing with a greater number of independent variables in a model and a large number of possible models. Hence, in this paper a detailed insight into the procedure is shown, compared and implemented.
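A minimal sketch of correlation-based removal driven by the full correlation matrix, here in Python rather than Excel; the 0.95 cutoff and the exact drop rule are illustrative assumptions rather than the paper's precise procedure:

```python
import numpy as np
import pandas as pd

def remove_collinear_sources(X: pd.DataFrame, threshold: float = 0.95):
    """Drop independent variables implicated in pairwise correlations whose
    absolute value exceeds the threshold, scanning the full correlation
    matrix once."""
    corr = X.corr().abs()
    # keep only the upper triangle so each variable pair is inspected once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return X.drop(columns=to_drop), to_drop
```

Running this before model selection shrinks the pool of candidate regressors, which is where the time saving over manual inspection of all possible models comes from.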
Gibson, Barbara E; Carnevale, Franco A; King, Gillian
2012-01-01
Postmodernism provides a radical alternative to the dominant discourses of Western societies that emphasize autonomy and independence. It suggests a reimagining of the relationship between the self and the body and the increasingly blurred boundaries between biology and machine. The purpose of this article is to explore in/dependence through a discussion of interconnectedness of persons and assistive technologies. Drawing on postmodern theories, we discuss the interconnections inherent in disability experiences through the case example of Mimi, an adolescent girl with severe physical disabilities. We consider how Mimi, her assistive technologies and her parents can be viewed as assemblages of bodies/technologies/subjectivities that together achieve a set of practices. An examination of these various couplings suggests different understandings of disability that open up possibilities for multiple connections and reimagines dependencies as connectivities. Connectivity can be embraced to explore multiple ways of being-in-the-world for all persons and problematizes the goals of independence inherent in rehabilitation practices.
A Systematic Analysis of 2 Monoisocentric Techniques for the Treatment of Multiple Brain Metastases.
Narayanasamy, Ganesh; Stathakis, Sotirios; Gutierrez, Alonso N; Pappas, Evangelos; Crownover, Richard; Floyd, John R; Papanikolaou, Niko
2017-10-01
In this treatment planning study, we compare the plan quality and delivery parameters for the treatment of multiple brain metastases using 2 monoisocentric techniques: the Multiple Metastases Element from Brainlab and the RapidArc volumetric-modulated arc therapy from Varian Medical Systems. Eight patients who were treated in our institution for multiple metastases (3-7 lesions) were replanned with Multiple Metastases Element using noncoplanar dynamic conformal arcs. The same patients were replanned with the RapidArc technique in Eclipse using 4 noncoplanar arcs. Both techniques were designed using a single isocenter. Plan quality metrics (conformity index, homogeneity index, gradient index, and R50%), monitor units, and the planning time were recorded. Comparison of the Multiple Metastases Element and RapidArc plans was performed using Shapiro-Wilk test, paired Student t test, and Wilcoxon signed rank test. A paired Wilcoxon signed rank test between Multiple Metastases Element and RapidArc showed comparable plan quality metrics and dose to brain. Mean ± standard deviation values of conformity index were 1.8 ± 0.7 and 1.7 ± 0.6, homogeneity index were 1.3 ± 0.1 and 1.3 ± 0.1, gradient index were 5.0 ± 1.8 and 5.1 ± 1.9, and R50% were 4.9 ± 1.8 and 5.0 ± 1.9 for Multiple Metastases Element and RapidArc plans, respectively. Mean brain dose was 2.3 and 2.7 Gy for Multiple Metastases Element and RapidArc plans, respectively. The mean value of monitor units in the Multiple Metastases Element plans was 7286 ± 1065, which is significantly lower than the RapidArc monitor units of 9966 ± 1533 (P < .05). For the planning of multiple brain lesions to be treated with stereotactic radiosurgery, Multiple Metastases Element planning software produced equivalent conformity, homogeneity, dose falloff, and brain V12Gy but required significantly lower monitor units, when compared to RapidArc plans.
A Systematic Analysis of 2 Monoisocentric Techniques for the Treatment of Multiple Brain Metastases
Stathakis, Sotirios; Gutierrez, Alonso N.; Pappas, Evangelos; Crownover, Richard; Floyd, John R.; Papanikolaou, Niko
2016-01-01
Background: In this treatment planning study, we compare the plan quality and delivery parameters for the treatment of multiple brain metastases using 2 monoisocentric techniques: the Multiple Metastases Element from Brainlab and the RapidArc volumetric-modulated arc therapy from Varian Medical Systems. Methods: Eight patients who were treated in our institution for multiple metastases (3-7 lesions) were replanned with Multiple Metastases Element using noncoplanar dynamic conformal arcs. The same patients were replanned with the RapidArc technique in Eclipse using 4 noncoplanar arcs. Both techniques were designed using a single isocenter. Plan quality metrics (conformity index, homogeneity index, gradient index, and R50%), monitor unit, and the planning time were recorded. Comparison of the Multiple Metastases Element and RapidArc plans was performed using Shapiro-Wilk test, paired Student t test, and Wilcoxon signed rank test. Results: A paired Wilcoxon signed rank test between Multiple Metastases Element and RapidArc showed comparable plan quality metrics and dose to brain. Mean ± standard deviation values of conformity index were 1.8 ± 0.7 and 1.7 ± 0.6, homogeneity index were 1.3 ± 0.1 and 1.3 ± 0.1, gradient index were 5.0 ± 1.8 and 5.1 ± 1.9, and R50% were 4.9 ± 1.8 and 5.0 ± 1.9 for Multiple Metastases Element and RapidArc plans, respectively. Mean brain dose was 2.3 and 2.7 Gy for Multiple Metastases Element and RapidArc plans, respectively. The mean value of monitor units in Multiple Metastases Element plan was 7286 ± 1065, which is significantly lower than the RapidArc monitor units of 9966 ± 1533 (P < .05). Conclusion: For the planning of multiple brain lesions to be treated with stereotactic radiosurgery, Multiple Metastases Element planning software produced equivalent conformity, homogeneity, dose falloff, and brain V12 Gy but required significantly lower monitor units, when compared to RapidArc plans. PMID:27612917
Bleeker, H J; Lewin, P A
2000-01-01
A new calibration technique for PVDF ultrasonic hydrophone probes is described. The current implementation of the technique allows determination of hydrophone frequency response between 2 and 100 MHz and is based on the comparison of theoretically predicted and experimentally determined pressure-time waveforms produced by a focused, circular source. The simulation model was derived from the time domain algorithm that solves the nonlinear KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation describing acoustic wave propagation. The calibration data were experimentally verified using independent calibration procedures in the frequency range from 2 to 40 MHz, using a combined time delay spectrometry and reciprocity approach or calibration data provided by the National Physical Laboratory (NPL), UK. The results of verification indicated good agreement between the results obtained using KZK and the above-mentioned independent calibration techniques from 2 to 40 MHz, with a maximum discrepancy of 18% at 30 MHz. The frequency responses obtained using different hydrophone designs, including several membrane and needle probes, are presented, and it is shown that the technique developed provides a desirable tool for independent verification of primary calibration techniques such as those based on optical interferometry. Fundamental limitations of the presented calibration method are also examined.
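For reference, the KZK equation solved by the simulation model is commonly written in retarded time as (standard textbook form, supplied here rather than transcribed from the paper):

$$\frac{\partial^2 p}{\partial z\,\partial\tau} \;=\; \frac{c_0}{2}\,\nabla_{\!\perp}^2 p \;+\; \frac{\delta}{2c_0^3}\,\frac{\partial^3 p}{\partial\tau^3} \;+\; \frac{\beta}{2\rho_0 c_0^3}\,\frac{\partial^2 p^2}{\partial\tau^2},$$

where p is the acoustic pressure, z the propagation distance, τ = t − z/c₀ the retarded time, δ the sound diffusivity, β the nonlinearity coefficient, and ρ₀ the ambient density; the three right-hand terms account for diffraction, absorption, and nonlinearity, respectively.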
EVALUATION OF A DNA PROBE TEST KIT FOR DETECTION OF SALMONELLAE IN BIOSOLIDS
Aims: Current United States regulations (40 CFR 503) for "Class A" biosolids require use of multiple-tube fermentation techniques for fecal coliform or multiple-tube enrichment techniques for Salmonella spp., followed by isolation and biochemical and serological confirmation. T...
Activity Detection and Retrieval for Image and Video Data with Limited Training
2015-06-10
applications. Here we propose two techniques for image segmentation. The first involves an automata-based multiple threshold selection scheme, where a mixture of Gaussians is fitted to the ... automata. For our second approach to segmentation, we employ a region-based segmentation technique that is capable of handling intensity inhomogeneity...
NASA Astrophysics Data System (ADS)
Kuo, Chih-Hao
Efficient and accurate modeling of electromagnetic scattering from layered rough surfaces with buried objects finds applications ranging from detection of landmines to remote sensing of subsurface soil moisture. The formulation of a hybrid numerical/analytical solution to electromagnetic scattering from layered rough surfaces is first presented in this dissertation. The solution to scattering from each rough interface is sought independently based on the extended boundary condition method (EBCM), where the scattered fields of each rough interface are expressed as a summation of plane waves and then cast into reflection/transmission matrices. To account for interactions between multiple rough boundaries, the scattering matrix method (SMM) is applied to recursively cascade the reflection and transmission matrices of each rough interface and obtain the composite reflection matrix of the overall scattering medium. The validation of this method against the Method of Moments (MoM) and the Small Perturbation Method (SPM) is addressed, and numerical results which investigate the potential of low-frequency radar systems in estimating deep soil moisture are presented. The computational efficiency of the proposed method is also discussed. In order to demonstrate the capability of this method in modeling coherent multiple scattering phenomena, the proposed method has been employed to analyze backscattering enhancement and satellite peaks due to surface plasmon waves from layered rough surfaces. Numerical results which show the appearance of enhanced backscattered peaks and satellite peaks are presented. Following the development of the EBCM/SMM technique, a technique which incorporates a buried object in layered rough surfaces by employing the T-matrix method and the cylindrical-to-spatial harmonics transformation is proposed. Validation and numerical results are provided. Finally, a multi-frequency polarimetric inversion algorithm for the retrieval of subsurface soil properties using VHF/UHF-band radar measurements is devised. The top soil dielectric constant is first determined using an L-band inversion algorithm. For the retrieval of subsurface properties, a time-domain inversion technique is employed together with a parameter optimization for the pulse shape of time-delay echoes from VHF/UHF-band radar observations. Numerical studies to investigate the accuracy of the proposed inversion technique in the presence of errors are addressed.
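The recursive cascading step of the SMM can be sketched in a few lines (a minimal two-junction illustration using a Redheffer-style recursion; the function name and matrix conventions are assumptions, not taken from the dissertation):

```python
import numpy as np

def cascade(Ra_d, Ta_d, Ra_u, Ta_u, Rb_d, Tb_d):
    """Combine two scattering junctions, a above b, into composite
    reflection/transmission matrices for downward incidence
    (*_d = downward incidence, *_u = upward incidence)."""
    I = np.eye(Ra_d.shape[0])
    M = np.linalg.inv(I - Ra_u @ Rb_d)   # sums the multiple bounces between a and b
    R = Ra_d + Ta_u @ Rb_d @ M @ Ta_d    # composite reflection seen from above
    T = Tb_d @ M @ Ta_d                  # composite transmission to below b
    return R, T

# Tiny sanity check: cascading a transparent junction (R = 0, T = I) with
# any other junction must return that junction unchanged.
Z, I2 = np.zeros((2, 2)), np.eye(2)
Rb, Tb = 0.3 * I2, 0.7 * I2
R, T = cascade(Z, I2, Z, I2, Rb, Tb)
assert np.allclose(R, Rb) and np.allclose(T, Tb)
```

Cascading interfaces pairwise in this fashion yields the composite reflection matrix of the whole layered medium, which is the quantity the dissertation uses to compute scattered fields.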
Jäncke, Lutz; Alahmadi, Nsreen
2016-04-13
The measurement of brain activation during music listening is a topic that is attracting increased attention from many researchers. Because of their high spatial accuracy, functional MRI measurements are often used for measuring brain activation in the context of music listening. However, this technique faces the issues of contaminating scanner noise and an uncomfortable experimental environment. The electroencephalogram (EEG), however, is a neural registration technique that allows the measurement of neurophysiological activation in silent and more comfortable experimental environments. Thus, it is optimal for recording brain activations during pleasant music stimulation. Using a new mathematical approach to calculate intracortical independent components (sLORETA-IC) on the basis of scalp-recorded EEG, we identified specific intracortical independent components during listening to a musical piece and to scales, which differ substantially from the intracortical independent components calculated from resting-state EEG. Most intracortical independent components are located bilaterally in perisylvian brain areas known to be involved in auditory processing and specifically in music perception. Some intracortical independent components differ between the music and scale listening conditions. The most prominent difference is found in the anterior part of the perisylvian brain region, with stronger activations seen in the left-sided anterior perisylvian regions during music listening, most likely indicating semantic processing during music listening. A further finding is that the intracortical independent components obtained for music and scale listening are most prominent in higher frequency bands (e.g., beta-2 and beta-3), whereas the resting-state intracortical independent components are active in lower frequency bands (alpha-1 and theta). This new technique for calculating intracortical independent components is able to differentiate independent neural networks associated with music and scale listening. Thus, this tool offers new opportunities for studying neural activations during music listening using the silent and more convenient EEG technology.
Konopsky, Valery N; Alieva, Elena V
2010-01-15
A high-precision optical biosensor technique capable of independently determining the refractive index (RI) of liquids is presented. Photonic crystal surface waves were used to detect surface binding events, while an independent registration of the critical angle was used for accurate determination of the liquid RI. This technique was tested using binding of biotin molecules to a streptavidin monolayer at low and high biotin concentrations. The attained baseline noise is 5 × 10^-13 m/Hz^(1/2) for adlayer thickness changes and 9 × 10^-8 RIU/Hz^(1/2) for RI changes. Copyright 2009 Elsevier B.V. All rights reserved.
Recurrent transient ischaemic attack and early risk of stroke: data from the PROMAPA study.
Purroy, Francisco; Jiménez Caballero, Pedro Enrique; Gorospe, Arantza; Torres, María José; Alvarez-Sabin, José; Santamarina, Estevo; Martínez-Sánchez, Patricia; Cánovas, David; Freijo, María José; Egido, Jose Antonio; Ramírez-Moreno, Jose M; Alonso-Arias, Arantza; Rodríguez-Campello, Ana; Casado, Ignacio; Delgado-Mederos, Raquel; Martí-Fàbregas, Joan; Fuentes, Blanca; Silva, Yolanda; Quesada, Helena; Cardona, Pere; Morales, Ana; de la Ossa, Natalia Pérez; García-Pastor, Antonio; Arenillas, Juan F; Segura, Tomas; Jiménez, Carmen; Masjuán, Jaime
2013-06-01
Many guidelines recommend urgent intervention for patients with two or more transient ischaemic attacks (TIAs) within 7 days (multiple TIAs) to reduce the early risk of stroke. To determine whether all patients with multiple TIAs have the same high early risk of stroke. Between April 2008 and December 2009, we included 1255 consecutive patients with a TIA from 30 Spanish stroke centres (PROMAPA study). We prospectively recorded clinical characteristics. We also determined the short-term risk of stroke (at 7 and 90 days). Aetiology was categorised using the TOAST (Trial of Org 10172 in Acute Stroke Treatment) classification. Clinical variables and extracranial vascular imaging were available and assessed in 1137/1255 (90.6%) patients. 7-day and 90-day stroke risks were 2.6% and 3.8%, respectively. Large-artery atherosclerosis (LAA) was confirmed in 190 (16.7%) patients. Multiple TIAs were seen in 274 (24.1%) patients. Duration <1 h (OR=2.97, 95% CI 2.20 to 4.01, p<0.001), LAA (OR=1.92, 95% CI 1.35 to 2.72, p<0.001) and motor weakness (OR=1.37, 95% CI 1.03 to 1.81, p=0.031) were independent predictors of multiple TIAs. The subsequent risk of stroke in these patients at 7 and 90 days was significantly higher than the risk after a single TIA (5.9% vs 1.5%, p<0.001 and 6.8% vs 3.0%, respectively). In the logistic regression model, among patients with multiple TIAs, no variables remained as independent predictors of stroke recurrence. According to our results, multiple TIAs within 7 days are associated with a greater subsequent risk of stroke than after a single TIA. Nevertheless, we found no independent predictor of stroke recurrence among these patients.
Developing Multiple Choice Tests: Tips & Techniques
ERIC Educational Resources Information Center
McCowan, Richard J.
1999-01-01
Item writing is a major responsibility of trainers. Too often, qualified staff who prepare lessons carefully and teach conscientiously use inadequate tests that do not validly reflect the true level of trainee achievement. This monograph describes techniques for constructing multiple-choice items that measure student performance accurately. It…
Scott, T W; Amerasinghe, P H; Morrison, A C; Lorenz, L H; Clark, G G; Strickman, D; Kittayapong, P; Edman, J D
2000-01-01
We used a histologic technique to study multiple blood feeding in a single gonotrophic cycle by engorged Aedes aegypti (L.) that were collected weekly for 2 yr from houses in a rural village in Thailand (n = 1,891) and a residential section of San Juan, Puerto Rico (n = 1,675). Overall, mosquitoes from Thailand contained significantly more multiple meals (n = 1,300, 42% double meals, 5% triple meals) than mosquitoes collected in Puerto Rico (n = 1,156, 32% double meals, 2% triple meals). The portion of specimens for which frequency of feeding could not be determined was 31% at both sites. We estimated that on average Ae. aegypti take 0.76 and 0.63 human blood meals per day in Thailand and Puerto Rico, respectively. However, frequency of multiple feeding varied among houses and, in Puerto Rico, the neighborhoods from which mosquitoes were collected. In Thailand 65% of the mosquitoes fed twice on the same day, whereas in Puerto Rico 57% took multiple meals separated by ≥1 d. At both sites, the majority of engorged specimens were collected inside houses (Thailand 86%, Puerto Rico 95%). The number of blood meals detected was independent of where mosquitoes were collected (inside versus outside of the house) at both sites and the time of day collections were made in Puerto Rico. Feeding rates were slightly higher for mosquitoes collected in the afternoon in Thailand. Temperatures were significantly higher and mosquitoes significantly smaller in Thailand than in Puerto Rico. At both sites female size was negatively associated with temperature. Rates of multiple feeding were associated positively with temperature and negatively with mosquito size in Thailand, but not in Puerto Rico. Multiple feeding during a single gonotrophic cycle is a regular part of Ae. aegypti biology, can vary geographically and under different climate conditions, and may be associated with variation in patterns of dengue virus transmission.
Automatic Visual Tracking and Social Behaviour Analysis with Multiple Mice
Giancardo, Luca; Sona, Diego; Huang, Huiping; Sannino, Sara; Managò, Francesca; Scheggia, Diego; Papaleo, Francesco; Murino, Vittorio
2013-01-01
Social interactions are made of complex behavioural actions that might be found in all mammalians, including humans and rodents. Recently, mouse models are increasingly being used in preclinical research to understand the biological basis of social-related pathologies or abnormalities. However, reliable and flexible automatic systems able to precisely quantify social behavioural interactions of multiple mice are still missing. Here, we present a system built on two components: a module able to accurately track the position of multiple interacting mice from videos, regardless of their fur colour or light settings, and a module that automatically characterises social and non-social behaviours. The behavioural analysis is obtained by deriving a new set of specialised spatio-temporal features from the tracker output. These features are further employed by a learning-by-example classifier, which predicts, for each frame and for each mouse in the cage, one of the behaviours learnt from the examples given by the experimenters. The system is validated on an extensive set of experimental trials involving multiple mice in an open arena. In a first evaluation we compare the classifier output with the independent evaluation of two human graders, obtaining comparable results. Then, we show the applicability of our technique to multiple-mice settings, using up to four interacting mice. The system is also compared with a solution recently proposed in the literature that, similarly to ours, addresses the problem with a learning-by-examples approach. Finally, we further validated our automatic system in differentiating between C57BL/6J (a commonly used reference inbred strain) and BTBR T+tf/J (a mouse model for autism spectrum disorders). Overall, these data demonstrate the validity and effectiveness of this new machine learning system in the detection of social and non-social behaviours in multiple (>2) interacting mice, and its versatility to deal with different experimental settings and scenarios. PMID:24066146
Asymmetric Dual-Band Tracking Technique for Optimal Joint Processing of BDS B1I and B1C Signals
Wang, Chuhan; Cui, Xiaowei; Ma, Tianyi; Lu, Mingquan
2017-01-01
Along with the rapid development of the Global Navigation Satellite System (GNSS), satellite navigation signals have become more diversified, complex, and agile in adapting to increasing market demands. Various techniques have been developed for processing multiple navigation signals to achieve better performance in terms of accuracy, sensitivity, and robustness. This paper focuses on a technique for processing two signals with separate but adjacent center frequencies, such as B1I and B1C signals in the BeiDou global system. The two signals may differ in modulation scheme, power, and initial phase relation and can be processed independently by user receivers; however, the propagation delays of the two signals from a satellite are nearly identical as they are modulated on adjacent frequencies, share the same reference clock, and undergo nearly identical propagation paths to the receiver, resulting in strong coherence between the two signals. Joint processing of these signals can achieve optimal measurement performance due to the increased Gabor bandwidth and power. In this paper, we propose a universal scheme of asymmetric dual-band tracking (ASYM-DBT) to take advantage of the strong coherence, the increased Gabor bandwidth, and power of the two signals in achieving much-reduced thermal noise and more accurate ranging results when compared with the traditional single-band algorithm. PMID:29035350
MEG-SIM: a web portal for testing MEG analysis methods using realistic simulated and empirical data.
Aine, C J; Sanfratello, L; Ranken, D; Best, E; MacArthur, J A; Wallace, T; Gilliam, K; Donahue, C H; Montaño, R; Bryant, J E; Scott, A; Stephen, J M
2012-04-01
MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have their own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes ( http://cobre.mrn.org/megsim/ ). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis.
MEG-SIM: A Web Portal for Testing MEG Analysis Methods using Realistic Simulated and Empirical Data
Aine, C. J.; Sanfratello, L.; Ranken, D.; Best, E.; MacArthur, J. A.; Wallace, T.; Gilliam, K.; Donahue, C. H.; Montaño, R.; Bryant, J. E.; Scott, A.; Stephen, J. M.
2012-01-01
MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have their own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes (http://cobre.mrn.org/megsim/). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis. PMID:22068921
Characteristics of Dry Chin-Tuck Swallowing Vibrations and Sounds
Dudik, Joshua M; Jestrović, Iva; Luan, Bo; Coyle, James L.; Sejdić, Ervin
2015-01-01
Objective: The effects of the chin-tuck maneuver, a technique commonly employed to compensate for dysphagia, on cervical auscultation are not fully understood. Characterizing a technique that is known to affect swallowing function is an important step on the way to developing a new instrumentation-based swallowing screening tool. Methods: In this study, we recorded data from 55 adult participants who each completed five saliva swallows in a chin-tuck position. The resulting data were processed using previously designed filtering and segmentation algorithms. We then calculated 9 time, frequency, and time-frequency domain features for each independent signal. Results: We found that multiple frequency and time domain features varied significantly between male and female subjects as well as between swallowing sounds and vibrations. However, our analysis showed that participant age did not play a significant role on the values of the extracted features. Finally, we found that various frequency features corresponding to swallowing vibrations did demonstrate statistically significant variation between the neutral and chin-tuck positions, but sounds showed no changes between these two positions. Conclusion: The chin-tuck maneuver affects many facets of swallowing vibrations and sounds, and its effects can be monitored via cervical auscultation. Significance: These results suggest that a subject's swallowing technique does need to be accounted for when monitoring their performance with cervical auscultation based instrumentation. PMID:25974926
Asymmetric Dual-Band Tracking Technique for Optimal Joint Processing of BDS B1I and B1C Signals.
Wang, Chuhan; Cui, Xiaowei; Ma, Tianyi; Zhao, Sihao; Lu, Mingquan
2017-10-16
Along with the rapid development of the Global Navigation Satellite System (GNSS), satellite navigation signals have become more diversified, complex, and agile in adapting to increasing market demands. Various techniques have been developed for processing multiple navigation signals to achieve better performance in terms of accuracy, sensitivity, and robustness. This paper focuses on a technique for processing two signals with separate but adjacent center frequencies, such as B1I and B1C signals in the BeiDou global system. The two signals may differ in modulation scheme, power, and initial phase relation and can be processed independently by user receivers; however, the propagation delays of the two signals from a satellite are nearly identical as they are modulated on adjacent frequencies, share the same reference clock, and undergo nearly identical propagation paths to the receiver, resulting in strong coherence between the two signals. Joint processing of these signals can achieve optimal measurement performance due to the increased Gabor bandwidth and power. In this paper, we propose a universal scheme of asymmetric dual-band tracking (ASYM-DBT) to take advantage of the strong coherence, the increased Gabor bandwidth, and power of the two signals in achieving much-reduced thermal noise and more accurate ranging results when compared with the traditional single-band algorithm.
Feltus, F Alex
2014-06-01
Understanding the control of any trait optimally requires the detection of causal genes, gene interaction, and mechanism of action to discover and model the biochemical pathways underlying the expressed phenotype. Functional genomics techniques, including RNA expression profiling via microarray and high-throughput DNA sequencing, allow for the precise genome localization of biological information. Powerful genetic approaches, including quantitative trait locus (QTL) and genome-wide association study mapping, link phenotype with genome positions, yet genetics is less precise in localizing the relevant mechanistic information encoded in DNA. The coupling of salient functional genomic signals with genetically mapped positions is an appealing approach to discover meaningful gene-phenotype relationships. Techniques used to define this genetic-genomic convergence comprise the field of systems genetics. This short review will address an application of systems genetics where RNA profiles are associated with genetically mapped genome positions of individual genes (eQTL mapping) or as gene sets (co-expression network modules). Both approaches can be applied for knowledge independent selection of candidate genes (and possible control mechanisms) underlying complex traits where multiple, likely unlinked, genomic regions might control specific complex traits. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Ren, Yongxiong; Li, Long; Wang, Zhe; ...
2016-09-12
To increase system capacity of underwater optical communications, we employ the spatial domain to simultaneously transmit multiple orthogonal spatial beams, each carrying an independent data channel. In this paper, we show up to a 40-Gbit/s link by multiplexing and transmitting four green orbital angular momentum (OAM) beams through a single aperture. Moreover, we investigate the degrading effects of scattering/turbidity, water current, and thermal gradient-induced turbulence, and we find that thermal gradients cause the most distortions and turbidity causes the most loss. We show systems results using two different data generation techniques, one at 1064 nm for 10-Gbit/s/beam and one at 520 nm for 1-Gbit/s/beam; we use both techniques since present data-modulation technologies are faster for infrared (IR) than for green. For the 40-Gbit/s link, data is modulated in the IR, and OAM imprinting is performed in the green using a specially-designed metasurface phase mask. For the 4-Gbit/s link, a green laser diode is directly modulated. Lastly, we show that inter-channel crosstalk induced by thermal gradients can be mitigated using multi-channel equalisation processing.
Chen, Xin; Qin, Shanshan; Chen, Shuai; Li, Jinlong; Li, Lixin; Wang, Zhongling; Wang, Quan; Lin, Jianping; Yang, Cheng; Shui, Wenqing
2015-01-01
In fragment-based lead discovery (FBLD), a cascade combining multiple orthogonal technologies is required for reliable detection and characterization of fragment binding to the target. Given the limitations of the mainstream screening techniques, we presented a ligand-observed mass spectrometry approach to expand the toolkits and increase the flexibility of building a FBLD pipeline especially for tough targets. In this study, this approach was integrated into a FBLD program targeting the HCV RNA polymerase NS5B. Our ligand-observed mass spectrometry analysis resulted in the discovery of 10 hits from a 384-member fragment library through two independent screens of complex cocktails and a follow-up validation assay. Moreover, this MS-based approach enabled quantitative measurement of weak binding affinities of fragments which was in general consistent with SPR analysis. Five out of the ten hits were then successfully translated to X-ray structures of fragment-bound complexes to lay a foundation for structure-based inhibitor design. With distinctive strengths in terms of high capacity and speed, minimal method development, easy sample preparation, low material consumption and quantitative capability, this MS-based assay is anticipated to be a valuable addition to the repertoire of current fragment screening techniques. PMID:25666181
Boundary particle method for Laplace transformed time fractional diffusion equations
NASA Astrophysics Data System (ADS)
Fu, Zhuo-Jia; Chen, Wen; Yang, Hai-Tian
2013-02-01
This paper develops a novel boundary meshless approach, the Laplace transformed boundary particle method (LTBPM), for numerical modeling of time fractional diffusion equations. It implements the Laplace transform technique to obtain the corresponding time-independent inhomogeneous equation in Laplace space and then employs a truly boundary-only meshless boundary particle method (BPM) to solve this Laplace-transformed problem. Unlike other boundary discretization methods, the BPM does not require any inner nodes, since the recursive composite multiple reciprocity technique (RC-MRM) is used to convert the inhomogeneous problem into a higher-order homogeneous problem. Finally, the Stehfest numerical inverse Laplace transform (NILT) is implemented to retrieve the numerical solutions of the time fractional diffusion equations from the corresponding BPM solutions. In comparison with finite difference discretization, the LTBPM introduces the Laplace transform and the Stehfest NILT algorithm to deal with the time fractional derivative term, which evades the costly convolution integral calculation in approximating the time fractional derivative and avoids the effect of the time step on numerical accuracy and stability. Consequently, it can effectively simulate long time-history fractional diffusion systems. Error analysis and numerical experiments demonstrate that the present LTBPM is highly accurate and computationally efficient for 2D and 3D time fractional diffusion equations.
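The Stehfest inversion step has a compact closed form; the following is a generic sketch of the standard Gaver-Stehfest algorithm (not the authors' code; the term count N and the test transform are illustrative):

```python
import math

def stehfest_coeffs(N: int):
    """Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t: float, N: int = 12) -> float:
    """Approximate f(t) from its Laplace transform F(s)."""
    a = math.log(2.0) / t
    V = stehfest_coeffs(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Check on F(s) = 1/(s+1)  <->  f(t) = exp(-t):
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))  # ~ 0.3679
```

Because only pointwise evaluations of F(s) along the real axis are needed, the inversion decouples the time discretization from the BPM solve, which is the advantage the paper claims over convolution-based time stepping.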
Gohel, Mukesh; Patel, Madhabhai; Amin, Avani; Agrawal, Ruchi; Dave, Rikita; Bariya, Nehal
2004-04-26
The purpose of this research was to develop mouth dissolve tablets of nimesulide. Granules containing nimesulide, camphor, crospovidone, and lactose were prepared by a wet granulation technique. Camphor was sublimed from the dried granules by exposure to vacuum. The porous granules were then compressed. Alternatively, tablets were first prepared and later exposed to vacuum. The tablets were evaluated for percentage friability, wetting time, and disintegration time. In the investigation, a 3^2 full factorial design was used to investigate the joint influence of 2 formulation variables: the amounts of camphor and crospovidone. The results of multiple linear regression analysis revealed that for obtaining a rapidly disintegrating dosage form, tablets should be prepared using an optimum concentration of camphor and a higher percentage of crospovidone. A contour plot is also presented to graphically represent the effect of the independent variables on the disintegration time and percentage friability. A checkpoint batch was also prepared to prove the validity of the evolved mathematical model. Sublimation of camphor from tablets resulted in superior tablets as compared with the tablets prepared from granules that were exposed to vacuum. The systematic formulation approach helped in understanding the effect of formulation processing variables.
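A minimal sketch of the regression behind such a 3^2 factorial analysis (coded factor levels fit by ordinary least squares; the response values below are hypothetical, purely to make the sketch runnable):

```python
import numpy as np

# Coded levels for a 3^2 full factorial design:
# X1 = amount of camphor, X2 = amount of crospovidone (-1, 0, +1 per level)
X1, X2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
x1, x2 = X1.ravel(), X2.ravel()

# Hypothetical disintegration times (s) for the 9 runs -- illustrative only
y = np.array([48, 40, 35, 38, 30, 24, 30, 22, 15], dtype=float)

# Fit Y = b0 + b1*X1 + b2*X2 + b12*X1*X2 by ordinary least squares
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12"], coef.round(2))))
```

The signs and magnitudes of b1 and b2 indicate how each formulation variable drives the response, which is the information summarized by the contour plot mentioned above.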
Li, Heming; Meng, Qing H; Noh, Hyangsoon; Batth, Izhar Singh; Somaiah, Neeta; Torres, Keila E; Xia, Xueqing; Wang, Ruoyu; Li, Shulin
2017-09-10
Circulating tumor cells (CTCs) enter the vasculature or lymphatic system after shedding from the primary tumor. CTCs may serve as "seed" cells for tumor metastasis. The utility of CTCs in clinical applications for sarcoma is not fully investigated, partly owing to the necessity for fresh blood samples and the lack of a CTC-specific antibody. To overcome these drawbacks, we developed a technique for sarcoma CTC capture and detection using cryopreserved peripheral blood mononuclear cells (PBMCs) and our proprietary cell-surface vimentin (CSV) antibody 84-1, which is specific to tumor cells. This technique was validated by a sarcoma cell spiking assay, matched CTC comparison between fresh and cryopreserved PBMCs, and independent tumor markers in multiple types of sarcoma patient blood samples. The reproducibility was maximized when cryopreserved PBMCs were prepared from fresh blood samples within 2 h of the blood draw. In summary, as far as we are aware, ours is the first report to capture and detect CTCs from cryopreserved PBMCs. Further validation in other types of tumor may help boost the feasibility and utility of CTC-based diagnosis in a centralized laboratory. Copyright © 2017 Elsevier B.V. All rights reserved.
Ren, Yongxiong; Li, Long; Wang, Zhe; Kamali, Seyedeh Mahsa; Arbabi, Ehsan; Arbabi, Amir; Zhao, Zhe; Xie, Guodong; Cao, Yinwen; Ahmed, Nisar; Yan, Yan; Liu, Cong; Willner, Asher J.; Ashrafi, Solyman; Tur, Moshe; Faraon, Andrei; Willner, Alan E.
2016-01-01
To increase system capacity of underwater optical communications, we employ the spatial domain to simultaneously transmit multiple orthogonal spatial beams, each carrying an independent data channel. In this paper, we show up to a 40-Gbit/s link by multiplexing and transmitting four green orbital angular momentum (OAM) beams through a single aperture. Moreover, we investigate the degrading effects of scattering/turbidity, water current, and thermal gradient-induced turbulence, and we find that thermal gradients cause the most distortions and turbidity causes the most loss. We show systems results using two different data generation techniques, one at 1064 nm for 10-Gbit/s/beam and one at 520 nm for 1-Gbit/s/beam; we use both techniques since present data-modulation technologies are faster for infrared (IR) than for green. For the 40-Gbit/s link, data is modulated in the IR, and OAM imprinting is performed in the green using a specially-designed metasurface phase mask. For the 4-Gbit/s link, a green laser diode is directly modulated. Finally, we show that inter-channel crosstalk induced by thermal gradients can be mitigated using multi-channel equalisation processing. PMID:27615808
NASA Astrophysics Data System (ADS)
Ren, Yongxiong; Li, Long; Wang, Zhe; Kamali, Seyedeh Mahsa; Arbabi, Ehsan; Arbabi, Amir; Zhao, Zhe; Xie, Guodong; Cao, Yinwen; Ahmed, Nisar; Yan, Yan; Liu, Cong; Willner, Asher J.; Ashrafi, Solyman; Tur, Moshe; Faraon, Andrei; Willner, Alan E.
2016-09-01
To increase system capacity of underwater optical communications, we employ the spatial domain to simultaneously transmit multiple orthogonal spatial beams, each carrying an independent data channel. In this paper, we show up to a 40-Gbit/s link by multiplexing and transmitting four green orbital angular momentum (OAM) beams through a single aperture. Moreover, we investigate the degrading effects of scattering/turbidity, water current, and thermal gradient-induced turbulence, and we find that thermal gradients cause the most distortions and turbidity causes the most loss. We show systems results using two different data generation techniques, one at 1064 nm for 10-Gbit/s/beam and one at 520 nm for 1-Gbit/s/beam; we use both techniques since present data-modulation technologies are faster for infrared (IR) than for green. For the 40-Gbit/s link, data is modulated in the IR, and OAM imprinting is performed in the green using a specially-designed metasurface phase mask. For the 4-Gbit/s link, a green laser diode is directly modulated. Finally, we show that inter-channel crosstalk induced by thermal gradients can be mitigated using multi-channel equalisation processing.
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing tasks, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
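The priority idea can be sketched compactly; the following is a sequential 2D simulation of the batch conflict-resolution rule (the paper's setting is arbitrary surfaces with intrinsic metrics; the unit square, radius, and batch sizes here are assumptions for brevity):

```python
import numpy as np

def poisson_disk_priority(n_candidates=1200, r=0.05, rounds=20, seed=1):
    """Sequential simulation of priority-based parallel dart throwing in the
    unit square (a 2D stand-in for the paper's surface setting)."""
    rng = np.random.default_rng(seed)
    accepted = np.empty((0, 2))
    for _ in range(rounds):
        pts = rng.random((n_candidates, 2))
        pri = rng.random(n_candidates)          # random, unbiased priorities
        if len(accepted):                       # drop conflicts with accepted samples
            d = np.linalg.norm(pts[:, None] - accepted[None], axis=2)
            keep = (d >= r).all(axis=1)
            pts, pri = pts[keep], pri[keep]
        if len(pts) == 0:
            continue
        # A candidate survives only if it out-prioritizes every conflicting
        # candidate in the batch -- survivors are mutually non-conflicting,
        # so in the real algorithm all threads may commit them independently.
        d = np.linalg.norm(pts[:, None] - pts[None], axis=2)
        conflict = (d < r) & ~np.eye(len(pts), dtype=bool)
        wins = np.array([pri[i] > pri[conflict[i]].max(initial=-1.0)
                         for i in range(len(pts))])
        accepted = np.vstack([accepted, pts[wins]])
    return accepted

print(len(poisson_disk_priority()))   # typically a few hundred samples
```

The key property, mirrored in the assert-free check above, is that two conflicting candidates can never both win, so no locking on the sample set is needed.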
A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test
NASA Technical Reports Server (NTRS)
Messer, Bradley
2007-01-01
Propulsion ground test facilities face the daily challenge of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Over the last decade NASA's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and exceeded the capabilities of numerous test facility and test article components. A logistic regression mathematical modeling technique has been developed to predict the probability of successfully completing a rocket propulsion test. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous dependent variable Y, where Y can take only one of two possible outcomes, in this case Success or Failure of accomplishing a full-duration test. The use of logistic regression modeling is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from this type of model provide project managers with insight and confidence into the effectiveness of rocket propulsion ground testing.
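In symbols, this model takes the standard logistic form (standard notation, supplied here for clarity rather than quoted from the paper):

$$\Pr(Y = \text{Success} \mid X_1,\dots,X_k) \;=\; \frac{1}{1 + \exp\!\bigl[-(\beta_0 + \beta_1 X_1 + \cdots + \beta_k X_k)\bigr]},$$

where the coefficients β are estimated by maximum likelihood from past test outcomes, and the predictors X might encode facility, test article, and schedule attributes.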
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Yongxiong; Li, Long; Wang, Zhe
To increase system capacity of underwater optical communications, we employ the spatial domain to simultaneously transmit multiple orthogonal spatial beams, each carrying an independent data channel. In this paper, we show up to a 40-Gbit/s link by multiplexing and transmitting four green orbital angular momentum (OAM) beams through a single aperture. Moreover, we investigate the degrading effects of scattering/turbidity, water current, and thermal gradient-induced turbulence, and we find that thermal gradients cause the most distortions and turbidity causes the most loss. We show systems results using two different data generation techniques, one at 1064 nm for 10-Gbit/s/beam and one at 520 nm for 1-Gbit/s/beam; we use both techniques since present data-modulation technologies are faster for infrared (IR) than for green. For the 40-Gbit/s link, data is modulated in the IR, and OAM imprinting is performed in the green using a specially-designed metasurface phase mask. For the 4-Gbit/s link, a green laser diode is directly modulated. Lastly, we show that inter-channel crosstalk induced by thermal gradients can be mitigated using multi-channel equalisation processing.
NASA Technical Reports Server (NTRS)
Vajingortin, L. D.; Roisman, W. P.
1991-01-01
The problem of ensuring the required quality of products and/or technological processes often becomes more difficult because there is no general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time taken to finish complex, vital articles. To create this theory, one has to overcome a number of difficulties and to solve the following tasks: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; finding a new technique for assigning tolerances to primary factors with regard to economical, technological, and other criteria, the technique being based on the solution of the main problem; and well-reasoned assignment of nominal values for the primary factors which serve as the basis for creating tolerances. Each of the above listed tasks is of independent importance. An attempt is made to give solutions for this problem. The above problem, dealing with quality assurance in a mathematically formalized aspect, is called the multiple inverse problem.
Ren, Yongxiong; Li, Long; Wang, Zhe; Kamali, Seyedeh Mahsa; Arbabi, Ehsan; Arbabi, Amir; Zhao, Zhe; Xie, Guodong; Cao, Yinwen; Ahmed, Nisar; Yan, Yan; Liu, Cong; Willner, Asher J; Ashrafi, Solyman; Tur, Moshe; Faraon, Andrei; Willner, Alan E
2016-09-12
To increase system capacity of underwater optical communications, we employ the spatial domain to simultaneously transmit multiple orthogonal spatial beams, each carrying an independent data channel. In this paper, we show up to a 40-Gbit/s link by multiplexing and transmitting four green orbital angular momentum (OAM) beams through a single aperture. Moreover, we investigate the degrading effects of scattering/turbidity, water current, and thermal gradient-induced turbulence, and we find that thermal gradients cause the most distortions and turbidity causes the most loss. We show systems results using two different data generation techniques, one at 1064 nm for 10-Gbit/s/beam and one at 520 nm for 1-Gbit/s/beam; we use both techniques since present data-modulation technologies are faster for infrared (IR) than for green. For the 40-Gbit/s link, data is modulated in the IR, and OAM imprinting is performed in the green using a specially-designed metasurface phase mask. For the 4-Gbit/s link, a green laser diode is directly modulated. Finally, we show that inter-channel crosstalk induced by thermal gradients can be mitigated using multi-channel equalisation processing.
Automated simultaneous multiple feature classification of MTI data
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.
2002-08-01
Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.
NASA Astrophysics Data System (ADS)
Li, Xiang; Luo, Ming; Qiu, Ying; Alphones, Arokiaswami; Zhong, Wen-De; Yu, Changyuan; Yang, Qi
2018-02-01
In this paper, channel equalization techniques for coherent optical fiber transmission systems based on independent component analysis (ICA) are reviewed. The principle of ICA for blind source separation is introduced. The ICA based channel equalization after both single-mode fiber and few-mode fiber transmission for single-carrier and orthogonal frequency division multiplexing (OFDM) modulation formats are investigated, respectively. The performance comparisons with conventional channel equalization techniques are discussed.
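As a generic illustration of the blind source separation at the heart of such equalizers (a sketch using scikit-learn's FastICA on synthetic mixtures; the signals and mixing matrix are stand-ins, not a fiber-channel model):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)

# Two independent sources (stand-ins for co-propagating data streams)
s = np.column_stack([np.sign(np.sin(40 * np.pi * t)),   # square wave
                     np.cos(14 * np.pi * t)])           # smooth carrier
A = np.array([[1.0, 0.6], [0.4, 1.0]])                  # unknown mixing (crosstalk)
x = s @ A.T                                             # observed mixtures

ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)     # recovered sources (up to order and scale)

# Cross-correlations between recovered and true sources: each row should
# contain one entry near +/-1, confirming separation.
print(np.abs(np.corrcoef(s_hat.T, s.T))[0:2, 2:4].round(2))
```

The order and scale ambiguity visible here is intrinsic to ICA and is one of the practical issues the equalization schemes reviewed in the paper must handle.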
Model-independent analysis of the Fermilab Tevatron turn-by-turn beam position monitor measurements
NASA Astrophysics Data System (ADS)
Petrenko, A. V.; Valishev, A. A.; Lebedev, V. A.
2011-09-01
Coherent transverse beam oscillations in the Tevatron were analyzed with the model-independent analysis (MIA) technique. This allowed one to obtain the model-independent values of coupled betatron amplitudes, phase advances, and dispersion function around the ring from a single dipole kick measurement. In order to solve the MIA mode mixing problem which limits the accuracy of determination of the optical functions, we have developed a new technique of rotational MIA mode untangling. The basic idea is to treat each beam position monitor (BPM) as two BPMs separated in a ring by exactly one turn. This leads to a simple criterion of MIA mode separation: the betatron phase advance between any BPM and its counterpart shifted by one turn should be equal to the betatron tune and therefore should not depend on the BPM position in the ring. Furthermore, we describe a MIA-based technique to locate vibrating magnets in a storage ring.
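The core of MIA is a singular value decomposition of the turn-by-turn BPM data matrix, and the one-turn-shift trick described above amounts to stacking the data with a one-turn-delayed copy of itself. A hedged numpy sketch on a synthetic betatron signal (all numbers illustrative, not Tevatron data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_bpm, n_turn, tune = 8, 512, 0.31
phase = np.sort(rng.uniform(0, 2 * np.pi, n_bpm))   # BPM betatron phases
turns = np.arange(n_turn)

# Synthetic turn-by-turn readings: one coherent betatron mode plus noise
X = np.cos(2 * np.pi * tune * turns[None, :] + phase[:, None])
X += 0.02 * rng.normal(size=X.shape)

# "Each BPM as two BPMs separated by one turn": stack a one-turn-shifted copy
Y = np.vstack([X[:, :-1], X[:, 1:]])

U, S, Vt = np.linalg.svd(Y - Y.mean(axis=1, keepdims=True), full_matrices=False)
# The two leading spatial modes span the betatron motion (a cos/sin pair);
# the phase advance between each BPM row and its shifted twin is the tune.
mode_phase = np.arctan2(U[:, 1], U[:, 0])
adv = (mode_phase[n_bpm:] - mode_phase[:n_bpm]) % (2 * np.pi)
print((adv / (2 * np.pi)).round(3))  # ~ tune (0.31), or its 1 - tune mirror,
                                     # and crucially the same at every BPM
```

The separation criterion quoted above corresponds to the final line: the per-BPM phase advance to the one-turn twin is constant across the ring, whereas mixed modes break that constancy.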
Yu, Guozhi; Hozé, Nathanaël; Rolff, Jens
2016-01-01
Antimicrobial peptides (AMPs) and antibiotics reduce the net growth rate of bacterial populations they target. It is relevant to understand if effects of multiple antimicrobials are synergistic or antagonistic, in particular for AMP responses, because naturally occurring responses involve multiple AMPs. There are several competing proposals describing how multiple types of antimicrobials add up when applied in combination, such as Loewe additivity or Bliss independence. These additivity terms are defined ad hoc from abstract principles explaining the supposed interaction between the antimicrobials. Here, we link these ad hoc combination terms to a mathematical model that represents the dynamics of antimicrobial molecules hitting targets on bacterial cells. In this multi-hit model, bacteria are killed when a certain number of targets are hit by antimicrobials. Using this bottom-up approach reveals that Bliss independence should be the model of choice if no interaction between antimicrobial molecules is expected. Loewe additivity, on the other hand, describes scenarios in which antimicrobials affect the same components of the cell, i.e. are not acting independently. While our approach idealizes the dynamics of antimicrobials, it provides a conceptual underpinning of the additivity terms. The choice of the additivity term is essential to determine synergy or antagonism of antimicrobials. This article is part of the themed issue ‘Evolutionary ecology of arthropod antimicrobial peptides’. PMID:27160596
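For reference, the two additivity notions contrasted above are conventionally written as follows (standard definitions with fractional effects E ∈ [0, 1]; the notation is supplied here, not quoted from the article):

$$\text{Bliss independence:}\quad E_{AB} \;=\; E_A + E_B - E_A E_B,$$

$$\text{Loewe additivity:}\quad \frac{d_A}{D_A(E)} + \frac{d_B}{D_B(E)} \;=\; 1,$$

where d_A and d_B are the doses used in combination and D_X(E) is the dose of drug X alone that produces the same effect E. The multi-hit argument above says the first form follows when the antimicrobials hit targets independently, the second when they act on the same cellular components.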
Interactions Dominate the Dynamics of Visual Cognition
ERIC Educational Resources Information Center
Stephen, Damian G.; Mirman, Daniel
2010-01-01
Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of…
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE extends existing MMAE methods with new techniques for sampling the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow: adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples; furthermore, resamples are not required to use the same technique. Both techniques are demonstrated in both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates, using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
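A minimal version of the Latin hypercube draw described above (generic LHS, not the GRAPE implementation; the parameter bounds are placeholders):

```python
import numpy as np

def latin_hypercube(n_samples: int, bounds, rng=None):
    """Draw n_samples points with exactly one sample per stratum in each
    dimension; `bounds` is a sequence of (low, high) pairs per parameter."""
    rng = rng or np.random.default_rng()
    bounds = np.asarray(bounds, dtype=float)          # shape (n_dims, 2)
    n_dims = len(bounds)
    # Independently permuted stratum indices per dimension, jittered in-stratum
    u = (rng.permuted(np.tile(np.arange(n_samples), (n_dims, 1)), axis=1).T
         + rng.random((n_samples, n_dims))) / n_samples
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# e.g. 10 filter models over two parameters (mass in kg, damping in N*s/m)
print(latin_hypercube(10, [(0.5, 2.0), (0.0, 1.0)]))
```

Because each dimension is stratified independently, the sample count is set by n_samples alone, so adding parameter dimensions does not force more models, which is the property noted above.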
Analysis and Interpretation of Findings Using Multiple Regression Techniques
ERIC Educational Resources Information Center
Hoyt, William T.; Leierer, Stephen; Millington, Michael J.
2006-01-01
Multiple regression and correlation (MRC) methods form a flexible family of statistical techniques that can address a wide variety of different types of research questions of interest to rehabilitation professionals. In this article, we review basic concepts and terms, with an emphasis on interpretation of findings relevant to research questions…
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
ERIC Educational Resources Information Center
Jones, Gregory V.
1987-01-01
It is suggested that theorists may develop both independence and exclusivity forms of multiple-process models, allowing the choice between them to be made on empirical rather than a priori grounds. This theoretical approach is adopted in the specific case of memory retrieval. (Author/LMO)
Independent Events in Elementary Probability Theory
ERIC Educational Resources Information Center
Csenki, Attila
2011-01-01
In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): If the n events E_1, …
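For context, the multiplication rule in question requires more than pairwise products: events E_1, …, E_n are jointly independent when (standard definition, supplied for clarity)

$$P\Bigl(\,\bigcap_{i\in S} E_i\Bigr) \;=\; \prod_{i\in S} P(E_i) \qquad \text{for every subset } S \subseteq \{1,\dots,n\},$$

a strictly stronger requirement than pairwise independence, which checks only the subsets with |S| = 2.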
USDA-ARS's Scientific Manuscript database
This field study investigated the colony effect of a Fipronil spot-treatment applied to active infestations of Formosan subterranean termite, Coptotermes formosanus Shiraki. Spot-treatments were applied to a single active independent monitor from each of four colonies in which multiple independent m...
Evaluating the Effectiveness of the Lecture versus Independent Study.
ERIC Educational Resources Information Center
DaRosa, Debra A.; And Others
1991-01-01
The impacts of independent study and the lecture approach on student test scores and study time were compared for 205 medical students studying surgery. Learning objective, multiple-choice, and essay questions were developed for selected topics related to surgery. Findings support increased individual active learning strategies and decreased…
Racism, mental illness and social support in the UK.
Chakraborty, Apu T; McKenzie, Kwame J; Hajat, Shakoor; Stansfeld, Stephen A
2010-12-01
The difference in risk of mental illness in UK ethnic minorities may be related to a balance between specific risk factors such as racial discrimination and mediating factors such as social support. We investigated whether social support from friends or relatives reduces the cross-sectional association between perceived racism and the risk of mental illness in an ethnic minority group. We conducted secondary analyses of nationally representative community samples of five UK ethnic minority groups (EMPIRIC dataset; n = 4,281) using multiple regression techniques. We found that the associations between perceived racism, common mental disorder and potentially psychotic symptoms were mainly independent of social support as measured by the number of close persons and their proximity to the individual. Social support when measured in this way does not mediate the associations between perceived racism and mental ill health in this population-based sample.
Measurement Of Multiphase Flow Water Fraction And Water-cut
NASA Astrophysics Data System (ADS)
Xie, Cheng-gang
2007-06-01
This paper describes a microwave transmission multiphase flow water-cut meter that measures the amplitude attenuation and phase shift across a pipe diameter at multiple frequencies using cavity-backed antennas. The multiphase flow mixture permittivity and conductivity are derived from a unified microwave transmission model for both water- and oil-continuous flows over a wide water-conductivity range; this is far beyond the capability of microwave-resonance-based sensors currently on the market. The water fraction and water cut are derived from a three-component gas-oil-water mixing model using the mixture permittivity or the mixture conductivity and an independently measured mixture density. Water salinity variations caused, for example, by changing formation water or formation/injection water breakthrough can be detected and corrected using an online water-conductivity tracking technique based on the interpretation of the mixture permittivity and conductivity, simultaneously measured by a single-modality microwave sensor.
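In symbols, the volumetric bookkeeping behind the three-component gas-oil-water model reads (standard definitions; the meter's specific permittivity mixing law is not reproduced here):

$$\alpha_g + \alpha_o + \alpha_w = 1, \qquad \rho_m = \alpha_g\rho_g + \alpha_o\rho_o + \alpha_w\rho_w, \qquad \mathrm{WC} = \frac{\alpha_w}{\alpha_w + \alpha_o},$$

where the α's are volume fractions, ρ_m is the independently measured mixture density, and WC is the water cut derived from the microwave-inferred water fraction.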
A Survey of Middleware for Sensor and Network Virtualization
Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd.
2014-01-01
Wireless Sensor Network (WSN) technology is leading to a new paradigm of the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a particular application. However, the future of WSNs lies in the aggregation and allocation of resources, serving diverse applications. WSN virtualization by the middleware is an emerging concept that enables aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, and enhances application development. WSN virtualization middleware can further be categorized into sensor virtualization and network virtualization, and poses several challenges such as efficient decoupling of networks, devices and software. In this paper, efforts have been put forward to give an overview of previous and current middleware designs for WSN virtualization: the design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper presents the proposed model, challenges and future opportunities for further research in middleware designs for WSN virtualization. PMID:25615737
Time-resolved, dual heterodyne phase collection transient grating spectroscopy
Dennett, Cody A.; Short, Michael P.
2017-05-23
The application of optical heterodyne detection for transient grating spectroscopy (TGS) using a fixed, binary phase mask often relies on taking the difference between signals captured at multiple heterodyne phases. To date, this has been accomplished by manually controlling the heterodyne phase between measurements with an optical flat. In this letter, an optical configuration is presented which allows for collection of TGS measurements at two heterodyne phases concurrently through the use of two independently phase controlled interrogation paths. This arrangement allows for complete, heterodyne amplified TGS measurements to be made in a manner not constrained by a mechanical actuation time. Measurements are instead constrained only by the desired signal-to-noise ratio. A temporal resolution of between 1 and 10 s, demonstrated here on single crystal metallic samples, will allow TGS experiments to be used as an in-situ, time-resolved monitoring technique for many material processing applications.
Life sciences flight experiments microcomputer
NASA Technical Reports Server (NTRS)
Bartram, Peter N.
1987-01-01
A promising microcomputer configuration for the Spacelab Life Sciences Laboratory Equipment inventory consists of multiple processors: one processor is held in reserve, with additional processors dedicated to real-time input and output operations. A simple form of such a configuration, with one processor board for analog-to-digital conversion and another for digital-to-analog conversion, was studied. The system used digital parallel data lines between the boards, operating independently of the system bus. Good performance of individual components was demonstrated: the analog-to-digital converter ran at over 10,000 samples per second. Combining the board-to-board data transfer with the input or output functions on each board slowed performance, with a maximum throughput of 2800 to 2900 analog samples per second. Any of several techniques, such as use of the system bus for data transfer or the addition of direct-memory-access hardware to the processor boards, should give significantly improved performance.
Overcoming Barriers to Integrating Behavioral Health and Primary Care Services
Grazier, Kyle L.; Smiley, Mary L.; Bondalapati, Kirsten S.
2016-01-01
Objective: Despite barriers, organizations with varying characteristics have achieved full integration of primary care services with providers and services that identify, treat, and manage those with mental health and substance use disorders. What are the key factors and common themes in stories of this success? Methods: A systematic literature review and snowball sampling technique was used to identify organizations. Site visits and key informant interviews were conducted with 6 organizations that had over time integrated behavioral health and primary care services. Case studies of each organization were independently coded to identify traits common to multiple organizations. Results: Common characteristics include prioritized vulnerable populations, extensive community collaboration, team approaches that included the patient and family, diversified funding streams, and data-driven approaches and practices. Conclusions: While significant barriers to integrating behavioral health and primary care services exist, case studies of organizations that have successfully overcome these barriers share certain common factors. PMID:27380923
FPGA-based gating and logic for multichannel single photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pooser, Raphael C; Earl, Dennis Duncan; Evans, Philip G
2012-01-01
We present results characterizing multichannel InGaAs single photon detectors utilizing gated passive quenching circuits (GPQC), self-differencing techniques, and field programmable gate array (FPGA)-based logic for both diode gating and coincidence counting. Utilizing FPGAs for the diode-gating frontend and the logic-counting backend has the advantage of low cost compared to custom-built logic circuits and current off-the-shelf detector technology. Further, FPGA logic counters have been shown to work well in quantum key distribution (QKD) test beds. Our setup combines multiple independent detector channels in a reconfigurable manner via an FPGA backend and post-processing in order to perform coincidence measurements between any two or more detector channels simultaneously. Using this method, states from a multi-photon polarization entangled source are detected and characterized via coincidence counting on the FPGA. Photon detection events are also processed by the quantum information toolkit for application testing (QITKAT).
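The coincidence-counting logic is easy to illustrate in software. A minimal sketch, assuming timestamped detection events per channel and a fixed coincidence window; this mirrors in Python the kind of windowed comparison the FPGA backend performs in gateware, and is not the authors' implementation.

```python
import numpy as np

def coincidences(ch_a, ch_b, window_ns=2.0):
    """Count coincidences between two detector channels: events whose
    timestamps (ns) fall within +/- window_ns of each other."""
    ch_a, ch_b = np.sort(ch_a), np.sort(ch_b)
    idx = np.searchsorted(ch_b, ch_a)            # nearest neighbours in ch_b
    count = 0
    for t, i in zip(ch_a, idx):
        for j in (i - 1, i):                     # check both neighbours
            if 0 <= j < len(ch_b) and abs(ch_b[j] - t) <= window_ns:
                count += 1
                break                            # at most one match per event
    return count

rng = np.random.default_rng(0)
shared = rng.uniform(0, 1e6, 500)                # correlated photon pairs
a = np.concatenate([shared, rng.uniform(0, 1e6, 200)])               # + darks
b = np.concatenate([shared + rng.normal(0, 0.3, 500),
                    rng.uniform(0, 1e6, 200)])
print(coincidences(a, b))                        # ~500 true + few accidentals
```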
Microfluidic array platform for simultaneous lipid bilayer membrane formation.
Zagnoni, M; Sandison, M E; Morgan, H
2009-01-01
In recent years, protein array technologies have found widespread applications in proteomics. However, new methods for high-throughput analysis of protein-protein and protein-compound interactions are still required. In this paper, an array of lipid bilayer membranes formed within a microfluidic system with integrated electrodes is presented. The system is comprised of three layers that are clamped together, thus rendering the device cleanable and reusable. The device microfluidics enable the simultaneous formation of an array of lipid bilayers using a previously developed air-exposure technique, thereby avoiding the need to manually form individual bilayers. The Ag/AgCl electrodes allow for ion channel measurements, each of the sites being independently addressable. Typically, a 50% yield in simultaneous lipid bilayer formation over 12 sites was obtained and ion channel recordings have been acquired over multiple sites. This system has great potential for the development of an automatable platform of suspended lipid bilayer arrays.
Circulating tumor cells: silent predictors of metastasis
Zhou, LanLan; Dicker, David T.; Matthew, Elizabeth; El-Deiry, Wafik S.; Alpaugh, R. Katherine
2017-01-01
Circulating tumor cells (CTCs) were added to the arsenal of clinical testing in 2004 for three cancer types: metastatic breast, prostate, and colorectal cancer. CTCs were found to be an independent prognostic indicator of survival for these three diseases. Multiple enrichment/isolation strategies have been developed and numerous assay applications have been performed using both single and pooled captured/enriched CTCs. We have reviewed the isolation techniques and touched on many analyses. The true utility of a CTC is that it acts as a “silent” predictor of metastatic disease. The mere presence of a single CTC is an indication that disease has spread from the primary site. Comments and suggestions have been set forth for CTCs and cell-free DNA to be used as a screening panel for the early detection of disease recurrence and metastatic spread, providing the opportunity for early intervention with curative intent to treat metastatic disease. PMID:28868131
Teaching severely multihandicapped students to put on their own hearing aids.
Tucker, D J; Berry, G W
1980-01-01
Two experiments were conducted with six severely multihandicapped students with hearing impairments to: (a) train the six students to put on their own hearing aids independently, and (b) provide an empirical evaluation of a comprehensive instructional program for putting on a hearing aid by assessing acquisition, maintenance, and generalization of that skill across environments. All six students acquired the skill rapidly, with two students requiring remedial training on one step of the program. Because for two of the original three students the newly learned skill failed initially to generalize to other environments, a second experiment was initiated to assess generalization across environments as well as to replicate the efficiency of the acquisition program. When a variation of the multiple-probe baseline technique was used, the behavior of three additional students generalized to other settings without direct training in those settings. PMID:6444931
Towards a Single Sensor Passive Solution for Automated Fall Detection
Belshaw, Michael; Taati, Babak; Snoek, Jasper; Mihailidis, Alex
2012-01-01
Falling in the home is one of the major challenges to independent living among older adults. The associated costs, coupled with a rapidly growing elderly population, are placing a burden on healthcare systems worldwide that will swiftly become unbearable. To facilitate expeditious emergency care, we have developed an artificially intelligent camera-based system that automatically detects if a person within the field-of-view has fallen. The system addresses concerns raised in earlier work and the requirements of a widely deployable in-home solution. The presented prototype utilizes a consumer-grade camera modified with a wide-angle lens. Machine learning techniques applied to carefully engineered features allow the system to classify falls at high accuracy while maintaining invariance to lighting, environment and the presence of multiple moving objects. This paper describes the system, outlines the algorithms used and presents empirical validation of its effectiveness. PMID:22254671
NASA Astrophysics Data System (ADS)
Duffó, G. S.; Arva, E. A.; Schulz, F. M.; Vazquez, D. R.
2012-01-01
The National Atomic Energy Commission of the Argentine Republic is developing a nuclear waste disposal management programme that contemplates the design and construction of a facility for the final disposal of intermediate-level radioactive wastes. The repository is based on the use of multiple, independent and redundant barriers. The major components are made of reinforced concrete, so the durability of these structures is an important aspect of the facility's integrity. This work presents an investigation performed on a reinforced concrete specifically designed for this purpose, to predict the service life of the intermediate-level radioactive waste disposal facility from data obtained with several techniques. Results obtained with corrosion sensors embedded in a concrete prototype are also included. The information obtained will be used for the final design of the facility in order to guarantee a service life greater than or equal to the durability foreseen for this type of facility.
Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed
NASA Technical Reports Server (NTRS)
Taylor, Jaime; Rakoczy, John; Steincamp, James
2003-01-01
Phase retrieval requires calculation of the real-valued phase of the pupil function from the image intensity distribution and characteristics of an optical system. Genetic algorithms (GAs) were used to solve two one-dimensional phase retrieval problems. A GA successfully estimated the coefficients of a polynomial expansion of the phase when the number of coefficients was correctly specified. A GA also successfully estimated the multiple phases of a segmented optical system analogous to the seven-mirror Systematic Image-Based Optical Alignment (SIBOA) testbed located at NASA's Marshall Space Flight Center. The SIBOA testbed was developed to investigate phase retrieval techniques. Tip/tilt and piston motions of the mirrors accomplish phase corrections. A constant phase over each mirror can be achieved by an independent tip/tilt correction; the phase correction term can then be factored out of the Discrete Fourier Transform (DFT), greatly reducing computations.
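A toy version of the first problem, estimating polynomial phase coefficients with a GA, fits in a short script. This is a sketch under simplifying assumptions (1-D pupil, truncation selection, uniform crossover, Gaussian mutation), not the authors' GA; note that phase retrieval from intensity alone admits a twin solution, so even-order coefficients may come back sign-flipped.

```python
import numpy as np

rng = np.random.default_rng(1)
N, DEG = 64, 3                                    # pupil samples, poly degree
x = np.linspace(-1, 1, N)
basis = np.vstack([x**k for k in range(1, DEG + 1)])   # piston term omitted

def intensity(coeffs):
    """Image-plane intensity of a 1-D pupil with polynomial phase."""
    pupil = np.exp(1j * coeffs @ basis)
    return np.abs(np.fft.fft(pupil, 4 * N))**2

true_coeffs = np.array([0.8, -1.5, 0.6])
target = intensity(true_coeffs)                   # "measured" intensity

def fitness(c):
    return -np.mean((intensity(c) - target)**2)

pop = rng.normal(0, 1, (60, DEG))                 # initial population
for gen in range(200):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)[::-1][:20]]     # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        p, q = parents[rng.integers(20, size=2)]
        mask = rng.random(DEG) < 0.5              # uniform crossover
        children.append(np.where(mask, p, q) + rng.normal(0, 0.05, DEG))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("true:", true_coeffs, "estimated (possibly twin):", best)
```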
Max-margin multiattribute learning with low-rank constraint.
Zhang, Qiang; Chen, Lin; Li, Baoxin
2014-07-01
Attribute learning has attracted a lot of interest in recent years for its advantage of being able to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max-margin multiattribute learning with a low-rank constraint, which learns a set of attributes simultaneously, using only relative ranking of the attributes for the data. By learning all the attributes simultaneously through the low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative ranking, the method avoids the restrictive binary labels of attributes that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data, including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.
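The combination of ranking supervision and a low-rank coupling can be sketched with a generic proximal-gradient recipe: a hinge loss on ranked pairs plus singular value thresholding (the proximal operator of the nuclear norm). This is an illustrative stand-in for the idea, not the paper's max-margin formulation; all data and hyperparameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
F, A = 20, 5                                     # feature dim, #attributes
X = rng.normal(size=(200, F))
W_true = rng.normal(size=(A, 2)) @ rng.normal(size=(2, F))   # rank-2 truth
pairs = [(a, i, j) for a in range(A)
         for i, j in rng.integers(0, 200, size=(300, 2))
         if X[i] @ W_true[a] > X[j] @ W_true[a] + 0.5]   # relative rankings

def svt(W, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

W, lr, lam = np.zeros((A, F)), 0.01, 0.05
for it in range(300):
    G = np.zeros_like(W)
    for a, i, j in pairs:                        # hinge ranking subgradient
        if 1 - W[a] @ (X[i] - X[j]) > 0:
            G[a] -= X[i] - X[j]
    W = svt(W - lr * G / len(pairs), lr * lam)   # gradient step + prox step
print("effective rank of W:", np.linalg.matrix_rank(W, tol=1e-3))
```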
Music therapy research and applications in pediatric oncology treatment.
Standley, J M; Hanser, S B
1995-01-01
Music therapy is a profession which meets multiple physical, social, and psychological needs. Music therapists can facilitate health objectives by reducing the intensity or duration of pain, alleviating anxiety, and decreasing the amount of analgesic medication needed. Rehabilitative objectives can include activities which incorporate exercise, range of motion therapy, or gait training. Reduction of fear, anxiety, stress, or grief are common psychological objectives. Music therapy is particularly effective in promoting social objectives such as increased interaction, verbalization, independence, and cooperation; enhanced relationships with health care personnel and family members; and increased stimulation during long-term hospitalization or isolation. Counseling techniques are often paired with music to achieve emotional objectives such as expression, adjustment, stability, or locus of control. The purpose of this article is to synthesize the extant music/medical research literature and clarify how music therapy can provide a quintessential combination of physical, social, and psychological benefits to enhance the health care of pediatric oncology patients.
Prieto, Luis P; Sharma, Kshitij; Kidzinski, Łukasz; Rodríguez-Triana, María Jesús; Dillenbourg, Pierre
2018-04-01
The pedagogical modelling of everyday classroom practice is an interesting kind of evidence, both for educational research and teachers' own professional development. This paper explores the usage of wearable sensors and machine learning techniques to automatically extract orchestration graphs (teaching activities and their social plane over time), on a dataset of 12 classroom sessions enacted by two different teachers in different classroom settings. The dataset included mobile eye-tracking as well as audiovisual and accelerometry data from sensors worn by the teacher. We evaluated both time-independent and time-aware models, achieving median F1 scores of about 0.7-0.8 on leave-one-session-out k-fold cross-validation. Although these results show the feasibility of this approach, they also highlight the need for larger datasets, recorded in a wider variety of classroom settings, to provide automated tagging of classroom practice that can be used in everyday practice across multiple teachers.
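Leave-one-session-out evaluation is the methodological detail worth pinning down: every fold holds out one complete classroom session, so the model is always scored on an unseen enactment. A minimal sketch with scikit-learn, using random stand-in features and labels (the real study used eye-tracking, audiovisual and accelerometry features, and the model family is not specified here):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(1200, 16))            # stand-in sensor features per window
y = rng.integers(0, 4, 1200)               # hypothetical activity labels
sessions = np.repeat(np.arange(12), 100)   # 12 classroom sessions as groups

# Each fold trains on 11 sessions and tests on the held-out one.
scores = cross_val_score(RandomForestClassifier(n_estimators=200),
                         X, y, groups=sessions,
                         cv=LeaveOneGroupOut(), scoring="f1_macro")
print("median F1 across folds:", np.median(scores))
```

On this random toy data the score sits at chance level; the point is the grouping scheme, not the number.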
In vitro plant tissue culture: means for production of biological active compounds.
Espinosa-Leal, Claudia A; Puente-Garza, César A; García-Lara, Silverio
2018-05-07
Plant tissue culture is an important tool for the continuous production of active compounds, including secondary metabolites and engineered molecules; novel methods (gene editing, abiotic stress) can improve the technique. Humans have a long history of reliance on plants for a supply of food, shelter and, most importantly, medicine. Current-day pharmaceuticals are typically based on plant-derived metabolites, with new products being discovered constantly. Nevertheless, the consistent and uniform supply of plant pharmaceuticals has often been compromised. One alternative for the production of important plant active compounds is in vitro plant tissue culture, as it assures independence from geographical conditions by eliminating the need to rely on wild plants. Plant transformation also allows the further use of plants for the production of engineered compounds, such as vaccines and multiple pharmaceuticals. This review summarizes the important bioactive compounds currently produced by plant tissue culture and the fundamental methods and plants employed for their production.
Residual and suppressed-carrier arraying techniques for deep-space communications
NASA Technical Reports Server (NTRS)
Shihabi, M.; Shah, B.; Hinedi, S.; Million, S.
1995-01-01
Three techniques that use carrier information from multiple antennas to enhance carrier acquisition and tracking are presented. These techniques, in combination with baseband combining, are analyzed and simulated for residual and suppressed-carrier modulation. It is shown that carrier arraying using a single carrier loop can acquire and track the carrier even when no single antenna in the array can do so by itself. The carrier aiding and carrier arraying using multiple carrier loop techniques, on the other hand, are shown to lock on the carrier only when one of the array elements has sufficient margin to acquire the carrier on its own.
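The gain from arraying is easy to demonstrate numerically: phase-aligned summation of N antenna outputs raises carrier power by N^2 while noise power grows only by N, for roughly a 10*log10(N) dB improvement. A toy sketch under idealized assumptions (identical, already phase-aligned elements, white noise):

```python
import numpy as np

rng = np.random.default_rng(4)
n, fc = 4096, 0.01                               # samples, carrier frequency
t = np.arange(n)
carrier = np.exp(2j * np.pi * fc * t)

def snr_db(sig):
    """Crude carrier-to-noise-floor estimate: FFT peak over median bin."""
    spec = np.abs(np.fft.fft(sig))**2
    return 10 * np.log10(spec.max() / np.median(spec))

# Four antennas, each too noisy to track alone; combining recovers margin.
elements = [carrier + rng.normal(0, 10, n) + 1j * rng.normal(0, 10, n)
            for _ in range(4)]
print("single antenna:", snr_db(elements[0]))
print("arrayed (sum) :", snr_db(sum(elements)))  # ~6 dB better for N = 4
```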
Electrical characterization of a Mapham inverter using pulse testing techniques
NASA Technical Reports Server (NTRS)
Baumann, E. D.; Myers, I. T.; Hammoud, A. N.
1990-01-01
The use of a multiple pulse testing technique to determine the electrical characteristics of large megawatt-level power systems for aerospace missions is proposed. An innovative test method based on the multiple pulse technique is demonstrated on a 2-kW Mapham inverter. This technique shows that characterization of large power systems under electrical equilibrium at rated power can be accomplished without large, costly power supplies. The heat generation that occurs in systems tested in a continuous mode is eliminated. The results indicate good agreement between this testing technique and steady-state testing.
Analysis and prediction of Multiple-Site Damage (MSD) fatigue crack growth
NASA Technical Reports Server (NTRS)
Dawicke, D. S.; Newman, J. C., Jr.
1992-01-01
A technique was developed to calculate the stress intensity factor for multiple interacting cracks. The analysis was verified through comparison with accepted methods of calculating stress intensity factors. The technique was incorporated into a fatigue crack growth prediction model and used to predict the fatigue crack growth life for multiple-site damage (MSD). The analysis was verified through comparison with experiments conducted on uniaxially loaded flat panels with multiple cracks. Configurations with nearly equal and unequal crack distributions were examined. The fatigue crack growth predictions agreed within 20 percent of the experimental lives for all crack configurations considered.
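The backbone of such a prediction model is cycle-by-cycle integration of a crack growth law. A minimal sketch using the Paris law, da/dN = C (ΔK)^m with ΔK = Δσ √(πa) β, where the geometry/interaction factor β is exactly what the paper's multiple-crack technique supplies; here β is a placeholder and the material constants are illustrative, not the paper's values.

```python
import numpy as np

C, m = 1e-11, 3.0          # illustrative Paris constants (dK in MPa*sqrt(m))
dS = 100.0                 # stress range, MPa
a, a_crit = 2e-3, 25e-3    # initial and critical half-lengths, m

cycles, dN = 0, 1000       # integrate in blocks of 1000 cycles
while a < a_crit:
    beta = 1.0                                   # replace with MSD interaction factor
    dK = dS * np.sqrt(np.pi * a) * beta          # stress intensity range
    a += C * dK**m * dN                          # growth over the block
    cycles += dN
print(f"predicted life: ~{cycles} cycles")
```

For interacting MSD cracks, β rises as crack tips approach each other, which is why multiple small cracks can link up well before any single crack would become critical.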
Gooley, Robert P; Cameron, James D; Soon, Jennifer; Loi, Duncan; Chitale, Gauri; Syeda, Rifath; Meredith, Ian T
2015-09-01
Multidetector computed tomographic (MDCT) assessment of the aortoventricular interface has gained increased importance with the advent of minimally invasive treatment modalities for aortic and mitral valve disease. This has included a standardised technique of identifying a plane through the nadir of each coronary cusp, the basal plane, and taking further measurements in relation to this plane. Despite this, there are no published data defining normal ranges for these aortoventricular metrics in a healthy cohort. This study seeks to quantify normative ranges for MDCT-derived aortoventricular dimensions and evaluate baseline demographic and anthropometric associates of these measurements in a normal cohort. 250 consecutive patients undergoing MDCT coronary angiography were included. Aortoventricular dimensions at multiple levels of the aortoventricular interface were assessed and normative ranges quantified. Multivariate linear regression was performed to identify baseline predictors of each metric. The mean age was 59±12 years. The basal plane was eccentric (EI=0.22±0.06) while the left ventricular outflow tract was more eccentric (EI=0.32±0.06), with no correlation to gender, age or hypertension. Male gender, height and body mass index (BMI) were consistent independent predictors of larger aortoventricular dimensions at all anatomical levels, while age was predictive of supra-annular measurements. Male gender, height and BMI are independent predictors of all aortoventricular dimensions, while age predicts only supra-annular dimensions. Use of defined metrics such as the basal plane, and formation of normative ranges for these metrics, allows reference for clinical reporting and for future research studies using a standardised measurement technique.
Investigation of quartz grain surface textures by atomic force microscopy for forensic analysis.
Konopinski, D I; Hudziak, S; Morgan, R M; Bull, P A; Kenyon, A J
2012-11-30
This paper presents a study of quartz sand grain surface textures using atomic force microscopy (AFM) to image the surface. Until now, scanning electron microscopy (SEM) has provided the primary technique used in the forensic surface texture analysis of quartz sand grains as a means of establishing the provenance of the grains for forensic reconstructions. The ability to independently corroborate the grain type classifications is desirable and provides additional weight to the findings of SEM analysis of the textures of quartz grains identified in forensic soil/sediment samples. AFM offers a quantitative means of analysis that complements SEM examination, and is a non-destructive technique that requires no sample preparation prior to scanning. It therefore has great potential to be used for forensic analysis where sample preservation is highly valuable. By taking quantitative topography scans, it is possible to produce 3D representations of microscopic surface textures and diagnostic features for examination. Furthermore, various empirical measures can be obtained from analysing the topography scans, including arithmetic average roughness, root-mean-square surface roughness, skewness, kurtosis, and multiple Gaussian fits to height distributions. These empirical measures, combined with qualitative examination of the surfaces, can help to discriminate between grain types and provide independent analysis that can corroborate the morphological grain typing based on the surface textures assigned using SEM. Furthermore, the findings from this study also demonstrate that quartz sand grain surfaces exhibit a statistically self-similar fractal nature that remains unchanged across scales. This indicates the potential for a further quantitative measure that could be utilised in the discrimination of quartz grains based on their provenance for forensic investigations.
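The empirical measures named in the abstract are standard statistics of the height distribution and can be computed directly from a topography scan. A minimal sketch on a synthetic height map (names, units and the flat synthetic surface are illustrative assumptions, not the paper's data):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def roughness_metrics(z):
    """Surface-texture statistics from an AFM height map z (e.g. in nm)."""
    h = z - z.mean()                       # heights about the mean plane
    return {
        "Ra (arithmetic average roughness)": np.mean(np.abs(h)),
        "Rq (root-mean-square roughness)":   np.sqrt(np.mean(h**2)),
        "skewness":                          skew(h, axis=None),
        "kurtosis":                          kurtosis(h, axis=None),
    }

z = np.random.default_rng(5).normal(0, 12.0, size=(512, 512))  # synthetic scan
for name, value in roughness_metrics(z).items():
    print(f"{name}: {value:.3f}")
```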
Qureshi, Abid; Tandon, Himani; Kumar, Manoj
2015-11-01
Peptide-based antiviral therapeutics have gradually paved their way into mainstream drug discovery research. Experimental determination of peptides' antiviral activity, as expressed by their IC50 values, involves a lot of effort. Therefore, we have developed "AVP-IC50 Pred," a regression-based algorithm to predict antiviral activity in terms of IC50 values (μM). A total of 759 non-redundant peptides from AVPdb and HIPdb were divided into a training/test set having 683 peptides (T(683)) and a validation set with 76 independent peptides (V(76)) for evaluation. We utilized important peptide sequence features like amino-acid composition, binary profiles of N8-C8 residues, physicochemical properties and their hybrids. Four different machine learning techniques (MLTs), namely support vector machine, random forest, instance-based classifier, and K-Star, were employed. During 10-fold cross-validation, we achieved maximum Pearson correlation coefficients (PCCs) of 0.66, 0.64, 0.56 and 0.55, respectively, for the above MLTs using the best combination of feature sets. All the predictive models also performed well on the independent validation dataset, achieving maximum PCCs of 0.74, 0.68, 0.59 and 0.57, respectively, on the best combination of feature sets. The AVP-IC50 Pred web server is anticipated to assist researchers working on antiviral therapeutics by enabling them to computationally screen many compounds and focus experimental validation on the most promising set of peptides, thus reducing cost and time efforts. The server is available at http://crdd.osdd.net/servers/ic50avp.
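The pipeline shape (sequence features, a regression MLT, PCC under cross-validation) can be sketched compactly. The code below uses only amino-acid composition and an off-the-shelf SVR on toy random peptides; the real feature sets, tuning and data are the paper's, so treat this purely as an illustration of the workflow.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

AAS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(peptide):
    """Fraction of each of the 20 amino acids (one of the reported feature
    sets; binary N8/C8 profiles and physicochemical features are omitted)."""
    return np.array([peptide.count(a) / len(peptide) for a in AAS])

# Toy stand-in data; the real model was trained on 683 AVPdb/HIPdb peptides.
rng = np.random.default_rng(6)
peptides = ["".join(rng.choice(list(AAS), 15)) for _ in range(100)]
ic50 = rng.lognormal(1.0, 1.0, 100)              # hypothetical IC50 values (uM)

X = np.array([aa_composition(p) for p in peptides])
pred = cross_val_predict(SVR(), X, np.log10(ic50), cv=10)
print("10-fold CV PCC:", pearsonr(pred, np.log10(ic50))[0])
```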
Institutional Identity and Organizational Structure in Multi-Campus Universities.
ERIC Educational Resources Information Center
Dengerink, Harold A.
2001-01-01
Explores the structure of universities with multiple campuses but no independent central administrative system. Discusses the hybrid missions of branch campuses, which are asked to serve both the overall university and local constituent communities. Explains that these multiple missions may conflict and thus require intentional organizational…
Simple to complex modeling of breathing volume using a motion sensor.
John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-06-01
To compare simple and complex modeling techniques for estimating categories of low, medium and high ventilation (VE) from ActiGraph™ activity counts. Vertical-axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VE were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple modeling techniques (multiple regression and activity-count cut-point analyses) and one complex technique (random forest) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VE were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and random forest techniques. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease.
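The cut-point technique reported in the abstract reduces to a three-way threshold test, which makes it trivial to deploy in the field. A minimal sketch using the abstract's own cut-points (the boundary handling at exactly 1381 and 3660 cpm is an assumption):

```python
def ve_category(counts_per_min):
    """Classify ventilation from vertical-axis ActiGraph counts using the
    reported cut-points: <1381 low, 1381-3660 medium, >3660 cpm high
    (mapping to <19.3, 19.3-35.4 and >35.4 l/min)."""
    if counts_per_min < 1381:
        return "low"
    elif counts_per_min <= 3660:
        return "medium"
    return "high"

for cpm in (500, 2200, 5000):
    print(cpm, "cpm ->", ve_category(cpm))
```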
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free-surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: (1) direct focal depth estimation from the depth-phase arrival times picked via augmented cepstral processing; (2) hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001); and (3) surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved robustness of the solution, even if results from the individual methods yielded large standard errors.
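The core of the cepstral idea is that a depth phase is an echo of the P pulse, and an echo at delay tau produces a peak at quefrency tau in the cepstrum. A minimal sketch on a synthetic trace (pulse shape, noise level and the near-vertical-ray depth conversion depth ≈ v·tau/2 are simplifying assumptions, not the study's processing):

```python
import numpy as np

fs = 40.0                                        # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
p = np.exp(-5 * t) * np.sin(2 * np.pi * 2 * t)   # synthetic P pulse
delay_s = 1.8                                    # pP - P delay to recover
lag = int(delay_s * fs)
trace = p + np.concatenate([np.zeros(lag), -0.6 * p[:-lag]])  # add depth phase
trace += np.random.default_rng(7).normal(0, 0.01, t.size)

# Real cepstrum: inverse FFT of the log amplitude spectrum.
spectrum = np.abs(np.fft.rfft(trace))
cepstrum = np.abs(np.fft.irfft(np.log(spectrum + 1e-12)))
q = np.arange(cepstrum.size) / fs                # quefrency axis, seconds
lo, hi = int(0.5 * fs), int(5 * fs)              # search 0.5-5 s delays
tau = q[lo + np.argmax(cepstrum[lo:hi])]
print(f"recovered delay: {tau:.2f} s")
print(f"implied depth (v = 6 km/s, vertical ray): {6.0 * tau / 2:.1f} km")
```

In real Pn coda, of course, many spurious cepstral peaks appear, which is exactly why the study corroborates picks with array-based velocity and azimuth estimates.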
A novel fiber-free technique for brain activity imaging in multiple freely behaving mice
NASA Astrophysics Data System (ADS)
Inagaki, Shigenori; Agetsuma, Masakazu; Nagai, Takeharu
2018-02-01
Brain functions and related psychiatric disorders have been investigated by recording electrophysiological field potentials. Conventional recording requires fiber-based apparatus connected to the brain, which hampers simultaneous measurement in multiple animals (e.g. through a tangle of fibers). Here, we propose a fiber-free recording technique in conjunction with a ratiometric bioluminescent voltage indicator. Our method allows investigation of electrophysiological field potential dynamics in multiple freely behaving animals simultaneously over a long time period. Therefore, this fiber-free technique opens up the way to investigate new mechanisms of brain function that govern social behaviors and animal-to-animal interaction.
An efficient Bayesian meta-analysis approach for studying cross-phenotype genetic associations
Majumdar, Arunabha; Haldar, Tanushree; Bhattacharya, Sourabh; Witte, John S.
2018-01-01
Simultaneous analysis of genetic associations with multiple phenotypes may reveal shared genetic susceptibility across traits (pleiotropy). For a locus exhibiting overall pleiotropy, it is important to identify which specific traits underlie this association. We propose a Bayesian meta-analysis approach (termed CPBayes) that uses summary-level data across multiple phenotypes to simultaneously measure the evidence of aggregate-level pleiotropic association and estimate an optimal subset of traits associated with the risk locus. This method uses a unified Bayesian statistical framework based on a spike and slab prior. CPBayes performs a fully Bayesian analysis by employing the Markov Chain Monte Carlo (MCMC) technique Gibbs sampling. It takes into account heterogeneity in the size and direction of the genetic effects across traits. It can be applied to both cohort data and separate studies of multiple traits having overlapping or non-overlapping subjects. Simulations show that CPBayes can produce higher accuracy in the selection of associated traits underlying a pleiotropic signal than the subset-based meta-analysis ASSET. We used CPBayes to undertake a genome-wide pleiotropic association study of 22 traits in the large Kaiser GERA cohort and detected six independent pleiotropic loci associated with at least two phenotypes. This includes a locus at chromosomal region 1q24.2 which exhibits an association simultaneously with the risk of five different diseases: Dermatophytosis, Hemorrhoids, Iron Deficiency, Osteoporosis and Peripheral Vascular Disease. We provide an R-package ‘CPBayes’ implementing the proposed method. PMID:29432419
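The spike-and-slab machinery behind trait selection can be illustrated in closed form when traits are treated as independent: each trait's summary effect is scored under a point-mass-like "spike" (no association beyond sampling noise) versus a wider "slab" (true association). CPBayes itself uses Gibbs sampling and handles correlated, overlapping studies; the sketch below is only a simplified illustration, with all priors and numbers invented.

```python
import numpy as np
from scipy.stats import norm

def posterior_inclusion(beta, se, pi=0.25, tau=0.1):
    """Per-trait posterior inclusion probabilities under a simplified
    spike-and-slab: beta_j ~ N(0, se^2) under the spike and
    N(0, se^2 + tau^2) under the slab, with prior slab probability pi."""
    slab = pi * norm.pdf(beta, 0, np.sqrt(se**2 + tau**2))
    spike = (1 - pi) * norm.pdf(beta, 0, se)
    return slab / (slab + spike)

beta = np.array([0.02, 0.15, -0.12, 0.01, 0.20])   # summary effect estimates
se = np.full(5, 0.04)                              # their standard errors
print(posterior_inclusion(beta, se).round(3))      # traits 2, 3, 5 stand out
```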
Multifunctional and Context-Dependent Control of Vocal Acoustics by Individual Muscles
Srivastava, Kyle H.; Elemans, Coen P.H.
2015-01-01
The relationship between muscle activity and behavioral output determines how the brain controls and modifies complex skills. In vocal control, ensembles of muscles are used to precisely tune single acoustic parameters such as fundamental frequency and sound amplitude. If individual vocal muscles were dedicated to the control of single parameters, then the brain could control each parameter independently by modulating the appropriate muscle or muscles. Alternatively, if each muscle influenced multiple parameters, a more complex control strategy would be required to selectively modulate a single parameter. Additionally, it is unknown whether the function of single muscles is fixed or varies across different vocal gestures. A fixed relationship would allow the brain to use the same changes in muscle activation to, for example, increase the fundamental frequency of different vocal gestures, whereas a context-dependent scheme would require the brain to calculate different motor modifications in each case. We tested the hypothesis that single muscles control multiple acoustic parameters and that the function of single muscles varies across gestures using three complementary approaches. First, we recorded electromyographic data from vocal muscles in singing Bengalese finches. Second, we electrically perturbed the activity of single muscles during song. Third, we developed an ex vivo technique to analyze the biomechanical and acoustic consequences of single-muscle perturbations. We found that single muscles drive changes in multiple parameters and that the function of single muscles differs across vocal gestures, suggesting that the brain uses a complex, gesture-dependent control scheme to regulate vocal output. PMID:26490859
Separation of Doppler radar-based respiratory signatures.
Lee, Yee Siong; Pathirana, Pubudu N; Evans, Robin J; Steinfort, Christopher L
2016-08-01
Respiration detection using microwave Doppler radar has attracted significant interest primarily due to its unobtrusive form of measurement. With less preparation in comparison with attaching physical sensors on the body or wearing special clothing, Doppler radar for respiration detection and monitoring is particularly useful for long-term monitoring applications such as sleep studies (i.e. sleep apnoea, SIDS). However, motion artefacts and interference from multiple sources limit the widespread use and the scope of potential applications of this technique. Utilising the recent advances in independent component analysis (ICA) and multiple antenna configuration schemes, this work investigates the feasibility of decomposing respiratory signatures into each subject from the Doppler-based measurements. Experimental results demonstrated that FastICA is capable of separating two distinct respiratory signatures from two subjects adjacent to each other even in the presence of apnoea. In each test scenario, the separated respiratory patterns correlate closely to the reference respiration strap readings. The effectiveness of FastICA in dealing with the mixed Doppler radar respiration signals confirms its applicability in healthcare applications, especially in long-term home-based monitoring as it usually involves at least two people in the same environment (i.e. two people sleeping next to each other). Further, the use of FastICA to separate involuntary movements such as the arm swing from the respiratory signatures of a single subject was explored in a multiple antenna environment. The separated respiratory signal indeed demonstrated a high correlation with the measurements made by a respiratory strap used currently in clinical settings.
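FastICA's role here, unmixing superimposed periodic sources from multi-antenna observations, is straightforward to reproduce on synthetic data. A minimal sketch with scikit-learn; the two "respiration" waveforms, the mixing matrix and the noise level are invented stand-ins for the radar measurements:

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 20.0
t = np.arange(0, 60, 1 / fs)
breath_a = np.sin(2 * np.pi * 0.25 * t)             # subject A: 15 breaths/min
breath_b = np.sign(np.sin(2 * np.pi * 0.18 * t))    # subject B: ~11 breaths/min
S = np.c_[breath_a, breath_b]                       # true sources

A = np.array([[1.0, 0.6], [0.4, 1.0]])              # assumed antenna mixing
X = S @ A.T + 0.05 * np.random.default_rng(8).normal(size=S.shape)

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)                    # separated respiration traces
for k in range(2):
    r = max(abs(np.corrcoef(recovered[:, k], S[:, j])[0, 1]) for j in range(2))
    print(f"component {k}: best |r| against a true source = {r:.2f}")
```

As in the paper's experiments, each recovered component should correlate strongly with exactly one underlying breathing pattern (up to ICA's inherent sign and ordering ambiguity).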
Yeo, B T Thomas; Krienen, Fenna M; Chee, Michael W L; Buckner, Randy L
2014-03-01
The organization of the human cerebral cortex has recently been explored using techniques for parcellating the cortex into distinct functionally coupled networks. The divergent and convergent nature of cortico-cortical anatomic connections suggests the need to consider the possibility of regions belonging to multiple networks and hierarchies among networks. Here we applied the Latent Dirichlet Allocation (LDA) model and spatial independent component analysis (ICA) to solve for functionally coupled cerebral networks without assuming that cortical regions belong to a single network. Data analyzed included 1000 subjects from the Brain Genomics Superstruct Project (GSP) and 12 high quality individual subjects from the Human Connectome Project (HCP). The organization of the cerebral cortex was similar regardless of whether a winner-take-all approach or the more relaxed constraints of LDA (or ICA) were imposed. This suggests that large-scale networks may function as partially isolated modules. Several notable interactions among networks were uncovered by the LDA analysis. Many association regions belong to at least two networks, while somatomotor and early visual cortices are especially isolated. As examples of interaction, the precuneus, lateral temporal cortex, medial prefrontal cortex and posterior parietal cortex participate in multiple paralimbic networks that together comprise subsystems of the default network. In addition, regions at or near the frontal eye field and human lateral intraparietal area homologue participate in multiple hierarchically organized networks. These observations were replicated in both datasets and could be detected (and replicated) in individual subjects from the HCP.
NASA Technical Reports Server (NTRS)
Harrison, P. Ann
1992-01-01
The NASA VEGetation Workbench (VEG) is a knowledge-based system that infers vegetation characteristics from reflectance data. The VEG subgoal PROPORTION.GROUND.COVER has been completed, and a number of additional techniques that infer the proportion ground cover of a sample have been implemented. Some techniques operate on sample data at a single wavelength. The techniques previously incorporated in VEG for other subgoals operated on data at a single wavelength, so implementing the additional single-wavelength techniques required no changes to the structure of VEG. Two techniques which use data at multiple wavelengths to infer proportion ground cover were also implemented. This work involved modifying the structure of VEG so that multiple-wavelength techniques could be incorporated. All the new techniques were tested using both the VEG 'Research Mode' and the 'Automatic Mode.'
Maturano, Y Paola; Mestre, M Victoria; Combina, Mariana; Toro, María Eugenia; Vazquez, Fabio; Esteve-Zarzoso, Braulio
2016-11-21
Transformation of grape must into wine is a process that may vary according to consumers' requirements. Application of cold soak prior to alcoholic fermentation is a common practice in cellars in order to enhance flavor complexity and extraction of phenolic compounds. However, the effect of this step on wine yeast microbiota is not well known. The current study simultaneously analyzed the effect of different cold soak temperatures on the microbiological population throughout the process and the use of culture-dependent and culture-independent techniques to study this yeast ecology. The temperatures assayed were those normally applied in wineries: 2.5, 8 and 12°C. PCR-DGGE allowed detection of the most representative species such as Hanseniaspora uvarum, Starmerella bacillaris and Saccharomyces cerevisiae. As could be expected, the highest diversity indices were obtained at the beginning of each process, and survival of H. uvarum or S. bacillaris depended on the temperature. Our results are in agreement with those obtained with culture-independent methods, but qPCR showed higher precision, and a different behavior was observed for each yeast species at each temperature assayed. Comparison of both culture-independent techniques can provide a general overview of the whole process, although DGGE does not reveal the diversity expected, owing to the reported problems with the sensitivity of this technique.
Nature and distribution of feline sarcoma virus nucleotide sequences.
Frankel, A E; Gilbert, J H; Porzig, K J; Scolnick, E M; Aaronson, S A
1979-01-01
The genomes of three independent isolates of feline sarcoma virus (FeSV) were compared by molecular hybridization techniques. Using complementary DNAs prepared from two strains, SM- and ST-FeSV, common complementary DNAs were selected by sequential hybridization to FeSV and feline leukemia virus RNAs. These DNAs were shown to be highly related among the three independent sarcoma virus isolates. FeSV-specific complementary DNAs were prepared by selection for hybridization by the homologous FeSV RNA and against hybridization by feline leukemia virus RNA. Sarcoma virus-specific sequences of SM-FeSV were shown to differ from those of either the ST- or GA-FeSV strains, whereas ST-FeSV-specific DNA shared extensive sequence homology with GA-FeSV. By molecular hybridization, each set of FeSV-specific sequences was demonstrated to be present in normal cat cellular DNA at approximately one copy per haploid genome and was conserved throughout Felidae. In contrast, FeSV-common sequences were present in multiple DNA copies and were found only in Mediterranean cats. The present results are consistent with the concept that each FeSV strain has arisen by a mechanism involving recombination between feline leukemia virus and cat cellular DNA sequences, the latter represented within the cat genome in a manner analogous to that of a cellular gene. PMID:225544
NASA Astrophysics Data System (ADS)
Vinande, Eric T.
This research proposes several means to overcome challenges in the urban environment to ground vehicle global positioning system (GPS) receiver navigation performance through the integration of external sensor information. The effects of narrowband radio frequency interference and signal attenuation, both common in the urban environment, are examined with respect to receiver signal tracking processes. Low-cost microelectromechanical systems (MEMS) inertial sensors, suitable for the consumer market, are the focus of receiver augmentation as they provide an independent measure of motion and are independent of vehicle systems. A method for estimating the mounting angles of an inertial sensor cluster utilizing typical urban driving maneuvers is developed and is able to provide angular measurements within two degrees of truth. The integration of GPS and MEMS inertial sensors is developed utilizing a full state navigation filter. Appropriate statistical methods are developed to evaluate the urban environment navigation improvement due to the addition of MEMS inertial sensors. A receiver evaluation metric that combines accuracy, availability, and maximum error measurements is presented and evaluated over several drive tests. Following a description of proper drive test techniques, record and playback systems are evaluated as the optimal way of testing multiple receivers and/or integrated navigation systems in the urban environment as they simplify vehicle testing requirements.
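The GPS/MEMS fusion described here rests on a state-estimation filter. A heavily simplified one-axis Kalman sketch, with the accelerometer as control input and an intermittent GPS position fix as measurement; a full-state navigation filter of the kind the dissertation develops also carries attitude, sensor biases and multiple axes, and all noise values below are invented.

```python
import numpy as np

dt, q, r = 0.1, 0.5, 9.0                             # step, process/meas. noise
F = np.array([[1, dt], [0, 1]])                      # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])                      # accelerometer input model
H = np.array([[1.0, 0.0]])                           # GPS observes position
Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
x, P = np.zeros(2), np.eye(2) * 10.0

rng = np.random.default_rng(9)
true_acc = 0.3                                       # constant true acceleration
for k in range(100):
    accel = true_acc + rng.normal(0, 0.1)            # noisy MEMS accelerometer
    x = F @ x + B * accel                            # predict with IMU
    P = F @ P @ F.T + Q
    if k % 10 == 0:                                  # 1 Hz GPS fix (may drop out)
        z = 0.5 * true_acc * ((k + 1) * dt)**2 + rng.normal(0, 3.0)
        y = z - H @ x                                # innovation
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                            # Kalman gain
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
print("estimated position/velocity:", x)
```

The structure also shows why the combination helps in urban canyons: between (or during) GPS outages, the inertial prediction carries the solution, and each fix reins in the accumulated drift.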
Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L
2009-05-01
To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
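The reliability statistic quoted (ICC=0.87) is conventionally a two-way random-effects, absolute-agreement intraclass correlation. A minimal sketch computing ICC(2,1) from a subjects-by-raters matrix (formula from Shrout & Fleiss, 1979; the exact ICC form used in the study is an assumption, and the example data are invented):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) for an n-subjects x k-raters matrix of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * np.sum((ratings.mean(axis=1) - grand)**2) / (n - 1)  # subjects
    ms_c = n * np.sum((ratings.mean(axis=0) - grand)**2) / (k - 1)  # raters
    sse = np.sum((ratings - ratings.mean(1, keepdims=True)
                  - ratings.mean(0, keepdims=True) + grand)**2)
    ms_e = sse / ((n - 1) * (k - 1))                                # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical MUNE values from two examiners across 10 subjects.
rng = np.random.default_rng(10)
truth = rng.normal(150, 40, 10)
ratings = np.c_[truth + rng.normal(0, 12, 10), truth + rng.normal(0, 12, 10)]
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```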
Benício, Maria Helena D.; Ferreira, Marcelo U.; Cardoso, Maria Regina A.; Konno, Sílvia C.; Monteiro, Carlos A.
2004-01-01
OBJECTIVE: To investigate the prevalence and risk factors for wheezing disorders in early childhood in São Paulo, Brazil, the largest metropolitan area of South America. METHODS: A population-based cross-sectional survey of 1132 children aged 6-59 months was carried out between 1995 and 1996 to obtain information on recent wheezing and on independent variables such as demographic, socioeconomic, environmental, maternal and nutritional variables and immunization status. Intestinal parasitic infections were diagnosed using standard techniques. Multiple unconditional logistic regression was used to describe associations between outcome and independent variables. FINDINGS: The prevalence of recent wheezing (one or more reported episodes in the past 12 months) was 12.5%; 93% of children with wheezing were also reported to have a medical diagnosis of asthma. Recent wheezing was associated with low per capita income, poor quality of housing, day-care attendance, low birth weight and infection with intestinal helminths. CONCLUSION: Wheezing in early childhood in São Paulo, although more common than in most developing countries, remains less prevalent than in urban areas of industrialized countries. Low income and conditions associated with poverty (poor housing, low birth weight and parasitic infections) are some of the main risk factors for wheezing disorders among young children in this city. PMID:15508196
Validation of measurements of ventilation-to-perfusion ratio inequality in the lung from expired gas
NASA Technical Reports Server (NTRS)
Prisk, G. Kim; Guy, Harold J B.; West, John B.; Reed, James W.
2003-01-01
The analysis of the gas in a single expirate has long been used to estimate the degree of ventilation-perfusion (Va/Q) inequality in the lung. To further validate this estimate, we examined three measures of Va/Q inhomogeneity calculated from a single full exhalation in nine anesthetized mongrel dogs under control conditions and after exposure to aerosolized methacholine. These measurements were then compared with arterial blood gases and with measurements of Va/Q inhomogeneity obtained using the multiple inert gas elimination technique. The slope of the instantaneous respiratory exchange ratio (R slope) vs. expired volume was poorly correlated with independent measures, probably because of the curvilinear nature of the relationship due to continuing gas exchange. When R was converted to the intrabreath Va/Q (iV/Q), the best index was the slope of iV/Q vs. volume over phase III (iV/Q slope). This was strongly correlated with independent measures, especially those relating to inhomogeneity of perfusion. The correlations for iV/Q slope and R slope considerably improved when only the first half of phase III was considered. We conclude that a useful noninvasive measurement of Va/Q inhomogeneity can be derived from the intrabreath respiratory exchange ratio.
A constant current charge technique for low Earth orbit life testing
NASA Technical Reports Server (NTRS)
Glueck, Peter
1991-01-01
A constant current charge technique for low earth orbit testing of nickel cadmium cells is presented. The method mimics the familiar taper charge of the constant potential technique while maintaining cell independence for statistical analysis. A detailed example application is provided and the advantages and disadvantages of this technique are discussed.
Multiple Contact Dates and SARS Incubation Periods
2004-01-01
Many severe acute respiratory syndrome (SARS) patients have multiple possible incubation periods due to multiple contact dates. Multiple contact dates cannot be used in standard statistical analytic techniques, however. I present a simple spreadsheet-based method that uses multiple contact dates to calculate the possible incubation periods of SARS. PMID:15030684
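The logic behind the spreadsheet method reduces to simple interval arithmetic: with several possible exposure dates, the incubation period is shortest if infection occurred at the last contact and longest if at the first. A minimal sketch (the dates are invented, and the original method is spreadsheet-based rather than code):

```python
from datetime import date

def incubation_bounds(contacts, onset):
    """Possible incubation-period range (days) given multiple contact dates
    and the date of symptom onset."""
    return (onset - max(contacts)).days, (onset - min(contacts)).days

contacts = [date(2003, 3, 10), date(2003, 3, 12), date(2003, 3, 15)]
lo, hi = incubation_bounds(contacts, onset=date(2003, 3, 19))
print(f"incubation period between {lo} and {hi} days")
```

Collecting such intervals across many patients, rather than forcing a single contact date per patient, is what lets all the exposure information enter the analysis.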
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, the image quality will be degraded in the system impulse response function and the geometric fidelity. Two techniques for resolution of DCE ambiguity for the spaceborne SAR are presented; they include a brief review of the range cross-correlation technique and presentation of a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control selection of the PRFs, an algorithm is devised to resolve the ambiguity that uses PRFs of arbitrary numerical values. The performance of this multiple-PRF technique is analyzed based on a statistical error model. An example is presented that demonstrates, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, that the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.
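The multiple-PRF idea can be illustrated with a residue-matching search: DCE at each PRF only yields the centroid modulo that PRF, so the true centroid is the candidate whose residues agree with all measurements at once. A brute-force sketch (the PRF values, search range and noiseless measurements are illustrative, not the SIR-C parameters or the paper's estimator):

```python
import numpy as np

def resolve_ambiguity(frac_estimates, prfs, max_doppler=20000.0, step=1.0):
    """Pick the absolute Doppler centroid whose wrapped residues best match
    the ambiguous per-PRF estimates."""
    candidates = np.arange(-max_doppler, max_doppler, step)
    cost = np.zeros_like(candidates)
    for f_meas, prf in zip(frac_estimates, prfs):
        resid = (candidates - f_meas) % prf
        cost += np.minimum(resid, prf - resid)**2    # wrapped distance
    return candidates[np.argmin(cost)]

true_fdc, prfs = 4321.0, (1400.0, 1550.0)            # PRFs of arbitrary values
measured = [true_fdc % prf for prf in prfs]          # ambiguous DCE outputs
print(resolve_ambiguity(measured, prfs))             # recovers ~4321 Hz
```

The solution is unique as long as the search range stays below the least common multiple of the PRFs, which is why arbitrary, non-harmonic PRF values still work.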
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.
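Constant folding is a classic example of the architecture-independent, source-level class the survey's "universal optimizer" targets: it needs only the program representation, never the instruction set. A minimal sketch using Python's ast module as a convenient stand-in for a compiler's front-end tree (the survey itself predates Python; the mechanism, not the tooling, is the point):

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold binary operations whose operands are literal constants,
    operating purely on the source-level program representation."""
    def visit_BinOp(self, node):
        self.generic_visit(node)                  # fold children first
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            try:
                expr = ast.fix_missing_locations(ast.Expression(body=node))
                value = eval(compile(expr, "<fold>", "eval"))
            except Exception:                     # e.g. division by zero: leave as-is
                return node
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("y = (2 + 3) * 4 - x")
folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(folded))                        # prints: y = 20 - x
```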
USDA-ARS's Scientific Manuscript database
A novel technique named multiple-particle tracking (MPT) was used to investigate the micro-structural heterogeneities of Z-trim, a zero-calorie cellulosic fiber biopolymer produced from corn hulls. The principle of the MPT technique is to monitor the thermally driven motion of inert micro-spheres, which...
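A minimal sketch of the core MPT computation, the mean squared displacement (MSD) of the tracked micro-spheres as a function of lag time, is shown below on synthetic random-walk trajectories; in practice the trajectories would come from video microscopy of the probe particles:

```python
import numpy as np

# Synthetic stand-in for tracked tracer positions: 50 particles, 200 frames,
# (x, y) coordinates in micrometers. Real data would come from video tracking.
rng = np.random.default_rng(0)
steps = rng.normal(0, 0.05, size=(50, 200, 2))   # displacement per frame
trajectories = np.cumsum(steps, axis=1)          # positions over time

def ensemble_msd(traj, max_lag):
    """Ensemble- and time-averaged MSD for lag times 1..max_lag (frames)."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[:, lag:, :] - traj[:, :-lag, :]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=-1))
    return msd

msd = ensemble_msd(trajectories, max_lag=20)
print(msd[:5])  # near-linear growth in lag indicates diffusive motion
```

Deviations of individual-particle MSD curves from the ensemble average are what reveal micro-structural heterogeneity in this kind of analysis.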
Techniques for assessing relative values for multiple objective management on private forests
Donald F. Dennis; Thomas H. Stevens; David B. Kittredge; Mark G. Rickenbach
2003-01-01
Decision models for assessing multiple objective management of private lands will require estimates of the relative values of various nonmarket outputs or objectives that have become increasingly important. In this study, conjoint techniques are used to assess the relative values and acceptable trade-offs (marginal rates of substitution) among various objectives...
Teaching Multiple Online Sections/Courses: Tactics and Techniques
ERIC Educational Resources Information Center
Bates, Rodger; LaBrecque, Bryan; Fortner, Emily
2016-01-01
The challenge of teaching online increases as the number of sections or courses in a semester increases. The tactics and techniques which enrich online instruction in the tradition of Quality Matters can be modified and adapted to the demands of multiple instructional needs during a semester. This paper addresses time management and instructional…
NASA Astrophysics Data System (ADS)
Carlson, Eric D.; Foley, Lee M.; Guzman, Edward; Korblova, Eva D.; Visvanathan, Rayshan; Ryu, SeongHo; Gim, Min-Jun; Tuchband, Michael R.; Yoon, Dong Ki; Clark, Noel A.; Walba, David M.
2017-08-01
The control of the molecular orientation of liquid crystals (LCs) is important in both understanding phase properties and the continuing development of new LC technologies, including displays, organic transistors, and electro-optic devices. Many techniques have been developed for successfully inducing alignment of calamitic LCs, though these techniques typically do not translate to the alignment of bent-core liquid crystals (BCLCs). Some techniques have been utilized to align various phases of BCLCs, but these techniques are often unsuccessful for general alignment of multiple materials and/or multiple phases. Here, we demonstrate that glass cells treated with polydimethylsiloxane (PDMS) thin films induce high-quality homeotropic alignment of multiple mesophases of four BCLCs. On cooling to the lowest-temperature phase, the homeotropic alignment is lost, and spherulitic growth is seen in crystal and crystal-like phases, including the dark conglomerate (DC) and helical nanofilament (HNF) phases. Evidence of homeotropic alignment is observed using polarized optical microscopy. We speculate that the methyl groups on the surface of the PDMS films interact strongly with the aliphatic tails of each mesogen, resulting in homeotropic alignment.
Cramer, Holger; Lauche, Romy; Langhorst, Jost; Dobos, Gustav; Paul, Anna
2013-10-01
To assess sociodemographic, clinical, and psychological characteristics of patients with internal diseases who use relaxation techniques as a coping strategy, a cross-sectional analysis was performed among patients of the Department of Internal and Integrative Medicine at an academic teaching hospital in Germany. Outcome measures were prior use of relaxation techniques (e.g., meditation, autogenic training), perceived benefit, and perceived harm. Potential predictors of relaxation technique use (sociodemographic characteristics, health behavior, internal medicine diagnosis, general health status, mental health, satisfaction, and health locus of control) were tested using multiple logistic regression analysis. Of 2486 participants, 1075 (43.2%) reported having used relaxation techniques; of these users, 648 (60.3%) reported benefits and 11 (1.0%) reported harms. Use of relaxation techniques was independently associated with female gender (odds ratio [OR]=1.43; 95% confidence interval [CI]=1.08-1.89), higher education (OR=1.32; 95% CI=1.03-1.71), fibromyalgia (OR=1.78; 95% CI=1.22-2.61), and internal health locus of control (OR=1.27; 95% CI=1.01-1.60). Use of relaxation techniques was negatively associated with age below 30 (OR=0.32; 95% CI=0.20-0.52) or above 64 (OR=0.65; 95% CI=0.49-0.88), full-time employment (OR=0.75; 95% CI=0.57-0.98), current smoking (OR=0.72; 95% CI=0.54-0.95), osteoarthritis (OR=0.51; 95% CI=0.34-0.77), rheumatoid arthritis (OR=0.59; 95% CI=0.37-0.93), good to excellent health status (OR=0.70; 95% CI=0.52-0.96), and high life satisfaction (OR=0.78; 95% CI=0.62-0.98). In this German sample of patients with internal diseases, relaxation techniques were used as a coping strategy by about 43%. Users were more likely to be middle-aged, female, well-educated, and diagnosed with fibromyalgia, and less likely to smoke, to work full-time, or to report good health status and high life satisfaction. A high internal health locus of control predicted relaxation technique use. Considering health locus of control might improve adherence to relaxation techniques in internal medicine patients.
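For readers unfamiliar with the method, a minimal sketch of this kind of multiple logistic regression follows, with adjusted odds ratios and 95% CIs obtained by exponentiating the fitted coefficients; the data and predictor names are synthetic placeholders, not the study's data, and the statsmodels package is assumed:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data loosely mimicking the setting: binary outcome (use of
# relaxation techniques) modeled from several predictors simultaneously.
rng = np.random.default_rng(1)
n = 500
female = rng.integers(0, 2, n)
higher_education = rng.integers(0, 2, n)
internal_locus = rng.normal(0, 1, n)

logit_p = -0.5 + 0.36 * female + 0.28 * higher_education + 0.24 * internal_locus
use = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(np.column_stack([female, higher_education, internal_locus]))
fit = sm.Logit(use, X).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios with 95% CIs.
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
for name, or_, (lo, hi) in zip(["const", "female", "educ", "locus"],
                               odds_ratios, ci):
    print(f"{name:6s} OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```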
Differences between the sexes in technical mastery of rhythmic gymnastics.
Bozanic, Ana; Miletic, Durdica
2011-02-01
The aims of this study were to determine possible differences between the sexes in specific rhythmic gymnastics techniques, and to examine the influence of various aspects of technique on rhythmic composition performance. Seventy-five students aged 21 ± 2 years (45 male, 30 female) undertook four test sessions to determine: coefficients of asymmetry, stability, versatility, and the two rhythmic compositions (without apparatus and with rope). An independent-samples t-test revealed sex-based differences in technique acquisition: stability for ball (P < 0.05; effect size = 0.65) and club (P < 0.05; effect size = 0.79) performance and rhythmic composition without apparatus (P < 0.05; effect size = 0.66). Multiple regression analysis revealed that the variables for assessing stability (beta = 0.44; P < 0.05) and versatility (beta = 0.61; P < 0.05) explained 61% of the variance in the rhythmic composition performance of females, and the variables for assessing asymmetry (beta = -0.38; P < 0.05), versatility (beta = 0.32; P < 0.05), and stability (beta = 0.29; P < 0.05) explained 52% of the variance in the rhythmic composition performance of males. The results suggest that female students dominate in body skill technique, while male students have the advantage with apparatus. There was a lack of an expressive aesthetic component in the performance of males. The need for ambidexterity should be considered in the planning of training programmes.
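A minimal sketch of how standardized regression coefficients (betas) and explained variance of this kind can be computed, by z-scoring all variables before an ordinary least-squares fit, is given below; the data are synthetic placeholders, not the study's measurements:

```python
import numpy as np

# Synthetic stand-ins for the predictors and the composition score.
rng = np.random.default_rng(2)
n = 75
stability = rng.normal(0, 1, n)
versatility = rng.normal(0, 1, n)
composition = 0.44 * stability + 0.61 * versatility + rng.normal(0, 0.6, n)

def zscore(v):
    return (v - v.mean()) / v.std()

# Standardized (beta) coefficients: OLS on z-scored variables.
X = np.column_stack([zscore(stability), zscore(versatility)])
y = zscore(composition)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)

# R^2: share of variance in the outcome explained by the predictors.
r_squared = 1 - np.sum((y - X @ betas) ** 2) / np.sum(y**2)
print(f"betas = {betas.round(2)}, R^2 = {r_squared:.2f}")
```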
Distributed Joint Source-Channel Coding in Wireless Sensor Networks
Zhu, Xuqi; Liu, Yu; Zhang, Lin
2009-01-01
Considering the energy constraints of sensors and the wireless channel conditions in wireless sensor networks, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple-access channels, and broadcast channels are introduced. We also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560
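As a toy instance of the distributed-coding building block (illustrative only, not the paper's scheme), the sketch below performs Slepian-Wolf style compression of a binary source Y given correlated side information X at the decoder, by transmitting only the syndrome of Y under a (7,4) Hamming code:

```python
import numpy as np

# Decoder-side information X and a correlated source Y differing in at most
# one bit: the encoder sends 3 syndrome bits instead of Y's 7 bits.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # Hamming (7,4) parity-check matrix

x = np.array([1, 0, 1, 1, 0, 0, 1])    # side information (known to decoder only)
y = x.copy(); y[4] ^= 1                # correlated source: one bit flipped

syndrome_y = H @ y % 2                 # the only bits actually transmitted

# Decoder: the syndrome of (x XOR y) pinpoints the flipped position, because
# the columns of H are the binary representations of positions 1..7.
diff_syndrome = (H @ x % 2) ^ syndrome_y
pos = int(diff_syndrome @ np.array([1, 2, 4])) - 1   # column index, -1 if none
y_hat = x.copy()
if pos >= 0:
    y_hat[pos] ^= 1
assert np.array_equal(y_hat, y)        # Y recovered from X plus 3 bits
```

The compression gain (3 bits in place of 7) comes entirely from the correlation between the sources, which is the essential idea that distributed joint source-channel coding extends to noisy channels.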